Decoding conditions

Authors: Seyed-Mahdi Khaligh-Razavi, Francois Tadel, Dimitrios Pantazis

This tutorial illustrates how to use the functions developed at Aude Oliva's lab (MIT) to run support vector machine (SVM) and linear discriminant analysis (LDA) classification on your MEG data across time.

License

The decoding tutorial dataset remains the property of Aude Oliva’s Lab, Computer Science and Artificial Intelligence Laboratory (CSAIL), Massachusetts Institute of Technology, Cambridge, US. Its use and transfer outside the Brainstorm tutorial, e.g. for research purposes, is prohibited without written consent from the Lab.

If you reference this dataset in your publications, please acknowledge its authors (Seyed-Mahdi Khaligh-Razavi and Dimitrios Pantazis) and cite Khaligh-Razavi et al. (2015). For questions, please contact us through the Brainstorm forum.

Presentation of the experiment

  • One subject, one acquisition run of 15 minutes.
  • The subject performs an orthogonal image categorization task.
  • During this run, the participant saw:
    • 720 stimuli (360 faces, 360 scenes)
    • Images centrally presented (6 degrees of visual angle)
    • Presentation duration of each image: 500 ms
  • For each image, the subject had to decide (without responding) whether:
    • scenes were indoor or outdoor
    • faces were male or female
  • Response: Every 2 to 4 trials (randomly determined), a question mark would appear on the screen for 1 second. At this time, participants were asked to press a button to indicate the category of the last image (male/female, indoor/outdoor), and they were also allowed to blink/swallow. This task was designed to ensure participants were attentive to the images without explicitly encoding them into memory. Participants only responded during the “question mark” trials so that activity during stimulus perception was not contaminated by motor activity.

  • Fixation: During the task, participants were asked to fixate on a cross centered on the screen and their eye movements were tracked, to ensure any results were not due to differential eye movement patterns.

  • Context: The whole experiment was divided into 16 runs. Each run contained 45 randomly selected images (each shown twice per run, not in succession), resulting in a total experiment time of about 50 min. The MEG data included here for demonstration purposes contain only the first 15 minutes of the session, during which 53 faces and 54 scenes were presented.

  • Ethics: The study was conducted according to the Declaration of Helsinki and approved by the local ethics committee (Institutional Review Board of the Massachusetts Institute of Technology). Informed consent was obtained from all participants.

  • MEG acquisition at 1000 Hz with an Elekta Neuromag TRIUX system

Description of the decoding functions

Two decoding processes are available in Brainstorm: Decoding conditions -> Classification with cross-validation, and Decoding conditions -> Classification with permutation.

These two processes work in very similar ways: both take the trials of two conditions as input and return the decoding accuracy across time.

In the context of this tutorial, we have two condition types: faces and scenes. We want to decode faces vs. scenes using the 306 MEG channels. In the data, the face trials are grouped under condition ‘201’ and the scene trials under condition ‘203’.

Download and installation

Import the recordings

Select files

Select the Process2 tab at the bottom of the Brainstorm window.

Cross-validation

Cross-validation is a model validation technique for assessing how the results of our decoding analysis will generalize to an independent data set.
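
As an illustration, here is a generic k-fold cross-validation sketch in Matlab. This is not the code Brainstorm runs: fitcsvm from the Statistics and Machine Learning Toolbox stands in for the SVM classifier, and the data are random stand-ins.

% Generic k-fold cross-validation sketch (illustration only).
% X is [nSamples x nFeatures], e.g. the 306 channel values at one time point;
% y holds the condition labels (1 = face, 2 = scene).
X = randn(80, 306);                        % stand-in data
y = [ones(40,1); 2*ones(40,1)];            % 40 faces, 40 scenes
k = 10;
folds = mod(randperm(numel(y)), k) + 1;    % random fold assignment
acc = zeros(k, 1);
for f = 1:k
    test   = (folds == f);                              % held-out fold
    mdl    = fitcsvm(X(~test,:), y(~test));             % train on the other folds
    acc(f) = mean(predict(mdl, X(test,:)) == y(test));  % test accuracy
end
meanAccuracy = mean(acc);                  % estimate of generalization performance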

9_crossvalres2.png

The internal Brainstorm plot is not ideal for this purpose, so you can plot the results yourself. Right-click on the result file (Matlab SVM Decoding_201_203), select 'Export to Matlab' from the menu, and name the exported variable ‘decodingcurve’. You can then plot the decoding accuracies across time with the Matlab plot function, as shown below.


Matlab code:
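
A minimal sketch, assuming the exported structure carries the standard Brainstorm fields Time and Value (Value holds the decoding accuracy at each time point):

% Plot the decoding accuracy across time (minimal sketch; assumes the
% exported structure has the standard Brainstorm fields Time and Value)
figure;
plot(decodingcurve.Time, decodingcurve.Value);
xlabel('Time (s)');
ylabel('Decoding accuracy (%)');
title('SVM decoding: faces (201) vs. scenes (203)');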



10_crossvalres3.png

Permutation

This is an iterative procedure. In each iteration, the training and test data for the SVM/LDA classifier are selected by randomly permuting the samples and grouping them into bins of size n (you can select the trial bin size). Two samples (one from each condition) are left out for testing, and the rest of the data are used to train the classifier.

  1. Select Run -> Decoding conditions -> Classification with permutation

  2. Set the options as follows:



11_permoptions.png

If the ‘Trial bin size’ is greater than 1, the training data will be randomly grouped into bins of the size you specify here. The samples within each bin are then averaged (we refer to this as sub-averaging), and the classifier is trained on the averaged samples. For example, if you have 40 faces and 40 scenes and set the trial bin size to 5, then for each condition you will have 8 bins, each containing 5 samples. Seven bins from each condition will be used for training, and the two left-out bins (one face bin, one scene bin) will be used to test the classifier performance. A sketch of this sub-averaging step is shown below.
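
A hypothetical Matlab sketch of this sub-averaging step for one condition (illustration only; the variable names are made up and the data are random stand-ins):

% Hypothetical sub-averaging sketch for one condition (illustration only).
% trials is [nTrials x nChannels x nTime]; binSize is the 'Trial bin size'.
nTrials = 40; nChannels = 306; nTime = 100;
trials  = randn(nTrials, nChannels, nTime);   % stand-in for 40 face trials
binSize = 5;
order = randperm(nTrials);                    % random permutation of trials
nBins = floor(nTrials / binSize);             % 40 / 5 = 8 bins
bins  = zeros(nBins, nChannels, nTime);
for b = 1:nBins
    idx = order((b-1)*binSize + (1:binSize)); % trials assigned to bin b
    bins(b,:,:) = mean(trials(idx,:,:), 1);   % sub-average within the bin
end
% 7 bins per condition train the classifier; the left-out bin is used for testing.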



12_permres1.png

This plot might not be very intuitive, so you can export the decoding results into Matlab and plot them yourself. The exported structure has two important fields: Value, the mean decoding accuracy across permutations, and Std, its standard deviation.

If you plot the mean value (decodingcurve.Value), you will get the figure below. You also have access to the standard deviation (decodingcurve.Std), in case you want to plot it; see the sketch after the figure.



13_permres2.png
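
To plot the mean together with the standard deviation, a minimal sketch (the Time field is an assumption; Value and Std are the fields described above):

% Minimal sketch: mean decoding accuracy with a +/- 1 std shaded band
% (assumes the exported structure also carries a Time field)
t = decodingcurve.Time(:)';                   % time axis (assumed field)
m = decodingcurve.Value(:)';                  % mean accuracy across permutations
s = decodingcurve.Std(:)';                    % standard deviation
figure; hold on;
fill([t, fliplr(t)], [m+s, fliplr(m-s)], [0.8 0.8 1], 'EdgeColor', 'none');
plot(t, m, 'b', 'LineWidth', 1.5);            % mean decoding accuracy
xlabel('Time (s)'); ylabel('Decoding accuracy (%)');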

References

  1. Khaligh-Razavi, S.-M., Bainbridge, W., Pantazis, D., and Oliva, A. (2015). Introducing Brain Memorability: an MEG study (in preparation).
  2. Cichy, R.M., Pantazis, D., and Oliva, A. (2014). Resolving human object recognition in space and time. Nat Neurosci 17, 455–462.






