Decoding conditions

Authors: Seyed-Mahdi Khaligh-Razavi, Francois Tadel, Dimitrios Pantazis

This tutorial illustrates how to use the functions developed at Aude Oliva's lab (MIT) to run support vector machine (SVM) and linear discriminant analysis (LDA) classification on your MEG data across time.

License

The decoding tutorial dataset remains the property of Aude Oliva's Lab, Computer Science and Artificial Intelligence Laboratory (CSAIL), Massachusetts Institute of Technology, Cambridge, US. Its use and transfer outside the Brainstorm tutorial, e.g. for research purposes, is prohibited without written consent from the lab.

If you reference this dataset in your publications, please acknowledge its authors (Seyed-Mahdi Khaligh-Razavi and Dimitrios Pantazis) and cite Khaligh-Razavi et al. (2015). For questions, please contact us through the Brainstorm forum.

Presentation of the experiment

  • One subject, one acquisition run of 15 minutes.
  • The subject performs an orthogonal image categorization task.
  • During this run, the participant saw:
    • 720 stimuli (360 faces, 360 scenes)
    • Images were centrally presented (6 degrees of visual angle)
    • Presentation duration of each image: 500 ms
  • The subject has to decide for each image (without responding) whether:
    • scenes were indoor / outdoor
    • faces were male / female
  • Response: Every 2 to 4 trials (randomly determined), a question mark would appear on the screen for 1 second. At this time, participants were asked to press a button to indicate the category of the last image (male/female, indoor/outdoor), and they were also allowed to blink/swallow. This task was designed to ensure participants were attentive to the images without explicitly encoding them into memory. Participants only responded during the “question mark” trials so that activity during stimulus perception was not contaminated by motor activity.

  • Fixation: During the task, participants were asked to fixate on a cross centered on the screen and their eye movements were tracked, to ensure any results were not due to differential eye movement patterns.

  • Context: The whole experiment was divided into 16 runs. Each run contained 45 randomly selected images (each of them shown twice per run, not in succession), resulting in a total experiment time of about 50 min. The MEG data included here for demonstration purposes contains only the first 15 minutes of the whole session, during which 53 faces and 54 scenes were presented.

  • Ethics: The study was conducted according to the Declaration of Helsinki and approved by the local ethics committee (Institutional Review Board of the Massachusetts Institute of Technology). Informed consent was obtained from all participants.

  • MEG acquisition at 1000 Hz with an Elekta-Neuromag Triux system

Description of the decoding functions

Two methods are offered for the classification of MEG recordings across time:

  • Decoding conditions -> Classification with cross-validation
  • Decoding conditions -> Classification with permutation

These two processes work in very similar ways: both take two sets of trials as input (Files A and Files B) and train an SVM or LDA classifier to discriminate the two conditions at each time point.

In the context of this tutorial, we have two condition types: faces and scenes. We want to decode faces vs. scenes using the 306 MEG channels. In the data, the faces are labeled as condition ‘201’ and the scenes as condition ‘203’.

Download and installation

Import the recordings



Import MEG data

  1. Set the epoch time to [-100, 1000] ms
  2. Remove DC offset: Time range: [-100, 0] ms
  3. From the events, select only these two: 201 (faces) and 203 (scenes)

    [ATTACH]

  4. Then press Import. You will get a message saying that some epochs are shorter than the others; click Yes.

Decoding conditions

  1. Select the ‘Process2’ tab at the bottom of the Brainstorm window.
  2. Drag and drop 40 files from folder ‘201’ into ‘Files A’, and 40 files from folder ‘203’ into ‘Files B’. You can select more or fewer than 40; the important thing is that ‘Files A’ and ‘Files B’ contain the same number of files.

    [ATTACH]

a) Cross-validation

Cross-validation is a model validation technique for assessing how the results of our decoding analysis will generalize to an independent data set.

  1. Select Run -> Decoding conditions -> Classification with cross-validation

    [ATTACH]

    Here, you will have three choices for cross-validation. If you have the Matlab Statistics and Machine Learning Toolbox, you can use ‘Matlab SVM’ or ‘Matlab LDA’. You can also install the LibSVM toolbox (https://www.csie.ntu.edu.tw/~cjlin/libsvm/) and add it to your Matlab path; LibSVM may be faster. Note that the LibSVM cross-validation is not stratified. If you select Matlab SVM/LDA, however, it will run a stratified k-fold cross-validation, meaning that each fold contains the same proportions of the two class labels.

You can also set the number of folds for cross-validation, and the cut-off frequency for low-pass filtering, which is used to smooth the data before classification.
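For intuition, below is a minimal sketch (not Brainstorm's internal code) of a stratified k-fold cross-validation at a single time point, using the Matlab Statistics and Machine Learning Toolbox; the variable names and random data are illustrative only:

    % Stratified k-fold cross-validation of an SVM at one time point.
    % Illustrative sketch: variable names and random data are placeholders.
    labels = [ones(40,1); 2*ones(40,1)];          % 40 faces (1), 40 scenes (2)
    X      = randn(80, 306);                      % trials x channels, one time point
    cv     = cvpartition(labels, 'KFold', 10);    % stratified by class labels
    acc    = zeros(cv.NumTestSets, 1);
    for k = 1:cv.NumTestSets
        mdl    = fitcsvm(X(cv.training(k), :), labels(cv.training(k)));
        pred   = predict(mdl, X(cv.test(k), :));
        acc(k) = mean(pred == labels(cv.test(k))) * 100;
    end
    meanAccuracy = mean(acc);                     % percent correct at this time point

Repeating this at every time sample yields the decoding accuracy curve across time.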

9_crossvalres2.png

The internal Brainstorm plot is not ideal for this purpose; to make a proper plot, you can plot the results yourself. Right-click on the result file (Matlab SVM Decoding_201_203), select ‘Export to Matlab’ from the menu, and give it the name ‘decodingcurve’. Then you can plot the decoding accuracies across time with the Matlab plot function.


Matlab code:
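Below is a minimal sketch, assuming the result was exported under the name ‘decodingcurve’ and has Brainstorm's standard Time and Value fields (Time in seconds, Value in percent correct):

    % Plot decoding accuracy across time from the exported structure.
    % Assumes fields .Time (seconds) and .Value (accuracy in %).
    figure;
    plot(decodingcurve.Time * 1000, decodingcurve.Value, 'LineWidth', 1.5);
    hold on;
    plot(xlim, [50 50], 'k--');                   % 50% = chance level for two classes
    xlabel('Time (ms)');
    ylabel('Decoding accuracy (%)');
    title('Faces vs. scenes: cross-validated decoding');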



10_crossvalres3.png

b) Permutation

This is an iterative procedure. In each iteration, the training and test data for the SVM/LDA classifier are selected by randomly permuting the samples and grouping them into bins of size n (you can select the trial bin size). In each iteration, two samples (one from each condition) are left out for testing, and the rest of the data are used to train the classifier.

  1. Select Run -> Decoding conditions -> Classification with permutation

  2. Set the values as follows:



11_permoptions.png

If the ‘Trial bin size’ is greater than 1, the training data will be randomly grouped into bins of the size you specify here. The samples within each bin are then averaged (we refer to this as sub-averaging), and the classifier is trained on the averaged samples. For example, if you have 40 faces and 40 scenes and you set the trial bin size to 5, then each condition will have 8 bins of 5 samples each. Seven bins from each condition will be used for training, and the two left-out bins (one face bin, one scene bin) will be used to test the classifier's performance.
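As an illustration, here is a minimal sketch of the sub-averaging step described above (illustrative variable names and random stand-in data; not Brainstorm's internal code):

    % One permutation iteration of the sub-averaging scheme for one condition.
    % Random stand-in data: trials x channels x time samples.
    nTrials = 40; nChannels = 306; nTime = 1101;      % e.g. [-100, 1000] ms at 1000 Hz
    data    = randn(nTrials, nChannels, nTime);
    binSize = 5;  nBins = nTrials / binSize;          % -> 8 bins of 5 trials
    order   = randperm(nTrials);                      % random permutation of trials
    binned  = zeros(nBins, nChannels, nTime);
    for iBin = 1:nBins
        idx = order((iBin-1)*binSize + (1:binSize));  % trials assigned to this bin
        binned(iBin, :, :) = mean(data(idx, :, :), 1);% sub-average within the bin
    end
    % Train the classifier on nBins-1 bins per condition and test on the
    % left-out bin, then repeat over iterations.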



12_permres1.png

This might not be very intuitive, so you can export the decoding results to Matlab and plot them yourself. The exported structure has two important fields:

  • Value: the mean decoding accuracy at each time point, averaged over the permutation iterations
  • Std: the corresponding standard deviation across iterations

If you plot the mean value (decodingcurve.Value), below is what you will get. You also have access to the standard deviation (decodingcurve.Std), in case you want to plot it.
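A minimal sketch for plotting the mean with a +/- 1 standard deviation band, assuming the structure was exported as ‘decodingcurve’ with fields Time (seconds), Value and Std:

    % Plot mean permutation decoding accuracy with a +/- 1 std band.
    t = decodingcurve.Time(:)' * 1000;                % time in ms, as a row vector
    m = decodingcurve.Value(:)';                      % mean accuracy (%)
    s = decodingcurve.Std(:)';                        % standard deviation
    figure; hold on;
    fill([t, fliplr(t)], [m + s, fliplr(m - s)], ...
         [0.85 0.85 1], 'EdgeColor', 'none');         % shaded +/- 1 std band
    plot(t, m, 'b', 'LineWidth', 1.5);                % mean accuracy
    plot([t(1) t(end)], [50 50], 'k--');              % chance level (two classes)
    xlabel('Time (ms)'); ylabel('Decoding accuracy (%)');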



13_permres2.png

References

  1. Khaligh-Razavi, S.-M., Bainbridge, W., Pantazis, D., and Oliva, A. (2015). Introducing Brain Memorability: an MEG study (in preparation).
  2. Cichy, R.M., Pantazis, D., and Oliva, A. (2014). Resolving human object recognition in space and time. Nat Neurosci 17, 455–462.




