Revision 70 as of 2020-04-29 14:22:02

Machine learning: Decoding / MVPA

Authors: Dimitrios Pantazis, Seyed-Mahdi Khaligh-Razavi, Francois Tadel

This tutorial illustrates how to run MEG decoding using support vector machines (SVM).

Contents

  1. License
  2. Description of the decoding functions
  3. Download and installation
  4. Import the recordings
  5. Select files
  6. Decoding with cross-validation
  7. Permutation
  8. Acknowledgment
  9. References
  10. Additional documentation

License

To reference this dataset in your publications, please cite Cichy et al. (2014).

Description of the decoding functions

Two decoding processes are available in Brainstorm:

  • Decoding > SVM classifier decoding

  • Decoding > max-correlation classifier decoding

These two processes work in a similar way but use different classifiers, so only the SVM version is demonstrated here.

  • Input: the input is the channel data from two conditions (e.g. condA and condB) across time. The number of samples per condition does not have to be the same for condA and condB, but each condition should have enough samples to create k folds (see the parameter below).

  • Output: the output is a decoding time course, or a temporal generalization matrix (train time x test time).

  • Classifier: Two methods are offered for the classification of MEG recordings across time: support vector machine (SVM) and max-correlation classifier.

In the context of this tutorial, we have two condition types: faces, and objects. We want to decode faces vs. objects using 306 MEG channels.
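Brainstorm implements this decoding in Matlab; as a rough stand-in illustration of the same idea (train a classifier independently at each time sample, yielding a decoding time course), here is a Python/scikit-learn sketch on synthetic data. The array shapes and the injected "effect" are assumptions for the demo, not part of the tutorial dataset.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in for MEG data: trials x channels x time samples.
n_trials, n_channels, n_times = 40, 306, 20
X_faces = rng.normal(size=(n_trials, n_channels, n_times))
X_objects = rng.normal(size=(n_trials, n_channels, n_times))
X_objects[:, :, 10:] += 0.5            # injected condition difference after "stimulus onset"

X = np.concatenate([X_faces, X_objects])          # (80, 306, 20)
y = np.array([0] * n_trials + [1] * n_trials)     # 0 = faces, 1 = objects

# Train/test a linear SVM independently at every time sample; the result
# is a decoding accuracy time course, analogous to the Brainstorm output.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
accuracy = np.array([
    cross_val_score(clf, X[:, :, t], y, cv=5).mean()
    for t in range(n_times)
])
print(accuracy.shape)   # one accuracy value per time sample
```

Accuracy hovers around chance (0.5) before the injected effect and rises afterwards; the temporal generalization variant would instead train at one time sample and test at all others, filling a train time x test time matrix.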

Download and installation

  • From the Download page of this website, download the file: subj04NN_sess01-0_tsss.fif

  • Start Brainstorm (Matlab scripts or stand-alone version).
  • Select the menu File > Create new protocol. Name it "TutorialDecoding" and select the options:

    • "Yes, use protocol's default anatomy",

    • "No, use one channel file per condition".

      1_create_new_protocol.jpg

Import the recordings

  • Go to the "functional data" view (sorted by subjects).
  • Right-click on the TutorialDecoding folder > New subject > Subject01
    Leave the default options you defined for the protocol.

  • Right click on the subject node (Subject01) > Review raw file.
    Select the file format: "MEG/EEG: Neuromag FIFF (*.fif)"
    Select the file: subj04NN_sess01-0_tsss.fif

  • 2_review_raw_file.jpg

    2_review_raw_file2.jpg

  • Select "Event channels" to read the triggers from the stimulus channel.

    3_event_channel.jpg

  • We will not pay attention to MEG/MRI registration because we are not going to compute any source models. The decoding is done on the sensor data.
  • Double-click on the 'Link to raw file' to visualize the raw recordings. Event codes 13-24 indicate responses to face images, and we will combine them into a single group called 'faces'. To do so, select events 13-24, then from the menu select "Events > Duplicate groups" followed by "Events > Merge groups". The event codes are duplicated first so we do not lose the original 13-24 event codes.

    4_duplicate_groups_faces.jpg

    5_merge_groups_faces.jpg

  • Event codes 49-60 indicate responses to object images, and we will combine them into a single group called 'objects'. To do so, select events 49-60, then from the menu select "Events > Duplicate groups" followed by "Events > Merge groups". No screenshots are shown since this is similar to the step above.

  • We will now import the 'faces' and 'objects' responses to the database. Select "File > Import in database".

    • 10_import_in_database.jpg

  • Select only two events: 'faces' and 'objects'
  • Epoch time: [-200, 800] ms
  • Remove DC offset: Time range: [-200, 0] ms
  • Do not create separate folders for each event type

11_import_in_database_window.jpg

Select files

  • Drag and drop all the face and object trials to the Process1 tab at the bottom of the Brainstorm window.
  • Intuitively, you might have expected to use the Process2 tab to decode faces vs. objects. But the decoding process is designed to also handle pairwise decoding of multiple classes (not just two) for computational efficiency, so more than two categories can be entered in the Process1 tab.

12_select_files.jpg

Decoding with cross-validation

Cross-validation is a model validation technique for assessing how the results of our decoding analysis will generalize to an independent data set.

  • Select process "Decoding > SVM decoding"



  • 13_pipeline_editor_select_decoding.jpg

  • Low-pass cutoff frequency: If set, a low-pass filter is applied to all the input recordings.

    • Matlab SVM/LDA: Requires Matlab's Statistics and Machine Learning Toolbox.
      These methods perform a stratified k-fold cross-validation for you, meaning that each fold contains the same proportions of the two class labels (option "Number of folds").

    • LibSVM: Requires the LibSVM toolbox (download and add to your path).
      The LibSVM cross-validation may be faster but will not be stratified.

  • The process will take some time. The results are then saved in a file in the new decoding folder.

    cv_file.gif

  • Double-click on the file to display the decoding curve across time.

    cv_plot.gif
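The stratification mentioned above simply means that each fold preserves the class proportions of the full data set. A minimal Python/scikit-learn sketch (the class counts are assumptions for the demo):

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

# 40 "faces" (label 0) and 20 "objects" (label 1): unbalanced on purpose.
y = np.array([0] * 40 + [1] * 20)
X = np.arange(len(y)).reshape(-1, 1)   # dummy feature matrix

# Stratified 5-fold split: every test fold keeps the 2:1 faces/objects ratio.
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
fold_counts = []
for train_idx, test_idx in skf.split(X, y):
    counts = np.bincount(y[test_idx])
    fold_counts.append(tuple(counts))

print(fold_counts)   # each fold: 8 faces and 4 objects in the test set
```

A plain (non-stratified) k-fold, as in the LibSVM cross-validation, can leave some folds with skewed class proportions, which biases the accuracy estimate when classes are unbalanced.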

Permutation

This is an iterative procedure. In each iteration, the training and test data for the SVM/LDA classifier are selected by randomly permuting the samples and grouping them into bins of size n (you can select the trial bin size). Two samples (one from each condition) are left out for testing, and the rest of the data are used to train the classifier.

  • Select process "Decoding > Classification with cross-validation". Set options as below:

    perm_process.gif

  • Trial bin size: If greater than 1, the training data will be randomly grouped into bins of the size you determine here. The samples within each bin are then averaged (we refer to this as sub-averaging), and the classifier is trained on the averaged samples. For example, if you have 40 faces and 40 objects and set the trial bin size to 5, then each condition yields 8 bins of 5 samples. Seven bins from each condition are used for training, and the two left-out bins (one face bin, one object bin) are used for testing the classifier performance.

  • The results are saved in a file in the new decoding folder.

    perm_file.gif

  • Right-click > Display as time series (or double click).

    perm_plot.gif
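The permute-and-sub-average step described above can be sketched in a few lines of Python/NumPy (the trial counts, channel count, and helper name `subaverage` are assumptions for the demo, not Brainstorm code):

```python
import numpy as np

rng = np.random.default_rng(0)

def subaverage(trials, bin_size, rng):
    """Randomly permute trials, group them into bins of `bin_size`,
    and return the average of each bin (one sub-averaged sample per bin)."""
    n_bins = trials.shape[0] // bin_size
    order = rng.permutation(trials.shape[0])[: n_bins * bin_size]
    binned = trials[order].reshape(n_bins, bin_size, -1)
    return binned.mean(axis=1)

# 40 trials per condition, 306 channels, bin size 5 -> 8 bins per condition.
faces = rng.normal(size=(40, 306))
objects = rng.normal(size=(40, 306))

faces_bins = subaverage(faces, 5, rng)      # (8, 306)
objects_bins = subaverage(objects, 5, rng)  # (8, 306)

# One permutation iteration: hold out one bin per condition for testing,
# train the classifier on the remaining 7 + 7 sub-averaged samples.
train_faces, test_face = faces_bins[:-1], faces_bins[-1:]
train_objects, test_object = objects_bins[:-1], objects_bins[-1:]
print(train_faces.shape, test_face.shape)
```

Repeating this with a new random permutation at every iteration and averaging the resulting test accuracies gives the permutation decoding curve; sub-averaging trades the number of training samples for a higher signal-to-noise ratio per sample.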

Acknowledgment

This work was supported by the McGovern Institute Neurotechnology Program to PIs: Aude Oliva and Dimitrios Pantazis. http://mcgovern.mit.edu/technology/neurotechnology-program

References

  1. Khaligh-Razavi SM, Bainbridge W, Pantazis D, Oliva A (2016)
    From what we perceive to what we remember: Characterizing representational dynamics of visual memorability. bioRxiv, 049700.

  2. Cichy RM, Pantazis D, Oliva A (2014)
    Resolving human object recognition in space and time, Nature Neuroscience, 17:455–462.

Additional documentation

  • Forum: Decoding in source space: http://neuroimage.usc.edu/forums/showthread.php?2719
