Machine learning: Decoding / MVPA

Authors: Dimitrios Pantazis, Seyed-Mahdi Khaligh-Razavi, Francois Tadel

This tutorial illustrates how to run MEG decoding using support vector machines (SVM).

License

To reference this dataset in your publications, please cite Cichy et al. (2014).

Description of the decoding functions

Two decoding processes are available in Brainstorm: one based on a support vector machine (SVM) classifier and one based on linear discriminant analysis (LDA).

The two processes work in the same way and differ only in the classifier they use, so only the SVM process is demonstrated here.

In the context of this tutorial, we have two condition types: faces and objects. We want to decode faces vs. objects using the 306 MEG channels.
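
As a rough illustration of this setup, the sketch below trains a linear SVM to separate the two conditions at a single time point. It uses simulated data and scikit-learn rather than the actual Brainstorm code, and the trial counts are arbitrary assumptions.

# Conceptual sketch (not the Brainstorm implementation): decode faces vs. objects
# from 306 MEG channels with a linear SVM at one time point, using simulated data.
import numpy as np
from sklearn.svm import SVC

n_trials, n_channels = 60, 306        # trials per condition, MEG channels (illustrative)
rng = np.random.default_rng(0)
faces   = rng.standard_normal((n_trials, n_channels))    # trials x channels at one latency
objects = rng.standard_normal((n_trials, n_channels))

X = np.concatenate([faces, objects])
y = np.concatenate([np.zeros(n_trials), np.ones(n_trials)])   # 0 = face, 1 = object

clf = SVC(kernel='linear').fit(X, y)
print(clf.score(X, y))                # training accuracy on the simulated data

In time-resolved MEG decoding, this classification is repeated at every time point of the epoch, yielding a time course of decoding accuracy; generalization to unseen trials is assessed with cross-validation, described below.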

Download and installation

Import the recordings

Select files

Decoding with cross-validation

Cross-validation is a model validation technique for assessing how the results of our decoding analysis will generalize to an independent data set.
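
The sketch below illustrates the principle with a stratified k-fold split on simulated data, again using scikit-learn rather than the Brainstorm code; the number of folds (5) is an arbitrary assumption, not a Brainstorm default.

# Sketch of k-fold cross-validation for a faces-vs-objects SVM (illustrative only):
# the classifier is trained on k-1 folds and tested on the held-out fold, in turn.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((120, 306))   # 120 trials x 306 MEG channels (simulated)
y = np.repeat([0, 1], 60)             # 0 = face, 1 = object

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(SVC(kernel='linear'), X, y, cv=cv)
print(scores.mean())                  # mean decoding accuracy over the held-out folds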

Permutation

This is an iterative procedure. In each iteration, the training and test data for the SVM/LDA classifier are selected by randomly permuting the trials and grouping them into bins of size n (the trial bin size is a user-defined option). Two samples, one from each condition, are left out for testing, and the classifier is trained on the remaining data.
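
The sketch below mimics one such iteration on simulated data (scikit-learn, not the Brainstorm code). It additionally assumes that the trials within each bin are averaged into a single sample; the bin size and trial counts are arbitrary choices for illustration.

# Sketch of one permutation iteration: permute the trials of each condition,
# average them within bins of size n to form samples, leave one sample per
# condition out for testing, and train the SVM on the remaining samples.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_trials, n_channels, bin_size = 60, 306, 10      # illustrative values
faces   = rng.standard_normal((n_trials, n_channels))
objects = rng.standard_normal((n_trials, n_channels))

def make_samples(trials, bin_size, rng):
    # Permute the trials, then average them within consecutive bins of size bin_size
    perm = rng.permutation(len(trials))
    binned = trials[perm][:(len(trials) // bin_size) * bin_size]
    return binned.reshape(-1, bin_size, trials.shape[1]).mean(axis=1)

face_samples = make_samples(faces, bin_size, rng)
object_samples = make_samples(objects, bin_size, rng)

# Leave one sample from each condition out for testing; train on the rest
X_test  = np.stack([face_samples[0], object_samples[0]])
y_test  = np.array([0, 1])
X_train = np.concatenate([face_samples[1:], object_samples[1:]])
y_train = np.concatenate([np.zeros(len(face_samples) - 1), np.ones(len(object_samples) - 1)])

print(SVC(kernel='linear').fit(X_train, y_train).score(X_test, y_test))

Repeating this over many random permutations and averaging the accuracies across iterations gives an estimate of the overall decoding performance.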

Acknowledgment

This work was supported by the McGovern Institute Neurotechnology Program (PIs: Aude Oliva and Dimitrios Pantazis). http://mcgovern.mit.edu/technology/neurotechnology-program

References

  1. Khaligh-Razavi SM, Bainbridge W, Pantazis D, Oliva A (2016)
    From what we perceive to what we remember: Characterizing representational dynamics of visual memorability. bioRxiv, 049700.

  2. Cichy RM, Pantazis D, Oliva A (2014)
    Resolving human object recognition in space and time. Nature Neuroscience, 17:455–462.

Additional documentation