Decoding conditions

Authors: Seyed-Mahdi Khaligh-Razavi, Francois Tadel, Dimitrios Pantazis

This tutorial illustrates how to use the functions developed at Aude Oliva's lab (MIT) to run support vector machine (SVM) and linear discriminant analysis (LDA) classification on your MEG data across time.

License

The decoding tutorial dataset remains the property of Aude Oliva's Lab, Computer Science and Artificial Intelligence Laboratory (CSAIL), Massachusetts Institute of Technology, Cambridge, MA, USA. Its use and transfer outside the Brainstorm tutorial, e.g. for research purposes, is prohibited without written consent from the lab.

If you reference this dataset in your publications, please acknowledge its authors (Seyed-Mahdi Khaligh-Razavi and Dimitrios Pantazis) and cite Khaligh-Razavi et al. (2015). For questions, please contact us through the Brainstorm forum.

Presentation of the experiment

  • One subject, one acquisition run of 15 minutes.
  • The subject performs an orthogonal image categorization task.
  • During this run, the participant saw:
    • 720 stimuli (360 faces, 360 scenes)
    • Images presented centrally (6 degrees of visual angle)
    • Presentation duration of each image: 500 ms
  • The subject has to decide for each image (without responding immediately) whether:
    • scenes are indoor or outdoor
    • faces are male or female
  • Response: Every 2 to 4 trials (randomly determined), a question mark appeared on the screen for 1 second. At this point, participants pressed a button to indicate the category of the last image (male/female or indoor/outdoor); they were also allowed to blink or swallow. This design ensured that participants attended to the images without explicitly encoding them into memory. Because participants responded only during these question-mark trials, activity during stimulus perception was not contaminated by motor activity.

  • Fixation: During the task, participants were asked to fixate on a cross centered on the screen and their eye movements were tracked, to ensure any results were not due to differential eye movement patterns.

  • Context: The whole experiment was divided into 16 runs. Each run contained 45 randomly selected images, each shown twice per run (not in immediate succession), resulting in a total experiment time of about 50 minutes. The MEG data included here for demonstration purposes contains only the first 15 minutes of the session, during which 53 faces and 54 scenes were presented.

  • Ethics: The study was conducted according to the Declaration of Helsinki and approved by the local ethics committee (Institutional Review Board of the Massachusetts Institute of Technology). Informed consent was obtained from all participants.

  • MEG acquisition at 1000 Hz with an Elekta Neuromag Triux system.

Description of the decoding functions

Two decoding processes are available in Brainstorm: one based on cross-validation and one based on random permutations, both described in the sections below.

These two processes work in very similar ways: both train an SVM or LDA classifier to discriminate two conditions at each time point of the trials, and they differ mainly in how the training and test sets are defined.

In the context of this tutorial, we have two condition types: faces and scenes. We want to decode faces vs. scenes using all 306 MEG channels. In the dataset, the face trials are grouped under condition '201' and the scene trials under condition '203'.
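
For users who prefer scripting their analyses, a process of this kind can in principle be run through Brainstorm's generic scripting interface, bst_process. The sketch below only illustrates that pattern: the process name 'process_decoding_crossval' and its option names are assumptions, to be checked against the script produced by the pipeline editor (Generate .m script).

    % Hypothetical scripted call: the process name and option names below are
    % assumptions; verify them with the pipeline editor's "Generate .m script".
    % sFilesA: trials of condition '201' (faces); sFilesB: trials of condition '203' (scenes)
    sResult = bst_process('CallProcess', 'process_decoding_crossval', sFilesA, sFilesB, ...
        'sensortypes', 'MEG', ...
        'kfold',       5);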

Download and installation

Import the recordings

Select files

Select the Process2 tab at the bottom of the Brainstorm window.

Cross-validation

Cross-validation is a model validation technique for assessing how the results of our decoding analysis will generalize to an independent data set.
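
As a concrete illustration of the principle (not of Brainstorm's internal implementation), the MATLAB sketch below computes a 5-fold cross-validated decoding accuracy at a single time point. It assumes Xt is an [nTrials x 306] matrix of sensor values at that time point and y a vector of condition labels (1 = face, 2 = scene); it requires the Statistics and Machine Learning Toolbox, and all variable names are illustrative.

    % 5-fold cross-validation at one time point: trials are split into 5 folds;
    % a linear SVM is trained on 4 folds and tested on the held-out fold.
    mdl = fitcsvm(Xt, y, 'KernelFunction', 'linear');   % linear SVM classifier
    cv  = crossval(mdl, 'KFold', 5);                    % cross-validated model
    acc = 1 - kfoldLoss(cv);   % fraction of correctly classified left-out trials

Repeating this at every time sample produces the decoding accuracy curve across the trial.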

Permutation

This is an iterative procedure. In each iteration, the training and test data for the SVM/LDA classifier are selected by randomly permuting the samples and grouping them into bins of size n (the trial bin size is configurable). Two samples (one from each condition) are left out as the test set, and the classifier is trained on the remaining data.
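
To make the procedure concrete, here is a minimal MATLAB sketch of one permutation scheme consistent with the description above (not Brainstorm's internal code). It assumes Xa and Xb are [nTrials x 306] matrices for the two conditions at a single time point, and that the trials within each bin are averaged into a single sample (an assumption); the bin size, number of permutations, and all variable names are illustrative.

    % One possible permutation scheme, sketched under the assumptions above.
    nPerm   = 100;   % number of random permutations (illustrative)
    binSize = 5;     % trial bin size n (illustrative)
    nBins   = floor(min(size(Xa,1), size(Xb,1)) / binSize);
    acc     = zeros(1, nPerm);
    for p = 1:nPerm
        % Randomly permute the trials and average them within consecutive bins
        Pa = Xa(randperm(size(Xa,1)), :);
        Pb = Xb(randperm(size(Xb,1)), :);
        Ma = squeeze(mean(reshape(Pa(1:nBins*binSize,:), binSize, nBins, []), 1));
        Mb = squeeze(mean(reshape(Pb(1:nBins*binSize,:), binSize, nBins, []), 1));
        % Leave one averaged sample per condition out for testing
        Xtrain = [Ma(1:end-1,:); Mb(1:end-1,:)];
        ytrain = [ones(nBins-1,1); 2*ones(nBins-1,1)];
        mdl    = fitcsvm(Xtrain, ytrain, 'KernelFunction', 'linear');
        ypred  = predict(mdl, [Ma(end,:); Mb(end,:)]);
        acc(p) = mean(ypred == [1; 2]);
    end
    fprintf('Average decoding accuracy over permutations: %.1f%%\n', 100*mean(acc));

Averaging trials within bins before classification trades the number of available samples for a higher signal-to-noise ratio per sample, which is why the bin size is exposed as an option.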

References

  1. Khaligh-Razavi SM, Bainbridge W, Pantazis D, Oliva A (2015)
    Introducing Brain Memorability: an MEG study (in preparation).

  2. Cichy RM, Pantazis D, Oliva A (2014)
    Resolving human object recognition in space and time, Nature Neuroscience, 17:455–462.




