= Machine learning: Decoding / MVPA =
''Authors: Dimitrios Pantazis, Seyed-Mahdi Khaligh-Razavi, Francois Tadel''

This tutorial illustrates how to run MEG decoding using support vector machines (SVM).

== License ==
To reference this dataset in your publications, please cite Cichy et al. (2014).

== Description of the decoding functions ==
Two decoding processes are available in Brainstorm:

 * Decoding > SVM classifier decoding
 * Decoding > max-correlation classifier decoding

These two processes work in a similar way, but they use a different classifier, so only the SVM process is demonstrated here.

 * '''Input''': the channel data from two conditions (e.g. condA and condB) across time. The number of samples per condition does not have to be the same for condA and condB, but each condition needs enough samples to create k folds (see the k-fold parameter below).
 * '''Output''': a decoding time course, or a temporal generalization matrix (train time x test time).
 * '''Classifier''': two methods are offered for the classification of MEG recordings across time: support vector machine (SVM) and max-correlation classifier.

In the context of this tutorial, we have two condition types: faces and objects. We want to decode faces vs. objects using the 306 MEG channels.
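To make the classifier choices concrete, below is a minimal conceptual sketch of what a max-correlation classifier does at a single time point: it assigns a test trial to the class whose mean training pattern it correlates with best. The data are simulated and all variable names are hypothetical; Brainstorm's actual implementation adds subaveraging, cross-validation and permutations, as described later in this tutorial.

{{{
% Conceptual sketch of a max-correlation classifier at one time point.
% Simulated data; variable names are hypothetical, not Brainstorm internals.
% (corr requires the Statistics and Machine Learning Toolbox)
nChannels = 306; nTrials = 20;
dataA = randn(nChannels, nTrials) + 0.5;   % condition A (e.g. faces)
dataB = randn(nChannels, nTrials) - 0.5;   % condition B (e.g. objects)

% Hold out the last trial of each condition for testing
templA = mean(dataA(:, 1:end-1), 2);       % class template: mean training pattern
templB = mean(dataB(:, 1:end-1), 2);
testA  = dataA(:, end);
testB  = dataB(:, end);

% Assign each test trial to the class whose template it correlates with best
correctA = corr(testA, templA) > corr(testA, templB);
correctB = corr(testB, templB) > corr(testB, templA);
accuracy = 100 * mean([correctA, correctB]);
fprintf('Decoding accuracy at this time point: %g%%\n', accuracy);
}}}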
== Download and installation ==
 * From the [[http://neuroimage.usc.edu/bst/download.php|Download]] page of this website, download the file: '''subj04NN_sess01-0_tsss.fif'''
 * Start Brainstorm (Matlab scripts or stand-alone version).
 * Select the menu File > Create new protocol. Name it "'''TutorialDecoding'''" and select the options:
  * "'''Yes, use protocol's default anatomy'''",
  * "'''No, use one channel file per condition'''".

{{attachment:1_create_new_protocol.jpg||width="400"}}

== Import the recordings ==
 * Go to the "functional data" view (sorted by subjects).
 * Right-click on the TutorialDecoding folder > New subject > '''Subject01'''. Leave the default options you defined for the protocol.
 * Right-click on the subject node (Subject01) > '''Review raw file'''. Select the file format "'''MEG/EEG: Neuromag FIFF (*.fif)'''", then select the file '''subj04NN_sess01-0_tsss.fif'''.

{{attachment:2_review_raw_file.jpg||width="440"}}

{{attachment:2_review_raw_file2.jpg||width="440"}}
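If you prefer scripting these steps, a sketch is shown below. gui_brainstorm('CreateProtocol', ...) is Brainstorm's standard scripting call for protocol creation; the option keys passed to process_import_data_raw are assumptions here, so generate the authoritative call with the Pipeline editor's "Generate .m script" menu.

{{{
% Hedged scripting sketch of the protocol creation and "Review raw file" steps.
% The option keys of process_import_data_raw are assumptions -- verify with
% Pipeline editor > Generate .m script.
gui_brainstorm('CreateProtocol', 'TutorialDecoding', 1, 0);  % default anatomy: yes; default channel file: no
sFiles = bst_process('CallProcess', 'process_import_data_raw', [], [], ...
    'subjectname',    'Subject01', ...
    'datafile',       {'subj04NN_sess01-0_tsss.fif', 'FIF'}, ...
    'channelreplace', 1, ...
    'channelalign',   0);  % skip MEG/MRI registration: decoding runs on sensor data
}}}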
> * Select "Event channels" to read the triggers from the stimulus channel. <
><
> {{attachment:3_event_channel.jpg||width="320"}} * We will not pay attention to MEG/MRI registration because we are not going to compute any source models. The decoding is done on the sensor data. * Double click on the 'Link to raw file' to visualize the raw recordings. Event codes 13-24 indicate responses to face images, and we will combine them to a single group called 'faces'. To do so, select events 13-24 and from the menu select "'''Events > Duplicate groups'''". Then select "'''Events > Merge groups'''". The event codes are duplicated first so we do not lose the original 13-24 event codes. {{attachment:4_duplicate_groups_faces.jpg||width="500"}} <
><
{{attachment:5_merge_groups_faces.jpg||width="500"}}

 * Event codes 49-60 indicate responses to object images, and we will combine them into a single group called 'objects'. To do so, select events 49-60, then select Events > Duplicate groups followed by Events > Merge groups. No screenshots are shown since this is similar to the steps above; a scripting sketch of both merge steps follows.
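A hedged scripting sketch of both merge steps is shown below. Brainstorm provides an event-merging process, but the process and option names used here are assumptions; the authoritative call comes from the Pipeline editor's "Generate .m script" menu. Note that, unlike the GUI steps above, this sketch merges the original groups directly instead of duplicating them first.

{{{
% Hedged sketch: merge trigger groups 13-24 into 'faces' and 49-60 into 'objects'.
% Process and option names are assumptions -- verify with Generate .m script.
sFiles = bst_process('CallProcess', 'process_evt_merge', sFiles, [], ...
    'evtnames', '13,14,15,16,17,18,19,20,21,22,23,24', ...
    'newname',  'faces');
sFiles = bst_process('CallProcess', 'process_evt_merge', sFiles, [], ...
    'evtnames', '49,50,51,52,53,54,55,56,57,58,59,60', ...
    'newname',  'objects');
}}}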
 * We will now import the 'faces' and 'objects' responses to the database. Select "'''File > Import in database'''".

{{attachment:10_import_in_database.jpg||width="500"}}
 * Select only two events: 'faces' and 'objects'.
 * Epoch time: [-200, 800] ms
 * Remove DC offset: Time range: [-200, 0] ms
 * Do not create separate folders for each event type.

{{attachment:11_import_in_database_window.jpg||width="500"}}
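For scripted pipelines, the import step can be sketched as below; the option keys of process_import_data_event are assumptions, so check the call produced by Pipeline editor > Generate .m script.

{{{
% Hedged scripting sketch of File > Import in database. Option keys are
% assumptions -- verify with Pipeline editor > Generate .m script.
sEpochs = bst_process('CallProcess', 'process_import_data_event', sFiles, [], ...
    'subjectname', 'Subject01', ...
    'eventname',   'faces, objects', ...
    'epochtime',   [-0.200, 0.800], ...  % [-200, 800] ms epochs
    'createcond',  0, ...                % no separate folder per event type
    'baseline',    [-0.200, 0]);         % remove DC offset over [-200, 0] ms
}}}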
== Select files ==
 * Drag and drop all the face and object trials into the Process1 tab at the bottom of the Brainstorm window.
 * Intuitively, you might have expected to use the Process2 tab to decode faces vs. objects. But the decoding process is designed to also handle pairwise decoding of multiple classes (not just two), so more than two categories can be entered in the Process1 tab.

{{attachment:12_select_files.jpg||width="400"}}

== Decoding with cross-validation ==
'''Cross-validation''' is a model validation technique for assessing how the results of our decoding analysis will generalize to an independent data set.

 * Select the process "'''Decoding > SVM decoding'''" {{attachment:13_pipeline_editor_select_decoding.jpg||width="400"}}
 * Select 'MEG' for the sensor types.
 * Set the low-pass cutoff frequency to 30 Hz. Equivalently, one could apply a low-pass filter to the recordings and then run the decoding process; this option is a shortcut that filters the data only for decoding, without permanently altering the input recordings.
 * SVM decoding requires the LibSVM toolbox ([[http://www.csie.ntu.edu.tw/~cjlin/libsvm/#download|download]] it and add it to your Matlab path).
 * Select 100 for the number of permutations, or use a smaller number for faster results.
 * Select 5 for the number of k-folds.
 * Select 'Pairwise' for the decoding type. Hint: if more than two classes are input to the Process1 tab, the decoding process performs decoding separately for each possible pair of classes, and returns the results in the same form as Matlab's 'squareform' function (i.e. the lower triangular elements in columnwise order).
 * The decoding process follows a procedure similar to Guggenmos et al. (2018). Namely, to reduce the computational load and improve the signal-to-noise ratio, all trials from each class are first randomly assigned to k folds, and all trials within each fold are then subaveraged into a single trial, yielding a total of k subaveraged trials per class. Decoding then follows a leave-one-out cross-validation procedure on the subaveraged trials.
 * For example, if we have two classes with 100 trials each, selecting 5 folds will randomly assign the 100 trials of each class to 5 folds of 20 trials each. The process will then subaverage the 20 trials within each fold, yielding 5 subaveraged trials per class. A conceptual sketch of this scheme is shown after the screenshot below.

{{attachment:14_svm_decoding_pairwise.jpg||width="380"}}
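The sketch below illustrates this subaveraging and leave-one-fold-out scheme on simulated data, using LibSVM's Matlab interface (svmtrain/svmpredict). It is a conceptual sketch with hypothetical variable names, not Brainstorm's implementation; in particular, the actual process repeats the random fold assignment over the requested number of permutations and averages the resulting accuracies.

{{{
% Conceptual sketch: subaverage trials into k folds, then run leave-one-fold-out
% SVM decoding at each time point. Uses LibSVM's svmtrain/svmpredict (not the
% deprecated Statistics Toolbox svmtrain). Simulated data, hypothetical names.
nChannels = 306; nTimes = 121; nTrials = 100; kFolds = 5;
trialsA = randn(nChannels, nTimes, nTrials) + 0.3;  % 'faces'
trialsB = randn(nChannels, nTimes, nTrials) - 0.3;  % 'objects'

% Randomly assign trials to k folds and subaverage within each fold
perm = randperm(nTrials);
foldSize = nTrials / kFolds;                        % 20 trials per fold
subA = zeros(nChannels, nTimes, kFolds);
subB = zeros(nChannels, nTimes, kFolds);
for k = 1:kFolds
    idx = perm((k-1)*foldSize + (1:foldSize));
    subA(:,:,k) = mean(trialsA(:,:,idx), 3);
    subB(:,:,k) = mean(trialsB(:,:,idx), 3);
end

% Leave-one-fold-out cross-validation with a linear SVM at each time point
labels   = [ones(kFolds-1,1); 2*ones(kFolds-1,1)];
accuracy = zeros(1, nTimes);
for t = 1:nTimes
    correct = 0;
    for k = 1:kFolds
        trainIdx = setdiff(1:kFolds, k);
        Xtrain = [squeeze(subA(:,t,trainIdx))'; squeeze(subB(:,t,trainIdx))'];
        Xtest  = [squeeze(subA(:,t,k))'; squeeze(subB(:,t,k))'];
        model  = svmtrain(labels, Xtrain, '-s 0 -t 0 -q');  % linear C-SVC
        pred   = svmpredict([1; 2], Xtest, model);
        correct = correct + sum(pred == [1; 2]);
    end
    accuracy(t) = 100 * correct / (2*kFolds);
end
plot(accuracy);  % decoding time course (percent correct)
}}}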
 * The process will take some time. The results are then saved in a file in the 'decoding' folder.

{{attachment:15_svm_decoding_pairwise_results.jpg||width="800"}}
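For completeness, a scripted version of this step might look roughly like the sketch below. Both the process name 'process_decoding_svm' and the option keys are assumptions inferred from the GUI options; the authoritative call is the one produced by Pipeline editor > Generate .m script.

{{{
% Hedged sketch of a scripted SVM decoding call. The process name and all
% option keys are assumptions -- verify with Pipeline editor > Generate .m script.
sDecoding = bst_process('CallProcess', 'process_decoding_svm', sEpochs, [], ...
    'sensortypes',   'MEG', ...      % use all MEG channels
    'lowpass',       30, ...         % low-pass filter applied only for decoding
    'npermutations', 100, ...        % random fold assignments to average over
    'kfold',         5, ...          % number of subaveraging folds
    'method',        'pairwise');    % or select temporal generalization instead
}}}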
 * For temporal generalization, repeat the above process but select 'Temporal generalization'.
 * To evaluate the persistence of neural representations over time, the decoding procedure can be generalized across time by training the SVM classifier at a given time point t, as before, but testing across all other time points (Cichy et al., 2014; King and Dehaene, 2014; Isik et al., 2014). Intuitively, if representations are stable over time, the classifier should successfully discriminate signals not only at the trained time t, but also over extended periods of time. A conceptual sketch of this train/test scheme follows.
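The sketch below makes this train/test scheme concrete. It reuses the simulated subaveraged data (subA, subB, kFolds) from the cross-validation sketch above, again with LibSVM and hypothetical variable names rather than Brainstorm's implementation.

{{{
% Conceptual temporal generalization sketch: train at time t, test at all times t2.
% Continues the variables subA, subB, kFolds from the previous sketch.
[~, nTimes, kFolds] = size(subA);
labels   = [ones(kFolds-1,1); 2*ones(kFolds-1,1)];
tgMatrix = zeros(nTimes, nTimes);                 % train time x test time
for t = 1:nTimes
    for k = 1:kFolds
        trainIdx = setdiff(1:kFolds, k);
        Xtrain = [squeeze(subA(:,t,trainIdx))'; squeeze(subB(:,t,trainIdx))'];
        model  = svmtrain(labels, Xtrain, '-s 0 -t 0 -q');
        for t2 = 1:nTimes
            Xtest = [squeeze(subA(:,t2,k))'; squeeze(subB(:,t2,k))'];
            pred  = svmpredict([1; 2], Xtest, model);  % prints per-call output
            tgMatrix(t,t2) = tgMatrix(t,t2) + sum(pred == [1; 2]);
        end
    end
end
tgMatrix = 100 * tgMatrix / (2*kFolds);           % percent correct
imagesc(tgMatrix); axis xy; xlabel('Test time'); ylabel('Train time'); colorbar;
}}}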
{{attachment:16_svm_decoding_temporalgeneralization.jpg||width="380"}}

 * The process will take some time. The results are then saved in a file in the 'decoding' folder.

{{attachment:17_svm_decoding_temporalgeneralization_results.jpg||width="800"}}

== References ==
 1. Cichy RM, Pantazis D, Oliva A (2014) [[http://www.nature.com/neuro/journal/v17/n3/full/nn.3635.html|Resolving human object recognition in space and time]], Nature Neuroscience, 17:455–462.
 1. Guggenmos M, Sterzer P, Cichy RM (2018) [[https://doi.org/10.1016/j.neuroimage.2018.02.044|Multivariate pattern analysis for MEG: A comparison of dissimilarity measures]], NeuroImage, 173:434-447.

== Additional documentation ==
 * Forum: Decoding in source space: http://neuroimage.usc.edu/forums/showthread.php?2719