# Interpretation of the Decoding output

Hi, I'm currently trying to use the decoding process implemented in Brainstorm. The experimental design is simple: I have two different types of stimuli. I want to use decoding to understand whether the input data, in this case sensor-level high-density EEG (128 channels), can discriminate the type of stimulus I presented to the participant. I obtained the plot of the decoding accuracy, but I'm not sure how to interpret it. My epoch length is 1000 ms. The accuracy reaches a peak of 75% at 300 ms, stays at 75% for around 200 ms, and then drops to 60% at 550 ms. So, is it possible to infer that the time window from 300 to 500 ms is the one in which the signal best discriminates between the two stimuli? Moreover, is it possible to use this accuracy as a guide for a source analysis? I mean, if the brain activity discriminates the stimuli between 300 and 500 ms, and if we reconstruct the sources in that specific time window and find activation in different areas, can we infer that the activation of those different areas in that time window is the reason why the signal differs between the two stimuli?
Thanks

Hello,

> So, is it possible to infer that the time window from 300 to 500 ms is the one in which the signal best discriminates between the two stimuli?

Your interpretation is correct. During this time window, the classifier is able to discriminate between the two conditions above chance level. If you had 100 unlabeled trials and used this model to guess which condition each came from, about 75 would be correctly classified and 25 incorrectly. We can infer from this that the brain responds differently to the two stimuli.
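For intuition, here is a minimal sketch (outside Brainstorm, using scikit-learn on synthetic data) of what a time-resolved decoding curve computes: one cross-validated classifier per time point, whose accuracy rises above chance only where the two conditions actually differ. The data dimensions, classifier, and effect window are all illustrative assumptions, not Brainstorm's internals.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic data standing in for epoched EEG: 100 trials x 128 channels x 50
# time points, with a class difference injected between samples 15 and 25
# (playing the role of the "300-500 ms" window).
n_trials, n_channels, n_times = 100, 128, 50
X = rng.standard_normal((n_trials, n_channels, n_times))
y = np.repeat([0, 1], n_trials // 2)   # two stimulus conditions
X[y == 1, :10, 15:25] += 1.0           # condition effect on a subset of channels

# Decode at each time point: one classifier per sample,
# 5-fold cross-validated accuracy.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
accuracy = np.array([
    cross_val_score(clf, X[:, :, t], y, cv=5).mean()
    for t in range(n_times)
])

# Accuracy hovers near chance (0.5) outside the effect window
# and rises well above chance inside it.
print(accuracy[15:25].mean(), accuracy[:10].mean())
```

Plotting `accuracy` against time reproduces the kind of curve discussed above: the window where it sits above chance is where the classifier can tell the conditions apart.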

> if we reconstruct the source in that specific time window and we find activation in different areas, can we infer that the activation of different areas in that specific time window is the reason why the signal in that specific time window differs between the two stimuli?

Some of the differences you observe at source level between the two conditions are probably involved in this ability of the classifier to separate the two sets of trials. However, don’t over-interpret this result: it doesn’t necessarily mean that all the differences you observe are meaningful or significant. This decoding accuracy graph tells you when there are differences between conditions, but not where or how.

Once you have identified when there are differences, you can run statistical tests at the sensor or source level to understand where these differences come from. Limiting the test to a single averaged time window also makes permutation tests at the source level computationally affordable.
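As a rough illustration of that second step, here is a sketch of a label-shuffling permutation test on trial data already averaged over the window of interest, with a max-statistic correction across channels. This is generic NumPy on synthetic data, not Brainstorm's statistics pipeline; the effect channel and sample sizes are made up.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic trial data already averaged over the window of interest
# (e.g. 300-500 ms): 50 trials per condition x 128 channels,
# with a true effect placed on the first channel only.
n_per_cond, n_channels = 50, 128
cond_a = rng.standard_normal((n_per_cond, n_channels))
cond_b = rng.standard_normal((n_per_cond, n_channels))
cond_b[:, 0] += 1.0

def perm_test_max_stat(a, b, n_perm=2000, rng=rng):
    """Permutation test of the condition difference per channel,
    corrected for multiple comparisons via the max statistic."""
    data = np.vstack([a, b])
    labels = np.r_[np.zeros(len(a), bool), np.ones(len(b), bool)]
    observed = np.abs(b.mean(0) - a.mean(0))
    max_null = np.empty(n_perm)
    for i in range(n_perm):
        perm = rng.permutation(labels)          # shuffle condition labels
        diff = data[perm].mean(0) - data[~perm].mean(0)
        max_null[i] = np.abs(diff).max()        # max over channels -> null
    # Corrected p-value per channel: how often the null max beats it.
    return (max_null[None, :] >= observed[:, None]).mean(axis=1)

p = perm_test_max_stat(cond_a, cond_b)
print(p[0])  # the effect channel should come out with a small p-value
```

The same logic carries over to source-level maps, where restricting the test to one averaged window keeps the number of permutations manageable.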

Hello Francois,
If we use MVPA to find that a particular time window can distinguish between two conditions, can we understand that MVPA helps us select a valid time window for subsequent analyses? For example, the traditional grand-average ERP analysis and the time window for the source-localization analysis.
Is this just a data-driven way to look for specific time windows of significant difference?

Exactly!

Hi, Francois,
In the tutorial, the MVPA method is applied to a single subject. How can I compute it at the group level? For example, if I have 50 subjects, and each subject has conditions A and B, I want to use the MVPA method to find the difference between conditions A and B averaged across these 50 subjects.

Hi zxk11100,

MEG sensor patterns are largely subject-specific, because the cortical anatomy differs across individuals, resulting in different topographies of the magnetic fields outside the head. Thus, when using MVPA, it is recommended to run it within subject, for example contrasting A vs. B to produce one decoding time course per individual. You will end up with N time courses, where N is the number of subjects, which you can subject to standard statistical tests against 50% (chance-level decoding) across your N data samples. This would give you the significant time points.
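That group-level test can be sketched in a few lines: stack the per-subject decoding time courses and test each time point against chance with a one-sample t-test across subjects. The numbers below are synthetic and the Bonferroni correction is just one simple choice (cluster-based permutation tests are a common, more sensitive alternative); none of this is specific to Brainstorm.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# One decoding time course per subject: 50 subjects x 60 time points,
# with above-chance decoding (~65%) injected between samples 20 and 35.
n_subjects, n_times = 50, 60
acc = 0.5 + 0.05 * rng.standard_normal((n_subjects, n_times))
acc[:, 20:35] += 0.15

# One-sample t-test against chance (0.5) at each time point, across subjects.
t_vals, p_vals = stats.ttest_1samp(acc, popmean=0.5, axis=0)

# Bonferroni correction across time points.
significant = p_vals * n_times < 0.05
print(np.flatnonzero(significant))
```

The indices printed at the end are the time points where decoding is reliably above chance at the group level.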

While in principle you can run MVPA across participants by combining all their data, this will only be sensitive to patterns that are common across subjects. Such common patterns may exist, but they will be highly coarse and probably carry little useful information. Since EEG patterns are more blurred than MEG patterns, this could work better for EEG, but I recommend against such analyses for both MEG and EEG.

Note that this recommendation is consistent with common fMRI practice, where voxel patterns differ per individual and MVPA is always run within subject.

Best,
Dimitrios