Individual trials in source space

Hello,

For an experiment I ran, I had two conditions (let’s call them A and B) that I’m hoping to decode with a simple SVM. To do this, I obviously need data from individual trials to train/test the classifier. I understand how to do this in sensor space, where the standard Brainstorm preprocessing workflow saves individual trials as Matlab files that I can feed to my classifier. However, one thing I was hoping to try was some decoding in source space (i.e., can you decode A vs B in frontal scouts vs early visual scouts?). So my question is: is there a way to get the data for each individual trial in source space rather than in sensor space? I know how to use the GUI to select the scouts I want and export the time series within those scouts to Matlab. But those files are just a 1xN vector, where N is the number of time points (in this case 1000 ms), averaged across every vertex in that scout and across all the individual trials. What I’m wondering is whether I can get a) data in source space for every single trial that is b) formatted so I have the time points for every vertex.

Does this make sense? Is there a better way to approach this? I just want to see if I can take advantage of MEG’s resolution and the MRI scans I have of my participants to try to differentiate my ability to decode A and B in a very coarse frontal region and a very coarse early visual region.

Thank you so much in advance for any advice you may have.

The decoding processes in Brainstorm were written to support only MEG/EEG recordings (imported trials), not source maps or scout time series:
http://neuroimage.usc.edu/brainstorm/Tutorials/Decoding
https://github.com/brainstorm-tools/brainstorm3/blob/master/toolbox/process/functions/process_decoding_crossval.m#L36
https://github.com/brainstorm-tools/brainstorm3/blob/master/toolbox/process/functions/process_decoding_permutation.m#L36

It may be possible to adapt the code to handle scout time series (to get the values for all the vertices separately, select the option “scout function = All”). However, I’m not sure this is in your best interest.
This transformation to source space does not add any information to your recordings; on the contrary, it destroys some information (the data is whitened before inversion). If you are able to discriminate between two conditions A and B using all the sensors, you might not be able to do it in source space, even when using the whole brain. From what I understand, when using MEG for decoding, it is preferable to use unprocessed signals.
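If you want to script the extraction anyway, the call would look roughly like the sketch below. This is a hypothetical example only: the option values are written from memory and may differ between Brainstorm versions, so generate the exact syntax with the pipeline editor’s “Generate .m script” button. sFilesSrc stands for the list of source files, one per imported trial.

```matlab
% Hypothetical sketch: extract per-trial scout time series, keeping every vertex.
% sFilesSrc is assumed to be a list of source files, one per imported trial.
sScouts = bst_process('CallProcess', 'process_extract_scout', sFilesSrc, [], ...
    'timewindow',     [], ...                % keep the full epoch
    'scouts',         {'Desikan-Killiany', {'pericalcarine L', 'lateraloccipital L'}}, ...
    'scoutfunc',      5, ...                 % 5 = All: one row per vertex instead of one averaged row
    'isflip',         1, ...                 % flip source signs to avoid cancellation within the scout
    'isnorm',         0, ...
    'concatenate',    0, ...                 % one output matrix file per trial
    'save',           1, ...
    'addrowcomment',  1, ...
    'addfilecomment', 1);
% Each output is a matrix file whose Value field is [nVertices x nTimes];
% load it with in_bst_matrix(sScouts(i).FileName) to build the trial set for the classifier.
```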

Maybe the authors can give you a better answer: @pantazis, @skhaligh, @rmcichy, @Radek, @Sylvain ?

Interesting. Thanks for the info. I have two follow-up questions then (both are relatively short):

  1. Do you think there is any real, principled way to do some decoding between conditions A and B that at least crudely differentiates things like “visual cortex” (i.e., pericalcarine and lateral occipital) and “frontal cortex” (i.e., lateral and orbitofrontal cortex)? I’m getting different waveforms over time from those large regions I’ve created in source space, and I was hoping to train a simple classifier on that data to do some basic decoding across two pretty broad regions of the brain.

  2. What if I did something like this? Imagine that I don’t try to get the individual time points for every vertex in source space for every single trial. Instead, I just get a single waveform from the large scouts I’ve created in Brainstorm, so I have something like 400 trials for condition A in the frontal lobe and 400 trials for condition B in the frontal lobe. I then train a classifier by showing it 300 of the condition-A waveforms (from 300 trials) and 300 of the condition-B waveforms. Then I show it the 100 trials per condition that I held out and ask it, “do these waveforms correspond to condition A or B?” It’s a little clunky and I’d obviously lose some power, but maybe that’d be an option? (A rough sketch of what I mean is below.)
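Just to make point 2 concrete, this is the kind of thing I’m picturing (a hypothetical MATLAB sketch using fitcsvm from the Statistics and Machine Learning Toolbox; Xa and Xb are placeholder names for the 400-trials-by-time matrices of scout-averaged waveforms exported from Brainstorm):

```matlab
% Hypothetical hold-out scheme described in point 2 above.
% Xa, Xb: [400 x nTimes] matrices of scout-averaged waveforms, one row per trial
% (placeholder names; built from the exported Brainstorm scout time series).
X = [Xa; Xb];                                      % features: each trial's waveform
y = [zeros(size(Xa,1),1); ones(size(Xb,1),1)];     % labels: 0 = condition A, 1 = condition B

% Hold out 100 trials per condition for testing (indices are illustrative)
testIdx  = [1:100, size(Xa,1) + (1:100)]';
trainIdx = setdiff((1:numel(y))', testIdx);

% Train a linear SVM on the remaining 300 + 300 waveforms
mdl = fitcsvm(X(trainIdx,:), y(trainIdx), 'KernelFunction', 'linear', 'Standardize', true);

% Ask the classifier about the held-out trials and score it
acc = mean(predict(mdl, X(testIdx,:)) == y(testIdx));
fprintf('Hold-out decoding accuracy: %.1f%%\n', 100*acc);
```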

Hi,

I am afraid I have to echo Francois’ comment that Brainstorm currently only supports decoding at the sensor level. But I understand the need to perform source-based decoding, and what you are asking makes sense. It is valuable to conduct decoding on sources confined within a cortical ROI: this would give you the information encoded in that ROI (though you would probably find that a lot of information is shared across ROIs because of spatial blurring). Also, it turns out that whitening actually helps in decoding. So there is a lot to improve in the decoding functions in Brainstorm, and we somehow need to make it happen.
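Until that is available in Brainstorm, a do-it-yourself version of ROI-confined decoding could look something like this (a rough MATLAB sketch, not Brainstorm code; XroiA and XroiB are assumed to be trial-by-feature matrices built by vectorizing the per-vertex scout values of a single ROI):

```matlab
% Rough sketch: ROI-confined decoding with stratified 10-fold cross-validation.
% XroiA, XroiB: [nTrials x (nVertices*nTimes)] matrices, one row per trial,
% built from the per-vertex values of one cortical ROI (placeholder names).
X = [XroiA; XroiB];
y = [zeros(size(XroiA,1),1); ones(size(XroiB,1),1)];

cvp = cvpartition(y, 'KFold', 10);           % stratified folds based on the labels
acc = zeros(cvp.NumTestSets, 1);
for k = 1:cvp.NumTestSets
    mdl    = fitcsvm(X(cvp.training(k),:), y(cvp.training(k)), ...
                     'KernelFunction', 'linear', 'Standardize', true);
    acc(k) = mean(predict(mdl, X(cvp.test(k),:)) == y(cvp.test(k)));
end
fprintf('ROI cross-validated accuracy: %.1f%% +/- %.1f%%\n', 100*mean(acc), 100*std(acc));
```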

Best,
Dimitrios