Cortical surface decoding improved with deep structures

Hi all,

For a study, I've been running searchlight decoding analyses on the cortical surface using Brainstorm and reconstructed sources.
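Roughly, the per-vertex analysis works like the sketch below (schematic Python/scikit-learn, not my actual Brainstorm pipeline; the data shapes and variable names are placeholders):

```python
# Schematic surface searchlight: for each vertex, decode condition labels
# from its neighbourhood with cross-validated classification.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

# Assumed (placeholder) inputs:
#   source_data : (n_trials, n_vertices, n_times) reconstructed source time courses
#   labels      : (n_trials,) condition labels
#   neighbors   : list of arrays; neighbors[v] = vertex indices in the
#                 searchlight patch centred on vertex v
def surface_searchlight(source_data, labels, neighbors, cv=5):
    n_trials, n_vertices, _ = source_data.shape
    accuracy = np.zeros(n_vertices)
    for v in range(n_vertices):
        # Features per trial = flattened time courses of the patch vertices
        patch = source_data[:, neighbors[v], :].reshape(n_trials, -1)
        accuracy[v] = cross_val_score(LinearSVC(), patch, labels, cv=cv).mean()
    return accuracy
```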

I recently ran a control analysis to test whether my inverse operator was robust enough, checking if I could find the same results after adding deep structures to the reconstruction pipeline. I expected to see a decrease of decodable signal in some parts of the cortical surface, presumably because it would be captured by neighbouring deep structures.

In fact, I found the opposite: adding deep structures boosts my signal, which I can now decode in areas where I could not decode it before. I am not really sure why this is the case; my main guess is either that the deep structures capture mostly noise (thus increasing the SNR on the reconstructed cortical surface), or that it has something to do with overfitting... In any case, I wanted to share it with you, and if you have any guess, let me know!
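For concreteness, the kind of comparison I am describing could be quantified along these lines (placeholder arrays, not my actual data or statistics):

```python
# Compare vertex-wise decoding-accuracy maps from the two pipelines
# (cortex-only vs. cortex + deep structures) across subjects.
import numpy as np
from scipy import stats

# Assumed shapes: (n_subjects, n_vertices) accuracy maps from each pipeline
def compare_accuracy_maps(acc_cortex_only, acc_with_deep):
    diff = acc_with_deep - acc_cortex_only          # positive where deep structures "boost" decoding
    t, p = stats.ttest_rel(acc_with_deep, acc_cortex_only, axis=0)  # paired test per vertex
    return diff.mean(axis=0), t, p                  # mean difference map + vertex-wise stats
```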

Best,

Aurelien

Interesting. Indeed, changing the source space alters the way the different components in the EEG/MEG signals get mapped onto the brain, in ways that I guess we can't easily predict.

For a full comparison, it would also be interesting to include fully unconstrained source models: one with a regular grid of source points limited to the brain, and one with sources in the entire head (or even the entire MRI volume).
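For readers working in Python, the three source-space configurations could be set up roughly like this with MNE-Python (purely an illustration; the original analysis is in Brainstorm, and the subject, paths, and grid spacing below are placeholders):

```python
# Illustration only: the three source spaces involved in the comparison.
import mne

subjects_dir = "/path/to/freesurfer/subjects"   # placeholder
subject = "subject01"                           # placeholder
bem = f"{subjects_dir}/{subject}/bem/{subject}-bem.fif"  # placeholder inner-skull BEM surface

# 1) Cortical surface source space (the original constrained model)
src_surf = mne.setup_source_space(subject, spacing="oct6",
                                  subjects_dir=subjects_dir)

# 2) Regular volumetric grid restricted to the brain (bounded by the inner skull)
src_brain = mne.setup_volume_source_space(subject, pos=5.0, bem=bem,
                                          subjects_dir=subjects_dir)

# 3) Grid covering roughly the whole head (large sphere, not anatomically bounded)
src_head = mne.setup_volume_source_space(subject, pos=5.0,
                                         sphere=(0.0, 0.0, 0.0, 0.12),
                                         sphere_units="m",
                                         subjects_dir=subjects_dir)
```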