Hello everyone;
I would like some assistance with a data analysis step in Brainstorm (though the question may be as much about MEG theory).
I have a very simple 2 x 2 factorial design, with factors Cue (A1, A2) and Target (B1, B2). After performing an ANOVA (similar to what we would do with an fMRI design), we realised that part of the lack of expected results might be due to the comparison itself: i.e., by building our contrast to find the interaction (A2B2 - A2B1) - (A1B2 - A1B1), we were cancelling out our effect.
We are now trying to go one step back and analyse the data as “All Cues in the Left Hemisphere (process 2A) vs All conditions in the Cue Hemisphere (process 2B)”, expecting to see frontoparietal activity, but we cannot find that either (although, if you look at the individual images of each subject, it is there).
I feel that we are making the same “fMRI-biased” mistake that we made in the ANOVA: we should instead compare “All conditions in the Left Hemisphere” to a baseline (otherwise the bilateral activation we are expecting will, obviously, disappear).
My question (finally) is: what is the best Brainstorm/MEG way to test against a baseline? Should the baseline be selected as the time before any stimulus is presented in a trial? Should I estimate the baseline using permutations?
Thank you for your time;
@Sylvain: any suggestion ?
I don’t have the details of your study design or neuroscience question, but testing against baseline is typically achieved by selecting a prestimulus time window. You can then run a t-test or a nonparametric permutation alternative.
Hope this helps.
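To make the two options above concrete, here is a minimal sketch in Python/NumPy/SciPy (not part of Brainstorm; the synthetic numbers stand in for real per-subject amplitudes, and the effect size is made up) of a paired t-test against a prestimulus baseline and its sign-flip permutation counterpart:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical per-subject values: mean source amplitude in a prestimulus
# baseline window vs. a post-cue window, for 18 subjects
baseline = rng.normal(1.0, 0.2, size=18)
poststim = baseline + 0.5 + rng.normal(0.0, 0.1, size=18)  # fabricated effect

# Parametric option: paired t-test against baseline
t, p = stats.ttest_rel(poststim, baseline)

# Nonparametric option: sign-flip permutation test on the paired differences
diff = poststim - baseline
observed = diff.mean()
n_perm = 5000
flips = rng.choice([-1.0, 1.0], size=(n_perm, diff.size))  # random sign flips
null = (flips * diff).mean(axis=1)                         # null distribution
p_perm = (np.abs(null) >= abs(observed)).mean()
```

In practice Brainstorm runs such tests per sensor/source and per time point, with correction for multiple comparisons; this sketch only shows the logic for a single value per subject.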
Thanks for the reply, Sylvain;
As for the design, it is literally a 2 x 2 factorial design with Cue (valid / invalid) and Target (seen / unseen Gabor-like stimuli). 18 subjects.
As for the scientific question, we are looking at the spatiotemporal dynamics of consciousness and attention, and expect to see bilateral frontoparietal activation in the cue-target period (a 300 ms interval), together with unilateral visual activation (stimuli are lateralized to the left or right visual field).
Ultimately, I am aiming for an ROI-based analysis, but first I wanted to get an “all cues vs baseline”-like whole-brain contrast, then draw my ROIs based on that and analyse the data further (leading to connectivity). How do you see this being conducted? We have over one second of pre-cue interval from which to draw a baseline, and over one second after the target is presented (during the subject’s response).
Best,
A practical way to approach this would be to produce wMNE source maps over the entire trial and use the pre-cue baseline to standardize source amplitudes with z-scoring (I recommend you look at absolute source amplitudes, as you are more interested in activation effects than oscillatory components for now). You can then apply an empirical threshold on the z-maps above 2 to define the ROIs and verify whether activations project to sensible brain areas. You can do this for every subject or on the grand average: smooth the z-maps, project them onto an anatomy template, then average across the group.
Let me know if this needs clarification. Cheers!
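For reference, the baseline z-scoring described above can be sketched in Python/NumPy (the array shapes and window indices are hypothetical; inside Brainstorm this is done by the built-in Z-score standardization process):

```python
import numpy as np

# Hypothetical data: absolute wMNE source amplitudes, (n_sources, n_times)
rng = np.random.default_rng(0)
n_sources, n_times = 500, 600
sources = np.abs(rng.normal(size=(n_sources, n_times)))

# Hypothetical pre-cue baseline window: the first 200 samples
baseline = sources[:, :200]

# Z-score each source against its own baseline mean and standard deviation
mu = baseline.mean(axis=1, keepdims=True)
sigma = baseline.std(axis=1, ddof=1, keepdims=True)
z = (sources - mu) / sigma

# Candidate ROI vertices: any post-baseline sample exceeding z = 2
roi_mask = (z[:, 200:] > 2).any(axis=1)
```

By construction, each source's baseline segment then has mean 0 and standard deviation 1, so the empirical threshold of 2 reads directly as "2 baseline standard deviations above the pre-cue level".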
Terrific! Thanks Sylvain;
Just to double check that everything is clear (and for future users) let me break this into small pieces:
- Produce wMNE source maps over the entire trial.
The tutorial can be found here:
https://neuroimage.usc.edu/brainstorm/Tutorials/TutSourceEstimation?highlight=(wMNE)|(source)|(maps)
- Use the pre-cue baseline to standardize source amplitudes (absolute source amplitudes, z-score).
The tutorial can be found here, under the Z-score section:
https://neuroimage.usc.edu/brainstorm/Tutorials/SourceEstimation
- Apply an empirical threshold on the z-maps above 2 to define the ROIs and verify whether activations project to sensible brain areas.
–> right-click on the source image, select “Maximum: Custom”, and set the minimum threshold above 2
- Smooth, project to the template, average across the group, display results.
I’ll proceed this way; in the meantime, if anything seems incorrect, do not hesitate to point it out;
A short comment about step 2: other researchers recommend computing the z-score from non-rectified signals only (no absolute value), for various reasons: a rectified baseline is not centered on zero, linearity is lost, etc.
This is the approach that was preferred in this pipeline: https://neuroimage.usc.edu/brainstorm/Tutorials/Workflows#Average:_Single_subject
But this is a matter of scientific taste and belief.
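The two objections to rectification can be illustrated with a quick Python/NumPy check (a synthetic zero-mean signal standing in for a real source baseline):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic zero-mean signal standing in for a source baseline
baseline = rng.normal(size=100_000)

# Non-rectified: the baseline is centered on zero
mean_raw = baseline.mean()

# Rectified: the mean shifts to ~sqrt(2/pi) ~= 0.798 for a standard normal,
# so "zero" in the z-scored map no longer means "no deviation from baseline"
mean_rect = np.abs(baseline).mean()

# Loss of linearity: rectification does not commute with summation/averaging
a, b = 1.0, -1.0
print(abs(a + b), abs(a) + abs(b))  # 0.0 vs 2.0
```

The last line is why averaging rectified trials and rectifying an averaged trial give different results: signals of opposite polarity cancel before rectification but add up after it.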