dSPM workflow for between-group statistical analysis

Hi! I have two groups of 25 participants. Each subject completed 5 blocks of a visual experiment, and I want to compare the source amplitude of the ERF in certain ROIs.

I noticed dSPM recently changed and I'm a bit lost, so here is my workflow for the source analysis only:

1-I localize my sources with dSPM for each of the 5 blocks individually (getting unscaled values)

2-I average my maps (weighted average) to obtain 1 map per subject

3-I convert everything to absolute values

4-I project to the default anatomy

5-I apply spatial smoothing (3 mm)

6-I average over a specific time window (e.g., 200 to 300 ms, to capture the activity related to my ERF)

7-I extract my values in my ROIs (based on the Destrieux atlas) and run my ANOVA in JASP or R

8-I dance in the snow because the workflow works nicely
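In array terms, steps 2, 3, 6 and 7 above boil down to a few operations. A minimal NumPy sketch of the idea (the array sizes, sampling rate, trial counts and ROI vertex indices are all made up for illustration; this is not Brainstorm code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dSPM maps: 5 blocks x 2000 vertices x 350 time samples,
# plus the number of trials averaged in each block (for the weighted average).
n_trials = np.array([98, 102, 95, 100, 97])
maps = rng.standard_normal((5, 2000, 350))

# Step 2: weighted average across blocks -> one map per subject
subj_map = np.average(maps, axis=0, weights=n_trials)

# Step 3: rectify (absolute values)
subj_map = np.abs(subj_map)

# Step 6: average over a time window, e.g. 200-300 ms, assuming the
# 350 samples span -50 to +300 ms at 1000 Hz
sfreq, t0 = 1000.0, -0.050
i1 = int(round((0.200 - t0) * sfreq))  # -> sample 250
i2 = int(round((0.300 - t0) * sfreq))  # -> sample 350 (end-exclusive)
win_mean = subj_map[:, i1:i2].mean(axis=1)   # one value per vertex

# Step 7: mean over an ROI (hypothetical vertex indices from an atlas)
roi_vertices = np.arange(1200, 1500)
roi_value = win_mean[roi_vertices].mean()    # one number per subject/ROI
```

The single `roi_value` per subject and ROI is what would then go into the ANOVA.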

Does anyone have advice on my workflow? Is there something I missed?

Thank you very much, all of you, have a nice day!

This looks good to me.
These unscaled dSPM values are good for group analysis. If you are displaying them for individual subjects, just keep in mind that they are not "real dSPM" measures: https://neuroimage.usc.edu/brainstorm/Tutorials/SourceEstimation#Source_map_normalization
Enjoy the snow :slight_smile:

Dear Francois,
I have a question concerning the workflow described above (which I slightly adapted to what I am doing):

1-I localize single-subject condition averages (average of all trials of condition 1 in subject 1) with dSPM (unscaled)

2-convert everything to absolute values

3-project to default anatomy

4-apply spatial smoothing (3 mm)

At this point I would like to compute a t-test against 0. However, the source maps contain only positive values (of course, since we took the absolute value), so when I contrast against 0 everything is significant.
I baseline-corrected and detrended the data to try to mitigate this result... but I am not fully sure this is correct. Any advice?

Also, I wanted to ask if there is a way to change the options of the cluster permutation test. In particular, in FieldTrip one of the options is clusterstatistic, which determines how the single samples belonging to a cluster are combined: 'maxsum', 'maxsize' or 'wcm'. I guess Brainstorm uses the default 'maxsum', but is it possible to use the other two options (i.e., 'maxsize' and 'wcm')?

At this point I would like to compute a t-test against 0. However, the source maps contain only positive values (of course, since we took the absolute value), so when I contrast against 0 everything is significant.

Indeed, strictly positive values will always be significantly different from zero.
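To see why, here is a toy demonstration with pure noise (plain NumPy, nothing Brainstorm-specific; the sample size of 25 simply mirrors the group size in this thread): once you rectify the data, the one-sample t statistic against 0 explodes even though there is no effect at all.

```python
import numpy as np

rng = np.random.default_rng(1)

def t_vs_zero(x):
    """Classic one-sample t statistic against a population mean of 0."""
    return x.mean() / (x.std(ddof=1) / np.sqrt(x.size))

# Pure noise "source values" for 25 subjects at one vertex
noise = rng.standard_normal(25)

t_signed = t_vs_zero(noise)        # signed data: behaves as expected
t_abs = t_vs_zero(np.abs(noise))   # rectified data: strictly positive

# The two-sided critical value for df = 24, alpha = 0.05 is about 2.06
print(abs(t_signed))  # usually below 2.06: nothing to detect
print(t_abs)          # far above 2.06, even though the data are pure noise
```

The mean of rectified noise is pulled away from zero by construction, so the test detects that offset, not any brain activity.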

I baseline-corrected and detrended the data to try to mitigate this result... but I am not fully sure this is correct. Any advice?

@pantazis Any suggestion ?

Also, I wanted to ask if there is a way to change the options of the cluster permutation test. In particular, in FieldTrip one of the options is clusterstatistic, which determines how the single samples belonging to a cluster are combined: 'maxsum', 'maxsize' or 'wcm'. I guess Brainstorm uses the default 'maxsum', but is it possible to use the other two options (i.e., 'maxsize' and 'wcm')?

This was initially offered as an option, and then hard-coded to maxsum to restrict the number of options and make the process more accessible to non-expert users.

You can still change this parameter by adding the option clusterstatistic manually to the process call. Generate the Matlab script corresponding to your process call and simply add a line 'clusterstatistic', 'maxsize', ... anywhere in the list of options.

Thank you Francois,
I was able to change the cluster permutation statistic.
One note concerning dSPM absolute values: when I don't use absolute values, the results look quite good... both anatomically and in terms of statistics I get values that make sense (at least not everything is significant!). Just to be fully clear, the order is now:

1-I localize single-subject condition averages (average of all trials of condition 1 in subject 1) with dSPM (unscaled), with constrained orientations

2-project to default anatomy

3-apply spatial smoothing (3 mm)

4-t-test against 0

Unfortunately, if you don't apply the absolute value, steps 2 and 3 may not make much sense. These are two operations where we can't keep the orientation of the dipoles, and therefore the sign becomes ambiguous and/or detrimental.

  • Smoothing on the cortex surface locally averages values whose signs are positive or negative only because of the dipole orientations.
  • Projecting two subjects on the template may cause positive values from one subject and negative values from another to overlap for no reason. If you compute an average, you might get an erroneous zero.
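A tiny numerical illustration of the cancellation described in these two points (toy numbers, not Brainstorm code): the same response measured with opposite dipole orientations averages to zero unless it is rectified first.

```python
import numpy as np

# Toy example: the same 10 (arbitrary units) response on opposite walls
# of a sulcus, or from two subjects whose surfaces misalign after
# projection. Opposite dipole orientations flip the sign of an
# otherwise identical activation.
subject_a = np.array([10.0])   # dipole pointing "out" of one wall
subject_b = np.array([-10.0])  # same response, opposite orientation

signed_mean = (subject_a + subject_b) / 2                     # cancels to 0
rectified_mean = (np.abs(subject_a) + np.abs(subject_b)) / 2  # recovers 10

print(signed_mean[0], rectified_mean[0])
```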

Thank you Francois, it makes sense.
Please let me know if you have a solution concerning the t-test against 0... as I said, if I use absolute values I have the problem that a t-test against 0 is always significant.

Maybe @pantazis or @Sylvain would have suggestions?

Hi Lorevi,

Unfortunately, there is no solution to conduct a t-test against 0 for source maps. You can use a t-test when comparing two conditions, or one condition against the baseline. Also, when your data are in the frequency domain, you can use t-tests on ERD/ERS (event-related desynchronization/synchronization, which is a percent change against the baseline). As Francois wrote, source maps need to be converted to positive values to avoid the ambiguity in sign caused by the folding of the cortical manifold.

Best,
Dimitrios
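The ERD/ERS percent change mentioned above can be sketched in a few lines (a NumPy sketch with made-up array shapes and baseline window; not Brainstorm code). The point is that the resulting values are signed and centered on zero under the null, so a t-test against 0 is legitimate:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical band power: 25 subjects x 300 time samples, with
# samples 0-99 taken as the pre-stimulus baseline window.
power = rng.uniform(1.0, 2.0, size=(25, 300))
baseline = power[:, :100].mean(axis=1, keepdims=True)

# ERD/ERS as percent change relative to the baseline: signed values
# (negative = desynchronization, positive = synchronization) that can
# be tested against 0 with a one-sample t-test.
erd_ers = 100.0 * (power - baseline) / baseline
```

By construction, the ERD/ERS values average to zero over the baseline window of each subject, so only genuine post-stimulus power changes push the test away from 0.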

Thank you pantazis!
I have one last question. You and Francois wrote that source maps need to be converted to positive values to avoid the ambiguity in sign caused by the folding of the cortical manifold.

The main issue here being (as Francois wrote) that projecting two subjects on the template may cause positive values from one subject and negative values from another to overlap for no reason, so that computing an average might give an erroneous zero.

My question is: why don't we want to consider the directionality of these source dipoles across participants? I understand that if I look for areas and time points containing dipoles oriented in the same way across participants, the likelihood of getting something significant is much, much lower. But still, let's consider two hypothetical scenarios:
A) dipoles across participants are not oriented in the same way... they vary a lot, so when we average them we get nothing unless we take the absolute value.
B) dipoles across participants are oriented roughly in the same way... they are rather consistent (at least in some brain areas), so theoretically we can average them without taking the absolute value.
Is scenario B even possible? And if it is... wouldn't scenario B be "better" than scenario A?

If you have any reason to consider that the brain sources of interest in your study are all oriented in the same way AND that the cortex surface (used as your source space, and constraining the orientation of the dipoles in your model) aligns well across subjects in these regions, then yes, you can probably consider scenario B: "keep the sign and consider it meaningful across subjects".

In the general case, I don't see any particular reason why this would be a valid assumption (and I don't have any clear idea of how to test it), but you are free to add it to the postulates of your experiment. You might get questions from reviewers, but nothing prevents you from doing it technically.

Lorevi, a key problem is that even for single subjects the cortical activation maps vary greatly because the sign follows the cortical manifold. For example, the activation maps are positive on one side of a sulcus and negative on the other. You can see this by right-clicking on the colorbar and disabling the use of absolute values (remember to activate it again afterwards). Spatial smoothing will therefore have a detrimental effect by largely canceling out activity. If you want to use signed values, do not apply spatial smoothing. A simple experiment on your dataset will then convince you that combining subjects this way results in random-looking maps, with only some weak signals in primary cortices.

Dear Francois, dear Pantazis,
thank you for this very useful discussion. I now better understand the pipeline and why we should use absolute values. It is also good to know that (in principle) you can avoid this step, but at your own risk :wink:
Once again, thank you!