Standardized AND normalized scouts time series from dSPM

Hi team,

I’m looking for some guidance/clarification on normalizing before extracting amplitudes from scouts for statistical analysis. I’ve computed dSPM-unscaled MEG ALL (Constr) on the average over trials of each of two conditions measured in each subject. The numbers of trials per condition are exactly matched within each subject. Each subject’s noise covariance is computed from the single-trial baselines (-700 to -400 ms) of all trials. I then identify the peak response on the cortical surface and grow a small scout around it; the amplitude of this peak is, in all subjects and conditions, between 0.1 and 1 (no units). My goal is to express each subject’s two scouts time series as a z-score with respect to the baseline time period (also -700 to -400 ms), where there is minimal motion and task-related activity. I expect that these peak-amplitude z’s would then be ~10.
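
Concretely, this is the computation I have in mind for each scout signal (a minimal Python sketch just to make the question precise; the names are mine, not Brainstorm code):

```python
import numpy as np

def baseline_zscore(scout_ts, times, bl_start=-0.700, bl_stop=-0.400):
    """Z-score a scout time series against its own baseline window.

    scout_ts : 1-D array, scout amplitude at each time sample
    times    : 1-D array of time stamps in seconds, same length
    """
    baseline = scout_ts[(times >= bl_start) & (times <= bl_stop)]
    return (scout_ts - baseline.mean()) / baseline.std(ddof=1)
```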

I understand from Tutorial 22 that I can use Sources > Scale averaged dSPM to visualize the cortical map as z-scores, but that these values are not suitable for statistical analysis. I’ve also seen that Sources > Standardize > Baseline normalization can do what I think I want to do. My question is whether I can baseline-normalize a dSPM, given that it is inherently standardized relative to the noise covariance that I calculated from the same baseline time period. (If it matters, [-700 -400] is also the baseline I used for removing DC offset when importing the single trials into the database).

I read on the forum that you can z-score either the dSPM or the current density map and obtain the same numbers, and I confirmed this in one subject. I originally chose dSPM over current density because “Recommended options” suggests dSPM for source maps of trial averages, which I use to create the scouts. Given the processing I’ve described, is it okay to proceed with baseline-normalizing each subject’s dSPM with respect to [-700 -400] and performing within-subject and between-subjects statistics on those amplitudes?

Many thanks!
Sara

Yes, you can.
However, this would produce the same result as computing the Z-scores from the original minimum norm maps (no dSPM standardization): Zscore(dSPM(MNE)) = Zscore(MNE). The reason is that, for a fixed noise covariance, dSPM rescales each source time series by a time-independent factor, and a baseline Z-score cancels any such constant scaling.
In this case, to report your results, I'd recommend indicating that you used Z-scored MN maps rather than Z-scored dSPM maps, since the effect of the dSPM standardization disappears. Keeping dSPM in your scripts or mentioning it in your reports may only add confusion.
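
A quick numerical check of this identity (toy numbers, not Brainstorm code):

```python
import numpy as np

rng = np.random.default_rng(0)
mne = rng.normal(size=1000)    # toy minimum norm time series for one source
dspm = 3.7 * mne               # dSPM = MNE rescaled by a time-independent noise factor

def zscore(x, baseline):
    return (x - baseline.mean()) / baseline.std(ddof=1)

bl = slice(0, 300)             # pretend the first 300 samples are the baseline
print(np.allclose(zscore(mne, mne[bl]), zscore(dspm, dspm[bl])))   # True
```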

dSPM is a different technique for achieving the same kind of normalization as the Z-score with respect to baseline.
Use one OR the other.

Thanks, Francois. I'm following your advice. Out of curiosity, is there an interpretable meaning of dSPM amplitudes? My values are mostly around 0.1 or 0.2 for the peak evoked response, sometimes as high as 1 or 2. I'm leaning towards z(MN) just because z-scores with respect to the baseline time period are so intuitive.

dSPM values can be interpreted similarly to Z-scores.
Example (one-tailed): Z-score=1.65 => p-value=0.049; Z=2.35 => p=0.009
https://neuroimage.usc.edu/brainstorm/Tutorials/SourceEstimation#Standardization_of_source_maps
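
If you want to reproduce this Z-to-p conversion yourself, it is just the upper tail of the standard normal distribution (a quick check with scipy, outside Brainstorm):

```python
from scipy.stats import norm

for z in (1.65, 2.35):
    print(f"Z={z}: one-tailed p={norm.sf(z):.3f}")
# Z=1.65: one-tailed p=0.049
# Z=2.35: one-tailed p=0.009
```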

We recommend using these thresholds only for the interactive exploration of the results. To properly assess the significance of the minimum norm results, use non-parametric tests across participants:
https://neuroimage.usc.edu/brainstorm/Tutorials/Statistics
https://neuroimage.usc.edu/brainstorm/Tutorials/Workflows
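
For illustration, a non-parametric paired test across participants could look like the following sign-flip permutation sketch (my own minimal example on per-subject scout amplitudes, not Brainstorm's implementation):

```python
import numpy as np

def paired_permutation_test(a, b, n_perm=10000, seed=0):
    """Two-sided paired sign-flip permutation test across subjects.

    a, b : 1-D arrays of per-subject scout amplitudes (one value per subject)
    """
    rng = np.random.default_rng(seed)
    d = np.asarray(a, float) - np.asarray(b, float)
    observed = d.mean()
    flips = rng.choice([-1.0, 1.0], size=(n_perm, d.size))
    null = (flips * d).mean(axis=1)           # null distribution of the mean difference
    return np.mean(np.abs(null) >= abs(observed))

# Usage (hypothetical amplitudes for two conditions, one value per subject):
# p = paired_permutation_test(cond1_amps, cond2_amps)
```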