Hi team,
I’m looking for some guidance/clarification on normalizing before extracting scout amplitudes for statistical analysis. I’ve computed dSPM-unscaled MEG ALL (Constr) on each subject’s trial average for each of two conditions. The number of trials per condition is exactly matched within each subject, and each subject’s noise covariance is computed from the single-trial baselines (-700 to -400 ms) of all trials. I then identify the peak response on the cortical surface and grow a small scout around it; in all subjects and conditions, the amplitude of this peak is between 0.1 and 1 (unitless). My goal is to express each subject’s two scout time series as z-scores with respect to the same baseline period (-700 to -400 ms), where there is minimal motion and task-related activity. I expect these peak-amplitude z’s would then be ~10.
I understand from Tutorial 22 that I can use Sources > Scale averaged dSPM to visualize the cortical map as z-scores, but that these values are not suitable for statistical analysis. I’ve also seen that Sources > Standardize > Baseline normalization appears to do what I want. My question is whether it is valid to baseline-normalize a dSPM, given that it is already inherently standardized relative to the noise covariance, which I calculated from that same baseline period. (If it matters, [-700, -400] ms is also the baseline I used for removing the DC offset when importing the single trials into the database.)
I read on the forum that you can z-score either the dSPM or the current density map and obtain the same numbers, and I confirmed this in one subject. I originally chose dSPM over current density because “Recommended options” suggests dSPM for source maps of trial averages, which I use to create the scouts. Given the processing I’ve described, is it okay to proceed with baseline-normalizing each subject’s dSPM with respect to [-700, -400] ms and performing within-subject and between-subjects statistics on those amplitudes?
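In case it helps show what I checked: here is a minimal NumPy sketch (toy data, not my actual pipeline or Brainstorm code) of why baseline z-scoring gives the same numbers for dSPM and current density. dSPM divides each source’s current density by a per-source noise factor, and any constant scale factor cancels in (x - mean) / std.

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 1000                      # assumed sampling rate, Hz (toy value)
t = np.arange(-700, 300) / fs  # time axis: -700 ms to +300 ms
baseline = (t >= -0.700) & (t <= -0.400)

current_density = rng.normal(0, 1e-9, t.size)  # toy current-density trace
noise_factor = 3.7e-9                          # toy per-source dSPM denominator
dspm = current_density / noise_factor

def baseline_zscore(x, mask):
    """z-score a time series against its baseline samples."""
    mu = x[mask].mean()
    sigma = x[mask].std(ddof=1)
    return (x - mu) / sigma

z_cd = baseline_zscore(current_density, baseline)
z_dspm = baseline_zscore(dspm, baseline)

# Identical up to floating-point error: the scale cancels in (x - mu) / sigma
print(np.allclose(z_cd, z_dspm))  # True
```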
Many thanks!
Sara