Z-score procedure

Hi,

I have some questions about the way that z-score normalization for sources is set up in Brainstorm. I am analyzing data from 18 subjects, and I would like to z-score normalize each subject's source data before projecting to group space. I import the data run by run for each subject, and I source localize each condition for each run separately to account for head motion between runs. The problem with doing this seems to be that I cannot z-score normalize the data across conditions within a run, but instead have to normalize each condition separately. This is a particular problem for me because the number of trials differs across conditions. (Hence, the condition with more trials, even 10% more, ends up with higher-amplitude sources and "takes all" in terms of significance.) So, is there a way to compute a z-score across conditions for each run, or perhaps for the entire experiment?

Best,

Suzanne

Hi Suzanne:

One way to proceed would be to average the source maps across runs for each condition, then compute the z-scores for each condition, and then average across subjects. What do you think?

Hi Sylvain,

I'm sorry if I did not communicate this well, but I had done exactly what you recommend. My point in mentioning that I import the data run by run was to show how my data are organized in the bst database. I do average sources across runs by condition, and then normalize and smooth. The problem is that my conditions are not matched in the number of trials. So, when the z-scores are computed by condition (as opposed to normalizing to all trials in a run, for example), the number of trials used to calculate the standard deviation differs across conditions. In my statistical results, this translates into the condition with more trials showing overwhelmingly higher-amplitude activity in all regions at all times. Qualitatively, when I create a source ROI/scout in a region that should not vary by condition, such as the occipital cortex in my experiment, any condition with more trials shows a time course that is shifted upward in amplitude at all time points. This improves as I more closely match the number of trials across conditions. My question is whether I can normalize across runs, or across the entire experiment, rather than by condition, which I believe would eliminate the problem I am having.
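
To make the effect concrete, here is a minimal toy simulation (plain NumPy, not Brainstorm code; the trial counts, amplitudes, and noise level are made-up numbers) showing how averaging more trials shrinks the baseline standard deviation and inflates the z-scores of the whole time course:

```python
# Toy illustration of the trial-count effect: two "conditions" share the same
# true evoked response, but the one averaged over more trials gets larger
# z-scores everywhere once each is normalized against its own baseline.
import numpy as np

rng = np.random.default_rng(0)
fs = 600                                  # sampling rate (Hz), assumed
t = np.arange(-0.2, 0.5, 1 / fs)
evoked = 30e-12 * np.exp(-((t - 0.1) ** 2) / (2 * 0.02 ** 2))  # same true response

def zscored_average(n_trials, noise_amp=50e-12):
    """Average n_trials noisy epochs, then z-score against the pre-stim baseline."""
    trials = evoked + noise_amp * rng.standard_normal((n_trials, t.size))
    avg = trials.mean(axis=0)
    baseline = avg[t < 0]
    return (avg - baseline.mean()) / baseline.std()

z_40 = zscored_average(40)                # condition with fewer trials
z_80 = zscored_average(80)                # condition with ~2x more trials

# The 80-trial average has a quieter baseline (std shrinks roughly as 1/sqrt(N)),
# so its z-scores are inflated at all time points, not only where the response differs.
print(f"peak z, 40 trials: {z_40.max():.1f}")
print(f"peak z, 80 trials: {z_80.max():.1f}")
```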

Thanks!

Thank you, Suzanne:

I am still not sure how we could proceed with what you suggest by "normalizing to all trials in a run". If you mean mixing all trials from all conditions within the same run, I'm not sure this would make sense. Standardization (a.k.a. z-scoring or normalizing) is a way to address the issue of heterogeneous levels of expected current magnitudes across the cortex. These differences can be due to inter-individual variability and, within a participant, to different levels of expected current magnitudes at various brain locations, because of cell physiology and density and of signal-to-noise levels (e.g., deeper vs. cortical sources).
Hence z-scoring is a convenient way to standardize local current amplitudes with reference to their own fluctuations over a baseline.
The baseline has to be defined from the same number of trials as the one used for the response time segment; hence you cannot pool trials from different conditions together if the respective numbers of events used in the time segment of interest (typically post-stim) vary across conditions.
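
For the record, here is a minimal sketch of that standardization in plain NumPy (the function name, array layout, and baseline window are illustrative assumptions, not Brainstorm's internal implementation):

```python
# Sketch of baseline z-scoring: each source's averaged time course is referenced
# to the mean and standard deviation of its own pre-stimulus baseline.
import numpy as np

def baseline_zscore(avg_source, times, baseline_window=(-0.2, 0.0)):
    """z-score an averaged source map against its pre-stim baseline.

    avg_source : (n_sources, n_times) average across trials
    times      : (n_times,) time vector in seconds
    """
    mask = (times >= baseline_window[0]) & (times < baseline_window[1])
    mu = avg_source[:, mask].mean(axis=1, keepdims=True)
    sigma = avg_source[:, mask].std(axis=1, keepdims=True)
    # sigma shrinks roughly as 1/sqrt(n_trials), which is why the baseline must
    # be defined from the same number of trials as the post-stim segment tested.
    return (avg_source - mu) / sigma
```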

Brainstorm does keep track of the number of trials when you pool data across runs: it is used when computing differences in source amplitudes between conditions with different numbers of trials.

What I would recommend is that you average the source maps across runs, per condition, just as you did; compute the difference you are interested in between conditions (the respective numbers of trials across conditions are used here); and take the z-score of these differences.
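
In rough outline, that pipeline could look like the sketch below (plain NumPy rather than the Brainstorm processes; the trial-count weighting of the run averages and the function names are illustrative assumptions, not Brainstorm's exact implementation):

```python
# Sketch of the recommended pipeline: per-condition average across runs,
# difference between conditions, then z-score of the difference on its baseline.
import numpy as np

def weighted_condition_average(run_averages, run_trial_counts):
    """Average source maps across runs for one condition, weighting by trial count."""
    w = np.asarray(run_trial_counts, dtype=float)
    return np.tensordot(w / w.sum(), np.stack(run_averages), axes=1)

def zscore_of_difference(avg_a, avg_b, times, baseline_window=(-0.2, 0.0)):
    """Difference between two condition averages, standardized on its own baseline."""
    diff = avg_a - avg_b
    mask = (times >= baseline_window[0]) & (times < baseline_window[1])
    mu = diff[:, mask].mean(axis=1, keepdims=True)
    sigma = diff[:, mask].std(axis=1, keepdims=True)
    return (diff - mu) / sigma
```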

Does it make sense and help a bit?

Hi Sylvain,

Yes, your advice helped tremendously!

Thanks!

Suzanne