I’m comparing two experimental conditions in source space using a paired t-test. I want to focus on a specific time window (e.g., 150–250 ms), and I’m unsure about the best practice for averaging over time.
I’ve tried two approaches:
Averaging the source activity over the time window before running the t-test (i.e., each file contains one value per source).
Running the t-test on the full source time series, selecting the time window in the test settings and checking the "Average time window" option.
These two approaches give me different results!
My question is:
What is the recommended method in Brainstorm for this type of comparison?
For context:
Source estimation was done using minimum norm (dSPM), unconstrained sources
I’m not averaging across scouts, only over time
I’m comparing within-subject (paired) across two conditions
Both approaches should lead to the same results (provided the averaging is done the same way in both).
In the second option, the Average time window option in Test > Parametric test: paired computes only the mean of the sources on the fly (as you are using unconstrained sources, there are three time series per vertex). This is the same as using the process Average > Average time without selecting Use absolute values, and selecting Arithmetic average as the function.
Lastly, as only one value is plotted, it corresponds to the t-statistic for the minimum p-value among the three orientations of each vertex.
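To see why the two averaging modes can disagree, here is a small NumPy sketch (the numbers are made up for illustration, not real Brainstorm output): a plain arithmetic mean over time can cancel a signal that oscillates around zero, while a mean of absolute values cannot.

```python
import numpy as np

# Hypothetical unconstrained source data for one vertex:
# 3 orientations (x, y, z) x 5 time samples. Illustrative values only.
src = np.array([
    [ 0.5, -0.2,  0.8, -0.1,  0.4],   # x orientation
    [-0.3,  0.6, -0.4,  0.2, -0.5],   # y orientation
    [ 0.1,  0.1, -0.2,  0.3,  0.0],   # z orientation
])

# "Average time window" in the test options: a plain arithmetic mean over
# time, computed separately for each orientation (no rectification).
arith_avg = src.mean(axis=1)

# Averaging absolute values instead (what "Use absolute values" would do)
# yields a different, strictly non-negative result.
abs_avg = np.abs(src).mean(axis=1)

print(arith_avg)  # can be near zero when the signal oscillates around zero
print(abs_avg)    # always >= |arith_avg|, element-wise
```

This is why Option 1 (averaging with absolute values first) and Option 2 (the on-the-fly arithmetic average in the test) give different t-statistics.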
Could it be that Absolute values was used in Option 1?
As you are working with unconstrained sources, if you want one statistic per vertex in the source grid, you may want to consider flattening the sources. See these links:
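Flattening collapses the three orientation time series of each vertex into one. A minimal NumPy sketch, assuming the Brainstorm row ordering (x1, y1, z1, x2, y2, z2, ...) and random data in place of real source maps:

```python
import numpy as np

# Hypothetical unconstrained source matrix: (n_vertices * 3) x n_times,
# with rows ordered x1, y1, z1, x2, y2, z2, ...
n_vertices, n_times = 4, 10
rng = np.random.default_rng(0)
src = rng.standard_normal((n_vertices * 3, n_times))

# Flatten with the Euclidean norm, sqrt(x^2 + y^2 + z^2),
# computed per vertex and per time sample.
xyz = src.reshape(n_vertices, 3, n_times)
flat = np.sqrt((xyz ** 2).sum(axis=1))   # shape: (n_vertices, n_times)
```

Note that the norm is non-negative by construction, which is one reason statistics on flattened maps behave differently from statistics on signed orientation time series.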
Yes, you're right — I was previously computing the arithmetic average using absolute values.
Thank you for the clarification regarding flattening the sources before proceeding to statistical analysis. I just want to confirm that my current analysis pipeline is correct for comparing current amplitude in source space between two experimental conditions in two different groups.
Here is the workflow I plan to follow:
Compute noise covariance (using a baseline period before the task starts)
Compute sources using Minimum Norm Imaging (dSPM, unconstrained)
Flatten the sources (using the norm: sqrt(x^2 + y^2 + z^2))
On the flattened sources, is it recommended to perform baseline normalization using z-score?
Average over the selected time window (without using absolute values)
Perform a paired t-test (within subjects, between conditions)
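The steps above (from flattening onward) can be sketched with NumPy/SciPy on simulated data. The baseline and window indices, the helper names, and the simulated source maps are all hypothetical; the z-score is applied after flattening here, which is exactly the order being asked about:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_subj, n_vert, n_times = 12, 5, 50
baseline = slice(0, 20)     # assumed baseline samples (before task onset)
window = slice(30, 45)      # assumed samples covering the 150-250 ms window

def flatten(unconstrained):
    # (n_vert * 3, n_times) -> Euclidean norm per vertex: (n_vert, n_times)
    return np.linalg.norm(
        unconstrained.reshape(-1, 3, unconstrained.shape[-1]), axis=1)

def zscore_baseline(x):
    # z-score each vertex time series against its own baseline samples
    mu = x[:, baseline].mean(axis=1, keepdims=True)
    sd = x[:, baseline].std(axis=1, ddof=1, keepdims=True)
    return (x - mu) / sd

# Simulated per-subject values for two conditions (illustrative only).
cond_a = np.empty((n_subj, n_vert))
cond_b = np.empty((n_subj, n_vert))
for s in range(n_subj):
    for offset, out in ((0.2, cond_a), (0.0, cond_b)):
        src = rng.standard_normal((n_vert * 3, n_times)) + offset
        x = zscore_baseline(flatten(src))   # flatten, then baseline z-score
        out[s] = x[:, window].mean(axis=1)  # average over the time window

# Paired t-test per vertex, across subjects (within-subject design).
t, p = stats.ttest_rel(cond_a, cond_b, axis=0)
```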
Is this pipeline appropriate?
And specifically, should the z-score normalization be applied after flattening, rather than before?
Since dSPM returns sources as z-scores, these sources have already been normalized with respect to the noise level (noise covariance) for each source time series. Thus, further z-score normalizing these already-normalized (dSPM) sources against the baseline should lead to very similar results.
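A quick sketch of why the extra z-score changes little: a dSPM-like time series already has a baseline that is approximately zero-mean with unit variance, so z-scoring against that baseline is an affine rescaling by factors close to 0 and 1. The simulated signal below stands in for a real dSPM trace:

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical dSPM-like time series: noise-normalized, so its baseline
# samples are approximately zero-mean with unit variance.
baseline = rng.standard_normal(500)
task = rng.standard_normal(100) + 3.0
dspm = np.concatenate([baseline, task])

# Further z-scoring against the baseline only shifts by the baseline mean
# and rescales by the baseline std, both already close to 0 and 1.
mu, sd = baseline.mean(), baseline.std(ddof=1)
z = (dspm - mu) / sd

print(np.abs(z - dspm).max())  # small: the two versions are nearly identical
```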
z-score normalization should be used before flattening
I applied the z-score baseline normalization, and then ran the paired t-test, but the results I got were different compared to when I did not apply the baseline normalization.
Do you think this difference in results is due to the z-score normalization step after the flattening?