Hello,
I've been using the weighted difference function in Brainstorm to compute ERP contrast waveforms (ΔERN = error − correct) between conditions with substantially unequal trial counts (e.g., 150 correct vs 15 error responses). My understanding is that this method accounts for the difference in signal reliability caused by the unequal trial numbers by scaling each condition's contribution according to its effective number of averages (Leff).
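If I understand the rationale correctly, it is the usual noise-propagation argument (this is my own sketch, not a quote from the documentation): the variance of an average of $L$ trials is $\sigma^2/L$, so for a difference of two averages

$$
\operatorname{Var}[\bar A - \bar B] = \sigma^2\left(\frac{1}{L_A} + \frac{1}{L_B}\right) = \frac{\sigma^2}{L_\mathrm{eff}},
\qquad
L_\mathrm{eff} = \frac{1}{1/L_A + 1/L_B}.
$$

With $L_A = 150$ and $L_B = 15$, $L_\mathrm{eff} \approx 13.6$, i.e., a simple difference would be nearly as noisy as the 15-trial error average alone, which is why I assumed some Leff-based weighting was desirable.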
However, a reviewer recently criticized this function as a "weird normalization", so I would appreciate some clarification:
What is the rationale behind the weighted difference in Brainstorm? Perhaps I have it all wrong.
Is it considered good practice when contrasting ERP conditions with asymmetric trial counts?
In which cases is it recommended or preferred over a simple subtraction (A − B)?
Are there any references or documentation that justify this use?
Thanks a lot for any insights or resources you can share.
Best
Hello again,
I'm kindly following up on this question. I'd greatly appreciate any input on the topic, as I'm currently finalizing a manuscript and want to be sure I'm justifying this analysis step correctly.
Brainstorm’s documentation suggests that a Student's t-test can be a preferable way to contrast conditions with imbalanced trial counts, since the t-statistic handles unequal sample sizes explicitly and provides significance levels for the difference (https://neuroimage.usc.edu/brainstorm/Tutorials/Auditory#:~:text=Student's%20t). (Note that a paired t-test is not applicable here, since pairing requires equal trial counts.) In general, readers are also more familiar with standard solutions, such as trial-count equalization or an explicit statistical comparison, than with a weighted-difference normalization.
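For illustration, here is a minimal sketch of that kind of per-time-point comparison on simulated data. All names, shapes, and parameters are hypothetical (this is plain Python/SciPy, not Brainstorm code):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_correct, n_error, n_times = 150, 15, 300

# Simulated single-trial ERPs (trials x time): a common waveform plus noise,
# with a small extra early deflection in the error condition.
t = np.linspace(0.0, 0.6, n_times)
common = np.sin(2 * np.pi * 3 * t)
effect = 0.5 * np.exp(-((t - 0.1) ** 2) / 0.002)  # hypothetical ERN-like bump

erp_correct = common + rng.normal(0, 1, (n_correct, n_times))
erp_error = common + effect + rng.normal(0, 1, (n_error, n_times))

# Two-sample t-test at every time point. Welch's variant (equal_var=False)
# tolerates both unequal trial counts and unequal variances.
t_vals, p_vals = stats.ttest_ind(erp_error, erp_correct, axis=0, equal_var=False)
print("time points with p < 0.05:", int((p_vals < 0.05).sum()))
```

The point is that the t-statistic normalizes each condition by its own standard error, so the unequal trial counts are handled explicitly rather than through a rescaling of the difference waveform.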
That said, the weighted difference is not “wrong”: it does address the SNR issue as intended, and it can be a valid exploratory tool for visualizing the effect. It preserves all available trials and attempts to balance their influence, which can be better than a naïve subtraction when the trial counts are hugely different. However, because it changes the amplitude scale of the difference, you should be cautious in interpretation. If you use it, it may be wise to state this explicitly in your manuscript; alternatively, use the simpler subtraction but accompany it with an explicit account (or analysis) of the noise levels in each condition. The weighted difference is most useful with large trial-count asymmetries where you want to use all of your data, as in your present situation.
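To make the noise argument concrete, a toy simulation (again, not Brainstorm's implementation) shows that with 150 vs 15 trials the noise of a simple subtraction is essentially that of the 15-trial average, matching Leff ≈ 13.6:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma, n_a, n_b, n_sims = 1.0, 150, 15, 10_000

# Noise-only trials: whatever survives in the averaged difference is pure noise.
diff = (rng.normal(0, sigma, (n_sims, n_a)).mean(axis=1)
        - rng.normal(0, sigma, (n_sims, n_b)).mean(axis=1))

print("empirical SD of A-B:      ", diff.std())                      # ~0.27
print("predicted sqrt(1/nA+1/nB):", sigma * np.sqrt(1/n_a + 1/n_b))  # ~0.27
print("effective nb of averages: ", 1 / (1/n_a + 1/n_b))             # ~13.6
```

So with this kind of asymmetry, a naïve subtraction gives up most of the reliability gained from the 150 correct trials, which is exactly the situation the Leff bookkeeping is meant to flag.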