A question about averaging across files

Hi Hyeong,

I guess you are processing averages.
The averaging processes are weighted by the number of trials used to create the averages. This corresponds to the field .nAvg in the file structures.
Process average: sum_trials(nAvg(i)*data(i)) / sum_trials(nAvg(i))
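As a minimal sketch (not the actual Brainstorm code), the weighted averaging of per-file averages can be illustrated like this, where `data[i]` stands for the average stored in file #i and `n_avg[i]` for its `.nAvg` trial count:

```python
# Sketch only: weighted average of per-file averages,
# weighted by the number of trials (.nAvg) behind each one.

def weighted_average(data, n_avg):
    """data[i]: average of file #i; n_avg[i]: its trial count."""
    total = sum(n * d for n, d in zip(n_avg, data))
    return total / sum(n_avg)

# Example: two averages built from 10 and 30 trials.
print(weighted_average([1.0, 3.0], [10, 30]))  # 2.5
```

An average built from more trials therefore pulls the result proportionally harder, which is the whole point of the weighting.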

Something similar is used to compute the difference of means:
sum_trials(nAvgA(i) * (dataA(i) - dataB(i))) / sum_trials(nAvgA(i))

This works best when the number of trials for FilesA(i) and FilesB(i) is the same within each pair #i.
But you can have different numbers of trials between averages #i and #j without any problem.
Does this make sense?

You are right to point this out. I will add a comment to the following processes to indicate that, when processing averages, the result is a weighted average:

  • Process1 > Average > Average files
  • Process2 > Test > Difference of means
  • Process2 > Test > Parametric t-test
  • Process2 > Test > Parametric t-test (paired)

That aside, it is great that you're getting similar outputs with the FieldTrip functions. I haven't run these tests myself yet.
Please note that this is a work in progress; we're still working on those functions. In particular, the display of the clusters is still missing entirely.
Make sure you update Brainstorm very frequently…

Cheers,
Francois