I’m trying to create a grand average across subjects, within a condition, from a set of scouts (mostly for visualization purposes). I’ve pulled the scout files into Process1 and selected “Average rows”, then “Average by scout”. An error message pops up saying “All row names are different, nothing to process.” The same happens for “Average rows with identical names.”
This makes me think that it’s trying to average the various scouts within each subject, rather than the same scouts across the subjects. But I don’t know how to get the latter.
Indeed, the process “Average rows” averages all the signals within one single file (i.e., it averages all the scout signals together for one subject).
Use the process “Average > Average files” instead.
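To see the difference between the two operations, here is a minimal sketch with hypothetical NumPy arrays (one matrix per subject, rows = scouts, columns = time samples; the data and shapes are made up for illustration, not Brainstorm's actual file format):

```python
import numpy as np

# Hypothetical data: 10 subjects, each with 3 scout time series of 100 samples.
# Scout names and row order are assumed identical across subjects.
rng = np.random.default_rng(0)
subjects = [rng.standard_normal((3, 100)) for _ in range(10)]

# "Average rows" collapses the scouts WITHIN one file (one trace per subject):
within_subject = subjects[0].mean(axis=0)            # shape (100,)

# "Average files" averages the SAME scout ACROSS subjects (grand average):
grand_average = np.mean(np.stack(subjects), axis=0)  # shape (3, 100)

print(within_subject.shape, grand_average.shape)
```

The grand average keeps one row per scout, which is what you want for a per-scout average across subjects.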
I also have a question about scout averaging. In the Brainstorm tutorials, I read several times that it is unwise to average MEG data across subjects, because subjects move under the helmet and differ in head shape. I was wondering whether scout averaging can overcome this problem: instead of averaging across subjects first and then creating the scouts, first create the scouts and then average across subjects. That way, you extract (in my case) the thalamus activity for every subject first.
In the Brainstorm tutorial, I read that this costs more computing time. Is this the only disadvantage, and does it solve the above-mentioned problem?
Yes, pooling source measures (whole-brain or from scouts) across participants is one way to account for different head positions under the MEG helmet. Ideally you would use each participant's individual anatomy from MRI; if that is not available, use a Brainstorm MRI template warped to the digitized scalp points (if available). At the very least, using the same MRI template for all participants is adequate for most studies.
The other option, illustrated in the group analysis tutorials, is to project the individual source maps onto a template brain: Tutorials/CoregisterSubjects - Brainstorm
In the Brainstorm tutorial, I read that this costs more computing time, but is this the only disadvantage and do you solve abovementioned problem?
Where did you read that?
It is actually the opposite: it is much lighter computationally to work with the time series of a few scouts than with the full 15,000 sources (or 45,000 for unconstrained orientations).
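A quick back-of-the-envelope comparison makes this concrete (the subject count, time-sample count, and five-scout selection are illustrative assumptions, not values from Brainstorm):

```python
# Rough memory footprint of a group dataset, float64 (8 bytes per sample).
n_subjects, n_time = 20, 1000
bytes_per_sample = 8

# Whole-brain source maps, constrained orientations (~15000 sources):
full = n_subjects * 15000 * n_time * bytes_per_sample
# Five scout time series instead:
scouts = n_subjects * 5 * n_time * bytes_per_sample

print(full // scouts)  # -> 3000 (full source maps are ~3000x larger)
```

With unconstrained orientations (45,000 sources) the ratio triples again, so scout-level analysis is substantially cheaper, not more expensive.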