MEG visual tutorial: Group analysis (BIDS)


Authors: Francois Tadel, Elizabeth Bock.

The aim of this tutorial is to reproduce in the Brainstorm environment the analysis described in the SPM tutorial "Multimodal, Multisubject data fusion". It is part of a collective effort to document and standardize MEG/EEG group analysis; see the Frontiers research topic: From raw MEG/EEG to publication: how to perform MEG/EEG group analysis with free academic software.

The data processed here consists of simultaneous MEG/EEG recordings of 16 subjects performing a simple visual task on a large number of famous, unfamiliar and scrambled faces. This tutorial follows another page that explains in detail how to process a single subject.


This dataset was obtained from the OpenfMRI project (accession #ds117). It is made available under the Creative Commons Attribution 4.0 International Public License. Please cite the following reference if you use these data:
Wakeman DG, Henson RN, A multi-subject, multi-modal human neuroimaging dataset, Scientific Data (2015)

For any questions, please contact:

Download and installation [TODO] [Update screen captures]

First, make sure you have enough space on your hard drive, at least 40 GB:

You can follow this tutorial after processing the recordings for the 16 good subjects (6 runs per subject) as illustrated in the single subject tutorial. Otherwise, we provide a Brainstorm protocol that includes all the imported data, downsampled to 275 Hz:

The database you need in order to follow this tutorial should contain the following:

Overview of the analysis

Coregistration of the acquisition runs

For each subject, all the runs have been registered to a common head position with MaxFilter. To verify this, select all the channel files within one subject, right-click > Display sensors > MEG (all). The surfaces representing the MEG helmet overlap perfectly for all the runs. When the runs are not aligned, it looks like this.

This means we can safely average or compare the MEG sensor values across runs within one subject. However, it is not reliable to average MEG recordings across subjects, because of the anatomical differences between subjects.

This does not mean that the sources need only be estimated once per subject: we computed different SSP projectors and selected different bad channels for each acquisition run. To use this information efficiently, we should estimate the sources for each run separately, then average the sources across runs.

The forward model is the same for all the runs within one subject, therefore it can be computed for the first run and copied to all the other runs.


The objectives for this tutorial are to reproduce the analysis presented in the following documents:

Summary of the results we will compute in this tutorial:

Expected effects:

The methodology that we will follow for computing the averages and the other statistics is described in the tutorial "Workflows".

Subject averages: Famous, Unfamiliar, Scrambled

We will start by computing the subject-level averages for all the data types we have: sensor-level recordings, source maps and time-frequency results. We will use a weighted average to group the results we have for each run (weighted by the number of good trials). We want to compute the averages for each experimental condition separately (famous, unfamiliar, scrambled).

Since the files are already selected in the interface, we will also compute a grand average across subjects. In this context, we will consider that each subject has the same weight in the grand average (option "weighted" not selected). Note that it is not accurate to average MEG recordings across subjects, but we can do it to get a general idea of the group effects (more information).




Subject averages: Faces

One of the contrasts we want to study is faces (Famous and Unfamiliar) vs. non-faces (Scrambled). We need to re-average the Famous and Unfamiliar averages together.




Subject averages: Within-subject differences

To correctly detect the differences between two conditions at the source level, we need to estimate the difference of the conditions for each subject, and then normalize the difference (see tutorial Workflows).

Sources: Faces - Scrambled

Sources: Famous - Unfamiliar

Subject averages: Filter and normalize

Before comparing the averages across subjects, we are going to low-pass filter the signals below 32 Hz (to smooth possible latency differences between subjects) and normalize the source and time-frequency values with respect to a baseline (see tutorial Workflows).
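The two operations can be sketched numerically as follows (illustrative Python, not Brainstorm's actual code; the sampling rate, time window and test signal are made-up placeholders):

```python
import numpy as np
from scipy import signal

# Made-up placeholder signal: 10 Hz component plus 60 Hz contamination,
# sampled at 275 Hz from -200 ms to +600 ms around the stimulus.
fs = 275.0
t = np.arange(-0.2, 0.6, 1.0 / fs)
x = np.sin(2 * np.pi * 10 * t) + 0.3 * np.sin(2 * np.pi * 60 * t)

# Zero-phase low-pass filter (4th-order Butterworth, 32 Hz cutoff).
b, a = signal.butter(4, 32.0 / (fs / 2.0), btype='low')
x_filt = signal.filtfilt(b, a, x)

# Z-score normalization with respect to the pre-stimulus baseline:
# (x - mean_baseline) / std_baseline
baseline = x_filt[t < 0]
x_norm = (x_filt - baseline.mean()) / baseline.std()
```

After normalization, the baseline segment has zero mean and unit standard deviation by construction, which makes the post-stimulus values comparable across subjects.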




Subject averages: Screen captures

Now we have all the measures ready to be compared across subjects: MEG/EEG recordings, sources and time-frequency. Let's take a few screen captures to make sure the primary visual response looks good for all the subjects. The screen captures below represent, from left to right:

















Group analysis: MEG/EEG

Grand averages

We have already computed the group averages for all the conditions. We will not look much further at these results as we are more interested in the contrasts between conditions. Below are screen captures for all group averages (top=MEG MAG, bottom=EEG). Topography at [50,100,150,200,250,300]ms.

Faces: Famous / Unfamiliar

[Figures: grand average topographies for Famous and Unfamiliar]


Faces - Scrambled: Differences of averages

We could compute the contrasts directly from the grand averages, but we will do it from the subject averages because it will be the same file selection for the statistics.

Faces - Scrambled: Significance testing

We have computed the amplitude of the difference between the two conditions, and we will now try to estimate a p-value indicating the significance of each of these values. In all the screen captures below: top=MEG MAG, bottom=EEG.
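The logic of the subject-level statistic can be sketched as a paired t-test across subjects at each sensor/time sample (illustrative Python, not Brainstorm's actual test; the data below are random placeholders):

```python
import numpy as np
from scipy import stats

# Made-up placeholder data: 16 subject averages x 5 time samples
# for each condition.
rng = np.random.default_rng(0)
faces = rng.normal(loc=1.0, size=(16, 5))      # subject averages, Faces
scrambled = rng.normal(loc=0.0, size=(16, 5))  # subject averages, Scrambled

# Paired t-test across subjects, one test per time sample.
# Equivalent to a one-sample t-test on the per-subject differences.
t_values, p_values = stats.ttest_rel(faces, scrambled, axis=0)

# Uncorrected threshold, per sample (multiple-comparison correction
# would be applied in a real analysis).
significant = p_values < 0.05
```

In practice the same test runs on every sensor and every time sample at once, so some correction for multiple comparisons (FDR, cluster-based, etc.) is needed before interpreting the map.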

Famous - Unfamiliar: Differences of averages

Famous - Unfamiliar: Significance testing

Group analysis: Sources

Project sources on template

The sources were estimated on the individual anatomy of each subject, so the resulting cortical source maps cannot be averaged directly. We first need to re-interpolate all the individual results on a common template (the ICBM152 brain, available in the "default anatomy" folder of the protocol). We also need to extract the absolute values of these source maps: the sign of the minimum-norm values is relative to the orientation of the current with respect to the surface normal, which can vary between subjects.
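A tiny numerical sketch of why the absolute values are needed (illustrative Python with made-up source values, not Brainstorm code):

```python
import numpy as np

# Two subjects with the same underlying activity at 3 vertices, but with
# opposite surface normals at the first two vertices, so the signed
# minimum-norm values flip sign between subjects.
subj1 = np.array([ 2.0, -1.0, 3.0])  # made-up source values
subj2 = np.array([-2.0,  1.0, 3.0])  # same activity, flipped normals

# Averaging the signed maps cancels the real activity at those vertices;
# averaging the rectified maps |values| preserves it.
signed_avg = (subj1 + subj2) / 2
rectified_avg = (np.abs(subj1) + np.abs(subj2)) / 2
```

The signed average wipes out the first two vertices entirely, while the rectified average keeps their amplitudes; this is the cancellation the rectification step avoids.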

Spatial smoothing

The source maps estimated with constrained orientations can show very focal activity: two adjacent vertices may have very different normals, and therefore very different current values. When averaging multiple subjects, the peaks of activity may not align well across subjects. Spatially smoothing the source maps may help obtain better group results.
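The idea of spatial smoothing on a cortical mesh can be sketched as iterative neighbor averaging over the surface adjacency graph (illustrative Python with a made-up 4-vertex mesh, not Brainstorm's smoothing algorithm):

```python
import numpy as np

# Made-up adjacency of a tiny 4-vertex chain mesh:
# adjacency[i, j] = 1 if vertices i and j share an edge.
adjacency = np.array([
    [0, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

values = np.array([0.0, 10.0, 0.0, 0.0])  # very focal activity at vertex 1

# One smoothing pass: replace each vertex value by the mean of itself
# and its neighbors. Repeating the pass increases the smoothing extent.
weights = adjacency + np.eye(4)
smoothed = (weights @ values) / weights.sum(axis=1)
```

After one pass the focal peak is spread onto its neighbors, which makes misaligned peaks from different subjects more likely to overlap in the group average.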

MEG: mean(|Faces-Scrambled|)

MEG: Chi2-test |Faces-Scrambled|=0

MEG: Chi2-test log(|Faces-Scrambled|)=0

MEG: mean(|Faces|)-mean(|Scrambled|)

MEG: Student t-test |Faces|=|Scrambled|

EEG: Faces-Scrambled


The following script from the Brainstorm distribution reproduces the analysis presented in this tutorial page: brainstorm3/toolbox/script/tutorial_visual_group.m

Execution reports: Subject averages, MEG/EEG group results, Source group results



Tutorials/VisualGroup (last edited 2017-12-20 14:24:22 by FrancoisTadel)