MEG visual tutorial: Group analysis

Authors: Francois Tadel, Elizabeth Bock.

The aim of this tutorial is to reproduce in the Brainstorm environment the analysis described in the SPM tutorial "Multimodal, Multisubject data fusion". The data processed here consists of simultaneous MEG/EEG recordings of 19 subjects performing a simple visual task on a large number of famous, unfamiliar and scrambled faces. This tutorial follows another page that explains how to process a single subject in detail.

License

This dataset was obtained from the OpenfMRI project (http://www.openfmri.org), accession #ds117. It is made available under the Creative Commons Attribution 4.0 International Public License. Please cite the following reference if you use these data:
Wakeman DG, Henson RN, A multi-subject, multi-modal human neuroimaging dataset, Scientific Data (2015)

For any questions, please contact: rik.henson@mrc-cbu.cam.ac.uk

Download and installation

First, make sure you have enough space on your hard drive, at least 100 GB:

You can follow this tutorial after processing the recordings of the 19 subjects (6 runs per subject) as illustrated in the single-subject tutorial. Otherwise, we provide a Brainstorm protocol that includes all the imported data, downsampled to 275 Hz:

The database you need in order to follow this tutorial should contain the following:

Overview of the analysis

Coregistration of the acquisition runs

For each subject, all the runs have been registered to a common head position with MaxFilter. To verify this, select all the channel files within one subject, right-click > Display sensors > MEG (all). The surfaces representing the MEG helmet overlap perfectly for all the runs; when the runs are not aligned, the helmet surfaces appear at visibly different positions.
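The same check can also be scripted outside the Brainstorm interface. The sketch below uses MNE-Python to compare the device-to-head transform stored in the FIF file of each run: after MaxFilter registration to a common head position, these transforms should be identical across runs. The file names are hypothetical placeholders, not part of the distributed protocol.

    # Sketch: verify that all runs of one subject were registered to the same
    # head position, by comparing the device-to-head transform of each FIF file.
    # File names are hypothetical placeholders.
    import numpy as np
    import mne

    run_files = [f"sub001/run_{i:02d}_sss.fif" for i in range(1, 7)]

    # Read only the measurement info (no need to load the recordings)
    transforms = [mne.io.read_info(f)["dev_head_t"]["trans"] for f in run_files]

    # After MaxFilter registration, all 4x4 transforms should be identical
    reference = transforms[0]
    for fname, trans in zip(run_files, transforms):
        status = "aligned" if np.allclose(trans, reference, atol=1e-6) else "NOT aligned"
        print(f"{fname}: {status}")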

This means that we can safely average or compare the MEG sensor values across runs within one subject. However, averaging MEG recordings across subjects at the sensor level is not reliable, because the head position in the helmet and the anatomy differ from one subject to another.

However, this does not mean that the sources can be estimated only once per subject: different SSP projectors and different bad channels were computed for each acquisition run. To use this information correctly, we should estimate the sources for the trial averages of each run separately, then average the source maps across runs.
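If the runs are combined with a weighted average (the number of good trials is not necessarily the same in each run), this last step amounts to the computation sketched below. The array shapes and variable names are illustrative, not Brainstorm's internal file format.

    # Sketch: weighted average of the run-level source averages for one subject.
    # Each run is weighted by the number of good trials used in its average.
    # Array shapes are illustrative (n_sources x n_times source maps).
    import numpy as np

    def average_across_runs(run_averages, n_trials):
        """run_averages: list of (n_sources, n_times) arrays, one per run.
        n_trials: number of good trials behind each run average."""
        return np.average(np.stack(run_averages), axis=0, weights=n_trials)

    # Example with random data: 6 runs, 2000 sources, 100 time samples
    rng = np.random.default_rng(0)
    runs = [rng.standard_normal((2000, 100)) for _ in range(6)]
    trials = [64, 70, 58, 66, 61, 69]
    subject_average = average_across_runs(runs, trials)
    print(subject_average.shape)  # (2000, 100)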

The forward model is the same for all the runs within one subject; it can therefore be computed for the first run and then copied to all the other runs.

Objectives

The objectives for this tutorial are to reproduce the analysis presented in the following documents:

The methodology we will follow for computing the averages and the other statistics is described in the tutorial "Workflows".

Bad subjects

After evaluating the results for the 19 subjects, we excluded three of them from the analysis:

To exclude these three subjects from the group analysis without deleting them from the database, mark all the recordings they contain as "bad". All the files marked with a red dot (see below) are ignored by the functions that select files from the database, and are therefore excluded from the computation of the averages and the statistics.
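The sketch below illustrates this selection logic only: files carry a "bad" flag, and the group-level processes simply skip the flagged ones instead of requiring their deletion. The data structure is illustrative and does not reflect the actual Brainstorm database format.

    # Sketch: how marking files as bad excludes them from the group analysis
    # without deleting them. The data structure is illustrative only.
    files = [
        {"subject": "subjectA", "path": "run_01/average_faces.mat", "bad": False},
        {"subject": "subjectB", "path": "run_01/average_faces.mat", "bad": True},
        # ... one entry per imported file in the protocol
    ]

    # Processes that compute averages or statistics only see the good files
    selected = [f for f in files if not f["bad"]]
    print(len(selected), "of", len(files), "files enter the group analysis")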

Subject and group averages




