MEG visual tutorial: Single subject

Authors: Francois Tadel, Elizabeth Bock.

The aim of this tutorial is to reproduce in the Brainstorm environment the analysis described in the SPM tutorial "Multimodal, Multisubject data fusion". The data processed here consists of simultaneous MEG/EEG recordings of 19 subjects performing a simple visual task on a large number of famous, unfamiliar and scrambled faces.

The analysis is split into two tutorial pages: the present tutorial describes the detailed analysis of a single subject; a second page describes the batch processing and group analysis of all 19 subjects.

Note that the operations used here are not detailed; the goal of this tutorial is not to teach Brainstorm to a new, inexperienced user. For in-depth explanations of the interface and the theory, please refer to the introduction tutorials.

License

This dataset was obtained from the OpenfMRI project (http://www.openfmri.org), accession #ds117. It is made available under the Creative Commons Attribution 4.0 International Public License. Please cite the following reference if you use these data:
Wakeman DG, Henson RN, A multi-subject, multi-modal human neuroimaging dataset, Scientific Data (2015)

For any questions, please contact: rik.henson@mrc-cbu.cam.ac.uk

Presentation of the experiment

Experiment

MEG acquisition

Subject anatomy

Download and installation

Import the anatomy

This page explains how to import and process subject #002 only. Subject #001 will later be excluded from the EEG group analysis because the positions of the electrodes are incorrect, so it would not be the best example.

Access the recordings

We need to attach the continuous .fif files containing the recordings to the database.
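As a sketch, linking one run to the database can also be scripted with the pipeline process "Create link to raw file". The subject name and file path below are assumed examples; adapt them to your own installation.

```matlab
% Create a link to one continuous FIF run in the database.
% The subject name and relative file path are assumptions.
sFilesRaw = bst_process('CallProcess', 'process_import_data_raw', [], [], ...
    'subjectname',  'sub002', ...
    'datafile',     {'sub002/run_01_sss.fif', 'FIF'}, ...
    'channelalign', 0);  % skip automatic registration, it is refined later
```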

Channel classification

A few non-EEG channels are mixed in with the EEG channels; we need to fix their channel types before applying any operation to the EEG channels.
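This reclassification can be scripted with the process "Set channel type". The channel names below (EOG and ECG electrodes recorded on EEG channels) are assumptions; check the channel file of your own runs for the actual names.

```matlab
% Reclassify mislabeled channels (channel names are assumed examples).
sFilesRaw = bst_process('CallProcess', 'process_channel_settype', sFilesRaw, [], ...
    'sensortypes', 'EEG061,EEG062', ...
    'newtype',     'EOG');
sFilesRaw = bst_process('CallProcess', 'process_channel_settype', sFilesRaw, [], ...
    'sensortypes', 'EEG063', ...
    'newtype',     'ECG');
```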

MRI registration

At this point, the MEG/MRI registration is based only on the three fiducial points NAS/LPA/RPA. All the anatomical scans were anonymized (defaced) and for some subjects the nasion could not be defined properly. We will try to refine this registration using the additional head points that were digitized (only the points above the nasion).
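A minimal sketch of this refinement with the pipeline processes "Remove head points" and "Refine registration" is shown below; the z-limit option name and value are assumptions to verify in the pipeline editor.

```matlab
% Keep only the digitized points above the nasion, then refine the
% MEG/MRI registration with them (option names are assumptions).
sFilesRaw = bst_process('CallProcess', 'process_headpoints_remove', sFilesRaw, [], ...
    'zlimit', 0);   % remove points below the nasion plane (z < 0)
sFilesRaw = bst_process('CallProcess', 'process_headpoints_refine', sFilesRaw, []);
```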

Import event markers

We need to read the stimulus markers from the STI channels. The following tasks can be done interactively with the menus in the Record tab, as in the introduction tutorials. Here we illustrate how to do this with the pipeline editor instead, as it will be easier to batch for all the runs and all the subjects.
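Reading the triggers can be sketched with the process "Read from channel". The channel name 'STI101' is an assumption based on typical Elekta-Neuromag recordings; verify it against your own channel file.

```matlab
% Read stimulus events from the trigger channel
% ('STI101' and the option values are assumptions).
sFilesRaw = bst_process('CallProcess', 'process_evt_read', sFilesRaw, [], ...
    'stimchan',  'STI101', ...
    'trackmode', 1, ...   % detect changes of the channel value
    'zero',      0);
```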

Pre-processing

Spectral evaluation

Remove line noise

EEG reference and bad channels

Artifact correction with SSP

Heartbeats: Detection

Heartbeats: SSP

Additional bad segments

SQUID jumps

MEG signals recorded with Elekta-Neuromag systems frequently contain SQUID jumps (more information). These sharp steps followed by a change of baseline value are usually easy to identify visually but more complicated to detect automatically.

The process "Detect other artifacts" usually detects most of them in the category "1-7Hz". If you observe that some are skipped, you can try re-running it with a higher sensitivity. It is important to review all the sensors over the entire duration of each recording to make sure these events are marked as bad segments.
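The detection step above can be sketched as a call to the artifact-detection process; the process and option names below follow the pipeline editor but are assumptions to verify, and the sensitivity value is an illustrative choice.

```matlab
% Detect artifactual segments on the MEG sensors
% (process and option names are assumptions to verify).
sFilesRaw = bst_process('CallProcess', 'process_evt_detect_badsegment', sFilesRaw, [], ...
    'timewindow',  [], ...      % entire recording
    'sensortypes', 'MEG', ...
    'threshold',   3);          % higher sensitivity to catch skipped jumps
```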

Epoching and averaging

Import epochs

Average by run

Review EEG ERP

Noise covariance

The minimum norm model we will use next to estimate the source activity can be improved by modeling the noise contaminating the data. This section shows how to estimate the noise covariance in different ways for EEG and MEG. For the MEG recordings we will use the empty room measurements we have, and for the EEG we will compute it from the pre-stimulus baselines available in all the imported epochs.
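The EEG part of this computation can be sketched with the process "Compute covariance (noise or data)". The baseline window and option names below are assumptions; check the pipeline editor for the exact values used in this study.

```matlab
% EEG noise covariance from the pre-stimulus baseline of the epochs
% (baseline window and option names are assumptions).
bst_process('CallProcess', 'process_noisecov', sFilesEpochs, [], ...
    'baseline',    [-0.5, -0.001], ...  % pre-stimulus window
    'sensortypes', 'EEG', ...
    'target',      1, ...               % 1 = noise covariance
    'dcoffset',    1);                  % remove DC offset block by block
```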

MEG: Empty room recordings

EEG: Pre-stimulus baseline

Source estimation

BEM layers

We will need to compute a BEM forward model to estimate the brain sources from the EEG recordings. For this, we need surfaces defining the separation between the different tissues of the head (scalp, inner skull, outer skull).
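Generating these surfaces can be sketched with the process "Generate BEM surfaces"; the vertex counts below are common defaults and, like the option names, are assumptions to verify.

```matlab
% Generate the three BEM surfaces for one subject
% (vertex counts and option names are assumptions).
bst_process('CallProcess', 'process_generate_bem', [], [], ...
    'subjectname', 'sub002', ...
    'nscalp',      1082, ...  % scalp vertices
    'nouter',       642, ...  % outer skull vertices
    'ninner',       642);     % inner skull vertices
```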

Forward model: EEG and MEG

Inverse model: Minimum norm estimates

Scripting

We now have all the files we need for the group analysis from the current run. We need to repeat the same operations for all the runs and all the subjects. Some of these steps are fully automatic, take a lot of time (filtering, computing the forward model), and can be executed from a script for all the subjects at once.

However, we recommend you always review manually some of the pre-processing steps (selection of the bad segments and bad channels, SSP/ICA components). Do not trust blindly any fully automated cleaning procedure.

For strict reproducibility of this analysis, we provide a script that processes all 19 subjects: brainstorm3/toolbox/script/tutorial_visual_single.m
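The overall structure of such a batch script can be sketched as a double loop over subjects and runs; the naming patterns below are assumptions for illustration, and the full, tested pipeline is in tutorial_visual_single.m.

```matlab
% Skeleton of a batch loop over all subjects and runs
% (subject/run naming patterns are assumptions).
for iSubj = 1:19
    SubjectName = sprintf('sub%03d', iSubj);
    for iRun = 1:6
        RawFile = sprintf('%s/run_%02d_sss.fif', SubjectName, iRun);
        % ... link raw file, read events, pre-process, epoch, average ...
    end
end
```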



You should note that this is not the result of a fully automated procedure. The bad channels were identified manually and are defined for each run in the script. The bad segments were detected automatically, confirmed manually for each run and saved in external files distributed with the Brainstorm package sample_group_freesurfer.zip (sample_group/brainstorm/bad_segments/*.mat).

Bad subjects

After evaluating the results for the 19 subjects, 3 subjects were excluded from the analysis:










Tutorials/VisualSingle (last edited 2016-06-29 22:45:52 by FrancoisTadel)