MEG visual tutorial: Single subject (original)

Authors: Francois Tadel, Elizabeth Bock.

The aim of this tutorial is to reproduce in the Brainstorm environment the analysis described in the SPM tutorial "Multimodal, Multisubject data fusion". The data processed here consists of simultaneous MEG/EEG recordings from 19 participants performing a simple visual recognition task based on presentations of famous, unfamiliar and scrambled faces.

The analysis is split into two tutorial pages: the present tutorial describes the detailed analysis of one single subject; the second tutorial describes batch processing and group analysis of all 19 participants.

Note that the operations used here are not detailed: the goal of this tutorial is not to introduce Brainstorm to new users. For in-depth explanations of the interface and theoretical foundations, please refer to the introduction tutorials.

WARNING: This tutorial corresponds to the first version of the OpenfMRI dataset described below. A new tutorial is available here to handle the BIDS version of the dataset.

License

This dataset was obtained from the OpenfMRI project (http://www.openfmri.org), accession #ds117. It is made available under the Creative Commons Attribution 4.0 International Public License. Please cite the following reference if you use these data:
Wakeman DG, Henson RN, A multi-subject, multi-modal human neuroimaging dataset, Scientific Data (2015)

For any questions regarding the data, please contact: rik.henson@mrc-cbu.cam.ac.uk

Presentation of the experiment

Experiment

MEG acquisition

Subject anatomy

Download and installation

Import the anatomy

This page explains how to import and process subject #002 only. Subject #001 is not the best example because it is later excluded from the EEG group analysis (incorrect electrode positions).

Access the recordings

We need to attach the continuous .fif files containing the recordings to the database.

Channel classification

A few non-EEG channels are mixed in with the EEG channels; we need to fix this classification before applying any operation to the EEG channels.

MRI registration

At this point, the MEG/MRI registration is based only on the three anatomical landmarks NAS/LPA/RPA. All the MRI scans were anonymized (defaced) and for some subjects the nasion could not be defined properly. We will try to refine this registration using the additional head points that were digitized (only the points above the nasion).
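In a script, this refinement can be sketched with a single process call. The process name and the `sFiles` variable below are assumptions based on the usual Brainstorm pipeline conventions, not an exact copy of the tutorial script:

```matlab
% sFiles: assumed to hold the raw file links returned by the previous
% import steps. The automatic refinement fits the digitized head points
% to the scalp surface (process name assumed: process_headpoints_refine).
sFiles = bst_process('CallProcess', 'process_headpoints_refine', sFiles, []);
```

Always check the result visually (right-click on the channel file > MRI registration > Check) before trusting the automatic fit.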

Read stimulus triggers

We need to read the stimulus markers from the STI channels. The following tasks can be done interactively with the menus in the Record tab, as in the introduction tutorials. We will illustrate here how to do this with the pipeline editor, which makes it easier to batch for all the runs and all the subjects.
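A pipeline-editor call for this step could look like the sketch below. The trigger channel name `STI101` is an assumption based on standard Elekta-Neuromag recordings, and `sFiles` is assumed to hold the links to the continuous files:

```matlab
% Read events from the stimulus channel of the continuous .fif files.
% trackmode = 1 means "value": one event is created for each new value
% read on the trigger channel.
sFiles = bst_process('CallProcess', 'process_evt_read', sFiles, [], ...
    'stimchan',  'STI101', ...  % name of the trigger channel (assumed)
    'trackmode', 1, ...         % 1 = value, 2 = bit, 3 = TTL
    'zero',      0);            % do not accept zero as an event value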

Pre-processing

Spectral evaluation

Remove line noise

EEG reference and bad channels

Artifact detection

Heartbeats: Detection

Eye blinks: Detection

Heartbeats: Correction with SSP

Additional bad segments

SQUID jumps

MEG signals recorded with Elekta-Neuromag systems frequently contain SQUID jumps (more information). These sharp steps followed by a change of baseline value are easy to identify visually but more complicated to detect automatically.

The process "Detect other artifacts" usually detects most of them in the category "1-7Hz". If you observe that some were missed, you can try re-running it with a higher sensitivity. It is important to review all the sensors and all the time in each run to make sure these events are marked as bad segments.
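A scripted version of this detection could be sketched as follows; the option values shown (threshold, sensor types) are illustrative defaults, not the exact settings used in the distributed tutorial script:

```matlab
% Detect artifacts in two frequency bands: 1-7Hz (eye movements, SQUID
% jumps) and 40-240Hz (muscle activity, sensor noise). A lower threshold
% increases the sensitivity of the detection.
sFiles = bst_process('CallProcess', 'process_evt_detect_badsegment', sFiles, [], ...
    'timewindow',  [], ...          % empty = process the whole file
    'sensortypes', 'MEG, EEG', ...
    'threshold',   3, ...           % 1-2: sensitive, 3: default, 4-5: conservative
    'isLowFreq',   1, ...           % detect 1-7Hz artifacts
    'isHighFreq',  1);              % detect 40-240Hz artifacts
```

The events created in these categories should then be reviewed and converted to bad segments.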

Epoching and averaging

Import epochs

Average by run

Review EEG ERP

Source estimation

MEG noise covariance: Empty room recordings

The minimum norm model we will use next to estimate the source activity can be improved by modeling the noise contaminating the data. This section shows how to estimate the noise covariance in different ways for EEG and MEG: for the MEG recordings we will use the empty room measurements, and for the EEG we will compute it from the pre-stimulus baselines of all the imported epochs.
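The two computations can be sketched with the noise covariance process. The variable names `sNoise` and `sEpochs` and the baseline window value are illustrative assumptions:

```matlab
% MEG: noise covariance from the empty room recording (sNoise is assumed
% to hold the link to the imported empty room file).
sFiles = bst_process('CallProcess', 'process_noisecov', sNoise, [], ...
    'baseline',    [], ...   % empty = use the whole empty room file
    'sensortypes', 'MEG', ...
    'target',      1, ...    % 1 = noise covariance
    'dcoffset',    1, ...    % remove DC offset, block by block
    'copycond',    1, ...    % copy the file to all the conditions
    'copysubj',    0);

% EEG: noise covariance from the pre-stimulus baseline of the imported
% epochs (sEpochs assumed; baseline window assumed to match the epochs).
sFiles = bst_process('CallProcess', 'process_noisecov', sEpochs, [], ...
    'baseline',    [-0.5, 0], ...  % pre-stimulus window (assumed)
    'sensortypes', 'EEG', ...
    'target',      1, ...
    'dcoffset',    1);
```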

EEG noise covariance: Pre-stimulus baseline

BEM layers

We will compute a BEM forward model to estimate the brain sources from the EEG recordings. For this, we need some layers defining the separation between the different tissues of the head (scalp, inner skull, outer skull).

Forward model: EEG and MEG

Inverse model: Minimum norm estimates

Time-frequency analysis

We will compute the time-frequency decomposition of each trial using Morlet wavelets, and average the power of the Morlet coefficients for each condition and each run separately. We will restrict the computation to the MEG magnetometers and the EEG channels to limit the computation time and disk usage.
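A sketch of the corresponding process call is shown below. The exact fields of the `edit` options structure and the frequency band are assumptions for illustration; in practice this structure is easiest to obtain from the pipeline editor's script generator:

```matlab
% Morlet wavelet decomposition of each trial, restricted to the MEG
% magnetometers and the EEG channels; the power of the coefficients is
% averaged across trials ('Output' = 'average').
sFiles = bst_process('CallProcess', 'process_timefreq', sEpochs, [], ...
    'sensortypes', 'MEG MAG, EEG', ...
    'edit', struct(...
        'Comment',      'Avg,Power', ...
        'TimeBands',    [], ...
        'Freqs',        6:2:60, ...     % frequencies of interest (assumed)
        'MorletFc',     1, ...          % central frequency of the mother wavelet
        'MorletFwhmTc', 3, ...          % temporal resolution (FWHM in seconds)
        'Measure',      'power', ...
        'Output',       'average', ...
        'SaveKernel',   0), ...
    'normalize', 'none');
```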

Scripting

We now have all the files we need for the group analysis (next tutorial). We need to repeat the same operations for all the runs and all the subjects. Some of these steps are fully automatic and take a lot of time (filtering, computing the forward model); they should be executed from a script.

However, we recommend you always review manually some of the pre-processing steps (selection of the bad segments and bad channels, SSP/ICA components). Do not trust blindly any fully automated cleaning procedure.

For the strict reproducibility of this analysis, we provide a script that processes all 19 subjects: brainstorm3/toolbox/script/tutorial_visual_single_orig.m (execution time: 10-30 hours)
Report for the first subject: report_TutorialVisual_sub001.html



You should note that this is not the result of a fully automated procedure. The bad channels were identified manually and are defined for each run in the script. The bad segments were detected automatically, confirmed manually for each run and saved in external files distributed with the Brainstorm package sample_group_freesurfer.zip (sample_group/brainstorm/bad_segments/*.mat).

All the process calls (bst_process) were generated automatically with the script generator (menu Generate .m script in the pipeline editor). Everything else was added manually (loops, bad channels, file copies).
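The overall structure of such a script can be sketched as a double loop over subjects and runs; the subject naming pattern and the number of runs below are illustrative, not taken from the distributed script:

```matlab
% Skeleton of a batch script: for each subject and each run, call the
% pipeline generated by the script generator (names are illustrative).
SubjectNames = arrayfun(@(i) sprintf('sub%03d', i), 1:19, 'UniformOutput', 0);
for iSubj = 1:length(SubjectNames)
    for iRun = 1:6   % number of runs per subject (assumed)
        % ... here: attach the run, read events, filter, detect and
        % correct artifacts, import epochs, average, source estimation,
        % using the bst_process calls produced by the script generator ...
    end
    % Manual steps (bad channels, confirmation of bad segments) are
    % defined per run directly in the script.
end
```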

Bad subjects

After evaluating the results for the 19 subjects, 3 subjects were excluded from the analysis:

Make sure you don't include these subjects in the group analysis.










Tutorials/VisualSingleOrig (last edited 2018-11-26 21:52:35 by SylvainBaillet)