MEG visual tutorial: Single subject (BIDS)

Authors: Francois Tadel, Elizabeth Bock.

The aim of this tutorial is to reproduce in the Brainstorm environment the analysis described in the SPM tutorial "Multimodal, Multisubject data fusion". We use here a recent update of this dataset, reformatted to follow the Brain Imaging Data Structure (BIDS), a standard for neuroimaging data organization. It is part of a collective effort to document and standardize MEG/EEG group analysis; see the Frontiers research topic: From raw MEG/EEG to publication: how to perform MEG/EEG group analysis with free academic software.

The data processed here consists of simultaneous MEG/EEG recordings from 16 participants performing a simple visual recognition task, with presentations of famous, unfamiliar and scrambled faces. The analysis is split into two tutorial pages: the present tutorial describes the detailed interactive analysis of a single subject; the second tutorial describes the batch processing and group analysis of all 16 participants.

Note that the operations used here are not detailed; the goal of this tutorial is not to introduce Brainstorm to new users. For in-depth explanations of the interface and theoretical foundations, please refer to the introduction tutorials.

License

This dataset was obtained from the OpenfMRI project (http://www.openfmri.org), accession #ds117. It is made available under the Creative Commons Attribution 4.0 International Public License. Please cite the following reference if you use these data: Wakeman DG, Henson RN, A multi-subject, multi-modal human neuroimaging dataset, Scientific Data (2015).

For any questions regarding the data, please contact: rik.henson@mrc-cbu.cam.ac.uk

Presentation of the experiment

Experiment

MEG acquisition

Subject anatomy

Download and installation

Import the anatomy

This dataset is formatted following the BIDS-MEG specifications, therefore we could import all the relevant information (MRI, FreeSurfer segmentation, MEG+EEG recordings) in just one click, with the menu File > Load protocol > Import BIDS dataset, as described in the online tutorial MEG resting state & OMEGA database. However, because your own data might not be organized following the BIDS standard, this tutorial illustrates all the detailed import steps rather than the BIDS shortcut. Additionally, we will need some steps that are not part of the standard import, because of the data anonymization (the MRI scans were defaced and the acquisition dates removed).
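
For reference, the BIDS shortcut can also be scripted. Below is a minimal sketch, assuming a recent Brainstorm version where the process is named process_import_bids; the dataset path is a placeholder and the option names may differ in your version:

    % Minimal sketch of the one-click BIDS import (assumes Brainstorm is running)
    BidsDir = '/path/to/ds000117';  % placeholder: local copy of the BIDS dataset
    sFiles = bst_process('CallProcess', 'process_import_bids', [], [], ...
        'bidsdir',      {BidsDir, 'BIDS'}, ...  % dataset folder
        'nvertices',    15000, ...              % cortex resolution for the FreeSurfer import
        'channelalign', 1);                     % refine MEG/MRI registration with head points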

This page explains how to import and process the first run of subject #01 only. All the other files will have to be processed in the same way.

Access the recordings

We need to link the continuous .fif files containing the recordings to the database.
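
In the pipeline editor, this corresponds to the process "Import > Import recordings > Create link to raw file". A hedged sketch for the first run, with a placeholder path to adapt to your local copy of the dataset:

    % Sketch: link the first run of sub-01 to the database (path is a placeholder)
    RawFile = ['/path/to/ds000117/sub-01/ses-meg/meg/' ...
               'sub-01_ses-meg_task-facerecognition_run-01_meg.fif'];
    sFilesRaw = bst_process('CallProcess', 'process_import_data_raw', [], [], ...
        'subjectname',    'sub-01', ...
        'datafile',       {RawFile, 'FIF'}, ...
        'channelreplace', 1, ...  % replace the channel file if it already exists
        'channelalign',   0);     % postpone the automatic registration (see below)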

Channel classification

A few non-EEG channels are mixed in with the EEG channels; we need to fix their types before applying any operation to the EEG channels.
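
According to the dataset documentation, the extra bipolar electrodes were saved with EEG names: EEG061/EEG062 are the EOG, EEG063 the ECG, and EEG064 a free-floating electrode (double-check these names on your own recordings). A sketch with the process "Standardize > Set channel type":

    % Sketch: fix the type of the non-EEG channels (names from the dataset docs)
    bst_process('CallProcess', 'process_channel_settype', sFilesRaw, [], ...
        'sensortypes', 'EEG061, EEG062', 'newtype', 'EOG');  % horizontal/vertical EOG
    bst_process('CallProcess', 'process_channel_settype', sFilesRaw, [], ...
        'sensortypes', 'EEG063', 'newtype', 'ECG');          % electrocardiogram
    bst_process('CallProcess', 'process_channel_settype', sFilesRaw, [], ...
        'sensortypes', 'EEG064', 'newtype', 'Misc');         % free-floating electrode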

MRI registration

At this point, the MEG/MRI registration is based only on the three anatomical landmarks NAS/LPA/RPA, which are not even set accurately (we used default MNI positions). Additionally, all the MRI scans were anonymized (defaced), so the digitized head points below the nasion cannot be used. We will refine this registration using the additional head points that were digitized above the nasion.
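
A sketch of the equivalent pipeline processes, assuming the process names of recent Brainstorm versions (process_headpoints_remove, process_headpoints_refine):

    % Sketch: discard the facial points that cannot match the defaced MRI, then refine
    bst_process('CallProcess', 'process_headpoints_remove', sFilesRaw, [], ...
        'zlimit', 0);  % remove digitized points below the nasion plane (z < 0)
    bst_process('CallProcess', 'process_headpoints_refine', sFilesRaw, []);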

Read stimulus triggers

We need to read the stimulus markers from the STI channels. The following tasks can be done interactively with the menus in the Record tab, as in the introduction tutorials. We illustrate here how to do this with the pipeline editor instead, as it will be easier to batch for all the runs and all the subjects.
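
A sketch of the event reading, assuming the composite trigger channel is STI101 and using the event codes documented with the dataset (e.g. 5/6/7 = famous faces); verify both on your own recordings:

    % Sketch: decode the stimulus triggers from the composite STI channel
    bst_process('CallProcess', 'process_evt_read', sFilesRaw, [], ...
        'stimchan',  'STI101', ...
        'trackmode', 1, ...  % value: detect the changes of value on the channel
        'zero',      0);     % do not accept zero as a trigger value
    % Group the numeric events by experimental condition (codes from the dataset docs)
    bst_process('CallProcess', 'process_evt_merge', sFilesRaw, [], ...
        'evtnames', '5,6,7', 'newname', 'Famous');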

Pre-processing

Spectral evaluation
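
A sketch with the process "Frequency > Power spectrum density (Welch)"; the window parameters below are typical values, not prescriptions:

    % Sketch: PSD (Welch) over the continuous file, to evaluate the sensor quality
    bst_process('CallProcess', 'process_psd', sFilesRaw, [], ...
        'timewindow',  [], ...   % use all the recordings
        'win_length',  4, ...    % 4-second estimation windows
        'win_overlap', 50, ...   % 50% overlap between windows
        'sensortypes', 'MEG, EEG');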

Remove line noise
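
These recordings were acquired in the UK, so the power line artifact is at 50 Hz. A sketch with the notch filter process (frequencies to adapt to your site):

    % Sketch: notch filter at 50Hz and harmonics (UK power lines)
    sFilesClean = bst_process('CallProcess', 'process_notch', sFilesRaw, [], ...
        'freqlist',    [50, 100, 150, 200], ...
        'sensortypes', 'MEG, EEG');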

EEG reference and bad channels
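
A sketch of the average reference computation (stored as a projector, so it can be modified later); the bad EEG channels should be marked before this step. The variable sFilesClean comes from the notch filter sketch above:

    % Sketch: re-reference the EEG to the average of all the good EEG channels
    bst_process('CallProcess', 'process_eegref', sFilesClean, [], ...
        'eegref',      'AVERAGE', ...
        'sensortypes', 'EEG');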

Artifact detection

Heartbeats: Detection
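
A sketch, assuming the ECG was recorded on channel EEG063 as documented with this dataset:

    % Sketch: detect the heartbeats from the ECG channel
    bst_process('CallProcess', 'process_evt_detect_ecg', sFilesClean, [], ...
        'channelname', 'EEG063', ...  % ECG channel in this dataset
        'timewindow',  [], ...
        'eventname',   'cardiac');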

Eye blinks: Detection
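
A similar sketch for the blinks, assuming the vertical EOG is channel EEG062 in this dataset:

    % Sketch: detect the eye blinks from the vertical EOG channel
    bst_process('CallProcess', 'process_evt_detect_eog', sFilesClean, [], ...
        'channelname', 'EEG062', ...
        'timewindow',  [], ...
        'eventname',   'blink');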

Heartbeats: Correction with SSP
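
A sketch of the SSP computation for the magnetometers (to repeat for the gradiometers); the selection of the components must always be reviewed manually:

    % Sketch: compute SSP projectors from the cardiac events (review before trusting)
    bst_process('CallProcess', 'process_ssp_ecg', sFilesClean, [], ...
        'eventname',   'cardiac', ...
        'sensortypes', 'MEG MAG', ...  % run again with 'MEG GRAD'
        'usessp',      1, ...
        'select',      1);             % keep the first component by default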

Additional bad segments

SQUID jumps

MEG signals recorded with Elekta-Neuromag systems frequently contain SQUID jumps (more information). These sharp steps followed by a change of baseline value are easy to identify visually but harder to detect automatically.

The process "Detect other artifacts" usually detects most of them in the category "1-7Hz". If you observe that some are skipped, you can try re-running it with a higher sensitivity. It is important to review all the sensors and all the time in each run to be sure these events are marked as bad segments.

Epoching and averaging

Import epochs
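
A sketch of the epoch import; the epoch window below is an assumption to adapt to your analysis:

    % Sketch: import epochs around the face events (window is an example value)
    sFilesEpochs = bst_process('CallProcess', 'process_import_data_event', sFilesClean, [], ...
        'subjectname', 'sub-01', ...
        'condition',   '', ...
        'eventname',   'Famous, Unfamiliar, Scrambled', ...
        'timewindow',  [], ...
        'epochtime',   [-0.5, 1.2], ...  % assumed epoch window, in seconds
        'createcond',  0, ...
        'usectfcomp',  0, ...  % no CTF compensation for Elekta recordings
        'usessp',      1);     % apply the SSP projectors computed earlier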

Average by run
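
A sketch of the averaging process ('avgtype' 5 corresponds to "By trial group (folder average)" in recent versions, i.e. one average per condition and per run):

    % Sketch: average the epochs of each condition, separately for each run
    sFilesAvg = bst_process('CallProcess', 'process_average', sFilesEpochs, [], ...
        'avgtype',  5, ...  % by trial group (folder average)
        'avg_func', 1, ...  % arithmetic average: mean(x)
        'weighted', 0);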

Review EEG ERP

Source estimation

MEG noise covariance: Empty room recordings

The minimum norm model we will use next to estimate the source activity can be improved by modeling the noise contaminating the data. The introduction tutorials explain how to estimate the noise covariance in different ways for EEG and MEG. For the MEG recordings, we will use the empty room measurements; for the EEG, we will compute it from the pre-stimulus baselines of all the imported epochs.

There are 8 empty room files available in this dataset. For each subject, we will use only one file: the one acquired at the closest date. We will import and process all the empty room recordings at once, even if only one is needed for the current subject. Later, for each subject, we will select the most appropriate noise covariance matrix.
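
A sketch of the noise covariance computation from the empty room recordings; sFilesNoise is a hypothetical variable standing for the imported empty room files:

    % Sketch: MEG noise covariance from the empty room recordings
    bst_process('CallProcess', 'process_noisecov', sFilesNoise, [], ...
        'baseline',    [], ...  % use the whole files
        'sensortypes', 'MEG', ...
        'target',      1, ...   % 1 = noise covariance (2 = data covariance)
        'dcoffset',    1, ...   % remove the DC offset block by block
        'identity',    0);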

EEG noise covariance: Pre-stimulus baseline
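
A sketch for the EEG counterpart, computed from the pre-stimulus baselines of the imported epochs and merged with the MEG covariance ('replacefile' 2 = merge, in recent versions; the baseline window is an assumption):

    % Sketch: EEG noise covariance from the pre-stimulus baselines of the epochs
    bst_process('CallProcess', 'process_noisecov', sFilesEpochs, [], ...
        'baseline',    [-0.5, -0.001], ...  % assumed pre-stimulus window, in seconds
        'sensortypes', 'EEG', ...
        'target',      1, ...
        'dcoffset',    1, ...
        'replacefile', 2);                  % merge with the existing MEG covariance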

BEM layers

We will compute a BEM forward model to estimate the brain sources from the EEG recordings. For this, we need a set of surfaces defining the separation between the different tissues of the head (scalp, inner skull, outer skull).
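
A sketch of the BEM surface generation, with the default numbers of vertices of recent Brainstorm versions:

    % Sketch: generate the three BEM layers from the FreeSurfer anatomy
    bst_process('CallProcess', 'process_generate_bem', [], [], ...
        'subjectname', 'sub-01', ...
        'nscalp',      1082, ...  % scalp vertices
        'nouter',      642, ...   % outer skull vertices
        'ninner',      642, ...   % inner skull vertices
        'thickness',   4);        % skull thickness, in mm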

Forward model: EEG and MEG
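
A sketch of the forward model computation (overlapping spheres for MEG, OpenMEEG BEM for EEG); the numeric option codes come from recent script-generator output and may differ in your version:

    % Sketch: compute the EEG and MEG forward models on the cortex surface
    bst_process('CallProcess', 'process_headmodel', sFilesAvg, [], ...
        'sourcespace', 1, ...  % cortex surface
        'meg',         3, ...  % overlapping spheres
        'eeg',         3);     % OpenMEEG BEM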

Inverse model: Minimum norm estimates
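
A sketch of the minimum norm computation; the 'inverse' options structure is abbreviated here, so let the script generator produce the complete field list for your version:

    % Sketch: compute a shared minimum norm imaging kernel (abbreviated options)
    bst_process('CallProcess', 'process_inverse_2018', sFilesAvg, [], ...
        'output',  1, ...  % kernel only, shared among all the files
        'inverse', struct(...
            'Comment',        'MN: MEG', ...
            'InverseMethod',  'minnorm', ...
            'InverseMeasure', 'amplitude', ...  % current density maps
            'SourceOrient',   {{'fixed'}}, ...  % constrained orientations
            'ComputeKernel',  1, ...
            'DataTypes',      {{'MEG GRAD', 'MEG MAG'}}));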

Time-frequency analysis

We will compute the time-frequency decomposition of each trial using Morlet wavelets, and average the power of the Morlet coefficients for each condition and each run separately. We will restrict the computation to the MEG magnetometers and the EEG channels to limit the computation time and disk usage.
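
A sketch of the Morlet decomposition, averaging the power across the trials of each condition; the 'edit' structure is abbreviated and its values are examples, the pipeline editor generates the full version:

    % Sketch: Morlet wavelets on MEG magnetometers + EEG, power averaged per condition
    bst_process('CallProcess', 'process_timefreq', sFilesEpochs, [], ...
        'sensortypes', 'MEG MAG, EEG', ...
        'edit', struct(...
            'Comment',      'Avg,Power,1-60Hz', ...
            'TimeBands',    [], ...
            'Freqs',        1:60, ...       % example frequency vector, in Hz
            'MorletFc',     1, ...          % central frequency of the mother wavelet
            'MorletFwhmTc', 3, ...          % temporal resolution, in number of periods
            'Measure',      'power', ...
            'Output',       'average', ...  % average the power across trials
            'SaveKernel',   0));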

Scripting

We now have all the files we need for the group analysis (next tutorial). We need to repeat the same operations for all the runs and all the subjects. Some of these steps are fully automatic and take a long time (filtering, computing the forward model); they should be executed from a script.

However, we recommend you always review some of the pre-processing steps manually (selection of the bad segments and bad channels, SSP/ICA components). Do not blindly trust any fully automated cleaning procedure.

For the strict reproducibility of this analysis, we provide a script that processes all 16 subjects: brainstorm3/toolbox/script/tutorial_visual_full_single.m (execution time: 10-30 hours).
Report for the first subject: report_TutorialVisual_sub-01.html

You should note that this is not the result of a fully automated procedure. The bad channels were identified manually and are defined for each run in the script. The bad segments were detected automatically, confirmed manually for each run and saved in external files, then exported as text and copied at the end of this script.

All the process calls (bst_process) were generated automatically with the script generator (menu Generate .m script in the pipeline editor). Everything else was added manually (loops, bad channels, file copies).
