Corticomuscular coherence (MEG)

[TUTORIAL UNDER DEVELOPMENT: NOT READY FOR PUBLIC USE]

Authors: Raymundo Cassani, Francois Tadel & Sylvain Baillet.

Corticomuscular coherence measures the degree of similarity between electrophysiological signals (MEG, EEG, ECoG sensor traces or source time series, especially over the contralateral motor cortex) and the EMG signal recorded from muscle activity during voluntary movement. This similarity is due mainly to the descending communication along corticospinal pathways between the primary motor cortex (M1) and the muscles. For consistency and reproducibility across major software toolkits, the present tutorial replicates the processing pipeline "Analysis of corticomuscular coherence" by FieldTrip.

Background

Coherence measures the linear relationship between two signals in the frequency domain. Previous studies (Conway et al., 1995; Kilner et al., 2000) have reported corticomuscular coherence effects in the 15–30 Hz range during maintained voluntary contractions.
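
As a reminder, the magnitude-squared coherence (MSC) between signals x and y at frequency f is |Sxy(f)|^2 / (Sxx(f) * Syy(f)), where Sxy is the cross-spectral density and Sxx, Syy are the auto-spectral densities; it ranges from 0 (no linear relationship) to 1 (perfect linear relationship). The short MATLAB sketch below illustrates this with the standard mscohere function on two synthetic signals sharing a 20 Hz component; all signal and parameter values are made up for illustration and are not part of the tutorial pipeline.

    % Illustration only: MSC between two synthetic signals sharing a 20 Hz component
    fs = 1000;                         % sampling rate (Hz), arbitrary for this example
    t  = (0:fs*60-1) / fs;             % 60 s of data
    common = sin(2*pi*20*t);           % shared 20 Hz oscillation
    x = common + randn(size(t));       % "cortical" signal: common component + noise
    y = 0.5*common + randn(size(t));   % "EMG-like" signal: common component + noise
    % Welch estimate: MSC = |Sxy|.^2 ./ (Sxx .* Syy), 1 s windows, 50% overlap
    [msc, f] = mscohere(x, y, hann(fs), fs/2, fs, fs);
    plot(f, msc); xlim([0 80]); xlabel('Frequency (Hz)'); ylabel('MSC');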

IMAGE OF EXPERIMENT, SIGNALS and COHERENCE

Dataset description

The dataset comprises recordings from MEG (151-channel CTF MEG system) and bipolar EMG (from left and right extensor carpi radialis longus muscles) from one participant who was tasked to lift their hand and exert a constant force against a lever for about 10 seconds. The force was monitored by strain gauges on the lever. The participant performed two blocks of 25 trials using either the left or right wrist. EOG signals were also recorded, which will be useful for detection and attenuation of ocular artifacts. We will analyze the data from the left-wrist trials in the present tutorial. Replicating the pipeline with right-wrist data is a good exercise to do next!

Download and installation

The next sections describe how to import the participant's anatomical data, review raw data, manage event markers, pre-process the EMG and MEG signals, and epoch and import the recordings for further analysis, with a focus on computing coherence at the sensor (scalp) and brain map (source) levels.

Importing anatomy data

We then need to segment the head tissues to obtain the surfaces required to derive a realistic MEG head model (aka "forward model").

A set of (head and brain) surface files are now available for further head modelling (see below).

You can display the surfaces by double-clicking on these new nodes. There are a couple of issues with the structural data available for this tutorial. Note how the cortex surface (shown in red) overlaps with the inner skull surface (shown in gray). For this reason, a BEM forward model cannot be derived with OpenMEEG. We will instead use an analytical approximation based on the overlapping-spheres method, which has been shown to be adequately accurate for most MEG studies. Note also how the cortex and white matter surfaces do not register accurately with the cortical anatomy. We will therefore use a volume-based source estimation approach, relying on a volume grid of elementary MEG sources across the cerebrum (rather than a surface-constrained source model). We encourage users to run CAT12 or FreeSurfer to obtain surface segmentations of higher quality.

As the imported anatomy data is normalized to the MNI space, it is possible to use MNI parcellation templates to define anatomical regions of the subject's brain. These anatomical regions can be used to create volume and surface scouts, which are convenient when performing the coherence analysis at the source level. Let's add the AAL3 parcellation to the imported data.

Review the MEG and EMG recordings

A new folder is now created in Brainstorm's database explorer and contains:

Display MEG helmet and sensors

Reviewing continuous recordings

Event markers

The colored dots above the data time series indicate event markers (or triggers) saved with this dataset. The onset information for the left-wrist and right-wrist trials is saved in an auxiliary channel of the raw data named Stim. To add these markers, the events need to be decoded as follows:

This procedure creates new event markers, now shown in the Events section of the tab along with the previous event categories. In this tutorial, we will only use events U1 through U25, which correspond to how each of the 25 left-wrist trials was encoded in the study. Thus we will delete the other events and merge the left trial events.

These events are located at the beginning of the 10 s trials of left-wrist movement. In the sections below, we will compute coherence for 1 s epochs over the first 8 s of each trial, thus we need to create extra events.
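
Conceptually, this amounts to generating eight extra markers per trial, one per second, starting at each Left trial onset. A minimal sketch of that bookkeeping, with made-up onset times (not actual values from this dataset):

    % Sketch: expand each trial onset into 8 sub-events, 1 s apart (made-up onsets)
    leftOnsets  = [12.3, 45.7, 80.1];   % trial onset times in seconds (placeholder values)
    epochLength = 1;                    % epoch duration (s)
    nEpochs     = 8;                    % cover the first 8 s of each trial
    subEvents   = [];
    for iTrial = 1:numel(leftOnsets)
        subEvents = [subEvents, leftOnsets(iTrial) + (0:nEpochs-1) * epochLength]; %#ok<AGROW>
    end
    % subEvents now lists one marker per 1 s epoch, 8 markers per trial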

Pre-process

In this tutorial we will be analyzing only the Left trials. As such, in the following sections we will process only the first 330 s of the recordings.

The CTF MEG recordings in this dataset were not saved with the desired 3rd order compensation. To continue with the pre-processing we need to apply the compensation.

This process creates the SubjectCMC_clean folder, which contains a copy of the channel file and the raw recordings file Raw | clean, an exact copy of the original data but with the CTF compensation applied.

Power line artifacts

Let's start by locating the spectral components of the power line noise and assessing its impact on the MEG and EMG signals.

A new raw folder named SubjectCMC_clean_notch will appear in the database explorer. Compute the PSD for the filtered signals to verify the effect of the notch filters. Remember to set the Time window from 0 to 330 s.
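
For reference, the sketch below shows the generic signal-processing operations behind these two steps: a Welch PSD estimate to locate the power line peaks, and a narrow zero-phase IIR notch around 50 Hz (repeated at the harmonics). It uses plain Signal Processing Toolbox calls on placeholder data, not the Brainstorm processes, and the sampling rate and cutoff values are only assumptions.

    % Illustration of the two steps (placeholder data, not the Brainstorm processes)
    fs = 1200;                          % assumed sampling rate for this sketch
    x  = randn(fs*330, 1);              % stand-in for one 330 s channel
    % 1) Welch PSD (4 s windows, 50% overlap) to spot the power line peaks
    [pxx, f] = pwelch(x, hann(4*fs), 2*fs, 4*fs, fs);
    % 2) Zero-phase notch around 50 Hz; repeat at 100, 150, ... Hz for the harmonics
    d = designfilt('bandstopiir', 'FilterOrder', 2, ...
                   'HalfPowerFrequency1', 49, 'HalfPowerFrequency2', 51, ...
                   'DesignMethod', 'butter', 'SampleRate', fs);
    xNotched = filtfilt(d, x);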

Pre-process EMG

Two of the typical pre-processing steps for EMG are high-pass filtering and rectification.
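
In plain MATLAB terms, these two steps reduce to the few lines below; the 10 Hz cutoff and the variable names are only illustrative, not the exact settings of the Brainstorm processes.

    % Illustrative EMG pre-processing (placeholder data and example cutoff)
    fs  = 1200;                      % assumed sampling rate
    emg = randn(fs*330, 1);          % stand-in for the EMG channel
    emgHigh = highpass(emg, 10, fs); % high-pass to remove slow drifts and motion artifacts
    emgRect = abs(emgHigh);          % rectification: keep the envelope of the muscle activity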

Once the pipeline ends, the new folders SubjectCMC_clean_notch_high and SubjectCMC_clean_notch_high_abs are added to the database explorer. To avoid any confusion later, we can delete folders that will not be needed.

Pre-process MEG

After applying the notch filter to the MEG signals, we still need to remove other types of artifacts. We will perform:

  1. Detection and removal of artifacts with SSP

  2. Detection of segments with other artifacts

Detection and removal of artifacts with SSP

In the case of stereotypical artifacts, such as eye blinks and heartbeats, it is possible to identify their characteristic spatial distribution and then remove it from the MEG signals with methods such as Signal-Space Projection (SSP). For more details, consult the tutorials on detection and removal of artifacts with SSP. The dataset used in this tutorial contains an EOG channel but no ECG channel, thus we will only perform the removal of eye blinks.
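
The core idea of SSP can be summarized in a few lines: estimate the dominant spatial pattern of the artifact from data segments around the detected blinks, and project it out of all MEG channels. The sketch below is a conceptual simplification with placeholder data and made-up dimensions, not Brainstorm's actual SSP implementation.

    % Conceptual SSP sketch (simplified, placeholder data)
    blinkData = randn(151, 5000);      % [nChannels x nSamples] concatenated around blink events
    megData   = randn(151, 100000);    % [nChannels x nSamples] continuous MEG to clean
    [U, ~, ~] = svd(blinkData, 'econ');          % spatial components of the blink segments
    blinkTopo = U(:, 1);                         % dominant blink topography
    P = eye(151) - blinkTopo * blinkTopo';       % projector orthogonal to the blink pattern
    megClean  = P * megData;                     % MEG data with the blink component removed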

Detection of segments with other artifacts

Here we will use automatic detection of artifacts. It aims to identify typical artifacts such as the ones related to eye movements, subject movement and muscle contractions.

While this process can help identify segments with artifacts in the signals, it is still advised to review the selected segments. After a quick browse, we can see that the selected segments indeed correspond to irregularities in the MEG signal. We will then label these events as bad.
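
The general principle behind this detection is simple: band-pass the signals in frequency ranges where such artifacts dominate (low frequencies for eye and subject movements, high frequencies for muscle contractions) and flag the time windows whose power is an outlier with respect to the rest of the recording. The sketch below illustrates that idea with arbitrary bands and threshold; it is not the exact algorithm used by the Brainstorm process.

    % Rough sketch of threshold-based artifact detection (arbitrary parameters)
    fs     = 1200;
    x      = randn(fs*330, 1);                 % stand-in for one MEG channel
    xLow   = bandpass(x, [1 7], fs);           % example band where movement artifacts dominate
    winLen = fs;                               % 1 s windows
    nWin   = floor(numel(xLow) / winLen);
    winPow = zeros(nWin, 1);
    for iWin = 1:nWin
        seg = xLow((iWin-1)*winLen + (1:winLen)');
        winPow(iWin) = mean(seg.^2);           % power in each window
    end
    z = (winPow - mean(winPow)) / std(winPow); % standardized window power
    badWindows = find(z > 3);                  % windows flagged as artifactual (example threshold)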

Importing the recordings

At this point we have finished the pre-processing of our EMG and MEG recordings. Many operations can only be applied to short segments of recordings that have been imported into the database. We refer to these as epochs or trials. Thus, the next step is to import the data taking into account the Left events.

The new folder SubjectCMC_clean_notch_high_abs appears for Subject01. It contains a copy of the channel file from the continuous file, and the Left trial group. By expanding the trial group, we can notice that some trials are marked with a question mark in a red circle (ICON). These bad trials are the ones that overlap with the bad segments identified in the previous section. All the bad trials are automatically ignored in the Process1 and Process2 tabs.

Coherence (sensor level)

Once we have imported the trials, we will compute the magnitude-squared coherence (MSC) between the left EMG signal and the signal from each of the MEG sensors.
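
Under the hood, this boils down to estimating the MSC between the rectified EMG and every MEG sensor, and then (as shown further below) averaging the values within a frequency band of interest. The sketch below reproduces that logic with mscohere on placeholder data; it is not the Brainstorm coherence process, and the window settings are only examples.

    % Illustrative sensor-level MSC (placeholder data, not the Brainstorm process)
    fs      = 1200;
    nSens   = 151;
    meg     = randn(fs*60, nSens);             % [samples x sensors] stand-in for the MEG epochs
    emgRect = abs(randn(fs*60, 1));            % stand-in for the rectified EMG channel
    nFreq   = fs/2 + 1;                        % number of one-sided frequency bins (nfft = fs)
    mscAll  = zeros(nFreq, nSens);
    for iSens = 1:nSens                        % MSC between the EMG and every MEG sensor
        [mscAll(:, iSens), f] = mscohere(emgRect, meg(:, iSens), hann(fs), fs/2, fs, fs);
    end
    % Average MSC in the 15-20 Hz band: one value per sensor, ready for a topography view
    bandMask = (f >= 15) & (f <= 20);
    mscBand  = mean(mscAll(bandMask, :), 1);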

* Right-click on the spectrum and select 2D Sensor cap for a spatial visualization of the coherence results; alternatively, the shortcut Ctrl-T can be used. Once the 2D Sensor cap is shown, the sensor locations can be displayed by right-clicking and selecting Channels > Display sensors, or with the shortcut Ctrl-E.

The results above are based on the identification of a single peak. As an alternative, we can average the MSC in a given frequency band (15–20 Hz) and observe its topographical distribution.

The resulting file mscohere(0.6Hz,555win): EMGlft | tfbands has only one MSC value for each sensor (the average in the 15–20 Hz band). Thus, it is more useful to display the result in a spatial representation. Brainstorm provides three spatial representations: 2D Sensor cap, 3D Sensor cap and 2D Disk, which are accessible by right-clicking on the MSC node. Sensor MRC21 is selected as reference.

In agreement with the literature, we observe higher MSC values between the EMG signal and the MEG sensors over the contralateral primary motor cortex in the beta band. In the next sections we will perform source estimation and compute coherence at the source level.

Source analysis

In this tutorial we will perform source modelling using the distributed model approach for two source spaces: the cortex surface and the MRI volume. In the first, the location of the sources is constrained to the cortical surface obtained when the subject anatomy was imported. In the second, the sources are uniformly distributed in the entire brain volume. Before estimating the brain sources, we need to compute the head model and the noise covariance. Note that a head model is required for each source space.

Head model

The head model describes how neural electric currents produce magnetic fields and differences in electrical potentials at external sensors, given the different head tissues. This model is independent of the sensor recordings. See the head model tutorial for more details. Each source space requires its own head model.
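
In compact form, the head model is the gain (lead field) matrix G that linearly maps source currents to sensor measurements, b(t) = G j(t) + noise, where b holds the MEG measurements and j the source amplitudes. A toy numerical illustration with made-up dimensions and a random gain matrix (in practice G is computed from the anatomy and the sensor geometry):

    % Toy forward model: 151 sensors, 10,000 source locations x 3 orientations
    nSens = 151;  nSrc = 3 * 10000;
    G = randn(nSens, nSrc);            % stand-in gain matrix (Brainstorm computes the real one)
    j = zeros(nSrc, 1);  j(1:3) = 1;   % activity at one source location, arbitrary orientation
    b = G * j;                         % predicted sensor topography for that source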

Cortex surface

Keep in mind that the number of sources (vertices) in this head model is 10,000, and was defined when the subject anatomy was imported.

The (ICON) Overlapping spheres (surface) head model will appear in the database explorer.

MRI volume

The Overlapping spheres (volume) node will be added to the database explorer. The green color indicates the default head model for the folder.

Noise covariance

For MEG recordings it is recommended to derive the noise covariance from empty-room recordings. However, as such recordings are not available in this dataset, we can compute the noise covariance from the MEG signals recorded before the trials. See the noise covariance tutorial for more details.
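
In essence, the noise covariance is the channel-by-channel covariance of MEG segments assumed to contain no activity of interest (here, the periods preceding the trials). A minimal sketch with placeholder data:

    % Sketch: noise covariance from pre-trial MEG segments (placeholder data)
    noiseSeg = randn(151, 20000);                     % [nChannels x nSamples], concatenated baselines
    noiseSeg = noiseSeg - mean(noiseSeg, 2);          % remove the mean of each channel
    noiseCov = (noiseSeg * noiseSeg') / (size(noiseSeg, 2) - 1);   % [nChannels x nChannels]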

Source estimation

Now that the head model(s) and noise covariance have been computed, we can use the minimum norm imaging method to solve the inverse problem. The result is a linear inversion kernel that estimates the source brain activity giving rise to the recordings observed at the sensors. Note that an inversion kernel is obtained for each of the head models: surface and volume. See the source estimation tutorial for more details.
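
For reference, the unweighted minimum-norm kernel has the closed form K = R * G' * inv(G * R * G' + lambda^2 * C), where G is the gain matrix, R the source covariance prior, C the noise covariance and lambda the regularization parameter; the source time series are then simply K times the sensor data. The sketch below is a textbook simplification (R = identity, placeholder matrices), not Brainstorm's exact implementation, which also includes noise whitening and depth weighting.

    % Textbook minimum-norm kernel (simplified; placeholder matrices)
    nSens  = 151;  nSrc = 3 * 10000;
    G      = randn(nSens, nSrc);            % gain matrix from the head model
    C      = eye(nSens);                    % noise covariance (placeholder)
    lambda = 1;                             % regularization parameter (placeholder)
    K = G' / (G * G' + lambda^2 * C);       % inversion kernel, with source prior R = identity
    % For sensor data B [nSens x nSamples], the source time series are J = K * B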

Cortex surface

The inversion kernel (ICON) MN: MEG (surface)(Unconstr) 2018 is created, and added to the database explorer.

MRI volume

The inversion kernel (ICON) MN: MEG (volume)(Unconstr) 2018 is created and added to the database explorer. The green color in the name indicates the current default inverse model for the folder. In addition, note that each trial now has two associated source link (ICON) nodes: one obtained with the MN: MEG (surface)(Unconstr) 2018 kernel and the other with the MN: MEG (volume)(Unconstr) 2018 kernel.

Scouts

From the head model section, we know that the cortex surface and the volume grid have around 10,000 vertices each, thus as many sources were estimated. As such, it is not practical to compute coherence between the left EMG signal and the signal of each individual source. A way to address this issue is to use regions of interest, also known as scouts. Since there are two source spaces, there are surface scouts and volume scouts. Let's define scouts for the different source spaces.
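
In practice, each scout time series is a summary (for instance the mean, or the first principal component) of the time series of the sources it contains, and coherence is then computed between the EMG and this handful of scout signals rather than the 10,000+ individual sources. A hedged sketch of the mean-within-scout idea, with made-up scout definitions:

    % Sketch: reduce source time series to scout time series (placeholder data)
    nSrc = 10000;  nSamples = 12000;
    srcTs    = randn(nSrc, nSamples);            % source time series, e.g. J = K * B
    scoutIdx = {1:120, 121:260, 261:400};        % made-up vertex indices for three scouts
    scoutTs  = zeros(numel(scoutIdx), nSamples);
    for iScout = 1:numel(scoutIdx)
        scoutTs(iScout, :) = mean(srcTs(scoutIdx{iScout}, :), 1);   % one signal per scout
    end
    % Coherence is then computed between the EMG and each row of scoutTs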

Surface scouts

[TODO] A note, that the definition of scouts is far from perfect, but can give us a good idea of the surface projections of the MNI parcellations (described in the importing anatomy section).

Volume scouts

Coherence (source level)

Coherence at the source level is computed between a sensor signal (EMG) and the source signals in the (surface or volume) scouts.

Coherence with surface scouts

To facilitate the selection of the files needed to compute this coherence, let's search the database for the recordings and the source link files obtained with the MN: MEG (surface)(Unconstr) 2018 kernel.

After performing the search, a new tab called (surface) appears in the database explorer. This new tab contains the recordings and ONLY the source links for the surface source space.

Open the Pipeline editor and add two processes:

From the results we can see that the peak at 14.65 Hz corresponds to the Precentral R scout, which encompasses the right primary motor cortex, as expected. These results are in line with the ones reported in the literature.

Coherence with volume scouts

Similar to the coherence with surface scouts, a search is needed to select the recordings and the source link files obtained with the MN: MEG (volume)(Unconstr) 2018 kernel.

Open the Pipeline editor and add two processes:

From the results we can see that the peak at 14.65 Hz corresponds to the Precentral R scout, which encompasses the right primary motor cortex, as expected. These results are in line with the ones reported in the literature.

Advanced

Coherence with all sources (no scouts)

  • We could downsample the surface and create a sparser volume grid, OR
  • Refactor the coherence process to accumulate the auto- and cross-spectra outside of the function (see the sketch below).
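
To make the second option concrete: a Welch-style coherence estimate only needs three running sums per signal pair, the two auto-spectra and the cross-spectrum accumulated over windows, so a full sources-by-frequencies coherence map can be built incrementally without keeping all windowed spectra in memory. A minimal sketch of that accumulation for a single pair of signals (made-up data and window settings):

    % Welch-style MSC by accumulating auto- and cross-spectra over windows
    fs = 1200;  winLen = fs;                   % 1 s windows, no overlap (example settings)
    x  = randn(fs*100, 1);  y = randn(fs*100, 1);   % placeholder signals
    w  = hann(winLen);  nWin = floor(numel(x) / winLen);
    Sxx = zeros(winLen, 1);  Syy = zeros(winLen, 1);  Sxy = zeros(winLen, 1);
    for iWin = 1:nWin
        idx = (iWin-1)*winLen + (1:winLen)';
        X = fft(w .* x(idx));  Y = fft(w .* y(idx));
        Sxx = Sxx + abs(X).^2;                 % accumulate auto-spectrum of x
        Syy = Syy + abs(Y).^2;                 % accumulate auto-spectrum of y
        Sxy = Sxy + X .* conj(Y);              % accumulate cross-spectrum
    end
    msc = abs(Sxy).^2 ./ (Sxx .* Syy);         % magnitude-squared coherence per FFT bin
    f   = (0:winLen-1)' * fs / winLen;         % corresponding (two-sided) frequencies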

Comparison of cortex surface with FieldTrip and CAT

[TO DISCUSS among authors] This image and GIF are just for reference. They were obtained with all the surface sources, using FieldTrip- and CAT-derived surfaces.

Comparison for 14.65 Hz

ft_vs_cat.png

Sweeping from 0 to 80 Hz

ft_vs_cat.gif

Advanced

Script

[TO DO] Once we agree on all the steps above.

Additional documentation

Articles

Tutorials

Forum discussions

[TO DO] Find relevant Forum posts.




