MEG corticomuscular coherence

[TUTORIAL UNDER DEVELOPMENT: NOT READY FOR PUBLIC USE]

Authors: Raymundo Cassani

Corticomuscular coherence measures the synchrony between electrophysiological signals (MEG, EEG or ECoG) recorded from the contralateral motor cortex and the EMG signal from a muscle during voluntary movement. This synchrony originates mainly from the descending communication along the corticospinal pathways between the primary motor cortex (M1) and the muscles. This tutorial replicates the processing pipeline and analysis presented in the Analysis of corticomuscular coherence FieldTrip tutorial.

Background

Coherence is a classic method to measure the linear relationship between two signals in the frequency domain. Previous studies (Conway et al., 1995, Kilner et al., 2000) have used coherence to study the relationship between MEG signals from M1 and muscles, and they have shown synchronized activity in the 15–30 Hz range during maintained voluntary contractions.
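For reference, the magnitude-squared coherence between two signals x and y at frequency f is defined from their spectral densities as

    C_{xy}(f) = \frac{|S_{xy}(f)|^{2}}{S_{xx}(f)\, S_{yy}(f)}

where S_{xy} is the cross-spectral density between x and y, and S_{xx}, S_{yy} are their power spectral densities. C_{xy}(f) ranges from 0 (no linear relationship) to 1 (perfect linear relationship at that frequency). In practice the spectra are estimated by averaging over windows, as in Welch's method.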

IMAGE OF EXPERIMENT, SIGNALS and COHERENCE

Dataset description

The dataset comprises MEG (151-channel CTF MEG system) and bipolar EMG (from the left and right extensor carpi radialis longus muscles) recordings from one subject during an experiment in which the subject had to lift her hand and exert a constant force against a lever. The force was monitored by strain gauges on the lever. The subject performed two blocks of 25 trials in which either the left or the right wrist was extended for about 10 seconds. In addition to the MEG and EMG signals, an EOG signal was recorded to assist in the removal of ocular artifacts. Only the data for the left wrist will be analyzed in this tutorial.

Download and installation

The next sections describe: importing the subject's anatomy, reviewing the raw data, managing event markers, pre-processing the EMG and MEG signals, epoching and importing the recordings, and computing coherence at the sensor and source levels.

Importing anatomy data

Once the MRI has been imported and normalized, we need to segment the head and brain tissues to obtain the surfaces that are needed to generate a realistic head model (or forward model).

The output is a set of head and brain surface files that will be used for the head model computation.

By displaying the surfaces, we can note that the cortex surface, which corresponds to the gray matter (shown in red), overlaps heavily with the innerskull surface (shown in gray), so it cannot be used to compute a BEM forward model with OpenMEEG. However, as we are dealing with MEG signals, we can still compute the forward model with the overlapping-spheres method and obtain similar results. We can also notice that the cortex and white surfaces obtained with the method above do not follow the cortical surface accurately: they can be used for volume-based source estimation, which relies on a volume grid of source points, but they should not be used for surface-based source estimation. Better surfaces can be obtained by performing the MRI segmentation with CAT12 or FreeSurfer.

As the imported anatomy data is normalized to the MNI space, it is possible to use MNI parcellation templates to define anatomical regions of the subject's brain. These anatomical regions can be used to create volume and surface scouts, which are convenient when performing the coherence analysis at the source level. Let's add the AAL3 parcellation to the imported data.

Access the recordings

A new folder and its content are now visible in the database explorer:

Display MEG helmet and sensors

Reviewing continuous recordings

Event markers

The colored dots on top of the recordings in the time series figures represent the event markers (or triggers) saved in this dataset. In addition to these events, the start of either the left or right trials is saved in the auxiliary channel named Stim. To add these markers:

This creates new events shown in the Events section of the tab. We are only interested in the events from U1 to U25, which correspond to the 25 left trials. Thus, we will delete the other events and merge the left trial events.

These events are located at the beginning of the 10 s trials of left wrist movement. In the sections below, we will compute the coherence for 1 s epochs covering the first 8 s of each trial, thus we need to create extra events, as sketched below.
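As a rough illustration of where these extra events fall (this is not the actual Brainstorm event process; the trial onset times below are made up for the example), in MATLAB:

    % Hypothetical Left-trial onset times in seconds; the real ones come from the Stim channel events
    trialOnsets = [12.5 27.0 41.8];
    % One extra event per second over the first 8 s of each trial (implicit expansion, R2016b+)
    epochOnsets = reshape(trialOnsets + (0:7)', 1, []);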

Pre-process

In this tutorial we will be analyzing only the Left trials. As such, in the following sections we will process only the first 330 s of the recordings.

The CTF MEG recordings in this dataset were not saved with the desired 3rd order compensation. To continue with the pre-processing we need to apply the compensation.

This process creates the SubjectCMC_clean folder, which contains a copy of the channel file and the raw recordings file Raw | clean, an exact copy of the original data but with the CTF compensation applied.

Power line artifacts

Let's start by locating the spectral components and assessing the impact of the power line noise in the MEG and EMG signals.

A new raw folder named SubjectCMC_clean_notch will appear in the database explorer. Compute the PSD for the filtered signals to verify the effect of the notch filters. Remember to compute it for the Time window from 0 to 330 s.
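For readers who want to see what the PSD check and the notch filtering amount to outside of Brainstorm, here is a minimal MATLAB sketch; the 1200 Hz sampling rate, the 50 Hz line frequency and the synthetic channel are assumptions made only for illustration:

    fs = 1200;                                    % sampling rate (Hz), assumed for this sketch
    t  = 0:1/fs:330;                              % first 330 s, as analyzed in this tutorial
    x  = randn(size(t)) + 0.5*sin(2*pi*50*t);     % toy channel contaminated by 50 Hz power line noise
    [b, a] = butter(2, [49 51]/(fs/2), 'stop');   % band-stop (notch) filter around 50 Hz
    xClean = filtfilt(b, a, x);                   % zero-phase filtering, no phase distortion
    [PxxRaw,   f] = pwelch(x,      4*fs, [], [], fs);  % PSD before the notch: peak at 50 Hz
    [PxxClean, ~] = pwelch(xClean, 4*fs, [], [], fs);  % PSD after the notch: peak removed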

Pre-process EMG

Two typical pre-processing steps for EMG are high-pass filtering and rectification.
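Conceptually, these two steps amount to the following MATLAB operations; the 10 Hz cutoff and the synthetic EMG trace are assumptions for illustration, the actual processing in this tutorial is done with the Brainstorm filtering and absolute-value processes:

    fs  = 1200;                              % sampling rate (Hz), assumed
    emg = randn(1, 10*fs);                   % toy EMG trace, 10 s long
    [b, a]  = butter(4, 10/(fs/2), 'high');  % high-pass filter, 10 Hz cutoff (assumed)
    emgHigh = filtfilt(b, a, emg);           % remove slow drifts and movement artifacts
    emgRect = abs(emgHigh);                  % full-wave rectification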

Once the pipeline ends, the new folders SubjectCMC_clean_notch_high and SubjectCMC_clean_notch_high_abs are added to the database explorer. To avoid any confusion later, we can delete folders that will not be needed.

Pre-process MEG

After applying the notch filter to the MEG signals, we still need to remove other types of artifacts. We will perform:

  1. Detection and removal of artifacts with SSP

  2. Detection of segments with other artifacts

Detection and removal of artifacts with SSP

In the case of stereotypical artifacts, such as eye blinks and heartbeats, it is possible to identify their characteristic spatial distribution and then remove it from the MEG signals with methods such as Signal-Space Projection (SSP). For more details, consult the tutorials on detection and removal of artifacts with SSP. The dataset of this tutorial contains an EOG channel but no ECG signal, thus we will only perform removal of eye blinks.
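The idea behind SSP can be summarized in a few lines of MATLAB: estimate the dominant spatial component of the artifact and project the recordings onto its orthogonal complement. The matrices below are toy data, and keeping a single component is a simplification of what the Brainstorm SSP process actually does:

    nSensors = 151;
    meg      = randn(nSensors, 5000);        % toy MEG recordings [sensors x samples]
    blinkSeg = randn(nSensors, 500);         % toy data concatenated around the detected blinks
    [U, ~, ~] = svd(blinkSeg, 'econ');       % spatial components of the blink segments
    P = eye(nSensors) - U(:,1)*U(:,1)';      % projector orthogonal to the dominant blink topography
    megClean = P * meg;                      % blink contribution removed from every sample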

Detection of segments with other artifacts

Here we will use automatic detection of artifacts. It aims to identify typical artifacts such as the ones related to eye movements, subject movement and muscle contractions.

While this process can help identify segments with artifacts in the signals, it is still advised to review the selected segments. After a quick browse, it can be noticed that the selected segments indeed correspond to irregularities in the MEG signal. Then, we will label these events as bad.

Importing the recordings

At this point we have finished with the pre-processing of our EMG and MEG recordings. Many operations can only be applied to short segments of recordings that have been imported in the database. We refer to these as epochs or trials. Thus, the next step is to import the data taking into account the Left events.

The new folder SubjectCMC_clean_notch_high_abs appears for Subject01. It contains a copy of the channel file from the continuous file, and the Left trial group. By expanding the trial group, we can notice that some trials are marked with a question mark in a red circle (ICON). These bad trials are the ones that overlap with the bad segments identified in the previous section. All the bad trials are automatically ignored in the Process1 and Process2 tabs.

Coherence (sensor level)

Once we have imported the trials, we will compute the magnitude-squared coherence (MSC) between the left EMG signal and the signals from each of the MEG sensors.
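This is the same quantity estimated by MATLAB's mscohere function (Welch-averaged windows). The sketch below uses toy signals and a window length chosen only for illustration; it does not reproduce the exact 0.6 Hz resolution and 555 windows of the Brainstorm process. The last line anticipates the band-averaged result discussed further down:

    fs  = 1200;                              % sampling rate (Hz), assumed
    emg = randn(1, 200*fs);                  % toy rectified EMG signal
    meg = 0.2*emg + randn(1, 200*fs);        % toy MEG channel, partially coupled to the EMG
    win = 2*fs;                              % 2 s windows -> 0.5 Hz frequency resolution
    [Cxy, F] = mscohere(emg, meg, hann(win), win/2, win, fs);  % magnitude-squared coherence
    mscBeta  = mean(Cxy(F >= 15 & F <= 20)); % average MSC in the 15-20 Hz band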

* Right-click on the spectrum and select 2D Sensor cap for a spatial visualization of the coherence results; alternatively, the shortcut Ctrl-T can be used. Once the 2D Sensor cap is shown, the sensor locations can be displayed with right-click then Channels > Display sensors, or the shortcut Ctrl-E.

The results above are based on the identification of a single peak; as an alternative, we can average the MSC in a given frequency band (15-20 Hz) and observe its topographical distribution.

The resulting file mscohere(0.6Hz,555win): EMGlft | tfbands has only one MSC value for each sensor (the average in the 15-20 Hz band). Thus, it is more useful to display the result in a spatial representation. Brainstorm provides 3 spatial representations: 3D Sensor cap, 2D Sensor cap and 2D Disc, which are accessible by right-clicking on the MSC node. Sensor MRC21 is selected as reference.

In agreement with the literature, we observe higher MSC values between the EMG signal and the signals from MEG sensors over the contralateral primary motor cortex in the beta band range. In the next sections we will perform source estimation and compute coherence at the source level.

Source analysis

Before estimating the brain sources, we need to compute the head model and the noise covariance.

Head model

The head model describes how neural electric currents produce magnetic fields and differences in electrical potentials at external sensors, given the different head tissues. This model is independent of sensor recordings. See the head model tutorial for more details.
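Schematically, the head model provides the gain (lead field) matrix \mathbf{G} that maps source currents to sensor measurements:

    \mathbf{b}(t) = \mathbf{G}\,\mathbf{j}(t) + \mathbf{n}(t)

where \mathbf{b}(t) are the sensor measurements, \mathbf{j}(t) the source current amplitudes and \mathbf{n}(t) the measurement noise.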

As a result, the Overlapping spheres head model will appear in the database explorer.

Noise covariance

For MEG it is recommended to derive the noise covariance from empty room recordings. However, as we do not have those recordings in this dataset, we can compute the noise covariance from the MEG signals recorded before the trials. See the noise covariance tutorial for more details.
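Conceptually, the noise covariance is simply the sensor-by-sensor covariance of a segment assumed to contain no activity of interest. A minimal MATLAB sketch with toy data (the segment length below is arbitrary):

    nSensors    = 151;
    megBaseline = randn(nSensors, 10*1200);  % toy pre-trial segment [sensors x samples], 10 s at 1200 Hz
    noiseCov    = cov(megBaseline');         % [nSensors x nSensors] noise covariance estimate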

Source estimation

With the help of the head model and the noise covariance, we can solve the inverse problem by computing an inverse kernel that will estimate the brain activity that gives origin to the observed recordings in the sensors. See the source estimation tutorial for more details.
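The source files created this way are links rather than full source time series: schematically, the source signals are reconstructed on demand by multiplying the imaging kernel with the sensor recordings. A toy-sized MATLAB sketch of that idea (constrained orientations assumed, one source per vertex):

    nSensors = 151;  nSources = 10000;  nTime = 1200;
    K = randn(nSources, nSensors);           % toy imaging kernel (inverse operator)
    B = randn(nSensors, nTime);              % toy sensor recordings
    J = K * B;                               % estimated source time series [nSources x nTime]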

The inversion kernel dSPM-unscaled: MEG(Constr) 2018 was created; note that each recordings node now has an associated source link.

Coherence (source level)

[TO DISCUSS among authors] Better source localization can be obtained by performing the MRI segmentation with CAT12, although it adds ~45 min of additional processing. We may want to provide the already processed MRI. Thoughts?

From the earlier section on importing anatomy data, we can recall that the cortex surface has 10,000 vertices, thus as many sources were estimated. As can be seen, it is not practical to compute coherence between the left EMG signal and each individual source. A way to address this issue is with the use of regions of interest, or Scouts.

It is important to note that the coherence will be computed between a sensor signal (EMG) and the source signals within the scouts.
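As a conceptual MATLAB sketch of this sensor-to-scout coherence, assuming the scout time series is summarized as the mean of the source signals of its vertices (toy data; the 1200 Hz sampling rate and the 12 vertices per scout are assumptions):

    fs       = 1200;
    nTime    = 60*fs;
    emg      = randn(1, nTime);              % toy rectified EMG channel
    scoutSrc = randn(12, nTime) + 0.1*emg;   % toy source signals for the 12 vertices of one scout
    scoutSig = mean(scoutSrc, 1);            % summarize the scout as the mean of its vertices
    win      = 2*fs;
    [Cxy, F] = mscohere(emg, scoutSig, hann(win), win/2, win, fs);  % MSC between EMG and scout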

Results with FieldTrip MRI segmentation

Results with CAT12 MRI segmentation

[TO DISCUSS among authors] Same as the previous section but using the surface from CAT12, and using the DK atlas. res_coh_ab_cat.png

[TO DISCUSS among authors] In addition, I barely managed to run the coherence (as it took up to 30 GB of memory) for all the vertices vs EMG Left, for the source estimations using the FieldTrip and the CAT12 segmentations.

Comparison for 14.65 Hz

ft_vs_cat.png

Sweeping from 0 to 80 Hz

ft_vs_cat.gif

Advanced

Script

[TO DO] Once we agree on all the steps above.

Additional documentation

Articles

Tutorials

Forum discussions

[TO DO] Find relevant Forum posts.




