Corticomuscular coherence (MEG)

[TUTORIAL UNDER DEVELOPMENT: NOT READY FOR PUBLIC USE]

Authors: Raymundo Cassani, Francois Tadel & Sylvain Baillet.

Corticomuscular coherence measures the degree of similarity between electrophysiological signals (MEG, EEG, ECoG sensor traces or source time series, especially over the contralateral motor cortex) and the EMG signal recorded from muscle activity during voluntary movement. This signal similarity is due mainly to the descending communication along corticospinal pathways between primary motor cortex (M1) and muscles. For consistency and reproducibility purposes across major software toolkits, the present tutorial replicates the processing pipeline "Analysis of corticomuscular coherence" by FieldTrip.

Background

Coherence measures the linear relationship between two signals in the frequency domain. Previous studies (Conway et al., 1995; Kilner et al., 2000) have reported corticomuscular coherence effects in the 15–30 Hz range during maintained voluntary contractions.
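For illustration only (this is not a Brainstorm step), the magnitude-squared coherence between two synthetic signals that share a 20 Hz component can be sketched in Python with SciPy; all signal parameters below are made up:

```python
import numpy as np
from scipy.signal import coherence

fs = 1000                       # sampling rate (Hz), illustrative
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)

# Two noisy signals sharing a common 20 Hz (beta-band) component
common = np.sin(2 * np.pi * 20 * t)
x = common + rng.standard_normal(t.size)        # "cortical" signal
y = 0.5 * common + rng.standard_normal(t.size)  # "EMG-like" signal

# Magnitude-squared coherence, Welch-averaged over 1-s windows
f, cxy = coherence(x, y, fs=fs, nperseg=fs)

peak = f[np.argmax(cxy)]
print(f"coherence peaks at {peak:.0f} Hz")
```

Coherence is close to 1 near the shared 20 Hz component and stays near the chance floor elsewhere; averaging over more windows lowers that floor.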

IMAGE OF EXPERIMENT, SIGNALS and COHERENCE

Dataset description

The dataset comprises recordings from MEG (151-channel CTF MEG system) and bipolar EMG (from the left and right extensor carpi radialis longus muscles) from one participant, who was asked to lift their hand and exert a constant force against a lever for about 10 seconds. The force was monitored by strain gauges on the lever. The participant performed two blocks of 25 trials using either the left or right wrist. EOG signals were also recorded, which will be useful for the detection and attenuation of ocular artifacts. We will analyze the data from the left-wrist trials in the present tutorial. Replicating the pipeline with the right-wrist data is a good exercise to do next!

Download and installation

The next sections describe how to import the participant's anatomical data, review the raw data, manage event markers, pre-process the EMG and MEG signals, and epoch and import the recordings for further analyses, with a focus on computing coherence at the sensor (scalp) and brain map (source) levels.

Importing and processing anatomy data

We then need to segment the head tissues to obtain the surfaces required to derive a realistic MEG head model (aka "forward model"). Here, we will perform the MRI segmentation with CAT12; this process takes between 30 and 60 minutes.

Once finished, multiple atlases or anatomical parcellations (ICON) will appear in the dataset tree, alongside surfaces for the head (head mask), white matter, cortex (pial envelope), and the midpoint between these last two. The default surfaces are indicated in green. You can display the surfaces by double-clicking on these new nodes. For further information on the anatomy files, see the Display the anatomy tutorial.

As part of the MRI segmentation pipeline with CAT12, the anatomy data was normalized to MNI space, and several anatomical parcellations were computed. These parcellations can be used to create volume and surface scouts, which will be used later in this tutorial to perform the coherence analysis at the source level.

Additional MNI parcellation templates for defining anatomical brain regions can be used in Brainstorm with MNI-normalized MRI anatomy. See MNI parcellations.

Review the MEG and EMG recordings

A new folder is now created in Brainstorm's database explorer and contains:

Display MEG helmet and sensors

Reviewing continuous recordings

Event markers

The colored dots above the data time series indicate event markers (or triggers) saved with this dataset. The trial onset information of the left-wrist and right-wrist trials is saved in an auxiliary channel of the raw data named Stim. To add these markers, these events need to be decoded as follows:

This procedure creates new event markers, now shown in the Events section of the tab along with the previous event categories. In this tutorial, we will only use events U1 through U25, which correspond to how each of the 25 left-wrist trials was encoded in the study. We will now delete the other events of no interest, and merge the left-trial events under a single event category, for convenience.

These events correspond to the beginning of 10-s trials of left-wrist movements. We will compute coherence over 1-s epochs over the first 8 s of each trial. To that purpose, we will now create extra events to define these epochs.
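The epoching logic can be sketched schematically (the actual step is done with Brainstorm's event tools; the onset times below are hypothetical): each trial onset yields eight contiguous 1-s epoch markers.

```python
# Hypothetical trial onset times (s); in the tutorial these come from the
# merged left-trial event category decoded from the Stim channel.
trial_onsets = [12.0, 25.4, 40.1]

epoch_len = 1.0   # epoch duration (s)
n_epochs = 8      # first 8 s of each 10-s trial

# One epoch-onset marker per second, for the first 8 s of each trial
epoch_onsets = [onset + k * epoch_len
                for onset in trial_onsets
                for k in range(n_epochs)]

print(len(epoch_onsets))  # 3 trials x 8 epochs = 24 markers
```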

Pre-process

In this tutorial, we will analyze only the Left trials (left-wrist extensions). In the following sections, we will process only the first 330 s of the recordings, where the left-wrist trials were performed.

Another idiosyncrasy of the present dataset is that the CTF MEG data were saved without the desired 3rd-order gradient compensation for optimal denoising. We will now apply this compensation as follows:

This process creates the SubjectCMC_clean folder, which contains a copy of the channel file and a link to the raw file, Raw | clean. The link points to the original data and indicates that the 3rd-order gradient compensation will be applied. Brainstorm does not create a physical copy of the actual, large dataset at this stage.

Removal of power line artifacts

We will start with identifying the spectral components of power line contamination of MEG and EMG recordings.

A new raw folder named SubjectCMC_clean_notch is created. Estimate the PSD of these signals to appreciate the effect of the notch filters applied. As above, please remember to indicate a Time window restricted from 0 to 330 s in the options of the PSD process.
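The effect of notch filtering can be illustrated outside Brainstorm with a minimal SciPy sketch (the sampling rate, line frequency, and filter settings below are illustrative, not the tutorial's exact values):

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

fs = 1200                       # sampling rate (Hz), assumed for illustration
t = np.arange(0, 5, 1 / fs)
rng = np.random.default_rng(1)

# Synthetic trace: broadband noise plus 50 Hz power-line contamination
x = rng.standard_normal(t.size) + 2 * np.sin(2 * np.pi * 50 * t)

# Cascade of notch filters at the line frequency and its harmonics
y = x.copy()
for f0 in (50, 100, 150):
    b, a = iirnotch(w0=f0, Q=30, fs=fs)  # narrow notch; Q sets the bandwidth
    y = filtfilt(b, a, y)                # zero-phase filtering

# Line-frequency content before/after (projection onto a 50 Hz sinusoid)
ref = np.sin(2 * np.pi * 50 * t)
print(abs(x @ ref) / t.size, abs(y @ ref) / t.size)
```

After the notch cascade, the 50 Hz component is strongly attenuated while the broadband signal is left essentially untouched, which is what the before/after PSD comparison shows in Brainstorm.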

EMG pre-processing

Two typical pre-processing steps for EMG consist of high-pass filtering and rectification.
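These two steps can be sketched in Python with SciPy (the cutoff and sampling rate below are illustrative assumptions; the tutorial's exact values are set in the Brainstorm filter process):

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1200                  # sampling rate (Hz), assumed
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(2)

# Synthetic EMG: high-frequency activity riding on slow movement drift
emg = rng.standard_normal(t.size) + 5 * np.sin(2 * np.pi * 0.5 * t)

# 1) High-pass filter to remove low-frequency drift (illustrative 10 Hz cutoff)
b, a = butter(4, 10, btype="highpass", fs=fs)
emg_hp = filtfilt(b, a, emg)

# 2) Full-wave rectification, which shifts EMG power toward low frequencies
emg_rect = np.abs(emg_hp)

print(emg_hp.mean(), emg_rect.mean())
```

Rectification is what makes the slow modulation of EMG amplitude visible to coherence analysis in the beta band.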

Two new folders SubjectCMC_clean_notch_high and SubjectCMC_clean_notch_high_abs are added to Brainstorm's database explorer. We can now safely delete folders that are not needed anymore:

MEG pre-processing

We need to remove additional artifacts from the MEG traces via:

  1. Detection and removal of stereotypical artifacts with SSP

  2. Detection of noisy (bad) data segments.

Detection and removal of artifacts with SSP (Signal Space Projection)

Stereotypical artifacts such as eye blinks and heartbeats can be identified from their respective characteristic spatial distributions. Their contamination of MEG signals can then be specifically attenuated using Signal-Space Projections (SSPs). For more details, consult the dedicated tutorials about the detection and removal of artifacts with SSP. The present tutorial dataset features an EOG channel but no ECG; we will therefore only perform the removal of eye blinks.

Detection of "bad" data segments:

Here we will use the automatic detection of artifacts to identify data segments contaminated by, e.g., large eye and head movements, and muscle contractions.

We encourage users to review and validate the segments marked with this procedure. In the present case, the detected segments clearly point at contaminated MEG data, and we will now label them as bad.

Importing data epochs

At this point we are finished with the pre-processing of the EMG and MEG recordings. We will now extract and import specific data segments of interest into the Brainstorm database for further derivations. We refer to these segments as epochs or trials. As mentioned previously, we will focus on the Left (wrist) category of events.

A new folder SubjectCMC_clean_notch_high_abs is created for Subject01. It contains a copy of the channel file from the original raw file, and individual trials tagged as Left in a new trial group. Expand the trial group and note there are trials marked with a question mark in a red circle (ICON). These indicate trials that occurred in the bad segments identified in the previous section. All the bad trials are automatically ignored for further processing, whenever dropped into the Process1 and Process2 tabs.

Coherence estimation (sensor level)

We will now compute the magnitude-squared coherence (MSC) between the left EMG signal and the signal from each MEG sensor.

We can now average the MSC across a frequency band of interest (15–20 Hz):

The resulting file mscohere(0.6Hz,555win): EMGlft | tfbands contains only one MSC value per sensor (the MSC average in the 15–20 Hz band). You may visualize the topography of this MSC statistic via 3 possible representations: 3D Sensor cap, 2D Sensor cap and 2D Disk, all accessible via a right-click over the MSC node. We clicked on sensor MRC21 below; it is shown in red.
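The band-averaging step amounts to a simple mean over the frequency bins falling inside the band; a minimal sketch on a made-up spectrum (the 0.6 Hz grid matches the file name above, everything else is synthetic):

```python
import numpy as np

# Hypothetical MSC spectrum for one sensor, on a 0.6 Hz frequency grid
freqs = np.arange(0, 80, 0.6)
rng = np.random.default_rng(3)
msc = rng.uniform(0, 0.1, freqs.size)
msc[(freqs >= 15) & (freqs <= 20)] += 0.4   # stronger beta-band coupling

# Average the MSC within the 15-20 Hz band, yielding one value per sensor
band = (freqs >= 15) & (freqs <= 20)
msc_band = msc[band].mean()
print(round(msc_band, 3))
```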

We can observe higher MSC values between the EMG signal and the MEG sensors over the contralateral set of central sensors in the beta band. Unfortunately, sensor-level connectivity has disadvantages: it is difficult to interpret anatomically, and it is prone to spurious results caused by volume conduction.

In the next section we will compute coherence at the source level. To do so, we first need to estimate the source time series from the sensor data.

MEG source modelling

We will perform source modelling using a distributed model approach with two possible source spaces: the cortex surface and the entire MRI volume. In the surface space, the elementary sources are located on the cortical surface extracted from the participant's MRI. In the volume space, the elementary sources are uniformly distributed across the entire brain volume. Before estimating the brain sources, we need to derive the sensor noise covariance matrix and the head model.

Noise covariance

For MEG, the recommendation is to extract basic noise statistics from empty-room recordings. However, when empty-room recordings are not available, as with this tutorial data, resting-state data can be used as a proxy for the MEG noise covariance. See the noise covariance tutorial for more details.

Head model

The head model, aka forward model, accounts for how neural electrical currents (in a source space) produce magnetic fields captured by sensors outside the head, considering the electromagnetic properties and geometry of head tissues, independently of actual empirical measurements. As the head model depends on the source space, a distinct head model is required for the surface and volume source spaces. Please refer to the head model tutorial for more in-depth explanations.

Surface

The cortical head model will be derived for each of the 15,000 sources (surface vertices) defined in the default cortex surface.

The (ICON) Overlapping spheres (surface) head model now appears in the database explorer.

Volume

The Overlapping spheres (volume) head model is now added to the database explorer. The green color indicates that this is the default head model for the current folder (this can be changed by simply double-clicking on the head model nodes).

Source estimation

Now that the noise covariance and head model(s) are available, we can perform source estimation, i.e., find the brain sources that gave rise to the signals recorded at the sensors. Of the diverse source estimation methods available in Brainstorm, this tutorial uses minimum-norm imaging. The minimum-norm method estimates the combination of currents at each point of the source grid that explains the recorded sensor signals, favouring solutions with minimum energy (L2-norm). The result is a large matrix called the imaging kernel. Multiplying the imaging kernel with the sensor data yields estimates of the brain source time series. A different imaging kernel is derived for each of the head models produced above: surface and volume. See the source estimation tutorial for more details.
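The kernel multiplication is a single matrix product; a shape-only sketch (Brainstorm does this internally in MATLAB; the kernel here is random, purely to show the dimensions involved):

```python
import numpy as np

rng = np.random.default_rng(4)

n_sensors = 151        # CTF MEG channels
n_sources = 45000      # e.g., 15,000 vertices x 3 orientations (unconstrained)
n_times = 240          # samples in a short data segment (illustrative)

# Imaging kernel K (n_sources x n_sensors), as produced by minimum-norm
# estimation; random values stand in for the real kernel.
K = rng.standard_normal((n_sources, n_sensors))
data = rng.standard_normal((n_sensors, n_times))   # sensor recordings

# Source time series are obtained by a single matrix multiplication
sources = K @ data
print(sources.shape)   # (45000, 240)
```

Storing the kernel instead of the full source time series is why Brainstorm source files stay small: sources are recomputed on the fly from the sensor data.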

At each point of the source grid, the estimated current dipole may point in any direction in 3D space. For surface grids only, the dipole orientation can be constrained to be normal to the cortical surface, an approach based on anatomical observations of the brain cortex. The result is a smaller model that is faster to compute and display. A discussion on constrained vs. unconstrained sources is presented here.
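For unconstrained models, the three orientation components at each grid point can be collapsed into a single non-negative time series, for instance by taking the vector norm across components (one common flattening choice; PCA is another). A minimal sketch with synthetic data:

```python
import numpy as np

rng = np.random.default_rng(5)

n_vertices, n_times = 100, 500
# Unconstrained source array: 3 orthogonal dipole components per vertex
sources = rng.standard_normal((n_vertices, 3, n_times))

# Flatten to one time series per vertex via the vector norm across the
# three components
flat = np.linalg.norm(sources, axis=1)
print(flat.shape)   # (100, 500)
```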

Surface

Constrained

Unconstrained

Volume

Coherence estimation (source level)

Surface

Volume

Comparison

[TODO] Present the results: compare the 3 maps and explain how to find the maximum point. Note: with unconstrained sources and the volume model, the sources are 3-dimensional. Coherence is computed between the reference signal (EMGlft) and each of the three components; to plot a single coherence value, the three are flattened as PROCEDURE.

Alternatively, we could first flatten the unconstrained sources and then compute coherence; however, this is not the best approach. [TODO] Show the results for the PCA flattening.

[TODO] Add the results and recommendations here.

As mentioned, there is a coherence spectrum for each of the 15,000 sources, which is impractical to review. For that reason, the analysis is often performed on regions of interest (ROIs). How to best define ROIs is an open problem that is not solved here.
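Reducing sources to ROIs amounts to applying an aggregation (scout) function over the vertices of each region; a minimal sketch with a mean aggregation (the vertex indices are hypothetical, and Brainstorm also offers other scout functions such as PCA or max):

```python
import numpy as np

rng = np.random.default_rng(6)

n_vertices, n_times = 15000, 500
sources = rng.standard_normal((n_vertices, n_times))  # constrained sources

# A scout is a set of vertex indices; aggregate its sources with a scout
# function, here the mean across vertices
scout_vertices = np.array([10, 11, 42, 43, 44])       # hypothetical ROI
scout_ts = sources[scout_vertices].mean(axis=0)

print(scout_ts.shape)   # one time series per scout: (500,)
```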

Coherence estimation (Scout level)

Here we will work only with the unconstrained surface data. We will use the xxx atlas for our calculations.

[TODO] Describe the process options: EMGlft and sources, selected scouts, and applying the scout function before vs. after the coherence computation. Explain the difference between both approaches and their advantages regarding processing. If possible, add diagrams.

Connectivity NxN (connectome)

[TODO] Describe how to perform NxN connectivity across scouts. This should not be done at the level of individual sources, as it leads to a very large matrix that will not fit in RAM.
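To see why source-wise NxN is impractical, consider the size of a full 15,000 x 15,000 coherence matrix stored for every frequency bin (the bin count below is illustrative):

```python
n_sources = 15000
n_freqs = 132                      # e.g., ~0-80 Hz on a 0.6 Hz grid (illustrative)
bytes_per_value = 8                # double-precision float

gib = n_sources**2 * n_freqs * bytes_per_value / 2**30
print(f"{gib:.0f} GiB")            # hundreds of GiB: far beyond typical RAM
```

With a few tens of scouts instead, the same NxN matrix shrinks by roughly six orders of magnitude.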

Also, we need to define several parameters in our experiments:

  • source estimation method
  • source estimation space: surface or volume
  • if surface: constrained or unconstrained orientations
  • how to select the scouts
  • how to perform the aggregation across scouts (before or after)
  • which function to use for the aggregation

All those are open questions that are not addressed in this tutorial nor in the literature.





MEG source imaging

We will perform source modelling using a distributed model approach for two possible source spaces: the cortex surface and the entire MRI volume.

For the cortical source model, the locations of the sources are constrained to the cortical surface obtained from the participant's anatomical MRI, as imported above. For the MRI volume approach, elementary sources are uniformly distributed across the entire brain volume. Before estimating the brain sources, we need to derive a head model and estimate the noise covariance of the recordings. Note that a distinct head model is required for each source space tested here (cortical vs. volume).

Head modelling

The head model accounts for how neural electrical currents produce magnetic fields captured by sensors outside the head, considering the electromagnetic properties and geometry of head tissues, independently of actual empirical measurements. Please refer to the head model tutorial for more in-depth explanations.

Cortical surface

The cortical head model will be derived for each of the 15,000 sources (surface vertices) defined when the subject anatomy was imported.

The (ICON) Overlapping spheres (surface) head model now appears in the database explorer.

MRI volume source model

The Overlapping spheres (volume) node is now added to the database explorer. The green color indicates that this is the default head model for the current folder (this can be changed by simply double-clicking on the head model nodes).

Noise covariance

Basic noise statistics can be extracted (as recommended) from empty-room recordings. However, such recordings are not available with the present tutorial data. Instead, we will assume that data segments prior to wrist extension can be used as proxies for the MEG noise covariance. See the noise covariance tutorial for more details.

Source estimation

Now that the head model(s) and noise covariance are available, we will use the minimum-norm imaging approach to solve the so-called inverse problem of source estimation. This produces a large matrix called the imaging kernel, which is multiplied with the sensor data to produce estimates of the brain source time series. A different imaging kernel is derived for each of the head models produced above: surface and volume. See the source estimation tutorial for more details.

Cortical surface

The inversion kernel (ICON) MN: MEG (surface)(Unconstr) 2018 is now available in the database explorer.

MRI volume

The imaging kernel (ICON) MN: MEG (volume)(Unconstr) 2018 is now also available in the database explorer.

Note that now each trial is associated with two source link (ICON) nodes. One corresponds to the MN: MEG (surface)(Unconstr) 2018 kernel and the other to the MN: MEG (volume)(Unconstr) 2018 kernel.

Defining "scouts" (regions of interest)

To contain the computational cost, we will now restrict the computation of coherence with the left EMG signal to selected MEG source activity. We will define regions of interest, which are known as scouts in Brainstorm. They can be defined as surface or volume scouts.

Surface scouts

[TODO] Add a note that the definition of scouts is far from perfect, but can give us a good idea of the surface projections of the MNI parcellations (described in the importing anatomy section).

Volume scouts

Estimating source-wise coherence

Surface scouts

We will use the search features of the Brainstorm database to select the source maps we want to include in the EMG-cortex coherence estimation. We will first select files related to the MN: MEG (surface)(Unconstr) 2018 imaging kernel.

After pressing Search, a new tab called (surface) appears in the database explorer. This new tab contains the data filtered through the search query above.

Open the Pipeline editor:

The large peak at 14.65 Hz in the coherence spectra corresponds to the Precentral R scout, which encompasses the right primary motor cortex, as expected.

Volume scouts

We will repeat the same approach for volume scouts. Initiate a file selection query related to the MN: MEG (volume)(Unconstr) 2018 imaging kernel.

Open the Pipeline editor:

Here too, the coherence peak at 14.65 Hz corresponds to the Precentral R scout, which encompasses the right primary motor cortex, as expected.

Advanced

Brain-wise coherence estimation (no scouts)

  • We could downsample the surface and create a sparser volume grid, OR
  • Refactor the coherence process to accumulate the auto- and cross-spectra outside of the function, OR ...

Comparison of cortical surface with FieldTrip and CAT

[TO DISCUSS among authors] This image and GIF are just for reference. They were obtained with all the surface sources, using FieldTrip- and CAT-derived surfaces.

Comparison for 14.65 Hz

ft_vs_cat.png

Sweeping from 0 to 80 Hz

ft_vs_cat.gif

Advanced

Script

[TO DO] Once we agree on all the steps above.

Additional documentation

Articles

Tutorials

Forum discussions

[TO DO] Find relevant Forum posts.







Tutorials/CorticomuscularCoherence (last edited 2022-01-27 00:14:01 by RaymundoCassani)