Corticomuscular coherence (CTF MEG)

[TUTORIAL UNDER DEVELOPMENT: NOT READY FOR PUBLIC USE]

Authors: Raymundo Cassani, Francois Tadel & Sylvain Baillet.

Corticomuscular coherence measures the degree of similarity between electrophysiological signals (MEG, EEG, ECoG sensor traces or source time series, especially over the contralateral motor cortex) and the EMG signal recorded from muscle activity during voluntary movement. This signal similarity is due mainly to the descending communication along corticospinal pathways between primary motor cortex (M1) and muscles. For consistency and reproducibility purposes across major software toolkits, the present tutorial replicates the processing pipeline "Analysis of corticomuscular coherence" by FieldTrip.

Dataset description

The dataset is distributed as part of the FieldTrip tutorial Analysis of corticomuscular coherence:

Corticomuscular coherence:

Download and installation

Requirements

Download the dataset

Brainstorm

Importing anatomy

MEG and EMG recordings

MEG-MRI coregistration

Reviewing continuous recordings

Event markers

The colored dots above the data time series indicate event markers (or triggers) saved with this dataset. The trial onset information for the left-wrist and right-wrist trials is saved in an auxiliary channel of the raw data named Stim. To add these markers to the recordings, the trigger channel needs to be decoded as follows:
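Conceptually, decoding a trigger channel amounts to detecting value transitions and turning them into labeled events. The MATLAB sketch below illustrates this idea on a synthetic trigger channel; the variable names and the 1200 Hz sampling rate are assumptions for the example, not the Brainstorm implementation.

    % Sketch only: turn a digital trigger channel into event markers.
    % 'stim' and 'fs' are synthetic placeholders, not the tutorial data.
    fs   = 1200;                                   % sampling rate (Hz), assumed
    stim = [zeros(1,100) ones(1,50) zeros(1,100) 2*ones(1,50) zeros(1,100)];

    d        = diff([0 stim]);                     % changes of the trigger value
    onsetIdx = find(d > 0);                        % samples where the trigger rises
    values   = stim(onsetIdx);                     % trigger codes (e.g., left/right)
    onsetSec = (onsetIdx - 1) / fs;                % onset latencies in seconds

    for k = 1:numel(onsetIdx)
        fprintf('Event value %d at %.3f s\n', values(k), onsetSec(k));
    end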

Pre-processing

In this tutorial, we will analyze only the Left trials (left-wrist extensions). In the following sections, we will process only the first 330 s of the recordings, where the left-wrist trials were performed.

Power line artifacts

EMG pre-processing

Two typical pre-processing steps for EMG are high-pass filtering and rectification.
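As an illustration only (not the exact Brainstorm processes), these two steps could be written as follows in MATLAB; the 1200 Hz sampling rate, the 10 Hz cutoff and the synthetic signal are example values.

    % Illustration: high-pass filter and rectify an EMG trace (synthetic data).
    fs     = 1200;                                  % sampling rate (Hz), assumed
    t      = 0:1/fs:5;                              % 5 s of EMG-like data
    emgRaw = randn(size(t)) + 0.5*sin(2*pi*0.3*t);  % noise + slow drift

    [b, a]  = butter(4, 10/(fs/2), 'high');         % 4th-order Butterworth, 10 Hz high-pass
    emgHp   = filtfilt(b, a, emgRaw);               % zero-phase filtering
    emgRect = abs(emgHp);                           % full-wave rectification

    plot(t, emgRect);
    xlabel('Time (s)'); ylabel('Rectified EMG (a.u.)');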

MEG pre-processing

Stereotypical artifacts such as eye blinks and heartbeats can be identified from their characteristic spatial distributions. Their contamination of the MEG signals can then be attenuated specifically using Signal-Space Projections (SSP). For more details, consult the dedicated tutorials about the detection and removal of artifacts with SSP. The present tutorial dataset features an EOG channel but no ECG. We will therefore only perform the removal of eye blinks.
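The principle behind SSP can be sketched in a few lines: estimate the spatial pattern of the blink from data segments around the detected blinks, then project that spatial direction out of every time sample. The MATLAB sketch below uses synthetic data and is only conceptual; it is not the Brainstorm implementation.

    % Conceptual SSP sketch on synthetic data (not the Brainstorm code).
    rng(0);
    nChan = 20;  nTime = 2000;
    blinkTopo = randn(nChan, 1);                    % blink spatial pattern (unknown in practice)
    blinkTc   = zeros(1, nTime);
    blinkTc([500:520, 1500:1520]) = 5;              % two simulated "blinks"
    data      = randn(nChan, nTime) + blinkTopo * blinkTc;

    % 1) Estimate the artifact subspace from the detected blink segments
    blinkIdx  = [500:520, 1500:1520];
    [U, ~, ~] = svd(data(:, blinkIdx), 'econ');
    Ua        = U(:, 1);                            % dominant spatial component

    % 2) Build the projector and apply it to all the recordings
    P     = eye(nChan) - Ua * Ua';                  % removes the blink spatial direction
    clean = P * data;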

Detection of "bad" data segments

Here we will use the automatic detection of artifacts to identify data segments contaminated by, for example, large eye or head movements and muscle contractions.

Importing data epochs

We are done with the pre-processing of the EMG and MEG recordings. We will now extract and import specific data segments of interest into the Brainstorm database for further derivations. As mentioned previously, we will focus on the Left category of events (left-wrist movements). To follow the same pipeline as the FieldTrip tutorial, we will consider 8 seconds of recordings after each trigger (out of the 10 s of each trial) and split them into epochs of 1 second. In addition, the DC offset is removed, for the MEG signals only.

pro_import.png

pro_remove_dc.png
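In pseudo-MATLAB terms, the epoching performed here can be sketched as follows; the sampling rate, channel layout and trigger latencies are made up for the example and do not come from the tutorial data.

    % Sketch: cut 8 one-second epochs after each trigger and remove the
    % DC offset from the MEG channels only (synthetic data and indices).
    fs      = 1200;                        % sampling rate (Hz), assumed
    nChan   = 10;                          % pretend channels 1:8 are MEG, 9:10 are EMG
    isMeg   = [true(1,8) false(1,2)];
    data    = randn(nChan, 60*fs);         % 60 s of synthetic recordings
    trigSmp = [5 20 35] * fs;              % trigger onsets (samples), made up

    epochs = {};
    for iTrig = 1:numel(trigSmp)
        for iSec = 0:7                     % 8 one-second epochs per trial
            smp = trigSmp(iTrig) + iSec*fs + (1:fs);
            ep  = data(:, smp);
            ep(isMeg, :) = ep(isMeg, :) - mean(ep(isMeg, :), 2);   % DC removal, MEG only
            epochs{end+1} = ep;
        end
    end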

Comparison with FieldTrip

The figures below show the EMG and MRC21 channels (sensor over the right motor cortex) for epoch #1.1, in Brainstorm (left) and in the FieldTrip tutorial (right).

bst_ft_trial1.png

Coherence 1xN (sensor level)

Let's compute the magnitude-squared coherence (MSC) between the left EMG and the MEG channels.

coh_meg_emgleft.png

coh_meg_emgleft2.png
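For reference, the magnitude-squared coherence between two signals x and y is MSC(f) = |Sxy(f)|^2 / (Sxx(f) Syy(f)), where Sxy is the cross-spectral density and Sxx, Syy are the power spectral densities. Below is a minimal MATLAB sketch with synthetic signals; the window length, overlap and sampling rate are example values, not the options used in Brainstorm.

    % Magnitude-squared coherence between a synthetic "EMG" and "MEG" channel.
    fs = 1200;
    t  = 0:1/fs:60;
    common = sin(2*pi*15*t);                       % shared ~15 Hz rhythm
    emg = 0.5*common + randn(size(t));
    meg = 0.3*common + randn(size(t));

    winLen   = 2*fs;                               % 2 s Hann windows, 50% overlap
    [msc, f] = mscohere(emg, meg, hann(winLen), winLen/2, winLen, fs);

    plot(f, msc); xlim([0 60]);
    xlabel('Frequency (Hz)'); ylabel('Magnitude-squared coherence');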

Source estimation

MRI segmentation

In order to estimate the brain sources for these MEG recordings, we first need to reconstruct the cortex surface from the T1 MRI imported at the beginning of this tutorial. For this purpose, we use CAT12, because it is fast (30-60 min) and fully integrated with Brainstorm as a plugin.

Head model: Surface

We will perform source modeling using a distributed model approach for two different source spaces: the cortex surface and the entire MRI volume. The forward model, labelled head model in Brainstorm, describes how neural electrical currents produce the magnetic fields captured by sensors outside the head, taking into account the geometry and electromagnetic properties of the head tissues, independently of actual empirical measurements (more information). As the head model depends on the source space, a distinct head model is required for the surface and volume source spaces: we will compute them both now.

Head model: Volume

The Overlapping spheres (volume) head model is now added to the database explorer. The green color indicates that this is the default head model for the current folder (this can be changed by double-clicking the head model nodes).

Noise covariance

For MEG, the recommendation is to extract basic noise statistics from empty-room recordings. However, when empty-room recordings are not available, as is the case with this tutorial dataset, resting-state data can be used as a proxy for the MEG noise covariance. See the noise covariance tutorial for more details.

Source estimation

Now that the noise covariance and head model(s) are available, we can perform source estimation, i.e., estimate the brain sources that gave rise to the signals recorded at the sensors. Among the source estimation methods available in Brainstorm, this tutorial uses the minimum-norm imaging method. The minimum-norm method estimates the distribution of currents over the source grid that explains the recorded sensor signals while favouring minimum-energy (L2-norm) solutions. The result is a large matrix called the imaging kernel. By multiplying the imaging kernel with the sensor data, we obtain estimates of the source time series. A different imaging kernel is derived for each of the head models produced above: surface and volume. See the source estimation tutorial for more details.
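As a rough illustration of the idea (not Brainstorm's actual implementation, which includes noise whitening, depth weighting and other refinements), a minimum-norm imaging kernel W can be written as W = R G' (G R G' + lambda C)^-1, where G is the head model (gain) matrix, C the noise covariance, R a source covariance prior and lambda a regularization parameter. The MATLAB sketch below uses random placeholder matrices.

    % Toy minimum-norm sketch with random placeholder matrices.
    nChan = 50;  nSrc = 200;  nTime = 1000;
    G       = randn(nChan, nSrc);      % head model: source currents -> sensor fields
    C       = eye(nChan);              % noise covariance (identity for the sketch)
    R       = eye(nSrc);               % source covariance prior
    lambda2 = 0.1;                     % regularization parameter (example value)

    W = R * G' / (G * R * G' + lambda2 * C);   % imaging kernel [nSrc x nChan]

    data    = randn(nChan, nTime);     % sensor recordings
    sources = W * data;                % estimated source time series [nSrc x nTime]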

Each dipole in the source grid may point arbitrarily in any direction in 3D space.
For surface grids only, the dipole orientation can be fixed to be normal to the cortical surface; this approach is based on anatomical observations of the brain cortex. The result is a smaller model that is faster to compute and display.
A discussion on constrained vs unconstrained sources is presented here.

Surface

Here we will estimate the sources in the surface space for constrained (normal to the cortex) and unconstrained dipole orientations.

pro_sources_srfc.png

pro_sources_srfu.png

The imaging kernels MN: MEG (surface)(Constr) 2018 and MN: MEG (surface)(Unconstr) 2018 are now available in the database explorer.

Volume

To compute the imaging kernel for the volume source space:

At this point, the imaging kernel MN: MEG (volume)(Unconstr) 2018 is also available in the database explorer.


Note that each trial is associated with three source link nodes, one for each of the imaging kernels obtained above.

The source link nodes are not files containing the source time series. Instead, a link instructs Brainstorm to load the corresponding MEG recordings, load the respective imaging kernel, and multiply the two on the fly to generate the source time series. This approach saves space on the hard drive.
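The on-the-fly multiplication amounts to a single matrix product, sketched below with placeholder sizes; the channel count and the epoch length are assumptions, while 15002 is the number of surface vertices mentioned later in this tutorial.

    % What a source "link" does, conceptually: store only the kernel and
    % rebuild the source time series on demand (placeholder dimensions).
    nChan = 151;  nSrc = 15002;  nTime = 1200;   % e.g., one 1-s epoch
    W = randn(nSrc, nChan);                      % imaging kernel (stored once, shared by epochs)
    F = randn(nChan, nTime);                     % MEG recordings of one epoch

    srcTimeSeries = W * F;                       % [nSrc x nTime], computed on the fly

    % Disk usage: nSrc*nChan values for the shared kernel, instead of
    % nSrc*nTime values per epoch if the full source maps were saved.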

Coherence 1xN (source level)

Once we have computed the time series for the sources, it is time to compute coherence between the EMG signal and brain sources obtained with each of the imaging kernels.

Let's start with sources from the MN: MEG (surface)(Constr) kernel:

This will create a new tab in the database explorer. This new tab contains only the files that match the search criteria.

Open the Pipeline editor:

pro_coh_srf.png

pro_coh_srf2.png

Results (Surface)

Double-click the 1xN connectivity files for the (surface) source space to show the results on the cortex. If you are not familiar with the options in the cortex figures, check the tutorial Display: Cortex surface.

Find the location and frequency with the highest coherence value.

The highest coherence value is located on the right primary motor cortex (precentral gyrus), around 14.65 Hz, for both the constrained and the unconstrained sources. Set the amplitude threshold to 0% to see the extent of the high coherence values.

res_coh_srfc.png

res_coh_srfu.png

MSC @ 14.65Hz (surface)(Constr)

MSC @ 14.65Hz (surface)(Unconstr)

We observe that the results obtained with constrained and unconstrained sources agree on the location and frequency of the peak coherence. The main difference is that the coherence values obtained with unconstrained sources appear smoother; this is caused by the maximum aggregation performed across directions, explained in detail in the section Coherence with constrained and unconstrained sources below.

Finally, right-click on any of the cortex figures and select Get coordinates. Then click on the right motor cortex with the crosshair cursor that appears. The SCS coordinates will be useful in the next section.
SCS coordinates: X: 39.2, Y: -22.3, Z: 113.0

res_get_coordinates.png

Results (Volume)

Double-click the 1xN connectivity file for the (volume) source space. Note that this time the results are shown in the MRI viewer rather than the cortical surface.

res_coh_vol.png

MSC @ 14.65Hz (volume)(Unconstr)

We note that the results obtained with constrained (surface) and unconstrained (surface and volume) sources all agree on the location and frequency of the peak coherence. Moreover, they agree with our hypothesis, with previous results in the literature [REFS], and with the results presented in the FieldTrip tutorial.

Coherence with constrained and unconstrained sources

For constrained sources, each vertex in the source grid is associated with ONE time series. As such, when coherence is computed with the EMG signal (also one time series), the result is ONE coherence spectrum per vertex. In other words, for each frequency bin there is one coherence brain map.

https://neuroimage.usc.edu/brainstorm/Tutorials/CorticomuscularCoherence?action=AttachFile&do=get&target=diagram_1xn_coh_constr.png

In the case of unconstrained sources, each vertex in the grid is associated with THREE time series, corresponding to the X, Y and Z directions. Thus, when coherence is computed with the EMG signal (one time series), the result is THREE coherence spectra per vertex. As this complicates the representation on the brain, these THREE coherence spectra are flattened into one: for each vertex and each frequency bin, the maximum across directions is kept, resulting in one coherence spectrum per vertex.

https://neuroimage.usc.edu/brainstorm/Tutorials/CorticomuscularCoherence?action=AttachFile&do=get&target=diagram_1xn_coh_unconstr.png
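In MATLAB terms, this maximum aggregation across directions reduces to a max over the second dimension of a [vertices x directions x frequencies] array; a minimal sketch with toy numbers:

    % Collapse the three per-direction coherence spectra of unconstrained
    % sources into one spectrum per vertex (toy data).
    nVert = 100;  nFreq = 50;
    cohXYZ = rand(nVert, 3, nFreq);          % coherence EMG x {X,Y,Z}, per vertex and frequency
    cohMax = squeeze(max(cohXYZ, [], 2));    % [nVert x nFreq]: max across directions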

An alternative approach found in the literature, which addresses the 3-dimensional nature of unconstrained sources, consists in flattening the X, Y and Z time series of each vertex before the coherence computation, which leads back to the same situation as with constrained sources (REF). Common flattening methods include PCA (only the first component is kept) and the (Euclidean) norm. This flattening of the time series can be performed in Brainstorm with the process Sources > Unconstrained to flat map.

https://neuroimage.usc.edu/brainstorm/Tutorials/CorticomuscularCoherence?action=AttachFile&do=get&target=diagram_1xn_coh_flattened.png
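For one vertex, the two flattening strategies can be sketched as follows in MATLAB; the time series are synthetic, and pca requires the Statistics and Machine Learning Toolbox.

    % Two common ways to flatten the X/Y/Z time series of one unconstrained
    % vertex before computing coherence (synthetic data).
    nTime = 1200;
    xyz = randn(3, nTime);                    % X, Y, Z source time series of one vertex

    % (a) Euclidean norm across directions, sample by sample
    flatNorm = sqrt(sum(xyz.^2, 1));

    % (b) First principal component (dominant orientation)
    [coeff, score] = pca(xyz');               % observations in rows
    flatPca = score(:, 1)';                   % projection onto the first component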

  • Flattened sources are saved as full source files rather than as recordings + imaging kernel.
  • We have tested this flattening approach with simulations and with the real data from this tutorial, and we found detrimental effects on the expected results.

Coherence 1xN (scout level)

So far, we have computed coherence at the source level, i.e., one coherence spectrum for each of the 15002 source points. This large dimensionality hinders further analysis of the results. A common strategy is therefore to reduce the dimensionality of the source space with a surface or volume parcellation scheme; in Brainstorm jargon, this is an atlas made of scouts. See the scout tutorial for detailed information on atlases and scouts in Brainstorm.

Under this approach, instead of providing one result (coherence spectrum) per source vertex, one result is computed for each scout. When computing coherence (or other connectivity metrics) at the scout level, it is necessary to provide two parameters that define how the data is aggregated within each scout: the scout function, and whether it is applied before or after the coherence computation:

https://neuroimage.usc.edu/brainstorm/Tutorials/CorticomuscularCoherence?action=AttachFile&do=get&target=diagram_1xn_coh_sct_bef.png

https://neuroimage.usc.edu/brainstorm/Tutorials/CorticomuscularCoherence?action=AttachFile&do=get&target=diagram_1xn_coh_sct_aft.png

As can be seen, the After option takes longer and uses more resources, as it computes the coherence spectrum for each vertex in the scouts and only then aggregates the coherence spectra.
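The difference between the two options can be sketched for a single scout with the mean scout function, as below; the data are synthetic and the window parameters are example values, not the Brainstorm settings.

    % One scout, 'mean' as scout function: aggregate Before vs. After.
    fs = 1200;  nTime = 60*fs;  nVert = 40;  winLen = 2*fs;
    emg       = randn(1, nTime);
    scoutVert = randn(nVert, nTime);                % source time series of the scout vertices

    % BEFORE: average the vertex time series, then compute ONE coherence spectrum
    scoutTs        = mean(scoutVert, 1);
    [cohBefore, f] = mscohere(emg, scoutTs, hann(winLen), winLen/2, winLen, fs);

    % AFTER: compute one coherence spectrum per vertex, then average the spectra
    cohAll = zeros(nVert, winLen/2 + 1);
    for iV = 1:nVert
        cohAll(iV, :) = mscohere(emg, scoutVert(iV, :), hann(winLen), winLen/2, winLen, fs);
    end
    cohAfter = mean(cohAll, 1);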


Let's now compute coherence at the scout level, using mean as the scout function together with the Before option. We will use the Schaefer 100 parcellation atlas on the results obtained from constrained sources.

Open the Pipeline editor:

pro_coh_srfc_bef_sct.png

pro_coh_srfc_bef_sct2.png

Double-click the file mscohere(0.6Hz,555win): EMGlft x 102 scouts, mean before | (surface)(Constr) to open it. This time the coherence spectra are not displayed on the cortex, but plotted for each scout. Moreover, the 1xN connectivity file can also be displayed as an image.

res_coh_srfc_bef_sct.png

res_coh_srfc_bef_sct2.png

Note that at 14.65 Hz, the two highest peaks correspond to the SomMotA_4 R and SomMotA_2 R scouts, both located over the right primary motor cortex.

The choice of an optimal parcellation scheme for the source space is not easy.
The optimal choice is a parcellation based on anatomy, for example the Brodmann parcellation.
In Brainstorm, these atlases are imported as scouts (cortical regions of interest) and saved directly in the surface files, as explained in this tutorial.

BRAINSTORM TEAM
Due to the current implementation of bst_connectivity, the full source maps for all trials (185) are loaded in memory. Thus, to replicate the After option for the (surface)(Constr) case alone, ~30 GB of RAM are needed! (Unconstrained sources take 3 times that.)

Coherence NxN (scout level)

In the previous sections, we computed coherence between a reference signal (EMG) and sources (or scouts). However, depending on the experimental setup and hypotheses, we may want to study the brain as a network, that is, to compute the NxN connectivity. While computing the NxN connectivity between all sources is possible in theory, the large number of sources (often several thousands) makes it impractical. A common approach is therefore to use scouts and estimate the functional connections among them, resulting in a connectome.

Diverse studies have shown some overlap between connectomes derived from electrophysiological signals (MEG/EEG) and those derived from fMRI, which is reasonably expected as both arise from the same underlying biological system. However, due to their nature, electrophysiological connectomes provide unique insights into how functional communication is implemented in the brain (Sadaghiani et al., 2022).

As with the 1xN coherence with scouts, we need to define the scout function and when it is applied. Note that computing coherence between two unconstrained sources leads to 9 (3x3) coherence spectra.

Scout function before: https://neuroimage.usc.edu/brainstorm/Tutorials/CorticomuscularCoherence?action=AttachFile&do=get&target=diagram_nxn_coh_sct_bef.png

Scout function after: https://neuroimage.usc.edu/brainstorm/Tutorials/CorticomuscularCoherence?action=AttachFile&do=get&target=diagram_nxn_coh_sct_aft.png

As in the 1xN coherence with scouts, the After option takes longer and uses far more resources, as it computes the coherence spectrum for every pair of vertices across all the scouts, leading to arrays as large as [sources x sources x frequency bins]!


The result is an NxN connectivity file, containing (scouts x scouts) coherence spectra.

diagram_nxn_coh_sct_end.png

Such a visualization is not practical; instead, the connectivity graph or the adjacency matrix is displayed for one frequency bin of the coherence spectra at a time. For more details, see the connectivity graph tutorial.

A coherence NxN example can be found in the main connectivity tutorial.

Advanced

Scripting

Additional documentation

Articles

Tutorials

Forum discussions

[TODO] Find relevant Forum posts.




