<<HTML(<style>.backtick {font-size: 16px;}</style>)>><<HTML(<style>abbr {font-weight: bold;}</style>)>> <<HTML(<style>em strong {font-weight: normal; font-style: normal; padding: 2px; border-radius: 5px; background-color: #EEE; color: #111;}</style>)>> = Corticomuscular coherence (MEG) =

[[https://en.wikipedia.org/wiki/Corticomuscular_coherence|Corticomuscular coherence]] measures the degree of similarity between electrophysiological signals (MEG, EEG, ECoG sensor traces or source time series, especially over the contralateral motor cortex) and the EMG signal recorded from muscle activity during voluntary movement. This similarity is due mainly to the descending communication along corticospinal pathways between the primary motor cortex (M1) and the muscles. For consistency and reproducibility across major software toolkits, the present tutorial replicates the processing pipeline "[[https://www.fieldtriptoolbox.org/tutorial/coherence/|Analysis of corticomuscular coherence]]" by FieldTrip.
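Magnitude-squared coherence between two signals x and y is |Sxy|²/(Sxx·Syy), estimated from windowed spectral averages. The sketch below illustrates the quantity this tutorial computes, using SciPy on simulated signals: the shared 17.58 Hz component mirrors the beta-band peak reported later in the tutorial, but the data, noise levels and sampling are all made up for illustration — this is not Brainstorm code nor the tutorial dataset.

```python
import numpy as np
from scipy.signal import coherence

# Simulate 330 s of a "MEG" and an "EMG" signal sharing a beta-band drive
# (illustrative assumption; real corticomuscular coupling is much weaker).
fs = 1200                              # Hz, CTF sampling rate in this dataset
t = np.arange(0, 330, 1 / fs)
rng = np.random.default_rng(0)
beta = np.sin(2 * np.pi * 17.58 * t)   # shared cortical drive near 18 Hz
meg = beta + 2.0 * rng.standard_normal(t.size)
emg = 0.5 * beta + 2.0 * rng.standard_normal(t.size)

# Welch-averaged magnitude-squared coherence, 1-s windows (1 Hz resolution)
freqs, msc = coherence(meg, emg, fs=fs, nperseg=fs)
f_peak = freqs[np.argmax(msc)]         # expected near the shared 17.58 Hz
```

Away from the shared frequency, the estimated coherence stays near its bias floor (roughly 1 over the number of averaged windows).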
* '''Requirements''': Please make sure you have completed the [[Tutorials|get-started tutorials]] and that you have a working copy of Brainstorm installed on your computer. In addition, you need to [[Tutorials/SegCAT12#Install_CAT12|install the CAT12 plugin]] in Brainstorm. [[http://www.neuro.uni-jena.de/cat/index.html|CAT12]] will be used for MRI segmentation.
== Importing and processing anatomy data ==

* Right-click on the newly created '''TutorialCMC''' node in your Brainstorm data tree, then: '''''New subject > Subject01'''''.<<BR>>Keep the default options defined for the study (aka "protocol" in Brainstorm's jargon).
* Switch to the '''Anatomy''' view of the protocol.
* This will open the '''MRI viewer''' showing the coronal, sagittal and axial views of the MRI. In addition, [[CoordinateSystems|three anatomical fiducials]]: left and right pre-auricular points (LPA and RPA), and nasion (NAS), are automatically identified. These fiducials are located near the left/right ears and just above the nose, respectively. Click on '''Save'''.

. [[https://neuroimage.usc.edu/brainstorm/Tutorials/CorticomuscularCoherence?action=AttachFile&do=get&target=mri_viewer.png|{{attachment:mri_viewer.png|https://neuroimage.usc.edu/brainstorm/Tutorials/CorticomuscularCoherence?action=AttachFile&do=get&target=mri_viewer.png}}]]

We then need to segment the head tissues to obtain the surfaces required to derive a realistic MEG [[Tutorials/HeadModel|head model (aka "forward model")]]. Here, we will perform the [[Tutorials/SegCAT12|MRI segmentation with CAT12]]; this process takes between 30 and 60 minutes.

* Right-click on the '''SubjectCMC''' MRI node, then '''''MRI segmentation > CAT12: Cortex, atlases, tissues'''''. This will prompt a series of windows to set the parameters for the MRI segmentation. Use the following parameters:
 * '''Number of vertices on the cortex surface''': `15000`
 * '''Compute anatomical parcellations?''': `Yes`
 * '''Compute cortical maps''': `Yes`

. {{attachment:cat12.png||width="100%"}}

Once finished, multiple atlases or anatomical parcellations (ICON) will appear in the database explorer, alongside surfaces for the head (head mask), white matter, cortex (pial envelope) and the midpoint between these last two. The default surfaces are indicated in green. You can display the surfaces by double-clicking on these new nodes. For further information on the anatomy files, see the [[Tutorials/ExploreAnatomy|Display the anatomy tutorial]].
As part of the MRI segmentation pipeline with CAT12, the anatomy data was normalized to MNI space, and several anatomical parcellations were computed. These parcellations can be used to create [[Tutorials/TutVolSource#Volume_atlases|volume]] and [[Tutorials/Scouts|surface scouts]], which will be used later in this tutorial to perform the coherence analysis at the source level.

{{{#!wiki note
Additional '''MNI parcellation templates''' to define anatomical regions of the brain can be used in Brainstorm with MNI-normalized MRI anatomy. See [[Tutorials/DefaultAnatomy#MNI_parcellations|MNI parcellations]].
}}}

== Review the MEG and EMG recordings ==

=== Link the recordings to Brainstorm's database ===

* Switch now to the '''Functional data''' view (X button).
* Select the file format of the current data from the pulldown menu options: '''MEG/EEG: CTF (*.ds; *.meg4; *.res4)'''
A new folder is now created in Brainstorm's database explorer and contains:

* '''SubjectCMC''': a folder that provides access to the MEG dataset. Note the "RAW" tag over the icon of the folder, indicating that the files contain unprocessed, continuous data.
* '''CTF channels (191)''': a node containing '''channel information''': all channel types, names, locations, etc. The number of channels available (MEG, EMG, EOG, etc.) is indicated between parentheses (here, '''191''').
* '''Link to raw file''': a node providing access to '''the original data file'''. All the relevant metadata was read from the dataset and copied inside the node itself (e.g., sampling rate, number of time samples, event markers). Note that Brainstorm's logic is not to import/duplicate the raw unprocessed data directly into the database. Instead, Brainstorm provides a link to that raw file for further review and data extraction ([[Tutorials/ChannelFile#Review_vs_Import|more information]]).
* Right-click on the '''CTF channels (191)''' node, then select '''''Display sensors > CTF helmet''''' and '''''Display sensors > MEG''''' from the contextual menu. This will open a new display window showing the inner surface of the MEG helmet and the MEG sensor locations, respectively. Try [[Tutorials/ChannelFile#Display_the_sensors|additional display menus]].
* Right-click on the '''Link to raw file''' node, then '''''Switch epoched/continuous''''' to convert the file to '''continuous''', a technical detail proper to CTF file formatting.
* Right-click again on the '''Link to raw file''' node, then '''''MEG > Display time series''''' (or double-click on the node). This will open a new visualization window to explore the data time series, and enable the '''''Time''''' panel and the '''''Record''''' tab in the main Brainstorm window (see how to best use the controls in this panel and tab to [[Tutorials/ReviewRaw|explore data time series]]).
* We will also display the EMG traces: right-click on the '''Link to raw file''' node, then '''''EMG > Display time series'''''.
The colored dots above the data time series indicate [[Tutorials/EventMarkers|event markers]] (or triggers) saved with this dataset. The onsets of the left-wrist and right-wrist trials are saved in an auxiliary channel of the raw data named '''Stim'''. To add these markers, the events need to be decoded as follows:

* With the time series figure open, go to the '''''Record''''' tab and select '''''File > Read events from channel'''''. In the options of the '''Read from channel''' process, set '''Event channels''' = `Stim`, select '''Value''', and click '''Run'''.
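Under the hood, decoding events from an auxiliary channel amounts to finding the samples where the channel value changes to a nonzero code, and grouping the onsets by that code. A minimal sketch of the idea, with a made-up `stim` trace (not the actual dataset, nor Brainstorm's implementation):

```python
import numpy as np

fs = 1200                               # Hz, sampling rate of the recordings
stim = np.zeros(10 * fs, dtype=int)     # hypothetical trigger channel
stim[2 * fs: 2 * fs + 100] = 1          # trial coded as value 1 at t = 2 s
stim[6 * fs: 6 * fs + 100] = 2          # trial coded as value 2 at t = 6 s

change = np.flatnonzero(np.diff(stim) != 0) + 1   # samples where value changes
onsets = change[stim[change] != 0]                # keep transitions to nonzero
events = {int(stim[i]): i / fs for i in onsets}   # code -> onset time (s)
```

Each distinct code (here 1 and 2) would become its own event group, as the '''Value''' option does in the process above.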
This procedure creates new event markers, now shown in the '''''Events''''' section of the tab along with the previous event categories. In this tutorial, we will only use events '''U1''' through '''U25''', which correspond to how each of the 25 left-wrist trials was encoded in the study. For convenience, we will now delete the other events of no interest and merge the left trial events under a single event category.

* Delete the other events: select the events to delete in the event list with '''Ctrl+click''', then go to the menu '''''Events > Delete group''''' and confirm. Alternatively, you can select all the events with '''Ctrl+A''' and deselect the '''U1''' to '''U25''' events by clicking on them.
* To make sure we reproduce FieldTrip's tutorial, we need to reject trial #7: select events '''U1''' to '''U6''' and '''U8''' to '''U25''', then from the '''''Events''''' menu select '''''Merge group''''' and type in the new label ('''Left_01''') to describe this as the left-wrist condition.
These events correspond to the beginning of the 10-s trials of left-wrist movements. We will compute coherence over 1-s epochs covering the first 8 s of each trial. To that purpose, we will now create extra events to define these epochs.
* For each copy of the '''Left''' events, we will add a time offset of 1 s for '''Left_02''', 2 s for '''Left_03''', and so on. Select the '''Left_02''' event group, go to the menu '''''Events > Add time offset''''' and enter 1000 (ms) in the text box. Repeat for each of the other groups, entering 2000, then 3000, etc.
* Once done for '''Left_08''', merge all these '''Left*''' events into a single '''Left''' category, and select '''''Save modifications''''' in the '''''File''''' menu of the '''''Record''''' tab.
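The scheme above turns each 10-s trial into eight 1-s epoch onsets by duplicating the trial onset with offsets of 0 to 7 s. The bookkeeping can be sketched as follows (the onset times are made up for illustration):

```python
import numpy as np

# Hypothetical trial onsets (s) for the merged left-wrist event group
trial_onsets = np.array([12.0, 45.0, 78.0])

# Eight 1-s epoch onsets per trial: offsets 0, 1, ..., 7 s from each onset,
# covering the first 8 s of each 10-s trial
epoch_onsets = (trial_onsets[:, None] + np.arange(8)[None, :]).ravel()
```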
In this tutorial, we will analyze only the '''Left''' trials (left-wrist extensions). In the following sections, we will process only the first '''330 s''' of the recordings, where the left-wrist trials were performed.
}}}

Another idiosyncrasy of the present dataset is that the CTF MEG data were saved without the desired 3rd-order gradient compensation for optimal denoising. We will now apply this compensation as follows:
This process creates the '''SubjectCMC_clean''' folder, which contains a copy of the '''channel file''' and a link to the raw file '''Raw | clean''', which points to the original data and indicates that the 3rd-order gradient compensation will be applied. Brainstorm does not create a physical copy of the actual, large dataset at this stage.
=== Removal of power line artifacts ===

We will start by identifying the spectral components of the power line contamination of the MEG and EMG recordings.
* Double-click on the new '''PSD''' file to visualize the power spectral density of the data.<<BR>>
* The PSD plot shows two groups of sensors: the EMG spectra (highlighted in red above) and the MEG spectra below. Peaks at 50 Hz and its harmonics (100 Hz, 150 Hz and above; the European power line frequency and its harmonics) are clearly visible. We will use notch filters to attenuate the power line contaminants at 50, 100 and 150 Hz.
* In the '''''Process1''''' box: drag and drop the '''Raw | clean''' node.
A new '''raw''' folder named '''SubjectCMC_clean_notch''' is created. Estimate the PSD of these signals to appreciate the effect of the notch filters. As above, please remember to restrict the '''Time window''' from 0 to 330 s in the options of the PSD process.
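A notch-filtering step of this kind can be sketched with SciPy. The design below (a cascade of narrow IIR notches at 50, 100 and 150 Hz, Q=30, applied forward-backward for zero phase) is an illustrative assumption, not Brainstorm's exact filter implementation:

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

fs = 1200
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)
x = rng.standard_normal(t.size) + 2 * np.sin(2 * np.pi * 50 * t)  # 50 Hz hum

y = x.copy()
for f0 in (50, 100, 150):
    b, a = iirnotch(w0=f0, Q=30, fs=fs)   # narrow notch centered at f0
    y = filtfilt(b, a, y)                  # zero-phase application

# Power at the 50 Hz DFT bin before/after, to check the attenuation
def bin_power(sig, f0):
    return np.abs(np.fft.rfft(sig)[int(f0 * len(sig) / fs)]) ** 2

attenuated = bin_power(y, 50) < 0.01 * bin_power(x, 50)
```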
=== EMG pre-processing ===

Two typical pre-processing steps for EMG consist in high-pass filtering and rectification.

* In the '''''Process1''''' box: drag and drop the '''Raw | notch(50Hz 100Hz 150Hz)''' node.
Two new folders, '''SubjectCMC_clean_notch_high''' and '''SubjectCMC_clean_notch_high_abs''', are added to Brainstorm's database explorer. We can now safely delete the folders that are not needed anymore:

* Delete '''SubjectCMC_clean_notch''' and '''SubjectCMC_clean_notch_high''' by selecting both before pressing Delete (or right-click '''''File > Delete''''').

=== MEG pre-processing ===

We need to remove more artifacts from the MEG traces via:

 1. '''Detection and removal of stereotypical artifacts with SSP'''
 1. '''Detection of noisy (bad) data segments'''

==== Detection and removal of artifacts with SSP (Signal-Space Projection) ====

Stereotypical artifacts such as eye blinks and heartbeats can be identified from their respective characteristic spatial distributions. Their contamination of the MEG signals can then be attenuated specifically using Signal-Space Projections (SSPs). For more details, consult the dedicated tutorials about the [[Tutorials/ArtifactsDetect|detection]] and [[Tutorials/ArtifactsSsp|removal of artifacts with SSP]]. The present tutorial dataset features an EOG channel but no ECG; we will therefore only perform the removal of eye blinks.

* Display the MEG and EOG time series: right-click on the pre-processed (for EMG) continuous file '''Raw | clean | notch(...''' (in the '''SubjectCMC_clean_notch_high_abs''' folder), then '''''MEG > Display time series''''' and '''''EOG > Display time series'''''.
* Three categories of blink events are created. Review the traces of the EOG channels around a few of these events to ascertain that they are indeed related to eye blinks. In the present case, we note that the '''blink''' group contains genuine eye blinks, and that the groups '''blink2''' and '''blink3''' mostly capture saccades.
* Display the time series and topographies of the first two (dominant) SSP components identified. In the present case, only the first SSP component can be clearly related to blinks. Select only component #1 for removal.
* Follow the same procedure for the other blink events ('''blink2''' and '''blink3'''). As mentioned above, none of the first two SSP components of these groups seems clearly related to ocular artifacts. The figure below shows the first two components for the '''blink2''' group.
. In this case, it is safer to unselect the '''blink2''' and '''blink3''' groups from the '''Select Active Projectors''' panel (see below) rather than to remove spatial components whose nature remains ambiguous.
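Conceptually, an SSP projector is built from the dominant spatial component of artifact-locked data and removes that single spatial dimension from all recordings. A toy sketch of the idea (random data; the dimensions and the plain SVD-based estimation are simplifying assumptions, not Brainstorm's full procedure):

```python
import numpy as np

rng = np.random.default_rng(3)
n_sensors, n_times = 30, 2000
topo = rng.standard_normal(n_sensors)
topo /= np.linalg.norm(topo)                 # fixed "blink" spatial pattern
blink = np.zeros(n_times)
blink[500:600] = 5.0                          # blink time course
data = topo[:, None] * blink[None, :] \
     + 0.1 * rng.standard_normal((n_sensors, n_times))

U, s, Vt = np.linalg.svd(data, full_matrices=False)
u1 = U[:, 0]                                 # component #1: blink topography
P = np.eye(n_sensors) - np.outer(u1, u1)     # rank-1 SSP projector
clean = P @ data                             # project the blink pattern out

# how much of the blink pattern survives after projection (should be ~0)
residual = np.abs(topo @ clean).max() / np.abs(topo @ data).max()
```

Selecting a component in the '''Select Active Projectors''' panel corresponds to including its rank-1 term in the projector; unselecting an ambiguous component leaves that spatial dimension untouched.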
* Click on the large cross at the top right of the main Brainstorm window to close all visualization windows.

==== Detection of "bad" data segments ====

Here we will use the [[Tutorials/BadSegments#Automatic_detection|automatic detection of artifacts]] to identify data segments contaminated by, e.g., large eye and head movements and muscle contractions.

* Display the MEG and EOG time series. In the '''''Record''''' tab, select '''''Artifacts > Detect other artifacts''''' and enter the following parameters:
We encourage users to review and validate the segments marked with this procedure. In the present case, the detected segments clearly point at contaminated MEG data, which we will now label as "bad".

* Select the '''1-7Hz''' and '''40-240Hz''' event groups and select '''''Events > Mark group as bad''''' from the contextual menu. Alternatively, you can rename the events created above and append the '''bad_''' prefix to their name: Brainstorm will automatically discard these data segments from further processing.
* Close all visualization windows and reply "Yes" to the save-modifications query.

== Importing data epochs ==

At this point, we are done with the pre-processing of the EMG and MEG recordings. We will now extract and import specific data segments of interest into the Brainstorm database for further derivations. We refer to these segments as '''epochs''' or '''trials'''. As mentioned previously, we will focus on the '''Left''' (wrist) category of events.
* Enter the following parameter values:
A new folder '''SubjectCMC_clean_notch_high_abs''' is created for '''Subject01'''. It contains a copy of the '''channel file''' from the original raw file, and the individual trials tagged as '''Left''' in a new trial group. Expand the trial group and note that some trials are marked with a question mark in a red circle (ICON). These indicate trials that occurred within the '''bad''' segments identified in the previous section. All bad trials are automatically ignored for further processing whenever they are dropped into the '''''Process1''''' and '''''Process2''''' tabs.
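Epoch extraction with bad-segment rejection can be sketched as follows (the continuous data, onsets and bad interval are all made up for illustration; Brainstorm handles this internally when importing):

```python
import numpy as np

fs = 1200
data = np.random.default_rng(5).standard_normal(20 * fs)  # 20 s, one channel
onsets = [2.0, 5.0, 9.5, 14.0]            # epoch onset times (s), illustrative
bad_segments = [(9.0, 11.0)]              # a bad interval (s)

epochs, rejected = [], []
for t0 in onsets:
    # reject the epoch if the [t0, t0+1] window overlaps any bad segment
    if any(t0 < b1 and t0 + 1.0 > b0 for b0, b1 in bad_segments):
        rejected.append(t0)
        continue
    i0 = int(t0 * fs)
    epochs.append(data[i0: i0 + fs])      # cut a 1-s epoch
epochs = np.array(epochs)                 # (n_good_epochs, n_samples)
```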
== Coherence estimation (sensor level) ==

We will now compute the '''magnitude-squared coherence (MSC)''' between the '''left EMG''' signal and each of the MEG sensor signals.

* In the '''''Process1''''' box, drag and drop the '''Left (192 files)''' trial group. Note that the number between square brackets is '''[185]''', as the 7 '''bad''' trials will be ignored by the MSC process.
* Run the process '''''Connectivity > Coherence 1xN [2021]''''' with the following parameters:
* More details on the '''Coherence''' process can be found in the [[Tutorials/Connectivity#Coherence|connectivity tutorial]].
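For intuition, the MSC estimated from many short epochs can be written out by hand: average the per-epoch spectra and cross-spectra, then form |Sxy|²/(Sxx·Syy). The sketch below uses simulated 1-s epochs and plain FFTs (no windowing or overlap, unlike the actual process, and with made-up signals):

```python
import numpy as np

fs, n_epochs = 1200, 185                   # 185 good epochs, as above
rng = np.random.default_rng(6)
t = np.arange(fs) / fs

Sxx = Syy = Sxy = 0
for _ in range(n_epochs):
    # shared 18 Hz component with a random (but common) phase per epoch
    shared = np.sin(2 * np.pi * 18 * t + rng.uniform(0, 2 * np.pi))
    x = shared + rng.standard_normal(fs)          # toy "MEG" epoch
    y = 0.5 * shared + rng.standard_normal(fs)    # toy "EMG" epoch
    X, Y = np.fft.rfft(x), np.fft.rfft(y)
    Sxx = Sxx + np.abs(X) ** 2                    # auto-spectra, accumulated
    Syy = Syy + np.abs(Y) ** 2
    Sxy = Sxy + X * np.conj(Y)                    # cross-spectrum

msc = np.abs(Sxy) ** 2 / (Sxx * Syy)
freqs = np.fft.rfftfreq(fs, 1 / fs)
f_peak = freqs[np.argmax(msc)]                    # expected at 18 Hz
```

Averaging over many epochs is what makes the estimate meaningful: with a single epoch, the MSC is identically 1 at every frequency.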
Line 305: | Line 296: |
* Double-click on the resulting node '''mscohere(0.6Hz,555win): EMGlft''' to display the MSC spectra. Click on the maximum peak in the 15 to 20 Hz range, and press `Enter` to plot it in a new figure. This spectrum corresponds to channel '''MRC21''', and has its peak at 17.58 Hz. You can also use the frequency slider (below the '''''Time''''' panel) to explore the spectral representations. * Right-click on the spectrum and select '''2D Sensor cap''' for a spatial visualization of the coherence results, alternatively, the short cut `Ctrl-T` can be used. Once the '''2D Sensor cap''' is show, the sensor locations can be displayed with right-click then '''''Channels > Display sensors''''' or the shortcut `Ctrl-E`. |
* Double-click on the resulting node '''mscohere(0.6Hz,555win): EMGlft''' to display the MSC spectra. Click on the maximum peak in the 15 to 20 Hz range, and press `Enter` to plot it in a new figure. This spectrum corresponds to channel '''MRC21''', and shows a large peak at 17.58 Hz. You can also use the frequency slider (under the '''''Time''''' panel) to explore the MSC output more precisely across frequencies. * Right-click on the spectrum and select '''2D Sensor cap''' for a topographical representation of the magnitude of the coherence results across the sensor array. You may also use the shortcut `Ctrl-T`. The sensor locations can be displayed with a right-click and by selecting '''''Channels > Display sensors''''' from the contextual menu (shortcut `Ctrl-E`).
Line 311: | Line 302: |
The results above are based in the identification of single peak, as alternative we can average the MSC in a given frequency band (15 - 20 Hz), and observe its topographical distribution. | We can now average the magnitude of the MSC across a frequency band of interest (15-20 Hz):
Line 319: | Line 310: |
The resulting file '''mscohere(0.6Hz,555win): EMGlft | tfbands''' has only one MSC value for each sensor (the average in the 15-20 Hz band). Thus, it is more useful to display the result in a spatial representation. Brainstorm provides 3 spatial representations: '''2D Sensor cap''', '''2D Sensor cap''' and '''2D Disk''', which are accessible with right-click on the MSC node. Sensor '''MRC21''' is selected as reference. | The resulting file '''mscohere(0.6Hz,555win): EMGlft | tfbands''' has only one MSC value for each sensor (the MSC average in the 15-20 Hz band). You may visualize the topography of this MSC statistic via 3 possible representations: '''2D Sensor cap''', '''3D Sensor cap''' and '''2D Disk''', which are all accessible via a right-click over the MSC node. We clicked on sensor '''MRC21''' below; it is shown in red.
Line 323: | Line 314: |
In agreement with the literature, we observe higher MSC values between the EMG signal and the MEG signal for MEG sensors over the contralateral primary motor cortex in the beta band range. In the next sections we will perform source estimation and compute coherence in the source level. == Source analysis == In this tutorial we will perform source modelling using the [[Tutorials/HeadModel#Dipole_fitting_vs_distributed_models|distributed model]] approach for two sources spaces: '''cortex surface''' and '''MRI volume'''. In the first one the location of the sources is constrained to the cortical surface obtained when the subject anatomy was imported. For the second source space, the sources are uniformly distributed in the entire brain volume. Before estimating the brain sources, we need to compute '''head model''' and the '''noise covariance'''. Note that a head model is required for each source space. |
We can observe higher MSC values between the EMG signal and MEG sensor signals over the contralateral set of central sensors in the beta band. Unfortunately, [[Tutorials/Connectivity#Sensor-level|sensor-level connectivity has the disadvantages]] of being difficult to interpret and subject to spurious results due to volume conduction. In the next section we will compute coherence at the source level. To do this, we first need to estimate the source time series from the sensor data. == Source estimation with MEG signals == With [[Tutorials/SourceEstimation|source estimation]] we aim to find the sources that gave rise to the signals recorded at the sensors. From the diverse [[Tutorials/SourceEstimation#Method|source estimation methods available in Brainstorm]], this tutorial uses the '''minimum-norm imaging''' method. A source grid (either the cortical surface or the full brain volume) is defined, and the minimum-norm method estimates the current at each point of the grid that explains the recorded sensor signals, favouring minimum-energy (L2-norm) solutions. At each point in the source grid, the current dipole may point in an arbitrary direction in 3D space. For surface grids only, that orientation can be '''constrained''' to be normal to the cortical surface, an approach motivated by anatomical observations of the cortex. The result is a smaller model that is faster to compute and display. A discussion on constrained vs unconstrained sources can be found [[Tutorials/SourceEstimation#Why_does_it_look_so_noisy.3F|here]]. Before estimating the brain sources, we need to derive the '''noise covariance matrix''' of the sensors, and the '''head model'''. === Noise covariance === The [[Tutorials/NoiseCovariance#The_case_of_MEG|recommendation for MEG]] is to extract basic noise statistics from empty-room recordings.
However, when recommended empty-room recordings are not available, as with this tutorial data, resting-state data can be used as proxies for MEG noise covariance. See the [[Tutorials/NoiseCovariance|noise covariance tutorial]] for more details. * In the raw '''SubjectCMC_clean_notch_high_abs''' node, right-click over '''Raw | clean | notch(...''' and select '''''Noise covariance > Compute from recordings'''''. Please enter the following parameters: * '''Baseline:''' from `18 to 30 s` * Select the '''Block by block''' option. . {{attachment:pro_noise_cov.png||width="60%"}} * Copy the '''Noise covariance''' node to the '''SubjectCMC_clean_notch_high_abs''' folder. This can be done using the shortcuts `Ctrl-C` and `Ctrl-V`. . {{attachment:tre_covmat.png||width="50%"}}
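For reference, the minimum-norm method introduced above combines the noise covariance matrix computed here with the head model (gain matrix) derived in the next section. In standard textbook notation (the symbols below are ours, not Brainstorm-specific), the L2 minimum-norm source estimate applied to the sensor data is:

```latex
\hat{\mathbf{j}}(t) \;=\; \underbrace{\mathbf{R}\,\mathbf{G}^{\top}\!\left(\mathbf{G}\,\mathbf{R}\,\mathbf{G}^{\top} + \lambda^{2}\,\mathbf{C}\right)^{-1}}_{\text{imaging kernel}}\;\mathbf{x}(t)
```

where '''G''' is the gain (head model) matrix, '''C''' the sensor noise covariance, '''R''' the source covariance prior, '''λ''' a regularization parameter, and '''x'''(t) the sensor time series. This is the generic textbook form only; the exact whitening and depth-weighting options used by Brainstorm are detailed in the [[Tutorials/SourceEstimation|source estimation tutorial]].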
Line 329: | Line 339: |
The head model describes how neural electric currents produce magnetic fields and differences in electrical potentials at external sensors, given the different head tissues. This model is independent of sensor recordings. See the [[Tutorials/HeadModel|head model tutorial]] for more details. Each source space, requires its own head model. ==== Cortex surface ==== * In the '''SubjectCMC_clean_notch_high_abs''', right-click the '''CTF channels (191)''' node and select '''''Compute head model'''''. Keep the default options: |
The [[Tutorials/HeadModel|head model]], aka forward model, accounts for how neural electrical currents (in a source space) produce magnetic fields captured by sensors outside the head, considering the electromagnetic properties and geometry of the head tissues, independently of actual empirical measurements. As the head model depends on the source space, a distinct head model is required for the surface and volume source spaces. Please refer to the [[Tutorials/HeadModel|head model tutorial]] for more in-depth explanations. ==== Surface ==== Instructions. ==== Volume ==== Instructions. == Coherence estimation (source level) == === Surface === * Instructions for constrained * Instructions for unconstrained === Volume === * Instruction === Comparison === The results. Compare the 3, and the maximum point (How to find this?) Note: With unconstrained sources and the volume model, the sources are 3-dimensional. Coherence is computed between the reference signal (EMGlft) and each dimension; to plot, the coherence values are then flattened as PROCEDURE As an alternative we could first flatten the unconstrained sources, and then compute coherence; however, this is not the best approach. Show the results for the PCA. Here we can add the results and recommendations. As we mentioned, there is a coherence spectrum for each of the 15k sources. This is not practical. For that reason the analysis is often performed with the use of ROIs. The definition of ROIs is a current problem that is not solved here. == Coherence estimation (Scout level) == Here we will be working only with the surface unconstrained data. We will use the xxx atlas for our calculations Place, EMGlft and sources, selected scouts, and after and before. Explain the difference between both metrics, the advantages regarding processing. If possible add diagrams == Connectivity NxN (connectome) == Here we want to write about performing NxN connectivity with the scouts.
This should not be done with individual sources, as it leads to a very big matrix that will not fit in RAM. Also, we need to define several parameters in our experiments: - source estimation method - source estimation surface or volume - if surface, constrained or unconstrained - How to select the scouts - How to perform the aggregation across scouts (before or after) - Which function to use for the aggregation All of those are open questions that are addressed neither in this tutorial nor in the literature. <<BR>> ----- ----- ----- == MEG source imaging == We will perform source modelling using a [[Tutorials/HeadModel#Dipole_fitting_vs_distributed_models|distributed model]] approach for two possible source maps: the '''cortex surface''' and the entire '''MRI volume'''. For the cortical source model, the locations of the sources are constrained to the cortical surface obtained from the participant's anatomical MRI, as imported above. For the MRI volume approach, elementary sources are uniformly distributed across the entire brain volume. Before estimating the brain sources, we need to derive a '''head model''' and estimate the '''noise covariance''' of the recordings. Note that a distinct head model is required for each source space tested here (cortical vs. volume). === Head modelling === The head model accounts for how neural electrical currents produce magnetic fields captured by sensors outside the head, considering the electromagnetic properties and geometry of the head tissues, independently of actual empirical measurements. Please refer to the [[Tutorials/HeadModel|head model tutorial]] for more in-depth explanations. ==== Cortical surface ==== * In the '''SubjectCMC_clean_notch_high_abs''' node, right-click over '''CTF channels (191)''' and select '''Compute head model''' from the contextual menu. Run the process with the options as indicated below:
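The RAM concern above can be made concrete with a quick back-of-the-envelope computation (a sketch; it assumes double-precision values, the ~15,000-vertex cortical grid used in this tutorial, and a hypothetical number of frequency bins):

```python
# Rough memory footprint of a full source-level connectivity matrix:
# one coherence value per pair of sources, per frequency bin.
n_sources = 15_000        # cortical surface vertices (tutorial grid size)
bytes_per_value = 8       # double precision

bytes_one_freq = n_sources ** 2 * bytes_per_value
gib_one_freq = bytes_one_freq / 2 ** 30   # convert bytes to GiB

n_freqs = 100             # hypothetical number of frequency bins
print(f"{gib_one_freq:.2f} GiB per frequency bin")
print(f"{gib_one_freq * n_freqs:.0f} GiB for {n_freqs} bins")
```

A single frequency bin already needs about 1.7 GiB, and a full spectrum multiplies that by the number of bins, which is why NxN connectivity is computed between scouts rather than between individual sources.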
Line 337: | Line 394: |
Keep in mind that the number of sources (vertices) in this head model is '''10,000''', and was defined when when the subject anatomy was imported. | The cortical head model will be derived from each of the 15,000 sources (surface vertices) as defined when when the subject anatomy was imported. |
Line 341: | Line 398: |
The (ICON) '''Overlapping spheres (surface)''' head model will appear in the database explorer. ==== MRI volume ==== * In the '''SubjectCMC_clean_notch_high_abs''', right-click the '''CTF channels (191)''' node and select '''''Compute head model'''''. Keep the default options: |
The (ICON) '''Overlapping spheres (surface)''' head model now appears in the database explorer. ==== MRI volume source model ==== * In the '''SubjectCMC_clean_notch_high_abs '''node, right-click over the '''CTF channels (191)''' node and select '''''Compute head model'''''. Set the option values to: |
Line 351: | Line 408: |
* The '''Volume source grid''' window pop-up, to define the volume grid. Use the following parameters, that will lead to an estimated number of '''12,200''' grid points. | * In the '''Volume source grid''' window, specify the following parameters that will produce about '''12,200''' elementary source grid points across the brain volume. |
Line 357: | Line 414: |
The '''Overlapping spheres (volume)''' node will be added to the database explorer. The green color indicates the default head model for the folder. | The '''Overlapping spheres (volume)''' node is now added to the database explorer. The green color indicates this is the default head model for the current folder (this can be changed by simply double clicking over the head model nodes.) |
Line 362: | Line 419: |
For MEG recordings it is [[Tutorials/NoiseCovariance#The_case_of_MEG|recommended]] to derive the noise covariance from empty room recordings. However, as we do not have those recordings in the dataset, we can compute the noise covariance from the MEG signals before the trials. See the [[Tutorials/NoiseCovariance|noise covariance tutorial]] for more details. * In the raw '''SubjectCMC_clean_notch_high_abs''', right-click the '''Raw | clean | notch(...''' node and select '''''Noise covariance > Compute from recordings'''''. As parameters select: * '''Baseline''' from `18 - 30 s` |
Basic noise statistics can be extracted (as [[Tutorials/NoiseCovariance#The_case_of_MEG|recommended]]) from empty-room recordings. However, the latter are not available in the present tutorial data. Instead, we will assume that data segments prior to wrist extension can be used as proxies for MEG noise covariance. See the [[Tutorials/NoiseCovariance|noise covariance tutorial]] for more details. * In the raw '''SubjectCMC_clean_notch_high_abs''' node, right-click over '''Raw | clean | notch(...''' and select '''''Noise covariance > Compute from recordings'''''. Please enter the following parameters: * '''Baseline:''' from `18 to 30 s`
Line 370: | Line 427: |
* Lastly, copy the '''Noise covariance''' node to the '''SubjectCMC_clean_notch_high_abs''' folder with the head model. This can be done with the shortcuts `Ctrl-C` and `Ctrl-V`. | * Lastly, copy the '''Noise covariance''' node to the '''SubjectCMC_clean_notch_high_abs''' folder along with the head model. This can be done using the shortcuts `Ctrl-C` and `Ctrl-V`. |
Line 375: | Line 432: |
Noe that the '''head model(s)''' and '''noise covariance''' have been computed, we can use the [[Tutorials/SourceEstimation#Method|minimum norm imaging]] method to solve the '''inverse problem'''. The result is a linear '''inversion kernel''', that estimates the source brain activity that gives origin to the observed recordings in the sensors. Note that, an inversion kernel is obtained for each of the head models: '''surface''' and '''volume'''. See the [[Tutorials/SourceEstimation|source estimation tutorial]] for more details. ==== Cortex surface ==== * Compute the inversion kernel, right-click in the '''Overlapping spheres (surface)''' head model and select '''Compute sources [2018]'''. With the parameters: |
Now that the '''head model(s)''' and '''noise covariance''' are available, we will use the [[Tutorials/SourceEstimation#Method|minimum-norm imaging]] approach to solve the so-called '''inverse problem''' of source estimation. This will produce a large matrix called the '''imaging kernel''', which is applied to the sensor data to produce estimates of the brain source time series. A different imaging kernel is derived for each of the head models we have produced above: '''surface''' and '''volume'''. See the [[Tutorials/SourceEstimation|source estimation tutorial]] for more details. ==== Cortical surface ==== * To compute the corresponding imaging kernel, right-click over the '''Overlapping spheres (surface)''' head model and select '''Compute sources [2018]'''. Enter the following parameters:
Line 386: | Line 443: |
The inversion kernel (ICON) '''MN: MEG (surface)(Unconstr) 2018''' is created, and added to the database explorer. | The inversion kernel (ICON) '''MN: MEG (surface)(Unconstr) 2018''' is now available in the database explorer. |
Line 389: | Line 446: |
* Compute the inversion kernel, right-click in the '''Overlapping spheres (volume)''' head model and select '''Compute sources [2018]'''. With the parameters: | * Compute the corresponding imaging kernel by right-clicking over the '''Overlapping spheres (volume)''' head model and selecting '''Compute sources [2018]''', with the following parameters:
Line 397: | Line 454: |
The inversion kernel (ICON) '''MN: MEG (volume)(Unconstr) 2018''' is created, and added to the database explorer. The green color in the name indicates the current default head model. In addition, note that each trial has now '''two''' associated source link (ICON) nodes. One obtained with the '''MN: MEG (surface)(Unconstr) 2018''' kernel and the other obtained with the '''MN: MEG (volume)(Unconstr) 2018''' kernel. | The imaging kernel (ICON) '''MN: MEG (volume)(Unconstr) 2018''' is now also available in the database explorer. Note that now each trial is associated with '''two''' source link (ICON) nodes. One corresponds to the '''MN: MEG (surface)(Unconstr) 2018''' kernel and the other to the '''MN: MEG (volume)(Unconstr) 2018''' kernel. |
Line 401: | Line 460: |
=== Scouts === From the [[#head_model|head model section]], we notice that the '''cortex''' and '''volume''' grid have around '''10,000''' vertices each, thus as many sources were estimated. As such, it is not practical to compute coherence between the left EMG signal and the signal of each source. A way to address this issue is with the use of regions of interest also known as '''scouts'''. Thus, there are [[Tutorials/Scouts|surface scouts]] and [[Tutorials/TutVolSource#Volume_scouts|volume scouts]]. Let's define scouts for the different source spaces. |
=== Defining "scouts" (regions of interest) === To limit computational cost, we will now restrict the computation of coherence with the left EMG signal to selected MEG source activity. We will define regions of interest, which are known as '''scouts''' in Brainstorm. They can be defined as [[Tutorials/Scouts|surface scouts]] or [[Tutorials/TutVolSource#Volume_scouts|volume scouts]].
Line 405: | Line 464: |
* In the source link '''MN: MEG (surface)(Unconstr) 2018''' node for one of the trials, right-click and select '''''Cortical activations > Display on cortex'''''. In the '''''Surface''''' tab, set the '''''Amplitude''''' slider to `100%` to hide all the sources. | * Right click over '''MN: MEG (surface)(Unconstr) 2018''' for one of the trials and select '''''Cortical activations > Display on cortex'''''. In the '''''Surface''''' tab, set the '''''Amplitude''''' slider to `100%` to hide all the sources. |
Line 409: | Line 468: |
* In the '''''Scout''''' tab, select the menu '''''Atlas > From subject anatomy > AAL3 (MNI-linear)'''''. This will create the '''From volume: AAL3''' set of surface scouts. By clicking in the different scouts, at the bottom of the list, the number of vertices it contains and the approximate area in cm2 is shown. Activate the (ICON) '''Show only the selected scouts''' option to narrow down the shown scouts. | * In the '''''Scout''''' tab, select from the menu '''''Atlas > From subject anatomy > AAL3 (MNI-linear)'''''. This will create the '''From volume: AAL3''' set of surface scouts. After you click on any scout label shown in the list, Brainstorm will show at the bottom the number of sources (vertices) within the region and an estimate of its surface area in cm2. Activate the (ICON) '''Show only the selected scouts''' option to shown only the scout(s) you have selected from the list. |
Line 419: | Line 478: |
* In the source link '''MN: MEG (volume)(Unconstr) 2018''' node for one of the trials, right-click and select '''''Cortical activations > Display on MRI (3D): Subject CMC'''''. In the '''''Surface''''' tab, set the '''''Amplitude''''' slider to `100%` to hide all the sources. | * Right-click over '''MN: MEG (volume)(Unconstr) 2018''' for one of the trials and select '''''Cortical activations > Display on MRI (3D): Subject CMC'''''. In the '''''Surface''''' tab, set the '''''Amplitude''''' slider to `100%` to hide the source map.
Line 423: | Line 482: |
* In the '''''Scout''''' tab, select the menu '''''Atlas > From subject anatomy > AAL3 (MNI-linear)'''''. This will create the '''Volume 12203: AAL3''' set of volume scouts. By clicking in the different scouts, at the bottom of the list, the number of vertices it contains and the approximate volume in cm3 is shown. Activate the (ICON) '''Show only the selected scouts''' option to narrow down the shown scouts. | * In the '''''Scout''''' tab, select the menu '''''Atlas > From subject anatomy > AAL3 (MNI-linear)'''''. This will create a set of '''Volume 12203: AAL3''' volume scouts. As with the surface scouts above, clicking on a scout in the list will show the number of elementary sources within it and its volume in cm3. Activate the (ICON) '''Show only the selected scouts''' option to show only the selected scout(s).
Line 429: | Line 488: |
== Coherence (source level) == Coherence in the source level is computed between a sensor signal (EMG) and source signals in the (surface or volume) scouts. === Coherence with surface scouts === To facilitate the selection of the indicated files to compute this coherence, let's [[Tutorials/PipelineEditor#Search_Database|search in the database]] the recordings and the source link files obtained wit the '''MN: MEG (surface)(Unconstr) 2018''' kernel. * Click on the magnifying glass (ICON) above the database explorer to open up the search dialog, and select '''''New search'''''. * Set the search query to look for files that '''are named (surface)''' or '''are named Left'''. This is done with the following configuration. |
== Estimating source-wise coherence == === Surface scouts === We will use the [[Tutorials/PipelineEditor#Search_Database|search features of the Brainstorm database]] to select the source maps we want to include in the EMG-cortex coherence estimation. We will first select files related to the '''MN: MEG (surface)(Unconstr) 2018''' imaging kernel. * Click on the magnifying glass (ICON) above the database explorer to open the database search panel, and select '''''New search'''''. * Set the search parameters as shown below: |
Line 441: | Line 498: |
By performing the search, a new tab called '''''(surface)''''' appears in the database explorer. This new tab contains the recordings and ONLY the source link for the surface space. | After pressing '''Search''', a new tab called '''''(surface)''''' appears in the database explorer. This new tab contains the data filtered through the search query above. |
Line 445: | Line 502: |
* Change to the '''Process2''' tab, and drag-and-drop the '''Left (192 files)''' trial group into the '''Files A''' and into the '''Files B''' boxes. And select '''Process recordings''' for Files A, and '''Process sources''' for Files B. Note that blue labels over the '''Files A''' and the '''Files B''' boxes indicate that there are 185 files per box. | * Click the '''Process2''' tab at the bottom of the main Brainstorm window and drag-and-drop the '''Left (192 files)''' trial group into the '''Files A''' box and repeat for the '''Files B''' box. Select '''Process recordings''' for Files A, and '''Process sources''' for Files B. The logic is that we will extract from the same files the EMG traces (Files A) and the source time series (Files B) and then compute coherence between these two sets. Note that blue labels over the '''Files A''' and the '''Files B''' boxes indicate that there are 185 "good trial" files per box.
Line 449: | Line 506: |
Open the '''Pipeline editor''' and add two process: | Open the '''Pipeline editor''': |
Line 473: | Line 530: |
* Double-click on the resulting node '''mscohere(0.6Hz,555win): Left (#1) | (surface)''' to display the coherence spectra. Also open the result node as image with '''Display image''' in its context menu. * To verify the location of the scouts on the cortex surface, double-click one of the (surface) source link for any of the trials. In the '''Surface''' tab, set the '''Amplitude''' threshold to `100%` to hide all the cortical activations. Lastly, in the '''Scouts''' tab, select the `From volume: AAL3` atlas in the drop-down list, select '''Show only the selected scouts''' and the '''Show/hide the scout labels'''. Note that the plots are linked by the scout selected in the '''image''' representation of the coherence results. |
* Double-click on the resulting node '''mscohere(0.6Hz,555win): Left (#1) | (surface)''' to display the coherence spectra. Open also the result node as an image with '''Display image''' from its contextual menu. * To visualize the location of the scouts on the cortex, double-click one of the (surface) source links for any of the trials. In the '''Surface''' tab, set the '''Amplitude''' threshold to `100%` to hide the cortical activations. Lastly, in the '''Scouts''' tab, select the `From volume: AAL3` atlas in the drop-down list, select '''Show only the selected scouts''' and the '''Show/hide the scout labels'''. Note that all visualization windows are linked depending on which scout is selected in the '''image''' representation of the coherence results. |
Line 479: | Line 536: |
From the results we can see that the peak at 14.65 Hz corresponds to the '''Precentral R''' scout, which encompasses the right primary motor cortex, as expected. These results are inline with the ones in the literature. === Coherence with volume scouts === Similar to coherence with surface scouts, a search is needed to select the recordings and the source link files obtained wit the '''MN: MEG (volume)(Unconstr) 2018''' kernel. * Click on the magnifying glass (ICON) above the database explorer to open up the search dialog, and select '''''New search'''''. Set the search query to look for files that '''are named (volume)''' or '''are named Left'''. This is done with the following configuration. |
The large peak at 14.65 Hz in the coherence spectra corresponds to the '''Precentral R''' scout, which encompasses the right primary motor cortex, as expected. === Volume scouts === We will now repeat the same approach for volume scouts. Initiate a file selection query related to the '''MN: MEG (volume)(Unconstr) 2018''' imaging kernel. * Click on the magnifying glass (ICON) above the database explorer to open the search panel, with the following specifications:
Line 490: | Line 547: |
* Change to the '''Process2''' tab, and drag-and-drop the '''Left (192 files)''' trial group into the '''Files A''' and into the '''Files B''' boxes. And select '''Process recordings''' for Files A, and '''Process sources''' for Files B. | * In the '''Process2''' tab, drag and drop the '''Left (192 files)''' trial group into the '''Files A '''box and repeat for the '''Files B''' box. Select the '''Process recordings''' icon for Files A, and '''Process sources''' for Files B. |
Line 494: | Line 551: |
Open the '''Pipeline editor''' and add two process: | Open the '''Pipeline editor''': |
Line 518: | Line 575: |
* Double-click on the resulting node '''mscohere(0.6Hz,555win): Left (#1) | (volume)''' to display the coherence spectra. Also open the result node as image with '''Display image''' in its context menu. * To verify the location of the scouts on the cortex surface, open one of the (volume) source link for any of the trials with the as MRI (3D), in the context menu select '''''Display cortical activations > Display on MRI (3D): SubjectCMC '''''. In the '''Surface''' tab, set the '''Amplitude''' threshold to `100%` to hide all the cortical activations. Lastly, in the '''Scouts''' tab, select the `Volume 12203: AAL3` atlas in the drop-down list, select the '''Show only the selected scouts''' and the '''Show/hide the scout labels'''. Note that the plots are linked by the scout selected in the '''image''' representation of the coherence results. |
* Double-click on the resulting node '''mscohere(0.6Hz,555win): Left (#1) | (volume)''' to display the coherence spectra. Open also the result node by selecting '''Display as image''' from its contextual menu. * Open one of the (volume) source links for any of the trials by selecting from the contextual menu '''''Display cortical activations > Display on MRI (3D): SubjectCMC '''''. In the '''Surface''' tab, set the '''Amplitude''' threshold to `100%` to hide all the cortical activations. Lastly, in the '''Scouts''' tab, select the `Volume 12203: AAL3` atlas in the drop-down list, select the '''Show only the selected scouts''' and the '''Show/hide the scout labels'''. |
Line 524: | Line 581: |
From the results we can see that the peak at 14.65 Hz corresponds to the '''Precentral R''' scout, which encompasses the right primary motor cortex, as expected. These results are inline with the ones in the literature. | Here too, the coherence peak at 14.65 Hz corresponds to the '''Precentral R''' scout, which encompasses the right primary motor cortex, as expected. |
Line 528: | Line 585: |
== Coherence with all sources (no scouts) == | == Brain-wise coherence estimation (no scouts) == |
Line 535: | Line 592: |
=== Comparison of cortex surface with FieldTrip and CAT === | === Comparison of cortical surface with FieldTrip and CAT === |
Line 557: | Line 614: |
* Kilner JM, Baker SN, Salenius S, Hari R, Lemon RN. <<BR>> [[https://doi.org/10.1523/JNEUROSCI.20-23-08838.2000|Human Cortical Muscle Coherence Is Directly Related to Specific Motor Parameters]]. <<BR>> J Neurosci. 2000 Dec 1;20(23):8838–45. '' '' * ''Liu J, Sheng Y, Liu H. <<BR>> [[https://doi.org/10.3389/fnhum.2019.00100Corticomuscular%20Coherence%20and%20Its%20Applications:%20A%20Review|https://doi.org/10.3389/fnhum.2019.00100Corticomuscular%20Coherence%20and%20Its%20Applications:%20A%20Review]]. Front Hum Neurosci. 2019 Mar 20;13:100. '' |
* Kilner JM, Baker SN, Salenius S, Hari R, Lemon RN. <<BR>> [[https://doi.org/10.1523/JNEUROSCI.20-23-08838.2000|Human Cortical Muscle Coherence Is Directly Related to Specific Motor Parameters]]. <<BR>> J Neurosci. 2000 Dec 1;20(23):8838–45. * Liu J, Sheng Y, Liu H. <<BR>> [[https://doi.org/10.3389/fnhum.2019.00100|Corticomuscular Coherence and Its Applications: A Review]]. <<BR>> Front Hum Neurosci. 2019 Mar 20;13:100. |
Line 562: | Line 619: |
* ''Tutorial: [[Tutorials/TutVolSource|Volume source estimation]] '' * ''Tutorial: [[Tutorials/Connectivity|Functional connectivity]] '' |
* Tutorial: [[Tutorials/TutVolSource|Volume source estimation]] * Tutorial: [[Tutorials/Connectivity|Functional connectivity]] |
Line 569: | Line 626: |
''<<HTML(<!-- END-PAGE -->)>> '' ''<<EmbedContent(http://neuroimage.usc.edu/bst/get_feedback.php?Tutorials/CorticomuscularCoherence)>> '' |
<<HTML(<!-- END-PAGE -->)>> <<EmbedContent(http://neuroimage.usc.edu/bst/get_feedback.php?Tutorials/CorticomuscularCoherence)>> |
Corticomuscular coherence (MEG)
[TUTORIAL UNDER DEVELOPMENT: NOT READY FOR PUBLIC USE]
Authors: Raymundo Cassani, Francois Tadel & Sylvain Baillet.
Corticomuscular coherence measures the degree of similarity between electrophysiological signals (MEG, EEG, ECoG sensor traces or source time series, especially over the contralateral motor cortex) and the EMG signal recorded from muscle activity during voluntary movement. This signal similarity is due mainly to the descending communication along corticospinal pathways between primary motor cortex (M1) and muscles. For consistency and reproducibility purposes across major software toolkits, the present tutorial replicates the processing pipeline "Analysis of corticomuscular coherence" by FieldTrip.
Contents
- Background
- Dataset description
- Download and installation
- Importing and processing anatomy data
- Review the MEG and EMG recordings
- Pre-process
- Importing data epochs
- Coherence estimation (sensor level)
- Source estimation with MEG signals
- Coherence estimation (source level)
- Coherence estimation (Scout level)
- Connectivity NxN (connectome)
- MEG source imaging
- Estimating source-wise coherence
- Brain-wise coherence estimation (no scouts)
- Script
- Additional documentation
Background
Coherence measures the linear relationship between two signals in the frequency domain. Previous studies (Conway et al., 1995, Kilner et al., 2000) have reported cortico-muscular coherence effects in the 15–30 Hz range during maintained voluntary contractions.
IMAGE OF EXPERIMENT, SIGNALS and COHERENCE
Dataset description
The dataset comprises recordings from MEG (151-channel CTF MEG system) and bipolar EMG (from left and right extensor carpi radialis longus muscles) from one participant who was tasked to lift their hand and exert a constant force against a lever for about 10 seconds. The force was monitored by strain gauges on the lever. The participant performed two blocks of 25 trials using either the left or right wrist. EOG signals were also recorded, which will be useful for detection and attenuation of ocular artifacts. We will analyze the data from the left-wrist trials in the present tutorial. Replicating the pipeline with right-wrist data is a good exercise to do next!
Download and installation
Requirements: Please make sure you have completed the get-started tutorials and that you have a working copy of Brainstorm installed on your computer. In addition, you need to install the CAT12 plugin in Brainstorm, which will be used for MRI segmentation.
Download the dataset:
Download SubjectCMC.zip from FieldTrip's FTP server:
ftp://ftp.fieldtriptoolbox.org/pub/fieldtrip/tutorial/SubjectCMC.zip
- Unzip the archive in a folder that is not located inside any of Brainstorm's folders (the application folder or its database folder).
Brainstorm:
- Launch Brainstorm (via Matlab's command line or use Brainstorm's Matlab-free stand-alone version).
Select the menu File > Create new protocol. Name it TutorialCMC and select the options:
No, use individual anatomy,
No, use one channel file per acquisition run.
The next sections describe how to import the participant's anatomical data, review raw data, manage event markers, pre-process EMG and MEG signals, and epoch and import recordings for further analyses, with a focus on computing coherence at the sensor (scalp) and source (brain map) levels.
Importing and processing anatomy data
Right-click on the newly created TutorialCMC node in your Brainstorm data tree then: New subject > Subject01.
Keep the default options defined for the study (aka "protocol" in Brainstorm's jargon). Switch to the Anatomy view of the protocol.
Right-click on the Subject01 node then Import MRI:
Select the adequate file format from the pull-down menu: All MRI file (subject space)
Select the file: SubjectCMC/SubjectCMC.mri
This will open the MRI viewer showing the coronal, sagittal and axial views of the MRI. In addition, three anatomical fiducials, the left and right pre-auricular points (LPA and RPA) and the nasion (NAS), are automatically identified. These fiducials are located near the left/right ears and just above the nose, respectively. Click on Save.
We then need to segment the head tissues to obtain the surfaces required to derive a realistic MEG head model (aka "forward model"). Here, we will perform the MRI segmentation with CAT12; this process takes between 30 and 60 minutes.
Right-click on the SubjectCMC MRI node, then MRI segmentation > CAT12: Cortex, atlases, tissues. This will prompt a series of windows to set the parameters for the MRI segmentation, use the following parameters:
Number of vertices on the cortex surface = 15000
Compute anatomical parcellations? = Yes
Compute cortical maps = Yes
Once finished, multiple atlases or anatomical parcellations (ICON) will appear in the database explorer, along with surfaces for the head (head mask), white matter, cortex (pial envelope) and the midpoint between these last two. The default surfaces are indicated in green. You can display the surfaces by double-clicking on these new nodes. For further information on the anatomy files, see the Display the anatomy tutorial.
As part of the MRI segmentation pipeline with CAT12, the anatomy data was normalized to MNI space, and several anatomical parcellations were computed. These parcellations can be used to create volume and surface scouts, which will be used later in this tutorial to perform the coherence analysis at the source level.
Additional MNI parcellation templates to define anatomical regions of the brain can be used in Brainstorm for MNI-normalized MRI anatomy. See MNI parcellations
Review the MEG and EMG recordings
Link the recordings to Brainstorm's database
Switch now to the Functional data view (X button).
Right-click on the Subject01 node then Review raw file:
Select the file format of current data from the pulldown menu options: MEG/EEG: CTF(*.ds; *.meg4; *.res4)
Select the file: SubjectCMC.ds
A new folder is now created in Brainstorm's database explorer and contains:
SubjectCMC: a folder that provides access to the MEG dataset. Note the "RAW" tag over the icon of the folder, indicating the files contain unprocessed, continuous data.
CTF channels (191): a node containing channel information with all channel types, names, locations, etc. The number of channels available (MEG, EMG, EOG etc.) is indicated between parentheses (here, 191).
Link to raw file provides access to the original data file. All the relevant metadata was read from the dataset and copied inside the node itself (e.g., sampling rate, number of time samples, event markers). Note that Brainstorm's logic is not to import/duplicate the raw unprocessed data directly into the database. Instead, Brainstorm provides a link to that raw file for further review and data extraction (more information).
Display MEG helmet and sensors
Right-click on the CTF channels (191) node, then select Display sensors > CTF helmet from the contextual menu, and Display sensors > MEG. This will open a new display window showing the inner surface of the MEG helmet and the locations of the MEG sensors, respectively. Try additional display menus.
Reviewing continuous recordings
Right-click on the Link to raw file node, then Switch epoched/continuous to convert the file to continuous, a technical detail specific to the CTF file format.
Right-click again on the Link to raw file node, then MEG > Display time series (or double-click on the node). This will open a new visualization window to explore data time series, also enabling the Time panel and the Record tab in the main Brainstorm window (see how to best use all controls in this panel and tab to explore data time series).
We will also display EMG traces by right-clicking on the Link to raw file node, then EMG > Display time series.
Event markers
The colored dots above the data time series indicate event markers (or triggers) saved with this dataset. The trial onset information of the left-wrist and right-wrist trials is saved in an auxiliary channel of the raw data named Stim. To add these markers, these events need to be decoded as follows:
While the time series figure is open, go to the Record tab and File > Read events from channel. From the options of the Read from channel process window, set Event channels = Stim, select Value, and click Run.
This procedure creates new event markers, now shown in the Events section of the tab along with the previous event categories. In this tutorial, we will only use events U1 through U25, which correspond to how each of the 25 left-wrist trials was encoded in the study. We will now delete the other events of no interest, and merge the left-trial events under a single event category, for convenience.
Delete other events: select the events to delete in the event box/list with Ctrl+click, then use the menu Events > Delete group and confirm. Alternatively, you can select all events with Ctrl+A and deselect the U1 to U25 events by clicking on them.
To make sure we reproduce FieldTrip's tutorial, we need to reject trial #7: Select events U1 to U6 and U8 to U25, then from the Events menu, select Merge group and type in new label (Left_01) to describe this as the left-wrist condition.
These events correspond to the beginning of 10-s trials of left-wrist movements. We will compute coherence over 1-s epochs over the first 8 s of each trial. To that purpose, we will now create extra events to define these epochs.
Duplicate the Left_01 event group 7 times by selecting Duplicate group in the Events menu. The groups Left_02 to Left_08 will be created.
We will now add a time offset of 1 s to Left_02, 2 s to Left_03, and so on. Select the Left_02 event group, go to the menu Events > Add time offset, and enter 1,000 (in ms) in the text box. Repeat for each remaining group, entering 2,000, then 3,000, etc.
Once done for Left_08, merge all the Left_* groups into a single Left category, and select Save modifications from the File menu in the Record tab.
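The duplication-and-offset procedure above effectively turns each 10-s trial onset into eight 1-s epoch onsets (offsets 0 to 7 s). A minimal Python sketch of that event logic, with hypothetical onset values:

```python
# Sketch of the event manipulation above (hypothetical onset times, in s):
# each trial onset spawns eight epoch onsets, offset by 0..7 s, and each
# epoch will later be imported with a 0-1 s time window.
trial_onsets = [12.0, 35.5, 60.25]   # example Left trial onsets (seconds)
epoch_onsets = [t + k for t in trial_onsets for k in range(8)]
print(len(epoch_onsets))             # 3 trials -> 24 epoch onsets
```

With the actual 24 kept trials, this yields the 192 Left events mentioned later in the tutorial.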
Pre-process
In this tutorial, we will analyze only the Left trials (left-wrist extensions). In the following sections, we will process only the first 330 s of the recordings, where the left-wrist trials were performed.
Another idiosyncrasy of the present dataset is that the CTF MEG data were saved without the desired third-order gradient compensation for optimal denoising. We will now apply this compensation as follows:
In the Process1 box: Drag and drop the Link to raw file node.
Run process Artifacts > Apply SSP & CTF compensation:
This process creates the SubjectCMC_clean folder, which contains a copy of the channel file and a link to the raw file, Raw | clean, pointing to the original data and indicating that the third-order gradient compensation will be applied. Brainstorm does not create a physical copy of the actual, large dataset at this stage.
Removal of power line artifacts
We will start with identifying the spectral components of power line contamination of MEG and EMG recordings.
In the Process1 box: Drag and drop the Raw | clean node.
Run process Frequency > Power spectrum density (Welch):
Time window = 0 - 330 s
Window length = 10 s
Overlap = 50%
Sensor types = MEG, EMG
Double-click on the new PSD file to visualize the power spectrum density of the data.
The PSD plot shows two groups of sensors: EMG (highlighted in red above) and the MEG spectra below. Peaks at 50 Hz and its harmonics (100, 150 Hz and above; the European power line frequency and its harmonics) are clearly visible. We will use notch filters to attenuate the power line contaminants at 50, 100 and 150 Hz.
In the Process1 box: Drag and drop the Raw | clean node.
Run the process Pre-processing > Notch filter with:
Sensor types = MEG, EMG
Frequencies to remove (Hz) = 50, 100, 150
A new raw folder named SubjectCMC_clean_notch is created. Estimate the PSD of these signals to appreciate the effect of the notch filters applied. As above, please remember to indicate a Time window restricted from 0 to 330 s in the options of the PSD process.
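The effect of a notch filter can be illustrated outside Brainstorm with a short SciPy sketch (this is not Brainstorm's implementation; the 1200 Hz sampling rate is an assumption about this dataset):

```python
# Illustrative notch filtering with SciPy: attenuate a 50 Hz line component
# and its 100/150 Hz harmonics while leaving other frequencies untouched.
import numpy as np
from scipy.signal import iirnotch, filtfilt

fs = 1200                              # assumed sampling rate (Hz)
t = np.arange(0, 2, 1 / fs)
sig = np.sin(2 * np.pi * 50 * t) + np.sin(2 * np.pi * 12 * t)  # line noise + 12 Hz "signal"

for f0 in (50, 100, 150):              # one narrow notch per contaminated frequency
    b, a = iirnotch(f0, Q=30, fs=fs)   # Q=30 -> bandwidth f0/30
    sig = filtfilt(b, a, sig)          # zero-phase filtering

# after filtering, the 50 Hz component is strongly attenuated
# while the 12 Hz component is preserved
```

Zero-phase filtering (filtfilt) is used here so the notch does not shift the signal in time, mirroring the behavior expected from an offline pre-processing step.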
EMG pre-processing
Two typical pre-processing steps for EMG consist of high-pass filtering and rectification.
In the Process1 box: drag and drop the Raw | notch(50Hz 100Hz 150Hz) node.
Add the process Pre-process > Band-pass filter
Sensor types = EMG
Lower cutoff frequency = 10 Hz
Upper cutoff frequency = 0 Hz (no low-pass filtering, i.e., a pure high-pass filter)
Add the process Pre-process > Absolute values
Sensor types = EMG
- Run the pipeline
Two new folders SubjectCMC_clean_notch_high and SubjectCMC_clean_notch_high_abs are added to Brainstorm's database explorer. We can now safely delete folders that are not needed anymore:
Delete SubjectCMC_clean_notch and SubjectCMC_clean_notch_high by selecting both before pressing Delete (or right-click File > Delete).
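The two EMG pre-processing steps can be sketched with SciPy (a conceptual illustration, not Brainstorm's code; the sampling rate and signal are made up):

```python
# Illustrative EMG pre-processing: 10 Hz high-pass filtering followed by
# rectification (absolute value), as configured in the pipeline above.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1200                                   # assumed sampling rate (Hz)
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(1)
emg = 0.5 + rng.standard_normal(t.size)     # synthetic EMG with a DC offset

b, a = butter(4, 10, btype="highpass", fs=fs)  # 10 Hz high-pass (upper cutoff 0 = none)
emg_hp = filtfilt(b, a, emg)                # DC offset and slow drifts removed
emg_rect = np.abs(emg_hp)                   # rectification: non-negative envelope
```

High-pass filtering removes slow drifts before rectification, and the rectified trace approximates the EMG amplitude envelope used as the reference signal for coherence.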
MEG pre-processing
We need to remove more artifacts from the MEG traces via:
Detection and removal of stereotypical artifacts with SSP
Detection of noisy (bad) data segments.
Detection and removal of artifacts with SSP (Signal Space Projection)
Stereotypical artifacts such as eye blinks and heartbeats can be identified from their respective characteristic spatial distributions. Their contamination of MEG signals can then be attenuated specifically using Signal-Space Projections (SSPs). For more details, consult the dedicated tutorials about the detection and removal of artifacts with SSP. The present tutorial dataset features an EOG channel but no ECG. We will perform only the removal of eye blinks.
Display the MEG and EOG time series: Right-click on the pre-processed (for EMG) continuous file Raw | clean | notch(... (in the SubjectCMC_clean_notch_high_abs folder) then MEG > Display time series and EOG > Display time series.
In the Events section of the Record tab, select Artifacts > Detect eye blinks, and use the parameters:
Channel name= EOG
Time window = 0 - 330 s
Event name = blink
Three categories of blink events are created. Review the traces of EOG channels around a few of these events to ascertain they are related to eye blinks. In the present case, we note that the blink group contains genuine eye blinks, and that groups blink2 and blink3 capture saccade events.
To remove blink artifacts with SSP go to Artifacts > SSP: Eye blinks, and use the parameters:
Event name=blink
Sensors=MEG
Check Compute using existing SSP/ICA projectors
- Display the time series and topographies of the first two (dominant) SSP components identified. In the present case, only the first SSP component can be clearly related to blinks. Select only component #1 for removal.
Follow the same procedure for the other blink-event groups (blink2 and blink3). For these groups, none of the first two SSP components seems clearly related to ocular artifacts. The figure below shows the first two components for the blink2 group.
We therefore recommend unselecting the blink2 and blink3 groups from the Select Active Projectors panel (see below) rather than removing spatial components whose nature remains ambiguous.
- Click on the large crosshair at the top right of the main Brainstorm window to close all visualization windows.
Detection of "bad" data segments:
Here we will use the automatic detection of artifacts to identify data segments contaminated by e.g., large eye and head movements and muscle contractions.
Display the MEG and EOG time series. In the Record tab, select Artifacts > Detect other artifacts and enter the following parameters:
Time window = 0 - 330 s
Sensor types=MEG
Sensitivity=3
Check both frequency bands 1-7 Hz and 40-240 Hz
We encourage users to review and validate the segments marked using this procedure. In the present case, the detected segments clearly point at contaminated MEG data, and we will now label them as "bad".
Select the 1-7Hz and 40-240Hz event groups and select Events > Mark group as bad from the contextual menu. Alternatively, you can also rename the events created above and append the bad_ prefix to their name: Brainstorm will automatically discard these data segments from further processing.
- Close all visualization windows and reply "Yes" to the save-modifications query.
Importing data epochs
At this point we are finished with the pre-processing of the EMG and MEG recordings. We will now extract and import specific data segments of interest into the Brainstorm database for further derivations. We refer to these segments as epochs or trials. As mentioned previously, we will focus on the Left (wrist) category of events.
Right-click on the filtered continuous file Raw | clean | notch(... (in the SubjectCMC_clean_notch_high_abs condition), then Import in database.
- Enter the following parameter values:
Time window = 0 - 330 s
Check Use events and highlight the Left(x192) event group
Epoch time = 0 - 1000 ms
Check Apply SSP/ICA projectors
Check Remove DC offset and select All recordings
A new folder SubjectCMC_clean_notch_high_abs is created for Subject01. It contains a copy of the channel file from the original raw file, and individual trials tagged as Left in a new trial group. Expand the trial group and note there are trials marked with a question mark in a red circle (ICON). These indicate trials that occurred in the bad segments identified in the previous section. All the bad trials are automatically ignored for further processing, whenever dropped into the Process1 and Process2 tabs.
Coherence estimation (sensor level)
We will now compute the magnitude-squared coherence (MSC) between the left EMG signal and each MEG sensor signal.
In the Process1 box, drag and drop the Left (192 files) trial group. Note that the number between square brackets is [185], as the 7 bad trials will be ignored by the MSC process.
Run the process Connectivity > Coherence 1xN [2021] with the following parameters:
Time window = 0 - 1000 ms or check All file
Source channel = EMGlft
Do not check Include bad channels nor Remove evoked response
Magnitude squared coherence
Window length for PSD estimation = 0.5 s
Overlap for PSD estimation = 50%
Highest frequency of interest = 80 Hz
Average cross-spectra of input files (one output file)
More details on the Coherence process can be found in the connectivity tutorial.
Double-click on the resulting node mscohere(0.6Hz,555win): EMGlft to display the MSC spectra. Click on the maximum peak in the 15 to 20 Hz range, and press Enter to plot it in a new figure. This spectrum corresponds to channel MRC21, and shows a large peak at 17.58 Hz. You can also use the frequency slider (under the Time panel) to explore the MSC output more precisely across frequencies.
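The "555win" tag in the file name can be recovered from the epoching parameters: each 1-s epoch with 0.5-s windows at 50% overlap yields 3 windows, and the 185 good epochs together contribute 555 windows to the averaged cross-spectra. A quick arithmetic check:

```python
# Where the "555win" tag comes from: windows per epoch times good epochs.
epoch_len, win_len, overlap = 1.0, 0.5, 0.5             # s, s, fraction
step = win_len * (1 - overlap)                          # 0.25 s hop between windows
wins_per_epoch = int((epoch_len - win_len) / step) + 1  # windows fitting in 1 s
print(wins_per_epoch * 185)                             # 185 good epochs -> 555
```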
Right-click on the spectrum and select 2D Sensor cap for a topographical representation of the magnitude of the coherence results across the sensor array. You may also use the shortcut Ctrl-T. The sensor locations can be displayed with a right-click and by selecting Channels > Display sensors from the contextual menu (shortcut Ctrl-E).
We can now average magnitude of the MSC across a frequency band of interest (15-20 Hz):
In the Process1 box, drag-and-drop the mscohere(0.6Hz,555win): EMGlft node, and add the process Frequency > Group in time or frequency bands with the parameters:
Select Group by frequency
Type cmc_band / 15, 20 / mean in the text box.
The resulting file mscohere(0.6Hz,555win): EMGlft | tfbands has only one MSC value for each sensor (the MSC average in the 15-20 Hz band). You may visualize the topography of this MSC statistic via three possible representations: 2D Sensor cap, 3D Sensor cap and 2D Disc, all accessible via a right-click over the MSC node. We clicked on sensor MRC21 below; it is shown in red.
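The band-averaging step itself is simple: for each sensor, average the MSC values of the frequency bins that fall inside the band. A NumPy sketch with illustrative (random) values and array sizes:

```python
# Sketch of "Group in frequency bands": average MSC across the 15-20 Hz band,
# producing one value per sensor (values and sizes are illustrative only).
import numpy as np

freqs = np.arange(0, 80, 0.6)                             # ~0.6 Hz frequency bins
msc = np.random.default_rng(2).random((151, freqs.size))  # sensors x frequencies
band = (freqs >= 15) & (freqs <= 20)                      # cmc_band mask
msc_band = msc[:, band].mean(axis=1)                      # one value per sensor
print(msc_band.shape)                                     # (151,)
```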
We can observe higher MSC values between the EMG signal and MEG sensor signals over the contralateral central sensors in the beta band. Unfortunately, sensor-level connectivity has the disadvantages of being difficult to interpret and prone to spurious results due to volume conduction.
In the next section we will compute coherence at the source level. To do this, we first need to estimate the source time series from the sensor data.
Source estimation with MEG signals
With source estimation we aim to find the brain sources that gave rise to the signals recorded at the sensors. Among the diverse source estimation methods available in Brainstorm, this tutorial uses the minimum-norm imaging method. A source grid (either the cortical surface or the full brain volume) is defined, and the minimum-norm method estimates the current at each point of the grid that explains the recorded sensor signals, favouring minimum-energy (L2-norm) solutions.
At each point of the source grid, the current dipole may point in an arbitrary direction in 3D space. For surface grids only, the orientation can be constrained to be normal to the cortical surface; this approach is based on anatomical observations of the cortex. The result is a smaller model that is faster to compute and display. A discussion on constrained vs. unconstrained sources can be found here.
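For unconstrained models, each grid point carries three orthogonal dipole components, and a common way to summarize them into a single time series is the vector norm across orientations (one of several possible "flattening" choices; array sizes below are illustrative, not Brainstorm's internals):

```python
# Sketch of "flattening" unconstrained sources: collapse the three dipole
# orientations at each grid point into one non-negative time series (norm).
import numpy as np

n_sources, n_times = 4, 6
rng = np.random.default_rng(3)
src = rng.standard_normal((n_sources, 3, n_times))  # sources x orientations x time
flat = np.linalg.norm(src, axis=1)                  # one time series per grid point
print(flat.shape)                                   # (4, 6)
```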
Before estimating the brain sources, we need to derive the noise covariance matrix of the sensors and the head model.
Noise covariance
For MEG, the recommendation is to extract basic noise statistics from empty-room recordings. However, when empty-room recordings are not available, as with this tutorial's data, resting-state data can be used as a proxy for MEG noise covariance. See the noise covariance tutorial for more details.
In the raw SubjectCMC_clean_notch_high_abs node, right-click over Raw | clean | notch(... and select Noise covariance > Compute from recordings. Please enter the following parameters:
Baseline: from 18 to 30 s
Select the Block by block option.
Copy the Noise covariance node to the SubjectCMC_clean_notch_high_abs folder. This can be done using the shortcuts Ctrl-C and Ctrl-V.
Head model
The head model, aka forward model, accounts for how neural electrical currents (in a source space) produce magnetic fields captured by sensors outside the head, considering the electromagnetic properties and geometry of the head tissues, independently of actual empirical measurements. As the head model depends on the source space, a distinct head model is required for the surface and the volume source spaces. Please refer to the head model tutorial for more in-depth explanations.
Surface
Instructions.
Volume
Instructions.
Coherence estimation (source level)
Surface
* Instructions for constrained * Instructions for unconstrained
Volume
* Instruction
Comparison
The results. Compare the three results, and the maximum point (How to find this?). Note: with unconstrained sources and the volume model, the sources are 3-dimensional. Coherence is computed between the reference signal (EMGlft) and each dimension; to plot it, the coherence value is then flattened as PROCEDURE
As an alternative, we could first flatten the unconstrained sources and then compute coherence; however, this is not the best approach. Show the results for the PCA.
Here we can add the results and recommendations.
As we mentioned, there is a coherence spectrum for each of the 15,000 sources, which is not practical. For that reason, the analysis is often performed with regions of interest (ROIs). The definition of ROIs is an open problem that is not solved here.
Coherence estimation (Scout level)
Here we will work only with the surface unconstrained data. We will use the xxx atlas for our calculations.
Place: EMGlft and sources, selected scouts, and after and before. Explain the difference between both metrics and the advantages regarding processing. If possible, add diagrams.
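The "before" vs. "after" options for the scout function can be sketched as follows (an illustrative NumPy/SciPy example with random signals, not Brainstorm's internals): "before" averages the source time series within a scout and then computes one coherence spectrum per scout, while "after" computes one coherence spectrum per source and then averages the spectra within the scout. "Before" is cheaper, since it requires far fewer coherence estimations.

```python
# "Before": average the scout's sources, then compute one coherence spectrum.
# "After": compute coherence for every source, then average the spectra.
import numpy as np
from scipy.signal import coherence

fs = 200
rng = np.random.default_rng(4)
emg = rng.standard_normal(fs * 10)             # reference signal (e.g., EMG)
scout = rng.standard_normal((5, fs * 10))      # 5 sources in one scout

# before: 1 coherence computation per scout
f, c_before = coherence(emg, scout.mean(axis=0), fs=fs, nperseg=fs)

# after: 1 coherence computation per source, then averaging
c_after = np.mean([coherence(emg, s, fs=fs, nperseg=fs)[1] for s in scout], axis=0)

# same spectrum length, but generally different values
print(c_before.shape == c_after.shape)
```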
Connectivity NxN (connectome)
Here we want to write about performing NxN connectivity with the scouts. This should not be done with individual sources, as it leads to a very large matrix that will not fit in RAM.
Also we need to define several parameters in our experiments:
- source estimation method
- source estimation space: surface or volume
- if surface: constrained or unconstrained sources
- how to select the scouts
- how to perform the aggregation across scouts (before or after)
- which function to use for the aggregation
All those are open questions that are not addressed in this tutorial nor in the literature.
MEG source imaging
We will perform source modelling using a distributed model approach for two possible source spaces: the cortex surface and the entire MRI volume.
For the cortical source model, the locations of the sources are constrained to the cortical surface obtained from the participant's anatomical MRI, as imported above. For the MRI volume approach, elementary sources are uniformly distributed across the entire brain volume. Before estimating the brain sources, we need to derive a head model and estimate the noise covariance of the recordings. Note that a distinct head model is required for each source space tested here (cortical surface vs. volume).
Head modelling
The head model accounts for how neural electrical currents produce magnetic fields captured by sensors outside the head, considering head tissues electromagnetic properties and geometry, independently of actual empirical measurements. Please refer to the head model tutorial for more in-depth explanations.
Cortical surface
In the SubjectCMC_clean_notch_high_abs node, right-click over CTF channels (191) and select Compute head model from the contextual menu. Run the process with the options as indicated below:
Comment = Overlapping spheres (surface)
Source space = Cortex surface
Forward model = Overlapping spheres.
The cortical head model will be derived for each of the 15,000 sources (surface vertices) defined when the subject anatomy was imported.
The (ICON) Overlapping spheres (surface) head model now appears in the database explorer.
MRI volume source model
In the SubjectCMC_clean_notch_high_abs node, right-click over the CTF channels (191) node and select Compute head model. Set the option values to:
Comment = Overlapping spheres (volume)
Source space = MRI volume
Forward model = Overlapping spheres.
In the Volume source grid window, specify the following parameters, which will produce about 12,200 elementary source grid points across the brain volume:
Select Regular grid and Brain
Grid resolution = 5 mm
The Overlapping spheres (volume) node is now added to the database explorer. The green color indicates this is the default head model for the current folder (this can be changed by simply double-clicking over the head model nodes).
Noise covariance
Basic noise statistics can be extracted (as recommended) from empty-room recordings; however, these are not available with the present tutorial data. Instead, we will assume that data segments prior to wrist extension can be used as proxies for MEG noise covariance. See the noise covariance tutorial for more details.
In the raw SubjectCMC_clean_notch_high_abs node, right-click over Raw | clean | notch(... and select Noise covariance > Compute from recordings. Please enter the following parameters:
Baseline: from 18 to 30 s
Select the Block by block option.
Lastly, copy the Noise covariance node to the SubjectCMC_clean_notch_high_abs folder along with the head model. This can be done using the shortcuts Ctrl-C and Ctrl-V.
Source estimation
Now that the head model(s) and noise covariance are available, we will use the minimum-norm imaging approach to solve the so-called inverse problem of source estimation. This will produce a large matrix called the imaging kernel, which is multiplied with the sensor data to produce estimates of the brain source time series. A different imaging kernel is derived for each of the head models produced above: surface and volume. See the source estimation tutorial for more details.
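The kernel-times-data operation amounts to a single matrix multiplication; a minimal sketch with illustrative shapes (15,000 vertices × 3 orientations for an unconstrained surface model, random values standing in for real data):

```python
# Sketch of applying an imaging kernel: source time series are obtained by
# multiplying the kernel with the sensor data (shapes are illustrative).
import numpy as np

n_sensors, n_times = 151, 100
n_sources = 15000 * 3                             # grid points x 3 orientations
rng = np.random.default_rng(5)
K = rng.standard_normal((n_sources, n_sensors))   # imaging kernel
data = rng.standard_normal((n_sensors, n_times))  # MEG sensor recordings

sources = K @ data                                # sources x time
print(sources.shape)                              # (45000, 100)
```

Storing the kernel instead of the full source matrix is why the source files in the database are lightweight "links": the multiplication is performed on the fly when sources are displayed or processed.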
Cortical surface
To compute the corresponding imaging kernel, right-click over the Overlapping spheres (surface) head model and select Compute sources [2018]. Enter the following parameters:
Minimum norm imaging
Current density map
Unconstrained
Comment = MN: MEG (surface)
The inversion kernel (ICON) MN: MEG (surface)(Unconstr) 2018 is now available in the database explorer.
MRI volume
Compute the corresponding imaging kernel by right-clicking over the Overlapping spheres (volume) head model and select Compute sources [2018], with the following parameters:
Minimum norm imaging
Current density map
Unconstrained
Comment = MN: MEG (volume)
The imaging kernel (ICON) MN: MEG (volume)(Unconstr) 2018 is now also available in the database explorer.
Note that now each trial is associated with two source link (ICON) nodes. One corresponds to the MN: MEG (surface)(Unconstr) 2018 kernel and the other to the MN: MEG (volume)(Unconstr) 2018 kernel.
Defining "scouts" (regions of interest)
To contain computational cost, we will now restrict the computation of coherence with the left EMG signal to selected MEG source activity. We will define regions of interest, which are known as scouts in Brainstorm. They can be defined as surface or volume scouts.
Surface scouts
Right click over MN: MEG (surface)(Unconstr) 2018 for one of the trials and select Cortical activations > Display on cortex. In the Surface tab, set the Amplitude slider to 100% to hide all the sources.
In the Scout tab, select from the menu Atlas > From subject anatomy > AAL3 (MNI-linear). This will create the From volume: AAL3 set of surface scouts. After you click on any scout label shown in the list, Brainstorm will show at the bottom the number of sources (vertices) within the region and an estimate of its surface area in cm2. Activate the (ICON) Show only the selected scouts option to show only the scout(s) you have selected from the list.
[TODO] Add a note that the definition of scouts is far from perfect, but can give a good idea of the surface projections of the MNI parcellations (described in the importing-anatomy section).
- Close the figure.
Volume scouts
Right click over MN: MEG (volume)(Unconstr) 2018 for one of the trials and select Cortical activations > Display on MRI (3D): SubjectCMC. In the Surface tab, set the Amplitude slider to 100% to hide the source map.
In the Scout tab, select the menu Atlas > From subject anatomy > AAL3 (MNI-linear). This will create a set of Volume 12203: AAL3 volume scouts. As with the surface scouts above, clicking on a scout in the list will show the number of elementary sources within it and its volume in cm3. Activate the (ICON) Show only the selected scouts option to show only the selected scout(s).
- Close the figure.
Estimating source-wise coherence
Surface scouts
We will use the search features of the Brainstorm database to select the source maps we want to include in the EMG-cortex coherence estimation. We will first select files related to the MN: MEG (surface)(Unconstr) 2018 imaging kernel.
Click on the magnifying glass (ICON) above the database explorer to open the database search panel, and select New search.
- Set the search parameters as shown below:
After pressing Search, a new tab called (surface) appears in the database explorer. This new tab contains the data filtered through the search query above.
Click the Process2 tab at the bottom of the main Brainstorm window and drag-and-drop the Left (192 files) trial group into the Files A box, then repeat for the Files B box. Select Process recordings for Files A, and Process sources for Files B. The logic is that we will extract the EMG traces (Files A) and the source time series (Files B) from the same files, and then compute coherence between these two sets. Note that the blue labels over the Files A and Files B boxes indicate that there are 185 "good trial" files per box.
Open the Pipeline editor:
Add the process Connectivity > Coherence AxB [2021] with the following parameters:
Time window = 0 - 1000 ms or check All file
Source channel (A) = EMGlft
Check Use scouts (B)
Select From volume: AAL3 in the drop-down list (these are surface scouts)
Select all the scouts (shortcut Ctrl-A)
Scout function = Mean
When to apply = Before
Do not Remove evoked responses from each trial
Magnitude squared coherence, Window length = 0.5 s
Overlap = 50%
Highest frequency = 80 Hz
Average cross-spectra.
Add the process File > Add tag with the following parameters:
Tag to add = (surface)
Select Add to file name
- Run the pipeline
Double-click on the resulting node mscohere(0.6Hz,555win): Left (#1) | (surface) to display the coherence spectra. Open the result node also as an image by selecting Display as image from its contextual menu.
To visualize the location of the scouts on the cortex, double-click one of the (surface) source links for any of the trials. In the Surface tab, set the Amplitude threshold to 100% to hide the cortical activations. Lastly, in the Scouts tab, select the From volume: AAL3 atlas in the drop-down list, select Show only the selected scouts and the Show/hide the scout labels. Note that all visualization windows are linked depending on which scout is selected in the image representation of the coherence results.
The large peak at 14.65 Hz in the coherence spectra corresponds to the Precentral R scout, which encompasses the right primary motor cortex, as expected.
Volume scouts
We will repeat the same approach for volume scouts. Initiate a file selection query related to the MN: MEG (volume)(Unconstr) 2018 imaging kernel.
- Click on the magnifying glass (ICON) above the database explorer to open the search panel, with the following specifications:
In the Process2 tab, drag and drop the Left (192 files) trial group into the Files A box and repeat for the Files B box. Select the Process recordings icon for Files A, and Process sources for Files B.
Open the Pipeline editor:
Add the process Connectivity > Coherence AxB [2021] with the following parameters:
Time window = 0 - 1000 ms or check All file
Source channel (A) = EMGlft
Check Use scouts (B)
Select Volume 12203: AAL3 in the drop-down list (these are volume scouts)
Select all the scouts (shortcut Ctrl-A)
Scout function = Mean
When to apply = Before
Do not Remove evoked responses from each trial
Magnitude squared coherence, Window length = 0.5 s
Overlap = 50%
Highest frequency = 80 Hz
Average cross-spectra.
Add the process File > Add tag with the following parameters:
Tag to add = (volume)
Select Add to file name
Run the pipeline
Double-click on the resulting node mscohere(0.6Hz,555win): Left (#1) | (volume) to display the coherence spectra. Also open the result node by selecting Display as image from its contextual menu.
Open one of the (volume) source links for any of the trials by selecting Display cortical activations > Display on MRI (3D): SubjectCMC from the contextual menu. In the Surface tab, set the Amplitude threshold to 100% to hide all the cortical activations. Finally, in the Scouts tab, select the Volume 12203: AAL3 atlas in the drop-down list, and enable Show only the selected scouts and Show/hide the scout labels.
Here too, the coherence peak at 14.65 Hz corresponds to the Precentral R scout, which encompasses the right primary motor cortex, as expected.
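In both pipelines, Scout function = Mean with When to apply = Before collapses each scout to a single time series before the coherence estimation, so only one spectrum per scout enters the cross-spectra. A minimal sketch of that reduction, with hypothetical array shapes (for the unconstrained volume model, each grid point contributes three orientation traces):

```python
import numpy as np

rng = np.random.default_rng(1)
n_points, n_times = 4, 200

# Hypothetical unconstrained volume sources: 3 orientations per grid
# point, flattened to (n_points * 3, n_times)
sources = rng.standard_normal((n_points * 3, n_times))

# Hypothetical scout covering grid points 0 and 1 (all 3 orientations)
scout_rows = [0, 1, 2, 3, 4, 5]

# "Scout function = Mean", applied *before* the coherence process:
# one trace per scout instead of one trace per source signal
scout_trace = sources[scout_rows].mean(axis=0)
```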
Brain-wise coherence estimation (no scouts)
We could either:
- Downsample the surface and create a sparser volume grid, or
- Refactor the coherence process to accumulate the auto- and cross-spectra outside of the function
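The refactoring idea above (accumulating the auto- and cross-spectra outside the coherence function) can be sketched as follows: by keeping only running sums across trials and windows, memory stays bounded by a single source-by-frequency array instead of growing with the number of trials. The shapes and signals below are placeholders, not the actual process interface:

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials, n_sources, n_freqs = 5, 8, 16

# Running sums, accumulated outside the coherence function
s_ref = np.zeros(n_freqs)                      # EMG auto-spectrum
s_src = np.zeros((n_sources, n_freqs))         # source auto-spectra
s_cross = np.zeros((n_sources, n_freqs), dtype=complex)

for _ in range(n_trials):
    # Placeholders for the windowed FFTs of one trial
    f_ref = rng.standard_normal(n_freqs) + 1j * rng.standard_normal(n_freqs)
    f_src = (rng.standard_normal((n_sources, n_freqs))
             + 1j * rng.standard_normal((n_sources, n_freqs)))
    s_ref += np.abs(f_ref) ** 2
    s_src += np.abs(f_src) ** 2
    s_cross += f_src * np.conj(f_ref)

# Coherence computed once, from the accumulated spectra
coherence = np.abs(s_cross) ** 2 / (s_src * s_ref)
```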
Comparison of cortical surface with FieldTrip and CAT
[TO DISCUSS among authors] This image and GIF are just for reference. They were obtained with all the surface sources, using FieldTrip- and CAT-derived surfaces.
Comparison for 14.65 Hz
Sweeping from 0 to 80 Hz
Script
[TO DO] Once we agree on all the steps above.
Additional documentation
Articles
Conway BA, Halliday DM, Farmer SF, Shahani U, Maas P, Weir AI, et al. Synchronization between motor cortex and spinal motoneuronal pool during the performance of a maintained motor task in man. The Journal of Physiology. 1995 Dec 15;489(3):917–24.
Kilner JM, Baker SN, Salenius S, Hari R, Lemon RN. Human Cortical Muscle Coherence Is Directly Related to Specific Motor Parameters. J Neurosci. 2000 Dec 1;20(23):8838–45.
Liu J, Sheng Y, Liu H. Corticomuscular Coherence and Its Applications: A Review. Front Hum Neurosci. 2019 Mar 20;13:100.
Tutorials
Tutorial: Volume source estimation
Tutorial: Functional connectivity
Forum discussions
[TO DO] Find relevant Forum posts.