= Corticomuscular coherence (MEG) =
'''[TUTORIAL UNDER DEVELOPMENT: NOT READY FOR PUBLIC USE]'''

''Authors: Raymundo Cassani, Francois Tadel & Sylvain Baillet.''

[[https://en.wikipedia.org/wiki/Corticomuscular_coherence|Corticomuscular coherence]] measures the degree of similarity between electrophysiological signals (MEG, EEG, ECoG sensor traces or source time series, especially over the contralateral motor cortex) and the EMG signal recorded from muscle activity during voluntary movement. This signal similarity is due mainly to the descending communication along corticospinal pathways between the primary motor cortex (M1) and muscles. For consistency and reproducibility across major software toolkits, this tutorial replicates the processing pipeline of the FieldTrip tutorial "[[https://www.fieldtriptoolbox.org/tutorial/coherence/|Analysis of corticomuscular coherence]]".

<<TableOfContents(2,2)>>
[[Tutorials/Connectivity#Coherence|Coherence]] measures the linear relationship between two signals in the frequency domain. Previous studies ([[https://dx.doi.org/10.1113/jphysiol.1995.sp021104|Conway et al., 1995]], [[https://doi.org/10.1523/JNEUROSCI.20-23-08838.2000|Kilner et al., 2000]]) have reported corticomuscular coherence effects in the 15–30 Hz range during maintained voluntary contractions.
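Outside of Brainstorm, the measure itself is easy to reproduce. The sketch below is a minimal, hypothetical illustration of magnitude-squared coherence using `scipy.signal.coherence` on two synthetic signals sharing a 20 Hz component; it mimics the MEG/EMG situation but is not the tutorial's actual pipeline:

```python
import numpy as np
from scipy.signal import coherence

# Synthetic stand-ins for a MEG sensor and an EMG channel: both contain
# the same 20 Hz oscillation plus independent noise (hypothetical data,
# not Brainstorm's implementation).
fs = 1000.0                       # sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)      # 10 s of data
rng = np.random.default_rng(0)
shared = np.sin(2 * np.pi * 20 * t)
meg = shared + 0.5 * rng.standard_normal(t.size)
emg = shared + 0.5 * rng.standard_normal(t.size)

# Welch-based magnitude-squared coherence: |Sxy|^2 / (Sxx * Syy), in [0, 1]
f, msc = coherence(meg, emg, fs=fs, nperseg=512)
f_peak = f[np.argmax(msc)]
print(f"peak MSC = {msc.max():.2f} near {f_peak:.1f} Hz")
```

The coherence spectrum peaks near the shared 20 Hz component and stays near zero elsewhere; this is the same type of spectrum Brainstorm computes below between the EMG channel and each MEG sensor.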
The dataset comprises MEG recordings (151-channel CTF MEG system) and bipolar EMG recordings (from the left and right extensor carpi radialis longus muscles) from one participant, who was asked to lift their hand and exert a constant force against a lever for about 10 seconds. The force was monitored by strain gauges on the lever. The participant performed two blocks of 25 trials using either the left or the right wrist. EOG signals were also recorded, which will be useful for detecting and attenuating ocular artifacts. We will analyze the data from the left-wrist trials in this tutorial. Replicating the pipeline with the right-wrist data is a good exercise to do next!
* '''Requirements''': Please make sure you have completed the [[Tutorials|get-started tutorials]] and that you have a working copy of Brainstorm installed on your computer. In addition, you need to [[Tutorials/SegCAT12#Install_CAT12|install the CAT12 plugin]] in Brainstorm. [[http://www.neuro.uni-jena.de/cat/index.html|CAT12]] will be used for MRI segmentation.
* Download `SubjectCMC.zip` from FieldTrip's FTP server:<<BR>> ftp://ftp.fieldtriptoolbox.org/pub/fieldtrip/tutorial/SubjectCMC.zip
* Unzip it in a folder that is not located in any of the Brainstorm folders (the program folder or the database folder).
* Launch Brainstorm (via the Matlab command line, or use the Matlab-free stand-alone version).
* Select the menu '''''File > Create new protocol'''''. Name it `TutorialCMC` and select the options:<<BR>> '''No, use individual anatomy''', <<BR>> '''No, use one channel file per acquisition run'''.

The next sections describe how to import the participant's anatomical data, review the raw data, manage event markers, pre-process the EMG and MEG signals, import the recordings, and compute coherence at the sensor, source and scout levels.

== Importing and processing anatomy data ==

* Right-click on the newly created '''TutorialCMC''' node in the Brainstorm data tree, then:<<BR>> '''''New subject > Subject01'''''.<<BR>> Keep the default options defined for the study (aka "protocol" in Brainstorm's jargon).
* Switch to the '''Anatomy''' view of the protocol (<<Icon(iconSubjectDB.gif)>>).
* Right-click on the '''Subject01''' node, then '''''Import MRI''''':
 * Select the adequate file format from the pull-down menu: '''All MRI file (subject space)'''
* This will open the '''MRI viewer''' showing the coronal, sagittal and axial views of the MRI. In addition, the [[CoordinateSystems|three anatomical fiducials]], the left and right pre-auricular points (LPA and RPA) and the nasion (NAS), are automatically identified. These fiducials are located near the left/right ears and just above the nose, respectively. Click on '''Save'''.
 . [[https://neuroimage.usc.edu/brainstorm/Tutorials/CorticomuscularCoherence?action=AttachFile&do=get&target=mri_viewer.png|{{attachment:mri_viewer.png|https://neuroimage.usc.edu/brainstorm/Tutorials/CorticomuscularCoherence?action=AttachFile&do=get&target=mri_viewer.png}}]]

We then need to segment the head tissues to obtain the surfaces required to derive realistic MEG [[Tutorials/HeadModel|head models (aka "forward models")]]. Here, we will perform the [[Tutorials/SegCAT12|MRI segmentation with CAT12]]; this process takes between 30 and 60 minutes.

* Right-click on the '''SubjectCMC''' MRI node (ICON), then select '''''MRI segmentation > CAT12: Cortex, atlases, tissues'''''. This will prompt a series of windows to set the parameters for the MRI segmentation; use the following parameters:
 * '''Number of vertices on the cortex surface''': `15000`
 * '''Compute anatomical parcellations?''': `Yes`
 * '''Compute cortical maps''': `Yes`
 . {{attachment:cat12.png||width="100%"}}

As part of the MRI segmentation pipeline with CAT12, the anatomy data is normalized in MNI space, and several anatomical parcellations (ICON) are computed. CAT12 also computes the '''scalp''' surface (ICON), and the '''cortex''' surfaces (ICON) for the white matter, the pial envelope and the midpoint between them. The default surface of each type is indicated in green. In addition to the surfaces, multiple atlases or anatomical parcellations (ICON) are also present. For further information on the anatomy files, see the [[Tutorials/ExploreAnatomy|Display the anatomy tutorial]].

 . {{attachment:import_result.png||width="40%"}}

* Right-click on the '''plial_15002''' surface and set it as the default.
* Double-click on the '''head mask''' and then on the '''plial_15002''' surface to display them.
* The registration between the MRI and the surfaces can also be checked with the contextual menu '''''MRI registration > Check MRI/surface registration...'''''
 . {{attachment:fig_anat_srf.png||width="60%"}}
 . {{attachment:fig_anat_registration.png}}

{{{#!wiki note
Additional '''MNI parcellation templates''' to define anatomical regions of the brain can be used in Brainstorm for MNI-normalized MRI anatomy. See [[Tutorials/DefaultAnatomy#MNI_parcellations|MNI parcellations]].
}}}

== Review the MEG and EMG recordings ==

=== Link the recordings to Brainstorm's database ===

* Switch now to the '''Functional data''' view (ICON).
* Right-click on the '''Subject01''' node, then '''''Review raw file''''':
 * Select the file format of the current data from the pull-down menu:<<BR>> '''MEG/EEG: CTF(*.ds; *.meg4; *.res4)'''
A new folder is now created in Brainstorm's database explorer and contains: * '''SubjectCMC''': a folder that provides access to the MEG dataset. Note the "RAW" tag over the icon of the folder, indicating the files contain unprocessed, continuous data. * '''CTF channels (191)''': a node containing '''channel information''' with all channel types, names locations, etc. The number of channels available (MEG, EMG, EOG etc.) is indicated between parentheses '''(here, 191)'''. * '''Link to raw file''' provides access to '''to the original data file'''. All the relevant metadata was read from the dataset and copied inside the node itself (e.g., sampling rate, number of time samples, event markers). Note that Brainstorm's logic is not to import/duplicate the raw unprocessed data directly into the database. Instead, Brainstorm provides a link to that raw file for further review and data extraction ([[Tutorials/ChannelFile#Review_vs_Import|more information]]). . {{attachment:review_raw.png}} <<BR>> === Display MEG helmet and sensors === * Right-click on the '''CTF channels (191)''' node, then select '''''Display sensors > CTF helmet''''' from the contextual menu and '''''Display sensors > MEG. '''''This will open a new display window showing the inner surface of the MEG helmet, and the lo MEG sensors respectively. Try [[Tutorials/ChannelFile#Display_the_sensors|additional display menus]]. || {{attachment:fig_helmet.png}} || || {{attachment:fig_sensors.png}} || === Reviewing continuous recordings === * Right-click on the '''Link to raw file''' node, then '''''Switch epoched/continuous''''' to convert the file to '''continuous''', a technical detail proper to CTF file formatting. * Right-click again on the '''Link to raw file''' node, then '''''MEG > Display time series''''' (or double-click on the node). 
This will open a new visualization window to explore data time series, also enabling the '''''Time''''' panel and the '''''Record''''' tab in the main Brainstorm window (see how to best use all controls in this panel and tab to [[Tutorials/ReviewRaw|explore data time series]]). * We will also display EMG traces by right-clicking on the '''Link to raw file''' node, then '''''EMG > Display time series'''''. . [[https://neuroimage.usc.edu/brainstorm/Tutorials/CorticomuscularCoherence?action=AttachFile&do=get&target=timeseries_meg_emg.png|{{attachment:timeseries_meg_emg.png|https://neuroimage.usc.edu/brainstorm/Tutorials/CorticomuscularCoherence?action=AttachFile&do=get&target=timeseries_meg_emg.png}}]] === Event markers === The colored dots above the data time series indicate [[Tutorials/EventMarkers|event markers]] (or triggers) saved with this dataset. The trial onset information of the left-wrist and right-wrist trials is saved in an auxiliary channel of the raw data named '''Stim'''. To add these markers, these events need to be decoded as follows: * While the time series figure is open, go to the '''''Record''''' tab and '''''File > Read events from channel'''''. From the options of the '''Read from channel''' process window, set '''Event channels''' = `Stim`, select '''Value''', and click '''Run'''. . {{attachment:read_evnt_ch.png}} This procedure creates new event markers now shown in the '''''Events''''' section of the tab. along with previous event categories. In this tutorial, we will only use events '''U1''' through '''U25''', which correspond to how each of the 25 left-wrist trials had been encoded in the study. We will now delete other events of no interest, and merge the left trial events under a single event category, for convenience. * Delete other events: select the events to delete in the event box/list with '''Ctrl+click''', then in the menu '''''Events > Delete group''''' and confirm. 
Alternatively, you can selected all events with '''Ctrl+A''' and deselect the '''U1''' to '''U25''' events by clicking on them. * To make sure we reproduce FieldTrip's tutorial, we need to reject trial #7: Select events '''U1''' to '''U6''' and '''U8''' to '''U25''', then from the '''Events '''menu, select''' Merge group''' and type in new label ('''Left_01''') to describe this as the left-wrist condition. . {{attachment:left_24.png}} These events correspond to the beginning of 10-s trials of left-wrist movements. We will compute coherence over 1-s epochs over the first 8 s of each trial. To that purpose, we will now create extra events to define these epochs. * Duplicate 7 times the '''Left''' events by selecting '''''Duplicate group''''' in the '''''Events''''' menu. The groups '''Left_02''' to '''Left_08''' will be created. * For each copy of the '''Left''' events, we will add a time offset of 1 s for '''Left02''', 2 s for '''Left03''', and so on. Select the '''Left '''event group to add a 1,000 ms time offset, by going to the menu '''''Events > Add time offset'', '''enter 1,000 in the text box. Repeat for each other group, entering 2,000, then 3,000 etc. . {{attachment:dup_offset.png}} * Once done for '''Left_08''', merge all these '''Left*''' events into a single '''Left '''category, and select '''''Save modifications''''' in the '''''File''''' menu in the '''''Record''''' tab. . {{attachment:left_192.png}} {{{#!wiki comment === Keep relevant recordings === As only data for the left wrist will be analyzed, we will import only the first '''330 s''' of the original file and rewrite that segment as a binary continuous file, a raw file. This will help to optimize computation times and memory usage. * In the Process1 box: Drag and drop the '''Link to raw file''' node inside '''SubjectCMC'''. 
* Run process '''Import > Import recordings > Import MEG/EEG: Time''':<<BR>> * '''Subject name'''=`Subject01`, '''Condition name'''= `Left`, '''Time window'''=`0.0 - 330.0 s`, '''Split recordings'''=`0`, and check the three remaining options.<<BR>> . {{attachment:import330_process.png||width="50%"}} * Right-click on the '''Raw(0.00s,330.00s)''' node inside the newly created '''Left''' condition and select '''Review as raw'''. This will crate the condition '''block001''' with the link to the created raw file. . {{attachment:review_as_raw.png||width="50%"}} * To avoid any confusion later, delete the conditions '''SubjectCMC''' (which is a link to the original file), and the condition '''Left'''. Select both folders containing and press Delete (or right-click '''File > Delete'''). }}} == Pre-process == {{{#!wiki note In this tutorial, we will analyze only the '''Left''' trials (left-wrist extensions). In the following sections, we will process only the first '''330 s''' of the recordings, where the left-wrist trials were performed. }}} Another idiosyncrasy of the present dataset is that the CTF MEG data were saved without the desired 3-rd order gradient compensation for optimal denoising. We will now apply this compensation as follows: * In the '''''Process1''''' box: Drag and drop the '''Link to raw file''' node. * Run process '''''Artifacts > Apply SSP & CTF compensation''''':<<BR>> . {{attachment:pro_ctf_compensation.png||width="50%"}} This process creates the '''SubjectCMC_clean''' folder that contains a copy of the '''channel file''' and a link to the raw file '''Raw | clean''', which points to the original data and to the fact that the 3-rd order gradient compensation will be applied. Brainstorm does not create a physical copy of the actual, large dataset at this stage. . 
{{attachment:tre_raw_clean.png||width="40%"}} === Removal of power line artifacts === We will start with identifying the spectral components of power line contamination of MEG and EMG recordings. * In the '''''Process1''''' box: Drag and drop the '''Raw | clean''' node. * Run process '''Frequency > Power spectrum density (Welch)''':<<BR>> * '''Time window''': `0 - 330 s` * '''Window length='''`10 s` * '''Overlap'''=`50%` * '''Sensor types'''=`MEG, EMG` . {{attachment:pro_psd.png||width="50%"}} * Double-click on the new '''PSD''' file to visualize the power spectrum density of the data.<<BR>> . {{attachment:psd_before_notch.png||width="70%"}} * The PSD plot shows two groups of sensors: EMG (highlighted in red above) and the MEG spectra below. Peaks at 50Hz and its harmonics (100, 150, 200Hz and above) correspond to the European power line, and are clearly visible. We will use notch filters to attenuate power line contaminants at 50, 100 and 150 Hz. * In the '''''Process1''''' box: Drag and drop the '''Raw | clean''' node. * Run the process '''''Pre-processing > Notch filter''''' with: <<BR>> * '''Sensor types''' = `MEG, EMG` * '''Frequencies to remove (Hz)''' = `50, 100, 150` . {{attachment:pro_notch.png||width="50%"}} A new '''raw''' folder named '''SubjectCMC_clean_notch''' is created. Estimate the PSD of these signals to appreciate the effect of the notch filters applied. As above, please remember to indicate a '''Time window''' restricted from 0 to 330 s in the options of the PSD process. . {{attachment:psd_after_notch.png||width="70%"}} === EMG pre-processing === Two typical pre-processing steps for EMG consist in high-pass filtering and rectifying. * In the '''''Process1''''' box: drag and drop the '''Raw | notch(50Hz 100Hz 150Hz)''' node. 
* Add the process '''''Pre-process > Band-pass filter''''' * '''Sensor types''' = `EMG` * '''Lower cutoff frequency''' = `10 Hz` * '''Upper cutoff frequency''' = `0 Hz` * Add the process '''''Pre-process > Absolute values''''' * '''Sensor types''' = `EMG` * Run the pipeline . {{attachment:emg_processing.png||width="100%"}} Two new folders '''SubjectCMC_clean_notch_high''' and '''SubjectCMC_clean_notch_high_abs''' are added to Brainstorm's database explorer. We can now safely delete folders that are not needed anymore: * Delete '''SubjectCMC_clean_notch''' and '''SubjectCMC_clean_notch_high '''by selecting both before pressing Delete (or right-click '''''File > Delete'''''). === MEG pre-processing === We need to remove more artifacts from the MEG traces via the: 1. '''Detection and removal of stereotypical artifacts with SSP''' 1. '''Detection of noisy (bad) data segments.''' ==== Detection and removal of artifacts with SSP (Signal Space Projection) ==== Stereotypical artifacts such eye blinks and heartbeats can be identified from their respective characteristic spatial distributions. Their contaminationn of MEG signals can then be attenuated specifically using Signal-Space Projections (SSPs). For more details, consult the dedicated tutorials about the [[Tutorials/ArtifactsDetect|detection]] and [[Tutorials/ArtifactsSsp|removal of artifacts with SSP]]. The present tutorial dataset features an EOG channel but no ECG. We will perform only the removal of eye blinks. * Display the MEG and EOG time series: Right-click on the pre-processed (for EMG) continuous file '''Raw | clean | notch(...''' (in the '''SubjectCMC_clean_notch_high_abs''' folder) then '''''MEG > Display time series''''' and '''''EOG > Display time series'''''. * In the '''Events''' section of the '''''Record''''' tab, select '''''Artifacts > Detect eye blinks''''', and use the parameters: * '''Channel name'''= `EOG` * '''Time window''' = `0 - 330 s` * '''Event name''' = `blink` . 
{{attachment:detect_blink_process.png||width="50%"}} * Three categories of blink events are created. Review the traces of EOG channels around a few of these events to ascertain they are related to eye blinks. In the present case, we note that the '''blink''' group contains genuine eye blinks, and that groups blink2 and blink3 capture saccade events. . {{attachment:blinks.png||width="70%"}} * To [[Tutorials/ArtifactsSsp|remove blink artifacts with SSP]] go to '''''Artifacts > SSP: Eye blinks''''', and use the parameters: * '''Event name'''=`blink` * '''Sensors'''=`MEG` * Check '''Compute using existing SSP/ICA projectors''' . {{attachment:ssp_blink_process.png||width="50%"}} * Display the time series and topographies of the first two (dominant) SSP components identified. In the present case, only the first SSP component can be clearly related to blinks. Select only component #1 for removal. . {{attachment:ssp_blink.png||width="100%"}} * Follow the same procedure for the other blink events ('''blink2''' and '''blink3'''). As mentioned above, none of the first two SSP components seem to be related to ocular artifacts. The figure below shows the visualization of the first two components for the '''blink2''' group. . {{attachment:ssp_blink2.png||width="100%"}} . We therefore recommend to unselect the '''blink2''' and '''blink3''' groups from the '''Select Active Projectors''' panel (see below) rather than removing spatial components which nature remains ambiguous. . {{attachment:ssp_active_projections.png||width="60%"}} * Click on the large crosshair at the top right of the main Brainstorm window to close all visualization windows. ==== Detection of "bad" data segments: ==== Here we will use the [[Tutorials/BadSegments#Automatic_detection|automatic detection of artifacts]] to identify data segments contaminated by e.g., large eye and head movements and muscle contractions. * Display the MEG and EOG time series. 
In the '''''Record''''' tab, select '''''Artifacts > Detect other artifacts''''' and enter the following parameters: * '''Time window''' = `0 - 330 s` * '''Sensor types'''=`MEG` * '''Sensitivity'''=`3` * Check both frequency bands '''1-7 Hz''' and '''40-240 Hz''' . {{attachment:detect_other.png||width="50%"}} We encourage users to review and validate the segments marked using this procedure. In the present case, the segments detected as bad clearly point at contaminated MEG data segments, which we will now label these as "bad". * Select the '''1-7Hz''' and '''40-240Hz''' event groups and select '''Events > Mark group as bad''' from the contextual menu. Alternatively, you can also rename the events created above and append the '''bad_''' prefix to their name: Brainstorm will automatically discard these data segments from further processing. . {{attachment:bad_other.png||width="50%"}} * Close all visualization windows and reply "Yes" to the save the modifications query. == Importing data epochs == At this point we are finished with the pre-processing of the EMG and MEG recordings. We will now extract and import specific data segments of interest into the Brainstorm database for further derivations. We refer to these segments as '''epochs''' or '''trials'''. As mentioned previously, we will focus on the '''Left''' (wrist) category of events. * Right-click on the filtered continuous file '''Raw | clean | notch(...''' (in the '''SubjectCMC_clean_notch_high_abs''' condition), then '''''Import in database'''''. . {{attachment:import_menu.png||width="40%"}} * Enter the following parameter values: * '''Time window''' = `0 - 330 s` * Check '''Use events''' and highlight the '''Left(x192)''' event group * '''Epoch time''' = `0 - 1000 ms` * Check '''Apply SSP/ICA projectors''' * Check '''Remove DC offset''' and select '''All recordings''' . 
 . {{attachment:import_options.png||width="80%"}}

A new folder '''SubjectCMC_clean_notch_high_abs''', without the "raw" indication, is created for '''Subject01'''. Let's rename it as '''SubjectCMC_preprocessed'''.

 * Select the folder and press '''[F2]''' to set the new name. This action can also be done with the contextual menu '''File > Rename'''.

The new folder contains a copy of the '''channel file''' from the original raw file, and individual trials tagged as '''Left''' in a new trial group. Expand the trial group and note that some trials are marked with a question mark in a red circle (ICON). These indicate trials that overlap with the '''bad''' segments identified in the previous section. All the bad trials are automatically ignored for further processing, whenever dropped into the '''''Process1''''' and '''''Process2''''' tabs.

 . {{attachment:trials.png||width="40%"}}

To get a sense of the signals after pre-processing, plot the MEG signal of one sensor overlying the right motor cortex (MRC21) and the EMG signals, both for trial 1. Note that these traces are similar to the ones obtained in the [[https://www.fieldtriptoolbox.org/tutorial/coherence/|FieldTrip tutorial]].

 . {{attachment:meg_trial_1.png||width="70%"}}
 . {{attachment:emg_trial_1.png||width="70%"}}

== Coherence 1xN (sensor level) ==
We will now compute the '''magnitude squared coherence (MSC)''' between the '''left EMG''' signal and each MEG sensor signal.

 * In the '''''Process1''''' box, drag and drop the '''Left (192 files)''' trial group. Note that the number between square brackets is '''[185]''', as the 7 '''bad''' trials will be ignored by the MSC process.
 . {{attachment:dragdrop_trialgroup.png||width="40%"}}

 * Run the process '''''Connectivity > Coherence 1xN [2021]''''' with the following parameters:
  * '''Time window''' = `0 - 1000 ms`, or check '''All file'''
  * '''Source channel''' = `EMGlft`
  * Do not check '''Include bad channels''' nor '''Remove evoked response'''
  * '''Magnitude squared coherence'''
  * '''Window length for PSD estimation''' = `0.5 s`
  * '''Overlap for PSD estimation''' = `50%`
  * '''Highest frequency of interest''' = `80 Hz`
  * '''Average cross-spectra of input files (one output file)'''
 * More details on the '''Coherence''' process can be found in the [[Tutorials/Connectivity#Coherence|connectivity tutorial]].

 . {{attachment:coh_meg_emgleft.png||width="40%"}}

 * Double-click on the resulting node '''mscohere(0.6Hz,555win): EMGlft''' to display the MSC spectra. Click on the maximum peak in the 15 to 20 Hz range, and press `Enter` to plot it in a new figure. This spectrum corresponds to channel '''MRC21''', and shows a large peak at 17.58 Hz. You can also use the frequency slider (under the '''''Time''''' panel) to explore the MSC output more precisely across frequencies.
 * Right-click on the spectrum and select '''2D Sensor cap''' for a topographical representation of the magnitude of the coherence results across the sensor array. You may also use the shortcut `Ctrl-T`. The sensor locations can be displayed with a right-click and by selecting '''''Channels > Display sensors''''' from the contextual menu (shortcut `Ctrl-E`).

 . {{attachment:res_coh_meg_emgleft.png||width="80%"}}

We can now average the magnitude of the MSC across a frequency band of interest (15-20 Hz):

 * In the '''''Process1''''' box, drag and drop the '''mscohere(0.6Hz,555win): EMGlft''' node, and add the process '''''Frequency > Group in time or frequency bands''''' with the parameters:
  * Select '''Group by frequency'''
  * Type `cmc_band / 15, 20 / mean` in the text box.
 . {{attachment:pro_group_freq.png||width="40%"}}

The resulting file '''mscohere(0.6Hz,555win): EMGlft | tfbands''' contains only one MSC value per sensor (the MSC average in the 15-20 Hz band). You may visualize the topography of this MSC statistic via 3 possible representations: '''2D Sensor cap''', '''3D Sensor cap''' and '''2D Disc''', which are all accessible via a right-click over the MSC node. We clicked on sensor '''MRC21''' below; it is shown in red.

 . {{attachment:res_coh_meg_emgleft1520.png||width="100%"}}

We can observe higher MSC values between the EMG signal and the MEG sensors over the contralateral central region in the beta band. Unfortunately, [[Tutorials/Connectivity#Sensor-level|sensor-level connectivity presents the disadvantages]] of limited interpretability and susceptibility to spurious results due to volume conduction. In the next sections we will compute coherence at the source level. To do so, we first need to estimate the source time series from the sensor data.

== MEG source modelling ==
We will perform source modelling using a [[Tutorials/HeadModel#Dipole_fitting_vs_distributed_models|distributed model]] approach for two possible source spaces: the '''cortex surface''' and the entire '''MRI volume'''. In the '''surface''' space, the sources are located on the cortical surface obtained from the participant's MRI. In the '''volume''' space, the source grid consists of elementary sources uniformly distributed across the entire brain volume. Before estimating the brain sources, we need to derive the sensor '''noise covariance matrix''' and the '''head model'''.

=== Noise covariance ===
The [[Tutorials/NoiseCovariance#The_case_of_MEG|recommendation for MEG]] is to extract basic noise statistics from empty-room recordings. However, when empty-room recordings are not available, as with this tutorial's data, resting-state data can be used as a proxy for the MEG noise covariance.
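Conceptually, the noise covariance is simply the channel-by-channel covariance of the selected baseline segment, accumulated block by block after removing each block's mean. The following numpy sketch illustrates the idea with simulated data; the dimensions and block size are hypothetical and this is not Brainstorm's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n_chan = 1200, 191                             # CTF sampling rate, channel count
baseline = rng.standard_normal((n_chan, 12 * fs))  # stand-in for the 18-30 s baseline

block = 2 * fs                                     # process the baseline in 2 s blocks
S = np.zeros((n_chan, n_chan))
n_samples = 0
for start in range(0, baseline.shape[1] - block + 1, block):
    seg = baseline[:, start:start + block]
    seg = seg - seg.mean(axis=1, keepdims=True)    # remove each block's DC offset
    S += seg @ seg.T                               # accumulate outer products
    n_samples += seg.shape[1]

noise_cov = S / (n_samples - 1)                    # (191, 191) covariance matrix
```

With unit-variance simulated noise, the diagonal of `noise_cov` is close to 1, and the matrix is symmetric, as any covariance matrix must be.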
See the [[Tutorials/NoiseCovariance|noise covariance tutorial]] for more details.

 * In the raw '''SubjectCMC_clean_notch_high_abs''' node, right-click over '''Raw | clean | notch(...''' and select '''''Noise covariance > Compute from recordings'''''. Enter the following parameters:
  * '''Baseline:''' from `18 to 30 s`
  * Select the '''Block by block''' option.

 . {{attachment:pro_noise_cov.png||width="60%"}}

 * Copy the '''Noise covariance''' (ICON) node to the '''SubjectCMC_preprocessed''' folder. This can be done with the shortcuts `Ctrl-C` and `Ctrl-V`.

 . {{attachment:tre_covmat.png||width="50%"}}

=== Head model ===
The [[Tutorials/HeadModel|head model]], aka forward model, accounts for how neural electrical currents (in a source space) produce magnetic fields captured by sensors outside the head, considering the electromagnetic properties and geometry of the head tissues, independently of actual empirical measurements. As the head model depends on the source space, a distinct head model is required for the surface and volume source spaces. Please refer to the [[Tutorials/HeadModel|head model tutorial]] for more in-depth explanations.

==== Surface ====
 * Go to the '''Anatomy''' view of the database and verify that the '''pial_15002V''' surface is the default (green characters) cortex surface. Otherwise, right-click on it and select '''Set as default cortex''' in the contextual menu.
 * Go back to the '''Functional data''' view of the database, and inside the '''SubjectCMC_preprocessed''' folder, right-click over '''CTF channels (191)''' and select '''Compute head model''' from the contextual menu. Run the process with the options indicated below:
  * '''Comment''' = `Overlapping spheres (surface)`
  * '''Source space''' = `Cortex surface`
  * '''Forward model''' = `MEG Overlapping spheres`

The cortical head model will be derived for each of the ~15,000 sources (surface vertices) defined in the default cortex.
 . {{attachment:pro_head_model_srf.png||width="40%"}}

The '''Overlapping spheres (surface)''' head model (ICON) now appears in the database explorer.

==== Volume ====
 * In the '''SubjectCMC_preprocessed''' folder, right-click over the '''CTF channels (191)''' node and select '''''Compute head model'''''. Set the option values to:
  * '''Comment''' = `Overlapping spheres (volume)`
  * '''Source space''' = `MRI volume`
  * '''Forward model''' = `Overlapping spheres`

 . {{attachment:pro_head_model_vol.png||width="40%"}}

 * In the '''Volume source grid''' window, specify the following parameters, which will produce around '''11,500''' source grid points across the brain volume:
  * Select '''Regular grid''' and '''Brain'''
  * '''Grid resolution''' = `5 mm`

 . {{attachment:pro_grid_vol.png||width="50%"}}

The '''Overlapping spheres (volume)''' head model is now added to the database explorer. The green color indicates that this is the default head model for the current folder (this can be changed by simply double-clicking over the head model nodes).

 . {{attachment:tre_head_models.png||width="50%"}}

== Source estimation ==
Now that the '''noise covariance''' and '''head model(s)''' are available, we will perform [[Tutorials/SourceEstimation|source estimation]] to identify the brain sources that gave rise to the signals recorded at the sensors. Among the diverse [[Tutorials/SourceEstimation#Method|source estimation methods available in Brainstorm]], we use the '''minimum-norm imaging''' method in this tutorial. The minimum-norm method estimates the linear combination of the currents at each point of the source grid that explains the recorded sensor signals, favouring minimum-energy (L2-norm) solutions. As a result, a large matrix called the '''imaging kernel''' is obtained. By multiplying the imaging kernel with the sensor data, it is possible to obtain estimates of the brain source time series.
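The core algebra can be sketched as follows. This is a toy example with a random gain matrix, an identity noise covariance and a crude regularization parameter; the actual Brainstorm implementation additionally handles noise whitening, depth weighting and source orientations:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_sources, n_times = 191, 500, 600    # toy dimensions

G = rng.standard_normal((n_sensors, n_sources))  # gain (forward) matrix
C_noise = np.eye(n_sensors)                      # noise covariance (identity here)
lam = np.trace(G @ G.T) / n_sensors              # crude regularization parameter

# Minimum-norm imaging kernel: K = G' (G G' + lambda * C)^-1
K = G.T @ np.linalg.inv(G @ G.T + lam * C_noise)

# Multiplying the kernel with the sensor data yields the source time series
meg_data = rng.standard_normal((n_sensors, n_times))
source_ts = K @ meg_data                         # shape: (n_sources, n_times)
```

Note that the kernel itself is small (n_sources x n_sensors) compared with the full source time series, which is why storing the kernel and applying it on demand is attractive.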
A different imaging kernel is derived for each of the head models we have produced above: '''surface''' and '''volume'''. See the [[Tutorials/SourceEstimation|source estimation tutorial]] for more details.

{{{#!wiki note
'''Each dipole in the source grid may point arbitrarily in any direction in 3D space.''' <<BR>><<BR>> '''Only for surface grids''', the dipole orientation can be fixed to be normal to the cortical surface; this approach is based on anatomical observations of the brain cortex. The result is a smaller model that is faster to compute and display. <<BR>> A discussion on '''constrained''' vs '''unconstrained''' sources is presented [[Tutorials/SourceEstimation#Why_does_it_look_so_noisy.3F|here]].
}}}

=== Surface ===
Here we will estimate the sources in the surface space for '''constrained''' (normal to the cortex) and '''unconstrained''' dipole orientations.

 * Right-click on the '''Overlapping spheres (surface)''' head model and select '''Compute sources [2018]'''. Enter the following parameters:
  * '''Minimum norm imaging'''
  * '''Current density map'''
  * '''Constrained: Normal to the cortex'''
  * '''Comment''' = `MN: MEG (surface)`
 * Repeat the previous step, but this time select '''Unconstrained''' in the Dipole orientations field.

|| {{attachment:pro_sources_srfc.png}} || || {{attachment:pro_sources_srfu.png}} ||

The imaging kernels (ICON) '''MN: MEG (surface)(Constr) 2018''' and '''MN: MEG (surface)(Unconstr) 2018''' are now available in the database explorer.

 . {{attachment:tre_sources_srf.png||width="40%"}}

=== Volume ===
To compute the imaging kernel for the volume source space:

 * Right-click on the '''Overlapping spheres (volume)''' head model and select '''Compute sources [2018]''', with the following parameters:
  * '''Minimum norm imaging'''
  * '''Current density map'''
  * '''Unconstrained'''
  * '''Comment''' = `MN: MEG (volume)`
 . {{attachment:pro_sources_vol.png||width="40%"}}

At this point the imaging kernel (ICON) '''MN: MEG (volume)(Unconstr) 2018''' is also available in the database explorer.

 . {{attachment:tre_sources_vol.png||width="40%"}}

-----

Note that each trial is associated with '''three''' source link (ICON) nodes, which correspond to each of the imaging kernels obtained above.

 . {{attachment:gui_inverse_kernel.png||width="60%"}}

{{{#!wiki tip
The '''source link''' nodes are not files containing the source time series. Instead, each link instructs Brainstorm to load the corresponding MEG recordings, load the respective inverse kernel, and multiply the two on the fly to generate the source time series. This approach saves space on the hard drive.
}}}

== Coherence 1xN (source level) ==
Now that we have computed the source time series, it is time to compute coherence between the EMG signal and the brain sources obtained with each of the imaging kernels. Let's start with sources from the '''MN: MEG (surface)(Constr)''' kernel:

 * To select the source maps we want to include in the coherence estimation, click on the [[Tutorials/PipelineEditor#Search_Database|Search Database]] button (ICON), and select '''''New search'''''.
 * Set the search parameters as shown below, and click on '''Search'''.

 . {{attachment:gui_search_srf.png||width="70%"}}

This will create a new tab in the database explorer, containing '''only''' the files that match the search criteria.

 . {{attachment:tre_search_srf.png||width="40%"}}

 * Click the '''Process2''' tab at the bottom of the main Brainstorm window and drag and drop the '''Left (192 files)''' trial group into the '''Files A''' box, then repeat for the '''Files B''' box. Select '''Process recordings''' (ICON) for Files A, and '''Process sources''' (ICON) for Files B.
The logic is that we will extract the EMG signal (Files A side) and the source time series (Files B side) from the same files, and then compute coherence between these two sets. Note that the blue labels over the '''Files A''' and '''Files B''' boxes indicate that there are 185 "good trial" files per box.

 . {{attachment:process2.png||width="80%"}}

Open the '''Pipeline editor''':

 * Add the process '''Connectivity > Coherence AxB [2021]''' with the following parameters:
  * '''Time window''' = `0 - 1000 ms`, or check '''All file'''
  * '''Source channel (A)''' = `EMGlft`
  * Uncheck '''Use scouts (B)'''
  * Do not check '''Remove evoked responses from each trial'''
  * '''Magnitude squared coherence''', '''Window length''' = `0.5 s`
  * '''Overlap''' = `50%`
  * '''Highest frequency''' = `80 Hz`
  * '''Average cross-spectra'''
 * Add the process '''File > Add tag''' with the following parameters:
  * '''Tag to add''' = `(surface)(Constr)`
  * Select '''Add to file name'''
 * Run the pipeline

|| {{attachment:pro_coh_srf.png}} || || {{attachment:pro_coh_srf2.png}} ||

 * Once the processing is finished, go to the '''Database''' tab of the database explorer and refresh it (with '''[F5]''') to show the resulting 1xN connectivity file (ICON) '''mscohere(0.6Hz,555win): Left (#1) | (surface)(Constr)'''.
 * Repeat the steps above to compute the EMG-source coherence for the sources from the '''MN: MEG (surface)(Unconstr)''' and '''MN: MEG (volume)(Unconstr)''' kernels. Do not forget to update the '''search criteria''' and the '''tag''' to be added to the result.

=== Results (Surface) ===
Double-click the 1xN connectivity files for the '''(surface)''' source space to show the results on the cortex. If you are not familiar with the options in the cortex figures, check [[Tutorials/SourceEstimation#Display:_Cortex_surface|Display: Cortex surface]]. Find the location and frequency with the highest coherence value.
 * Adjust the '''amplitude threshold''' to '''99%'''.
 * Explore the coherence spectra with the '''frequency slider'''.

The highest coherence value is located on the '''right primary motor cortex''' (precentral gyrus) at 14.65 Hz, for the analyses using both constrained and unconstrained sources. Set the '''amplitude threshold''' to '''0%''' to see the extent of the high coherence values.

|| {{attachment:res_coh_srfc.png}} ||
||<style="text-align:center">MSC @ 14.65Hz (surface)(Constr) ||
|| {{attachment:res_coh_srfu.png}} ||
||<style="text-align:center">MSC @ 14.65Hz (surface)(Unconstr) ||

We observe that the results obtained with constrained and unconstrained sources agree in the location and frequency of the peak coherence. The main difference is that coherence values obtained with unconstrained sources appear smoother; this is caused by the maximum aggregation performed across directions (explained in detail in a later section).

Finally, right-click on any of the cortex figures and select '''''Get coordinates'''''. Then click on the right motor cortex with the crosshair cursor that appears. The [[CoordinateSystems|SCS]] coordinates will be useful in the next section. <<BR>> SCS coordinates: '''X: 38.6''', '''Y: -21.3''' and '''Z: 115.5'''

{{attachment:res_get_coordinates.png||width="80%"}}

=== Results (Volume) ===
Double-click the 1xN connectivity file for the '''(volume)''' source space. Note that this time the results are shown in the '''MRI viewer''' rather than on the cortical surface.
 * Set the '''frequency slider''' to 14.65 Hz.
 * Set the SCS coordinates to '''X: 38.6''', '''Y: -21.3''' and '''Z: 115.5'''.
 * Set the '''Transparency''' of the coherence values (Data) in the '''''Surface''''' tab to '''30%'''.

||<style="text-align:center"> {{attachment:res_coh_vol.png}} ||
||<style="text-align:center">MSC @ 14.65Hz (volume)(Unconstr) ||

We note that the results obtained with constrained (surface) and unconstrained (surface and volume) sources agree with each other in the location and frequency of the peak coherence. Moreover, they are in agreement with our hypothesis, previous results in the literature [REFS], and the results presented in the [[https://www.fieldtriptoolbox.org/tutorial/coherence/|FieldTrip tutorial]].

=== Coherence with constrained and unconstrained sources ===
For '''constrained''' sources, each vertex in the source grid is associated with '''ONE''' time series; as such, when coherence is computed with the EMG signal (also one time series), the result is '''ONE''' coherence spectrum per vertex. In other words, for each frequency bin there is one coherence brain map.

FIGURE (Diagram needed)

In the case of '''unconstrained''' sources, each vertex in the grid is associated with '''THREE''' time series, corresponding to the X, Y and Z directions. Thus, when coherence is computed with the EMG signal (one time series), there are '''THREE''' coherence spectra per vertex. This complicates the representation on the brain; therefore these '''THREE''' coherence spectra need to be '''flattened''' into one, by taking, for each frequency bin, the maximum across directions, resulting in one coherence spectrum per vertex.

FIGURE (Diagram needed)

{{{#!wiki caution
An alternative approach presented in the literature (REF) to address the 3-dimensional nature of unconstrained sources consists in flattening the vertex X, Y and Z time series before the coherence computation, resulting in a similar case as the constrained sources.
Common methods for this flattening include '''PCA''' (only the first component is kept) and the [[https://en.wikipedia.org/wiki/Norm_(mathematics)|(Euclidean) norm]]. This flattening of the time series can be performed in Brainstorm with the process: Sources > '''Unconstrained to flat map'''.

 * Flattened sources are saved as full source files, rather than as recordings + kernel links.
 * We have tested this flattening approach with simulations and with the real data from this tutorial, and we have found detrimental effects on the expected results.
}}}

== Coherence 1xN (scout level) ==
So far we have computed coherence at the source level; thus, a coherence spectrum is computed for each of the 15,002 source points. This high dimensionality hinders further analysis of the results. Therefore, a common strategy is to reduce the dimensionality of the source space by using a surface- or volume-parcellation scheme; in Brainstorm jargon this is an '''atlas''' made of '''scouts'''. See the [[Tutorials/Scouts|scout tutorial]] for detailed information on atlases and scouts in Brainstorm. Under this approach, instead of providing one result (coherence spectrum) per source vertex, one result is computed for each scout.

When computing coherence (or any other connectivity metric) at the scout level, it is necessary to provide two parameters that define how the data is aggregated per scout:

 * The '''scout function''' (mean is often used), and
 * When the within-scout aggregation takes place, either '''before''' or '''after''' the coherence estimation:
  * '''Before''': The scout function is applied, for each direction, to the source time series of the vertices that make up a scout, resulting in one time series per direction per scout. The scout time series are then used to compute coherence with the reference signal (EMG in this tutorial), and the coherence spectra for each scout are aggregated across directions as shown in section XX, yielding one coherence spectrum per scout.
 . FIGURE (Diagram needed)

  * '''After''': Coherence is computed between the reference signal and each direction of each vertex's source time series (as shown in section XX). The scout function is then applied to the coherence spectra of each direction of the vertices within a scout, and finally these spectra are aggregated across directions to obtain one coherence spectrum per scout.

 . FIGURE (Diagram needed)

{{{#!wiki tip
As can be seen, the '''After''' option takes longer and uses more resources, as it computes the coherence spectrum for every vertex in the scouts before the coherence spectra are aggregated.
}}}

<<BR>>

Let's now compute coherence at the scout level, using '''mean''' as the scout function alongside the '''Before''' option. We will use the '''Schaefer 100''' parcellation atlas on the results from constrained sources.

 * Use Search Database (ICON) to select the '''Left''' trials with their respective '''(surface)(Constr)''' source maps, as shown in the previous section.
 * On the '''Process2''' tab, drag and drop the '''Left (192 files)''' trial group into the '''Files A''' and '''Files B''' boxes. Select '''Process recordings''' (ICON) for Files A, and '''Process sources''' (ICON) for Files B. There should be '''185''' files on each side.

 . {{attachment:process2.png||width="80%"}}

Open the '''Pipeline editor''':

 * Add the process '''Connectivity > Coherence AxB [2021]''' with the following parameters:
  * '''Time window''' = `0 - 1000 ms`, or check '''All file'''
  * '''Source channel (A)''' = `EMGlft`
  * Check '''Use scouts (B)'''
  * From the menu at the right, select '''Schaefer_100_17net'''
  * Select all the scouts
  * '''Scout function''': `Mean`
  * '''When to apply the scout function''': `Before`
  * Do not check '''Remove evoked responses from each trial'''
  * '''Magnitude squared coherence''', '''Window length''' = `0.5 s`
  * '''Overlap''' = `50%`
  * '''Highest frequency''' = `80 Hz`
  * '''Average cross-spectra'''
 * Add the process '''File > Add tag''' with the following parameters:
  * '''Tag to add''' = `(surface)(Constr)`
  * Select '''Add to file name'''
 * Run the pipeline

|| {{attachment:pro_coh_srf_sct_bef.png}} || || {{attachment:pro_coh_srf_sct_bef2.png}} ||

Double-click the resulting file. This time it is not displayed on the cortex; instead, the coherence spectrum for each scout is shown. Right-click on the figure to also display it as an image.

IMAGES

{{{#!wiki caution
The choice of the optimal parcellation scheme for the source space is not easy. <<BR>> A reasonable choice is a parcellation based on anatomy, for example the Brodmann parcellation. <<BR>> In Brainstorm, these atlases are imported as scouts (cortical regions of interest) and saved directly in the surface files, as explained in the [[Tutorials/Scouts|scout tutorial]].
}}}

{{{#!wiki caution
BRAINSTORM TEAM <<BR>> Due to the current implementation of bst_connectivity, the full source maps for all 185 trials are loaded in memory; thus, to replicate the '''After''' option for the (surface)(Constr) case alone, ~30 GB of RAM are needed! (Unconstrained sources take 3 times that.)
}}}

== Coherence NxN (scout level) ==
So far we have computed coherence between a reference signal (EMG) and the brain sources, either at the source level or aggregated into scouts. Depending on the experimental setup, however, it may be necessary to compute a connectivity metric between each pair of brain ROIs, resulting in a '''connectome'''. In theory this could be computed at the full source level, but it is not practical because of the size of the results; the common approach is therefore to use scouts, as in the previous section. Here we describe how to perform NxN connectivity with scouts.
Computing NxN coherence directly on the sources is not advisable, as it leads to a very large matrix that will not fit in RAM. Moreover, in our previous examples the reference signal was one-dimensional; here, with unconstrained sources, both signals in each pair are 3-dimensional (X, Y and Z), which makes the processing more complex, as there are 9 (3x3) coherence spectra per pair:

 * '''Before''': The scout function is applied first, giving 3 time series per scout; coherence then produces 3x3 spectra for each scout pair, which are finally aggregated across directions.

 . FIGURE (Diagram needed)

 * '''After''': Coherence is computed between each pair of sources, giving 9 coherence spectra per source pair! The results are then aggregated over all the vertex pairs of each scout pair, giving 9 coherence spectra per scout pair, which are finally aggregated across directions.

 . FIGURE (Diagram needed)

Here the '''After''' option requires even more memory than in the previous section; thus we will only perform the '''Before''' option.

<<TAG(Advanced)>>
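The direction aggregation described above reduces, for each frequency bin, the 3 spectra (1xN case) or 3x3 spectra (NxN case) to a single spectrum by taking the maximum across directions. A minimal numpy sketch with hypothetical coherence values (the array shapes and values are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
n_freq = 41                                  # e.g., 0-80 Hz in ~2 Hz bins

# 1xN, unconstrained: EMG vs the x, y, z time series of one vertex
coh_xyz = rng.random((3, n_freq))            # three coherence spectra
coh_vertex = coh_xyz.max(axis=0)             # max across directions, per frequency bin

# NxN, unconstrained scouts ("Before"): 3 directions on each side of the pair,
# i.e. 3x3 = 9 spectra per scout pair, flattened the same way
coh_pair = rng.random((3, 3, n_freq))
coh_scouts = coh_pair.max(axis=(0, 1))       # one spectrum per scout pair
```

Because the maximum is taken independently in each frequency bin, the flattened spectrum upper-bounds every individual direction spectrum, which is why unconstrained coherence maps look smoother and slightly inflated compared with constrained ones.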
* Liu J, Sheng Y, Liu H. <<BR>> [[https://doi.org/10.3389/fnhum.2019.00100|Corticomuscular Coherence and Its Applications: A Review]]. <<BR>> Front Hum Neurosci. 2019 Mar 20;13:100. |
* Tutorial: [[Tutorials/Connectivity|Functional connectivity]] | |
* Forum: Minimum norm units (pA.m): [[http://neuroimage.usc.edu/forums/showthread.php?1246-Doubt-about-current-density-units-pA.m-or-pA-m2|http://neuroimage.usc.edu/forums/showthread.php?1246]] |
Corticomuscular coherence (MEG)
[TUTORIAL UNDER DEVELOPMENT: NOT READY FOR PUBLIC USE]
Authors: Raymundo Cassani, Francois Tadel & Sylvain Baillet.
Corticomuscular coherence measures the degree of similarity between electrophysiological signals (MEG, EEG, ECoG sensor traces or source time series, especially over the contralateral motor cortex) and the EMG signal recorded from muscle activity during voluntary movement. This signal similarity is due mainly to the descending communication along corticospinal pathways between primary motor cortex (M1) and muscles. For consistency and reproducibility purposes across major software toolkits, the present tutorial replicates the processing pipeline "Analysis of corticomuscular coherence" by FieldTrip.
Contents
- Background
- Dataset description
- Download and installation
- Importing and processing anatomy data
- Review the MEG and EMG recordings
- Pre-process
- Importing data epochs
- Coherence 1xN (sensor level)
- MEG source modelling
- Source estimation
- Coherence 1xN (source level)
- Coherence 1xN (scout level)
- Coherence NxN (scout level)
- Script
- Additional documentation
Background
Coherence measures the linear relationship between two signals in the frequency domain. Previous studies (Conway et al., 1995, Kilner et al., 2000) have reported corticomuscular coherence effects in the 15–30 Hz range during maintained voluntary contractions.
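To make this concrete, the following numpy sketch (not part of the original tutorial) estimates magnitude-squared coherence between two simulated signals that share an 18 Hz component, using Welch-averaged cross-spectra with 0.5 s windows and 50% overlap, the same spectral parameters used later in this tutorial:

```python
import numpy as np

def msc(x, y, fs, win_s=0.5, overlap=0.5):
    """Magnitude-squared coherence from Welch-averaged (cross-)spectra."""
    n = int(win_s * fs)
    step = int(n * (1 - overlap))
    win = np.hanning(n)
    sxx = syy = 0.0
    sxy = 0.0 + 0.0j
    for start in range(0, len(x) - n + 1, step):
        X = np.fft.rfft(win * x[start:start + n])
        Y = np.fft.rfft(win * y[start:start + n])
        sxx = sxx + np.abs(X) ** 2            # auto-spectrum of x
        syy = syy + np.abs(Y) ** 2            # auto-spectrum of y
        sxy = sxy + X * np.conj(Y)            # cross-spectrum, summed over windows
    coh = np.abs(sxy) ** 2 / (sxx * syy)      # MSC: |Sxy|^2 / (Sxx * Syy)
    return np.fft.rfftfreq(n, 1 / fs), coh

# Two noisy signals sharing a common 18 Hz (beta-band) oscillation
fs = 1200                                     # CTF sampling rate used below
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
common = np.sin(2 * np.pi * 18 * t)
x = common + rng.standard_normal(t.size)      # "cortical" signal
y = 0.5 * common + rng.standard_normal(t.size)  # "EMG" signal

f, coh = msc(x, y, fs)
print(f[np.argmax(coh)])                      # coherence peaks at 18.0 Hz
```

Averaging the cross-spectrum over many windows is essential: with a single window, the coherence is identically 1 at every frequency, and only the window-averaged estimate reveals which frequencies are genuinely coupled.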
IMAGE OF EXPERIMENT, SIGNALS and COHERENCE
Dataset description
The dataset comprises MEG recordings (151-channel CTF MEG system) and bipolar EMG recordings (from left and right extensor carpi radialis longus muscles) from one participant who was tasked to lift their hand and exert a constant force against a lever for about 10 seconds. The force was monitored by strain gauges on the lever. The participant performed two blocks of 25 trials using either the left or right wrist. EOG signals were also recorded, which will be useful for detection and attenuation of ocular artifacts. We will analyze the data from the left-wrist trials in the present tutorial. Replicating the pipeline with right-wrist data is a good exercise to do next!
Download and installation
Requirements: Please make sure you have completed the get-started tutorials and that you have a working copy of Brainstorm installed on your computer. In addition, you need to install the CAT12 plugin in Brainstorm, which will be used for MRI segmentation.
Download the dataset:
Download SubjectCMC.zip from FieldTrip's FTP server:
ftp://ftp.fieldtriptoolbox.org/pub/fieldtrip/tutorial/SubjectCMC.zip
- Unzip the .zip file in a folder that is not located in any of Brainstorm's folders (the application folder or its database folder).
Brainstorm:
- Launch Brainstorm (via Matlab's command line or use Brainstorm's Matlab-free stand-alone version).
Select the menu File > Create new protocol. Name it TutorialCMC and select the options:
No, use individual anatomy,
No, use one channel file per acquisition run.
The next sections describe how to import the participant's anatomical data, review raw data, manage event markers, pre-process EMG and MEG signals, import recordings, and compute coherence at the sensor, source and scout levels.
Importing and processing anatomy data
Right-click on the newly created TutorialCMC node in your Brainstorm data tree then:
New subject > Subject01.
Keep the default options defined for the study (aka "protocol" in Brainstorm's jargon).
Switch to the Anatomy view of the protocol.
Right-click on the Subject01 node then Import MRI:
Select the adequate file format from the pull-down menu: All MRI file (subject space)
Select the file: SubjectCMC/SubjectCMC.mri
This will open the MRI viewer showing the coronal, sagittal and axial views of the MRI. In addition, three anatomical fiducials: left and right pre-auricular points (LPA and RPA), and nasion (NAS) are automatically identified. These fiducials are located near the left/right ears and just above the nose respectively. Click on Save.
We then need to segment the head tissues to obtain the surfaces required to derive realistic MEG head models (aka "forward models"). Here, we will perform MRI segmentation with CAT12; this process takes between 30 and 60 minutes.
Right-click on the SubjectCMC MRI node (ICON), then select MRI segmentation > CAT12: Cortex, atlases, tissues. This will prompt a series of windows to set the parameters for the MRI segmentation; use the following values:
Number of vertices on the cortex surface: use 15000
Compute anatomical parcellations? select Yes
Compute cortical maps select Yes
As part of the MRI segmentation pipeline with CAT12, the anatomy data is normalized in the MNI space, and several anatomical parcellations (ICON) are computed. CAT12 also computes the scalp surface (ICON), and the cortex surfaces (ICON) for white matter, pial envelope and the midpoint between them. The default surface of each type is indicated in green. In addition to the surfaces, multiple atlases or anatomical parcellations (ICON) are also present. For further information on the anatomy files see the Display the anatomy tutorial.
Right-click on the pial_15002 surface and set it as the default.
Double-click on the head mask and then on the pial_15002 surface to display them.
In addition, the registration between the MRI and the surfaces can be checked with the contextual menu MRI registration > Check MRI/surface registration...
Although not needed for the present tutorial, additional MNI parcellation templates to define anatomical regions of the brain can be used in Brainstorm for MNI-normalized MRI anatomy. See MNI parcellations.
Review the MEG and EMG recordings
Link the recordings to Brainstorm's database
Switch now to the Functional data view (ICON).
Right-click on the Subject01 node then Review raw file:
Select the file format of current data from the pulldown menu options:
MEG/EEG: CTF (*.ds; *.meg4; *.res4)
Select the file: SubjectCMC.ds
A new folder is now created in Brainstorm's database explorer and contains:
SubjectCMC: a folder that provides access to the MEG dataset. Note the "RAW" tag over the icon of the folder, indicating the files contain unprocessed, continuous data.
CTF channels (191): a node containing channel information with all channel types, names, locations, etc. The number of channels available (MEG, EMG, EOG, etc.) is indicated between parentheses (here, 191).
Link to raw file provides access to the original data file. All the relevant metadata was read from the dataset and copied inside the node itself (e.g., sampling rate, number of time samples, event markers). Note that Brainstorm's logic is not to import/duplicate the raw unprocessed data directly into the database. Instead, Brainstorm provides a link to that raw file for further review and data extraction (more information).
Display MEG helmet and sensors
Right-click on the CTF channels (191) node, then select Display sensors > CTF helmet and Display sensors > MEG from the contextual menu. These will open display windows showing the inner surface of the MEG helmet and the locations of the MEG sensors, respectively. Try the additional display menus.
Reviewing continuous recordings
Right-click on the Link to raw file node, then Switch epoched/continuous to convert the file to continuous, a technicality of the CTF file format.
Right-click again on the Link to raw file node, then MEG > Display time series (or double-click on the node). This will open a new visualization window to explore data time series, also enabling the Time panel and the Record tab in the main Brainstorm window (see how to best use all controls in this panel and tab to explore data time series).
We will also display EMG traces by right-clicking on the Link to raw file node, then EMG > Display time series.
Event markers
The colored dots above the data time series indicate event markers (or triggers) saved with this dataset. The onsets of the left-wrist and right-wrist trials are saved in an auxiliary channel of the raw data named Stim. To add these trial markers, the Stim channel needs to be decoded as follows:
While the time series figure is open, go to the Record tab and File > Read events from channel. From the options of the Read from channel process window, set Event channels = Stim, select Value, and click Run.
This procedure creates new event markers, now shown in the Events section of the tab, along with the previous event categories. In this tutorial, we will only use events U1 through U25, which correspond to how each of the 25 left-wrist trials was encoded in the study. We will now delete the other events of no interest, and merge the left-trial events under a single event category, for convenience.
Delete other events: select the events to delete in the event list with Ctrl+click, then use the menu Events > Delete group and confirm. Alternatively, you can select all events with Ctrl+A and deselect the U1 to U25 events by clicking on them.
To make sure we reproduce FieldTrip's tutorial, we need to reject trial #7: select events U1 to U6 and U8 to U25, then from the Events menu, select Merge group and type in the new label (Left_01) to describe this as the left-wrist condition.
These events correspond to the beginning of 10-s trials of left-wrist movements. We will compute coherence over 1-s epochs over the first 8 s of each trial. To that purpose, we will now create extra events to define these epochs.
Duplicate the Left_01 events 7 times by selecting Duplicate group in the Events menu. The groups Left_02 to Left_08 will be created.
For each copy of the events, we will add a time offset: 1 s for Left_02, 2 s for Left_03, and so on. Select the Left_02 event group, go to the menu Events > Add time offset, and enter 1000 in the text box. Repeat for each other group, entering 2000, then 3000, etc.
Once done for Left_08, merge all these Left* events into a single Left category, and select Save modifications in the File menu in the Record tab.
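The epoching scheme above can be sketched numerically: each 10-s trial onset spawns eight 1-s epoch onsets at offsets of 0 to 7 s. A minimal sketch with hypothetical onset times (in Brainstorm this is done interactively with Duplicate group and Add time offset):

```python
# Sketch: derive 1-s epoch onsets from 10-s trial onsets.
# The trial onset times below are hypothetical, for illustration only.
trial_onsets = [12.0, 25.0, 40.0]   # seconds

# Each trial contributes 8 one-second epochs covering its first 8 s
epoch_onsets = sorted(t + offset for t in trial_onsets for offset in range(8))
```

With 24 good trials this yields the 192 Left events used below.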
Pre-process
In this tutorial, we will analyze only the Left trials (left-wrist extensions). In the following sections, we will process only the first 330 s of the recordings, where the left-wrist trials were performed.
Another idiosyncrasy of the present dataset is that the CTF MEG data were saved without the desired third-order gradient compensation for optimal denoising. We will now apply this compensation as follows:
In the Process1 box: Drag and drop the Link to raw file node.
Run process Artifacts > Apply SSP & CTF compensation:
This process creates the SubjectCMC_clean folder that contains a copy of the channel file and a link to the raw file, Raw | clean, which points to the original data and indicates that the third-order gradient compensation will be applied on the fly. Brainstorm does not create a physical copy of the actual, large dataset at this stage.
Removal of power line artifacts
We will start by identifying the spectral components of the power line contamination of the MEG and EMG recordings.
In the Process1 box: Drag and drop the Raw | clean node.
Run process Frequency > Power spectrum density (Welch):
Time window: 0 - 330 s
Window length=10 s
Overlap=50%
Sensor types=MEG, EMG
Double-click on the new PSD file to visualize the power spectrum density of the data.
The PSD plot shows two groups of sensors: EMG (highlighted in red above) and the MEG spectra below. Peaks at 50 Hz and its harmonics (100, 150, 200 Hz and above), corresponding to the European power line frequency, are clearly visible. We will use notch filters to attenuate the power line contaminants at 50, 100 and 150 Hz.
In the Process1 box: Drag and drop the Raw | clean node.
Run the process Pre-processing > Notch filter with:
Sensor types = MEG, EMG
Frequencies to remove (Hz) = 50, 100, 150
A new raw folder named SubjectCMC_clean_notch is created. Estimate the PSD of these signals to appreciate the effect of the notch filters applied. As above, please remember to indicate a Time window restricted from 0 to 330 s in the options of the PSD process.
EMG pre-processing
Two typical pre-processing steps for EMG consist of high-pass filtering and rectification.
In the Process1 box: drag and drop the Raw | notch(50Hz 100Hz 150Hz) node.
Add the process Pre-process > Band-pass filter
Sensor types = EMG
Lower cutoff frequency = 10 Hz
Upper cutoff frequency = 0 Hz (0 disables the low-pass, making this a pure high-pass filter)
Add the process Pre-process > Absolute values
Sensor types = EMG
Run the pipeline
Two new folders SubjectCMC_clean_notch_high and SubjectCMC_clean_notch_high_abs are added to Brainstorm's database explorer. We can now safely delete folders that are not needed anymore:
Delete SubjectCMC_clean_notch and SubjectCMC_clean_notch_high by selecting both before pressing Delete (or right-click File > Delete).
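The two EMG steps amount to a 10 Hz high-pass filter followed by taking absolute values. A toy sketch with SciPy (the filter order, sampling rate and test signal are illustrative assumptions, not Brainstorm's exact filter design):

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1200.0                                   # sampling rate (illustrative)
t = np.arange(0, 2.0, 1 / fs)
emg = 0.3 + np.sin(2 * np.pi * 70 * t)        # slow offset + fast EMG-like oscillation

# High-pass at 10 Hz (no low-pass), then rectify
b, a = butter(4, 10, btype='highpass', fs=fs)
filtered = filtfilt(b, a, emg)                # removes the 0.3 offset
rectified = np.abs(filtered)                  # rectification
```

The rectified EMG envelope is what enters the coherence computation later on.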
MEG pre-processing
We now need to remove other artifacts from the MEG traces via the:
Detection and removal of stereotypical artifacts with SSP
Detection of noisy (bad) data segments.
Detection and removal of artifacts with SSP (Signal Space Projection)
Stereotypical artifacts such as eye blinks and heartbeats can be identified from their respective characteristic spatial distributions. Their contamination of MEG signals can then be attenuated specifically using Signal-Space Projections (SSPs). For more details, consult the dedicated tutorials about the detection and removal of artifacts with SSP. The present tutorial dataset features an EOG channel but no ECG. We will therefore only perform the removal of eye blinks.
Display the MEG and EOG time series: Right-click on the pre-processed (for EMG) continuous file Raw | clean | notch(... (in the SubjectCMC_clean_notch_high_abs folder) then MEG > Display time series and EOG > Display time series.
In the Events section of the Record tab, select Artifacts > Detect eye blinks, and use the parameters:
Channel name= EOG
Time window = 0 - 330 s
Event name = blink
Three categories of blink events are created. Review the traces of EOG channels around a few of these events to ascertain they are related to eye blinks. In the present case, we note that the blink group contains genuine eye blinks, and that groups blink2 and blink3 capture saccade events.
To remove blink artifacts with SSP go to Artifacts > SSP: Eye blinks, and use the parameters:
Event name=blink
Sensors=MEG
Check Compute using existing SSP/ICA projectors
Display the time series and topographies of the first two (dominant) SSP components identified. In the present case, only the first SSP component can be clearly related to blinks. Select only component #1 for removal.
Follow the same procedure for the other blink events (blink2 and blink3). For these groups, none of the first two SSP components seems clearly related to ocular artifacts. The figure below shows the first two components for the blink2 group.
We therefore recommend unselecting the blink2 and blink3 groups from the Select Active Projectors panel (see below) rather than removing spatial components whose nature remains ambiguous.
Click on the large crosshair at the top right of the main Brainstorm window to close all visualization windows.
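Conceptually, SSP removes an artifact by projecting the data onto the subspace orthogonal to the artifact's spatial pattern. A toy numerical sketch (synthetic data; in practice the pattern is estimated by SVD/PCA of the artifact epochs):

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_times = 20, 1000

# Unit spatial pattern of the "blink" (illustrative, random topography)
blink_topo = rng.standard_normal(n_sensors)
blink_topo /= np.linalg.norm(blink_topo)

# Brain activity plus a transient blink with that fixed topography
brain = 0.1 * rng.standard_normal((n_sensors, n_times))
blink_course = np.zeros(n_times)
blink_course[400:420] = 5.0
data = brain + np.outer(blink_topo, blink_course)

# SSP: project out the blink's spatial component
P = np.eye(n_sensors) - np.outer(blink_topo, blink_topo)
cleaned = P @ data
```

After projection, the data contains no energy along the blink topography, at the cost of removing one spatial dimension.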
Detection of "bad" data segments:
Here we will use the automatic detection of artifacts to identify data segments contaminated by e.g., large eye and head movements and muscle contractions.
Display the MEG and EOG time series. In the Record tab, select Artifacts > Detect other artifacts and enter the following parameters:
Time window = 0 - 330 s
Sensor types=MEG
Sensitivity=3
Check both frequency bands 1-7 Hz and 40-240 Hz
We encourage users to review and validate the segments marked using this procedure. In the present case, the detected segments clearly point at contaminated MEG data, which we will now label as "bad".
Select the 1-7Hz and 40-240Hz event groups and select Events > Mark group as bad from the contextual menu. Alternatively, you can also rename the events created above and append the bad_ prefix to their name: Brainstorm will automatically discard these data segments from further processing.
Close all visualization windows and reply "Yes" when asked to save the modifications.
Importing data epochs
At this point we are finished with the pre-processing of the EMG and MEG recordings. We will now extract and import specific data segments of interest into the Brainstorm database for further derivations. We refer to these segments as epochs or trials. As mentioned previously, we will focus on the Left (wrist) category of events.
Right-click on the filtered continuous file Raw | clean | notch(... (in the SubjectCMC_clean_notch_high_abs condition), then Import in database.
Enter the following parameter values:
Time window = 0 - 330 s
Check Use events and highlight the Left(x192) event group
Epoch time = 0 - 1000 ms
Check Apply SSP/ICA projectors
Check Remove DC offset and select All recordings
A new folder SubjectCMC_clean_notch_high_abs without the 'raw' indication is created for Subject01. Let's rename it as SubjectCMC_preprocessed.
Select the folder and press [F2] to set the new name. This action can also be done with the contextual menu File > Rename.
The new folder contains a copy of the channel file from the original raw file, and individual trials tagged as Left in a new trial group. Expand the trial group and note there are trials marked with a question mark in a red circle (ICON). These indicate trials that occurred in the bad segments identified in the previous section. All the bad trials are automatically ignored for further processing, whenever dropped into the Process1 and Process2 tabs.
To get a glimpse of the signals after pre-processing, plot trial 1 of the MEG signal for one sensor overlying the right motor cortex (MRC21) together with the EMG signals. Note that these traces are similar to the ones obtained in the FieldTrip tutorial.
Coherence 1xN (sensor level)
We will now compute the magnitude-squared coherence (MSC) between the left EMG signal and each MEG sensor signal.
In the Process1 box, drag and drop the Left (192 files) trial group. Note that the number between square brackets is [185], as the 7 bad trials will be ignored by the MSC process.
Run the process Connectivity > Coherence 1xN [2021] with the following parameters:
Time window = 0 - 1000 ms or check All file
Source channel = EMGlft
Do not check Include bad channels nor Remove evoked response
Magnitude squared coherence
Window length for PSD estimation = 0.5 s
Overlap for PSD estimation = 50%
Highest frequency of interest = 80 Hz
Average cross-spectra of input files (one output file)
More details on the Coherence process can be found in the connectivity tutorial.
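Under the hood, MSC is estimated from Welch-averaged cross- and auto-spectra. A toy sketch with SciPy using the same window settings (0.5 s windows, 50% overlap); the signals are synthetic, and Brainstorm additionally averages the cross-spectra across the 185 trials:

```python
import numpy as np
from scipy.signal import coherence

fs = 256.0                                        # sampling rate (illustrative)
t = np.arange(0, 8.0, 1 / fs)
rng = np.random.default_rng(1)

beta = np.sin(2 * np.pi * 18 * t)                 # shared ~18 Hz component
emg = beta + 0.5 * rng.standard_normal(t.size)    # EMG-like reference
meg = 0.5 * beta + 0.5 * rng.standard_normal(t.size)  # "MRC21-like" sensor

# 0.5 s windows with 50% overlap, as in the Brainstorm process options
f, msc = coherence(emg, meg, fs=fs,
                   nperseg=int(0.5 * fs), noverlap=int(0.25 * fs))
peak_freq = float(f[np.argmax(msc)])
```

The MSC spectrum peaks at the frequency of the shared component, mirroring the 15-20 Hz peak observed on sensor MRC21.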
Double-click on the resulting node mscohere(0.6Hz,555win): EMGlft to display the MSC spectra. Click on the maximum peak in the 15 to 20 Hz range, and press Enter to plot it in a new figure. This spectrum corresponds to channel MRC21, and shows a large peak at 17.58 Hz. You can also use the frequency slider (under the Time panel) to explore the MSC output more precisely across frequencies.
Right-click on the spectrum and select 2D Sensor cap for a topographical representation of the magnitude of the coherence results across the sensor array. You may also use the shortcut Ctrl-T. The sensor locations can be displayed with a right-click and by selecting Channels > Display sensors from the contextual menu (shortcut Ctrl-E).
We can now average the magnitude of the MSC across a frequency band of interest (15-20 Hz):
In the Process1 box, drag-and-drop the mscohere(0.6Hz,555win): EMGlft node, and add the process Frequency > Group in time or frequency bands with the parameters:
Select Group by frequency
Type cmc_band / 15, 20 / mean in the text box.
The resulting file mscohere(0.6Hz,555win): EMGlft | tfbands has only one MSC value per sensor (the MSC average in the 15-20 Hz band). You may visualize the topography of this MSC statistic via 3 possible representations: 2D Sensor cap, 3D Sensor cap and 2D Disc, all accessible via a right-click over the MSC node. We clicked on sensor MRC21 below; it is shown in red.
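The band-grouping step amounts to averaging the MSC values over the frequency bins that fall inside the band. A sketch with synthetic values (the channel count and 0.6 Hz resolution mirror this tutorial, but the MSC values themselves are random placeholders):

```python
import numpy as np

# Hypothetical MSC spectra: one row per sensor, sampled every 0.6 Hz
freqs = np.arange(0, 80, 0.6)
rng = np.random.default_rng(2)
msc = rng.uniform(0, 0.2, size=(191, freqs.size))   # 191 channels, placeholder values

# "cmc_band / 15, 20 / mean": average the bins within 15-20 Hz
band = (freqs >= 15) & (freqs <= 20)
msc_band = msc[:, band].mean(axis=1)                # one value per sensor
```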
We can observe higher MSC values between the EMG signal and the MEG sensors over the contralateral central region in the beta band. Unfortunately, sensor-level connectivity has two disadvantages: it is difficult to interpret, and it is subject to spurious results caused by volume conduction.
In the next section we will compute coherence at the source level. To do this, we first need to estimate the source time series from the sensor data.
MEG source modelling
We will perform source modelling using a distributed model approach for two possible source spaces: the cortex surface and the entire MRI volume. In the surface space, the sources are located on the cortical surface obtained from the participant's MRI. In the volume space, the source grid consists of elementary sources uniformly distributed across the entire brain volume. Before estimating the brain sources, we need to derive the sensor noise covariance matrix and the head model.
Noise covariance
The recommendation for MEG is to extract basic noise statistics from empty-room recordings. However, when empty-room recordings are not available, as with this tutorial's data, resting-state data can be used as a proxy for the MEG noise covariance. See the noise covariance tutorial for more details.
In the raw SubjectCMC_clean_notch_high_abs folder, right-click over Raw | clean | notch(... and select Noise covariance > Compute from recordings. Enter the following parameters:
Baseline: from 18 to 30 s
Select the Block by block option.
Copy the Noise covariance (ICON) node to the SubjectCMC_preprocessed folder. This can be done using the shortcuts Ctrl-C and Ctrl-V.
Head model
The head model, aka forward model, accounts for how neural electrical currents (in a source space) produce magnetic fields captured by sensors outside the head, given the geometry and electromagnetic properties of head tissues, independently of actual empirical measurements. As the head model depends on the source space, a distinct head model is required for the surface and volume source spaces. Please refer to the head model tutorial for more in-depth explanations.
Surface
Go to the Anatomy view of the database and verify that the pial_15002V surface is the default (green characters) cortex surface. Otherwise, right-click on it and select Set as default cortex in the contextual menu.
Go back to the Functional data view of the database and, inside the SubjectCMC_preprocessed folder, right-click over CTF channels (191) and select Compute head model from the contextual menu. Run the process with the options indicated below:
Comment = Overlapping spheres (surface)
Source space = Cortex surface
Forward model = MEG Overlapping spheres.
The surface head model will be derived for each of the 15,000 sources (surface vertices) defined in the default cortex.
The Overlapping spheres (surface) head model (ICON) now appears in the database explorer.
Volume
In the SubjectCMC_preprocessed folder, right-click over the CTF channels (191) node and select Compute head model. Set the option values to:
Comment = Overlapping spheres (volume)
Source space = MRI volume
Forward model = Overlapping spheres.
In the Volume source grid window, specify the following parameters that will produce around 11,500 source grid points across the brain volume.
Select Regular grid and Brain
Grid resolution = 5 mm
The Overlapping spheres (volume) head model is now added to the database explorer. The green color indicates that this is the default head model for the current folder (this can be changed by simply double-clicking over the head model nodes).
Source estimation
Now that the noise covariance and head model(s) are available, we can perform source estimation, i.e., find the sources that gave rise to the signals recorded at the sensors. Among the diverse source estimation methods available in Brainstorm, this tutorial uses minimum-norm imaging. The minimum-norm method estimates the linear combination of currents at each point of the source grid that explains the recorded sensor signals, favouring minimum-energy (L2-norm) solutions. The result is a large matrix called the imaging kernel. By multiplying the imaging kernel with the sensor data, we obtain estimates of the source time series. A different imaging kernel is derived for each of the head models produced above: surface and volume. See the source estimation tutorial for more details.
Each dipole in the source grid may point arbitrarily in any direction in a 3D space.
For surface grids only, the dipole orientation can be fixed to be normal to the cortical surface, an approach based on anatomical observations of the cortex. The result is a smaller model that is faster to compute and display.
A discussion on constrained vs unconstrained sources is presented here.
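The kernel-times-data logic above can be sketched numerically (toy dimensions, identity noise covariance and a simple regularization choice; Brainstorm's actual implementation adds noise-covariance whitening, depth weighting and other refinements):

```python
import numpy as np

rng = np.random.default_rng(3)
n_sensors, n_sources, n_times = 50, 200, 300

G = rng.standard_normal((n_sensors, n_sources))   # head model (gain matrix), toy values
data = rng.standard_normal((n_sensors, n_times))  # sensor recordings, toy values

# Simplified minimum-norm imaging kernel: K = G' (G G' + lambda I)^-1
lam = 0.1 * np.trace(G @ G.T) / n_sensors         # ad-hoc regularization (assumption)
K = G.T @ np.linalg.inv(G @ G.T + lam * np.eye(n_sensors))

# "Source link" logic: multiply kernel and recordings on the fly
sources = K @ data
```

Storing only K and the recordings, instead of the full `sources` matrix, is what makes the source links lightweight.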
Surface
Here we will estimate the sources in the surface space for constrained (normal to the cortex) and unconstrained dipole orientations.
Right-click on the Overlapping spheres (surface) head model and select Compute sources [2018]. Enter the following parameters:
Minimum norm imaging
Current density map
Constrained: Normal to the cortex
Comment = MN: MEG (surface)
Repeat the previous step, but this time select Unconstrained in the Dipole orientations field.
The inversion kernels (ICON) MN: MEG (surface)(Constr) 2018 and MN: MEG (surface)(Unconstr) 2018 are now available in the database explorer.
Volume
To compute the imaging kernel for the volume source space:
Right-click on the Overlapping spheres (volume) head model and select Compute sources [2018], with the following parameters:
Minimum norm imaging
Current density map
Unconstrained
Comment = MN: MEG (volume)
At this point the imaging kernel (ICON) MN: MEG (volume)(Unconstr) 2018 is now also available in the database explorer.
Note that each trial is now associated with three source link (ICON) nodes, one per imaging kernel obtained above.
The source link nodes are not files containing the source time series. Instead, they instruct Brainstorm to load the corresponding MEG recordings, load the respective imaging kernel, and multiply the two on the fly to generate the source time series. This approach saves a significant amount of disk space.
Coherence 1xN (source level)
Once we have computed the time series for the sources, it is time to compute coherence between the EMG signal and brain sources obtained with each of the imaging kernels.
Let's start with sources from the MN: MEG (surface)(Constr) kernel:
To select the source maps we want to include in the coherence estimation, click on the Search Database button (ICON), and select New search.
Set the search parameters as shown below, and click on Search.
This will create a new tab in the database explorer. This new tab contains only the files that match the search criteria.
Click the Process2 tab at the bottom of the main Brainstorm window and drag-and-drop the Left (192 files) trial group into the Files A box and repeat for the Files B box. Select Process recordings (ICON) for Files A, and Process sources (ICON) for Files B. The logic is that we will extract from the same files the EMG signal (Files A side) and the sources time series (Files B side), and then compute coherence between these two sets. Note that blue labels over the Files A and the Files B boxes indicate that there are 185 "good trial" files per box.
Open the Pipeline editor:
Add the process Connectivity > Coherence AxB [2021] with the following parameters:
Time window = 0 - 1000 ms or check All file
Source channel (A) = EMGlft
Uncheck Use scouts (B)
Do not Remove evoked responses from each trial
Magnitude squared coherence, Window length = 0.5 s
Overlap = 50%
Highest frequency = 80 Hz
Average cross-spectra.
Add the process File > Add tag with the following parameters:
Tag to add = (surface)(Constr)
Select Add to file name
Run the pipeline
Once the processing is finished, go to the Database tab of the database explorer and refresh it (with [F5]) to show the resulting 1xN connectivity file (ICON) mscohere(0.6Hz,555win): Left (#1) | (surface)(Constr).
Repeat the steps above to compute the EMG-sources coherence for the sources from the kernels MN: MEG (surface)(Unconstr) and MN: MEG (volume)(Unconstr). Do not forget to update the search criteria and the tag to be added to the result.
Results (Surface)
Double-click the 1xN connectivity files for the (surface) source space to show the results on the cortex. If you are not familiar with the options in the cortex figures, check Display: Cortex surface
Find the location and frequency with the highest coherence value.
Adjust the amplitude threshold to 99%
Explore the coherence spectra with the frequency slider
The highest coherence value is located on the right primary motor cortex (precentral gyrus) at 14.65 Hz, for both the constrained and unconstrained analyses. Set the amplitude threshold to 0% to see the extent of the high coherence values.
MSC @ 14.65 Hz (surface)(Constr)
MSC @ 14.65 Hz (surface)(Unconstr)
We observe that the results obtained with constrained and unconstrained sources agree in the location and frequency of the peak coherence. The main difference is that coherence values obtained with unconstrained sources appear smoother; this is caused by the maximum aggregation performed across directions (explained in detail in the next section).
Finally, right-click on any of the cortex figures and select Get coordinates. Then click on the right motor cortex with the crosshair cursor that appears. The SCS coordinates will be useful in the next section.
SCS coordinates X:38.6, Y:-21.3 and Z: 115.5
Results (Volume)
Double-click the 1xN connectivity file for the (volume) source space. Note that this time the results are shown in the MRI viewer rather than the cortical surface.
Set frequency slider to 14.65 Hz
Set the SCS coordinates to X:38.6, Y:-21.3 and Z: 115.5
Set the Transparency of the coherence values (Data) in the Surface tab to 30%
MSC @ 14.65 Hz (volume)(Unconstr)
We note that all the results obtained with constrained (surface) and unconstrained (surface and volume) sources agree with each other in the location and frequency of the peak coherence. Moreover, they are in agreement with our hypothesis, previous results in the literature [REFS], and the results presented in the FieldTrip tutorial.
Coherence with constrained and unconstrained sources
For constrained sources, each vertex in the source grid is associated with ONE time series. As such, when coherence is computed with the EMG signal (also one time series), the result is ONE coherence spectrum per vertex. In other words, for each frequency bin, there is one coherence brain map.
In the case of unconstrained sources, each vertex in the grid is associated with THREE time series, one per X, Y and Z direction. Thus, when coherence is computed with the EMG signal (one time series), there are THREE coherence spectra per vertex. This complicates the representation on the brain, so the THREE spectra are flattened into one: for each vertex and frequency bin, the maximum across directions is kept, resulting in one coherence spectrum per vertex.
An alternative approach in the literature (REF) to address the three-dimensional nature of unconstrained sources consists in flattening each vertex's X, Y and Z time series before the coherence computation, reducing the problem to the constrained case. Common flattening methods include PCA (only the first component is kept) and the (Euclidean) norm. This flattening of the time series can be performed in Brainstorm with the process Sources > Unconstrained to flat map.
Note that flattened sources are saved as full files rather than as recordings+kernel links.
We have tested this flattening approach with simulations and with the real data from this tutorial, and we have found detrimental effects on the expected results.
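The two strategies can be contrasted on a toy unconstrained vertex (synthetic signals, illustrative only; not the Brainstorm implementation):

```python
import numpy as np
from scipy.signal import coherence

fs = 256.0
t = np.arange(0, 8.0, 1 / fs)
rng = np.random.default_rng(4)

shared = np.sin(2 * np.pi * 18 * t)                   # component coherent with the EMG
emg = shared + 0.5 * rng.standard_normal(t.size)

# One unconstrained vertex: three orientation time series (x, y, z),
# with most of the coherent activity along x (toy mixing weights)
xyz = np.stack([a * shared + 0.5 * rng.standard_normal(t.size)
                for a in (1.0, 0.3, 0.1)])

# Flatten AFTER: one coherence spectrum per direction, then max across directions
spectra = np.stack([coherence(emg, s, fs=fs, nperseg=128)[1] for s in xyz])
msc_vertex = spectra.max(axis=0)                      # one spectrum for the vertex

# Flatten BEFORE: collapse the three time series (Euclidean norm), then one coherence
f, msc_flat = coherence(emg, np.linalg.norm(xyz, axis=0), fs=fs, nperseg=128)
```

Both routes yield one spectrum per vertex, but the norm-based flattening distorts the time series (it is a rectifying operation), which is consistent with the detrimental effects noted above.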
Coherence 1xN (scout level)
So far we have computed coherence at the source level, i.e., one coherence spectrum for each of the 15,002 source points. This large dimensionality hinders further analysis of the results. A common strategy is to reduce the dimensionality of the source space with a surface- or volume-parcellation scheme; in Brainstorm jargon, this is an atlas made of scouts. See the scout tutorial for detailed information on atlases and scouts in Brainstorm.
Under this approach, instead of one result (coherence spectrum) per source vertex, one result is computed per scout. When computing coherence (or any other connectivity metric) at the scout level, two parameters define how the data are aggregated per scout:
The scout function (mean is often used), and
When the within-scout aggregation takes place: either before or after the coherence estimation.
Before: The scout function is applied, for each direction, to the source time series of the vertices that make up a scout, resulting in one time series per direction per scout. The scout time series are then used to compute coherence with the reference signal (EMG in this tutorial), and the coherence spectra for each scout are aggregated across directions as shown in section XX, yielding one coherence spectrum per scout.
After: Coherence is computed between the reference signal and each direction of each vertex's source time series (as shown in section XX). The scout function is then applied to the coherence spectra of all the directions of the vertices within a scout, and these spectra are finally aggregated across directions to obtain one coherence spectrum per scout.
As can be seen, the After option takes longer and uses more resources, since it computes a coherence spectrum for every vertex in the scouts before aggregating the spectra.
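The Before/After distinction can be sketched on a toy scout of four constrained vertices, with mean as the scout function (synthetic signals, illustrative only):

```python
import numpy as np
from scipy.signal import coherence

fs = 256.0
t = np.arange(0, 8.0, 1 / fs)
rng = np.random.default_rng(5)

shared = np.sin(2 * np.pi * 18 * t)
emg = shared + 0.5 * rng.standard_normal(t.size)

# A scout made of 4 (constrained) vertex time series
scout = np.stack([shared + 0.5 * rng.standard_normal(t.size) for _ in range(4)])

# "Before": mean across vertices first, then a single coherence computation
f, msc_before = coherence(emg, scout.mean(axis=0), fs=fs, nperseg=128)

# "After": one coherence per vertex, then the mean of the four spectra
msc_after = np.mean([coherence(emg, v, fs=fs, nperseg=128)[1] for v in scout],
                    axis=0)
```

The Before option runs one coherence estimation per scout; the After option runs one per vertex, which is why it is so much more expensive on 15,002 sources.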
Let's compute coherence using scouts, with mean as the scout function and the Before option. We will use the Schaefer 100 parcellation atlas on the results from constrained sources.
Use Search Database (ICON) to select the Left trials with their respective (surface)(Constr) source maps, as shown in the previous section.
On the Process2 tab drag-and-drop the Left (192 files) trial group into the Files A and Files B boxes. Select Process recordings (ICON) for Files A, and Process sources (ICON) for Files B. There should be 185 files in each side.
Open the Pipeline editor:
Add the process Connectivity > Coherence AxB [2021] with the following parameters:
Time window = 0 - 1000 ms or check All file
Source channel (A) = EMGlft
Check Use scouts (B)
From the menu at the right, select Schaefer_100_17net
Select all the scouts
Scout function: Mean
When to apply the scout function: Before
Do not Remove evoked responses from each trial
Magnitude squared coherence, Window length = 0.5 s
Overlap = 50%
Highest frequency = 80 Hz
Average cross-spectra.
Add the process File > Add tag with the following parameters:
Tag to add = (surface)(Constr)
Select Add to file name
Run the pipeline
Double-click the file. This time the result is not displayed on the cortex; instead, the coherence spectrum for each scout is shown. Right-click on the figure to also display it as an image.
Choosing an optimal parcellation scheme for the source space is not straightforward. A common choice is a parcellation based on anatomy, for example the Brodmann parcellation. These atlases are imported in Brainstorm as scouts (cortical regions of interest) and saved directly in the surface files, as explained in this tutorial.
Note: due to the current implementation of bst_connectivity, the full source maps of all 185 trials are loaded in memory. Replicating the After option for the (surface)(Constr) case alone therefore requires ~30 GB of RAM (unconstrained sources take 3 times that).
Coherence NxN (scout level)
So far we have computed coherence between a reference signal (EMG) and brain sources, then between a reference signal and multiple ROIs (scouts). Depending on the experimental design, it may also be necessary to compute a connectivity metric between every pair of brain ROIs, resulting in a connectome.
A common approach is therefore to use scouts, as in the previous section. This should not be attempted at the full source level, as it leads to a very large matrix that will not fit in RAM.
In the previous examples the reference signal was one-dimensional; here, with unconstrained sources, each pair of scouts involves three time series on each side, which makes the processing more complex, as there are 9 coherence spectra per pair:
Before: The scout aggregation is performed first, then the 3x3 = 9 coherence spectra are computed for each scout pair, followed by the aggregation across directions.
After: Coherence is computed between each source and every other source, giving 9 coherence spectra per source pair. The results are then aggregated over all the vertex pairs, giving 9 coherence spectra per scout pair, which are finally aggregated across directions.
Here the After option requires even more memory than in the previous section, thus we will perform only the Before option.
Script
Additional documentation
Articles
Conway BA, Halliday DM, Farmer SF, Shahani U, Maas P, Weir AI, et al. Synchronization between motor cortex and spinal motoneuronal pool during the performance of a maintained motor task in man. The Journal of Physiology. 1995 Dec 15;489(3):917-24.
Kilner JM, Baker SN, Salenius S, Hari R, Lemon RN. Human Cortical Muscle Coherence Is Directly Related to Specific Motor Parameters. J Neurosci. 2000 Dec 1;20(23):8838-45.
Liu J, Sheng Y, Liu H. Corticomuscular Coherence and Its Applications: A Review. Front Hum Neurosci. 2019 Mar 20;13:100.
Tutorials
Tutorial: Volume source estimation
Tutorial: Functional connectivity
Forum discussions