''Authors: Raymundo Cassani, Francois Tadel, [[https://www.neurospeed-bailletlab.org/sylvain-baillet|Sylvain Baillet]].''

[[https://en.wikipedia.org/wiki/Corticomuscular_coherence|Corticomuscular coherence]] measures the degree of similarity between electrophysiological signals (MEG, EEG, ECoG sensor traces or source time series, especially over the contralateral motor cortex) and the EMG signals recorded from muscle activity during voluntary movements. These cortico-muscular signal similarities are attributed mainly to descending communication along corticospinal pathways between the primary motor cortex (M1) and the muscles attached to the moving limb(s). For consistency purposes, the present tutorial replicates, with Brainstorm tools, the processing pipeline "[[https://www.fieldtriptoolbox.org/tutorial/coherence/|Analysis of corticomuscular coherence]]" of the FieldTrip toolbox.

<<TableOfContents(3,2)>>
The dataset is identical to that of the FieldTrip tutorial [[https://www.fieldtriptoolbox.org/tutorial/coherence/|Analysis of corticomuscular coherence]]:

 * One participant.
 * MEG recordings: 151-channel CTF MEG system.
 * Bipolar EMG recordings: from the left and right extensor carpi radialis longus muscles.
 * EOG recordings: used for detection and attenuation of ocular artifacts.
 * MRI: 1.5T Siemens system.
 * Task: the participant lifted their hand and exerted a constant force against a lever for about 10 seconds. The force was monitored by strain gauges on the lever. The participant performed two blocks of 25 trials, using either their left or right hand.
 * Here we describe the relevant Brainstorm tools via the analysis of the left-wrist trials. We encourage the reader to practice further by replicating the same pipeline with the right-wrist trials!

Corticomuscular coherence:

 * [[Tutorials/Connectivity#Coherence|Coherence]] measures the linear relationship between two signals in the frequency domain (a formula is given below).
 * Previous studies ([[https://dx.doi.org/10.1113/jphysiol.1995.sp021104|Conway et al., 1995]], [[https://doi.org/10.1523/JNEUROSCI.20-23-08838.2000|Kilner et al., 2000]]) have reported corticomuscular coherence effects in the 15–30 Hz range during maintained voluntary contractions.
 * TODO: IMAGE OF EXPERIMENT, SIGNALS and COHERENCE
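For reference, the magnitude-squared coherence (MSC) used throughout this tutorial is the standard definition based on the cross- and auto-spectral densities of the two signals (here, an EMG channel and a MEG sensor or source time series), written in LaTeX notation:

{{{
MSC_{xy}(f) = \frac{ |S_{xy}(f)|^{2} }{ S_{xx}(f) \, S_{yy}(f) }, \qquad 0 \le MSC_{xy}(f) \le 1
}}}

where S_{xy}(f) is the cross-spectral density between signals x and y, and S_{xx}(f), S_{yy}(f) are their auto-spectral densities, estimated here from windowed Fourier transforms of the data epochs.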
'''Pre-requisites'''

 * Please make sure you have completed the [[Tutorials|get-started tutorials]] before going through the present tutorial.
 * Have a working copy of Brainstorm installed on your computer.
 * [[Tutorials/SegCAT12#Install_CAT12|Install the CAT12]] Brainstorm plugin, used here to perform MRI segmentation from the Brainstorm dashboard.

'''Download the dataset'''

 * Download `SubjectCMC.zip` from the FieldTrip download page:<<BR>> https://download.fieldtriptoolbox.org/tutorial/SubjectCMC.zip
 * Unzip the downloaded archive in a folder outside the Brainstorm database and app folders (for instance, directly on your desktop).

'''Brainstorm'''

 * Launch Brainstorm (via the Matlab command line or the Matlab-free stand-alone version of Brainstorm).
 * Select the menu '''File > Create new protocol'''. Name the new protocol `TutorialCMC` and select the options:<<BR>> No, use individual anatomy,<<BR>> No, use one channel file per acquisition run.

== Importing anatomy ==

 * Right-click on the newly created TutorialCMC node > '''New subject > Subject01'''.<<BR>>Keep the default options defined for the study (aka "protocol" in Brainstorm's vernacular).
 * Switch to the Anatomy view of the protocol (<<Icon(iconSubjectDB.gif)>>).
 * Right-click on Subject01 > '''Import MRI''':
  * Select the file format: '''All MRI files (subject space)'''
 * This step will launch Brainstorm's MRI viewer, where coronal, sagittal and axial cross-sections of the MRI volume are displayed. Note that [[CoordinateSystems|three anatomical fiducials]] (left and right pre-auricular points (LPA and RPA), and nasion (NAS)) are automatically identified. These fiducials are located near the left/right ears and just above the nose, respectively. Click on '''Save'''. <<BR>><<BR>> {{attachment:mri_viewer.gif||width="344",height="295"}}
 * In all typical Brainstorm workflows from this tutorial handbook, we recommend processing the MRI volume at this stage, before importing the functional (MEG/EEG) data. Here, however, we will proceed differently, for consistency with the original FieldTrip pipeline and to readily obtain sensor-level coherence results: we will perform the MRI segmentation further below, just before the source-level analyses.
 * We still need to verify the proper geometric registration (alignment) of the MRI with the MEG. We will therefore now extract the scalp surface from the MRI volume.
 * Right-click on the MRI (<<Icon(iconMri.gif)>>) > '''MRI segmentation > Generate head surface'''. <<BR>><<BR>> {{attachment:head_process.gif}}
 * Double-click on the newly created surface to display the scalp in 3-D.<<BR>><<BR>> {{attachment:head_display.gif}}

== MEG and EMG recordings ==

=== Link the recordings ===

 * Switch now to the '''Functional data''' view of your database contents (<<Icon(iconStudyDBSubj.gif)>>).
 * Right-click on Subject01 > '''Review raw file''':
  * Select the appropriate MEG file format: '''MEG/EEG: CTF (*.ds; *.meg4; *.res4)'''
  * Select the data file: `SubjectCMC.ds`
 * A new folder '''SubjectCMC''' is created in the Brainstorm database explorer. Note the "RAW" tag over the icon of the folder (<<Icon(iconRawFolderClose.gif)>>), indicating that the files contain unprocessed, continuous data. This folder includes:
  * '''CTF channels (191)''': a file with all channel information, including channel types (MEG, EMG, etc.), names, 3-D locations, etc. The total number of channels available (MEG, EMG, EOG, etc.) is indicated between parentheses.
  * '''Link to raw file''': a link to the original data file. Brainstorm reads all the relevant metadata from the original dataset and saves it into this symbolic node of the data tree (e.g., sampling rate, number of time samples, event markers). As seen elsewhere in the tutorial handbook, Brainstorm does not create copies of (potentially large) unprocessed data files by default ([[Tutorials/ChannelFile#Review_vs_Import|more information]]).
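The steps so far (protocol creation and linking of the raw CTF recordings) can also be run from a Matlab script. The snippet below is only a sketch: it relies on functions that exist in Brainstorm (`gui_brainstorm`, `bst_process`), but the option values are indicative and the authoritative syntax is obtained by assembling the same steps in the GUI and using '''Generate .m script''' in the pipeline editor.

{{{
% Scripting sketch (not the official tutorial script): adjust the path to your download.
RawFile = fullfile('C:', 'Tutorials', 'SubjectCMC', 'SubjectCMC.ds');   % hypothetical location

% Start Brainstorm (no GUI) and create the protocol with individual anatomy,
% one channel file per acquisition run.
if ~brainstorm('status')
    brainstorm nogui
end
gui_brainstorm('CreateProtocol', 'TutorialCMC', 0, 0);

% "Review raw file": link the continuous CTF recordings to Subject01.
sFilesRaw = bst_process('CallProcess', 'process_import_data_raw', [], [], ...
    'subjectname',    'Subject01', ...
    'datafile',       {RawFile, 'CTF'}, ...
    'channelreplace', 1, ...
    'channelalign',   0);   % the registration will be checked manually below
}}}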
 * This registration step aligns the MEG coordinate system with the participant's anatomy from the MRI ([[https://neuroimage.usc.edu/brainstorm/Tutorials/ChannelFile#Automatic_registration|more information]]). Here we will use only the three anatomical landmarks that were identified both in the MRI volume and at the time of MEG data collection. From the [[https://www.fieldtriptoolbox.org/tutorial/coherence/|FieldTrip tutorial]]:<<BR>>''"To measure the head position with respect to the sensors, three coils were placed at anatomical landmarks of the head (nasion, left and right ear canal). [...] During the MRI scan, ear molds containing small containers filled with vitamin E marked the same landmarks. This allows us, together with the anatomical landmarks, to align source estimates of the MEG with the MRI."''
 * To visually appreciate the correctness of the registration, right-click on the '''CTF channels''' node > '''MRI registration > Check'''. This opens a 3-D figure showing the inner surface of the MEG helmet (in yellow), the head surface, the fiducial points and the axes of the [[CoordinateSystems#Subject_Coordinate_System_.28SCS_.2F_CTF.29|subject coordinate system (SCS)]].<<BR>><<BR>> {{attachment:fig_registration.gif||width="209",height="204"}}

=== Reviewing ===

 * Right-click on the Link to raw file > '''Switch epoched/continuous''' to convert how the data is stored in the file to a continuous reviewing format, a technical detail proper to the CTF file format.
 * Right-click on the Link to raw file > '''MEG > Display time series''' (or double-click on its icon). This will display the data time series and enable the Time panel and the Record tab in the main Brainstorm window (see specific features in "[[Tutorials/ReviewRaw|explore data time series]]").
 * Right-click on the Link to raw file > '''EMG > Display time series'''.

 . [[https://neuroimage.usc.edu/brainstorm/Tutorials/CorticomuscularCoherence?action=AttachFile&do=get&target=timeseries_meg_emg.png|{{attachment:timeseries_meg_emg.png|https://neuroimage.usc.edu/brainstorm/Tutorials/CorticomuscularCoherence?action=AttachFile&do=get&target=timeseries_meg_emg.png}}]]

=== Event markers ===

The colored dots at the top of the figure, above the data time series, indicate [[Tutorials/EventMarkers|event markers]] (or triggers) saved along with the MEG data. The actual onsets of the left- and right-wrist trials are not displayed yet: they are saved in an auxiliary channel of the raw data named ''Stim''. To add these markers to the display, follow this procedure:

 * With the time series figure still open, go to the Record tab and select '''File > Read events from channel'''. Event channels = `Stim`, select '''Value''', and '''Reject events shorter than 12 samples'''. Click '''Run'''.<<BR>><<BR>> {{attachment:read_evnt_ch.gif}}
 * The rejection of short events is necessary in this dataset, because transitions between values in the Stim channel may span several time samples. Otherwise, for example, an event U1 would be created at 121.76 s, because the transition from the event U1025 back to zero features two unwanted values at the end of the event. The rejection criterion is set to 12 time samples (10 ms) because all the relevant triggers last longer than 15 ms. This value is specific to each dataset, so make sure you verify the trigger detections in your own data.<<BR>><<BR>> {{attachment:triggers_min_duration.gif}}
 * New event markers are created and now shown in the Events section of the tab, along with the previous event categories.
In this tutorial, we will only use events '''U1''' through '''U25''', which correspond to the beginning of each of the 25 left-wrist trials of 10 seconds. Following FieldTrip's tutorial, let's reject trial #7, event '''U7'''.

 * Delete all unused events: select all the events '''except''' '''U1-U6''' and '''U8-U25''' (Ctrl+click / Shift+click), then use the menu '''Events > Delete group''' (or press the Delete key).
 * Merge events: select all the remaining event groups, then select from the menu '''Events > Merge group''' > '''"Left"'''. A new event category called "Left" now indicates the onsets of the 24 trials of left-wrist movements.<<BR>><<BR>> {{attachment:left_24.gif}}
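For reference, this event handling can also be scripted. The sketch below assumes the usual Brainstorm event processes (`process_evt_read`, `process_evt_delete`, `process_evt_merge`) with indicative option names; the exact names (including the "reject events shorter than" option) are best obtained with '''Generate .m script''' in the pipeline editor.

{{{
% Sketch: decode the Stim channel, drop unused events, merge the left-wrist onsets.
% sFilesRaw is the "Link to raw file"; all option names are indicative.
bst_process('CallProcess', 'process_evt_read', sFilesRaw, [], ...
    'stimchan',  'Stim', ...
    'trackmode', 1, ...   % 1 = "Value" mode (assumed); also set the minimum event
    'zero',      0);      %     duration (12 samples) as done in the GUI
% Delete the event groups that will not be used (list them explicitly):
bst_process('CallProcess', 'process_evt_delete', sFilesRaw, [], ...
    'eventname', 'U7, U1025');   % hypothetical list: replace with all unused groups
% Merge the remaining U* events into a single "Left" category:
bst_process('CallProcess', 'process_evt_merge', sFilesRaw, [], ...
    'evtnames', 'U1, U2, U3, U4, U5, U6, U8, U9, U10', ...   % ...continue through U25
    'newname',  'Left');
}}}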
== Pre-processing ==

In this tutorial, we will analyze only the '''Left''' trials (left-wrist extensions). In the following sections, we will process only the first '''330 s''' of the original recordings, where the left-wrist trials were performed.
=== Power line artifacts ===

 * In the Process1 box: drag and drop the '''Link to raw file'''.
 * Run the process '''Frequency > Power spectrum density (Welch)''' with the following parameters:
  * '''Time window''': `0-330 s`
  * '''Window length''': `10 s`
  * '''Overlap''': `50%`
  * '''Sensor types''': `MEG, EMG`
 * Double-click on the new PSD file to visualize the power spectrum density of the data.<<BR>>
 * The PSD plot shows two groups of sensors: the EMG (highlighted in red above) and the MEG spectra below (black lines). Peaks at 50 Hz and its harmonics (100, 150, 200 Hz and above) indicate the European power line frequency and are clearly visible. We will now use notch filters to attenuate the power line contaminants at 50, 100 and 150 Hz.
 * In the Process1 box: drag and drop the '''Raw | clean''' node.
 * Run the process '''Pre-processing > Notch filter''' with: <<BR>>
  * '''Sensor types''': `MEG, EMG`
  * '''Frequencies to remove (Hz)''': `50, 100, 150`
 * In case you get a memory error message:<<BR>>These MEG recordings were saved before applying the CTF 3rd-order gradient compensation used for noise reduction. The compensation weights are therefore applied on the fly when Brainstorm reads data from the file, which requires reading all the channels at once. By default, the frequency filters are optimized to process channels sequentially, which is incompatible with applying the CTF compensation on the fly. This setting can be overridden with the option '''Process the entire file at once''', which loads the entire file in memory and may crash the process depending on your computing resources (typically if your computer's RAM < 8GB). If this happens: run the process '''Artifacts > Apply SSP & CTF compensation''' on the file first, then run the notch filter process without the option "Process the entire file at once" ([[https://neuroimage.usc.edu/brainstorm/Tutorials/TutMindNeuromag#Existing_SSP_and_pre-processing|more information]]).
 * A new folder named '''SubjectCMC_clean_notch''' is created. Obtain the PSD of these data to appreciate the effect of the notch filters. As above, please remember to indicate a '''Time window''' restricted from 0 to 330 s in the options of the PSD process.<<BR>><<BR>>
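These two pre-processing steps (PSD estimation and notch filtering) can also be scripted. This is only a sketch: the process names below are those used by Brainstorm-generated scripts, but the option names are indicative and should be double-checked with '''Generate .m script''' in the pipeline editor.

{{{
% Sketch: PSD on the first 330 s, then notch filters at the power line harmonics.
% sFilesRaw is the raw link; option names are indicative.
bst_process('CallProcess', 'process_psd', sFilesRaw, [], ...
    'timewindow',  [0, 330], ...
    'win_length',  10, ...       % 10-s Welch windows
    'win_overlap', 50, ...       % 50% overlap
    'sensortypes', 'MEG, EMG');
sFilesNotch = bst_process('CallProcess', 'process_notch', sFilesRaw, [], ...
    'freqlist',    [50, 100, 150], ...
    'sensortypes', 'MEG, EMG');
}}}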
=== EMG: Filter and rectify ===
 * In the Process1 box: drag and drop the '''Raw | notch(50Hz 100Hz 150Hz)''' node.
 * Add the process '''Pre-process > Band-pass filter'''
 * Add the process '''Pre-process > Absolute values'''
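As a scripting aside, the EMG conditioning chain (high-pass filtering, taken to be 10 Hz as in the resulting '''high(10Hz)''' file name, followed by rectification) could look like the sketch below, assuming it is restricted to the EMG channels as in the FieldTrip pipeline; process and option names are indicative and best confirmed with '''Generate .m script'''.

{{{
% Sketch: high-pass filter then rectify the EMG channels (names indicative).
sFilesHigh = bst_process('CallProcess', 'process_bandpass', sFilesNotch, [], ...
    'sensortypes', 'EMG', ...
    'highpass',    10, ...   % 10 Hz high-pass
    'lowpass',     0);       % 0 = no low-pass
% "Pre-process > Absolute values": process name assumed from the GUI label.
sFilesAbs = bst_process('CallProcess', 'process_absolute', sFilesHigh, [], ...
    'sensortypes', 'EMG');
}}}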
 * Delete the intermediate files that won't be needed anymore: select the folders '''SubjectCMC_notch''' and '''SubjectCMC_notch_high''', then press the Delete key (or right-click > '''File > Delete''').<<BR>><<BR>> {{attachment:db_filters.gif}}

=== MEG: Blink SSP and bad segments ===

Stereotypical artifacts such as eye blinks and heartbeats can be identified from their respective characteristic spatial distributions. Their contamination of MEG signals can then be attenuated specifically using Signal-Space Projections (SSPs). For more details, consult the specific tutorial sections about the [[Tutorials/ArtifactsDetect|detection]] and [[Tutorials/ArtifactsSsp|removal of artifacts with SSP]]. The present tutorial dataset features an EOG channel but no ECG. We will therefore only remove artifacts caused by eye blinks.

==== Blink correction with SSP ====

 * Right-click on the pre-processed file > '''MEG > Display time series''' and '''EOG > Display time series'''.
 * In the Record tab: '''Artifacts > Detect eye blinks''', and use the parameters:
 * Three categories of blink events are created. Review the traces of the EOG channels around a few of these events to ascertain that they are related to eye blinks. In the present case, we note that the '''blink''' group contains genuine eye blinks, and that the groups blink2 and blink3 capture saccades.
 * To [[Tutorials/ArtifactsSsp|remove blink artifacts with SSP]], go to '''Artifacts > SSP: Eye blinks''':
 * Display the time series and topographies of the first two SSP components identified. In the present case, only the first SSP component can be clearly related to blinks: its percentage of explained variance is substantially higher than that of the other components, its spatial topography is typical of eye blinks, and its time series is similar to the EOG signal around blinks. Select only '''component #1''' for removal.
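For reference, removing the selected component amounts to applying an orthogonal projector to the MEG data: with '''U''' the (unit-norm) spatial topography of the retained blink component(s) and x(t) the vector of sensor values at time t, the cleaned data are, in LaTeX notation:

{{{
x_{clean}(t) = (I - U U^{\top}) \, x(t)
}}}

so the spatial pattern captured by the blink component is projected out of every time sample.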
 * Close all visualization figures by clicking on the large '''×''' at the top-right of the main Brainstorm window.

==== Detection of "bad" data segments ====

Here we will use the [[Tutorials/BadSegments#Automatic_detection|automatic detection of artifacts]] to identify data segments contaminated by, e.g., large eye and head movements, or muscle contractions.

 * Display the MEG and EOG time series. In the '''Record''' tab, select '''Artifacts > Detect other artifacts''' and enter the following parameters:
 * You are encouraged to review all the segments marked using this procedure. With the present data, all the marked segments do contain clear artifacts.
 * Select the '''1-7Hz''' and '''40-240Hz''' event groups and select '''Events > Mark group as bad'''. Alternatively, you can add the prefix '''bad_''' to the event names. Brainstorm will automatically discard these data segments from further processing.
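For completeness, here is a scripting sketch of the artifact steps above (blink detection, blink SSP, detection of other artifacts). The process names follow Brainstorm's usual artifact processes, but both names and options are indicative and should be confirmed with '''Generate .m script'''.

{{{
% Sketch: blink detection + SSP, then automatic detection of noisy segments.
% sFilesAbs is the pre-processed raw link; names and options are indicative.
bst_process('CallProcess', 'process_evt_detect_eog', sFilesAbs, [], ...
    'channelname', 'EOG', ...          % check the actual EOG channel name in your channel file
    'timewindow',  [0, 330], ...
    'eventname',   'blink');
bst_process('CallProcess', 'process_ssp_eog', sFilesAbs, [], ...
    'eventname',   'blink', ...
    'sensortypes', 'MEG', ...
    'usessp',      1, ...
    'select',      1);                 % keep only SSP component #1
bst_process('CallProcess', 'process_evt_detect_badsegment', sFilesAbs, [], ...
    'timewindow',  [0, 330], ...
    'sensortypes', 'MEG');
}}}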
== Epoching ==

We are now finished with the pre-processing of the EMG and MEG recordings. We will now extract and import specific data segments of interest into the Brainstorm database for further derivations. As mentioned previously, we will focus on the '''Left''' category of events (left-wrist movements). For consistency with the [[https://www.fieldtriptoolbox.org/tutorial/coherence/|FieldTrip tutorial]], we will analyze the 8 s of recordings following each movement onset (out of the original 10 s around each trial) and split them into 1-s epochs. For each 1-s epoch, the DC offset will also be removed from each MEG channel. Finally, we will make the time vector uniform across all 1-s epochs; this last step is needed because the time vector of each 1-s epoch is initially defined with respect to the event that was used to import the 8-s trial.

 * In the Process1 box: drag and drop the pre-processed file.
 * Select the process '''Import > Import recordings > Import MEG/EEG: Events''':
  * '''Subject name''' = `Subject01`
  * '''Folder name''' = empty
  * '''Event names''' = `Left`
  * '''Epoch time''' = `0 - 8000 ms`
  * '''Split recordings in time blocks''' = `1 s`
  * Uncheck '''Create a separate folder for each event type'''
  * Check '''Ignore shorter epochs'''
  * Check '''Use CTF compensation'''
  * Check '''Use SSP/ICA projectors'''
 * Add the process '''Pre-process > Remove DC offset''':
  * '''Baseline''' = `All file`
  * '''Sensor types''' = `MEG`
  * Check '''Overwrite input files'''
 * Add the process '''Standardize > Uniform epoch time''':
  * '''Interpolation method''' = `spline`
  * Check '''Overwrite input files'''
 * Run the pipeline

|| {{attachment:pro_import.png}} ||
|| {{attachment:pro_remove_dc.png}} ||

 * A new folder '''SubjectCMC_notch_high_abs''' without the 'raw' indication is now created, which includes '''192 individual epochs''' (24 trials x 8 one-second epochs each). The epochs that overlap with a "bad" event are also marked as bad, as shown with an exclamation mark in a red circle (<<Icon(iconModifBad.gif)>>). These bad epochs will be automatically ignored by the '''Process1''' and '''Process2''' tabs, and by all further processing.
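A scripted sketch of this epoching stage is shown below, mirroring the options listed above (8-s window after each Left event, 1-s blocks, DC offset removal on the MEG channels). Option names follow the usual `process_import_data_event` / `process_baseline` patterns but are indicative; the "Uniform epoch time" step is omitted because its process name was not verified. Use '''Generate .m script''' for the authoritative call.

{{{
% Sketch: import 8-s trials after each "Left" event, split into 1-s epochs,
% then remove the DC offset from the MEG channels (option names indicative).
sEpochs = bst_process('CallProcess', 'process_import_data_event', sFilesAbs, [], ...
    'subjectname', 'Subject01', ...
    'eventname',   'Left', ...
    'epochtime',   [0, 8], ...    % 0 to 8000 ms after each event onset
    'split',       1, ...         % assumed to map to "Split recordings in time blocks"
    'createcond',  0, ...
    'ignoreshort', 1, ...
    'usectfcomp',  1, ...
    'usessp',      1);
sEpochs = bst_process('CallProcess', 'process_baseline', sEpochs, [], ...
    'baseline',    [], ...        % [] = all file
    'sensortypes', 'MEG', ...
    'overwrite',   1);
}}}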
==== Comparison with FieldTrip ====

The figures below show the EMG and MRC21 channels (a MEG sensor over the left motor cortex) for epoch #1.1, in Brainstorm (left) and in the [[https://www.fieldtriptoolbox.org/tutorial/coherence/|FieldTrip tutorial]] (right).

{{attachment:bst_ft_trial1.png||width="100%"}}

== Coherence: EMG x MEG ==

Let's compute the '''magnitude-squared coherence (MSC)''' between the '''left EMG''' and the '''MEG''' channels.

 * In the Process1 box, drag and drop the '''Left (192 files)''' trial group.
 * Select the process '''Connectivity > Coherence 1xN [2023]''':
  * '''Time window''': check '''All file'''
  * '''Sensor types''' = `MEG`
  * Do not check '''Include bad channels'''
  * Select '''Magnitude squared coherence''' as '''Connectivity metric'''
  * Select '''Fourier transform''' as '''Time-freq decomposition''' method
  * Click on the '''[Edit]''' button to set the parameters of the Fourier transform:
   * Select '''Matlab's FFT defaults''' as '''Frequency definition'''
   * '''FT window length''' = `0.5 s`
   * '''FT window overlap''' = `50%`
   * '''Highest frequency of interest''' = `80 Hz`
   * '''Time resolution''' = '''None'''
  * '''Estimate & save''' = '''across combined files/epochs'''

{{{#!wiki note
More details on the '''Coherence''' process can be found in the [[Tutorials/Connectivity#Coherence|Connectivity Tutorial]].
}}}

 * Add the process '''File > Add tag''' with the following parameters:
  * '''Tag to add''' = `MEG sensors`
  * Select '''Add to file name'''
 * Run the pipeline:

|| {{attachment:coh_meg_emgleft.png}} ||
|| {{attachment:coh_meg_emgleft2.png}} ||

 * Double-click on the resulting data node '''MSCoh-stft: EMGlft (185 files) | MEG sensors''' to display the MSC spectra. Click on the maximum peak in the 15 to 20 Hz range, and press `Enter` to display the spectrum of the selected sensor in a new window. This spectrum is that of channel '''MRC21''', and shows a prominent peak at 17.58 Hz. Use the frequency slider (under the Time panel) to explore the MSC output across frequencies.
 * Right-click on the spectrum and select '''2D Sensor cap''' for a topographical representation of the magnitude of the coherence results across the sensor array. You may also use the shortcut `Ctrl-T`. The sensor locations can be displayed with a right-click and by selecting '''Channels > Display sensors''' from the contextual menu (shortcut `Ctrl-E`).
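The sensor-level coherence computation can also be run from a script. The sketch below assumes the 1xN coherence process is callable as `process_cohere1n` with options mirroring the GUI fields above (seed channel, MSC, 0.5-s Fourier windows, 50% overlap, 80 Hz limit, estimate across epochs); every option name here is indicative only and should be regenerated with '''Generate .m script'''.

{{{
% Sketch: magnitude-squared coherence between EMGlft and all MEG sensors,
% estimated across the 185 good epochs in sEpochs (all names indicative).
sCohMeg = bst_process('CallProcess', 'process_cohere1n', sEpochs, [], ...
    'timewindow',  [], ...           % [] = all file
    'src_channel', 'EMGlft', ...     % reference (seed) channel
    'sensortypes', 'MEG', ...
    'cohmeasure',  'mscohere', ...   % magnitude-squared coherence
    'win_length',  0.5, ...          % 0.5-s Fourier windows
    'overlap',     0.5, ...          % 50% overlap
    'maxfreq',     80, ...           % highest frequency of interest
    'outputmode',  'avgcoh');        % estimate across combined files/epochs
}}}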
 * We can now average the magnitude of the MSC across the beta band (15-20 Hz).<<BR>>In the Process1 box, select the new '''mscohere''' file.
 * Run the process '''Frequency > Group in time or frequency bands''':
  * Select '''Group by frequency bands'''
 * The resulting '''MSCoh-stft:...|tfbands''' node contains one MSC value for each sensor (the MSC average in the 15-20 Hz band). Right-click on the file to display the 2D or 3D topography of this MSC beta-band measure.<<BR>><<BR>> {{attachment:res_coh_tfgroup.gif}}
 * Higher MSC values between the EMG signal and the MEG sensors map over the contralateral set of central sensors in the beta band. [[Tutorials/Connectivity#Sensor-level|Sensor-level connectivity]] can be ambiguous to interpret anatomically, though. We will now map the magnitude of EMG coherence across the brain (MEG sources).

== Source estimation ==

=== MRI segmentation ===

We first need to extract the cortical surface from the T1 MRI volume we imported at the beginning of this tutorial. [[https://neuroimage.usc.edu/brainstorm/Tutorials/SegCAT12|CAT12]] is a Brainstorm plugin that performs this task in 30 to 60 minutes.

 * Switch back to the Anatomy view of the protocol (<<Icon(iconSubjectDB.gif)>>).
 * Right-click on the MRI (<<Icon(iconMri.gif)>>) > '''MRI segmentation > CAT12''':
  * '''Number of vertices''': `15000`
  * '''Anatomical parcellations''': `Yes`
  * '''Cortical maps''': `No`

 . {{attachment:cat12.png||width="100%"}}

 * Keep the low-resolution central surface selected as the default cortex ('''central_15002V'''). This surface is the primary output of CAT12, and lies half-way between the pial envelope and the grey-white interface ([[https://neuroimage.usc.edu/brainstorm/Tutorials/SegCAT12|more information]]). The head surface was recomputed during the process and duplicates the surface obtained above: you can either delete one of the head surfaces or ignore this point for now.
 * For quality control, double-click on the head and central_15002V surfaces to visualize them in 3D.<<BR>><<BR>> {{attachment:cat12_files.gif}}

=== Head models ===

We will perform source modeling using a [[Tutorials/HeadModel#Dipole_fitting_vs_distributed_models|distributed model]] approach for two different source spaces: the '''cortex surface''' and the entire '''MRI volume'''. Forward models are called ''head models'' in Brainstorm. They account for how neural electrical currents produce magnetic fields captured by sensors outside the head, considering the electromagnetic properties and geometry of the head tissues, independently of actual empirical measurements ([[http://neuroimage.usc.edu/brainstorm/Tutorials/HeadModel|more information]]). A distinct head model is required for the cortex surface and head volume source spaces.

==== Cortical surface ====

 * Go back to the '''Functional data''' view of the database.
 * Right-click on the channel file of the imported epoch folder > '''Compute head model'''.
 . {{attachment:pro_head_model_srf.gif}}

==== Whole-head volume ====

 * Right-click on the channel file again > '''Compute head model'''.
 . {{attachment:pro_head_model_vol.gif}}

 * The '''Overlapping spheres (volume)''' head model is now added to the database explorer. The green color of its name indicates that this is the default head model for the current folder: you can select another available head model by double-clicking on its name.

 . {{attachment:tre_head_models.gif}}

=== Noise covariance ===

The [[Tutorials/NoiseCovariance#The_case_of_MEG|recommendation for MEG]] source imaging is to extract basic noise statistics from empty-room recordings. When these are not available, as here, resting-state data can be used as a proxy for the MEG noise covariance. We will use a segment of the MEG recordings away from the task and major artifacts: '''18-29 s'''.

 * Right-click on the clean continuous file > '''Noise covariance > Compute from recordings'''.<<BR>><<BR>> {{attachment:pro_noise_cov.gif}}
 * Right-click on the Noise covariance (<<Icon(iconNoiseCov.gif)>>) > '''Copy to other folders'''.<<BR>><<BR>> {{attachment:tre_covmat.gif}}

=== Inverse models ===

We will now compute three inverse models, with different source spaces: cortex surface with '''constrained''' dipole orientations (normal to the cortex), cortex surface with '''unconstrained''' orientations, and MRI '''volume''' ([[Tutorials/SourceEstimation|more information]]).

==== Cortical surface ====

 * Right-click on '''Overlapping spheres (surface)''' > '''Compute sources''':
==== Whole-head volume ====

 * Right-click on '''Overlapping spheres (volume)''' > '''Compute sources''':
* Three imaging kernels (<<Icon(iconResultKernel.gif)>>) are now available in the database explorer. Note that each trial is associated with three source links (<<Icon(iconResultLink.gif)>>). The concept of imaging kernels is explained in Brainstorm's main tutorial on source mapping. . {{attachment:tre_sources.gif}} == Coherence: EMG x Sources == We can now compute the coherence between the EMG signal and all brain source time series, for each of the source models considered. Let's start with the '''cortical surface/constrained''' model. * To select the source maps to include in the coherence estimation, click on the [[Tutorials/PipelineEditor#Search_Database|Search Database]] button (<<Icon(iconZoom.gif)>>), and select '''New search'''. Set the parameters as shown below, and click on '''Search'''. |
Line 438: | Line 370: |
This will create a new tab in the database explorer. This new tab contains '''only''' the files that match the search criteria. . {{attachment:tre_search_srf.png||width="40%"}} * Click the '''Process2''' tab at the bottom of the main Brainstorm window and drag-and-drop the '''Left (192 files)''' trial group into the '''Files A''' box and repeat for the '''Files B''' box. Select '''Process recordings''' (<<Icon(iconEegList.gif)>>) for Files A, and '''Process sources''' (<<Icon(iconResultList.gif)>>) for Files B. The logic is that we will extract from the same files the EMG signal (Files A side) and the sources time series (Files B side), and then compute coherence between these two sets. Note that blue labels over the '''Files A''' and the '''Files B''' boxes indicate that there are 185 "good trial" files per box. |
* This creates a new tab in the database explorer, showing only the files that match the search criteria. . {{attachment:tre_search_srf.gif}} * Click the '''Process2''' tab at the bottom of the main Brainstorm window. * '''Files A''': Drag-and-drop the '''Left (192 files)''' group, select '''Process recordings''' (<<Icon(iconEegList.gif)>>). * '''Files B''': Drag-and-drop the '''Left (192 files)''' group, select '''Process sources''' (<<Icon(iconResultList.gif)>>). * Objective: Extract from the same files both the EMG recordings (Files A) and the sources time series (Files B), then compute the coherence measure between these two categories of time series. Note that the blue labels over the file lists indicate that there are 185 "good" files (7 bad epochs). |
Line 446: | Line 383: |
Open the '''Pipeline editor''': * Add the process '''Connectivity > Coherence AxB [2021]''' with the following parameters: * '''Time window''' = `0 - 1000 ms` or check '''All file''' |
* Select the process '''Connectivity > Coherence AxB [2023]''': * '''Time window''' = '''All file''' |
Line 452: | Line 387: |
* Do not '''Remove evoked responses from each trial''' * '''Magnitude squared coherence''', '''Window length''' = `0.5 s` * '''Overlap''' = `50%` * '''Highest frequency''' = `80 Hz` * '''Average cross-spectra'''. * Add the process '''File > Add tag''' with the following parameters: |
* Check '''Flatten unconstrained source orientations''' * Click in the '''[Edit]''' button of the '''PCA options''': * Select '''across all epochs/files''' as PCA method * '''PCA time window''' = '''All file''' * Check '''Remove DC offset''' * '''Baseline''' = '''All file''' * Ignore the '''Scout function''' options as we are not using scouts * Select '''Magnitude squared coherence''' as '''Connectivity metric''' * Select '''Fourier transform''' as '''Time-freq decomposition''' method * Click in the '''[Edit]''' button to set the parameters of the Fourier transform: * Select '''Matlab's FFT defaults''' as '''Frequency definition''' * '''FT window length''' = `0.5 s` * '''FT window overlap''' = `50%` * '''Highest frequency of interest''' = `80 Hz` * '''Time resolution''' = '''None''' * '''Estimate & save''' = '''across combined files/epochs''' * Add the process '''File > Add tag''': |
Line 466: | Line 412: |
* Once the processing is finish. Go to the '''Database''' tab of the database explorer and refresh it (with '''[F5]''') to show the resulting 1xN connectivity file (<<Icon(iconConnect1.gif)>>) '''mscohere(0.6Hz,555win): Left (#1) | (surface)(Constr)'''. * Repeat the steps above to compute the EMG-sources coherence for the sources from the kernels '''MN: MEG (surface)(Unconstr)''' and '''MN: MEG (volume)(Unconstr)'''. Do not forget to update the '''search criteria''' and the '''tag''' to be added to the result. === Results (Surface) === Double-click the 1xN connectivity files for the '''(surface)''' source space to show the results on the cortex. If you are not familiar with the options in the cortex figures, check [[Tutorials/SourceEstimation#Display:_Cortex_surface|Display: Cortex surface]] Find the location and frequency with the highest coherence value. * Adjust the '''amplitude threshold''' to '''99%''' * Explore with coherence spectra with '''frequency slider''' The highest coherence value is located on the '''right primary motor cortex''' (precentral gyrus) at 14.65 Hz for the analysis using constrained and unconstrained sources. Set the '''amplitude threshold''' to '''0%''' to see the extension of the high coherence values. || {{attachment:res_coh_srfc.png}} || || {{attachment:res_coh_srfu.png}} || ||<style="text-align:center">MSC @ 14.65Hz (surface)(Constr) || ||<style="text-align:center">MSC @ 14.65Hz (surface)(Unconstr) || We observe that results obtained with constrained and unconstrained sources agree in the location and frequency of the peak coherence. The main difference between these results is that coherence values obtained with unconstrained sources appear smoother, this caused by the maximum aggregation performed across directions (explained at detail in the next section. Finally, right-click on any of the cortex figures and select '''''Get coordinates'''''. Then click on the right motor cortex with the crosshair cursor that appears. The [[CoordinateSystems|SCS]] coordinates will be useful in the next section. <<BR>> SCS coordinates '''X:38.6''', '''Y:-21.3''' and '''Z: 115.5''' {{attachment:res_get_coordinates.png||width="80%"}} === Results (Volume) === Double-click the 1xN connectivity file for the '''(volume)''' source space. Note that this time the results are shown in the '''MRI viewer''' rather than the cortical surface. * Set '''frequency slider''' to 14.65 Hz * Set the SCS coordinates to '''X:38.6''', '''Y:-21.3''' and '''Z: 115.5''' * Set the '''Transparency''' of the coherence values (Data) in the '''''Surface''''' tab to '''30%''' ||<style="text-align:center"> {{attachment:res_coh_vol.png}} || ||<style="text-align:center">MSC @ 14.65Hz (volume)(Unconstr) || We note that all the results obtained with constrained (surface) and unconstrained (surface and volume) sources agree with each other, and in the location and frequency of the peak coherence. Moreover, they agree with our hypothesis, previous results in the literature [REFS], and the results presented in the [[https://www.fieldtriptoolbox.org/tutorial/coherence/|FieldTrip tutorial]]. === Coherence with constrained and unconstrained sources === For '''constrained''' sources, each vertex in the source grid is associated with '''ONE''' time series, as such, when coherence is computed with the EMG signal (also one time series), the result is '''ONE''' coherence spectrum per vertex. In other words, for each frequency bin, there is coherence brain map. 
FIGURE (Diagram needed) In the case of '''unconstrained''' sources, each vertex in the grid is associated with '''THREE''' time series, each one corresponding to the X, Y and Z directions. Thus, when coherence is computed with the EMG signal (one time series), there are '''THREE''' coherence spectra. This complicates its representation in the brain, thus; these '''THREE''' coherence spectra need to be '''flattened''' into one, resulting in one coherence spectrum per vertex, the maximum across directions is found for each frequency bin for each vertex. FIGURE (Diagram needed) {{{#!wiki caution An alternative approach in the literature, to address the 3-dimensional nature of the unconstrained sources, consists in flattening the vertex X, Y and Z time series before the coherence computation; resulting in a similar case as the constrained sources (REF). Common methods for this flattening include: '''PCA''' (only first component is kept), and [[https://en.wikipedia.org/wiki/Norm_(mathematics)|(Euclidean) norm]]. This flattening of the time series can be performed in Brainstorm with the process: Sources > '''Unconstrained to flat map'''. * Flattened sources are saved as full rather than recordings+kernel. * We have tested this flattening approach with simulations and the real data (from this tutorial) and we have found decremental effects on the expected results. |
* Repeat the steps above to compute the EMG-sources coherence for the other source models: '''surface/unconstrained''' and '''volume''': <<BR>> * Edit the search criteria accordingly: Right-click on the search tab > '''Edit search'''. * This updates automatically the file selection in the Process2 tab. * Select the processes as indicated above and '''update''' the '''file tag''' according to the type of sources. {{{#!wiki caution The option '''Flatten unconstrained source orientations''' did not have any impact with '''constrained''' sources. However when used with '''unconstrained''' sources, these these sources are flattened before computing coherence. If the option is unchecked for '''unconstrained''' sources, coherence is computed between the sensor and each of the the three orientations per source, these resulting three coherence spectra are then collapsed into one using the maximum value for each frequency band. Find mode details in the [[https://neuroimage.usc.edu/brainstorm/Tutorials/Connectivity#Scout-level_connectivity|connectivity tutorial]] |
Line 520: | Line 421: |
== Coherence 1xN (scout level) == So far, we have computed coherence at the source level, thus, a coherence spectrum is computed for each of the 15002 source points. This large dimension hinders later analysis of the results. Therefore, the strategy is to reduce the dimensionality of the source space by using a [[Tutorials/Scouts#Scout_toolbar_and_menus|surface-]] or [[Tutorials/DefaultAnatomy#MNI_parcellations|volume-]]parcellation scheme, in Brainstorm jargon this is an '''atlas''' that is made of '''scouts'''. See the [[Tutorials/Scouts|scout tutorial]] for detail information on atlases and scouts in Brainstorm. Under this approach, instead of providing one result (coherence spectrum) per source vertex, one result is computed for each scout. When computing coherence (or other connectivity metrics) in the scout level, it is necessary to provide two parameters that define how the data is aggregated per scout: * The '''scout function''' (mean is often used), and * When the within-scout aggregation takes place. Either '''before''' or '''after''' the coherence estimation. * '''Before''': The scout function is applied for each direction on the vertices' source time series that make up a scout; resulting in one time series per direction per scout. Then, the scout time series are used to compute coherence with the reference signal (EMG in this tutorial), and the coherence spectra for each scout are aggregated across dimensions, [[#Coherence_with_constrained_and_unconstrained_sources|as shown previously]], to obtain one coherence spectrum per scout. . FIGURE (Diagram needed) * '''After''': Coherence is computed between the reference signal and each direction of the vertices' source time series, [[#Coherence_1xN_.28source_level.29|as in the previous section]]. Then, the scout function is applied on the coherence spectra for each direction of the vertices within a scout, finally these spectra are aggregated across dimensions to obtain a coherence spectrum per scout. . FIGURE (Diagram needed) {{{#!wiki tip As it can be seen, the '''After''' option takes longer and used more resources as it computes the coherence spectrum for each vertex in the scouts, and then, the coherence spectra are aggregated. |
* Close the search tab. If the 3 new connectivity files <<Icon(iconConnect1.gif)>>) are not featured in the database explorer: refresh the dabase display by pressing '''[F5]''' or by clicking again on the selected button "Functional data". <<BR>><<BR>> {{attachment:tre_coh_src_files.gif}} === Surface === Double-click on the 1xN connectivity files for the two '''(surface)''' source spaces (constrained and unconstrained source orientations) to visualize the cortical maps. See main tutorial [[Tutorials/SourceEstimation#Display:_Cortex_surface|Display: Cortex surface]] for all available options. Pick the cortical location and frequency with the highest coherence value. * In the '''Surface''' tab: Smooth=30%, Amplitude=0%. * To compare visually different cortex maps, set manually the [[Tutorials/Colormaps|colormap]] range (e.g.`[0 - 0.07])` * Explore with coherence spectra with the '''frequency slider''' * The highest coherence value is located at '''14.65 Hz''', in the '''right primary motor cortex''' (precentral gyrus). To observe the coherence spectrum at a given location: right-click on the cortex > '''Source: Power spectrum'''. <<BR>><<BR>> {{attachment:res_coh_surf.gif}} * The constrained (top) and unconstrained (bottom) orientations coherence maps are qualitatively similar in terms of peak location and frequency. The unconstrained source map appears smoother because of the maximum aggregation across all three current directions at each brain location, as explained below. These maps results are similar to those obtained with the [[https://www.fieldtriptoolbox.org/tutorial/coherence/|FieldTrip tutorial]]. * To obtain the 3D coordinates of the peak coherence location: right-click on the figure > '''Get coordinates'''. Then click on the right motor cortex with the crosshair cursor. Let's keep note of these coordinates for later comparision with the whole-head volume results below. <<BR>><<BR>> {{attachment:res_get_coordinates.gif}} === Volume === * Double-click the 1xN connectivity file to display the '''(volume)''' source space. * Set the frequency to 14.65 Hz, and the data transparency to 20% (Surface tab). * Find the peak in the head volume by navigating in all 3 dimensions, or use the coordinates from the surface results as explained above. * Right-click on the figure > Anatomical atlas > None. The coherence value under the cursor is then shown on the top-right corner of the figure. <<BR>><<BR>> {{attachment:res_coh_vol.gif}} === Method for constrained sources === For cortical and ''' orientation-constrained''' sources, each vertex in the source grid is associated with '''1''' time series. As such, when coherence is computed with the EMG signal (also consisting of one time series), this result is '''1''' coherence spectrum per vertex. In other words, for each frequency bin, we obtain one coherence brain map. [[https://neuroimage.usc.edu/brainstorm/Tutorials/CorticomuscularCoherence?action=AttachFile&do=get&target=diagram_1xn_coh_constr.png|{{attachment:diagram_1xn_coh_constr.png|width="100%"}}]] === Method for unconstrained sources === In the case of '''orientation-unconstrained''' (either surface or volume) sources, each vertex (source location) is associated with '''3''' time series, each one corresponding to the X, Y and Z directions. 
As such, the approach taken in this tutorial consisted in first [[https://neuroimage.usc.edu/brainstorm/Tutorials/PCA#Unconstrained_source_flattening_with_PCA|reducing the 3D maps to a single orientation with the PCA method]], this results in a source model similar to that with orientation-constrained sources, thus, coherence is then computed as with ''' orientation-constrained''' (described above). [[https://neuroimage.usc.edu/brainstorm/Tutorials/CorticomuscularCoherence?action=AttachFile&do=get&target=diagram_1xn_coh_flattened.png|{{attachment:diagram_1xn_coh_flattened.png||width="100%"}}]] {{{#!wiki caution An alternative approach for addressing the issue of the 3-dimensional nature of coherence with '''unconstrained sources''' is to compute coherence between the EMG signal (one time series), the each of the '''3''' time series (one for each source orientation: x, y and z) associated with each source location, resulting in '''3''' coherence spectra. Then, these coherence spectra are collapsed into a single coherence spectrum by keeping the maximum coherence value across those in the three directions, for each frequency bin. The choice of the maximum statistic is empirical: it is not rotation invariant (if the specifications of the NAS/LPA/RPA fiducials change, the x,y,z axes also change, which may influence connectivity measures quantitatively). Nevertheless, our empirical tests showed this option produced smooth and robust (to noise) connectivity estimates. [[https://neuroimage.usc.edu/brainstorm/Tutorials/CorticomuscularCoherence?action=AttachFile&do=get&target=diagram_1xn_coh_unconstr.png|{{attachment:diagram_1xn_coh_unconstr.png||width="100%"}}]] Below the comparison of the coherence between the EMG sensor and '''unconstrained surface''' sources using '''PCA flattening in the source space''' (left) and '''"Flattening of the coherence, by max across dimensions" ''' [[https://neuroimage.usc.edu/brainstorm/Tutorials/CorticomuscularCoherence?action=AttachFile&do=get&target=diagram_1xn_coh_unconstr.png|{{attachment:diagram_1xn_coh_unconstr.png||width="100%"}}]] |
Line 538: | Line 464: |
<<BR>> Let's here compute the coherence using scouts, using '''mean''' as scout function alongside with the '''Before''' option. We will use the [[https://www.biorxiv.org/content/biorxiv/early/2017/07/16/135632.full.pdf|Schaefer 100 parcellation]] atlas on the results from constrained sources. |
== Coherence: EMG x Scouts == We have now obtained coherence spectra at each of the 15002 brain source locations. This is a very large amount of data to shift through. We therefore recommend to further reduce the dimensionality of the source space by using a [[Tutorials/Scouts#Scout_toolbar_and_menus|surface-]] or [[Tutorials/DefaultAnatomy#MNI_parcellations|volume-]]parcellation scheme. In Brainstorm vernacular, this can be achieved via an '''atlas''' consisting of '''scout regions'''. See the [[Tutorials/Scouts|Scout Tutorial]] for detailed information about atlases and scouts in Brainstorm. To compute scout-wise coherence, it is necessary to specify two parameters to indicate how the data is aggregated in each scout: 1. The [[https://neuroimage.usc.edu/brainstorm/Tutorials/Scouts#Scout_function|scout function]] (for instance, the mean), and 2. When the scout aggregation procedure if performed, either '''before''' or '''after''' the coherence computation). === Method for constrained sources === * Scout function applied '''before''': The scout function (e.g., mean) is applied for all the source time series in the scout; resulting in '''1''' time series per scout. Thi scout time series is then used to compute coherence with the reference signal (here, with EMG) resulting in '''1''' coherence spectrum per scout. [[https://neuroimage.usc.edu/brainstorm/Tutorials/CorticomuscularCoherence?action=AttachFile&do=get&target=diagram_1xn_coh_sct_bef.png|{{attachment:diagram_1xn_coh_sct_bef.png||width="100%"}}]] * Scout function applied '''after''': With this option, coherence is computed between the reference signal (here, EMG) and each source time series in the scout, resulting in '''N''' coherence spectra, with '''N''' being the number of sources in the scout. Finally the scout function is applied to the resulting coherence spectra for each source within the scout. This is a considerably more greedy option that the "before" option above, as it computes coherence measures between the EMG signal and each of the 15000 elementary source signals. This requires more computing resources; see [[https://neuroimage.usc.edu/brainstorm/Tutorials/Connectivity#Source-level|introduction]] for an example. [[https://neuroimage.usc.edu/brainstorm/Tutorials/CorticomuscularCoherence?action=AttachFile&do=get&target=diagram_1xn_coh_sct_aft.png|{{attachment:diagram_1xn_coh_sct_aft.png||width="100%"}}]] === Method for unconstrained sources === When flattening is performed on '''unconstrained sources''', coherence computations is carried out as described in the [[XX|previous section]]. However, if the sources are not flattened before computing coherence: * Scout function applied '''before''': The scout function (e.g., mean) is applied for each direction on the elementary source time series in the scout; resulting in '''3''' time series per scout, one time series per orientation (x, y, z). These scout time series are then used to compute coherence with the reference signal (here, with EMG), and the coherence spectra for each scout are aggregated across the x, y and z dimensions, with the '''max''' function [[#Method|as shown previously]], to obtain '''1''' coherence spectrum per scout. 
[[https://neuroimage.usc.edu/brainstorm/Tutorials/CorticomuscularCoherence?action=AttachFile&do=get&target=diagram_1xn_coh_sct_bef.png|{{attachment:diagram_1xn_coh_sct_bef.png||width="100%"}}]] * Scout function applied '''after''': With this option, coherence is computed between the reference signal (here, EMG) and each elementary sources (number of sources in each scout x 3 orientations). Then the coherence spectra for each elementary source are aggregated across the x, y and z dimensions. Finally the scout function is applied to the resulting coherence spectra for each elementary source within the scout. This is a considerably more greedy option that the "before" option above, as it computes coherence measures between the EMG signal and each of the 45000 elementary source signals. This requires more computing resources; see [[https://neuroimage.usc.edu/brainstorm/Tutorials/Connectivity#Source-level|introduction]] for an example. [[https://neuroimage.usc.edu/brainstorm/Tutorials/CorticomuscularCoherence?action=AttachFile&do=get&target=diagram_1xn_coh_sct_aft.png|{{attachment:diagram_1xn_coh_sct_aft.png||width="100%"}}]] Here, we will compute the coherence from scouts, using '''mean''' as scout function with the '''before''' option. We will use the [[https://www.biorxiv.org/content/biorxiv/early/2017/07/16/135632.full.pdf|Schaefer 100 parcellation]] atlas on to the orientation-constrained source map. |
Line 544: | Line 497: |
* On the '''Process2''' tab drag-and-drop the '''Left (192 files)''' trial group into the '''Files A''' and '''Files B''' boxes. Select '''Process recordings''' (<<Icon(iconEegList.gif)>>) for Files A, and '''Process sources''' (<<Icon(iconResultList.gif)>>) for Files B. There should be '''185''' files in each side. | * In the Process2: '''Left '''trial group into both the '''Files A''' and '''Files B''' boxes. Select '''Process recordings''' (<<Icon(iconEegList.gif)>>) for Files A, and '''Process sources''' (<<Icon(iconResultList.gif)>>) for Files B. |
Line 549: | Line 502: |
* Add the process '''Connectivity > Coherence AxB [2021]''' with the following parameters: * '''Time window''' = `0 - 1000 ms` or check '''All file''' |
* Add the process '''Connectivity > Coherence AxB [2023]''' with the following parameters: * '''Time window''' = '''All file''' |
Line 555: | Line 508: |
* '''Scout function''': `Mean` * '''When to apply the scout function''': `Before` * Do not '''Remove evoked responses from each trial''' * '''Magnitude squared coherence''', '''Window length''' = `0.5 s` * '''Overlap''' = `50%` * '''Highest frequency''' = `80 Hz` * '''Average cross-spectra'''. |
* '''Scout function: Apply''': `before` * '''Scout function''': `mean` * Select '''Magnitude squared coherence''' as '''Connectivity metric''' * Select '''Fourier transform''' as '''Time-freq decomposition''' method * Click in the '''[Edit]''' button to set the parameters of the Fourier transform: * Select '''Matlab's FFT defaults''' as '''Frequency definition''' * '''FT window length''' = `0.5 s` * '''FT window overlap''' = `50%` * '''Highest frequency of interest''' = `80 Hz` * '''Time resolution''' = '''None''' * '''Estimate & save''' = '''across combined files/epochs''' |
Line 572: | Line 529: |
Double-click on the resulting file. This time the coherence spectra are not displayed on the cortex, but they are plotted for each scout is show. Right click, and it can be also shown as image. | Double-click on the new file: the coherence spectra are not displayed on the cortex; they are plotted for each scout. The 1xN connectivity file can also be shown as an image as displayed below: |
Line 576: | Line 533: |
Note that for 14.65 Hz, the higher peak corresponds to the '''SomMotA_2 R''' scout located over the right primary motor cortex. |
Note that at 14.65 Hz, the two highest peaks correspond to the '''SomMotA_4 R''' and '''SomMotA_2 R''' scouts, both located over the right primary motor cortex. |
Line 581: | Line 536: |
The choice of the optimal parcellation scheme for the source space is not easy. <<BR>> The optimal choice is to choose a parcellation based on anatomy, for example the Brodmann parcellation. <<BR>> In Brainstorm these atlases are imported in Brainstorm as scouts (cortical regions of interest), and saved directly in the surface files as explained in this tutorial here. |
The brain parcellation scheme is the user's decision. We recommend the Schaefer100 atlas used here by default. |
Line 585: | Line 538: |
{{{#!wiki caution BRAINSTORM TEAM <<BR>> Due to the current implementation of the bst_connectivity, the full source map for each trial (185) are loaded in memory, thus replicate the After, only for the (surface)(Constr) option ~30GB of RAM are needed! (Unconstrained take 3 times that). }}} == Coherence NxN (scout level) == In previous sections, we have computed coherence between a reference signal (EMG) and sources (or scouts). However, depending on the experimental setup and hypotheses, we may want to study the brain as network, this is to say, to compute the NxN connectivity. Due to the large number of sources (often several thousands) while in theory it is possible to compute NxN connectivity, it is not practical, as such a common approach is to use of scouts to reflect the functional connections among its cell assemblies, resulting in a connectome. {{{#!wiki note Diverse studies have shown some overlap between connectomes derived from electrophysiological signals (MEG/EEG) and the ones derived from fMRI, which is reasonably expected as both are the result of the undergoing biological system. However, due to its nature, the '''electrophysiological connectomes provide unique insights on how functional communication is implemented in the brain''' ([[https://doi.org/10.1016/j.neuroimage.2021.118788|Sadaghiani et al., 2022]]). }}} Similar to the [[#Coherence_1xN_.28scout_level.29|coherence 1xN with scouts]], we need to define the '''scout function''' and '''when''' it is going to be applied. Note that computing coherence between two unconstrained leads to 9 (3x3) coherence spectra. FIGURE (Diagram needed) FIGURE (Diagram needed) Let's here compute the NxN coherence, using '''mean''' as scout function alongside with the '''Before''' option, and the '''Schaefer 100''' parcellation atlas. All of this for the constrained sources results. * Use Search Database (<<Icon(iconZoom.gif)>>) to select all the '''(surface)(Constr)''' source maps . {{attachment:gui_search_srf_n.png||width="70%"}} * On the '''Process1''' tab drag-and-drop the '''Left (192 files)''' trial group into '''Files to process''', and select the '''Process sources''' (<<Icon(iconResultList.gif)>>). There should be '''185''' files. . {{attachment:process1_n.png||width="80%"}} Open the '''Pipeline editor''': * Add the process '''Connectivity > Coherence NxN [2021]''' with the following parameters: * '''Time window''' = `0 - 1000 ms` or check '''All file''' * Check '''Use scouts (B)''' * From the menu at the right, select '''Schaefer_100_17net''' * Select all the scouts * '''Scout function''': `Mean` * '''When to apply the scout function''': `Before` * Do not '''Remove evoked responses from each trial''' * '''Magnitude squared coherence''', '''Window length''' = `0.5 s` * '''Overlap''' = `50%` * '''Highest frequency''' = `80 Hz` * '''Average cross-spectra'''. * Run the pipeline {{attachment:pro_coh_n_srfc_bef_sct.png||width="60%"}} The result is a NxN connectivity file (<<Icon(iconConnectN.gif)>>), It contains 10404 (102x102) coherence spectra. Such a visualization is not practical, thus the connectivity graph or the adjacent matrix are displayed for each frequency bin in the coherence spectra. For more details, see the [[Tutorials/ConnectivityGraph|connectivity graph tutorial]]. * Right-click on the NxN connectivity file to see its display options. * Select both, the connectivity '''graph''', and the '''image''' (adjacency matrix). * Set the frequency slider to 14.65 Hz. 
{{attachment:res_coh_n_srfc_bef_sct.png}} {{attachment:res_coh_n_srfc_bef_sct2.png}} |
|
Line 641: | Line 540: |
== Scripting == | |
Line 659: | Line 557: |
==== Forum discussions ==== {{{#!wiki important [TODO] Find relevant Forum posts. }}} <<HTML(<!-- END-PAGE -->)>> |
== Scripting == The following script from the Brainstorm distribution reproduces the analysis presented in this tutorial page: [[https://github.com/brainstorm-tools/brainstorm3/blob/master/toolbox/script/tutorial_coherence.m|brainstorm3/toolbox/script/tutorial_coherence.m]] <<HTML(<div style="border:1px solid black; background-color:#EEEEFF; width:720px; height:500px; overflow:scroll; padding:10px; font-family: Consolas,Menlo,Monaco,Lucida Console,Liberation Mono,DejaVu Sans Mono,Bitstream Vera Sans Mono,Courier New,monospace,sans-serif; font-size: 13px; white-space: pre;">)>><<EmbedContent("https://neuroimage.usc.edu/bst/viewcode.php?f=tutorial_coherence.m")>><<HTML(</div >)>> |
Corticomuscular coherence (CTF MEG)
Authors: Raymundo Cassani, Francois Tadel, Sylvain Baillet.
Corticomuscular coherence measures the degree of similarity between electrophysiological signals (MEG, EEG, ECoG sensor traces or source time series, especially over the contralateral motor cortex) and the EMG signals recorded from muscle activity during voluntary movements. Cortical-muscular signals similarities are conceived as due mainly to the descending communication along corticospinal pathways between primary motor cortex (M1) and the muscles attached to the moving limb(s). For consistency purposes, the present tutorial replicates, with Brainstorm tools, the processing pipeline "Analysis of corticomuscular coherence" of the FieldTrip toolbox.
Contents
Dataset description
The dataset is identical to that of the FieldTrip tutorial: Analysis of corticomuscular coherence:
- One participant,
- MEG recordings: 151-channel CTF MEG system,
- Bipolar EMG recordings: from left and right extensor carpi radialis longus muscles,
- EOG recordings: used for detection and attenuation of ocular artifacts,
- MRI: 1.5T Siemens system,
- Task: The participant lifted their hand and exerted a constant force against a lever for about 10 seconds. The force was monitored by strain gauges on the lever. The participant performed two blocks of 25 trials using either their left or right hand.
- Here we describe relevant Brainstorm tools via the analysis of the left-wrist trials. We encourage the reader to practice further by replicating the same pipeline using right-wrist trials!
Corticomuscular coherence:
Coherence measures the linear relationship between two signals in the frequency domain.
Previous studies (Conway et al., 1995, Kilner et al., 2000) have reported corticomuscular coherence effects in the 15–30 Hz range during maintained voluntary contractions.
- TODO: IMAGE OF EXPERIMENT, SIGNALS and COHERENCE
Download and installation
Pre-requisites
Please make sure you have completed the get-started tutorials, prior to going through the present tutorial.
- Have a working copy of Brainstorm installed on your computer.
Install the CAT12 Brainstorm plugin, to perform MRI segmentation from the Brainstorm dashboard.
Download the dataset
Download SubjectCMC.zip from the FieldTrip download page:
https://download.fieldtriptoolbox.org/tutorial/SubjectCMC.zip
- Unzip the downloaded archive file in a folder outside the Brainstorm database or app folders (for instance, directly on your desktop).
Brainstorm
- Launch Brainstorm (via the Matlab command line or the Matlab-free stand-alone version of Brainstorm).
Select from the menu File > Create new protocol. Name the new protocol TutorialCMC and select the options:
No, use individual anatomy,
No, use one channel file per acquisition run.
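The same setup can also be scripted from the Matlab command line. Below is a minimal sketch, assuming Brainstorm is installed and on the Matlab path; the reference implementation is the full script linked in the Scripting section at the end of this page.

    % Minimal scripting sketch (assumes Brainstorm is installed and on the Matlab path)
    if ~brainstorm('status')
        brainstorm nogui          % start Brainstorm without the GUI
    end
    % Create the protocol; the two 0 flags correspond to the options selected above:
    % "No, use individual anatomy" and "No, use one channel file per acquisition run"
    gui_brainstorm('CreateProtocol', 'TutorialCMC', 0, 0);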
Importing anatomy
Right-click on the newly created TutorialCMC node > New subject > Subject01.
Keep the default options defined for the study (aka "protocol" in Brainstorm's vernacular).
Switch to the Anatomy view of the protocol ().
Right-click on the Subject01 > Import MRI:
Select the file format: All MRI files (subject space)
Select the file: SubjectCMC/SubjectCMC.mri
This step will launch Brainstorm's MRI viewer, where coronal, sagittal and axial cross-sections of the MRI volume are displayed. Note that three anatomical fiducials (left and right pre-auricular points (LPA and RPA), and nasion (NAS)) are automatically identified. These fiducials are located near the left/right ears and just above the nose, respectively. Click on Save.
In all typical Brainstorm workflows from this tutorial handbook, we recommend processing the MRI volume at this stage, before importing the functional (MEG/EEG) data. However, we will proceed differently here, for consistency with the original FieldTrip pipeline and to obtain sensor-level coherence results first. The MRI segmentation will be performed later, just before the source-level analyses.
- We still need to verify the proper geometric registration (alignment) of MRI with MEG. We will therefore now extract the scalp surface from the MRI volume.
Right-click on the MRI () > MRI segmentation > Generate head surface.
Double-click on the newly created surface to display the scalp in 3-D.
MEG and EMG recordings
Link the recordings
Switch now to the Functional data view of your database contents ().
Right-click on Subject01 > Review raw file:
Select the appropriate MEG file format: MEG/EEG: CTF(*.ds; *.meg4; *.res4)
Select the data file: SubjectCMC.ds
A new folder SubjectCMC is created in the Brainstorm database explorer. Note the "RAW" tag over the icon of the folder (), indicating that the MEG files contain unprocessed, continuous data. This folder includes:
CTF channels (191) is a file with all channel information, including channel types (MEG, EMG, etc.), names, 3-D locations, etc. The total number of channels available (MEG, EMG, EOG etc.) is indicated between parentheses.
Link to raw file is a link to the original data file. Brainstorm reads all the relevant metadata from the original dataset and saves them into this symbolic node of the data tree (e.g., sampling rate, number of time samples, event markers). As seen elsewhere in the tutorial handbook, Brainstorm does not create copies by default of (potentially large) unprocessed data files (more information).
MEG-MRI coregistration
This registration step aligns the MEG coordinate system with the participant's anatomy from the MRI (more info). Here we will use only the three anatomical landmarks, stored in the MRI volume and identified at the time of MEG data collection. From the FieldTrip tutorial:
"To measure the head position with respect to the sensors, three coils were placed at anatomical landmarks of the head (nasion, left and right ear canal). [...] During the MRI scan, ear molds containing small containers filled with vitamin E marked the same landmarks. This allows us, together with the anatomical landmarks, to align source estimates of the MEG with the MRI."To visually appreciate the correctness of the registration, right-click on the CTF channels node > MRI registration > Check. This opens a 3-D figure showing the inner surface of the MEG helmet (in yellow), the head surface, the fiducial points and the axes of the subject coordinate system (SCS).
Reviewing
Right-click on the Link to raw file > Switch epoched/continuous to convert how the data is stored in the file to a continuous reviewing format, a technicality of the CTF file format.
Right-click on the Link to raw file > MEG > Display time series (or double-click on its icon). This will display data time series and enable the Time panel and the Record tab in the main Brainstorm window (see specific features in "explore data time series").
Right-click on the Link to raw file > EMG > Display time series.
Event markers
The colored dots at the top of the figure, above the data time series, indicate event markers (or triggers) saved along with MEG data. The actual onsets of the left- and right-wrist trials are not displayed yet: they are saved in an auxiliary channel of the raw data named Stim. To add these markers to the display, follow this procedure:
With the time series figure still open, go to the Record tab and select File > Read events from channel. Event channels = Stim, select Value, Reject events shorter than 12 samples. Click Run.
The rejection of short events is necessary in this dataset, because transitions between values in the Stim channel may span several time samples. For example, without this rejection an event U1 would be created at 121.76s, because the transition from the event U1025 back to zero features two unwanted values at the end of the event. The rejection criterion is set to 12 time samples (10 ms), because the duration of all the relevant triggers is longer than 15 ms. This value is specific to each dataset, so make sure you verify the trigger detections in your own data.
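Conceptually, this is what the event extraction from the Stim channel does. Below is a minimal Matlab sketch under simplified assumptions (stim is a [1 x Ntime] row vector with the trigger channel values, fs the sampling rate in Hz; variable names are hypothetical, this is not the Brainstorm implementation):

    % Detect value transitions in the trigger channel and reject short plateaus
    minDur  = 12;                                         % minimum duration: 12 samples (~10 ms at 1200 Hz)
    iOnsets = find(diff([0, stim]) ~= 0 & stim ~= 0);     % samples where a non-zero value starts
    events  = struct('value', {}, 'time', {});
    for iOn = 1:numel(iOnsets)
        iStart = iOnsets(iOn);
        value  = stim(iStart);
        iEnd   = iStart;
        while (iEnd < numel(stim)) && (stim(iEnd+1) == value)
            iEnd = iEnd + 1;                              % extend to the end of the plateau
        end
        if (iEnd - iStart + 1) >= minDur                  % keep only events longer than 12 samples
            events(end+1) = struct('value', value, 'time', iStart / fs); %#ok<AGROW>
        end
    end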
New event markers are created and now shown in the Events section of the tab, along with previous event categories. In this tutorial, we will only use events U1 through U25, which correspond to the beginning of each of the 25 left-wrist trials of 10 seconds. Following FieldTrip's tutorial, let's reject trial #7, event U7.
Delete all unused events: Select all the events except U1-U6 and U8-25 (Ctrl+click / Shift+click), then menu Events > Delete group (or press the Delete key).
Merge events: Select all the event groups, then select from the menu Events > Merge group > "Left". A new event category called "Left" now indicates the onsets of 24 trials of left-wrist movements.
Pre-processing
In this tutorial, we will analyze only the Left trials (left-wrist extensions). In the following sections, we will process only the first 330 s of the original recordings, where the left-wrist trials were performed.
Power line artifacts
In the Process1 box: Drag and drop the Link to raw file.
Run process Frequency > Power spectrum density (Welch):
Time window: 0-330 s
Window length: 10 s
Overlap: 50%
Sensor types: MEG, EMG
Double-click on the new PSD file to visualize the power spectrum density of the data.
- The PSD plot shows two groups of sensors: the EMG (highlighted in red above) and the MEG spectra below (black lines). Clearly visible peaks at 50 Hz and its harmonics (100, 150, 200 Hz and above) correspond to the European power line frequency. We will now use notch filters to attenuate the power line contaminants at 50, 100 and 150 Hz.
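The PSD process essentially computes a Welch estimate. As an illustration only (not the Brainstorm implementation), the equivalent Matlab call for a single channel would look like the sketch below, where x is assumed to be one MEG or EMG trace restricted to the first 330 s and fs its sampling rate:

    % Welch PSD sketch for one channel: 10-s windows, 50% overlap
    winLen   = round(10 * fs);                     % 10-s window
    nOverlap = round(0.5 * winLen);                % 50% overlap
    [pxx, f] = pwelch(x, hann(winLen), nOverlap, [], fs);
    semilogy(f, pxx);                              % power line peaks expected at 50, 100, 150 Hz
    xlabel('Frequency (Hz)'); ylabel('PSD');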
In the Process1 box: Drag and drop the Raw | clean node.
Run the process Pre-processing > Notch filter with:
Check Process the entire file at once
Sensor types: MEG, EMG
Frequencies to remove (Hz): 50, 100, 150
In case you get a memory error message:
These MEG recordings have been saved before applying the CTF 3rd-order gradient compensation used for noise reduction. The compensation weights are therefore applied on the fly when Brainstorm reads data from the file. However, this requires reading all the channels at once. By default, the frequency filters are optimized to process channel data sequentially, which is incompatible with applying the CTF compensation on the fly. This setting can be overridden with the option Process the entire file at once, which requires loading the entire file in memory and may crash the process depending on your computing resources (typically if your computer's RAM < 8GB). If this happens: run the process Artifacts > Apply SSP & CTF compensation on the file first, then run the notch filter process without the option "Process the entire file at once" (more information).
A new folder named SubjectCMC_clean_notch is created. Obtain the PSD of these data to appreciate the effect of the notch filters. As above, please remember to indicate a Time window restricted from 0 to 330 s in the options of the PSD process.
EMG: Filter and rectify
Two typical pre-processing steps for EMG consist in high-pass filtering and rectifying.
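As a quick illustration of these two steps on a single EMG trace (a sketch only, not the Brainstorm code; emg and fs are assumed given):

    % EMG pre-processing sketch: 10 Hz high-pass, then full-wave rectification
    % emg: [1 x Ntime] EMG trace, fs: sampling rate (Hz)
    [b, a]  = butter(4, 10 / (fs/2), 'high');   % 4th-order Butterworth high-pass at 10 Hz
    emgHp   = filtfilt(b, a, emg);              % zero-phase filtering
    emgRect = abs(emgHp);                       % rectified EMG, used later for coherence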
In the Process1 box: drag and drop the Raw | notch(50Hz 100Hz 150Hz) node.
Add the process Pre-process > Band-pass filter
Sensor types = EMG
Lower cutoff frequency = 10 Hz
Upper cutoff frequency = 0 Hz
Add the process Pre-process > Absolute values
Sensor types = EMG
- Run the pipeline
Delete intermediate files that won't be needed anymore: Select folders SubjectCMC_notch and SubjectCMC_notch_high, then press the Delete key (or right-click > File > Delete).
MEG: Blink SSP and bad segments
Stereotypical artifacts such as eye blinks and heartbeats can be identified from their respective characteristic spatial distributions. Their contamination of MEG signals can then be attenuated specifically using Signal-Space Projections (SSPs). For more details, consult the specific tutorial sections about the detection and removal of artifacts with SSP. The present tutorial dataset features an EOG channel but no ECG. We will therefore only remove artifacts caused by eye blinks.
Blink correction with SSP
Right-click on the pre-processed file > MEG > Display time series and EOG > Display time series.
In the Record tab: Artifacts > Detect eye blinks, and use the parameters:
Channel name= EOG
Time window = 0 - 330 s
Event name = blink
Three categories of blink events are created. Review the traces of the EOG channels around a few of these events to ascertain they are related to eye blinks. In the present case, we note that the blink group contains genuine eye blinks, and that groups blink2 and blink3 capture saccades.
To remove blink artifacts with SSP, go to Artifacts > SSP: Eye blinks:
Event name=blink
Sensors=MEG
Display the time series and topographies of the first two SSP components identified. In the present case, only the first SSP component can be clearly related to blinks: the percentage of variance explained is substantially higher than the other components', the spatial topography of the component is also typical of eye blinks, and the corresponding time series is similar to the EOG signal around blinks. Select only component #1 for removal.
Close all visualization figures by clicking on the large × at the top-right of the main Brainstorm window.
Detection of "bad" data segments
Here we will use the automatic detection of artifacts to identify data segments contaminated by e.g., large eye and head movements, or muscle contractions.
Display the MEG and EOG time series. In the Record tab, select Artifacts > Detect other artifacts and enter the following parameters:
Time window = 0 - 330 s
Sensor types=MEG
Sensitivity=3
Check both frequency bands 1-7 Hz and 40-240 Hz
- You are encouraged to review all the segments marked using this procedure. With the present data, all marked segments do contain clear artifacts.
Select the 1-7Hz and 40-240Hz event groups and select Events > Mark group as bad. Alternatively, you can add the prefix bad_ to the event names. Brainstorm will automatically discard these data segments from further processing.
- Close all visualization windows and reply "Yes" to the query about saving the modifications.
Epoching
We are now finished with the pre-processing of the EMG and MEG recordings. We will now extract and import the data segments of interest into the Brainstorm database for further derivations. As mentioned previously, we will focus on the Left category of events (left-wrist movements). For consistency with the FieldTrip tutorial, we will analyze 8 s of recordings following each movement onset (out of the original 10 s per trial), and split them into 1-s epochs. The DC offset will also be removed from each MEG channel in every 1-s epoch. Finally, we will standardize the time vector across all 1-s epochs; this last step is needed because the time vector of each 1-s epoch is defined with respect to the event that was used to import the 8-s trial.
- In the Process1 box: Drag-and-drop the pre-processed file.
Select the process Import > Import recordings > Import MEG/EEG: Events:
Subject name = Subject01
Folder name = empty
Event names = Left
Time window = 0 - 330 s
Epoch time = 0 - 8000 ms
Split recordings in time blocks = 1 s
Uncheck Create a separate folder for each event type
Check Ignore shorter epochs
Check Use CTF compensation
Check Use SSP/ICA projectors
Add the process Pre-process > Remove DC offset:
Baseline = All file
Sensor types = MEG
Check Overwrite input files
Add the process Standardize > Uniform epoch time:
Interpolation method = spline
Check Overwrite input files
- Run the pipeline
A new folder SubjectCMC_notch_high_abs, without the 'raw' indication, is now created. It includes 192 individual epochs (24 trials x 8 1-s epochs each). The epochs that overlap with a "bad" event are also marked as bad, as shown with an exclamation mark in a red circle (). These bad epochs will be automatically ignored by the Process1 and Process2 tabs, and excluded from all further processing.
Comparison with FieldTrip
The figures below show the EMG and MRC21 channels (a MEG sensor over the right motor cortex, contralateral to the left wrist) from epoch #1.1, in Brainstorm (left) and in the FieldTrip tutorial (right).
Coherence: EMG x MEG
Let's compute the magnitude-squared coherence (MSC) between the left EMG and the MEG channels.
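Before going through the GUI steps below, here is conceptually what this process estimates: the magnitude-squared coherence |Sxy(f)|^2 / (Sxx(f) * Syy(f)), with the auto- and cross-spectra computed on 0.5-s windows (50% overlap) and averaged across all good 1-s epochs. The following Matlab sketch illustrates this for one EMG/MEG channel pair (illustration only, not the Brainstorm implementation; emgEpochs and megEpochs are assumed to be [nEpochs x nTime] matrices and fs the sampling rate):

    % Conceptual MSC estimation with cross-spectra averaged across epochs
    winLen = round(0.5 * fs);   nOver = round(winLen / 2);
    Sxy = 0;  Sxx = 0;  Syy = 0;
    for iEp = 1:size(emgEpochs, 1)
        [pxy, f] = cpsd(emgEpochs(iEp,:), megEpochs(iEp,:), hann(winLen), nOver, [], fs);
        Sxy = Sxy + pxy;                                                        % cross-spectrum
        Sxx = Sxx + pwelch(emgEpochs(iEp,:), hann(winLen), nOver, [], fs);      % EMG auto-spectrum
        Syy = Syy + pwelch(megEpochs(iEp,:), hann(winLen), nOver, [], fs);      % MEG auto-spectrum
    end
    msc = abs(Sxy).^2 ./ (Sxx .* Syy);     % magnitude-squared coherence spectrum
    plot(f, msc); xlim([0 80]); xlabel('Frequency (Hz)'); ylabel('MSC');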
In the Process1 box, drag and drop the Left (192 files) trial group.
Select the process Connectivity > Coherence 1xN [2023]:
Time window = Check All file
Source channel = EMGlft
Sensor types = MEG
Do not check Include bad channels
Select Magnitude squared coherence as Connectivity metric
Select Fourier transform as Time-freq decomposition method
Click in the [Edit] button to set the parameters of the Fourier transform:
Select Matlab's FFT defaults as Frequency definition
FT window length = 0.5 s
FT window overlap = 50%
Highest frequency of interest = 80 Hz
Time resolution = None
Estimate & save = across combined files/epochs
More details on the Coherence process can be found in the Connectivity Tutorial.
Add the process File > Add tag with the following parameters:
Tag to add = MEG sensors
Select Add to file name
- Run the pipeline:
Double-click on the resulting data node MSCoh-stft: EMGlft (185 files) | MEG sensors to display the MSC spectra. Click on the maximum peak in the 15 to 20 Hz range, and press Enter to display the spectrum from the selected sensor in a new window. This spectrum is that of channel MRC21, and shows a prominent peak at 17.58 Hz. Use the frequency slider (under the Time panel) to explore the MSC output across frequencies.
Right-click on the spectrum and select 2D Sensor cap for a topographical representation of the magnitude of the coherence results across the sensor array. You may also use the shortcut Ctrl-T. The sensor locations can be displayed with a right-click and by selecting Channels > Display sensors from the contextual menu (shortcut Ctrl-E).
We can now average the magnitude of the MSC across the beta band (15-20 Hz).
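In terms of the spectrum computed in the sketch above, this grouping is simply an average over the frequency bins falling inside the band (illustration only, reusing the hypothetical msc and f variables):

    % Average the MSC over the 15-20 Hz band: one value per coherence spectrum
    iBand   = (f >= 15) & (f <= 20);
    mscBeta = mean(msc(iBand));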
In the Process1 box, select the new mscohere file.
Run process Frequency > Group in time or frequency bands:
Select Group by frequency bands
Type cmc_band / 15, 20 / mean in the text box.
The resulting MSCoh-stft:...|tfbands node contains one MSC value for each sensor (the MSC average in the 15-20 Hz band). Right-click on the file to display the 2D or 3D topography of the MSC beta-band measure.
Higher MSC values between the EMG signal and the MEG sensor signals map over the contralateral set of central sensors in the beta band. Sensor-level connectivity can be ambiguous to interpret anatomically, though. We will now map the magnitude of EMG-MEG coherence across the brain (MEG sources).
Source estimation
MRI segmentation
We first need to extract the cortical surface from the T1 MRI volume we imported at the beginning of this tutorial. CAT12 is a Brainstorm plugin that will perform this task in about 30-60 minutes.
Switch back to the Anatomy view of the protocol ().
Right-click on the MRI () > MRI segmentation > CAT12:
Number of vertices: 15000
Anatomical parcellations: Yes
Cortical maps: No
Keep the low-resolution central surface selected as the default cortex (central_15002V). This surface is the primary output of CAT12, and is shown half-way between the pial envelope and the grey-white interface (more information). The head surface was recomputed during the process and duplicates the previous surface obtained above: you can either delete one of the head surfaces or ignore this point for now.
For quality control, double-click on the head and central_15002V surfaces to visualize them in 3D.
Head models
We will perform source modeling using a distributed model approach for two different source spaces: the cortex surface and the entire MRI volume. Forward models are called head models in Brainstorm. They account for how neural electrical currents produce magnetic fields captured by sensors outside the head, considering the electromagnetic properties and geometry of head tissues, independently of actual empirical measurements (more information). A distinct head model is required for the cortex surface and head volume source spaces.
Cortical surface
Go back to the Functional data view of the database.
Right-click on the channel file of the imported epoch folder > Compute head model.
Comment = Overlapping spheres (surface)
Source space = Cortex surface
Forward model = MEG Overlapping spheres.
Whole-head volume
Right-click on the channel file again > Compute head model.
Comment = Overlapping spheres (volume)
Source space = MRI volume
Forward model = Overlapping spheres.
Select Regular grid and Brain
Grid resolution = 5 mm
The Overlapping spheres (volume) head model is now added to the database explorer. The green color of the name indicates this is the default head model for the current folder: you can switch to any other available head model by double-clicking on its name.
Noise covariance
The recommendation for MEG source imaging is to extract basic noise statistics from empty-room recordings. When not available, as here, resting-state data can be used as proxies for MEG noise covariance. We will use a segment of the MEG recordings, away from the task and major artifacts: 18s-29s.
Right-click on the clean continuous file > Noise covariance > Compute from recordings.
Right-click on the Noise covariance () > Copy to other folders.
Inverse models
We will now compute three inverse models, with different source spaces: cortex surface with constrained dipole orientations (normal to the cortex), cortex surface with unconstrained orientation, and MRI volume (more information).
Cortical surface
Right-click on Overlapping spheres (surface) > Compute sources:
Minimum norm imaging
Current density map
Constrained: Normal to the cortex
Comment = MN: MEG (surface)
Repeat the previous step, but this time select Unconstrained in the Dipole orientations field.
Whole-head volume
Right-click on the Overlapping spheres (volume) > Compute sources:
Current density map
Unconstrained
Comment = MN: MEG (volume)
Three imaging kernels () are now available in the database explorer. Note that each trial is associated with three source links (). The concept of imaging kernels is explained in Brainstorm's main tutorial on source mapping.
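The kernel files do not store source time series: a source link simply multiplies the imaging kernel with the sensor recordings on the fly. A sketch of this operation, using hypothetical variable names (this shows the concept, not the exact Brainstorm code):

    % On-the-fly source estimation from an imaging kernel
    % ImagingKernel: [nSources x nMEG] matrix from the inverse model
    % F            : [nChannels x nTime] sensor recordings of one epoch
    % iMEG         : indices of the MEG channels used by the inverse model
    sourceTs = ImagingKernel * F(iMEG, :);   % [nSources x nTime] source time series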
Coherence: EMG x Sources
We can now compute the coherence between the EMG signal and all brain source time series, for each of the source models considered. Let's start with the cortical surface/constrained model.
To select the source maps to include in the coherence estimation, click on the Search Database button (), and select New search. Set the parameters as shown below, and click on Search.
- This creates a new tab in the database explorer, showing only the files that match the search criteria.
Click the Process2 tab at the bottom of the main Brainstorm window.
Files A: Drag-and-drop the Left (192 files) group, select Process recordings ().
Files B: Drag-and-drop the Left (192 files) group, select Process sources ().
- Objective: Extract from the same files both the EMG recordings (Files A) and the sources time series (Files B), then compute the coherence measure between these two categories of time series. Note that the blue labels over the file lists indicate that there are 185 "good" files (7 bad epochs).
Select the process Connectivity > Coherence AxB [2023]:
Time window = All file
Source channel (A) = EMGlft
Uncheck Use scouts (B)
Check Flatten unconstrained source orientations
Click in the [Edit] button of the PCA options:
Select across all epochs/files as PCA method
PCA time window = All file
Check Remove DC offset
Baseline = All file
Ignore the Scout function options as we are not using scouts
Select Magnitude squared coherence as Connectivity metric
Select Fourier transform as Time-freq decomposition method
Click in the [Edit] button to set the parameters of the Fourier transform:
Select Matlab's FFT defaults as Frequency definition
FT window length = 0.5 s
FT window overlap = 50%
Highest frequency of interest = 80 Hz
Time resolution = None
Estimate & save = across combined files/epochs
Add the process File > Add tag:
Tag to add = (surface)(Constr)
Select Add to file name
- Run the pipeline
Repeat the steps above to compute the EMG-sources coherence for the other source models: surface/unconstrained and volume:
Edit the search criteria accordingly: Right-click on the search tab > Edit search.
- This updates automatically the file selection in the Process2 tab.
Select the processes as indicated above and update the file tag according to the type of sources.
The option Flatten unconstrained source orientations has no effect on constrained sources. However, when used with unconstrained sources, the sources are flattened before computing coherence. If the option is unchecked for unconstrained sources, coherence is computed between the sensor and each of the three orientations per source, and the resulting three coherence spectra are then collapsed into one by taking the maximum value for each frequency bin. Find more details in the connectivity tutorial.
Close the search tab. If the 3 new connectivity files () are not featured in the database explorer: refresh the database display by pressing [F5] or by clicking again on the selected button "Functional data".
Surface
Double-click on the 1xN connectivity files for the two (surface) source spaces (constrained and unconstrained source orientations) to visualize the cortical maps. See main tutorial Display: Cortex surface for all available options. Pick the cortical location and frequency with the highest coherence value.
In the Surface tab: Smooth=30%, Amplitude=0%.
To visually compare different cortical maps, set the colormap range manually (e.g. [0, 0.07])
Explore the coherence spectra with the frequency slider
The highest coherence value is located at 14.65 Hz, in the right primary motor cortex (precentral gyrus). To observe the coherence spectrum at a given location: right-click on the cortex > Source: Power spectrum.
The coherence maps for constrained (top) and unconstrained (bottom) orientations are qualitatively similar in terms of peak location and frequency. The unconstrained source map appears smoother because of the maximum aggregation across all three current directions at each brain location, as explained below. These results are similar to those obtained with the FieldTrip tutorial.
To obtain the 3D coordinates of the peak coherence location: right-click on the figure > Get coordinates. Then click on the right motor cortex with the crosshair cursor. Keep note of these coordinates for later comparison with the whole-head volume results below.
Volume
Double-click the 1xN connectivity file to display the (volume) source space.
- Set the frequency to 14.65 Hz, and the data transparency to 20% (Surface tab).
- Find the peak in the head volume by navigating in all 3 dimensions, or use the coordinates from the surface results as explained above.
Right-click on the figure > Anatomical atlas > None. The coherence value under the cursor is then shown on the top-right corner of the figure.
=== Method for constrained sources ===
For cortical and orientation-constrained sources, each vertex in the source grid is associated with 1 time series. As such, when coherence is computed with the EMG signal (also consisting of one time series), this results in 1 coherence spectrum per vertex. In other words, for each frequency bin, we obtain one coherence brain map.
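To make this concrete, here is a minimal, self-contained MATLAB sketch (synthetic data, mscohere from the Signal Processing Toolbox; not Brainstorm's actual implementation) that produces one magnitude-squared coherence spectrum per constrained source, using the 0.5 s / 50% overlap Fourier settings from the pipeline above:

{{{
% Illustration only: one MSC spectrum per (constrained) source time series.
% All signals are synthetic stand-ins; Brainstorm performs this internally.
fs  = 1200;                          % assumed sampling rate [Hz]
t   = 0:1/fs:10;                     % 10 s of data
emg = randn(1, numel(t));            % stand-in for the EMG reference channel
src = randn(5, numel(t));            % stand-in for 5 constrained source time series
winLen  = round(0.5 * fs);           % 0.5 s Fourier windows
overlap = round(0.5 * winLen);       % 50% overlap
[~, f]  = mscohere(emg, src(1,:), hann(winLen), overlap, winLen, fs);
cohMap  = zeros(size(src,1), numel(f));          % [nSources x nFreq]
for iSrc = 1:size(src,1)
    cohMap(iSrc,:) = mscohere(emg, src(iSrc,:), hann(winLen), overlap, winLen, fs)';
end
% At any frequency bin iFreq, cohMap(:,iFreq) is a coherence map over the source grid.
}}}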
=== Method for unconstrained sources ===
In the case of orientation-unconstrained sources (either surface or volume), each vertex (source location) is associated with 3 time series, one for each of the X, Y and Z directions. The approach taken in this tutorial consists in first reducing these three orientations to a single one with the PCA method. This results in a source model similar to that obtained with orientation-constrained sources, and coherence is then computed as described above for constrained sources.
An alternative approach for addressing the 3-dimensional nature of unconstrained sources is to compute coherence between the EMG signal (one time series) and each of the 3 time series (one per source orientation: x, y and z) associated with each source location, resulting in 3 coherence spectra. These spectra are then collapsed into a single coherence spectrum by keeping, for each frequency bin, the maximum coherence value across the three directions. The choice of the maximum statistic is empirical: it is not rotation invariant (if the specifications of the NAS/LPA/RPA fiducials change, the x, y, z axes also change, which may influence connectivity measures quantitatively). Nevertheless, our empirical tests showed this option produces smooth and robust (to noise) connectivity estimates.
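As a minimal sketch (illustrative array shapes only), collapsing the three per-orientation coherence spectra with the maximum amounts to:

{{{
% Illustration only: collapse per-orientation coherence spectra with a maximum.
% cohXYZ: [nSources x 3 x nFreq], one spectrum per source and per orientation (x, y, z).
nSources = 4;  nFreq = 301;
cohXYZ  = rand(nSources, 3, nFreq);          % stand-in values
cohFlat = squeeze(max(cohXYZ, [], 2));       % [nSources x nFreq]: max across x/y/z at each bin
}}}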
Below is a comparison of the coherence between the EMG sensor and unconstrained surface sources, using PCA flattening in the source space (left) vs. flattening of the coherence spectra with the max across dimensions (right).
== Coherence: EMG x Scouts ==
We have now obtained coherence spectra at each of the 15002 brain source locations. This is a very large amount of data to sift through. We therefore recommend further reducing the dimensionality of the source space by using a surface or volume parcellation scheme. In Brainstorm vernacular, this is achieved with an atlas consisting of scout regions. See the Scout tutorial for detailed information about atlases and scouts in Brainstorm. To compute scout-wise coherence, two parameters must be specified to indicate how the data is aggregated within each scout:
 1. The scout function (for instance, the mean), and
 1. Whether the scout aggregation is performed before or after the coherence computation.
=== Method for constrained sources ===
Scout function applied before: The scout function (e.g., mean) is applied to all the source time series in the scout, resulting in 1 time series per scout. This scout time series is then used to compute coherence with the reference signal (here, the EMG), resulting in 1 coherence spectrum per scout.
Scout function applied after: With this option, coherence is computed between the reference signal (here, the EMG) and each source time series in the scout, resulting in N coherence spectra, with N being the number of sources in the scout. Finally, the scout function is applied to the coherence spectra of all the sources within the scout. This is a considerably greedier option than the "before" option above, as it computes coherence between the EMG signal and each of the ~15,000 elementary source signals, and therefore requires more computing resources; see the introduction for an example.
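The following sketch (synthetic data, mscohere from the Signal Processing Toolbox; not Brainstorm's code) contrasts the two options for a single scout with constrained sources:

{{{
% Illustration only: "before" vs "after" scout aggregation (scout function = mean).
fs  = 1200;  t = 0:1/fs:10;
emg      = randn(1, numel(t));       % stand-in EMG reference signal
scoutSrc = randn(20, numel(t));      % stand-in: 20 source time series belonging to one scout
winLen   = round(0.5 * fs);  overlap = round(0.5 * winLen);

% Scout function applied BEFORE: average the time series, then compute coherence once
[cohBefore, f] = mscohere(emg, mean(scoutSrc, 1), hann(winLen), overlap, winLen, fs);

% Scout function applied AFTER: one coherence spectrum per source, then average the spectra
cohAll = zeros(size(scoutSrc,1), numel(f));
for iSrc = 1:size(scoutSrc,1)
    cohAll(iSrc,:) = mscohere(emg, scoutSrc(iSrc,:), hann(winLen), overlap, winLen, fs)';
end
cohAfter = mean(cohAll, 1);          % 1 coherence spectrum for the scout
}}}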
=== Method for unconstrained sources ===
When flattening is performed on unconstrained sources, the coherence computation is carried out as described in the previous section. However, if the sources are not flattened before computing coherence:
Scout function applied before: The scout function (e.g., mean) is applied separately for each orientation to the elementary source time series in the scout, resulting in 3 time series per scout, one per orientation (x, y, z). These scout time series are then used to compute coherence with the reference signal (here, the EMG), and the resulting coherence spectra are aggregated across the x, y and z dimensions with the max function, as shown previously, to obtain 1 coherence spectrum per scout.
Scout function applied after: With this option, coherence is computed between the reference signal (here, the EMG) and each elementary source (number of sources in each scout x 3 orientations). The coherence spectra for each elementary source are then aggregated across the x, y and z dimensions, and finally the scout function is applied to the resulting spectra of all the sources within the scout. This is a considerably greedier option than the "before" option above, as it computes coherence between the EMG signal and each of the ~45,000 elementary source signals, and therefore requires more computing resources; see the introduction for an example.
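For unconstrained sources without flattening, the order of operations in the "after" option can thus be summarized as: coherence for each elementary source and orientation, maximum across orientations, then scout function across sources. A schematic sketch with illustrative array shapes:

{{{
% Illustration only: scout-level coherence from unconstrained sources ("after" option).
% cohScout3D: [nScoutSources x 3 x nFreq], coherence of EMG with each source and orientation.
nScoutSources = 20;  nFreq = 301;
cohScout3D   = rand(nScoutSources, 3, nFreq);     % stand-in values
cohPerSource = squeeze(max(cohScout3D, [], 2));   % 1) collapse the x/y/z orientations with a max
cohScout     = mean(cohPerSource, 1);             % 2) apply the scout function (mean) across sources
}}}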
Here, we will compute the coherence from scouts, using mean as the scout function with the before option, and the Schaefer 100 parcellation atlas applied to the orientation-constrained source maps.
 * Use Search Database to select the Left trials with their respective (surface)(Constr) source maps, as shown in the previous section.
 * In the Process2 tab, drag the Left trial group into both the Files A and Files B boxes. Select Process recordings for Files A, and Process sources for Files B.
 * Open the Pipeline editor:
 * Add the process Connectivity > Coherence AxB [2023] with the following parameters:
  * Time window = All file
  * Source channel (A) = EMGlft
  * Check Use scouts (B)
  * From the menu on the right, select Schaefer_100_17net
  * Select all the scouts
  * Scout function: Apply: before
  * Scout function: mean
  * Select Magnitude squared coherence as Connectivity metric
  * Select Fourier transform as Time-freq decomposition method
  * Click on the [Edit] button to set the parameters of the Fourier transform:
   * Select Matlab's FFT defaults as Frequency definition
   * FT window length = 0.5 s
   * FT window overlap = 50%
   * Highest frequency of interest = 80 Hz
   * Time resolution = None
  * Estimate & save = across combined files/epochs
 * Add the process File > Add tag with the following parameters:
  * Tag to add = (surface)(Constr)
  * Select Add to file name
 * Run the pipeline.
Double-click on the new file: the coherence spectra are not displayed on the cortex; they are plotted for each scout. The 1xN connectivity file can also be displayed as an image, as shown below:
Note that at 14.65 Hz, the two highest peaks correspond to the SomMotA_4 R and SomMotA_2 R scouts, both located over the right primary motor cortex.
The choice of brain parcellation scheme is up to the user; by default, we recommend the Schaefer 100 atlas used here.
== Additional documentation ==
=== Articles ===
 * Conway BA, Halliday DM, Farmer SF, Shahani U, Maas P, Weir AI, et al. Synchronization between motor cortex and spinal motoneuronal pool during the performance of a maintained motor task in man. The Journal of Physiology. 1995 Dec 15;489(3):917–24.
 * Kilner JM, Baker SN, Salenius S, Hari R, Lemon RN. Human Cortical Muscle Coherence Is Directly Related to Specific Motor Parameters. J Neurosci. 2000 Dec 1;20(23):8838–45.
 * Liu J, Sheng Y, Liu H. Corticomuscular Coherence and Its Applications: A Review. Front Hum Neurosci. 2019 Mar 20;13:100.
 * Sadaghiani S, Brookes MJ, Baillet S. Connectomics of human electrophysiology. NeuroImage. 2022 Feb;247:118788.
=== Tutorials ===
 * Tutorial: Functional connectivity
 * Tutorial: Source estimation
 * Tutorial: Volume source estimation
 * Tutorial: Scouts
 * Tutorial: Connectivity graphs
=== Scripting ===
The following script from the Brainstorm distribution reproduces the analysis presented in this tutorial page: brainstorm3/toolbox/script/tutorial_coherence.m