Yokogawa/KIT tutorial: Median nerve stimulation
Authors: Francois Tadel, Yasuhiro Haruta, Ei-ichi Okumura, Takashi Asakawa
This tutorial introduces some concepts that are specific to the management of MEG/EEG files recorded with Yokogawa/KIT systems in the Brainstorm environment.
Note that the operations used here are not detailed; the goal of this tutorial is not to introduce Brainstorm to new users. For in-depth explanations of the interface and theoretical foundations, please refer to the introduction tutorials.
License
This tutorial dataset (MEG/EEG and MRI data) remains proprietary of Yokogawa Electric Corporation, Japan. Its use and transfer outside the Brainstorm tutorial, e.g. for research purposes, is prohibited without written consent from Yokogawa Electric Corporation.
Description of the experiment
This tutorial is based on a simple median nerve stimulation experiment:
- Right median nerves were percutaneously stimulated using monophasic square-wave impulses with a duration of 0.3 ms at 2.8 Hz.
- The stimulus intensity was set at the motor threshold to evoke mild twitches of the thumb.
- The stimulus onsets were recorded as low-to-high TTL with a trigger channel labeled as "Trigger01".
- The total number of stimuli in the dataset was 336.
- The MEG data was recorded with a sampling rate of 2000 Hz and a bandpass filter at 0.16-500 Hz with a Yokogawa 160 axial gradiometer system at Yokogawa Electric Corporation, Kanazawa, Japan.
- The EEG data was recorded with a NIHON KOHDEN system simultaneously with the MEG recordings.
Export recordings from Meg160
To import Yokogawa/KIT data files (.con, .raw, .ave) into Brainstorm, a data export step is required beforehand. The export function is available in Meg160, the data analysis software supplied with most Yokogawa/KIT systems.
The dataset used in this tutorial has already been exported using this procedure. It is described here so that later you can export your own recordings to Brainstorm.
If your software does not support the functions used below, please contact Yokogawa via http://www.yokogawa.com/me/index.htm
Export the digitizer file
- If a data file and the corresponding digitizer file are ready for use, there is no additional operation required.
- If no digitizer file is available, you need first to extract the head surface points:
In Meg160, select the menu: File > Import and Export > BESA Text Export > Surface Point File
- Check that the fiducial points are properly pointed, and then click the [OK] button
- A surface point file (.sfp) is automatically created. It includes the position data of: the fiducial points, the marker points and other points describing the head shape
Full head shape in the digitizer file
To achieve a precise MRI registration, or to warp the default anatomy, you should collect 100 to 200 points describing the entire head shape in addition to the 8 Yokogawa/KIT standard stylus points. To import additional digitized points, follow the instructions below:
- When digitizing head points:
- Pick the 8 standard stylus points
- Pick an additional 100 to 200 head points, so that the selected points cover the entire head
- Edit the digitizer label file (DigitizeLabel.txt) used in "Third-party export" so that it defines the 8 standard points and the additional points.
- Note that the number of additional labels should be >= the number of digitized points.
- Any names can be used for the new labels, as long as they are not duplicated.
- The following is an example (a small script to generate such a file is sketched at the end of this section):
fidt9 HPI_1 HPI_4 HPI_3 HPI_5 HPI_2 fidt10 fidnz ETC001 ETC002 ETC003 ... ETC[nnn]
Where [nnn]+8 should be >= the number of digitized points.
- Use the modified digitizer label file with the digitizer data in the "Third-party export" function.
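If you need many extra labels (ETC001 ... ETC[nnn]), writing them by hand is tedious. The MATLAB sketch below generates such a list. It assumes that the label file simply lists one label per line and uses a hypothetical output file name, so compare with an existing DigitizeLabel.txt from your system before using it.

% Minimal sketch: write a digitizer label file with the 8 standard points + 200 extra labels.
% ASSUMPTION: the file lists one label per line (check an existing DigitizeLabel.txt).
nExtra    = 200;                                    % number of additional head-shape points
stdLabels = {'fidt9','HPI_1','HPI_4','HPI_3','HPI_5','HPI_2','fidt10','fidnz'};
fid = fopen('DigitizeLabel_headshape.txt', 'w');    % hypothetical output file name
fprintf(fid, '%s\n', stdLabels{:});                 % the 8 standard stylus points
for i = 1:nExtra
    fprintf(fid, 'ETC%03d\n', i);                   % ETC001, ETC002, ... ETC200
end
fclose(fid);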
Export the recordings
In Meg160, select the menu: File > Import and Export > Third-Party Export
- On some systems, this menu is named [BESA Binary Export]
- An operation panel for data export shows up
- When using digitizer data:
- Select [Digitizer]
- Enter the digitizer file in the [Point Filename] box (.txt file generally available under the corresponding "Scan" folder)
- Enter the label file in the [Label Filename] box (DigitizeLabel.txt generally located in the "C:\Meg160\AppInfo" folder)
- When using surface point data instead:
- Select [Surface Point File]
- Enter the surface point file (.sfp) created previously.
- Enter an output file name in [Third-party Export Dataset]
- Click on [Create Export File]
- Use this exported file in Brainstorm
Alternative
Some older versions of the Yokogawa/KIT software do not include this "Third-party export" menu. In this case, you can pass the digitizer information to Brainstorm with three additional files, saved in the same folder as the .sqd file containing the MEG recordings you want to process. The folder must contain only one file of each type, so you cannot save multiple runs or subjects in the same folder: create one subfolder per acquisition run.
*_Marker1_* or *.mrk: File with extension .mrk or .sqd with the HPI coils in MEG device coordinates
*_Points.txt or *.elp: Polhemus file with the fiducials and HPI coils in digitizer coordinates.
*_HS.txt or *.hsp: Polhemus file with the head shape points in digitizer coordinates.
Download and installation
- Requirements: You have already followed all the introduction tutorials and you have a working copy of Brainstorm installed on your computer.
Go to the Download page of this website, and download the file: sample_yokogawa.zip
Unzip it in a folder that is not in any of the Brainstorm folders (program or database folder)
- Start Brainstorm (Matlab scripts or stand-alone version)
Select the menu File > Create new protocol. Name it "TutorialYokogawa" and select the options:
"No, use individual anatomy",
"No, use one channel file per condition".
Import the anatomy
Right-click on the TutorialYokogawa folder > New subject > Subject01
- Leave the default options you set for the protocol
Right-click on the subject node > Import anatomy folder:
Set the file format: "FreeSurfer folder"
Select the folder: sample_yokogawa/anatomy
- Number of vertices of the cortex surface: 15000 (default value)
Click on the link "Click here to compute MNI transformation".
- Set the 3 required fiducial points (indicated in MRI coordinates):
- NAS: x=128, y=227, z=93
- LPA: x=48, y=130, z=69
- RPA: x=214, y=130, z=76
At the end of the process, make sure that the file "cortex_15000V" is selected (downsampled pial surface, which will be used for the source estimation). If it is not, double-click on it to select it as the default cortex surface.
Access the recordings
Link the recordings
- Switch to the "functional data" view, the middle button in the toolbar above the database explorer.
Right-click on the subject folder > Review raw file:
- Select the file format: "MEG/EEG : Yokogawa/KIT"
Select the file: sample_yokogawa/data/SEF_000-export.con
Answer NO when asked to refine the registration using head points. In this dataset, we only have access to the positions of the electrodes and three additional markers on the forehead. The automatic registration doesn't work well in this case, so we are going to fix the registration manually.
- A figure is opened to show the current registration MRI/MEG. It is already quite good, but can be improved a bit manually. Close this figure.
- The new file "Link to raw file" lets you directly access the contents of the MEG/EEG recordings (a scripted equivalent of this step is sketched at the end of this section)
- The channel file "KIT channels" contains the name of the channels and the position of the corresponding sensors.
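For reference, this step can also be scripted with Brainstorm's bst_process interface. The sketch below is not the official tutorial script: the 'KIT' file-format string and the option names are assumptions based on typical generated pipelines, so compare with the output of the Pipeline editor menu "Generate .m script".

% Sketch: link the Yokogawa/KIT continuous file to the database.
% Requires a running Brainstorm session (e.g. start it with: brainstorm nogui).
RawFile = 'sample_yokogawa/data/SEF_000-export.con';   % adjust to your local path
sFilesRaw = bst_process('CallProcess', 'process_import_data_raw', [], [], ...
    'subjectname',    'Subject01', ...
    'datafile',       {RawFile, 'KIT'}, ...   % ASSUMED format identifier for Yokogawa/KIT
    'channelreplace', 1, ...
    'channelalign',   0);   % no automatic registration: we adjust it manually below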
Prepare the channel file
- The recordings contain signals coming from different types of electrodes:
- 160 MEG channels
- 12 MEG references
- 14 Trigger channels
- 41 EEG channels
- 2 EOG channels: EO1 and EO2
- 1 ECG channel: EKG+
- 1 ground of the EEG amplifier: E
- Not all the types of channels are properly identified in Brainstorm. We need to redefine them manually to get correct groups of sensors (a scripted version of this step is sketched at the end of this section).
Right-click on the channel file > Edit channel file:
Channel EO1 (208) and EO2 (209): Change the type to EOG
Channel EKG+ (214): Change the type to ECG
Channel E (231): Change the type to MISC
- Close the figure and accept to save the modifications
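In a script, the same channel types could be set with the "Set channels type" process, as sketched below; the option names ('sensortypes', 'newtype') are assumed from typical generated pipelines.

% Sketch: fix the channel types on the linked raw file (sFilesRaw from the previous sketch).
sFilesRaw = bst_process('CallProcess', 'process_channel_settype', sFilesRaw, [], ...
    'sensortypes', 'EO1,EO2', 'newtype', 'EOG');    % eye movements
sFilesRaw = bst_process('CallProcess', 'process_channel_settype', sFilesRaw, [], ...
    'sensortypes', 'EKG+', 'newtype', 'ECG');       % heartbeats
sFilesRaw = bst_process('CallProcess', 'process_channel_settype', sFilesRaw, [], ...
    'sensortypes', 'E', 'newtype', 'MISC');         % EEG amplifier ground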
Refine the MRI registration
Right-click on the channel file > MRI registration > Edit... (EEG)
- The white points are the electrodes, the green points are the additional digitized head points. To display the label of the electrodes, click on the [LABEL] button in the toolbar. To see what the other buttons in the toolbar are doing and how to use them, leave your mouse over them for a few seconds and read the description.
Now try to manipulate the position of the EEG+MEG sensors using rotations and translations only (no "resize" or individual electrodes adjustments). The objective is to have all the points close to the surface and the three forehead points inside the little peaks on the surface (due to markers in the MRI).
The rotation+translation are going to be applied both to the EEG and the MEG sensors. After you are done with this solid registration part, you can click on the button "Project electrodes on scalp surface", it will help for the source modeling later. The green points (digitized) stay in place, the white points (electrodes) are now projected on the skin of the subject.
- If you feel like you didn't do this correctly, close the figure and cancel the modifications, then try again. It takes a few trials to get used to this rotation/translation interface.
Click on [OK] when done.
Answer YES to save the modifications.
Answer YES again to apply the solid transformation (rotation+translation) to the MEG sensors.
Before manual registration:
After manual registration:
Read the stimulation information
Right-click on the "Link to raw file" > Trigger > Display time series
- In the Record tab, switch to a column view of the sensors (first button in the toolbar)
You can see that all the trigger lines are flat except for "Trigger01", which contains the information of the electric stimulation. We are going to read this trigger channel as a list of events.
In the Record tab, menu File > Read events from channel (a scripted equivalent is sketched at the end of this section).
Event channel = Trigger01
Option selected "TTL": detect peaks of 5V/12V on an analog channel.
Do not select the option Accept zeros as trigger values
Check that the peaks of the triggers channel have been correctly identified, then close this figure.
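The conversion of the trigger channel into events can also be scripted, as in the hedged sketch below: the process name 'process_evt_read' and its options are taken from typical Brainstorm pipelines, but the numeric value corresponding to the "TTL" detection mode is an assumption to verify against a generated script.

% Sketch: read the analog trigger channel and create one event per detected TTL pulse.
sFilesRaw = bst_process('CallProcess', 'process_evt_read', sFilesRaw, [], ...
    'stimchan',  'Trigger01', ...
    'trackmode', 3, ...   % ASSUMED: 3 = "TTL: detect peaks of 5V/12V on an analog channel"
    'zero',      0);      % do not accept zeros as trigger values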
Pre-processing
Evaluate the recordings
- Drag and drop the "Link to raw file" into the Process1 list.
Select the process "Frequency > Power spectrum density", configure it as follows:
Double-click on the new PSD file to display it.
- The lines at the top represent the EEG electrodes, and the lines at the bottom the MEG sensors. If you want clearer plots, you can calculate the spectrum for the two types of sensors separately, by running the process "Power spectrum density" twice, once with sensor types = "MEG" and once with "EEG", instead of running it on both at the same time as we did.
- Observations (below 250Hz):
- Peak around 11Hz: Alpha waves from the subject's brain
- Peaks at 60Hz, 120Hz, 180Hz, 240Hz on EEG + MEG: Power lines (60Hz+harmonics)
- Peaks at 35Hz, 65Hz, 70Hz, 183Hz, 197Hz on MEG only: Noise coming from an unknown source
MEG sensor LC11 (in red) appears to have a higher level of noise than all the other MEG sensors; we will check this when reviewing the MEG recordings and probably tag it as a bad channel.
A quick review of the EEG and EOG signals shows that there are a lot of eye movements in these recordings. The subject is moving the eyes (blinks and slow movements), maybe because there was no fixation cross in this experiment. We will apply at least a high-pass filter to make the signals easier to process (we are not interested in very low frequencies in this experiment).
Frequency filters
- In Process1, select the "Link to raw file".
Select process Pre-process > Band-pass filter: Frequency=[0.5, 200]Hz, Sensors=MEG,EEG
Add process Pre-process > Notch filter: Frequencies=[60 120 180]Hz, Sensors=MEG,EEG (these filtering steps are sketched as a script after this list)
- In Process1, select the filtered file "Raw | Band | Notch".
Run the process "Frequency > Power spectrum density", with the same options as before.
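Here is a MATLAB sketch of the spectral evaluation and filtering steps. The option names ('win_length', 'win_overlap', 'highpass', 'lowpass', 'freqlist') are assumed from typical generated pipelines; double-check them against the output of "Generate .m script" in the Pipeline editor.

% Sketch: power spectrum (Welch), then band-pass and notch filtering of the continuous file.
bst_process('CallProcess', 'process_psd', sFilesRaw, [], ...
    'timewindow', [], 'win_length', 10, 'win_overlap', 50, 'sensortypes', 'MEG, EEG');
sFilesClean = bst_process('CallProcess', 'process_bandpass', sFilesRaw, [], ...
    'sensortypes', 'MEG, EEG', 'highpass', 0.5, 'lowpass', 200);
sFilesClean = bst_process('CallProcess', 'process_notch', sFilesClean, [], ...
    'sensortypes', 'MEG, EEG', 'freqlist', [60 120 180]);
% Evaluate the spectrum again on the filtered file "Raw | band | notch":
bst_process('CallProcess', 'process_psd', sFilesClean, [], ...
    'timewindow', [], 'win_length', 10, 'win_overlap', 50, 'sensortypes', 'MEG, EEG');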
Review the recordings
MEG: Default montages
Pre-defined selections of sensors are available to help with reviewing the MEG recordings.
Right-click on the filtered file "Raw|band|notch" > MEG > Display time series.
- Display the channels in columns (first button in the toolbar of the Record tab).
- Check the list of montages available for this file: click on the drop-down menu in the Record tab.
Select the montage "KIT LC"
If you don't see all the "KIT..." entries in this list, load them manually:
Click on All > Edit montages...
- Click on the [Load montage] button
- Go to the folder: brainstorm3/toolbox/sensors/private
- Select the file format "MNE selection files (*.sel)"
Select the file mne_montage_yokogawa.sel
Click on [Save] to close the montage editor.
MEG: Bad channels
Click on the noisy LC11 sensor to select it (displayed in red)
Press the delete key or right-click in the figure > Channels > Mark selected as bad.
- Close this figure
EEG: Average reference
Right-click on the filtered file "Raw|band|notch" > EEG > Display time series.
Check the list of available EEG montages. In the Record tab, increase the length of the displayed time window to 10s. In the original raw file the channel TP9 looks suspicious, but it looks good in this filtered file.
In the Record tab, menu Artifacts > Re-reference EEG > "AVERAGE".
At the end, the window "Select active projectors" opens to show the new re-referencing projector. Just close this window. To get it back, use the menu Artifacts > Select active projectors.
Artifacts cleaning with ICA
Detect heartbeats and blinks
Right-click on the filtered file "Raw|band|notch" > EEG > Display time series.
In the Record tab, menu Artifacts > Detect heartbeat: channel EKG+
In the Record tab, menu Events > Detect eye blinks: channel EO2 (better signal than EO1)
In the Record tab, menu Events > Remove simultaneous: "cardiac" / "blink" / 250ms.
The heartbeats are correctly detected, but not the eye movements: there are not only blinks but also a lot of slow eye movements in these recordings. We will prefer an ICA approach rather than an SSP decomposition, which is better suited to recordings with only clean blinks (the detection steps are sketched as a script below).
More information about ICA.
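The detection steps can be scripted as sketched below; the process and option names are assumed from typical Brainstorm pipelines. The ICA decomposition itself is best run interactively, since the components to remove have to be selected by visual inspection.

% Sketch: detect heartbeats and blinks on the filtered continuous file (sFilesClean).
bst_process('CallProcess', 'process_evt_detect_ecg', sFilesClean, [], ...
    'channelname', 'EKG+', 'timewindow', [], 'eventname', 'cardiac');
bst_process('CallProcess', 'process_evt_detect_eog', sFilesClean, [], ...
    'channelname', 'EO2', 'timewindow', [], 'eventname', 'blink');
% Remove "cardiac" events detected within 250ms of a "blink" event:
bst_process('CallProcess', 'process_evt_remove_simult', sFilesClean, [], ...
    'remove', 'cardiac', 'target', 'blink', 'dt', 0.25, 'rename', 0);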
EEG: Heartbeats and eye movements
Right-click on the filtered file "Raw|band|notch" > EEG > Display time series.
In the Record tab, select the menu Artifacts > ICA components. Configure as illustrated below; make sure you do not select the option "Use existing SSP/ICA" (to ignore the average reference).
Identify the components that show topographies related to eye movements and whose time series are visibly correlated with the EOG signals. You should also be able to easily identify a component corresponding to the heartbeats.
Select all these components to remove them from the signals and click on [Save].
MEG: Heartbeats and eye movements
Right-click on the filtered file "Raw|band|notch" > MEG > Display time series.
In the Record tab, select the menu Artifacts > ICA components. Configure as illustrated below. You can limit the number of output components to 20 or 40; this should be enough to extract strong components like the ocular and cardiac artifacts. You may get more accurate results with a full decomposition, but it would take much longer.
Identify the components corresponding to the eye movements and the heartbeats.
Select these components to remove them from the signals and click on [Save].
Epoching and averaging
Import the recordings
In this experiment, the electric stimulation is sent with a frequency of 2.8Hz, meaning that the inter-stimulus interval is 357ms. We are going to import epochs of 300ms around the stimulation events.
Right-click on the filtered file "Raw|band|notch" > Import in database:
Check "Use events" and select "Trigger01", Epoch time: [-50, 250] ms, Apply SSP/ICA.
No need to remove the DC offset, since we have already applied a high-pass filter to the recordings (a scripted version of this import is sketched at the end of this section).
At the end, you are asked whether you want to ignore one epoch that is shorter than the others. This happens because the acquisition of the MEG signals was stopped less than 250ms after the last stimulus trigger was sent. Therefore, the last epoch cannot have the full [-50,250]ms time definition. This shorter epoch would prevent us from averaging all the trials easily. As we already have enough repetitions in this experiment, we can just ignore it.
Answer Yes to this question to discard the last epoch.
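A scripted version of the epoching step is sketched below; the option names are assumed from typical generated pipelines and may differ slightly from the actual tutorial script.

% Sketch: import [-50,250]ms epochs around each "Trigger01" event, applying the active SSP/ICA.
sEpochs = bst_process('CallProcess', 'process_import_data_event', sFilesClean, [], ...
    'subjectname', 'Subject01', ...
    'condition',   '', ...
    'eventname',   'Trigger01', ...
    'timewindow',  [], ...
    'epochtime',   [-0.050, 0.250], ...
    'createcond',  0, ...
    'ignoreshort', 1, ...   % discard the last epoch, which is shorter than the others
    'usessp',      1, ...   % apply the existing SSP/ICA projectors
    'baseline',    []);     % no DC offset correction: a high-pass filter was already applied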
Averaging
- Drag and drop all the Trigger01 trials to the Process1 tab.
Run the process Average > Average files: By trial group (folder average). A scripted version is sketched at the end of this section.
Review the average for the MEG.
Similar results for the EEG.
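Averaging sketch; the numeric codes for 'avgtype' and 'avg_func' are assumptions to verify with a script generated from the Pipeline editor.

% Sketch: average all the imported Trigger01 epochs (arithmetic mean, by trial group).
sAvg = bst_process('CallProcess', 'process_average', sEpochs, [], ...
    'avgtype',    5, ...   % ASSUMED: 5 = "By trial group (folder average)"
    'avg_func',   1, ...   % ASSUMED: 1 = arithmetic average: mean(x)
    'weighted',   0, ...
    'keepevents', 0);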
Source estimation
Head model
We are going to use a realistic head model; this requires calculating some additional surfaces for the subject, representing the inner skull and the outer skull. Go to the "Anatomy" view, right-click on the subject > Generate BEM surfaces (a scripted sketch of these steps is given at the end of this section).
Use 1922 vertices for each layer (default).
Go back to the "Functional data" view, right-click on the channel file > Compute head model.
Leave all the default options for the head model (cortex surface, MEG=Overlapping, EEG=OpenMEEG). Then leave all the OpenMEEG options to their defaults except for one: select the option "Use adjoint formulation".
- If the automatic download doesn't work, download OpenMEEG and install it manually (menu Help).
If the OpenMEEG calculation crashes, please refer to the OpenMEEG tutorial.
If you cannot get OpenMEEG to work, or if the results definitely do not make sense, try using a different forward model: "3-shell sphere". It's a spherical model, so it would perform better in the regions of the head that are close to the sphere. See the Head model tutorial.
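The BEM surfaces can be generated from a script as sketched below; the option names are assumed from typical generated pipelines. For the head model itself, the OpenMEEG options (layers, conductivities, adjoint formulation) are easiest to get right by generating the call from the Pipeline editor, so they are not reproduced here.

% Sketch: generate the three BEM surfaces with 1922 vertices per layer.
bst_process('CallProcess', 'process_generate_bem', [], [], ...
    'subjectname', 'Subject01', ...
    'nscalp', 1922, 'nouter', 1922, 'ninner', 1922, 'thickness', 4);
% Head model (cortex surface, MEG = overlapping spheres, EEG = OpenMEEG with adjoint formulation):
% use the Pipeline editor ("Generate .m script") to obtain the exact 'process_headmodel' call.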
Noise covariance matrix
- We will use the baseline of the single epochs to calculate the noise covariance matrix.
Right-click on the Trigger01 epochs group > Noise covariance > Compute from recordings.
Enter the pre-stimulus baseline interval: [-50, -10] ms (see the script sketch at the end of this section).
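Noise covariance sketch; the option names ('baseline', 'dcoffset', 'identity') are assumed from typical generated pipelines.

% Sketch: compute the noise covariance from the pre-stimulus baseline of all the epochs.
bst_process('CallProcess', 'process_noisecov', sEpochs, [], ...
    'baseline', [-0.050, -0.010], ...
    'dcoffset', 1, ...   % remove the DC offset of each epoch before computing the covariance
    'identity', 0, ...   % compute the full noise covariance matrix
    'copycond', 0, ...
    'copysubj', 0);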
Inverse model
Right-click on the head model > Compute sources.
Select wMNE and MEG. When asked, leave the default list of bad channels (31: channel LC11)
Repeat the same operation for EEG. It is better to study the two modalities separately, because the methods for combining MEG and EEG are not working well yet.
This operation creates a shared inversion kernel and one source link for each block of recordings in the folder. If you are not familiar with those concepts, please refer to the Source estimation tutorial.
Display the sources for the MEG (top) and the EEG (bottom).
Regions of interest
- Create two scouts S1 and S2 to represent the primary and secondary somatosensory cortex of the left hemisphere.
- Open the source file for the MEG average and the corresponding recordings.
- Go to 19ms, adapt the amplitude threshold and the colormap to see only a focal source.
- Create a scout at the center of the activated region.
- Grow the scout to about 20 vertices and rename it to "S1".
- Go to 50ms and repeat the same operation for S2. In this specific case, S2 does not appear as a clear independent region, as we still see stronger activities in more superior regions of the brain. So pick one activated region around the anatomical location of the secondary somatosensory cortex (at the very bottom of the post-central gyrus). See the picture below.
For more information on the scouts, please refer to the scouts tutorial.
Then plot the activity for the different files we calculated.
Scouts time series for the MEG (left) and the EEG (right).
Scripting
The following script from the Brainstorm distribution reproduces the analysis presented in this tutorial page: brainstorm3/toolbox/script/tutorial_yokogawa.m