Yokogawa/KIT tutorial
Authors: Francois Tadel, Ei-ichi Okumura, Takashi Asakawa.
This tutorial introduces some concepts that are specific to the management of MEG/EEG files recorded with Yokogawa/KIT systems in the Brainstorm environment.
License
This tutorial dataset (MEG/EEG and MRI data) remains proprietary of Yokogawa Electric Corporation, Japan. Its use and transfer outside the Brainstorm tutorial, e.g. for research purposes, is prohibited without written consent from Yokogawa Electric Corporation.
Description of the experiment
This tutorial is based on a simple median nerve stimulation experiment:
- Unilateral median nerves were percutaneously stimulated using monophasic square-wave impulses with a duration of 0.3 ms at 2.8 Hz.
- The stimulus intensity was set at the motor threshold to evoke mild twitches of the thumb.
- The stimulus onsets were recorded as low-to-high TTL transitions on a trigger channel labeled "Trigger01".
- The total number of stimuli in the dataset was 336.
- The data was recorded with a Yokogawa 160 axial gradiometer system at Yokogawa Electric Corporation, Kanazawa, Japan.
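Low-to-high TTL transitions like the ones on "Trigger01" can be detected with a simple edge-detection pass over the trigger channel. The sketch below is generic NumPy code, not the actual Meg160 trigger-reading routine; the channel values and threshold are made up for the example:

```python
import numpy as np

def detect_onsets(trigger, threshold=0.5):
    """Return the sample indices where the trigger goes from low to high."""
    high = trigger > threshold
    # An onset is a sample that is high while the previous sample was low
    return np.flatnonzero(high[1:] & ~high[:-1]) + 1

# Example: a TTL-like channel with two pulses
trig = np.array([0, 0, 5, 5, 0, 0, 5, 0], dtype=float)
print(detect_onsets(trig))  # -> [2 6]
```

Dividing the onset count by the recording duration should give back the 2.8 Hz stimulation rate described above.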
Export recordings from Meg160
To import Yokogawa/KIT data files (.con, .raw, .ave) into Brainstorm, the data must first be exported from Meg160, the data analysis software included with most Yokogawa/KIT systems.
The dataset used in this tutorial has already been exported using this procedure. It is described here so that later you can export your own recordings to Brainstorm.
If your software does not support the functions used below, please contact Yokogawa via
http://www.yokogawa.com/me/index.htm
Export the digitizer file
- If a wave data file and the corresponding digitizer file are ready for use, no additional operation is required.
- If no digitizer file is available, you first need to extract the head surface points:
- In Meg160, select the menu: File > Import and Export > BESA Text Export > Surface Point File
- Check that the fiducial points are correctly positioned, then click the [OK] button
- A surface point file (.sfp) is automatically created. It includes the positions of the fiducial points, the marker points and other points describing the head shape.
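The .sfp file is a simple text format: one labeled 3D point per line. A minimal reader could look like the sketch below; the labels and coordinates in the demo content are made up for the example (real files contain the labels written by Meg160):

```python
import tempfile, os

def read_sfp(path):
    """Parse a BESA-style .sfp file: one 'label x y z' entry per line."""
    points = {}
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) == 4:
                label, x, y, z = parts
                points[label] = (float(x), float(y), float(z))
    return points

# Hypothetical example content (labels and units vary between systems)
demo = "nasion 0.0 9.5 0.0\nlpa -6.8 0.0 0.0\nrpa 6.8 0.0 0.0\n"
with tempfile.NamedTemporaryFile("w", suffix=".sfp", delete=False) as f:
    f.write(demo)
    path = f.name
pts = read_sfp(path)
os.remove(path)
print(pts["nasion"])  # -> (0.0, 9.5, 0.0)
```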
Export the recordings
- In Meg160, select the menu: File > Import and Export > Third-Party Export
- On some systems, this menu is named [BESA Binary Export]
- An operation panel for data export shows up
- When using digitizer data:
- Select [Digitizer]
- Enter the digitizer file in the [Point Filename] box (a .txt file, generally found under the corresponding "Scan" folder)
- Enter the label file in the [Label Filename] box (DigitizeLabel.txt, generally located in the "C:\Meg160\AppInfo" folder)
- When using surface point data instead:
- Select [Surface Point File]
- Enter the surface point file (.sfp) created previously.
- Enter an output file name in [Third-party Export Dataset]
- Click on [Create Export File]
- Use this exported file in Brainstorm
Download and installation
- Requirements: You have already followed all the introduction tutorials and you have a working copy of Brainstorm installed on your computer.
- Go to the Download page of this website, and download the file: sample_yokogawa.zip
- Unzip it in a folder that is not in any of the Brainstorm folders (program folder or database folder)
- Start Brainstorm (Matlab scripts or stand-alone version)
- Select the menu File > Create new protocol. Name it "TutorialYokogawa" and select the options: "No, use individual anatomy", "No, use one channel file per condition".
Import the anatomy
- Right-click on the TutorialYokogawa folder > New subject > Subject01
- Leave the default options you set for the protocol
- Right-click on the subject node > Import anatomy folder:
- Set the file format: "FreeSurfer folder"
- Select the folder: sample_yokogawa/anatomy
- Number of vertices of the cortex surface: 15000 (default value)
- Set the 6 required fiducial points (indicated in MRI coordinates):
- NAS: x=128, y=228, z=93
- LPA: x=48, y=129, z=69
- RPA: x=214, y=129, z=76
- AC: x=128, y=148, z=116
- PC: x=128, y=123, z=117
- IH: x=128, y=131, z=176 (anywhere on the midsagittal plane)
- At the end of the process, make sure that the file "cortex_15000V" is selected (the downsampled pial surface, which will be used for the source estimation). If it is not, double-click on it to set it as the default cortex surface.
Without the individual MRI
If you do not have access to an individual MR scan of the subject (or if its quality is too low to be processed with FreeSurfer), but if you have digitized the head shape of the subject using a tracking system, you have an alternative: deform one of the Brainstorm templates (Colin27 or ICBM152) to match the shape of the subject's head.
For more information, read the following tutorial: Warping default anatomy
Access the recordings
Link the recordings
- Switch to the "functional data" view.
- Right-click on the subject folder > Review raw file:
- Select the file format: "EEG: Deltamed Coherence-Neurofile (*.bin)"
- Select the file: sample_yokogawa/data/tutorial_eeg.bin
- The new file "Link to raw file" gives you direct access to the contents of the EEG recordings
- The channel file "Deltamed channels" in the (Common files) folder contains the names of the channels, but not their positions. We need to overwrite this file and import the electrode positions manually (either a standard cap or accurate positions digitized with a Polhemus device).
Prepare the channel file
- Right-click on the subject folder > Import channel file:
- Select the file format: "EEG: ANT Xensor (*.elc)"
- Select the file: sample_yokogawa/data/tutorial_electrodes.elc
- Confirm that you want to overwrite the existing channel file.
- This file contains the default electrode positions from the ASA software (ANT)
- The recordings contain signals coming from different types of electrodes:
- 29 EEG electrodes
- EOG1, EOG2: Electrooculograms
- EMG, ECG: Electromyogram and electrocardiogram
- SP1, SP2: Sphenoidal electrodes
- RS: Electrode on the right shoulder
- PHO: Photo stimulation channel
- DELR, DELL, QR, QL: Additional channels
- The file format for the electrode positions does not describe the channel types correctly, so all the signals saved in the file are classified as EEG. We need to redefine this manually to get correct groups of sensors: we want only the real EEG electrodes in the "EEG" category, and everything we are not going to use in a "MISC" category.
- Right-click on the channel file > Edit channel file:
- Note that the EOG, EMG and ECG channels already have a different type
- Select all the other non-EEG channels: SP1, SP2, RS, PHO, DELR, DELL, QR, QL
- Right-click in the window > Set channel type: type MISC
- Close the figure and agree to save the modifications
Register electrodes with MRI
- The channel file we imported contains generic electrode positions, so it cannot be fully aligned with the head surface coming from the MRI. We need to register these electrode positions manually with the subject's anatomy.
- Right-click on the channel file > MRI registration > Edit... You will see several buttons in the toolbar; hover your mouse over a button for a few seconds to reveal its tooltip.
- You can click on the "Label" button in the toolbar to show the electrode names. The default positions are already quite good, and the head shape is correct; only limited manual registration will be required in the next few steps.
- Click on the button "Refine registration using head points" to find a better registration between the head shape defined by the electrodes and the head surface coming from the MRI.
- Click on the button "Project electrodes on scalp surface", to ensure all the electrodes touch the skin surface.
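Conceptually, "Project electrodes on scalp surface" moves each electrode to the closest point of the head surface. A naive nearest-vertex sketch of that idea is shown below (Brainstorm's actual projection is more refined; the toy scalp and electrode coordinates are made up):

```python
import numpy as np

def project_to_surface(electrodes, scalp_vertices):
    """Snap each electrode position to its nearest scalp vertex.
    electrodes: (n_electrodes, 3), scalp_vertices: (n_vertices, 3)."""
    # Pairwise distances: (n_electrodes, n_vertices)
    d = np.linalg.norm(electrodes[:, None, :] - scalp_vertices[None, :, :], axis=2)
    return scalp_vertices[d.argmin(axis=1)]

# Toy scalp with three vertices; one electrode floating above (0, 0, 1)
scalp = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
elec = np.array([[0.0, 0.1, 1.2]])
print(project_to_surface(elec, scalp))  # -> [[0. 0. 1.]]
```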
Review EEG recordings
Import the spike markers
Some spikes were marked by the epileptologists at the Epilepsy Center in Freiburg with Brainstorm and saved in an external text file. We are going to import this file manually.
- Right-click on the "Link to raw file" > EEG > Display time series.
- In the Record tab, select the menu File > Add events from files...
- Select the file format "Array of times" (we are importing a text file containing the timing of the markers)
- Select the file: sample_epilepsy/data/tutorial_spikes.txt
- When prompted, enter the event name "SPIKE"
- Note that a new category SPIKE is visible in the events list, containing 58 markers
- Close the viewer and agree to save the modifications
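The "Array of times" format is simply a list of latencies in seconds, separated by spaces or newlines. A minimal reader is sketched below; the latencies in the demo file are made up for the example:

```python
import tempfile, os

def read_event_times(path):
    """Read an 'Array of times' text file: latencies in seconds,
    separated by spaces or newlines."""
    with open(path) as f:
        return [float(tok) for tok in f.read().split()]

# Hypothetical spike-timing file
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("12.5 87.2\n154.8\n")
    path = f.name
times = read_event_times(path)
os.remove(path)
print(times)  # -> [12.5, 87.2, 154.8]
```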
Pre-process recordings
Evaluation
Two typical pre-processing steps consist of removing the contamination from the power lines (50 Hz or 60 Hz) and the frequencies we are not interested in (a low-pass filter to remove the high frequencies and a high-pass filter to remove the very slow components of the signals). Let's start with the spectral evaluation of this file.
- Drag the "Link to raw file" to the Process1 box and run the process "Frequency > Power spectrum density (Welch)". Configure it with: window length = 10 s, overlap = 50%.
- Double-click on the new PSD file to display it.
The sharp drop we observe at 50 Hz on all the electrodes indicates that a notch filter was already applied to the data, so we do not need to remove the power line contamination again. If we had to, we would run the process "Pre-processing > Sinusoid removal (notch)", as explained in the tutorial Detect and remove artifacts.
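To see what the Welch estimation does outside Brainstorm, here is an equivalent SciPy sketch on a synthetic signal. The sampling rate and signal content are made up; only the window length (10 s) and overlap (50%) match the process options above:

```python
import numpy as np
from scipy import signal

fs = 256.0                          # assumed sampling rate for the sketch
rng = np.random.default_rng(0)
t = np.arange(0, 60, 1 / fs)
# Synthetic EEG-like channel: a 10 Hz oscillation buried in noise
x = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

# Welch PSD with 10 s windows and 50% overlap
nper = int(10 * fs)
f, pxx = signal.welch(x, fs=fs, nperseg=nper, noverlap=nper // 2)
print("peak frequency: %.1f Hz" % f[np.argmax(pxx)])  # -> 10.0 Hz
```

Averaging many overlapping windows is what makes the spectrum smooth enough to spot notches and line noise at a glance.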
High-pass filter
We would like to apply a 0.5 Hz high-pass filter to the entire file, to get rid of the long amplitude fluctuations (longer than 2 s). With some MEG file formats, Brainstorm supports the creation of new filtered files in native formats, which means that you can drag and drop the "Link to raw file" directly to the Process1 tab and run the band-pass filter process. Unfortunately, this is not supported for the file format we are working with here (Deltamed Coherence-Neurofile).
An alternative is to import the entire continuous file into the database as one single block, then run the filter on it. This approach is possible only on rather small files (4 x file size < memory of the computer). Here the file size is less than 80 MB, so it can easily be processed this way.
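As a plain signal-processing sketch of the 0.5 Hz high-pass (using a SciPy zero-phase Butterworth filter; Brainstorm's own filter implementation differs, and the synthetic signal is made up):

```python
import numpy as np
from scipy import signal

fs = 256.0
t = np.arange(0, 20, 1 / fs)
# Signal with a slow drift (0.1 Hz) plus a 10 Hz component
x = np.sin(2 * np.pi * 0.1 * t) + 0.2 * np.sin(2 * np.pi * 10 * t)

# 0.5 Hz zero-phase Butterworth high-pass (assumed filter design)
b, a = signal.butter(4, 0.5 / (fs / 2), btype="highpass")
y = signal.filtfilt(b, a, x)
# In the middle of the file, only the 10 Hz component (amplitude ~0.2)
# survives; the slow drift is removed
```

Zero-phase filtering (filtfilt) avoids shifting the spike latencies, which matters when the markers are defined on the unfiltered data.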
- Right-click on the "Link to raw file" > Import in database.
- Uncheck all the options and leave the default "Time window" option.
- Click on Import: this creates an integral copy of the file in the database. Do NOT try to review it: for imported files, the partial loading and the scrolling in time are disabled, and it would take a lot of time to display the waveforms for the entire file. It could even result in a crash of Brainstorm or Matlab.
- Drag this new file "Raw (0s,3600s)" to the Process1 tab
- Run the process "Pre-process > Band-pass filter", with the options: lower cutoff 0.5 Hz, no upper cutoff.
Epoching and averaging
Import recordings
- Now we want to extract all the spikes as 400 ms windows from this filtered file.
- Right-click on the imported and filtered file "Raw (0s,3600s) | high(0.5Hz)" > Import in database
- Select the option "Use events" and select the category SPIKE, with 58 events.
- Epoch time: [-100, +300] ms around the spike events
- Uncheck "Remove DC offset": the DC offset has already been removed with the high-pass filter
- You should see a new folder "SPIKE" containing 58 epochs in your database.
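The epoching step above amounts to cutting fixed windows around the event latencies. A minimal NumPy sketch (the function name and the fake data are illustrative only):

```python
import numpy as np

def make_epochs(data, fs, event_times, tmin=-0.100, tmax=0.300):
    """Cut [tmin, tmax] windows (in seconds) around each event latency.
    data: (n_channels, n_samples) -> (n_events, n_channels, n_times)."""
    n0, n1 = int(round(tmin * fs)), int(round(tmax * fs))
    epochs = []
    for t in event_times:
        s = int(round(t * fs))
        if s + n0 >= 0 and s + n1 <= data.shape[1]:  # skip edge events
            epochs.append(data[:, s + n0:s + n1])
    return np.stack(epochs)

fs = 100.0
data = np.zeros((2, 1000))                 # 2 channels, 10 s of fake data
epochs = make_epochs(data, fs, [2.0, 5.0, 8.0])
print(epochs.shape)  # -> (3, 2, 40)
```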
Average spikes
- Drag and drop all the SPIKE files or the SPIKE folder into the Process1 tab
- Run the process "Average > Average files", with the following options: by condition (subject average), arithmetic average.
- Double-click on the new file "Avg: SPIKE" to review it, and set the "Average reference" montage.
- Go to the Filter tab to disable all the visualization filters: these filters do not work well on short signals, and the 0.5 Hz high-pass filter can introduce errors in the display of the average.
- Open a "2D sensor cap" view on the same file, or simply press [Ctrl+T] from the time series figure.
- To have all the figures re-arrange themselves automatically again, select "Weighted" or "Tiled" in the "Window layout" menu (top-right corner).
- Explore this average in time with the left and right keyboard arrows.
Typically, the analysis could be limited to the ascending part of the peak (from -11 ms to 0 ms), as it gives rather clear information on the primary epilepsy focus. For the rest of the tutorial, we will use a larger time window (-40 ms to 100 ms) to illustrate the visualization tools. Just keep in mind that it can be hazardous to interpret waves that are far in time or in space from the primary focus.
- To get a synthetic view at the sensor level of the evolution of the 2D map in time:
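The averaging and the "Average reference" montage used above can be sketched in a few lines of NumPy (toy values, 2 channels):

```python
import numpy as np

def average_reference(x):
    """Apply the average reference montage: subtract, at each time point,
    the mean over channels. x: (n_channels, n_times)."""
    return x - x.mean(axis=0, keepdims=True)

avg = np.array([[1.0, 2.0],
                [3.0, 6.0]])           # toy averaged data
rereferenced = average_reference(avg)
print(rereferenced.sum(axis=0))        # each column now sums to zero
```

After re-referencing, the signals no longer depend on the original recording reference, which makes the topographies easier to compare across setups.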
Source analysis
Head model
We are going to use a realistic head model; this requires calculating additional surfaces for the subject, to represent the inner skull and outer skull surfaces. Go to the "Anatomy" view, right-click on the subject > Generate BEM surfaces.
Use 2000 vertices for each layer. This value will be replaced with the closest possible value (1922). Three new surface files are created at the end of the process.
Go back to the "Functional data" view, right-click on the channel file (or any folder containing the channel file) > Compute head model.
Select the "Cortex surface" option and the "OpenMEEG BEM" forward model. Leave all the OpenMEEG options at their defaults except one: select the option "Use adjoint formulation".
- If you can't download OpenMEEG automatically, download and install it manually (menu Help)
- If the OpenMEEG calculation crashes, please refer to the OpenMEEG tutorial.
- If you can compute the model but the source maps you obtain later don't make any sense, it is probably because the BEM surfaces were poorly defined. When some cortex vertices are too close to the inner skull surface, the model can be unstable, and after the source estimation you would typically see source maps containing zeros everywhere except for a few vertices. If this happens, delete the existing BEM surfaces and compute new ones with a higher number of vertices for the inner and outer skull surfaces.
- If you cannot get OpenMEEG to work, or if the results definitely do not make sense, try a different forward model: "3-shell sphere". It is a spherical model, so it performs better in the regions of the head that are close to the sphere. See the Head model tutorial.
Noise covariance matrix
For minimum norm inverse models, we first need to estimate the noise level at the sensors. Defining what can be considered "noise" in the middle of continuous brain recordings is a difficult question, discussed in the Noise covariance tutorial.
- Here we chose what is described as option 1b in the section "Noise and epilepsy" of that tutorial: we are going to calculate the noise covariance matrix using 10 seconds of the continuous file that are very "quiet", i.e. that contain no apparent epileptic activity, no REM and no other notable artifacts. We can use for this purpose the 10 s segment between 120 s and 130 s.
- The file used for the calculation of the noise covariance matrix must be pre-processed in the same way as the files for which you want to estimate the sources (the SPIKE epochs), so we need to use the imported+filtered continuous file.
- Right-click on the "Raw (0s,3600s) | high(0.5Hz)" > Noise covariance > Compute from recordings
- Select the time window [120s - 130s] and leave the other options at their default values
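Computing the noise covariance from a quiet segment amounts to taking the sample covariance of the sensors over that time window. A NumPy sketch on fake data (the function name, sampling rate and data are illustrative only):

```python
import numpy as np

def noise_covariance(data, fs, t0, t1):
    """Sensor noise covariance over the quiet segment [t0, t1] (seconds).
    data: (n_channels, n_samples)."""
    seg = data[:, int(t0 * fs):int(t1 * fs)]
    seg = seg - seg.mean(axis=1, keepdims=True)   # remove each channel's DC
    return seg @ seg.T / (seg.shape[1] - 1)

rng = np.random.default_rng(1)
data = rng.standard_normal((4, 3000))             # fake 4-channel recording
C = noise_covariance(data, fs=100.0, t0=12.0, t1=22.0)
print(C.shape)  # -> (4, 4)
```

The result is equivalent to `np.cov` on the same segment; longer quiet segments give a more stable estimate.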
Inverse model
- Right-click on the head model > Compute sources.
- Leave all the default options to calculate a wMNE solution with constrained dipole orientations.
- This operation creates a shared inversion kernel in the folder (Common files) and one source link for each block of recordings in Subject01. If you are not familiar with these concepts, please refer to the Source estimation tutorial.
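For reference, the core of a minimum-norm inverse kernel can be sketched as below. This is a heavily simplified version (identity source covariance, no depth weighting, no noise whitening, and an illustrative regularization heuristic), not Brainstorm's actual wMNE implementation:

```python
import numpy as np

def mne_kernel(G, C, snr=3.0):
    """Simplified minimum-norm inverse kernel.
    G: (n_sensors, n_sources) gain matrix, C: (n_sensors, n_sensors)
    noise covariance. Returns K such that sources = K @ recordings."""
    # Regularization scaled from the assumed SNR (illustrative heuristic)
    lam2 = np.trace(G @ G.T) / (np.trace(C) * snr ** 2)
    return G.T @ np.linalg.inv(G @ G.T + lam2 * C)

rng = np.random.default_rng(2)
G = rng.standard_normal((5, 5))        # toy square gain matrix
K = mne_kernel(G, np.eye(5), snr=1e6)
# With negligible regularization and a square gain, K simply inverts G
print(np.allclose(K @ G, np.eye(5), atol=1e-2))  # -> True
```

Storing K as a "shared kernel" is what lets Brainstorm display sources for any recordings file without recomputing the inversion: the source maps are just K applied to the data.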
- Double-click on the sources file for the average to open it. Configure its display: surface smoothing, amplitude threshold, colormap. For a reminder on how to manipulate the colormaps, see the tutorial Explore recordings.
- Make sure that all the visualization filters are turned OFF.
- Double-click on the recordings as well to get a visual reference of the time. In the Record tab, you can click on the "Display mode" button in the toolbar to switch back to the "butterfly" view.
- To display the same information re-interpolated in the volume, right-click on the source file > Cortical activation > Display on MRI.
- You can save movies or contact sheets of the source maps with the Snapshot popup menu.
- The inverse solutions you calculated are shared for all the recordings of this subject, so you can go back to your individual epochs, or even the continuous file, and look at the source maps for those files.
Regions of interest
- Scouts are a very interesting feature for studying the spatial propagation of spikes. From the displays we generated previously, there is no clear propagation of the spike. Let's explore this in more detail, at least to illustrate the method.
- Start by placing a small scout at the focus (~7 vertices).
- Place a few other scouts of a similar size at other locations around the spike, where you could suspect a possible propagation.
- Plot the overlaid time series for those scouts. Try with the scout functions MEAN and MAX.
- There is no delay between the peaks of the different regions at -20 ms, 0 ms and 35 ms. This temporal exploration seems to confirm that there is no clear spatial propagation of the spike.
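The scout functions MEAN and MAX mentioned above can be sketched as follows. MAX here keeps, at each time point, the value of largest magnitude across the scout vertices, which is one common convention; the toy values are made up:

```python
import numpy as np

def scout_series(sources, scout, fun="mean"):
    """Collapse a scout (a few vertex indices) into one time series.
    sources: (n_vertices, n_times)."""
    s = sources[scout, :]
    if fun == "mean":
        return s.mean(axis=0)
    # "max": keep, at each time point, the value of largest magnitude
    return s[np.abs(s).argmax(axis=0), np.arange(s.shape[1])]

src = np.array([[1.0, -4.0],
                [2.0,  1.0],
                [3.0,  1.0]])
print(scout_series(src, [0, 1, 2], "mean"))  # mean per time point: 2, -0.667
print(scout_series(src, [0, 1, 2], "max"))   # signed extremes: 3, -4
```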
- If you need a reminder on how to manipulate scouts, see the Scouts tutorial.
Z-score normalization
A good way to better reveal the source activity at the cortex level is to calculate a Z-score of the source maps with respect to a quiet baseline. We can use the same baseline as for the calculation of the noise covariance matrix.
- Import the block of recordings to use as the baseline: right-click on the imported and processed file "Raw (0s,3600s) | high(0.5Hz)" > Import in database. Uncheck all the options and select the time window: [120, 130] seconds.
- Click on the Process2 tab
- In Files A: drag the sources for the segment you just imported "Raw (120.00s,130.00s)"
- In Files B: drag the sources for the averaged spike
- Alternatively, select the recordings and click the "Process sources" button on both sides
- Run the process: Standardize > Z-score (static). Double-click on the new file to display it.
- The amplitudes in the figure are now displayed without units: they represent multiples of the standard deviation over the baseline. The interpretation of this new figure is not very different from the previous one, but it helps to look at the results from a different angle. This process usually provides smoother maps, with less bias due to the depth of the sources. On the non-standardized minimum norm maps, the values of deeper sources are usually lower than the ones at the surface. The Z-score helps revealing deeper sources whose minimum norm values are low, but still higher than during the baseline.
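The static Z-score used here is a per-source standardization against the baseline statistics. A minimal sketch with toy values:

```python
import numpy as np

def zscore_static(sources, baseline):
    """Z-score each source time series against a quiet baseline.
    sources, baseline: (n_sources, n_times)."""
    mu = baseline.mean(axis=1, keepdims=True)
    sd = baseline.std(axis=1, keepdims=True)
    return (sources - mu) / sd

baseline = np.array([[0.0, 2.0, 4.0]])   # toy baseline: mean 2, std ~1.63
spike = np.array([[10.0]])
print(zscore_static(spike, baseline))    # ~4.9 standard deviations above baseline
```

Because the division is done source by source, deep sources with small minimum-norm values can still stand out if they rise well above their own baseline level.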
Scripting
To reproduce this entire analysis with a script, use the following processes:
Import recordings > Import anatomy folder
- Subject name: Subject01
- Folder to import: sample_epilepsy/anatomy (format = FreeSurfer)
- Enter the fiducials positions indicated at the beginning of the tutorial
- Because those fiducial points have to be selected manually, it is usually easier to run this first step in interactive mode.
Import anatomy > Generate BEM surfaces
- Set all the values to 1922 and the skull thickness to 4mm
Import recordings > Create link to raw file
- File to import: sample_epilepsy/data/tutorial_eeg.bin (format = Deltamed Coherence-Neurofile)
- Uncheck the option "Align sensors using headpoints"
Import recordings > Set channel file
- File to import: sample_epilepsy/data/tutorial_electrodes.elc (format = EEG: ANT Xensor)
- Check the option "Align sensors using headpoints"
Import recordings > Project electrodes on scalp
Import recordings > Set channels type
- Channel types or names: SP1, SP2, RS, PHO, DELR, DELL, QR, QL
- New type: MISC
Import recordings > Events: Import from file
- Event file: sample_epilepsy/data/tutorial_spikes.txt (format = Array of times)
Event name: SPIKE
Import recordings > Import MEG/EEG: Time
- Condition name: Empty
- Time window: default (entire file)
- Split recordings in time blocks: 0s
Pre-process > Band-pass filter
- Lower cutoff frequency: 0.5 Hz
- Upper cutoff frequency: 0
- Sensor types: Empty
- Check: Overwrite input file
Sources > Compute noise covariance
- Baseline: [120s - 130s]
- Remove DC offset: Block by block
- Output: Full noise covariance matrix
Import recordings > Import MEG/EEG: Events
- Condition name: Empty
- Event names: SPIKE
- Time window: default (entire file)
- Epoch time: [-101.6ms, 300.8ms]
- Check all the additional options
Average > Average files
- By condition (subject average)
- Arithmetic average
- Uncheck: Keep all the events
File > Save snapshot
- Snapshot: Sensors/MRI registration
- Sensor type: EEG
- Orientation: Left
- Comment: "Average spike"
File > Save snapshot
- Snapshot: Recordings time series
- Sensor type: EEG
File > Save snapshot
- Snapshot: Recordings topography (contact sheet)
- Sensor type: EEG
- Contact sheet (start time, stop time): "-40 110"
- Contact sheet (number of images): 16
Sources > Compute head model
- EEG Method: OpenMEEG BEM
- Edit OpenMEEG options
- BEM Layers: Select all (scalp, skull, brain)
- Check: Use adjoint formulation
- Check: Use adaptive integration
Sources > Compute sources
- wMNE
- Sensor types: EEG
- Kernel only: Shared
File > Save snapshot
- Snapshot: Sources (one time)
- Orientation: top
- Time: 0s
Graphic edition
Generate Matlab script
This list of processes can be converted automatically to a Matlab script.
See the results on the page: EEG and epilepsy: Script.