Tutorial: Import and visualize functional NIRS data
|
|
Authors: Thomas Vincent, Zhengchen Cai
This tutorial assumes that tutorials 1 to 6 have been completed. Although they focus on MEG data, they introduce Brainstorm features that are required for this tutorial.
List of prerequisites:
Download
The dataset used in this tutorial is available online.
- Go to the Download page of this website and download the file: nirs_sample.zip
- Unzip it in a folder that is not in any of the Brainstorm folders
Presentation of the experiment
- Finger tapping task: 10 stimulation blocks of 30 seconds each, with rest periods of ~30 seconds
- One subject, one NIRS acquisition run of 12 minutes at 10 Hz
- 4 sources and 12 detectors (+ 4 proximity channels) placed above the right motor region
- Two wavelengths: 690nm and 830nm
- MRI anatomy: 3T acquisition (scanner type: TODO)
Create the data structure
Create a protocol called "TutorialNIRSTORM":
- Go to File -> New Protocol
- Use the following settings:
Default anatomy: Use individual anatomy.
Default channel file: Use one channel file per subject (EEG).
In terms of sensor configuration, NIRS is very similar to EEG, and the placement of the optodes may change from one subject to another.
Should we add (EEG or NIRS) in the interface?
Create a subject called "Subject01" (Go to File -> New subject), with the default options
Import anatomy
Import MRI and meshes
Make sure you are in the anatomy view of the protocol.
Right-click on "Subject01 -> Import anatomy folder". Select anatomy folder from the nirs_sample data folder. Reply "yes" when asked to apply the transformation. Leave the number of vertices for the head mesh to the default value.
This will open the MRI review panel where you have to set the fiducial points (see Import the subject anatomy).
Note that the PC, AC and IH points are already defined.
Here are the MRI coordinates (in mm) of the fiducials used for this data set:
- NAS: x:95 y:213 z:114
- LPA: x:31 y:126 z:88
- RPA: x:164 y:128 z:89
- AC: x:96 y:137 z:132
- PC: x:97 y:112 z:132
- IH: x:95 y:103 z:180
The head and white segmentations provided in the NIRS sample data were computed with Brainvisa and should automatically be imported and processed.
You can check the registration between the MRI and the loaded meshes by right-clicking on each mesh element and going to "MRI registration -> Check MRI/Surface registration".
Import NIRS functional data
The functional data used in this tutorial was produced by the Brainsight acquisition software and is available in the data subfolder of the nirs_sample folder. It contains the following files:
- fiducials.txt: the coordinates of the fiducials (nasion, left ear, right ear). The positions of the nasion, LPA and RPA should have been digitized at the same locations as the fiducials previously marked on the anatomical MRI. These points are used by Brainstorm for the registration, so consistency between the digitized and marked fiducials is essential for good results.
- optodes.txt: the coordinates of the optodes (sources and detectors), in the same reference frame as fiducials.txt. Note: the actual reference frame is not relevant here, as the registration will be performed by Brainstorm afterwards.
- S01_Block_FO_LH_Run01.nirs: the NIRS data in a Homer-based format (TODO: document this format). Note: the fields SrcPos and DetPos will be overwritten to match the coordinates given in "optodes.txt". A quick way to peek at these fields outside Brainstorm is sketched below.
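If you are curious about what the .nirs file contains, the following minimal Python sketch opens it as the MATLAB container it is. The field names other than SrcPos/DetPos (d, t, SD, aux) follow the usual Homer convention and are assumptions here, not something this tutorial guarantees:

    # Minimal sketch: peek inside a Homer-style .nirs file (a MATLAB .mat container).
    from scipy.io import loadmat

    nirs = loadmat("S01_Block_FO_LH_Run01.nirs", squeeze_me=True, struct_as_record=False)

    d = nirs["d"]        # raw intensities, one column per channel (assumed Homer field)
    t = nirs["t"]        # time vector in seconds (assumed Homer field)
    sd = nirs["SD"]      # probe description, holds SrcPos and DetPos (assumed Homer field)
    aux = nirs["aux"]    # auxiliary signals: triggers, stimulation events (assumed Homer field)

    print("samples x channels:", d.shape)
    print("sampling rate (Hz):", 1.0 / (t[1] - t[0]))
    print("sources:", sd.SrcPos.shape[0], "detectors:", sd.DetPos.shape[0])

This is only for inspection; the import into Brainstorm described below does not require any scripting.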
To import this data set in Brainstorm:
- Go to the "functional data" view of the protocol.
Right-click on "Subject01 -> Review raw file"
- Select file type "NIRS: Brainsight (.nirs)"
Load the file "S01_Block_FO_LH_Run01.nirs" in the NIRS sample folder.
Note: the importation process assumes that the files optodes.txt and fiducials.txt are in the same folder
as the .nirs data file.
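As a small sanity check before importing, plain Python can confirm that the two companion files sit next to the recording (the path below is a placeholder for wherever you unzipped nirs_sample):

    # Minimal sketch: check that the companion files expected by the importer
    # are located next to the .nirs recording.
    from pathlib import Path

    nirs_file = Path("nirs_sample/data/S01_Block_FO_LH_Run01.nirs")  # adjust to your local path
    for name in ("optodes.txt", "fiducials.txt"):
        companion = nirs_file.parent / name
        print(name, "found" if companion.exists() else "MISSING")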
Registration
In the same way as in the tutorial "Channel file / MEG-MRI coregistration", the registration between the MRI and the NIRS is first based on three reference points: the nasion and the left and right ears. It can then be refined with either the full head shape of the subject or with manual adjustment.
Step 1: Fiducials
- The initial registration is based on the three fiducial points that define the Subject Coordinate System (SCS): nasion, left ear, right ear. You have marked these three points in the MRI viewer in the previous part.
- These same three points have also been marked before the acquisition of the NIRS recordings. The person who recorded this subject digitized their positions with a tracking device (here Brainsight). The positions of these points are saved with the NIRS dataset (see fiducials.txt).
- When we bring the NIRS recordings into the Brainstorm database, we align them on the MRI using these fiducial points: we match the NAS/LPA/RPA points digitized with Brainsight with the ones we placed in the MRI Viewer (a minimal sketch of this kind of alignment follows this list).
- This registration method gives approximate results. It can be good enough in some cases, but not always, because of the imprecision of the measures. The tracking system is not always very precise, the points are not always easy to identify on the MRI slices, and the very definition of these points does not offer millimeter precision. All this combined, it is easy to end up with a registration error of 1 cm or more.
- The quality of the source analysis we will perform later is highly dependent on the quality of the registration between the sensors and the anatomy. If we start with a 1cm error, this error will be propagated everywhere in the analysis.
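To make the idea behind step 1 concrete, here is a minimal Python sketch of a fiducial-based rigid alignment (a Kabsch-style fit of the three point pairs). The coordinates are made up for illustration; this is not Brainstorm's actual implementation, only the general principle:

    import numpy as np

    def fit_rigid(src, dst):
        """Find rotation R and translation t such that R @ src[i] + t ~= dst[i]."""
        src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
        U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:          # avoid a reflection
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = dst.mean(0) - R @ src.mean(0)
        return R, t

    # NAS, LPA, RPA digitized by the tracking device vs. marked in the MRI (made-up values, cm)
    digitized = np.array([[0.0, 9.5, 0.0], [-7.0, 0.0, 0.0], [7.0, 0.0, 0.0]])
    mri       = np.array([[0.2, 9.3, 0.1], [-6.8, 0.1, -0.2], [7.1, -0.1, 0.0]])

    R, t = fit_rigid(digitized, mri)
    residual = np.linalg.norm(digitized @ R.T + t - mri, axis=1)
    print("per-fiducial residual:", residual)  # a large residual hints at a registration problem

With only three points the fit is determined up to measurement noise, which is exactly why an error of 1 cm on any fiducial propagates directly into the registration.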
Step 2: Head shape
We don't have additional digitized head points for this data set, so this step is skipped.
Step 3: Manual adjustment
TODO?
To review this registration, right-click on "NIRS-BRS sensors (104) -> Display sensors -> NIRS"
To show the fiducials, which were stored as additional digitized head points: right-click on "NIRS-BRS sensors (104) -> Digitized head points -> View head points"
TODO: improve display of optode labels and symbols
As a reference, the following figures show the positions of the fiducials [blue] (the inion and nose tip are extra positions), sources [orange] and detectors [green] as they were digitized by Brainsight; a rough matplotlib equivalent is sketched below:
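If you prefer to inspect the digitized geometry yourself, a rough equivalent of these views can be drawn with matplotlib from the positions stored in the .nirs file (SD.SrcPos / SD.DetPos, again assuming the Homer convention; the fiducials themselves would come from fiducials.txt):

    # Minimal sketch: 3D scatter of the source and detector positions stored in the .nirs file.
    import matplotlib.pyplot as plt
    from scipy.io import loadmat

    nirs = loadmat("S01_Block_FO_LH_Run01.nirs", squeeze_me=True, struct_as_record=False)
    src, det = nirs["SD"].SrcPos, nirs["SD"].DetPos   # assumed Homer fields, N x 3 arrays

    ax = plt.figure().add_subplot(projection="3d")
    ax.scatter(*src.T, color="orange", label="sources")
    ax.scatter(*det.T, color="green", label="detectors")
    ax.legend()
    plt.show()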
Review Channel information
The resulting data organization in the database explorer should show the new entry "Subject01 -> S01_Block_FO_LH_Run01 -> Link to raw file".
This indicates that the data comes from the Brainsight system (BRS) and comprises 104 channels (96 NIRS channels + 8 auxiliary signals).
TODO: remove useless aux signals
To review the content of the channels, right-click on "NIRS-BRS sensors (104) -> Edit channel file".
- Channels whose names are of the form SxDyWLzzz represent NIRS measurements. For a given NIRS channel, the name is composed of the source/detector pair and the wavelength value (see the parsing sketch below). Column Loc(1) contains the coordinates of the source, Loc(2) the coordinates of the associated detector.
- Each NIRS channel is assigned to the group "NIRS_WL690" or "NIRS_WL830" to specify its wavelength.
- Channels AUXy in the group NIRS_AUX contain data read from the nirs.aux structure of the input NIRS data file. This usually contains acquisition triggers (AUX1 here) and stimulation events (AUX2 here).
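The naming scheme above is easy to decode programmatically. Here is a small sketch (a hypothetical helper, not part of Brainstorm) that splits a channel name into its source, detector and wavelength, and derives the wavelength group:

    # Minimal sketch: decode a NIRS channel name such as "S1D2WL690".
    import re

    def parse_nirs_channel(name):
        m = re.fullmatch(r"S(\d+)D(\d+)WL(\d+)", name)
        if m is None:
            return None                      # e.g. AUX channels
        src, det, wl = map(int, m.groups())
        return {"source": src, "detector": det, "wavelength": wl, "group": f"NIRS_WL{wl}"}

    print(parse_nirs_channel("S1D2WL690"))   # {'source': 1, 'detector': 2, 'wavelength': 690, 'group': 'NIRS_WL690'}
    print(parse_nirs_channel("AUX1"))        # None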
Visualize NIRS signals
Select "Subject01/S01_Block_FO_LH_Run01/Link to raw file -> NIRS -> Display time series"
This opens a new figure with the channels stacked on top of each other.
The signals can also be viewed in butterfly mode, with all channels superimposed.
To view the auxiliary data, select "Subject01/S01_Block_FO_LH_Run01/Link to raw file -> NIRS_AUX -> Display time series"
For details on navigating these views, see the tutorial "Review continuous recordings".
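For readers working outside the Brainstorm GUI, here is a minimal matplotlib sketch of the two display modes, plotted directly from the .nirs file (Homer-style fields assumed, as before):

    # Minimal sketch: stacked and butterfly views of the raw NIRS time series.
    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.io import loadmat

    nirs = loadmat("S01_Block_FO_LH_Run01.nirs", squeeze_me=True, struct_as_record=False)
    d, t = nirs["d"], nirs["t"]                            # intensities (samples x channels), time (s)

    fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
    d_n = (d - d.mean(axis=0)) / (d.std(axis=0) + 1e-12)   # normalize each channel for readability
    offsets = np.arange(d.shape[1]) * 5.0                  # vertical offset per channel
    ax1.plot(t, d_n + offsets, linewidth=0.5)
    ax1.set_title("stacked channels")
    ax2.plot(t, d, linewidth=0.5)                          # all channels superimposed
    ax2.set_title("butterfly")
    ax2.set_xlabel("time (s)")
    plt.show()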
Montage selection
TODO
Within the NIRS channel type, the following channel groups are available (a sketch that rebuilds these groups from the channel names follows the list):
- All Channels: gathers all NIRS channels
- NIRS_WL685: contains only the channels corresponding to the 685 nm wavelength
- NIRS_WL830: contains only the channels corresponding to the 830 nm wavelength
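As a sketch of how these montage-like groups relate to the channel names (the example names are made up, and the wavelength labels simply follow whatever the channel file contains):

    # Minimal sketch: rebuild the montage groups from a list of channel names.
    import re

    channels = ["S1D1WL690", "S1D1WL830", "S1D2WL690", "S1D2WL830", "AUX1"]  # made-up example names

    montages = {"All Channels": []}
    for ch in channels:
        m = re.fullmatch(r"S\d+D\d+WL(\d+)", ch)
        if m is None:
            continue                          # auxiliary channels are not part of the NIRS montages
        montages["All Channels"].append(ch)
        montages.setdefault(f"NIRS_WL{m.group(1)}", []).append(ch)

    print(sorted(montages))                   # ['All Channels', 'NIRS_WL690', 'NIRS_WL830']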
Add screenshot