Tutorial 15: Import epochs
Authors: Francois Tadel, Elizabeth Bock, Sylvain Baillet
We can consider that our datasets are clean from any major artifact. We will now proceed to the analysis of the brain signals we recorded in response to the auditory stimulation. There are two major types of processing workflows for MEG/EEG, depending on whether we are dealing with an event-related paradigm or a steady-state/resting-state study.
This tutorial will only focus on the event-related case: series of stimuli are sent to the subject and we have the corresponding triggers marked in the recordings. We will base our analysis on those triggers, import short epochs around each of them and average them. You will find in the advanced tutorials a scenario of MEG resting-state analysis.
Import in database
Until now, we've only been looking at data that was read from continuous files. The raw file viewer provides rapid access to the recordings, but many operations can only be applied to short segments of recordings that have been imported in the database, called "epochs" or "trials".
Right-click on Run#01 > Import in database.
Set the import options as described below:
Time window: Time range of interest. We are interested in all the stimulations, so do not change this parameter; the default values always represent the entire file.
Split: Useful to import continuous recordings without events, by splitting them into successive chunks of the same duration. We do not need this here.
Events selection: Check the "Use events" option, and select both "standard" and "deviant".
The number in parentheses represents the number of occurrences of each event in the selected time window (it changes if you modify the time definition at the top of the window).
Epoch time: Time segment that is extracted around each event marker. Set it to [-100,+500]ms.
Apply SSP/ICA projectors: Use the active SSP projectors calculated during the previous pre-processing steps. Always check the summary of the projectors that are selected.
Here there are 2 categories ("cardiac" and "blink") with a total of 3 projectors selected (one in "cardiac" and two in "blink": the blink and the saccade). Keep this option selected.
Remove DC Offset: Check this option and select Time range: [-100, 0] ms. For each epoch, it will:
- Compute the average of each channel over the baseline (pre-stimulus interval: [-100,0]ms)
- Subtract it from the channel at every time instant (full epoch interval: [-100,+500]ms).
- This option removes the baseline value of each sensor. In MEG, the sensors record variations around a somewhat arbitrary level, so this operation is always needed, unless it was already applied during one of the pre-processing steps.
- Note that a high-pass filter with a very low cutoff frequency (for instance 0.3Hz) can efficiently replace this DC correction. If a high-pass filter has already been applied to the recordings, you may want to unselect this option. A minimal sketch of this baseline correction is shown after this list of options.
Resample recordings: Keep this unchecked
Create a separate folder for each epoch type: Do not check this option.
- If selected: a new folder is created for each event type ("standard" and "deviant")
- If not selected: all the epochs are saved in a single new folder, which has the same name as the initial raw file. This is what we want.
- In this case, we have to use this second option (no separate folders) because we have two acquisition runs with different channel files (different head positions and different SSP projectors) to import for the same subject. If we created one folder per event type, the "standard" epochs of both runs would be imported in the same folder and would end up sharing the same channel file, which is not correct because the channel files of the two runs are different.
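To make the "Epoch time" and "Remove DC offset" options more concrete, here is a minimal MATLAB sketch of the underlying operations. This is not the Brainstorm implementation: F, evtTimes and sfreq are hypothetical variables standing for the continuous recordings, the event latencies and the sampling frequency.

% Conceptual sketch (not Brainstorm code): extract epochs around event markers
% and remove the DC offset computed over the pre-stimulus baseline.
sfreq     = 600;                      % sampling frequency in Hz (hypothetical value)
epochTime = [-0.100, 0.500];          % epoch window around each event, in seconds
baseline  = [-0.100, 0];              % baseline window used for the DC offset
% F: continuous recordings [nChannels x nSamples]; evtTimes: event latencies in seconds
iOffset = round(epochTime(1)*sfreq) : round(epochTime(2)*sfreq);   % sample offsets around each event
iBase   = (iOffset >= round(baseline(1)*sfreq)) & (iOffset <= round(baseline(2)*sfreq));
epochs  = zeros(size(F,1), length(iOffset), length(evtTimes));
for k = 1:length(evtTimes)
    iEvt = round(evtTimes(k) * sfreq);                   % sample index of the k-th event
    x = F(:, iEvt + iOffset);                            % extract the [-100,+500]ms segment
    x = x - repmat(mean(x(:,iBase), 2), 1, size(x,2));   % subtract the baseline mean (DC offset)
    epochs(:,:,k) = x;
end
% (Events falling too close to the file edges would need to be skipped in practice.)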
One new folder appears in Subject01.
It contains a channel file (copied from the continuous file) and two trial groups. To expand a group of trials and access the individual epochs: double-click on it or click on the "+" next to it.
- The SSP projectors calculated in the previous tutorial were applied on the fly when reading from the continuous file. Those epochs are clean from eye blinks and power line contamination.
Note that the trials that are overlapping with a BAD segment are tagged as bad in the database explorer (marked with a red dot). All the bad trials are going to be ignored in the rest of the analysis, because they are ignored by the Process1 and Process2 tabs (see next tutorial).
Review the individual trials
After reviewing the continuous file with the "columns" view (channels one below the other) it can be useful to also review the imported trials with the "butterfly" view (all the channels superimposed).
- Double-click on the first trial for the "deviant" condition.
Switch to the "butterfly" display mode: in the Record tab, click on the first button in the toolbar.
Right-click on the figure > Navigator > Next data file, or use the keyboard shortcut F3.
This way you can quickly review all the trials to make sure that there is no obvious problem.
Mac users: The keys "Fx" are obtained by holding the "Fn" key simultaneously.
To mark a trial as bad manually, you have three methods:
Right-click on the trial file in the database > Reject trial
Right-click on the figure > Reject trial
Use the keyboard shortcut Ctrl+B
To set all the trials back as good in a group: right-click on the trials group > Accept bad trials.
Run #02
Repeat the same operations for the second dataset:
Right-click on Run#02 > Import in database.
Import events "standard" and "deviant" with the same options.
On the hard drive
Right-click on any imported epoch > File > View file contents:
Structure of the imported epochs
F: recordings time series (nChannels x nTime), in Volts.
Std: Standard deviation or standard error, when available (see next tutorial).
Comment: String displayed in the database explorer to represent this file.
ChannelFlag: One value per channel, 1 means good, -1 means bad.
Time: Time values for each sample recorded in F, in seconds.
DataType: Type of the data saved in the F matrix.
Device: Name of the acquisition system used to record this file.
nAvg: For averaged files, number of trials that were used to compute this file.
Events: Time markers available in the file (stimulus triggers or other events)
label: Name of the event group.
color: [r,g,b] Color used to represent the event group, in Matlab format.
epochs: [1 x Nevt] Indicates in which epoch the event is located (index in the sFile.epochs array), or 1 everywhere for files that are not saved in "epoched" mode. Nevt = number of occurrences of the event = number of markers in this group.
samples: [1 x Nevt] Sample indices of each marker in this group (samples = times * sfreq). For extended events: [2 x Nevt], first row = start, second row = end.
times: [1 x Nevt] Time in seconds of each marker in this group (times = samples / sfreq). For extended events: [2 x Nevt], first row = start, second row = end.
reactTimes: Not used anymore
select: Not used anymore
History: Operations performed on file since it was imported (menu "View file history").
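Since the imported epochs are regular .mat files, you can also inspect these fields directly in MATLAB. Here is a minimal sketch; the file name below is hypothetical, locate the actual trial file in your Brainstorm database folder (protocol/data/Subject01/...).

% Load one imported epoch and inspect its fields (hypothetical file name)
DataMat = load('data_deviant_trial001.mat');
size(DataMat.F)                            % [nChannels x nTime] recordings, in Volts
iGood = find(DataMat.ChannelFlag == 1);    % indices of the good channels
plot(DataMat.Time, DataMat.F(iGood, :)');  % quick "butterfly" plot of the good channels
xlabel('Time (s)');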
File history
Right-click on any imported epoch > File > View file history:
List of bad trials
- There is no field in the file structure that says if the trial is good or bad.
- This information is saved at the level of the folder, in its brainstormstudy.mat file.
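For instance, a quick way to list the bad trials of a folder from MATLAB, assuming the list is stored in a BadTrials field of the folder's brainstormstudy.mat file (field name given here as an assumption):

% Sketch: list the trials marked as bad in an imported folder.
% Run this from the folder under protocol/data/ that contains the epochs.
StudyMat = load('brainstormstudy.mat');
disp(StudyMat.BadTrials)   % file names of the trials marked as bad (assumed field name)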