Tutorial 15: Import epochs
Authors: Francois Tadel, Elizabeth Bock, Sylvain Baillet
We can consider that our datasets are clean from any major artifact. We will now proceed to the analysis of the brain signals we recorded in response to the auditory stimulation. There are two major types of processing workflows for MEG/EEG, depending on whether we are dealing with an event-related paradigm or a steady-state/resting-state study.
This tutorial will only focus on the event-related case: series of stimuli are sent to the subject and we have the corresponding triggers marked in the recordings. We will base our analysis on these triggers, import short epochs around each of them and average them. You will find in the advanced tutorials a scenario of MEG resting-state analysis.
Contents
Import in database
Until now, we've only been looking at data that was read from continuous files. The raw file viewer provides rapid access to the recordings, but many operations can only be applied to short segments of recordings that have been imported in the database. We will refer to these as "epochs" or "trials".
Right-click on Run#01 > Import in database.
Set the import options as described below:
Time window: Time range of interest. We are interested in all the stimulations, so do not change this parameter. The default values always represent the entire file.
Split: Useful to import continuous recordings without events, as successive chunks of the same duration. We do not need this option here.
Events selection: Check the "Use events" option, and select both "standard" and "deviant".
The number in parentheses represents the number of occurrences of each event in the selected time window (it changes if you modify the time definition at the top of the window).
Epoch time: Time segment that is extracted around each event marker. Set it to [-100,+500]ms.
Apply SSP/ICA projectors: Use the active projectors calculated during the previous pre-processing steps. Always check the summary of the projectors that are selected.
Here there are 2 categories ("cardiac" and "blink") with a total of 3 projectors selected (one in "cardiac" and two in "blink": the blink and the saccade). Keep this option selected.
Remove DC Offset: Check this option and select Time range: [-100, 0] ms. For each epoch, it will:
- Compute the average of each channel over the baseline (pre-stimulus interval: [-100,0]ms)
- Subtract it from the channel at every time instant (full epoch interval: [-100,+500]ms).
- This option removes the baseline value of each sensor. In MEG, the sensors record variations around a somewhat arbitrary level, therefore this operation is always needed, unless it was already applied during one of the pre-processing steps.
- Note that a high-pass filter with a very low frequency (for instance 0.3Hz) can efficiently replace this DC correction. If a high-pass filter has already been applied to the recordings, you may want to unselect this option. A sketch of this baseline correction is shown after this list of options.
Resample recordings: Keep this unchecked.
Create a separate folder for each epoch type: Do not check this option.
- If selected: a new folder is created for each event type ("standard" and "deviant")
- If not selected: all the epochs are saved in a single new folder that has the same name as the initial raw file. This is what we want, because we have two acquisition runs with different channel files (different head positions and different SSP projectors) to import for the same subject. If we selected this option, the "standard" epochs of both runs would be imported in the same folder and would end up sharing the same channel file, which is not correct.
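For reference, here is a minimal Matlab sketch of the baseline correction performed by the "Remove DC offset" option above, for a generic [Nchannels x Ntime] matrix. The variable names F and Time follow the file structure described later in this tutorial; this is only an illustration of the computation, not the actual Brainstorm implementation.
% Sketch of the "Remove DC offset" correction, assuming F is [Nchannels x Ntime]
% and Time is the corresponding time vector in seconds (as in the data_*.mat files)
iBaseline = find((Time >= -0.100) & (Time <= 0));   % Samples of the pre-stimulus baseline [-100,0]ms
baselineMean = mean(F(:, iBaseline), 2);            % Average of each channel over the baseline
F = bsxfun(@minus, F, baselineMean);                % Subtract it from the channel at every time instant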
One new folder appears in Subject01. It contains a channel file and two trial groups.
- The channel file is copied from the continuous file.
- To expand a group of trials and show all the files: double-click on it or click on the "+" next to it.
- The SSP projectors calculated in the previous tutorial were applied on the fly while reading from the continuous file. These epochs are clean from eye blinks and power line contamination.
Note that the trials overlapping with a BAD segment are tagged as bad in the database explorer (marked with a red dot). All the bad trials are ignored in the rest of the analysis, because they are excluded by the Process1 and Process2 tabs (see next tutorial).
Review the individual trials
After reviewing the continuous file with the "columns" view (channels one below the other), it can be useful to also review the imported trials with the "butterfly" view (all the channels superimposed).
- Double-click on the first trial for the "deviant" condition.
Switch to the "butterfly" display mode: in the Record tab, click on the first button in the toolbar.
Right-click on the figure > Navigator > Next data file, or use the keyboard shortcut F3.
This way you can quickly review all the trials to make sure that there is no obvious problem.
Mac users: The keys "Fx" are obtained by holding the "Fn" key simultaneously.
To manually tag a trial as bad, you have three options:
Right-click on the trial file in the database > Reject trial.
Right-click on the figure > Reject trial.
Use the keyboard shortcut Ctrl+B.
To set all the trials back as good in a group: right-click on the trials group > Accept bad trials.
Raster plot
You can also get an overview of the values of one specific sensor over all the trials at once.
Right-click on the group of trials "deviant" > Display as image > MEG.
- You can change the selected sensor with the drop-down menu in the Display tab, or use the up/down arrows on your keyboard after clicking on the figure.
The bad trials are already marked, but if they were not, this view could help you identify them easily.
Run #02
Repeat the same operations for the second dataset:
Right-click on Run#02 > Import in database.
Import events "standard" and "deviant" with the same options.
Epoch length [TODO]
The length of the epochs you should consider depends on:
The experimental design: The minimum duration between two stimuli defines the maximum length you can consider analyzing after the stimulus. This should be long enough to include the entire evoked response of the brain networks you are studying.
The processing pipeline:
In this tutorial, we decided to work with very short epochs (600ms only) so that all the analysis would run on most computers, including personal laptops. For any type of frequency analysis on the recordings, this will be too short. When processing your own recordings, you should probably increase the size of the epochs beyond the segment that you are actually planning to study.
Example of processes that will require longer epochs:
Low-pass filtering [TODO]:
When filtering below f Hz, you may observe edge effects during at least one period (T=1/f) at the beginning and the end of the epoch. For example, if you low-pass filter an epoch at 30Hz, you will have to throw away the first and last 33ms of the epoch. You should consider this when you import the data. High-pass filters should only be applied to the continuous recordings. A worked example of this rule of thumb is shown after this list.
Time-frequency decompositions:
Similar problem: at 2Hz, you will get a poor estimation of the power during the first and last 500ms of your time-frequency maps (1/2Hz=0.5s).
Normalizations that use a baseline from the same epoch (Z-score, ERS/ERD, baseline correction...):
The longer the clean baseline, the better the estimation of the average and standard deviation over this baseline. If your baseline is too short, the quality of your normalization will be low.
If you normalize time-frequency maps or filtered source averages, you have to exclude the bad segments described before and consider an even longer baseline.
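As an illustration of the rule of thumb mentioned in the list above (edge effects extend over roughly one period T=1/f at each end of the epoch), the numbers quoted here can be recomputed as follows. This is only an order-of-magnitude estimate, not an exact formula.
% Order-of-magnitude estimate of the edge effects discussed above
fLow = 30;                                  % Low-pass cutoff (Hz)
fprintf('Low-pass at %d Hz: discard ~%.0f ms at each end of the epoch\n', fLow, 1000/fLow);   % ~33 ms
fTF = 2;                                    % Lowest frequency of a time-frequency decomposition (Hz)
fprintf('Power at %d Hz: unreliable during the first/last %.0f ms\n', fTF, 1000/fTF);         % 500 ms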
On the hard drive
Right-click on any imported epoch > File > View file contents:
Structure of the imported epochs: data_*.mat
F: [Nchannels x Ntime] recordings time series, in Volts.
Std: [Nchannels x Ntime] Standard deviation or standard error, when available (see next tutorial).
Comment: String displayed in the database explorer to represent this file.
ChannelFlag: [Nchannels x 1] One value per channel, 1 means good, -1 means bad.
Time: [1 x Ntime] Time values for each sample recorded in F, in seconds.
DataType: Type of information saved in the F matrix.
Device: Name of the acquisition system used to record this file.
nAvg: For averaged files, number of trials that were used to compute this file.
Events: Time markers available in the file (stimulus triggers or other events)
label: Name of the event group.
color: [r,g,b] Color used to represent the event group, in Matlab format.
epochs: [1 x Nevt] Only ones for imported epochs.
samples: [1 x Nevt] Sample indices of each marker in this group (samples = times * sfreq).
For extended events: [2 x Nevt], first row = start, second row = end.
times: [1 x Nevt] Time in seconds of each marker in this group (times = samples / sfreq).
For extended events: [2 x Nevt], first row = start, second row = end.
reactTimes: Not used anymore.
select: Not used anymore.
History: Operations performed on file since it was imported (menu "View file history").
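To explore this structure outside of the database explorer, you can also load an imported epoch directly in Matlab. The file name below is only an example: the actual names and path inside your Brainstorm database folder will differ.
% Example: inspect the fields of an imported epoch (file name is hypothetical)
DataMat = load('data_deviant_trial001.mat');
size(DataMat.F)                      % [Nchannels x Ntime] recordings, in Volts
DataMat.Time([1, end])               % First and last time samples, in seconds
nnz(DataMat.ChannelFlag == -1)       % Number of bad channels in this epoch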
File history
Right-click on any imported epoch > File > View file history:
List of bad trials
There is no field in the file structure that says if the trial is good or bad.
This information is saved at the level of the folder, in the brainstormstudy.mat file.
Right-click on an imported folder > File > Show in file explorer.
Load the brainstormstudy.mat file into Matlab; the bad trials are listed in the cell array "BadTrials":
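A minimal sketch of this check, assuming the current Matlab folder is the imported folder you just opened in the file explorer:
% List the bad trials of the folder from its brainstormstudy.mat file
StudyMat = load('brainstormstudy.mat');
StudyMat.BadTrials              % Cell array with the file names of the bad trials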
Useful functions
in_bst_data(DataFile, FieldsList): Load an imported epoch into your Matlab workspace.
in_bst(FileName, TimeWindow): Load any Brainstorm data file with the possibility to load only a specific part of the file. "TimeWindow" is an array of time values in seconds: [tStart, tStop].
bst_process('LoadInputFile', FileName, Target, TimeWindow): The highest-level function for reading data files. "Target" is a string with the list of sensor names or types to load.
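Below is a short usage sketch of these three functions, following the signatures listed above. The file path is hypothetical, and the time window ([0, 0.1]s) and sensor type ('MEG') are arbitrary examples.
% Hypothetical example file (relative path inside the Brainstorm database)
DataFile = 'Subject01/S01_AEF_20131218_01_600Hz_notch/data_deviant_trial001.mat';
% Load only some fields of the imported epoch
DataMat = in_bst_data(DataFile, 'F', 'Time', 'ChannelFlag');
% Load only the first 100ms of the file
[sMat, matName] = in_bst(DataFile, [0, 0.100]);
% Load only the MEG sensors, first 100ms
sInput = bst_process('LoadInputFile', DataFile, 'MEG', [0, 0.100]);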