Tutorial 15: Import epochs
Authors: Francois Tadel, Elizabeth Bock, Sylvain Baillet
We can now consider that our datasets are clean of any major artifacts. We will proceed to the analysis of the brain signals recorded in response to the auditory stimulation. There are two major types of processing workflows for MEG/EEG, depending on whether we are dealing with an event-related paradigm or a steady-state/resting-state study.
This tutorial focuses only on the event-related case: series of stimuli are sent to the subject, and the corresponding triggers are marked in the recordings. We will base our analysis on these triggers, import short epochs around each of them, and average them. You will find a scenario of MEG resting-state analysis in the advanced tutorials.
Import in database
Until now, we've only been looking at data that was read from continuous files. The raw file viewer provides rapid access to the recordings, but many operations can only be applied to short segments of recordings that have been imported in the database. We will refer to these as "epochs" or "trials".
Right-click on Run#01 > Import in database.
Set the import options as described below:
Time window: Time range of interest. We are interested in all the stimulations, so do not change this parameter. The default values always represent the entire file.
Split: Useful for importing continuous recordings without events, splitting them into successive chunks of the same duration. We do not need this here.
Events selection: Check the "Use events" option, and select both "standard" and "deviant".
The number in parentheses represents the number of occurrences of each event in the selected time window (it changes if you modify the time definition at the top of the window).
Epoch time: Time segment that is extracted around each event marker. Set it to [-100,+500]ms.
Apply SSP/ICA projectors: Use the active projectors calculated during the previous pre-processing steps. Always check the summary of the projectors that are selected.
Here there are 2 categories ("cardiac" and "blink") with a total of 3 projectors selected: one in "cardiac" and two in "blink" (the blink and the saccade). Keep this option selected.
Remove DC Offset: Check this option and select Time range: [-100, -1.7]ms. For each epoch, it will (see the sketch after this list):
- Compute the average of each channel over the baseline (pre-stimulus interval: [-100,-1.7]ms)
- Subtract it from the channel at every time instant (full epoch interval: [-100,+500]ms).
- This option removes the baseline value of each sensor. In MEG, the sensors record variations around a somewhat arbitrary level, therefore this operation is always needed, unless it was already applied during one of the pre-processing steps.
- Note that a high-pass filter with a very low frequency (for instance 0.3Hz) can efficiently replace this DC correction. If a high-pass filter has already been applied to the recordings, you may want to unselect this option.
Resample recordings: Keep this unchecked.
Create a separate folder for each epoch type: Do not check this option.
- If selected: a new folder is created for each event type ("standard" and "deviant")
- If not selected: all the epochs are saved in a new folder, the same one for all the events, that has the same name as the initial raw file. This is what we want because we have two acquisition runs with different channel files (different head positions and different SSP projectors) to import for the same subject. If we select this option, the "standard" epochs of both runs would be imported in the same folder and would end up sharing the same channel file, which is not correct.
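If you are scripting this step yourself, the following is a minimal Matlab sketch of the epoching and DC offset removal described above. The variables F ([Nchannels x Ntime] continuous recordings), sfreq (sampling frequency in Hz) and eventTimes (event latencies in seconds) are hypothetical placeholders, not actual Brainstorm variables.

epochWin    = [-0.100, 0.500];    % epoch time window, in seconds
baselineWin = [-0.100, -0.0017];  % DC offset window: [-100, -1.7] ms
for iEvt = 1:length(eventTimes)
    % Samples to extract around this event (assumes no file-boundary overflow)
    iSmp  = round((eventTimes(iEvt) + epochWin) * sfreq);
    epoch = F(:, iSmp(1):iSmp(2));
    % Time vector of the epoch, relative to the event, in seconds
    t = (iSmp(1):iSmp(2)) / sfreq - eventTimes(iEvt);
    % Remove DC offset: subtract the baseline mean of each channel
    % (implicit expansion, Matlab >= R2016b)
    iBase = (t >= baselineWin(1)) & (t <= baselineWin(2));
    epoch = epoch - mean(epoch(:, iBase), 2);
end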
One new folder appears in Subject01. It contains a channel file and two trial groups.
- The channel file is copied from the continuous file.
- To expand a group of trials and show all the files: double-click on it or click on the "+" next to it.
- The SSP projectors calculated in the previous tutorial were applied on the fly while reading from the continuous file. These epochs are clean of eye blinks and power line contamination.
Note that trials that overlap with a BAD segment are tagged as bad in the database explorer (marked with a red dot). All the bad trials will be ignored in the rest of the analysis, because they are ignored by the Process1 and Process2 tabs (see next tutorial).
Review the individual trials
After reviewing the continuous file with the "columns" view (channels one below the other), it can be useful to also review the imported trials with the "butterfly" view (all the channels superimposed).
- Double-click on the first trial for the "deviant" condition.
Switch to the "butterfly" display mode: in the Record tab, click on the first button in the toolbar.
Right-click on the figure > Navigator > Next data file, or use the keyboard shortcut F3.
This way you can quickly review all the trials to make sure that there is no obvious problem.
Mac users: The keys "Fx" are obtained by holding the "Fn" key simultaneously.
To manually tag a trial as bad, you have three options:
Right-click on the trial file in the database > Reject trial.
Right-click on the figure > Reject trial.
Use the keyboard shortcut Ctrl+B.
To set all the trials back as good in a group: right-click on the trials group > Accept bad trials.
Raster plot
You can also get an overview of the values of one specific sensor over all the trials at once.
Right-click on the group of trials "deviant" > Display as image > MEG.
- You can change the selected sensor with the drop-down menu in the Display tab, or use the up/down arrows on your keyboard after clicking on the figure.
The bad trials are already marked, but if they were not, this view could help you identify them easily.
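For illustration, here is a minimal Matlab sketch of this kind of display (not the Brainstorm implementation), assuming a hypothetical matrix trials of size [Ntrials x Ntime] with the values of one sensor across all epochs, and a time vector t in seconds:

figure;
imagesc(t * 1000, 1:size(trials,1), trials);   % one row per trial
axis xy;              % show the first trial at the bottom
xlabel('Time (ms)');
ylabel('Trial #');
colorbar;
% Bad trials stand out as horizontal stripes with outlier amplitudes.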
Run #02
Repeat the same operations for the second dataset:
Right-click on Run#02 > Import in database.
Import events "standard" and "deviant" with the same options.
Epoch length
We imported epochs of 600ms (100ms baseline + 500ms post-stimulus) but did not justify this choice.
The length of the epochs you import should be chosen very carefully: if you realize later that your epochs are too short or too long, you will have to restart your analysis from this point.
The epoch length to consider depends on:
The experimental design
- The minimum duration between two stimuli defines the maximum length you can consider analyzing after the stimulus. You should design your experiment so that it always includes the entire evoked response, plus an additional segment that you can use as a baseline for the following epoch.
- In this study, the inter-stimulus interval (ISI) is random between 0.7s and 1.7s. The minimum ISI (700ms) is long enough to include the entire auditory evoked response, but not the button press that follows a deviant tone. In some cases (late subject response and short ISI), the following stimulation occurs while the brain is still processing the button press. The baseline of some epochs may contain motor and somatosensory components.
- For data processing, it is always better to have a longer ISI, but that also means increasing the duration of the experiment or decreasing the number of repetitions, which leads to other problems. The trade-off between data quality and recording time in this experiment is acceptable: very few trials are actually contaminated by the motor response to the previous trial. We will ignore this problem in the following tutorials, but you could decide to reject these few trials in your own analysis.
Here we consider only a short baseline (100ms) to avoid including too much motor activity.
Since we will only study the auditory response, 500ms post-stimulus is enough.
The processing pipeline
You may have to artificially extend the epochs of interest for technical reasons. Most filters cause edge effects, i.e. unreliable segments of data at the beginning and the end of the signal. When applied to short epochs, they might destroy all the data of interest.
To avoid this, you can add a few hundred milliseconds before and after your epoch of interest. It doesn't matter if they overlap with the previous or the next epoch. After running the operations that require longer signals, you can cut your epochs back to the desired length. Examples:
Time-frequency (Morlet wavelets):
When estimating the power at frequency f Hz, you get incorrect values for at least one period (T=1/f) at the beginning and the end of the signal. For example, at 2Hz you need to discard the first and last 500ms of your time-frequency maps (1/2Hz=0.5s; see the worked example at the end of this section).
Low-pass filtering:
With any filtering operation there will always be a transient effect at the beginning of the filtered data. After filtering, you need to discard the time windows corresponding to these effects. Their duration depends on the order of the filter: this is documented in the tutorial Power spectrum and frequency filters.
Hilbert transform:
Same considerations as for the low-pass filter. This process starts by filtering the signals in various frequency bands, using the same function as the band-pass and low-pass filters.
Normalizations:
The normalization procedures that use a baseline from the same epoch (Z-score, ERS/ERD, baseline correction) usually work better with longer baselines. The longer the clean baseline, the better the estimation of the average and standard deviation over this baseline. If your baseline is too short, the quality of your normalization will be low.
If you normalize time-frequency maps or filtered source averages, you have to additionally exclude the edge effects from the baseline, and consider an even longer baseline.
In this tutorial, we decided to work with very short epochs (600ms only) so that all the analysis would run on most computers, including personal laptops. For any type of frequency analysis on the recordings, this will be too short. When processing your own recordings, you should increase the size of the epochs beyond the segment that you are actually planning to study.
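As a rough worked example of the one-period rule mentioned above, in Matlab:

freqs   = [2, 4, 8, 12];       % frequencies of interest, in Hz
edgeDur = 1 ./ freqs;          % seconds to discard at EACH end (T = 1/f)
% Minimum epoch length needed to keep 0.5s of reliable time-frequency signal:
minLength = 0.5 + 2 * edgeDur; % at 2Hz: 1.5s, much longer than our 600ms epochs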
On the hard drive
Right-click on any imported epoch > File > View file contents:
Structure of the imported epochs: data_*.mat
F: [Nchannels x Ntime] recordings time series, in Volts.
Std: [Nchannels x Ntime] Standard deviation or standard error, when available (see next tutorial).
Comment: String displayed in the database explorer to represent this file.
ChannelFlag: [Nchannels x 1] One value per channel, 1 means good, -1 means bad.
Time: [1 x Ntime] Time values for each sample recorded in F, in seconds.
DataType: Type of information saved in the F matrix.
Device: Name of the acquisition system used to record this file.
nAvg: For averaged files, number of trials that were used to compute this file.
Events: Time markers available in the file (stimulus triggers or other events)
label: Name of the event group.
color: [r,g,b] Color used to represent the event group, in Matlab format.
epochs: [1 x Nevt] Only ones for imported epochs.
samples: [1 x Nevt] Sample indices of each marker in this group (samples = times * sfreq).
For extended events: [2 x Nevt], first row = start, second row = end.
times: [1 x Nevt] Time in seconds of each marker in this group (times = samples / sfreq).
For extended events: [2 x Nevt], first row = start, second row = end.
reactTimes: Not used anymore.
select: Not used anymore.
History: Operations performed on file since it was imported (menu "View file history").
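To explore these fields from the Matlab command window, here is a minimal sketch. The file name is a hypothetical placeholder for any data_*.mat file in your database folder, and the last two lines assume the file contains at least one event group:

DataMat = load('data_deviant_trial001.mat');  % hypothetical file name
size(DataMat.F)                     % [Nchannels x Ntime], values in Volts
DataMat.Time([1, end])              % epoch start and end, in seconds
% Sampling frequency, and the samples/times relationship described above:
sfreq = 1 / (DataMat.Time(2) - DataMat.Time(1));
isequal(DataMat.Events(1).samples, round(DataMat.Events(1).times * sfreq))  % true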
File history
Right-click on any imported epoch > File > View file history:
List of bad trials
There is no field in the file structure that says if the trial is good or bad.
This information is saved at the level of the folder, in the brainstormstudy.mat file.
Right-click on an imported folder > File > Show in file explorer.
Load the brainstormstudy.mat file into Matlab; the bad trials are listed in the cell array "BadTrials":
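A minimal Matlab sketch, assuming folderPath is a hypothetical variable holding the full path to an imported folder:

StudyMat = load(fullfile(folderPath, 'brainstormstudy.mat'));
StudyMat.BadTrials   % cell array with the file names of the bad trials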
Useful functions
in_bst_data(DataFile, FieldsList): Read an imported epoch.
in_bst(FileName, TimeWindow): Read any Brainstorm data file with the possibility to load only a specific part of the file. "TimeWindow" is a range of time values in seconds: [tStart, tStop].
bst_process('LoadInputFile', FileName, Target, TimeWindow): The most high-level function for reading data files. "Target" is a string with the list of sensor names or types to load.
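A short usage sketch based on the signatures documented above. The file path is a hypothetical placeholder, and you should check the headers of these functions for the exact list of output variables:

DataFile = 'Subject01/run01/data_deviant_trial001.mat';   % placeholder path
% Read only the fields you need from an imported epoch:
DataMat = in_bst_data(DataFile, {'F', 'Time', 'ChannelFlag'});
% Read any Brainstorm data file, restricted to 0-200ms post-stimulus:
sMat = in_bst(DataFile, [0, 0.200]);
% High-level reader: load only the MEG sensors in the same time window:
sInput = bst_process('LoadInputFile', DataFile, 'MEG', [0, 0.200]);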