= MEG visual tutorial: Single subject (BIDS) =
'''[WORK IN PROGRESS: THIS TUTORIAL IS NOT READY FOR PUBLIC USE]'''

''Authors: Francois Tadel, Elizabeth Bock.''

The aim of this tutorial is to reproduce in the Brainstorm environment the analysis described in the SPM tutorial "[[ftp://ftp.mrc-cbu.cam.ac.uk/personal/rik.henson/wakemandg_hensonrn/Publications/SPM12_manual_chapter.pdf|Multimodal, Multisubject data fusion]]". The data processed here consists of simultaneous MEG/EEG recordings from 16 participants performing a simple visual recognition task with presentations of famous, unfamiliar and scrambled faces. The analysis is split into two tutorial pages: the present tutorial describes the detailed analysis of one single subject; the second tutorial describes batch processing and [[Tutorials/VisualGroup|group analysis of all 16 participants]]. Note that the operations used here are not detailed; the goal of this tutorial is not to introduce Brainstorm to new users. For in-depth explanations of the interface and theoretical foundations, please refer to the [[http://neuroimage.usc.edu/brainstorm/Tutorials#Get_started|introduction tutorials]].
== License ==
This dataset was obtained from the OpenfMRI project (http://www.openfmri.org), accession #ds117. It is made available under the Creative Commons Attribution 4.0 International Public License. Please cite the following reference if you use these data: <<BR>>Wakeman DG, Henson RN, [[http://www.nature.com/articles/sdata20151|A multi-subject, multi-modal human neuroimaging dataset]], Scientific Data (2015)

For any questions regarding the data, please contact: rik.henson@mrc-cbu.cam.ac.uk

== Presentation of the experiment ==
==== Experiment ====
 * 16 subjects (the original version of the dataset included 19 subjects, but 3 were excluded from the group analysis for [[http://neuroimage.usc.edu/brainstorm/Tutorials/VisualSingleOrig#Bad_subjects|various reasons]])
 * 6 acquisition runs of approximately 10 minutes per subject
 * Presentation of series of images: familiar faces, unfamiliar faces, phase-scrambled faces
 * Participants had to judge the left-right symmetry of each stimulus
 * Total of nearly 300 trials for each of the 3 conditions
==== MEG acquisition ====
 * Acquisition at 1100Hz with an Elekta-Neuromag VectorView system (simultaneous MEG+EEG).
 * Recorded channels (404):
  * 102 magnetometers
  * 204 planar gradiometers
  * 70 EEG electrodes recorded with a nose reference.
 * MEG data were "cleaned" using Signal-Space Separation as implemented in MaxFilter 2.2.
 * A Polhemus device was used to digitize three fiducial points and a large number of other points across the scalp, which can be used to coregister the M/EEG data with the structural MRI image.
 * Stimulation triggers: The triggers related to the visual presentation are saved in the STI101 channel, with the following event codes (bit 3 = face, bit 4 = unfamiliar, bit 5 = scrambled; see the decoding sketch at the end of this section):
  * Famous faces: 5 (00101), 6 (00110), 7 (00111)
  * Unfamiliar faces: 13 (01101), 14 (01110), 15 (01111)
  * Scrambled images: 17 (10001), 18 (10010), 19 (10011)
 * Delay between the trigger in STI101 and the actual presentation of the stimulus: '''34.5ms'''
 * The data distribution includes MEG noise recordings acquired around the dates of the experiment, processed with MaxFilter 2.2 in the same way as the experimental data.
==== Subject anatomy ====
 * MRI data acquired on a 3T Siemens TIM Trio: 1x1x1mm T1-weighted structural MRI.
 * The face was removed from the structural images for anonymization purposes.
 * Processed with FreeSurfer 5.3.
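The bit encoding of the trigger values above is easy to check in Matlab. This is a small standalone sketch (not part of the Brainstorm distribution) that decodes each trigger value into its condition:

{{{
% Decode the STI101 trigger values listed above (bit 1 = least significant bit)
for code = [5 6 7 13 14 15 17 18 19]
    isFace       = bitget(code, 3);   % bit 3: face
    isUnfamiliar = bitget(code, 4);   % bit 4: unfamiliar face
    isScrambled  = bitget(code, 5);   % bit 5: scrambled image
    if isScrambled
        label = 'Scrambled';
    elseif isFace && isUnfamiliar
        label = 'Unfamiliar';
    elseif isFace
        label = 'Famous';
    else
        label = 'Other';
    end
    fprintf('%2d -> %s\n', code, label);
end
}}}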
== Download and installation [TODO] ==
 * First, make sure you have enough space on your hard drive, at least '''350Gb''':
  * Raw files: '''100Gb'''
  * Processed files: '''250Gb'''
 * The data is hosted on the OpenfMRI website: https://openfmri.org/dataset/ds000117/
 * Download the following files for revision 1.0.0 (approximately 70Gb):
  * [[https://s3.amazonaws.com/openneuro/ds000117/ds000117_R1.0.0/compressed/ds000117_R1.0.0_derivatives_sub01-04.zip|Derivatives for subjects 01-04 (16.6 GB)]]
  * [[https://s3.amazonaws.com/openneuro/ds000117/ds000117_R1.0.0/compressed/ds000117_R1.0.0_derivatives_sub05-08.zip|Derivatives for subjects 05-08 (16.1 GB)]]
  * [[https://s3.amazonaws.com/openneuro/ds000117/ds000117_R1.0.0/compressed/ds000117_R1.0.0_derivatives_sub09-12.zip|Derivatives for subjects 09-12 (16.6 GB)]]
  * [[https://s3.amazonaws.com/openneuro/ds000117/ds000117_R1.0.0/compressed/ds000117_R1.0.0_derivatives_sub13-16.zip|Derivatives for subjects 13-16 (16.5 GB)]]
  * [[https://s3.amazonaws.com/openneuro/ds000117/ds000117_R1.0.0/compressed/ds000117_R1.0.0_metadata.zip|Metadata (235.4 KB)]]
  * FreeSurfer segmentation: '''NOT AVAILABLE YET [TODO]'''
  * Empty room recordings: '''NOT AVAILABLE YET [TODO]'''
 * Unzip all the .zip files in the same folder.<<BR>>'''Reminder''': Do not save the downloaded files in the Brainstorm folders (program or database folders).
 * '''MaxFilter/tSSS''': The MEG recordings from this study were acquired with an Elekta MEG system and need to be processed with the SSS or tSSS cleaning algorithms. Brainstorm currently does not offer any free alternative to Elekta's MaxFilter, therefore in this tutorial we will import the recordings already processed with MaxFilter's tSSS, available in the "derivatives" folder of the BIDS architecture. To save disk space, we will not download the raw MEG recordings. This means we will not get any of the additional files available in the BIDS structure (headshape, events, channels), but it doesn't matter because all this information is directly available in the .fif files.
 * Start Brainstorm (Matlab scripts or stand-alone version). For help, see the [[Installation]] page. If you prefer working from a script, see the protocol-creation sketch after this list.
 * Select the menu File > Create new protocol. Name it "'''TutorialVisual'''" and select the options:
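For scripted processing, the protocol creation looks like this. A minimal sketch using calls found in the distributed Brainstorm tutorial scripts:

{{{
% Start Brainstorm without the GUI if it is not already running
if ~brainstorm('status')
    brainstorm nogui
end
% Create the protocol: UseDefaultAnat=0 (individual anatomy),
% UseDefaultChannel=0 (one channel file per acquisition run)
gui_brainstorm('CreateProtocol', 'TutorialVisual', 0, 0);
}}}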
This dataset is formatted following the [[https://www.biorxiv.org/content/early/2017/08/08/172684|BIDS-MEG specifications]], therefore we could import all the relevant information (MRI, FreeSurfer segmentation, MEG+EEG recordings) in just one click, with the menu '''File > Load protocol''' > '''Import BIDS dataset''', as described in the online tutorial [[http://neuroimage.usc.edu/brainstorm/Tutorials/RestingOmega#BIDS_specifications|MEG resting state & OMEGA database]]. However, because your own data might not be organized following the BIDS standards, in this tutorial we prefer to illustrate all the detailed import steps rather than the BIDS shortcut. This page explains how to import and process the first run of '''subject #01''' only. All the other files will have to be processed in the same way.
 * Right-click on the TutorialVisual folder > New subject > '''sub-01'''
 * Leave the default options you defined for the protocol.
 * Right-click on the subject node > '''Import anatomy folder''':
 * Select the folder: '''derivatives/freesurfer/sub-01'''
 * When asked to select the anatomical fiducials, click on "'''Compute MNI transformation'''". This registers the MRI volume to an MNI template with an affine transformation, using SPM functions embedded in Brainstorm (spm_maff8.m). It also places default NAS/LPA/RPA fiducial points in the MRI, based on MNI coordinates. We will use head points for the MEG-MRI coregistration, therefore we don't need accurate anatomical positions here.
 * Note that if you don't have a good digitized head shape for the subject, or if the final MEG-MRI registration doesn't look good with this head shape, you should not skip this step: mark accurate NAS/LPA/RPA fiducial points in the MRI, using the same anatomical convention that was used during the MRI acquisition.
 * Then click on [Save] to start the import.
 * At the end of the process, make sure that the file "cortex_15000V" is selected (downsampled pial surface, that will be used for the source estimation). If it is not, double-click on it to select it as the default cortex surface. Do not worry about the big holes in the head surface: parts of the MRI were removed deliberately for anonymization purposes.<<BR>><<BR>> {{attachment:anatomy_import.gif||width="613",height="384"}}
 * All the anatomical atlases [[Tutorials/LabelFreeSurfer|generated by FreeSurfer]] were automatically imported: the cortical atlases (Desikan-Killiany, Mindboggle, Destrieux, Brodmann) and the sub-cortical regions (ASEG atlas). <<BR>><<BR>> {{attachment:anatomy_atlas.gif||width="550",height="211"}}
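The same anatomy import can be done from a script. A sketch, assuming the process name and options used in the Brainstorm tutorial scripts (re-generate the exact call with the pipeline editor menu "Generate .m script" if anything differs in your version); BidsDir is a hypothetical variable pointing to your download folder:

{{{
BidsDir = '/path/to/ds000117';   % hypothetical location of the downloaded dataset
% Import the FreeSurfer segmentation and downsample the cortex to 15000 vertices
bst_process('CallProcess', 'process_import_anatomy', [], [], ...
    'subjectname', 'sub-01', ...
    'mrifile',     {fullfile(BidsDir, 'derivatives', 'freesurfer', 'sub-01'), 'FreeSurfer'}, ...
    'nvertices',   15000);
}}}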
== Access the recordings ==
We need to attach the continuous .fif files containing the recordings to the database.
 * Right-click on the subject folder > '''Review raw file'''.
 * Select the file format: "'''MEG/EEG: Neuromag FIFF (*.fif)'''"
 * Go to the folder: '''derivatives/meg-derivatives/sub-01/ses-meg/meg'''
 * Select file: '''sub-01_ses-meg_task-facerecognition_run-01_proc-tsss_meg.fif''' <<BR>><<BR>> {{attachment:review_raw.gif||width="583",height="205"}}
 * Events: '''Ignore'''. We will read the stimulus triggers later.<<BR>><<BR>> {{attachment:review_ignore.gif||width="330",height="186"}}
 * Refine registration now? '''NO''' <<BR>>The head points available in the FIF files contain all the points digitized during the MEG acquisition, including the ones corresponding to the parts of the face that have been removed from the MRI. If we ran the fitting algorithm now, the points around the nose would not match any close points on the head surface, leading to a wrong result. We will first remove the face points and then run the registration manually.

=== Channel classification ===
A few non-EEG channels are mixed in with the EEG channels; we need to change this before applying any operation on the EEG channels.
 * Right-click on the channel file > '''Edit channel file'''. Double-click on a cell to edit it.
 * Change the type of '''EEG062''' to '''EOG''' (electrooculogram).
 * Change the type of '''EEG063''' to '''ECG''' (electrocardiogram).
 * Change the type of '''EEG061''' and '''EEG064''' to '''NOSIG''' (or any other non-informative type). Close the window and save the modifications. <<BR>><<BR>> {{attachment:channel_edit.gif||width="561",height="252"}}

=== MRI registration ===
At this point, the MEG/MRI registration is based only on the three anatomical landmarks NAS/LPA/RPA, which are not even accurately set (we used default MNI positions). All the MRI scans were anonymized (defaced), so all the head points below the nasion cannot be used. We will refine this registration using the additional head points that were digitized above the nasion.
 * Right-click on the channel file > '''Digitized head points > Remove points below nasion'''. <<BR>>(45 points removed, 92 head points left)<<BR>><<BR>> {{attachment:channel_remove.gif||width="329",height="194"}}
 * Right-click on the channel file > '''MRI registration > Refine using head points'''.<<BR>><<BR>> {{attachment:channel_refine.gif||width="294",height="173"}}
 * MEG/MRI registration, before (left) and after (right) this automatic registration procedure: <<BR>><<BR>> {{attachment:registration.gif||width="236",height="209"}} {{attachment:registration_final.gif||width="237",height="208"}}
 * Right-click on the channel file > '''MRI registration > EEG: Edit'''.<<BR>>Click on ['''Project electrodes on surface'''], then close the figure to save the modifications. These file-linking and registration steps can also be scripted, as shown in the sketch below.<<BR>><<BR>> {{attachment:channel_project.gif||width="477",height="207"}}
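A sketch of the same file-linking and registration steps from a script. The process names and options below follow the distributed tutorial script (tutorial_visual_single.m); treat them as assumptions and verify them with the script generator:

{{{
RawFile = fullfile(BidsDir, 'derivatives', 'meg-derivatives', 'sub-01', ...
    'ses-meg', 'meg', 'sub-01_ses-meg_task-facerecognition_run-01_proc-tsss_meg.fif');
% Link the continuous file to the database (no automatic registration yet)
sFileRaw = bst_process('CallProcess', 'process_import_data_raw', [], [], ...
    'subjectname',    'sub-01', ...
    'datafile',       {RawFile, 'FIF'}, ...
    'channelreplace', 1, ...
    'channelalign',   0);
% Fix the types of the non-EEG channels
bst_process('CallProcess', 'process_channel_settype', sFileRaw, [], ...
    'sensortypes', 'EEG062', 'newtype', 'EOG');
bst_process('CallProcess', 'process_channel_settype', sFileRaw, [], ...
    'sensortypes', 'EEG063', 'newtype', 'ECG');
bst_process('CallProcess', 'process_channel_settype', sFileRaw, [], ...
    'sensortypes', 'EEG061, EEG064', 'newtype', 'NOSIG');
% Remove the digitized head points below the nasion, then fit them to the scalp
bst_process('CallProcess', 'process_headpoints_remove', sFileRaw, [], 'zlimit', 0);
bst_process('CallProcess', 'process_headpoints_refine', sFileRaw, []);
}}}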
=== Read stimulus triggers ===
We need to read the stimulus markers from the STI channels. The following tasks can be done interactively with the menus in the Record tab, as in the introduction tutorials. We illustrate here how to do this with the pipeline editor instead, as it will be easier to batch for all the runs and all the subjects (an equivalent script sketch follows this list).
 * In Process1, select the "Link to raw file", click on [Run].
 * Select process '''Events > Read from channel''', Channel: '''STI101''', Detection mode: '''Bit'''.<<BR>>Do not execute the process yet; we will add other processes to classify the markers.<<BR>><<BR>> {{attachment:raw_read_events.gif||width="499",height="290"}}
 * We want to create three categories of events, based on their numerical codes:
  * '''Famous faces''': 5 (00101), 6 (00110), 7 (00111) => Bit 3 only
  * '''Unfamiliar faces''': 13 (01101), 14 (01110), 15 (01111) => Bits 3 and 4
  * '''Scrambled images''': 17 (10001), 18 (10010), 19 (10011) => Bit 5 only
 * We will start by creating the category "Unfamiliar" (combination of events "3" and "4") and removing the initial events. Then we just have to rename the remaining "3" to "Famous", and all the "5" to "Scrambled".
 * Add process '''Events > Group by name''': "'''Unfamiliar=3,4'''", Delay=0, '''Delete original events'''
 * Add process '''Events > Rename event''': 3 => Famous
 * Add process '''Events > Rename event''': 5 => Scrambled <<BR>><<BR>> {{attachment:events_merge.gif||width="532",height="402"}}
 * Add process '''Events > Add time offset''' to correct for the presentation delays:<<BR>>Event names: "'''Famous, Unfamiliar, Scrambled'''", Time offset = '''34.5ms'''<<BR>><<BR>> {{attachment:events_offset.gif||width="348",height="355"}}
 * Finally, run the pipeline. Double-click on the recordings to make sure the labels were detected correctly. You can delete the unwanted events that are left in the recordings (1,2,9,13): <<BR>><<BR>> {{attachment:events_display.gif||width="595",height="194"}}
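The equivalent script sketch for this event pipeline (offsets are expressed in seconds; option names as found in generated scripts, to be verified with the script generator):

{{{
% Read the trigger values from the stimulus channel, one event per value
sFileRaw = bst_process('CallProcess', 'process_evt_read', sFileRaw, [], ...
    'stimchan',  'STI101', ...
    'trackmode', 2, ...            % assumption: 2 = Bit detection mode
    'zero',      0);
% Merge events "3" and "4" into the category "Unfamiliar"
bst_process('CallProcess', 'process_evt_groupname', sFileRaw, [], ...
    'combine', 'Unfamiliar=3,4', 'dt', 0, 'delete', 1);
% Rename the remaining categories
bst_process('CallProcess', 'process_evt_rename', sFileRaw, [], 'src', '3', 'dest', 'Famous');
bst_process('CallProcess', 'process_evt_rename', sFileRaw, [], 'src', '5', 'dest', 'Scrambled');
% Compensate for the 34.5ms presentation delay
bst_process('CallProcess', 'process_evt_timeoffset', sFileRaw, [], ...
    'info',      '', ...
    'eventname', 'Famous, Unfamiliar, Scrambled', ...
    'offset',    0.0345);
}}}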
* Keep the "Link to raw file" in Process1. * Run process '''Frequency > Power spectrum density (Welch)''' with the options illustrated below. <<BR>><<BR>> {{attachment:psd_process.gif||width="669",height="314"}} * Right-click on the PSD file > Power spectrum. <<BR>><<BR>> {{attachment:psd_plot.gif||width="408",height="235"}} * The MEG sensors look awful, because of one small segment of data located around 248s. Open the MEG recordings and scroll to 248s (just before the first Unfamiliar event). <<BR>><<BR>> {{attachment:psd_error.gif||width="527",height="183"}} * In these recordings, the continuous head tracking was activated, but it starts only at the time of the stimulation (248s) while the acquisition of the data starts 20s before (226s). The first 20s do not have head localization coils (HPI) coils activity and are not corrected by MaxFilter. After 248s, the HPI coils are on, and MaxFilter filters them out. The transition between the two states is not smooth and creates important distortions in the spectral domain. For a proper evaluation of the recordings, we should '''compute the PSD only after the HPI coils are turned on'''. * Run process '''Events > Detect cHPI activity (Elekta)'''. This detects the changes in the cHPI activity from channel STI201 and marks all the data without head localization as bad. <<BR>><<BR>> {{attachment:psd_badsegment.gif||width="503",height="234"}} * Re-run the process '''Frequency > Power spectrum density (Welch)'''. All the bad segments are excluded from the computation, therefore the PSD is now estimated only with the data '''after 248s'''. <<BR>><<BR>> {{attachment:psd_fix.gif||width="668",height="283"}} * Observations: * Three groups of sensors, from top to bottom: EEG, MEG gradiometers, MEG magnetometers. * Power lines: '''50 '''Hz and harmonics * Alpha peak around 10 Hz * Artifacts due to Elekta electronics (HPI coils): '''293'''Hz, '''307'''Hz, '''314'''Hz, '''321'''Hz, '''328'''Hz. * Peak from unknown source at '''103.4Hz''' in the MEG only. * Suspected bad EEG channels: '''EEG016''' * Close all the windows. === Remove line noise === * Keep the "Link to raw file" in Process1. * Select process '''Pre-process > Notch filter''' to remove the line noise (50-200Hz).<<BR>>Add the process '''Frequency > Power spectrum density (Welch)'''. <<BR>><<BR>> {{attachment:notch_process.gif||width="562",height="262"}} * Double-click on the PSD for the new continuous file to evaluate the quality of the correction. <<BR>><<BR>> {{attachment:notch_result.gif||width="566",height="215"}} * Close all the windows (use the [X] button at the top-right corner of the Brainstorm window). === EEG reference and bad channels === * Right-click on link to the processed file ("Raw | notch(50Hz ...") > '''EEG > Display time series'''. * Select channel '''EEG016''' and mark it as '''bad''' (using the popup menu or pressing the Delete key). <<BR>><<BR>> {{attachment:channel_bad.gif||width="587",height="214"}} * In the Record tab, menu '''Artifacts > Re-reference EEG''' > "AVERAGE". <<BR>><<BR>> {{attachment:channel_ref.gif||width="535",height="270"}} * At the end, the window "select active projectors" is open to show the new re-referencing projector. Just close this window. To get it back, use the menu Artifacts > Select active projectors. == Artifact detection == === Heartbeats: Detection === * Empty the Process1 list (right-click > Clear list). * Drag and drop the continuous processed file ("Raw | notch(50Hz...)") to the Process1 list. 
== Artifact detection ==
=== Heartbeats: Detection ===
 * Empty the Process1 list (right-click > Clear list).
 * Drag and drop the continuous processed file ("Raw | notch(50Hz...)") to the Process1 list.
 * Run process '''Events > Detect heartbeats''': Channel name='''EEG063''', All file, Event name=cardiac <<BR>><<BR>> {{attachment:detect_cardiac.gif||width="310",height="230"}}

=== Eye blinks: Detection ===
 * In many of the other tutorials, we detect the blinks and remove them with SSP. In this experiment, we are particularly interested in the subject's response to seeing the stimulus. Therefore we will exclude from the analysis all the recordings contaminated with blinks or other eye movements.
 * Run process '''Artifacts > Detect events above threshold''': <<BR>>Event name='''blink_BAD''', Channel='''EEG062''', All file, Maximum threshold='''100''', Threshold units='''uV''', Filter='''[0.30,20.00]Hz''', '''Use absolute value of signal'''.<<BR>><<BR>> {{attachment:detect_blinks.gif||width="285",height="455"}}
 * Inspect visually the two new categories of events: cardiac and blink_BAD.<<BR>><<BR>> {{attachment:detect_display.gif||width="655",height="237"}}
 * Close all the windows (using the [X] button).

=== Heartbeats: Correction with SSP ===
 * Keep the "Raw | notch" file selected in Process1.
 * Select process '''Artifacts > SSP: Heartbeats''' > Sensor type: '''MEG MAG'''
 * Add process '''Artifacts > SSP: Heartbeats''' > Sensor type: '''MEG GRAD'''. Run the execution. <<BR>><<BR>> {{attachment:ssp_ecg_process.gif||width="293",height="225"}}
 * Double-click on the continuous file to show all the MEG sensors. <<BR>>In the Record tab, select sensors "Left-temporal".
 * Menu Artifacts > Select active projectors.
 * In category '''cardiac/MEG MAG''': Select '''component #1''' and view its topography.
 * In category '''cardiac/MEG GRAD''': Select '''component #1''' and view its topography. <<BR>><<BR>> {{attachment:ssp_ecg_topo.gif||width="604",height="175"}}
 * Make sure that selecting the two components removes the cardiac artifact. Then click '''[Save]'''.

=== Additional bad segments ===
 * Process1 > Run: Select the process "'''Events > Detect other artifacts'''". This should be done separately for MEG and EEG to avoid confusion about which sensors are involved in the artifact. <<BR>><<BR>> {{attachment:detect_other.gif||width="486",height="253"}}
 * Display the MEG sensors. Review the segments that were tagged as artifact, determine whether each event represents an artifact, and then mark the time of the artifact as BAD. This can be done by selecting the time window around the artifact, then right-click > '''Reject time segment'''. Note that this detection process marks 1-second segments but the artifact can be shorter.<<BR>><<BR>> {{attachment:artifacts_mark_bad.png||width="640"}}
 * Once all the events in the two categories are reviewed and bad segments are marked, the two categories (1-7Hz and 40-240Hz) can be deleted.
 * Repeat this detection and review for the EEG.

=== SQUID jumps ===
MEG signals recorded with Elekta-Neuromag systems frequently contain SQUID jumps ([[http://neuroimage.usc.edu/brainstorm/Tutorials/BadSegments?highlight=(squid)#Elekta-Neuromag_SQUID_jumps|more information]]). These sharp steps followed by a change of baseline value are easy to identify visually but more complicated to detect automatically. The process "Detect other artifacts" usually detects most of them in the category "1-7Hz". If you observe that some are skipped, you can try re-running it with a higher sensitivity. It is important to review all the sensors over the entire duration of each run to be sure these events are marked as bad segments.
 . {{attachment:detect_jumps.gif||width="547",height="214"}}
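The detection and SSP steps of this section can be scripted the same way. A sketch; the option names for the threshold detection are assumptions copied from memory of generated scripts, so verify them before relying on this:

{{{
% Detect heartbeats from the ECG channel
bst_process('CallProcess', 'process_evt_detect_ecg', sFileNotch, [], ...
    'channelname', 'EEG063', ...
    'timewindow',  [], ...
    'eventname',   'cardiac');
% Mark blinks as bad segments: amplitude threshold on the EOG channel
bst_process('CallProcess', 'process_evt_detect_threshold', sFileNotch, [], ...
    'eventname',    'blink_BAD', ...
    'channelname',  'EEG062', ...
    'timewindow',   [], ...
    'thresholdMAX', 100, ...
    'units',        3, ...          % assumption: 3 = microV in the options panel
    'bandpass',     [0.3, 20], ...
    'isAbsolute',   1, ...
    'isDCremove',   0);
% Cardiac SSP, computed separately for magnetometers and gradiometers
bst_process('CallProcess', 'process_ssp_ecg', sFileNotch, [], ...
    'eventname', 'cardiac', 'sensortypes', 'MEG MAG', 'usessp', 1, 'select', 1);
bst_process('CallProcess', 'process_ssp_ecg', sFileNotch, [], ...
    'eventname', 'cardiac', 'sensortypes', 'MEG GRAD', 'usessp', 1, 'select', 1);
}}}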
=== Import epochs ===
 * Keep the "Raw | notch" file selected in Process1.
 * Select process: '''Import recordings > Import MEG/EEG: Events''' (do not run immediately)<<BR>>Event names "Famous, Unfamiliar, Scrambled", All file, Epoch time=[-500,1200]ms
 * Add process: '''Pre-process > Remove DC offset''': Baseline=[-500,-0.9]ms. Run the execution.<<BR>><<BR>> {{attachment:import_epochs.gif||width="647",height="399"}}

=== Average by run ===
 * In Process1, select all the imported trials.
 * Run process: '''Average > Average files''': '''By trial group (folder average)''' (see the script sketch below) <<BR>><<BR>> {{attachment:average_process.gif||width="478",height="504"}}

=== Review EEG ERP ===
 * EEG evoked responses (famous, scrambled, unfamiliar): <<BR>><<BR>> {{attachment:average_topo.gif||width="684",height="392"}}
 * Open the [[Tutorials/ChannelClusters|Cluster tab]] and create a cluster with the channel '''EEG065''' (button [NEW IND]).<<BR>><<BR>> {{attachment:cluster_create.gif||width="569",height="166"}}
 * Select the cluster, select the three average files, right-click > '''Clusters time series''' (Overlay:Files). <<BR>><<BR>> {{attachment:cluster_display.gif||width="658",height="220"}}
 * Basic observations for EEG065 (right parieto-occipital electrode):
  * Around 170ms (N170): greater negative deflection for Famous than Scrambled faces.
  * After 250ms: difference between Famous and Unfamiliar faces.
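A script sketch for the epoching and run-level averaging (the numeric codes for 'avgtype' and the baseline options are assumptions; re-generate the calls with the pipeline editor to be sure):

{{{
% Import the epochs around the three categories of events
sEpochs = bst_process('CallProcess', 'process_import_data_event', sFileNotch, [], ...
    'subjectname', 'sub-01', ...
    'condition',   '', ...
    'eventname',   'Famous, Unfamiliar, Scrambled', ...
    'timewindow',  [], ...
    'epochtime',   [-0.5, 1.2], ...
    'createcond',  0, ...
    'usessp',      1, ...
    'baseline',    []);
% Remove the DC offset computed over the pre-stimulus baseline
sEpochs = bst_process('CallProcess', 'process_baseline', sEpochs, [], ...
    'baseline',    [-0.5, -0.0009], ...
    'sensortypes', 'MEG, EEG');
% Average by trial group: one average per condition and per run
sAvg = bst_process('CallProcess', 'process_average', sEpochs, [], ...
    'avgtype',    5, ...   % assumption: 5 = By trial group (folder average)
    'avg_func',   1, ...   % 1 = Arithmetic average
    'weighted',   0, ...
    'keepevents', 0);
}}}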
=== MEG noise covariance: Empty room recordings ===
The minimum norm model we will use next to estimate the source activity can be improved by modeling the noise contaminating the data. This section shows how to [[Tutorials/NoiseCovariance|estimate the noise covariance]] in different ways for EEG and MEG. For the MEG recordings we will use the empty room measurements; for the EEG we will compute it from the pre-stimulus baselines of all the imported epochs.
 * Create a new subject: '''emptyroom'''
 * Right-click on the new subject > '''Review raw file'''.<<BR>>Select file: '''sample_group/emptyroom/090707_raw_st.fif'''<<BR>>Do not apply the default transformation, ignore the event channel.<<BR>><<BR>> {{attachment:noise_review.gif||width="506",height="164"}}
 * Select this new file in Process1 and run process '''Pre-process > Notch filter''': '''50 100 150 200Hz'''. When using empty room measurements to compute the noise covariance, they must be processed in exactly the same way as the other recordings. <<BR>><<BR>> {{attachment:noise_notch.gif||width="436",height="225"}}
 * Right-click on the filtered noise recordings > '''Noise covariance > Compute from recordings''': <<BR>><<BR>> {{attachment:noise_compute.gif||width="649",height="240"}}
 * Right-click on the noise covariance file > '''Copy to other subjects''' <<BR>><<BR>> {{attachment:noise_copy.gif||width="273",height="126"}}

=== EEG noise covariance: Pre-stimulus baseline ===
 * In folder sub002/run_01_sss_notch, select all the imported trials, right-click > '''Noise covariance > Compute from recordings''', Time=[-500,-0.9]ms, '''EEG only''', '''Merge'''. {{attachment:noise_eeg.gif||width="650",height="280"}}
 * This computes the noise covariance only for the EEG, and combines it with the existing MEG information. {{attachment:noise_display.gif||width="375",height="223"}}

=== BEM layers ===
We will compute a [[Tutorials/TutBem|BEM forward model]] to estimate the brain sources from the EEG recordings. For this, we need surfaces defining the separation between the different tissues of the head (scalp, inner skull, outer skull).
 * Go to the anatomy view (first button above the database explorer).
 * Right-click on the subject folder > '''Generate BEM surfaces''': The number of vertices to use for each layer depends on your computing power and the accuracy you expect. You can try for instance with '''1082 vertices''' (scalp) and '''642 vertices''' (outer skull and inner skull). <<BR>><<BR>> {{attachment:anatomy_bem.gif||width="603",height="232"}}

=== Forward model: EEG and MEG ===
 * Go back to the functional view (second button above the database explorer).
 * Model used: overlapping spheres for MEG, OpenMEEG BEM for EEG ([[Tutorials/HeadModel|more information]]).
 * In folder sub002/run_01_sss_notch, right-click on the channel file > '''Compute head model'''.<<BR>>Keep all the default options. Expect this to take a while...<<BR>><<BR>> {{attachment:headmodel_compute.gif||width="643",height="313"}}

=== Inverse model: Minimum norm estimates ===
 * Right-click on the new head model > '''Compute sources [2016]''': '''MEG MAG + GRAD''' (default options) <<BR>><<BR>> {{attachment:sources_compute.gif||width="626",height="406"}}
 * Right-click on the new head model > '''Compute sources [2016]''': '''EEG''' (default bad channels).
 * At the end we have two inverse operators, shared by all the files of the run (single trials and averages). A script sketch for these estimation steps follows.
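A script sketch for the noise covariance and the forward/inverse models. The 'inverse' options structure is long; the version below follows the distributed tutorial scripts from memory, and the safest way to obtain it is the script generator:

{{{
% EEG noise covariance from the pre-stimulus baselines, merged with the MEG entry
bst_process('CallProcess', 'process_noisecov', sEpochs, [], ...
    'baseline',    [-0.5, -0.0009], ...
    'sensortypes', 'EEG', ...
    'target',      1, ...    % 1 = Noise covariance
    'dcoffset',    1, ...    % Remove DC offset, block by block
    'replacefile', 2);       % assumption: 2 = Merge with existing file
% Forward model: overlapping spheres (MEG) + OpenMEEG BEM (EEG)
bst_process('CallProcess', 'process_headmodel', sAvg, [], ...
    'sourcespace', 1, ...    % 1 = Cortex surface
    'meg',         3, ...    % 3 = Overlapping spheres
    'eeg',         3);       % 3 = OpenMEEG BEM
% Inverse model: minimum norm, constrained orientations, shared kernel
bst_process('CallProcess', 'process_inverse_2016', sAvg, [], ...
    'output',  1, ...        % 1 = Kernel only: shared
    'inverse', struct(...
        'Comment',        'MN: MEG ALL', ...
        'InverseMethod',  'minnorm', ...
        'InverseMeasure', 'amplitude', ...
        'SourceOrient',   {{'fixed'}}, ...
        'Loose',          0.2, ...
        'UseDepth',       1, ...
        'WeightExp',      0.5, ...
        'WeightLimit',    10, ...
        'NoiseMethod',    'reg', ...
        'NoiseReg',       0.1, ...
        'SnrMethod',      'fixed', ...
        'SnrRms',         1e-6, ...
        'SnrFixed',       3, ...
        'ComputeKernel',  1, ...
        'DataTypes',      {{'MEG GRAD', 'MEG MAG'}}));
}}}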
If we wanted to look at the run-level source averages, we could normalize the source maps with a Z-score wrt baseline. In this tutorial, we will first average across runs and normalize the subject-level averages. This will be done in the next tutorial (group analysis).<<BR>><<BR>> {{attachment:sources_files.gif||width="243",height="147"}} == Time-frequency analysis == We will compute the time-frequency decomposition of each trial using Morlet wavelets, and average the power of the Morlet coefficients for each condition and each run separately. We will restrict the computation to the MEG magnetometers and the EEG channels to limit the computation time and disk usage. * In Process1, select the imported trials "Famous" for run#01. * Run process '''Frequency > Time-frequency (Morlet wavelets)''': Sensor types=MEG MAG,EEG <<BR>>Not normalized, Frequency='''Log(6:20:60)''', Measure=Power, '''Save average''' <<BR>><<BR>> {{attachment:tf_process.gif||width="628",height="479"}} * Double-click on the file to display it. In the Display tab, select the option "Hide edge effects" to exclude form the display all the values that could not be estimated in a reliable way. Let's extract only the good values from this file (-200ms to +900ms). <<BR>><<BR>> {{attachment:tf_display.gif||width="270",height="155"}} * In Process1, select the time-frequency file. * Run process '''Extract > Extract time''': Time window='''[-200, 900]ms''', '''Overwrite input files''' <<BR>><<BR>> {{attachment:tf_cut.gif||width="411",height="197"}} * Display the file again, observe that all the possibly bad values are gone. <<BR>><<BR>> {{attachment:tf_cutdisplay.gif||width="410",height="178"}} * You can display all the sensors at once (MEG MAG or EEG): right-click > 2D Layout (maps).<<BR>><<BR>> {{attachment:tf_2dlayout.gif||width="653",height="270"}} * Repeat these steps for other conditions (Scrambled and Unfamiliar) and the other runs (2-6). There is no way with this process to compute all the averages at once, as we did with the process "Average files". This will be easier to run from a script. * If we wanted to look at the run-level source averages, we could normalize these time-frequency maps. In this tutorial, we will first average across runs and normalize the subject-level averages. This will be done in the next tutorial (group analysis). == Scripting == We have now all the files we need for the group analysis ([[Tutorials/VisualGroup|next tutorial]]). We need to repeat the same operations for all the runs and all the subjects. Some of these steps are '''fully automatic''' and take a lot of time (filtering, computing the forward model), they should be executed from a script. However, we recommend you always '''review manually''' some of the pre-processing steps (selection of the bad segments and bad channels, SSP/ICA components). Do not trust blindly any fully automated cleaning procedure. 
For the strict reproducibility of this analysis, we provide a script that processes all the 19 subjects: '''brainstorm3/toolbox/script/tutorial_visual_single.m''' (execution time: 10-30 hours)<<BR>>Report for the first subject: [[http://neuroimage.usc.edu/bst/examples/report_TutorialVisual_sub001.html|report_TutorialVisual_sub001.html]] <<HTML(<div style="border:1px solid black; background-color:#EEEEFF; width:720px; height:500px; overflow:scroll; padding:10px; font-family: Consolas,Menlo,Monaco,Lucida Console,Liberation Mono,DejaVu Sans Mono,Bitstream Vera Sans Mono,Courier New,monospace,sans-serif; font-size: 13px; white-space: pre;">)>><<EmbedContent("http://neuroimage.usc.edu/bst/viewcode.php?f=tutorial_visual_single.m")>><<HTML(</div >)>> <<BR>>You should note that this is not the result of a fully automated procedure. The bad channels were identified manually and are defined for each run in the script. The bad segments were detected automatically, confirmed manually for each run and saved in external files distributed with the Brainstorm package sample_group_freesurfer.zip (sample_group/brainstorm/bad_segments/*.mat). All the process calls (bst_process) were generated automatically using with the script generator (menu '''Generate .m script''' in the pipeline editor). Everything else was added manually (loops, bad channels, file copies). <<EmbedContent("http://neuroimage.usc.edu/bst/get_prevnext.php?prev=&next=Tutorials/VisualGroup")>> <<EmbedContent(http://neuroimage.usc.edu/bst/get_feedback.php?Tutorials/VisualSingle)>> |
MEG visual tutorial: Single subject (BIDS)
[WORK IN PROGRESS: THIS TUTORIAL IS NOT READY FOR PUBLIC USE]
Authors: Francois Tadel, Elizabeth Bock.
The aim of this tutorial is to reproduce in the Brainstorm environment the analysis described in the SPM tutorial "Multimodal, Multisubject data fusion". The data processed here consists of simultaneous MEG/EEG recordings from 16 participants performing a simple visual recognition task from presentations of famous, unfamiliar and scrambled faces.
The analysis is split in two tutorial pages: the present tutorial describes the detailed analysis of one single subject; the second tutorial describes batch processing and group analysis of all 16 participants.
Note that the operations used here are not detailed, the goal of this tutorial is not to introduce Brainstorm to new users. For in-depth explanations of the interface and theoretical foundations, please refer to the introduction tutorials.
License
This dataset was obtained from the OpenfMRI project (http://www.openfmri.org), accession #ds117. It is made available under the Creative Commons Attribution 4.0 International Public License. Please cite the following reference if you use these data:
Wakeman DG, Henson RN, A multi-subject, multi-modal human neuroimaging dataset, Scientific Data (2015)
For any questions regarding the data, please contact: rik.henson@mrc-cbu.cam.ac.uk
Presentation of the experiment
Experiment
16 subjects (the original version of the dataset included 19 subjects, but 3 were excluded from the group analysis for various reasons)
- 6 acquisition runs of approximately 10mins for each subject
- Presentation of series of images: familiar faces, unfamiliar faces, phase-scrambled faces
- Participants had to judge the left-right symmetry of each stimulus
- Total of nearly 300 trials for each of the 3 conditions
MEG acquisition
Acquisition at 1100Hz with an Elekta-Neuromag VectorView system (simultaneous MEG+EEG).
- Recorded channels (404):
- 102 magnetometers
- 204 planar gradiometers
- 70 EEG electrodes recorded with a nose reference.
MEG data have been "cleaned" using Signal-Space Separation as implemented in MaxFilter 2.2.
- A Polhemus device was used to digitize three fiducial points and a large number of other points across the scalp, which can be used to coregister the M/EEG data with the structural MRI image.
- Stimulation triggers: The triggers related with the visual presentation are saved in the STI101 channel, with the following event codes (bit 3 = face, bit 4 = unfamiliar, bit 5 = scrambled):
- Famous faces: 5 (00101), 6 (00110), 7 (00111)
- Unfamiliar faces: 13 (01101), 14 (01110), 15 (01111)
- Scrambled images: 17 (10001), 18 (10010), 19 (10011)
Delays between the trigger in STI101 and the actual presentation of stimulus: 34.5ms
The data distribution includes MEG noise recordings acquired around the dates of the experiment, processed with MaxFilter 2.2 in the same way as the experimental data.
Subject anatomy
- MRI data acquired on a 3T Siemens TIM Trio: 1x1x1mm T1-weighted structural MRI.
- The face was removed from the structural images for anonymization purposes.
Processed with FreeSurfer 5.3.
Download and installation [TODO]
First, make sure you have enough space on your hard drive, at least 350Gb:
Raw files: 100Gb
Processed files: 250Gb
The data is hosted on the OpenfMRI website: https://openfmri.org/dataset/ds000117/
- Download the following files for revision 1.0.0 (approximately 70Gb):
FreeSurfer segmentation: NOT AVAILABLE YET [TODO]
Empty room recordings: NOT AVAILABLE YET [TODO]
Unzip all the .zip files in the same folder.
Reminder: Do not save the downloaded files in the Brainstorm folders (program or database folders).
MaxFilter/tSSS: The MEG recordings from this study were recorded on an Elekta MEG and need to be processed with the SSS or tSSS cleaning algorithms. Brainstorm currently does not offer any free alternative to Elekta's MaxFilter, therefore in this tutorial we will import the recordings already processed with MaxFilter's tSSS, available in the "derivatives" folder of the BIDS architecture. To save disk space, we will not download the raw MEG recordings. This means we will not get any of the additional files available in the BIDS structure (headshape, events, channels), but it doesn't matter because all the information is directly available in the .fif files.
Start Brainstorm (Matlab scripts or stand-alone version). For help, see the Installation page.
Select the menu File > Create new protocol. Name it "TutorialVisual" and select the options:
"No, use individual anatomy",
"No, use one channel file per condition".
Import the anatomy
This dataset is formatted following the BIDS-MEG specifications, therefore we could import all the relevant information (MRI, FreeSurfer segmentation, MEG+EEG recordings) in just one click, with the menu File > Load protocol > Import BIDS dataset, as described in the online tutorial MEG resting state & OMEGA database. However, because your own data might not be organized following the BIDS standards, in this tutorial we preferred illustrating all the detailed steps for importing the data rather than the BIDS shortcut.
This page explains how to import and process the first run of subject #01 only. All the other files will have to be processed in the same way.
- Switch to the "anatomy" view.
Right-click on the TutorialVisual folder > New subject > sub-01
- Leave the default options you defined for the protocol.
Right-click on the subject node > Import anatomy folder:
Set the file format: "FreeSurfer folder"
Select the folder: derivatives/freesurfer/sub-01
- Number of vertices of the cortex surface: 15000 (default value)
When asked to select the anatomical fiducials, click on "Compute MNI transformation". This will register the MRI volume to an MNI template with an affine transformation, using SPM functions embedded in Brainstorm (spm_maff8.m). This will also place default fiducial points NAS/LPA/RPA in the MRI, based on MNI coordinates. We will use head points for the MEG-MRI coregistration, therefore we don't need accurate anatomical positions here.
- Note that if you don't have a good digitized head shape for the subject, or if the final MEG-MRI registration doesn't look good with this head shape, you should not skip this step: mark accurate NAS/LPA/RPA fiducial points in the MRI, using the same anatomical convention that was used during the MRI acquisition.
- Then click on [Save] to start the import.
At the end of the process, make sure that the file "cortex_15000V" is selected (downsampled pial surface, that will be used for the source estimation). If it is not, double-click on it to select it as the default cortex surface. Do not worry about the big holes in the head surface: parts of the MRI have been removed deliberately for anonymization purposes.
All the anatomical atlases generated by FreeSurfer were automatically imported: the cortical atlases (Desikan-Killiany, Mindboggle, Destrieux, Brodmann) and the sub-cortical regions (ASEG atlas).
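The anatomy import can be reproduced with one process call. A sketch, assuming the dataset was unzipped in a folder referenced by the hypothetical variable BidsDir; the exact option names can be verified with the menu Generate .m script in the pipeline editor:

    % Import the FreeSurfer segmentation of sub-01 (BidsDir is a placeholder)
    BidsDir = '/path/to/ds000117';
    bst_process('CallProcess', 'process_import_anatomy', [], [], ...
        'subjectname', 'sub-01', ...
        'mrifile',     {fullfile(BidsDir, 'derivatives', 'freesurfer', 'sub-01'), 'FreeSurfer'}, ...
        'nvertices',   15000);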
Access the recordings
Link the recordings
We need to attach the continuous .fif files containing the recordings to the database.
- Switch to the "functional data" view.
Right-click on the subject folder > Review raw file.
Select the file format: "MEG/EEG: Neuromag FIFF (*.fif)"
Go to the folder: derivatives/meg-derivatives/sub-01/ses-meg/meg
Select file: sub-01_ses-meg_task-facerecognition_run-01_proc-tsss_meg.fif
Events: Ignore. We will read the stimulus triggers later.
Refine registration now? NO
The head points that are available in the FIF files contain all the points that were digitized during the MEG acquisition, including the ones corresponding to the parts of the face that have been removed from the MRI. If we run the fitting algorithm, all the points around the nose will not match any close points on the head surface, leading to a wrong result. We will first remove the face points and then run the registration manually.
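The equivalent process call, under the same assumptions as the previous sketches (channelalign=0 disables the automatic registration refinement discussed above):

    % Link the tSSS-processed FIF file of run 01 to the database
    RawFile = fullfile(BidsDir, 'derivatives', 'meg-derivatives', 'sub-01', 'ses-meg', 'meg', ...
        'sub-01_ses-meg_task-facerecognition_run-01_proc-tsss_meg.fif');
    sRaw = bst_process('CallProcess', 'process_import_data_raw', [], [], ...
        'subjectname',    'sub-01', ...
        'datafile',       {RawFile, 'FIF'}, ...
        'channelreplace', 1, ...
        'channelalign',   0);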
Channel classification
A few non-EEG channels are mixed in with the EEG channels; we need to fix this before applying any operation to the EEG channels.
Right-click on the channel file > Edit channel file. Double-click on a cell to edit it.
Change the type of EEG062 to EOG (electrooculogram).
Change the type of EEG063 to ECG (electrocardiogram).
Change the type of EEG061 and EEG064 to NOSIG (or any other non-informative type). Close the window and save the modifications.
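These channel edits can also be scripted; the process name and options below are assumed from recent Brainstorm versions (sRaw is the file link returned by the previous sketch):

    % Reclassify the non-EEG channels
    bst_process('CallProcess', 'process_channel_settype', sRaw, [], 'sensortypes', 'EEG062', 'newtype', 'EOG');
    bst_process('CallProcess', 'process_channel_settype', sRaw, [], 'sensortypes', 'EEG063', 'newtype', 'ECG');
    bst_process('CallProcess', 'process_channel_settype', sRaw, [], 'sensortypes', 'EEG061, EEG064', 'newtype', 'NOSIG');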
MRI registration
At this point, the registration MEG/MRI is based only on the three anatomical landmarks NAS/LPA/RPA, which are not even accurately set (we used default MNI positions). All the MRI scans were anonymized (defaced) so all the head points below the nasion cannot be used. We will try to refine this registration using the additional head points that were digitized above the nasion.
Right-click on the channel file > Digitized head points > Remove points below nasion.
(45 points removed, 92 head points left)
Right-click on the channel file > MRI registration > Refine using head points.
MEG/MRI registration, before (left) and after (right) this automatic registration procedure:
Right-click on the channel file > MRI registration > EEG: Edit.
Click on [Project electrodes on surface], then close the figure to save the modifications.
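A scripted equivalent of the registration refinement, with process names assumed from the Brainstorm distribution scripts:

    % Remove the digitized points below the nasion, then fit the remaining
    % head points to the scalp surface
    bst_process('CallProcess', 'process_headpoints_remove', sRaw, [], 'zlimit', 0);
    bst_process('CallProcess', 'process_headpoints_refine', sRaw, []);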
Read stimulus triggers
We need to read the stimulus markers from the STI channels. The following tasks can be done in an interactive way with menus in the Record tab, as in the introduction tutorials. We will illustrate here how to do this with the pipeline editor, as it will be easier to batch for all the runs and all the subjects.
- In Process1, select the "Link to raw file", click on [Run].
Select process Events > Read from channel, Channel: STI101, Detection mode: Bit.
Do not execute the process yet; we will add other processes to classify the markers.
- We want to create three categories of events, based on their numerical codes:
Famous faces: 5 (00101), 6 (00110), 7 (00111) => Bit 3 only
Unfamiliar faces: 13 (01101), 14 (01110), 15 (01111) => Bit 3 and 4
Scrambled images: 17 (10001), 18 (10010), 19 (10011) => Bit 5 only
- We will start by creating the category "Unfamiliar" (combination of events "3" and "4") and removing the initial events. Then we just have to rename the remaining "3" to "Famous", and all the "5" to "Scrambled".
Add process Events > Group by name: "Unfamiliar=3,4", Delay=0, Delete original events
Add process Events > Rename event: 3 => Famous
Add process Events > Rename event: 5 => Scrambled
Add process Events > Add time offset to correct for the presentation delays:
Event names: "Famous, Unfamiliar, Scrambled", Time offset = 34.5ms
Finally run the script. Double-click on the recordings to make sure the labels were detected correctly. You can delete the unwanted events that are left in the recordings (1,2,9,13):
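The same pipeline expressed as process calls; the numeric code for the bit-wise detection mode is an assumption, so double-check it against the script generated by the pipeline editor:

    % Read the trigger channel and rebuild the three event categories
    bst_process('CallProcess', 'process_evt_read', sRaw, [], ...
        'stimchan', 'STI101', 'trackmode', 2, 'zero', 0);   % 2 = bit (assumed code)
    bst_process('CallProcess', 'process_evt_groupname', sRaw, [], ...
        'combine', 'Unfamiliar=3,4', 'dt', 0, 'delete', 1);
    bst_process('CallProcess', 'process_evt_rename', sRaw, [], 'src', '3', 'dest', 'Famous');
    bst_process('CallProcess', 'process_evt_rename', sRaw, [], 'src', '5', 'dest', 'Scrambled');
    % Compensate for the 34.5ms presentation delay (offset in seconds)
    bst_process('CallProcess', 'process_evt_timeoffset', sRaw, [], ...
        'eventname', 'Famous, Unfamiliar, Scrambled', 'offset', 0.0345);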
Pre-processing
Spectral evaluation
- Keep the "Link to raw file" in Process1.
Run process Frequency > Power spectrum density (Welch) with the options illustrated below.
Right-click on the PSD file > Power spectrum.
The spectra of the MEG sensors look awful, because of one small segment of data located around 248s. Open the MEG recordings and scroll to 248s (just before the first Unfamiliar event).
In these recordings, the continuous head tracking was activated, but it starts only at the time of the stimulation (248s), while the acquisition of the data starts about 20s earlier (226s). The first 20s contain no head position indicator (HPI) coil activity and are not corrected by MaxFilter. After 248s, the HPI coils are on and MaxFilter filters them out. The transition between the two states is not smooth and creates important distortions in the spectral domain. For a proper evaluation of the recordings, we should compute the PSD only after the HPI coils are turned on.
Run process Events > Detect cHPI activity (Elekta). This detects the changes in the cHPI activity from channel STI201 and marks all the data without head localization as bad.
Re-run the process Frequency > Power spectrum density (Welch). All the bad segments are excluded from the computation, therefore the PSD is now estimated only with the data after 248s.
- Observations:
- Three groups of sensors, from top to bottom: EEG, MEG gradiometers, MEG magnetometers.
Power lines: 50 Hz and harmonics
- Alpha peak around 10 Hz
Artifacts due to Elekta electronics (HPI coils): 293Hz, 307Hz, 314Hz, 321Hz, 328Hz.
Peak from unknown source at 103.4Hz in the MEG only.
Suspected bad EEG channels: EEG016
- Close all the windows.
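For reference, a sketch of the PSD computation as a process call (the window length and overlap shown here are assumptions matching the options figure above):

    % Power spectrum density (Welch); bad segments are excluded automatically
    bst_process('CallProcess', 'process_psd', sRaw, [], ...
        'timewindow',  [], ...
        'win_length',  4, ...      % window length in seconds
        'win_overlap', 50, ...     % overlap between windows, in percent
        'sensortypes', 'MEG, EEG');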
Remove line noise
- Keep the "Link to raw file" in Process1.
Select process Pre-process > Notch filter to remove the line noise (50-200Hz).
Add the process Frequency > Power spectrum density (Welch).
Double-click on the PSD for the new continuous file to evaluate the quality of the correction.
- Close all the windows (use the [X] button at the top-right corner of the Brainstorm window).
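Scripted version of the notch filter (this creates a new continuous file, referenced below as sRawClean):

    % Remove the power line noise at 50Hz and harmonics
    sRawClean = bst_process('CallProcess', 'process_notch', sRaw, [], ...
        'freqlist',    [50, 100, 150, 200], ...
        'sensortypes', 'MEG, EEG');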
EEG reference and bad channels
Right-click on link to the processed file ("Raw | notch(50Hz ...") > EEG > Display time series.
Select channel EEG016 and mark it as bad (using the popup menu or pressing the Delete key).
In the Record tab, menu Artifacts > Re-reference EEG > "AVERAGE".
At the end, the window "select active projectors" is open to show the new re-referencing projector. Just close this window. To get it back, use the menu Artifacts > Select active projectors.
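Both operations can be scripted; the process names below are assumed from recent Brainstorm versions:

    % Mark EEG016 as bad, then re-reference the EEG to the average
    bst_process('CallProcess', 'process_channel_setbad', sRawClean, [], 'sensortypes', 'EEG016');
    bst_process('CallProcess', 'process_eegref', sRawClean, [], ...
        'eegref', 'AVERAGE', 'sensortypes', 'EEG');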
Artifact detection
Heartbeats: Detection
Empty the Process1 list (right-click > Clear list).
- Drag and drop the continuous processed file ("Raw | notch(50Hz...)") to the Process1 list.
Run process Events > Detect heartbeats: Channel name=EEG063, All file, Event name=cardiac
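Equivalent process call:

    % Detect the heartbeats on the ECG channel, over the whole file
    bst_process('CallProcess', 'process_evt_detect_ecg', sRawClean, [], ...
        'channelname', 'EEG063', ...
        'timewindow',  [], ...
        'eventname',   'cardiac');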
Eye blinks: Detection
- In many of the other tutorials, we detect the blinks and remove them with SSP. In this experiment, we are particularly interested in the subject's response to seeing the stimulus. Therefore we will exclude from the analysis all the recordings contaminated with blinks or other eye movements.
Run process Artifacts > Detect events above threshold:
Event name=blink_BAD, Channel=EEG062, All file, Maximum threshold=100, Threshold units=uV, Filter=[0.30,20.00]Hz, Use absolute value of signal.
Inspect visually the two new categories of events: cardiac and blink_BAD.
- Close all the windows (using the [X] button).
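A sketch of the threshold detection as a process call; the option names and the numeric code for the uV units are assumptions to verify with the pipeline editor:

    % Mark eye movements as bad segments
    bst_process('CallProcess', 'process_evt_detect_threshold', sRawClean, [], ...
        'eventname',    'blink_BAD', ...
        'channelname',  'EEG062', ...
        'timewindow',   [], ...
        'thresholdMAX', 100, ...
        'units',        3, ...           % 3 = uV (assumed code)
        'bandpass',     [0.3, 20], ...
        'isAbsolute',   1);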
Heartbeats: Correction with SSP
- Keep the "Raw | notch" file selected in Process1.
Select process Artifacts > SSP: Heartbeats > Sensor type: MEG MAG
Add process Artifacts > SSP: Heartbeats > Sensor type: MEG GRAD
Run the execution.
Double-click on the continuous file to show all the MEG sensors.
In the Record tab, select sensors "Left-temporal". Menu Artifacts > Select active projectors.
In category cardiac/MEG MAG: Select component #1 and view topography.
In category cardiac/MEG GRAD: Select component #1 and view topography.
Make sure that selecting the two components removes the cardiac artifact. Then click [Save].
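Scripted equivalent, with 'select' forcing the selection of the first component as reviewed above:

    % One SSP projector per sensor type, around the cardiac events
    bst_process('CallProcess', 'process_ssp_ecg', sRawClean, [], ...
        'eventname', 'cardiac', 'sensortypes', 'MEG MAG',  'usessp', 1, 'select', 1);
    bst_process('CallProcess', 'process_ssp_ecg', sRawClean, [], ...
        'eventname', 'cardiac', 'sensortypes', 'MEG GRAD', 'usessp', 1, 'select', 1);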
Additional bad segments
In Process1, select and run the process "Events > Detect other artifacts". This should be done separately for MEG and EEG to avoid confusion about which sensors are involved in the artifact.
Display the MEG sensors. Review the segments that were tagged as artifact, determine if each event represents an artifact and then mark the time of the artifact as BAD. This can be done by selecting the time window around the artifact, then right-click > Reject time segment. Note that this detection process marks 1-second segments but the artifact can be shorter.
- Once all the events in the two categories are reviewed and bad segments are marked, the two categories (1-7Hz and 40-240Hz) can be deleted.
- Do this detection and review again for the EEG.
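A sketch of the automatic detection, run separately per modality as recommended above (option names assumed; the detected segments still have to be reviewed manually):

    % Detect additional artifacts in the 1-7Hz and 40-240Hz bands
    bst_process('CallProcess', 'process_evt_detect_badsegment', sRawClean, [], ...
        'timewindow', [], 'sensortypes', 'MEG', 'threshold', 3, ...
        'isLowFreq', 1, 'isHighFreq', 1);
    bst_process('CallProcess', 'process_evt_detect_badsegment', sRawClean, [], ...
        'timewindow', [], 'sensortypes', 'EEG', 'threshold', 3, ...
        'isLowFreq', 1, 'isHighFreq', 1);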
SQUID jumps
MEG signals recorded with Elekta-Neuromag systems frequently contain SQUID jumps (more information). These sharp steps followed by a change of baseline value are easy to identify visually but more complicated to detect automatically.
The process "Detect other artifacts" usually detects most of them in the category "1-7Hz". If you observe that some are skipped, you can try re-running it with a higher sensitivity. It is important to review all the sensors and all the time in each run to be sure these events are marked as bad segments.
Epoching and averaging
Import epochs
- Keep the "Raw | notch" file selected in Process1.
Select process: Import recordings > Import MEG/EEG: Events (do not run immediately)
Event names "Famous, Unfamiliar, Scrambled", All file, Epoch time=[-500,1200]msAdd process: Pre-process > Remove DC offset: Baseline=[-500,-0.9]ms - Run execution.
Average by run
- In Process1, select all the imported trials.
Run process: Average > Average files: By trial groups (folder average)
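Scripted version; the numeric 'avgtype' code is an assumption to verify in the generated script:

    % Average the trials by trial group, within each folder (= run)
    sAvg = bst_process('CallProcess', 'process_average', sTrials, [], ...
        'avgtype',  5, ...   % 5 = By trial group (folder average), assumed code
        'avg_func', 1, ...   % 1 = Arithmetic average: mean(x)
        'weighted', 0);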
Review EEG ERP
EEG evoked response (famous, scrambled, unfamiliar):
Open the Cluster tab and create a cluster with the channel EEG065 (button [NEW IND]).
Select the cluster, select the three average files, right-click > Clusters time series (Overlay:Files).
- Basic observations for EEG065 (right parieto-occipital electrode):
- Around 170ms (N170): greater negative deflection for Famous than Scrambled faces.
- After 250ms: difference between Famous and Unfamiliar faces.
Source estimation
MEG noise covariance: Empty room recordings
The minimum norm model we will use next to estimate the source activity can be improved by modeling the noise contaminating the data. This section shows how to estimate the noise covariance in different ways for EEG and MEG. For the MEG recordings we will use the empty-room measurements; for the EEG we will compute it from the pre-stimulus baselines of all the imported epochs.
Create a new subject: emptyroom
Right-click on the new subject > Review raw file.
Select file: sample_group/emptyroom/090707_raw_st.fif
Do not apply default transformation, Ignore event channel.
Select this new file in Process1 and run process Pre-process > Notch filter: 50 100 150 200Hz. When using empty room measurements to compute the noise covariance, they must be processed exactly in the same way as the other recordings.
Right-click on the filtered noise recordings > Noise covariance > Compute from recordings:
Right-click on the Noise covariance > Copy to other subjects
EEG noise covariance: Pre-stimulus baseline
In the run 01 folder of subject sub-01 (the notch-filtered epochs), select all the imported trials, right-click > Noise covariance > Compute from recordings, Time=[-500,-0.9]ms, EEG only, Merge.
This computes the noise covariance only for EEG, and combines it with the existing MEG information.
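Both covariance computations can be scripted. A sketch, where sRawNoise is a hypothetical variable pointing to the notch-filtered empty-room file, and the numeric codes are assumptions:

    % MEG: noise covariance from the empty-room recordings
    bst_process('CallProcess', 'process_noisecov', sRawNoise, [], ...
        'baseline', [], 'sensortypes', 'MEG', ...
        'target', 1, ...        % 1 = noise covariance (assumed code)
        'dcoffset', 1);
    % EEG: noise covariance from the pre-stimulus baselines, merged with the MEG
    bst_process('CallProcess', 'process_noisecov', sTrials, [], ...
        'baseline', [-0.5, -0.0009], 'sensortypes', 'EEG', ...
        'target', 1, 'dcoffset', 1, ...
        'replacefile', 2);      % 2 = merge with existing file (assumed code)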
BEM layers
We will compute a BEM forward model to estimate the brain sources from the EEG recordings. For this, we need some layers defining the separation between the different tissues of the head (scalp, inner skull, outer skull).
- Go to the anatomy view (first button above the database explorer).
Right-click on the subject folder > Generate BEM surfaces: The number of vertices to use for each layer depends on your computing power and the accuracy you expect. You can try for instance with 1082 vertices (scalp) and 642 vertices (outer skull and inner skull).
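Equivalent process call, with the vertex counts suggested above:

    % Generate the scalp, outer skull and inner skull BEM surfaces
    bst_process('CallProcess', 'process_generate_bem', [], [], ...
        'subjectname', 'sub-01', ...
        'nscalp',      1082, ...
        'nouter',      642, ...
        'ninner',      642, ...
        'thickness',   4);   % skull thickness in mm (assumed default)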
Forward model: EEG and MEG
- Go back to the functional view (second button above the database explorer).
Model used: Overlapping spheres for MEG, OpenMEEG BEM for EEG (more information).
In the run 01 folder of subject sub-01, right-click on the channel file > Compute head model.
Keep all the default options. Expect this to take a while...
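A sketch of the corresponding call; the numeric method codes are assumptions to verify with the pipeline editor:

    % Forward model: overlapping spheres (MEG) + OpenMEEG BEM (EEG)
    bst_process('CallProcess', 'process_headmodel', sTrials(1), [], ...
        'sourcespace', 1, ...   % 1 = Cortex surface (assumed code)
        'meg',         3, ...   % 3 = Overlapping spheres (assumed code)
        'eeg',         3);      % 3 = OpenMEEG BEM (assumed code)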
Inverse model: Minimum norm estimates
Right-click on the new head model > Compute sources [2016]: MEG MAG + GRAD (default options)
Right-click on the new head model > Compute sources [2016]: EEG (default bad channels).
At the end we have two inverse operators, which are shared by all the files of the run (single trials and averages). If we wanted to look at the run-level source averages, we could normalize the source maps with a Z-score wrt baseline. In this tutorial, we will first average across runs and normalize the subject-level averages. This will be done in the next tutorial (group analysis).
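A sketch of the MEG inverse computation; the options struct mirrors the defaults of "Compute sources [2016]", but its exact fields are assumptions, so the safest path is to generate this call with the pipeline editor:

    % Shared minimum norm kernel for the MEG sensors
    bst_process('CallProcess', 'process_inverse_2016', sTrials(1), [], ...
        'output',  1, ...   % 1 = Kernel only: shared (assumed code)
        'inverse', struct(...
            'Comment',        'MN: MEG', ...
            'InverseMethod',  'minnorm', ...
            'InverseMeasure', 'amplitude', ...
            'SourceOrient',   {{'fixed'}}, ...
            'Loose',          0.2, ...
            'UseDepth',       1, ...
            'WeightExp',      0.5, ...
            'WeightLimit',    10, ...
            'NoiseMethod',    'reg', ...
            'NoiseReg',       0.1, ...
            'SnrMethod',      'fixed', ...
            'SnrRms',         1e-6, ...
            'SnrFixed',       3, ...
            'ComputeKernel',  1, ...
            'DataTypes',      {{'MEG GRAD', 'MEG MAG'}}));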
Time-frequency analysis
We will compute the time-frequency decomposition of each trial using Morlet wavelets, and average the power of the Morlet coefficients for each condition and each run separately. We will restrict the computation to the MEG magnetometers and the EEG channels to limit the computation time and disk usage.
- In Process1, select the imported trials "Famous" for run#01.
Run process Frequency > Time-frequency (Morlet wavelets): Sensor types=MEG MAG,EEG
Not normalized, Frequency=Log(6:20:60), Measure=Power, Save average
Double-click on the file to display it. In the Display tab, select the option "Hide edge effects" to exclude from the display all the values that could not be estimated in a reliable way. Let's extract only the good values from this file (-200ms to +900ms).
- In Process1, select the time-frequency file.
Run process Extract > Extract time: Time window=[-200, 900]ms, Overwrite input files
Display the file again, observe that all the possibly bad values are gone.
You can display all the sensors at once (MEG MAG or EEG): right-click > 2D Layout (maps).
- Repeat these steps for the other conditions (Scrambled and Unfamiliar) and the other runs (2-6). There is no way with this process to compute all the averages at once, as we did with the process "Average files". This will be easier to run from a script (see the sketch after this list).
- If we wanted to look at the run-level source averages, we could normalize these time-frequency maps. In this tutorial, we will first average across runs and normalize the subject-level averages. This will be done in the next tutorial (group analysis).
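The sketch announced above, for one condition of one run; sTrialsFamous is a hypothetical variable holding the imported "Famous" trials of run 01, and the fields of the 'edit' struct are assumptions best verified with Generate .m script:

    % Morlet time-frequency, power averaged across trials on the fly
    sTf = bst_process('CallProcess', 'process_timefreq', sTrialsFamous, [], ...
        'sensortypes', 'MEG MAG, EEG', ...
        'edit', struct(...
            'Comment',      'Avg,Power,6-60Hz', ...
            'TimeBands',    [], ...
            'Freqs',        logspace(log10(6), log10(60), 20), ...  % Log(6:20:60)
            'MorletFc',     1, ...
            'MorletFwhmTc', 3, ...
            'Measure',      'power', ...
            'Output',       'average', ...
            'SaveKernel',   0), ...
        'normalize', 'none');
    % Keep only the reliable time window (remove edge effects)
    bst_process('CallProcess', 'process_extract_time', sTf, [], ...
        'timewindow', [-0.2, 0.9], 'overwrite', 1);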
Scripting
We now have all the files we need for the group analysis (next tutorial). We need to repeat the same operations for all the runs and all the subjects. Some of these steps are fully automatic and take a lot of time (filtering, computing the forward model); they should be executed from a script.
However, we recommend you always review manually some of the pre-processing steps (selection of the bad segments and bad channels, SSP/ICA components). Do not trust blindly any fully automated cleaning procedure.
For the strict reproducibility of this analysis, we provide a script that processes all 19 subjects: brainstorm3/toolbox/script/tutorial_visual_single.m (execution time: 10-30 hours)
Report for the first subject: report_TutorialVisual_sub001.html
You should note that this is not the result of a fully automated procedure. The bad channels were identified manually and are defined for each run in the script. The bad segments were detected automatically, confirmed manually for each run and saved in external files distributed with the Brainstorm package sample_group_freesurfer.zip (sample_group/brainstorm/bad_segments/*.mat).
All the process calls (bst_process) were generated automatically with the script generator (menu Generate .m script in the pipeline editor). Everything else was added manually (loops, bad channels, file copies).