[WARNING: This tutorial is outdated, read the introduction tutorials instead]
MEG auditory tutorial (CTF)
Authors: Francois Tadel, Elizabeth Bock.
The aim of this tutorial is to provide high-quality recordings of a simple auditory stimulation and illustrate the best analysis paths possible with Brainstorm. It is the same dataset as the one used in the introduction tutorial, but at the full sampling rate (2400Hz).
Note that the operations used here are not detailed, the goal of this tutorial is not to introduce Brainstorm to new users. For in-depth explanations of the interface and theoretical foundations, please refer to the introduction tutorials.
License
This tutorial dataset (MEG and MRI data) remains a property of the MEG Lab, McConnell Brain Imaging Center, Montreal Neurological Institute, McGill University, Canada. Its use and transfer outside the Brainstorm tutorial, e.g. for research purposes, is prohibited without written consent from the MEG Lab.
If you reference this dataset in your publications, please acknowledge its authors (Elizabeth Bock, Peter Donhauser, Francois Tadel and Sylvain Baillet) and cite Brainstorm as indicated on the website. For questions, please contact us through the forum.
Presentation of the experiment
Experiment
- One subject, two acquisition runs of 6 minutes each
- Subject stimulated binaurally with intra-aural earphones (air tubes+transducers)
- Each run contains:
- 200 regular beeps (440Hz)
- 40 easy deviant beeps (554.4Hz, 4 semitones higher)
- Random inter-stimulus interval: between 0.7s and 1.7s, uniformly distributed
- The subject presses a button with the right index finger when detecting a deviant
- Auditory stimuli generated with the Matlab Psychophysics toolbox
The specifications of this dataset were discussed initially on the FieldTrip bug tracker:
http://bugzilla.fieldtriptoolbox.org/show_bug.cgi?id=2300
MEG acquisition
Acquisition at 2400Hz, with a CTF 275 system, subject in sitting position
- Recorded at the Montreal Neurological Institute in December 2013
- Anti-aliasing low-pass filter at 600Hz, files saved with the 3rd order gradient
- Recorded channels (340):
- 1 Stim channel indicating the presentation times of the audio stimuli: UPPT001 (#1)
- 1 Audio signal sent to the subject: UADC001 (#316)
- 1 Response channel recording the finger taps in response to the deviants: UDIO001 (#2)
- 26 MEG reference sensors (#5-#30)
- 274 MEG axial gradiometers (#31-#304)
- 2 EEG electrodes: Cz, Pz (#305 and #306)
- 1 ECG bipolar (#307)
- 2 EOG bipolar (vertical #308, horizontal #309)
- 12 Head tracking channels: Nasion XYZ, Left XYZ, Right XYZ, Error N/L/R (#317-#328)
- 20 Unused channels (#3, #4, #310-#315, #329-#340)
- 3 datasets:
S01_AEF_20131218_01.ds: Run #1, 360s, 200 standard + 40 deviants
S01_AEF_20131218_02.ds: Run #2, 360s, 200 standard + 40 deviants
S01_Noise_20131218_01.ds: Empty room recordings, 30s long
- File name: S01=Subject01, AEF=Auditory evoked field, 20131218=date(Dec 18 2013), 01=run
- Average reaction times for the button press after a deviant tone:
Run #1: 515ms +/- 108ms
Run #2: 596ms +/- 134ms
- Use of the .ds datasets rather than the AUX recordings (the standard at the MNI), because they are easier to manipulate in FieldTrip
Stimulation delays
Delay #1: Production of the sound.
Between the stim markers (channel UDIO001) and the moment when the sound card plays the sound (channel UADC001). This is mostly due to the software running on the stimulation computer (stimulation software, operating system, sound card drivers, sound card electronics). The delay can be measured from the recorded files by comparing the triggers in the two channels: between 11.5ms and 12.8ms (std = 0.3ms). This delay is not constant; we will need to correct for it.
Delay #2: Transmission of the sound.
Between when the sound card plays the sound and when the subject receives the sound in the ears. This is the time it takes for the transducer to convert the analog audio signal into a sound, plus the time it takes the sound to travel through the air tubes from the transducer to the subject's ears. This delay cannot be estimated from the recorded signals: before the acquisition, we placed a sound meter at the extremity of the tubes to record when the sound was delivered. Delay between 4.8ms and 5.0ms (std = 0.08ms). At a sampling rate of 2400Hz, this delay can be considered constant; we will not compensate for it.
Delay #3: Recording of the signals.
The CTF MEG systems have a constant delay of 4 samples between the MEG/EEG channels and the analog channels (such as the audio signal UADC001), because of an anti-aliasing filter that is applied to the former but not the latter. This translates here to a constant delay of 1.7ms.
Delay #4: Over-compensation of delay #1.
When correcting for delay #1, the process we use to detect the beginning of the triggers on the audio signal (UADC001) sets the trigger in the middle of the ramp between silence and the beep. We therefore "over-compensate" delay #1 by about 1.7ms, which can be considered a constant delay of about -1.7ms.
Uncorrected delays: We will correct for delay #1 and keep the other delays (#2, #3 and #4). After we compensate for delay #1, our MEG signals will have a constant delay of about 4.9 + 1.7 - 1.7 = 4.9 ms. We decide not to compensate for these remaining delays because they do not introduce any jitter in the responses and they will not change anything in the interpretation of the data.
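To make the measurement of delay #1 concrete, here is a minimal, self-contained MATLAB sketch of the onset comparison described above, using synthetic stand-ins for the two channels (the signal values and the half-maximum threshold are illustrative assumptions, not the actual Brainstorm detection code):

    % Synthetic stand-ins for the stim trigger and audio channels (2400Hz)
    fs = 2400;                                       % sampling rate (Hz)
    stim  = zeros(1, 10*fs);  stim(2400:2420) = 5;   % trigger pulse at t=1s
    audio = zeros(1, 10*fs);                         % 440Hz beep ~12.5ms later
    audio(2430:2429+240) = sin(2*pi*440*(1:240)/fs);
    % Onset = first sample above half of each channel's maximum amplitude
    iStim  = find(abs(stim)  > 0.5*max(abs(stim)),  1);
    iAudio = find(abs(audio) > 0.5*max(abs(audio)), 1);
    fprintf('Estimated delay #1: %.1f ms\n', (iAudio - iStim) / fs * 1000);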
Head shape and fiducial points
3D digitization using a Polhemus Fastrak device driven by Brainstorm (S01_20131218_*.pos)
More information: Digitize EEG electrodes and head shape
- The output file is copied to each .ds folder and contains the following entries:
- The position of the center of CTF coils
The position of the anatomical references we use in Brainstorm:
Nasion and connections tragus/helix, as illustrated here.
- Around 150 head points distributed on the hard parts of the head (no soft tissues)
Subject anatomy
- Subject with 1.5T MRI
- Marker on the left cheek
Processed with FreeSurfer 5.3
Download and installation
Requirements: You have already followed all the basic tutorials and you have a working copy of Brainstorm installed on your computer.
Go to the Download page of this website, and download the file: sample_auditory.zip
- Unzip it in a folder that is not in any of the Brainstorm folders (program folder or database folder). It is really important that you always keep your original data files in a separate folder: the program folder can be deleted when updating the software, and the contents of the database folder are supposed to be manipulated only by the program itself.
- Start Brainstorm (Matlab scripts or stand-alone version)
Select the menu File > Create new protocol. Name it "TutorialAuditory" and select the options:
"No, use individual anatomy",
"No, use one channel file per condition".
Import the anatomy
- Switch to the "anatomy" view.
Right-click on the TutorialAuditory folder > New subject > Subject01
- Leave the default options you set for the protocol
Right-click on the subject node > Import anatomy folder:
Set the file format: "FreeSurfer folder"
Select the folder: sample_auditory/anatomy
- Number of vertices of the cortex surface: 15000 (default value)
Click on the link "Click here to compute MNI normalization".
- Set the 3 required fiducial points (indicated in MRI coordinates):
- NAS: x=127, y=213, z=139
- LPA: x=52, y=113, z=96
- RPA: x=202, y=113, z=91
- At the end of the process, make sure that the file "cortex_15000V" is selected (the downsampled pial surface, which will be used for source estimation). If it is not, double-click on it to select it as the default cortex surface.
Access the recordings
Link the recordings
- Switch to the "functional data" view.
Right-click on the subject folder > Review raw file
Select the file format: "MEG/EEG: CTF (*.ds...)"
Select all the .ds folders in: sample_auditory/data
Refine registration now? YES
Convert to continuous
Right-click on the "Link to raw file" for Run 01 > Switch epoched/continuous
Right-click on the "Link to raw file" for Run 02 > Switch epoched/continuous
Right-click on the "Link to raw file" for Noise > Switch epoched/continuous
Double-click on the "Link to raw file" again. Now you can navigate in the file without interruptions. The box "Epoch" is disabled and all the events in the file are displayed at once.
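For scripted analyses, the same operation is available as a process. Below is a minimal sketch using Brainstorm's bst_process interface; the database file paths are hypothetical placeholders, and the exact calls used for this dataset can be checked in brainstorm3/toolbox/script/tutorial_auditory.m.

    % Hypothetical database links to the three raw files
    sFilesRaw = {...
        'Subject01/@rawS01_AEF_20131218_01/data_0raw_S01_AEF_20131218_01.mat', ...
        'Subject01/@rawS01_AEF_20131218_02/data_0raw_S01_AEF_20131218_02.mat', ...
        'Subject01/@rawS01_Noise_20131218_01/data_0raw_S01_Noise_20131218_01.mat'};
    % Switch from epoched to continuous mode (2 = continuous)
    bst_process('CallProcess', 'process_ctf_convert', sFilesRaw, [], ...
        'rectype', 2);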
Stimulation trigger delays
Evaluation
Right-click on Run01/Link to raw file > Stim > Display time series (stimulus channel, UPPT001)
Right-click on Run01/Link to raw file > ADC V > Display time series (audio signal generated, UADC001)
In the Record tab, set the duration of the display window to 0.200s.
Jump to the third event in the "standard" category.
We can observe that there is a delay of about 13ms between the time when the stimulus trigger is generated by the stimulation computer and the moment when the sound is actually played by its sound card (delay #1). This matches the documentation of the experiment in the first section of this tutorial.
Correction
Delay #1: We can detect the triggers from the analog audio signal (ADC V/UADC001) rather than using the events already detected by the CTF software from the stim channel (Stim/UPPT001).
Drag and drop Run01 and Run02 to the Process1 box.
Add twice the process "Events > Detect analog triggers".
Once with event name="standard_fix" and reference event="standard".
Once with event name="deviant_fix" and reference event="deviant".
Set the other options as illustrated below:
Open Run01 (channel ADC V) to evaluate the correction that was performed by this process. If you look at the third trigger in the "standard" category, you can measure a 14.6ms delay between the original event "standard" and the new event "standard_fix".
Open Run01 to re-organize the event categories:
Delete the unused event categories: standard, deviant.
Rename standard_fix and deviant_fix to standard and deviant.
Open Run02 and do the same cleaning operations:
Important note: We compensated for the jittered delays (delay #1), but not for the other ones (delays #2, #3 and #4). There is still a constant 5ms delay between the stimulus triggers ("standard" and "deviant") and the time where the sound actually reaches the subject's ears.
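This correction can also be scripted. A sketch, assuming sFilesRaw holds the links to the two AEF runs as defined earlier; the option names of process_evt_detect_analog are assumptions based on GUI-generated pipelines and should be verified (e.g. against tutorial_auditory.m):

    % Re-detect the stimulus triggers on the analog audio channel (delay #1)
    % Run once per event type; option names are assumptions to verify.
    for evt = {'standard', 'deviant'}
        bst_process('CallProcess', 'process_evt_detect_analog', sFilesRaw(1:2), [], ...
            'eventname',   [evt{1} '_fix'], ...  % new event category
            'channelname', 'UADC001', ...        % analog audio signal
            'refevent',    evt{1});              % existing trigger to fix
    end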
Detect and remove artifacts
Spectral evaluation
- One of the typical pre-processing steps consists in getting rid of the contamination due to the power lines (50Hz or 60Hz). Let's start with the spectral evaluation of this file.
Drag ALL the "Link to raw file" nodes to the Process1 box or, easier, just drag the "Subject01" node: it recursively selects all the files it contains.
Run the process "Frequency > Power spectrum density (Welch)":
Time window: [All file], Window length: 4s, Overlap: 50%, Sensor types: MEG
- Note that you need at least 8GB of RAM to run the PSD on the entire file. If you don't, or if you get "Out of memory" errors, you can try running the PSD on a shorter time window.
Click on [Edit] and select option "Save individual PSD values (for each trial)".
- Double-click on the new PSD files to display them.
Observations for Run01:
Peaks related with the power lines: 60Hz, 120Hz, 180Hz (240Hz and 300Hz could be observed as well depending on the window length used for the PSD)
The drop after 600Hz corresponds to the low-pass filter applied at the acquisition time.
One channel shows higher noise than the others at high frequencies: MLO52 (in red).
We will probably mark it as bad later, when reviewing the recordings.
Observations for Run02:
Same peaks related with the power lines: 60Hz, 120Hz, 180Hz
Same drop after 600Hz.
Same noisy channel: MLO52.
Additionally, we observe a higher level of noise in the 30Hz to 100Hz range on many occipital sensors. This is probably due to muscle tension in the neck caused by an uncomfortable position. We will see later whether these channels need to be tagged as bad.
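The PSD computation above can be reproduced in a script. A sketch under the same assumptions as before (sFilesRaw defined earlier; option names follow GUI-generated pipelines and should be verified):

    % Welch PSD on all raw files: 4s windows, 50% overlap, MEG sensors only
    sFilesPsd = bst_process('CallProcess', 'process_psd', sFilesRaw, [], ...
        'timewindow',  [], ...    % [] = the whole file
        'win_length',  4, ...     % window length (s)
        'win_overlap', 50, ...    % overlap (%)
        'sensortypes', 'MEG');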
Power line contamination
Put ALL the "Link to raw file" into the Process1 box (or directly the Subject01 folder)
Run the process: Pre-process > Notch filter
Select the frequencies: 60, 120, 180 Hz
Sensor types or names: MEG
- The higher harmonics are above the frequency range of interest for this analysis, and they are not clearly visible in all the recordings.
In output, this process creates new .ds folders in the same folder as the original files, and links the new files to the database.
Run again the PSD process "Frequency > Power spectrum density (Welch)" on these new files, with the same parameters, to evaluate the quality of the correction.
Double-click on the new PSD files to open them.
Zoom in with the mouse wheel to observe what is happening around 60Hz (before / after).
To avoid confusion later, delete the links to the original files: Select the folders containing the original unfiltered files and press the Delete key (or right-click > File > Delete).
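Scripted equivalent of the notch filtering, under the same assumptions as the previous sketches:

    % Notch filter at 60Hz and its first harmonics; creates new "_notch"
    % .ds folders next to the original files and links them to the database
    sFilesNotch = bst_process('CallProcess', 'process_notch', sFilesRaw, [], ...
        'freqlist',    [60, 120, 180], ...
        'sensortypes', 'MEG');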
Heartbeats and eye blinks
- Select the two AEF runs in the Process1 box.
- Select successively the following processes, then click on [Run]:
Events > Detect heartbeats: Select channel ECG, check "All file", event name "cardiac".
Events > Detect eye blinks: Select channel VEOG, check "All file", event name "blink".
Events > Remove simultaneous: Remove "cardiac", too close to "blink", delay 250ms.
Compute SSP: Heartbeats: Event name "cardiac", sensors="MEG", do not use existing SSP.
Compute SSP: Eye blinks: Event name "blink", sensors="MEG", do not use existing SSP.
Double-click on Run01 to open the MEG.
You can change the color of events "standard" and "deviant" to make the figure more readable.
Review the EOG and ECG channels and make sure the events detected make sense.
In the Record tab, menu SSP > Select active projectors.
- Blink: The first component is selected and looks good (display its topography to validate).
- Cardiac: The first component is selected and looks good (display its topography to validate).
Click on [Save].
Repeat the same operations for Run02:
- Review the events.
- Select the first cardiac component.
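The whole detection + SSP pipeline above maps to a sequence of processes. A sketch, where sRuns stands for the two notch-filtered AEF runs; the option names follow GUI-generated scripts and should be verified before use:

    sRuns = sFilesNotch(1:2);    % the two notch-filtered AEF runs
    % Detect heartbeats and blinks, drop coinciding events, compute SSP
    bst_process('CallProcess', 'process_evt_detect_ecg', sRuns, [], ...
        'channelname', 'ECG',  'timewindow', [], 'eventname', 'cardiac');
    bst_process('CallProcess', 'process_evt_detect_eog', sRuns, [], ...
        'channelname', 'VEOG', 'timewindow', [], 'eventname', 'blink');
    bst_process('CallProcess', 'process_evt_remove_simult', sRuns, [], ...
        'remove', 'cardiac', 'target', 'blink', 'dt', 0.25, 'rename', 0);
    bst_process('CallProcess', 'process_ssp_ecg', sRuns, [], ...
        'eventname', 'cardiac', 'sensortypes', 'MEG', 'usessp', 0);
    bst_process('CallProcess', 'process_ssp_eog', sRuns, [], ...
        'eventname', 'blink',   'sensortypes', 'MEG', 'usessp', 0);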
Bad segments
- At this point, you should review the entire files, by pages of a few seconds, scrolling with the F3 key, to identify all the bad channels and the noisy segments of recordings. Do this with the EOG channel open at the same time, to identify saccades or blinks that were not completely corrected with the SSP projectors. As this is a complicated task that requires some expertise, we have prepared a list of bad segments for these datasets.
Open Run01. In the Record tab, select File > Add events from file:
File name: sample_auditory/data/S01_AEF_20131218_01_notch/events_bad_01.mat
- File type: Brainstorm (events*.mat)
It adds 12 bad segments to the file.
Open Run02. In the Record tab, select File > Add events from file:
File name: sample_auditory/data/S01_AEF_20131218_02_notch/events_bad_02.mat
- File type: Brainstorm (events*.mat)
It adds 9 bad segments and 16 saccades to the file.
Saccades
- Run02 contains a few saccades that generate a large amount of noise in the MEG recordings. They are not identified well by the automatic detection process based on the horizontal EOG. We have marked some of them; you have already loaded these events together with the bad segments. We are going to use the SSP technique again to remove the spatial components associated with these saccades.
Open the MEG recordings for Run02 and select the right-frontal sensors (Record tab > CTF RF).
In the Record tab, menu SSP > Compute SSP: Generic
Event name=saccade, Time=[0,500]ms, Frequency=[1,15]Hz, Use existing SSP
Example of saccade without correction:
With the first component of saccade SSP applied:
- This first component removes the saccade really well; keep it selected and click on [Save].
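For completeness, a sketch of the generic SSP computation as a process call; the process name and option names here are from memory and should be double-checked in the pipeline editor:

    % Generic SSP around the marked saccades: [0,500]ms windows, 1-15Hz,
    % computed on top of the existing projectors (assumed process/options)
    bst_process('CallProcess', 'process_ssp', sRuns(2), [], ...
        'timewindow',  [], ...
        'eventname',   'saccade', ...
        'eventtime',   [0, 0.5], ...
        'bandpass',    [1, 15], ...
        'sensortypes', 'MEG', ...
        'usessp',      1);   % 1 = use the existing SSP projectors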
Bad channels
During the visual exploration, some channels appeared generally noisier than the others. Example:
Right-click on Run01 > Good/bad channels > Mark some channels as bad
> MRT51, MLO52
Right-click on Run02 > Good/bad channels > Mark some channels as bad
> MRT51, MLO52, MLO42, MLO43
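A scripted sketch of the same operation, assuming the process_channel_setbad process (the option name is an assumption to verify):

    % Flag the noisy channels as bad in each run
    bst_process('CallProcess', 'process_channel_setbad', sRuns(1), [], ...
        'sensortypes', 'MRT51, MLO52');
    bst_process('CallProcess', 'process_channel_setbad', sRuns(2), [], ...
        'sensortypes', 'MRT51, MLO52, MLO42, MLO43');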
Epoching and averaging
Import recordings
To import epochs from Run01:
Right-click on the "Link to raw file" > Import in database
Use events: "standard" and "deviant"
Epoch time: [-100, +500] ms
- Apply the existing SSP (make sure that you have 2 selected projectors)
Remove DC offset based on time window: [-100, 0] ms
UNCHECK the option "Create a separate folder for each epoch type": this way all the epochs are going to be saved in the same Run01 folder, and we will be able to separate the trials from Run01 and Run02.
Note that the trials that are overlapping with a BAD segment are tagged as bad in the database explorer (marked with a red dot).
Repeat the same operation for Run02:
Right-click on the "Link to raw file" > Import in database
Use the same options as for the previous run.
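The import step for both runs can be scripted in one call. A sketch with assumed option names (check a GUI-generated script or tutorial_auditory.m for the exact list):

    % Import [-100,+500]ms epochs around "standard" and "deviant" for both
    % runs, keeping the Run01/Run02 trials in separate folders
    sTrials = bst_process('CallProcess', 'process_import_data_event', sRuns, [], ...
        'subjectname', 'Subject01', ...
        'eventname',   'standard, deviant', ...
        'epochtime',   [-0.100, 0.500], ...
        'createcond',  0, ...              % no separate folder per epoch type
        'usessp',      1, ...              % apply the selected projectors
        'baseline',    [-0.100, 0]);       % remove the DC offset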
Average responses
- As said previously, it is usually not recommended to average recordings in sensor space across multiple acquisition runs, because the subject might have moved between the sessions. Different head positions were recorded for each run, so we will reconstruct the sources separately for each run to take these movements into account.
- However, in the case of event-related studies, it makes sense to start our data exploration with an average across runs, just to evaluate the quality of the evoked responses. We have seen that the subject almost didn't move between the two runs, so the error would be minimal. We will now compute an approximate sensor average between runs, and we will run a more formal average in source space later.
- We have 80 good "deviant" trials that we want to average together.
Select the trial groups "deviant" from both runs in Process1, run process "Average > Average files"
Select the option "By trial group (subject average)"
- To compare properly this "deviant" average with the other condition, we need to use the same number of trials in the "standard" condition. We are going to pick 40 "standard" trials from Run01 and 40 from Run02. To make it easy, let's take the 40 first good trials.
Select the 41 first "standard" trials of Run01 + the 41 first "standard" trials of Run02 in Process1.
This will sum to 80 selected files, because the Process1 tab ignores the bad trials (trial #37 is bad in Run01, trial #36 is bad in Run02)Run again process "Average > Average files" > "By trial group (subject average)"
The average for the two conditions "standard" and "deviant" are saved in the folder (intra-subject). The channel file added to this folder is an average of the channel files from Run01 and Run02.
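To script this average, something like the sketch below can be used (the numeric code for "By trial group (subject average)" changed across versions: check the list at the top of process_average.m before relying on it):
{{{
% Average the selected trials by trial group (subject average)
sFilesAvg = bst_process('CallProcess', 'process_average', [sTrialsRun1, sTrialsRun2], [], ...
    'avgtype',  6, ...    % "By trial group (subject average)" - code to verify
    'avg_func', 1);       % Arithmetic average: mean(x)
}}}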
==== Visual exploration ====
Display the two averages, "standard" and "deviant":
* Right-click on average > MEG > Display time series
* Right-click on average > MISC > Display time series (EEG electrodes Cz and Pz)
* Right-click on average > MEG > 2D Sensor cap
* In the Filter tab, add a low-pass filter at 100Hz.
* Right-click on the 2D topography figures > Snapshot > Time contact sheet.
Here are the results for the standard (top) and deviant (bottom) beeps:
* P50: 50ms, bilateral auditory response in both conditions.
* N100: 95ms, bilateral auditory response in both conditions.
* MMN: 100-200ms, mismatch negativity in the deviant condition only (detection of the deviant).
* P200: 170ms, in both conditions but much stronger in the standard condition.
* P300: 300-400ms, deviant condition only (decision making in preparation of the button press).
Standard (right-click on the topography figure > Snapshot > Time contact sheet):
Deviant:
==== Difference deviant-standard ====
* In the Process2 tab, select the deviant average (Files A) and the standard average (Files B).
* Run the process "Other > Difference A-B"
The difference deviant-standard no longer shows the early responses (P50, N100) but emphasizes the differences in the later processing stages (MMN/P200 and P300).
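The same difference can be computed from a script; a minimal sketch, with sAvgDeviant and sAvgStandard standing for the two averages computed above (no options set, so the process defaults apply):
{{{
% Difference deviant-standard at the sensor level (Process2: A-B)
sDiff = bst_process('CallProcess', 'process_diff_ab', sAvgDeviant, sAvgStandard);
}}}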
== Source estimation ==
==== Head model ====
* Select the two imported folders at once, right-click > Compute head model
* Use the overlapping spheres model and keep all the options at their default values.
* For more information: [[Tutorials/HeadModel|Head model tutorial]].
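Scripted version, assuming sEpochs contains all the imported trials from both runs (only the MEG option is set, everything else keeps its default value):
{{{
% Compute the overlapping-spheres head model for both runs
bst_process('CallProcess', 'process_headmodel', sEpochs, [], ...
    'sourcespace', 1, ...    % Cortex surface
    'meg',         3);       % 3 = Overlapping spheres
}}}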
==== Noise covariance matrix ====
We want to calculate the noise covariance from the empty-room measurements and use it for both runs.
* In the Noise folder, right-click on the "Link to raw file" > Noise covariance > Compute from recordings.
* Keep all the default options and click [OK].
* Right-click on the noise covariance file > Copy to other conditions.
* You can double-click on one of the copied noise covariance files to check what it looks like:
For more information: [[Tutorials/NoiseCovariance|Noise covariance tutorial]].
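A scripting sketch, with sFileNoise standing for the raw link of the noise recordings:
{{{
% Noise covariance from the empty-room recordings, copied to the other folders
bst_process('CallProcess', 'process_noisecov', sFileNoise, [], ...
    'baseline', [], ...      % Use all the available time window
    'dcoffset', 1, ...       % Remove the DC offset (block by block)
    'identity', 0, ...       % Full covariance matrix, not an identity matrix
    'copycond', 1, ...       % Copy to the other folders of the subject
    'copysubj', 0);          % Do not copy to the other subjects
}}}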
==== Inverse model ====
* Select the two imported folders at once, right-click > Compute sources
* Select dSPM and keep all the default options.
* You are then asked to confirm the list of bad channels to use in the source estimation for each run. Just leave the defaults, which are the channels we marked as bad earlier.
* One inverse operator is created in each condition, with one link per data file.
For more information: [[Tutorials/SourceEstimation|Source estimation tutorial]].
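For scripting, recent Brainstorm versions expose this computation as process_inverse_2018; the option struct below is trimmed from a generated pipeline script and is only a sketch (the exact fields depend on your version; the pipeline editor's "Generate .m script" menu gives the authoritative list):
{{{
% dSPM sources for all the imported epochs (shared inversion kernel)
bst_process('CallProcess', 'process_inverse_2018', sEpochs, [], ...
    'output',  1, ...                          % Kernel only: shared
    'inverse', struct(...
        'Comment',        'dSPM: MEG', ...
        'InverseMethod',  'minnorm', ...
        'InverseMeasure', 'dspm2018', ...      % Older versions: 'dspm'
        'SourceOrient',   {{'fixed'}}, ...     % Constrained: normal to cortex
        'ComputeKernel',  1, ...
        'DataTypes',      {{'MEG'}}));
}}}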
==== Average in source space ====
Now that we have source maps available for all the trials, we can average them in source space.
* Select the folders for Run01 and Run02 in Process1 and click the [Process sources] button on the left.
* Run process "Average > Average files": Select "By trial group (subject average)"
* Double-click on the source averages to display them (standard=top, deviant=bottom).
* Note that opening the source maps can take a long time because of the online visualization filters. Check the Filter tab: you probably still have a 100Hz low-pass filter applied for the visualization. For averaged source maps, the 15000 source signals are filtered on the fly when you load the file, which can take a significant amount of time. Consider unchecking this option if the display is too slow on your computer.
Standard (right-click on the 3D figures > Snapshot > Time contact sheet):
Deviant:
Movies: Right-click on any figure > Snapshot > Movie (time): All figures
==== Difference deviant-standard ====
* In the Process2 tab, select the average of the deviant sources (Files A) and the average of the standard sources (Files B).
* Run the process "Other > Difference A-B"
* Double-click on the difference to display it and explore it in time.
The first observations we can make:
* P50: No important difference.
* N100: Stronger response in the right auditory system for the deviant condition.
* MMN (125ms): Stronger response for the deviant (left auditory, right temporal/frontal/motor).
* P200 (175ms): Stronger response in the auditory system for the standard condition.
* After 200ms: Stronger response in the deviant condition (left auditory, left motor, right auditory, right temporal, right motor, right parietal).
Alternatively, you could calculate the difference between the average of all the "deviant" trials (80) and the average of all the "standard" trials (388), using the process "Other > Weighted difference", which attempts to compensate for the different numbers of trials.
==== Student's t-test ====
Using a t-test instead of the difference of the two averages, you can reproduce similar results but with a significance level attached to each value. With this test we can also use all the trials we have, unlike with the difference of the means: the t-test behaves very well with imbalanced designs, so we can keep all the standard trials.
In the Process2 tab, select the following files:
* Files A: All the deviant trials, with the [Process sources] button selected.
* Files B: All the standard trials, with the [Process sources] button selected.
* Run the process "Test > Student's t-test", Equal variance, Absolute value of average.
Double-click on the t-test file to open it. Set the options in the Stat tab:
* p-value threshold: 0.05
* Multiple comparisons: FDR
* Control over dimensions: 1.Signals and 2.Time
* Explore the results in time.
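The "Test > Student's t-test" process from this old tutorial has since been replaced by the parametric test processes; a hedged sketch using its modern equivalent (option names taken from generated scripts, to verify against your version):
{{{
% Parametric t-test (equal variance) between all deviant and all standard source files
sTest = bst_process('CallProcess', 'process_test_parametric2', sFilesDeviantSrc, sFilesStandardSrc, ...
    'timewindow', [], ...
    'isnorm',     1, ...              % Test rectified values (assumed match for "absolute value of average")
    'avgtime',    0, ...
    'test_type',  'ttest_equal', ...  % Student's t-test with equal variance
    'tail',       'two');             % Two-tailed test
}}}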
== Regions of interest ==
==== Manual tracing ====
Let's place all the regions of interest, starting with the easiest to identify.
* Open the average source files (standard and deviant), together with the average recordings for the standard condition, for easier time navigation.
* In the Surface tab, smooth the cortical surface at 70%.
* For each region: go to the indicated time point, adjust the amplitude threshold in the Surface tab, identify the area of interest, click on its center, grow the scout, and rename it.
* Grow all the regions to the same size: 20 vertices.
* Note that all the following screen captures were produced with a low-pass filter at 100Hz.
'''A1L: Left primary auditory cortex (Heschl gyrus)'''
* The most visible region in both conditions. Active during all the main steps of the auditory processing: P50, N100, MMN, P200, P300.
* Standard condition, t=90ms, amplitude threshold=70%
'''A1R: Right primary auditory cortex (Heschl gyrus)'''
* The position of this region is much less obvious than A1L: we don't see one focal region with sustained activity. These binaural auditory stimulations should generate similar bilateral responses in the left and right auditory cortices at early latencies. Possible explanations for this observation:
 * The earplug was not well adjusted on the right side and the sound was not properly delivered.
 * The subject's hearing from the right ear is impaired.
 * The response is actually stronger in the left auditory cortex for this subject.
 * The orientation of the source makes it difficult to capture with the MEG sensors.
* We are trying to find a region that peaks at the same time as A1L (95ms and 200ms in the standard condition). It is very difficult to find anything that behaves this way in both the deviant and standard conditions, so we will pick something approximate, knowing that we cannot really rely on this region. The auditory system is very dynamic: within a few square centimeters of cortex, we can observe many functionally independent regions activated at different moments.
* Deviant condition, t=34ms, amplitude threshold=20%
* Results are acceptable for the deviant condition but not as good for the standard condition.
'''IFGL: Left inferior frontal gyrus (Brodmann area 44)'''
* Involved in auditory processing, particularly in the processing of irregularities.
* You can use the atlas "Brodmann-thresh", available in the Scout tab, to identify this region.
* Deviant condition, t=140ms, amplitude threshold=40%
'''IFGR: Right inferior frontal gyrus (Brodmann area 44)'''
* Expected to have an activity similar to the left IFG.
* Deviant condition, t=110ms, amplitude threshold=40%
'''M1L: Left motor cortex'''
* The subject taps with the right index finger when a deviant is presented.
* The motor cortex responds at very early latencies, together with the auditory cortex, in both conditions (50ms and 100ms). The subject is ready for a fast response to the task.
* At 175ms, the peak in the standard condition probably corresponds to an inhibition: the sound heard is not a deviant, so no further motor processing is required.
* At 225ms, the peak in the deviant condition is probably motor preparation. At 350ms, the motor task begins and the subject moves the right hand (recorded reaction times: 500ms +/- 108ms).
* Deviant condition, t=240ms, amplitude threshold=50%
'''M1R: Right motor cortex'''
* Probably involved in the preparation of the motor response as well. Less recruited during the actual motor command.
* Deviant condition, t=35ms, amplitude threshold=25%
'''PPCR: Right posterior parietal cortex'''
* Known to play a role as a relay in auditory processing.
* Deviant condition, t=225ms, amplitude threshold=60%
==== Influence of the number of trials ====
We decided to run the source analysis with the same number of trials for both conditions. So far we have been working with an average of the standard condition calculated from 80 trials. Just out of curiosity, we can compute another average using all the good standard trials (388).
* Here are the scout traces for both averages (80 trials in green, 388 trials in red):
* As expected, the signal is cleaner in the average with more trials, but it is interesting to note that the overall shape of the traces does not change. The main effects observed are similar and the latencies are identical: multiplying the number of trials by five does not change the interpretation much.
== Scripting ==
The operations described in this tutorial can be reproduced from a Matlab script, available in the Brainstorm distribution: '''brainstorm3/toolbox/script/tutorial_auditory.m'''
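To locate and run it, something along these lines should work (the input argument, the folder where the tutorial dataset was unzipped, is an assumption: check the header of the script for its actual signature):
{{{
% Open the tutorial script distributed with Brainstorm
edit(fullfile(bst_get('BrainstormHomeDir'), 'toolbox', 'script', 'tutorial_auditory.m'));
% Run it on the downloaded dataset (hypothetical path)
tutorial_auditory('C:/Work/Tutorials/sample_auditory');
}}}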