= MEG median nerve tutorial (CTF) =
''Authors: Francois Tadel, Elizabeth Bock, John C Mosher, Sylvain Baillet''

The dataset presented in this tutorial was used in the previous generation of introduction tutorials. We kept it on the website as an additional illustration of data analysis, and as a comparison with median nerve experiments recorded with other MEG systems ([[Tutorials/TutMindNeuromag|Elekta-Neuromag]], [[Tutorials/Yokogawa|Yokogawa]]). No new concepts are presented in this tutorial. For in-depth explanations of the interface and theoretical foundations, please refer to the [[http://neuroimage.usc.edu/brainstorm/Tutorials#Get_started|introduction tutorials]].

== Download and installation ==
 * '''Requirements''': You have already followed all the basic tutorials and you have a working copy of Brainstorm installed on your computer.
 * Go to the [[http://neuroimage.usc.edu/bst/download.php|Download]] page of this website and download the file: '''sample_raw.zip'''
 * Unzip it in a folder that is not in any of the Brainstorm folders (program folder or database folder).
 * Start Brainstorm (Matlab scripts or stand-alone version).
 * Select the menu File > Create new protocol. Name it "'''TutorialRaw'''" and select the options:
  * '''No, use individual anatomy''',
  * '''No, use one channel file per run'''.

== Import the anatomy ==
 * Create a new subject Subject01.
 * Right-click on the subject node > Import anatomy folder:
  * Set the file format: "FreeSurfer folder"
  * Select the folder: '''sample_raw/anatomy'''
  * Number of vertices of the cortex surface: 15000 (default value)
 * Click on the link "'''Click here to compute MNI transformation'''".
 * Set the 3 required fiducial points (indicated in MRI coordinates):
  * NAS: x=127, y=212, z=123
  * LPA: x=55, y=124, z=119
  * RPA: x=200, y=129, z=114
 * At the end of the process, make sure that the file "cortex_15000V" is selected (the downsampled pial surface, which will be used for the source estimation). If it is not, double-click on it to set it as the default cortex surface.
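If you prefer to script these steps, the anatomy import can also be run from Matlab with Brainstorm's process interface. The following is only a sketch: the process name 'process_import_anatomy' and the option names below are the ones used by scripts generated with the pipeline editor, but they may differ slightly between Brainstorm versions; the reference implementation for this dataset is brainstorm3/toolbox/script/tutorial_raw.m (see the Scripting section at the end of this page).

{{{
% Import the FreeSurfer anatomy for Subject01 (sketch, verify option names).
SubjectName = 'Subject01';
AnatDir     = 'sample_raw/anatomy';    % adjust to your local path
bst_process('CallProcess', 'process_import_anatomy', [], [], ...
    'subjectname', SubjectName, ...
    'mrifile',     {AnatDir, 'FreeSurfer'}, ...  % anatomy folder + file format
    'nvertices',   15000, ...                    % cortex downsampling
    'nas', [127, 212, 123], ...                  % fiducials in MRI coordinates
    'lpa', [ 55, 124, 119], ...
    'rpa', [200, 129, 114]);
}}}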
{{attachment:anat.gif||height="175",width="368"}}

== Link the recordings ==
 * Switch to the "functional data" view (middle button in the toolbar above the database explorer).
 * Right-click on Subject01 > '''Review raw file''':
  * Select the file format: '''MEG/EEG: CTF'''
  * Select the folder: '''sample_raw/Data/subj001_somatosensory_20111109_01_AUX-f.ds'''
 * Refine the registration with the head points: '''YES'''.
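The same operation can be scripted: linking the continuous file does not copy any data, it only creates the "Link to raw file" node. A minimal sketch, assuming the process name 'process_import_data_raw' and the option names used by pipeline-generated scripts:

{{{
% Link the continuous CTF recordings to the database (no data is copied).
RawFile = 'sample_raw/Data/subj001_somatosensory_20111109_01_AUX-f.ds';
sFilesRaw = bst_process('CallProcess', 'process_import_data_raw', [], [], ...
    'subjectname',    'Subject01', ...
    'datafile',       {RawFile, 'CTF'}, ...  % file format: MEG/EEG: CTF
    'channelreplace', 1, ...                 % replace the channel file if it exists
    'channelalign',   1);                    % refine registration with the head points
}}}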
{{attachment:refineBefore.gif||height="229",width="200"}} {{attachment:refineAfter.gif||height="229",width="200"}}

== Pre-processing ==
=== Evaluate the recordings ===
 * Drag and drop the "Link to raw file" into the Process1 list.
 * Select the process "'''Frequency > Power spectrum density'''" and configure it as follows:
{{attachment:psdRun.gif||height="265",width="422"}}

 * Double-click on the PSD file to display it. It shows the estimate of the power spectrum for the first 50 seconds of the continuous file, for all the sensors, on a logarithmic scale. You can identify four peaks at the following frequencies: 60Hz, 120Hz, 180Hz and 300Hz. The first three are related to the power lines (the acquisition took place in Canada, where electricity is delivered at 60Hz, hence the harmonics). The last one is an artifact of the low-pass filter at 300Hz that was applied to the recordings at acquisition time.
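For scripting, the PSD evaluation can be reproduced with the same process. This is a sketch only: the Welch estimator is assumed to be exposed as 'process_psd' with the options below (window length and overlap are illustrative values), and sFilesRaw is the output of the linking sketch above.

{{{
% Power spectrum density (Welch) of the continuous recordings, MEG sensors.
sFilesPsd = bst_process('CallProcess', 'process_psd', sFilesRaw, [], ...
    'timewindow',  [0, 50], ...    % estimate on the first 50 seconds
    'win_length',  4, ...          % estimator window length, in seconds
    'win_overlap', 50, ...         % overlap between consecutive windows, in percent
    'sensortypes', 'MEG');
}}}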
{{attachment:psdBefore.gif||height="187",width="396"}}

=== Remove: 60Hz and harmonics ===
 * In Process1, keep the "Link to raw file" selected.
 * Run '''Pre-process > Notch filter''': Frequencies to remove = '''60, 120, 180 Hz'''.
{{attachment:processSin.gif||height="252",width="307"}}

 * To evaluate the results of this process, select the new filtered file ('''"Raw | notch"''') and run the process "'''Frequency > Power spectrum density'''" again.
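A scripted equivalent of the notch filter, as a sketch (process name 'process_notch' and option 'freqlist' assumed from pipeline-generated scripts); it creates the new filtered file used in the rest of the tutorial:

{{{
% Notch filter at the power line frequency and its first two harmonics.
sFilesNotch = bst_process('CallProcess', 'process_notch', sFilesRaw, [], ...
    'freqlist',    [60, 120, 180], ...  % frequencies to remove, in Hz
    'sensortypes', 'MEG');
}}}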
{{attachment:processPsd2.gif||height="275",width="500"}}

 * You should observe a significant decrease of the contribution of the removed frequencies (60Hz, 120Hz, 180Hz) compared with the original spectrum.
{{attachment:psdAfter.gif}}

=== Heartbeats and blinks ===
Signal Space Projection (SSP) is a method for projecting the recordings away from stereotyped artifacts, such as eye blinks and heartbeats.

 * Double-click on the filtered continuous file to display all the '''MEG''' recordings.
 * Right-click on the link > '''ECG''' > Display time series, to look at the heartbeats.
 * Right-click on the link > '''EOG''' > Display time series, to look at the eye movements.
 * From the Artifacts menu in the Record tab, run the detection processes:
 * '''Detect heartbeats:''' Select channel '''EEG057''', event name "cardiac".
{{attachment:detectEcg.gif}}

 * '''Detect eye blinks:''' Select channel '''EEG058''', event name "blink".
{{attachment:detectEog.gif}}

 * '''Remove simultaneous:''' To avoid capturing ocular artifacts in the cardiac SSP.
{{attachment:processRemoveSimult.gif}}

 * Review the traces of the ECG/EOG channels and make sure the detected events make sense.
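These three detection steps can be chained in a script as well. A minimal sketch, assuming the process names 'process_evt_detect_ecg', 'process_evt_detect_eog' and 'process_evt_remove_simult' and their options as used in pipeline-generated scripts; sFilesNotch is the filtered file from the notch sketch above.

{{{
% Detect heartbeats on the ECG channel (EEG057) -> event group "cardiac".
bst_process('CallProcess', 'process_evt_detect_ecg', sFilesNotch, [], ...
    'channelname', 'EEG057', ...
    'timewindow',  [], ...       % empty = whole file
    'eventname',   'cardiac');
% Detect eye blinks on the EOG channel (EEG058) -> event group "blink".
bst_process('CallProcess', 'process_evt_detect_eog', sFilesNotch, [], ...
    'channelname', 'EEG058', ...
    'timewindow',  [], ...
    'eventname',   'blink');
% Remove cardiac events detected too close to a blink, so that ocular
% activity does not leak into the cardiac SSP.
bst_process('CallProcess', 'process_evt_remove_simult', sFilesNotch, [], ...
    'remove', 'cardiac', ...
    'target', 'blink', ...
    'dt',     0.25, ...          % tolerance window, in seconds
    'rename', 0);
}}}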
=== Compute SSP: Heartbeats ===
In the Record tab, select the menu "'''SSP > Compute SSP: Heartbeats'''". Set the options:

 * '''Event name''': Name of the event used to calculate the projectors; enter "'''cardiac'''".
 * '''Sensor types''': Type of sensors for which the projection should be calculated ("'''MEG'''"). It doesn't matter if other sensor types are indicated in the text box: sensor names or types that do not exist in the recordings are simply ignored. Note that you will always get better results if you process the different types of sensors separately. If you have MEG and EEG in the same file, you should run this process twice, once for the EEG only and once for the MEG only. The same applies when processing Elekta-Neuromag recordings: process the magnetometers (MEG MAG) and the gradiometers (MEG GRAD) separately.
 * '''Compute using existing SSP projectors''': You have the option to calculate the projectors from the raw recordings, or from the recordings filtered with the previously computed SSP projectors. Unless you have a good reason not to consider the existing SSP projectors, you should always select this option.

{{http://neuroimage.usc.edu/brainstorm/Tutorials/TutRawSsp?action=AttachFile&do=get&target=sspEogFile.gif|sspEogFile.gif|class="attachment"}} {{http://neuroimage.usc.edu/brainstorm/Tutorials/TutRawSsp?action=AttachFile&do=get&target=sspEcg.gif|sspEcg.gif|height="212",width="309",class="attachment"}}

After the computation is done, a new figure is displayed that lets you select the active projectors. On the left you have the projector categories (matrix '''U'''), where each row represents the result of one execution of a "Compute SSP" process (usually one per sensor type and per artifact). On the right, you can activate independently the different components of the selected projector category. Each entry represents the projector created from a column vector '''U'''i (singular vector of the SVD decomposition), and the percentage is the singular value for this vector, normalized for this decomposition (percentage = '''S'''i / sum('''S'''i)).

When a projector is selected, it is applied to the recordings on the fly when reading from the continuous file, i.e. the corresponding spatial component is removed from the recordings. The percentage indicates the amount of signal that was captured by this projector during the decomposition. The higher it is, the more representative the projector is of the artifact recordings that were used to calculate it. In good cases, you would typically see one to three components with values significantly higher than the others. By default, the software selects '''the first component''' and leaves all the others unselected. Do not trust this automatic selection blindly: always review manually the components that you want to remove.

A list of buttons is available to manipulate the projector categories: load, save, rename, delete, and '''display component topography'''. The last one is one of the most useful tools to understand these results. Select the category ("cardiac"), click on the first component, then click on the toolbar button to display its topography. Do it again for components #2 and #3 (the components don't need to be active for that). The other button [No interp] displays the same results without the magnetic field smoothing that is applied by default to MEG topographies. It may help with some difficult cases.
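If you script this step, the cardiac SSP can be computed with the dedicated process. A minimal sketch, assuming the process name 'process_ssp_ecg' and these options (taken from pipeline-generated scripts). Keep in mind that the component selection should always follow a visual review like the one described above; for this dataset, the conclusion below is actually to keep no cardiac component active.

{{{
% Compute the cardiac SSP projectors from the "cardiac" events.
bst_process('CallProcess', 'process_ssp_ecg', sFilesNotch, [], ...
    'eventname',   'cardiac', ...
    'sensortypes', 'MEG', ...
    'usessp',      1, ...  % compute using the existing SSP projectors
    'select',      1);     % component(s) to activate: review before trusting this
}}}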
{{http://neuroimage.usc.edu/brainstorm/Tutorials/TutRawSsp?action=AttachFile&do=get&target=sspEcgCheck.gif|sspEcgCheck.gif|height="207",width="390",class="attachment"}} {{http://neuroimage.usc.edu/brainstorm/Tutorials/TutRawSsp?action=AttachFile&do=get&target=sspEcgComp.gif|sspEcgComp.gif|class="attachment"}}

None of these topographies can be clearly related to a cardiac component. This can have two interpretations: either the cardiac artifact is not very strong for this subject and the influence of the heart activity on the MEG sensors is completely buried in the noise or the brain signals, or our characterization of the artifact was not good enough and we should refine it, for instance by selecting smaller time windows around the cardiac peaks. Here, it is probably due to the subject's morphology: some people generate strong cardiac artifacts in the MEG, some don't. In this case, it is safer to '''unselect''' this "cardiac" category, rather than randomly removing spatial components that we cannot clearly identify.

=== Compute SSP: Eye blinks ===
Let's try the same thing with the eye blinks. Select the process "'''SSP > Compute SSP: Eye blinks'''". Run it on the event type "blink", which indicates the peaks of the EOG signal.

{{http://neuroimage.usc.edu/brainstorm/Tutorials/TutRawSsp?action=AttachFile&do=get&target=sspEogFile.gif|sspEogFile.gif|class="attachment"}} {{http://neuroimage.usc.edu/brainstorm/Tutorials/TutRawSsp?action=AttachFile&do=get&target=sspEog.gif|sspEog.gif|height="214",width="307",class="attachment"}}

In this specific case, there is one value that is much higher than the others (21%). The first component is most likely a good representation of the artifact we are trying to remove.

{{http://neuroimage.usc.edu/brainstorm/Tutorials/TutRawSsp?action=AttachFile&do=get&target=sspEogSelect.gif|sspEogSelect.gif|height="214",width="384",class="attachment"}}

Select the "blink" category and display the topographies for the first three components. Component #1 is clearly related to the ocular activity occurring during the blinks; the others are not as clear. Conclusion: the default selection was good (only component #1 selected). Setting this first projector as active removes from the MEG signals the contributions of the spatial topography shown in the first figure, i.e. a spatial distribution of sensor values related specifically to the eye blinks.

{{http://neuroimage.usc.edu/brainstorm/Tutorials/TutRawSsp?action=AttachFile&do=get&target=sspEogComp.gif|sspEogComp.gif|class="attachment"}}

While the recordings are displayed in the file viewer, the selection of the active projectors interactively updates the recordings. Go to the first detected "blink" event ('''t=33.600s''') and select the left frontal sensors ('''Shift+B''' or right-click > Display setup > CTF LF). Then play with the checkboxes to change the active projectors, and see how well this first component of the blink category removes the ocular artifact.

{{http://neuroimage.usc.edu/brainstorm/Tutorials/TutRawSsp?action=AttachFile&do=get&target=sspEogMeg.gif|sspEogMeg.gif|class="attachment"}}

Select the projector category "blink", select the first component, and click on Save to validate your changes. From now on, all the successive operations you apply on this file will integrate this eye blink correction, including the calculation of other projectors.
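The eye blink projectors can be computed the same way in a script. A minimal sketch, assuming the process name 'process_ssp_eog' and the same option names as in the cardiac sketch above (check the generated pipeline script for the exact syntax):

{{{
% Compute the eye blink SSP projectors from the "blink" events and keep
% only the first component, as validated visually above.
bst_process('CallProcess', 'process_ssp_eog', sFilesNotch, [], ...
    'eventname',   'blink', ...
    'sensortypes', 'MEG', ...
    'usessp',      1, ...  % compute on top of the existing projectors
    'select',      1);     % activate component #1 only
}}}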
After you close this window, you can get back to it by selecting the menu "'''SSP > Select active projector'''" in the Record tab, and change the selection of active projectors again.

We are not going to calculate the SSP for the other category of events identified on the EOG, the group "'''blink2'''". The events classified in this group could represent eye saccades or other types of blinks; it is not very clear that they correspond to the same physiological event. Most of them do not seem to create any obvious contamination of the MEG recordings, so there is probably no need to consider them now.

== Evaluation ==
=== Eye blinks ===
One efficient way of evaluating the impact of this artifact correction is to epoch the recordings around the artifacts before and after the correction: use the same time window as the one used in the SSP options around each marker, and average all the segments of recordings. These operations are detailed in the next tutorial; we are just presenting the results here. You don't need to reproduce them at this time, just remember that it is doable, in case you need it with your own data. This is what the database would look like after these operations:

{{http://neuroimage.usc.edu/brainstorm/Tutorials/TutRawSsp?action=AttachFile&do=get&target=blinkDb.gif|blinkDb.gif|height="263",width="283",class="attachment"}}

The following image represents the file '''"blink_uncorrected / Avg: blink"''', which is the average of the 18 identified blinks '''before''' the SSP correction, [-200,+200]ms around the "blink" events. It shows the time series, the 2D topography at the peak (t = 0ms), and the mapping of these fields on the cortex, using the basic inverse model calculated in the previous tutorial.

{{http://neuroimage.usc.edu/brainstorm/Tutorials/TutRawSsp?action=AttachFile&do=get&target=blinkBefore.gif|blinkBefore.gif|height="177",width="533",class="attachment"}}

The next image represents the file '''"blink_ssp / Avg: blink"''', which is the exact same thing, but '''after''' the SSP correction. The artifact is gone. If you do this and you can still see a clear evoked component in the time series figure, the SSP correction was not efficient: either the artifact was not properly identified, or you should select different components using the "Select active projectors" window.

{{http://neuroimage.usc.edu/brainstorm/Tutorials/TutRawSsp?action=AttachFile&do=get&target=blinkAfter.gif|blinkAfter.gif|height="177",width="533",class="attachment"}}

=== Heartbeats ===
We can do the same thing with the heartbeats: epoch [-40,+40]ms around the "cardiac" events, average the trials, and review the event-related fields at three instants.

The peak of the '''P wave''' (t = '''-30ms'''):

{{http://neuroimage.usc.edu/brainstorm/Tutorials/TutRawSsp?action=AttachFile&do=get&target=cardiacP.gif|cardiacP.gif|height="178",width="534",class="attachment"}}

The peak of the '''QRS complex''' (t = '''0ms'''):

{{http://neuroimage.usc.edu/brainstorm/Tutorials/TutRawSsp?action=AttachFile&do=get&target=cardiacQRS.gif|cardiacQRS.gif|height="178",width="534",class="attachment"}}

The peak of the '''T wave''' (t = '''25ms'''):

{{http://neuroimage.usc.edu/brainstorm/Tutorials/TutRawSsp?action=AttachFile&do=get&target=cardiacT.gif|cardiacT.gif|height="178",width="534",class="attachment"}}

These peaks may look big, but they are in fact much smaller (300 fT for the average of 346 repetitions) than what we observed for the eye blinks (1500 fT for 18 repetitions).
You can notice that none of these topographies are similar to the components we obtained when calculating the SSP for the cardiac artifact. We don't know how to correct this artifact, but it doesn't look too bad in terms of contamination of the recordings, so we just leave it uncorrected.

= C3. Epoching and averaging =
''Authors: Francois Tadel, Elizabeth Bock, John C Mosher, Sylvain Baillet''

This tutorial fills the gap between the previous tutorial (review and clean continuous recordings) and the introduction tutorials (source analysis of evoked responses). It explains how to epoch the recordings, do some more pre-processing on the single trials, and then calculate their average. For this tutorial, we are going to use the protocol '''TutorialRaw''' created in the two previous tutorials: [[http://neuroimage.usc.edu/brainstorm/Tutorials/TutRawViewer|Review continuous recordings and edit markers]] and [[http://neuroimage.usc.edu/brainstorm/Tutorials/TutRawSsp|Artifact cleaning]]. If you have not followed these tutorials yet, please do it now.

== Import in database ==
The raw file viewer provides rapid access to the recordings, but most operations cannot be performed directly on the continuous files: most of the pre-processing functions, averaging, time-frequency analysis and statistical tests can only be applied to blocks of data that are saved in the database (i.e. "imported"). After reviewing the recordings and editing the event markers, you have to "import" the recordings into the database to proceed with the analysis.

Right-click on the file with the power line correction: '''Raw | notch(60Hz 120Hz 180Hz) > Import in database'''

{{http://neuroimage.usc.edu/brainstorm/Tutorials/TutRawAvg?action=AttachFile&do=get&target=importMenu.gif|importMenu.gif|class="attachment"}}

Warning: If you right-click on the subject node > Import EEG/MEG, and select the tutorial .ds dataset, you would be able to import blocks of the continuous file in the database, but you would not have access to the modified events list or the SSP operators. Therefore, you would not import data cleaned of ocular and cardiac artifacts. The modified events list and the signal space projectors are saved only in the "Link to raw file" in the Brainstorm database, not in the initial continuous file.

{{http://neuroimage.usc.edu/brainstorm/Tutorials/TutRawAvg?action=AttachFile&do=get&target=importOptions.gif|importOptions.gif|class="attachment"}}

Set the import options as they are represented in this figure:
 * '''Time window''': Time range of interest. We are interested in all the stimulations, so do not change this parameter; the default values always represent the entire file.
 * '''Split''': Useful to import continuous recordings without events, as successive chunks of the same duration. We do not need this here.
 * '''Events selection''': Check the "Use events" option, and select both "left" and "right". The number in parentheses represents the number of occurrences of each event in the selected time window (it would change if you modified the time definition at the top of the figure).
 * '''Epoch time''': Time segment that is extracted around each marker and saved in the database. Set it to [-100, +300] ms.
 * '''Use Signal Space Projections''': Use the active SSP projectors calculated during the previous pre-processing steps. Keep this option selected.
 * '''Remove DC Offset''': Check this option, and select: Time range: [-100, 0] ms.
For each epoch, this will compute the average of each channel over the baseline (pre-stimulus interval: -100ms to 0ms) and subtract it from the channel at every time instant (full epoch interval: [-100,+300]ms). This option removes the baseline value of each sensor, i.e. the continuous (DC) offset that is permanently added on top of the recordings of interest. In MEG, the sensors record variations around a somewhat arbitrary level, therefore this operation is always needed, unless it was already applied during one of the pre-processing steps. Note that a high-pass filter with a very low cutoff frequency (for instance 0.3Hz) can efficiently replace this DC correction. If a high-pass filter has already been applied to the recordings, you may want to unselect this option.

 * '''Resample recordings''': Keep this unchecked.
 * '''Create a separate folder for each epoch type''': If selected, a new folder is created for each event type (here, it will create two folders in the database: "left" and "right"). If not selected, all the epochs are saved in a single new folder that has the same name as the initial raw file.

Click on Import and wait. At the end, you are asked whether you want to ignore one epoch that is shorter than the others. This happens because the acquisition of the MEG signals was stopped less than 300ms after the last stimulus trigger was sent; therefore, the last epoch cannot have the full [-100,+300]ms time definition. This shorter epoch would prevent us from averaging all the trials easily. As we already have enough repetitions in this experiment, we can just ignore it. Answer '''Yes''' to this question to discard the last epoch. (A scripted equivalent of this import step is sketched a few paragraphs below.)

{{http://neuroimage.usc.edu/brainstorm/Tutorials/TutRawAvg?action=AttachFile&do=get&target=importShortEpoch.gif|importShortEpoch.gif|class="attachment"}}

Two new conditions containing two groups of trials appear in the database. To expand a group of trials and get access to the individual epochs, double-click on it or click on the "+" next to it. The SSP projectors calculated in the previous tutorial were applied on the fly when reading from the continuous file, so these epochs are clean of the eye blink and power line contamination. All the files available in the ''(Common files)'' folder are also shared for these new folders "left" and "right".

== Review the individual trials ==
Double-click on the first trial of the "left" condition. Then right-click on the figure > Navigator > '''Next data file''', or use the keyboard shortcut '''F3''', to jump to the next trial. This way you can quickly review all the trials and make sure that there is no obvious problem in the recordings. If you haven't manually reviewed all the recordings in continuous mode and marked all the bad segments, now is a good time to do it.

To mark a trial as bad manually, you have three methods:
 * Right-click on the trial file in the database > '''Reject trial'''
 * Right-click on the figure > '''Reject trial'''
 * Use the keyboard shortcut '''Control+B'''
 * To set all the trials of a group back as good: right-click on the trial group or the condition > Accept bad trials.

{{http://neuroimage.usc.edu/brainstorm/Tutorials/TutRawAvg?action=AttachFile&do=get&target=rejectManual.gif|rejectManual.gif|class="attachment"}}

When a trial is tagged as bad, its icon in the database explorer shows a red mark.
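As announced above, here is a scripted equivalent of the import step. This is a sketch only: the process name 'process_import_data_event' and its options are assumed from pipeline-generated scripts, and sFilesNotch is the filtered continuous file from the previous tutorial; check tutorial_raw.m for the exact syntax.

{{{
% Import [-100,+300]ms epochs around the "left" and "right" events,
% applying the active SSP projectors and removing the DC offset.
sFilesEpochs = bst_process('CallProcess', 'process_import_data_event', sFilesNotch, [], ...
    'subjectname', 'Subject01', ...
    'condition',   '', ...
    'eventname',   'left, right', ...
    'timewindow',  [], ...               % whole file
    'epochtime',   [-0.100, 0.300], ...  % epoch window, in seconds
    'createcond',  1, ...                % one folder per event type
    'usessp',      1, ...                % apply the active SSP projectors
    'baseline',    [-0.100, 0]);         % remove DC offset over the baseline
}}}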
{{http://neuroimage.usc.edu/brainstorm/Tutorials/TutRawAvg?action=AttachFile&do=get&target=rejectTree.gif|rejectTree.gif|class="attachment"}}

All the bad trials are ignored in the rest of the analysis, because they are ignored by the Process1 and Process2 tabs. If you drag and drop the 101 left trials into the Process1 list, with one trial marked as bad, the summary of the selected files at the top of the tab shows only 100 data files.

{{http://neuroimage.usc.edu/brainstorm/Tutorials/TutRawAvg?action=AttachFile&do=get&target=rejectProcess.gif|rejectProcess.gif|class="attachment"}}

To learn how to mark only individual channels as bad instead of the entire trial, please go back to the introduction tutorial [[http://neuroimage.usc.edu/brainstorm/Tutorials/TutExploreRecodings|Exploring the recordings]].

The process "'''Artifacts > Detect bad channels: peak-to-peak'''" can help you detect bad trials based on a peak-to-peak amplitude threshold. Drag and drop all the trials from the '''left''' condition into the Process1 list, and select this process. For each trial, it detects the channels whose peak-to-peak amplitude (maximum value - minimum value) is above the rejection threshold (too noisy) or below the detection threshold (no signal). You can define different thresholds for different channel types. The options are:
 * '''Time window''': Time range over which the peak-to-peak amplitude is calculated for each trial.
 * '''Thresholds''': Rejection and detection thresholds for each sensor type. For CTF recordings, use the "MEG gradio" category and ignore the "/cm (x 0.04)" indication, which is valid only for Neuromag systems. Set the '''MEG gradio''' thresholds to '''[0, 2000] fT''', to detect the MEG channels that have a peak-to-peak amplitude greater than 2000 fT.
 * '''Reject only the bad channels''': This would tag as bad only the detected channels, but would not tag the trial itself as bad.
 * '''Reject the entire trial''': This would tag the trial as bad if at least one channel is identified as bad.

{{http://neuroimage.usc.edu/brainstorm/Tutorials/TutRawAvg?action=AttachFile&do=get&target=rejectOptions.gif|rejectOptions.gif|class="attachment"}}

For this dataset, the quality of the recordings is remarkably good and we don't need to mark any trial as bad. So right-click on the group of "left" trials > Accept trials, to set them all back as good.

== Averaging ==
Drag and drop the two lists of trials (or the two condition folders) into the Process1 list, and run the process "'''Average > Average files'''" with the option "'''Average by condition (subject average)'''". This groups the trials by condition and calculates one average per condition and subject. The option "By condition (grand average)" would average each condition for all the subjects together, but as there is only one subject in this case, the result would be the same. The other options would generate one average file per subject ("by subject"), one average file per group of trials and per subject ("by trial group"), or only one average no matter what the input is ("everything"). The function to apply is the regular arithmetic average.

The option "'''Keep all the events from the individual epochs'''" would group all the event markers present in all the epochs and save them in the new averaged file. It can be useful to check the relative position of the artifacts or the subject responses, or to quickly detect some unwanted configuration, such as a subject who constantly blinks right after a visual stimulus.
We don't really need this here; leave this option '''unselected'''.

{{http://neuroimage.usc.edu/brainstorm/Tutorials/TutRawAvg?action=AttachFile&do=get&target=avgOptions.gif|avgOptions.gif|class="attachment"}}

It creates two files: "'''Avg: left'''" and "'''Avg: right'''".

{{http://neuroimage.usc.edu/brainstorm/Tutorials/TutRawAvg?action=AttachFile&do=get&target=avgDb.gif|avgDb.gif|class="attachment"}}

Double-click on the "'''Avg: left'''" file to display the MEG recordings for this file (or right-click > MEG > Display time series). It shows a very typical and clean evoked response, with a very high signal-to-noise ratio.

{{http://neuroimage.usc.edu/brainstorm/Tutorials/TutRawAvg?action=AttachFile&do=get&target=avgTs.gif|avgTs.gif|class="attachment"}}

Just for the record, this is what it would look like if we had selected the option "Keep all the events from the individual epochs". The summary of the markers of all the epochs is: 37 heartbeats and 2 blinks.

{{http://neuroimage.usc.edu/brainstorm/Tutorials/TutRawAvg?action=AttachFile&do=get&target=avgTsEvents.gif|avgTsEvents.gif|class="attachment"}}

== Stimulation artifact ==
Now zoom around 4ms in time (mouse wheel, or two-finger swipe up/down on MacBooks) and in amplitude (Control + zoom). Notice this very strong and sharp peak followed by fast oscillations. This is an artifact due to the electric stimulation device. In the stimulation setup, the stimulus trigger is initiated by the stimulation computer and sent to the electric stimulator. This stimulator generates an electric pulse that is sent to electrodes on the subject's wrists. This electric current flows in the wires and in the body, so it also generates a small magnetic field that is captured by the MEG sensors. This is what we observe here at 4ms.

This means that whenever we decide to send an electric stimulus, there is a '''4ms delay''' before the stimulus is actually received by the subject, due to all the electronics in between, which means that everything is shifted by 4ms in the recordings. These hardware delays are unavoidable and should be quantified precisely before starting to scan subjects or patients.

{{http://neuroimage.usc.edu/brainstorm/Tutorials/TutRawAvg?action=AttachFile&do=get&target=avgStim.gif|avgStim.gif|class="attachment"}}

Then you have two options: either you remember it and subtract the delay when you publish your results (there is a risk of forgetting about it), or you fix the files right now by correcting the time reference in all the files (there is a risk of forgetting to fix all the subjects/runs in the same way). Let's illustrate this second method now.

Close all the figures, clear the Process1 list (right-click > Clear, or select + Delete key), and drag and drop all the trials and all the averages (or simply the two left and right condition folders). Select the process "'''Pre-process > Add time offset'''". Set the time offset to '''-4.2 ms''', to bring this stimulation peak back to 0ms. Also select the "'''Overwrite input files'''" option, to avoid too many duplications of unnecessary files in the database.

{{http://neuroimage.usc.edu/brainstorm/Tutorials/TutRawAvg?action=AttachFile&do=get&target=offsetOptions.gif|offsetOptions.gif|class="attachment"}}

This fixes the individual trials and the averages. Double-click on the "'''Avg: left'''" file again to observe that the stimulation artifact now occurs at exactly 0ms, which means that t=0s represents the time when the electric pulse is received by the subject.
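The averaging and the time offset correction can be scripted in the same way. This is only a sketch: 'process_average' exists with the options below, but the numeric code for the averaging mode must be checked against your Brainstorm version (it should correspond to "Average by condition (subject average)"), and the name 'process_timeoffset' for the "Add time offset" process is also an assumption to verify in tutorial_raw.m.

{{{
% Average the trials, one average per condition and per subject.
sFilesAvg = bst_process('CallProcess', 'process_average', sFilesEpochs, [], ...
    'avgtype',    3, ...   % code of "By condition (subject average)": verify in your version
    'avg_func',   1, ...   % 1 = arithmetic average: mean(x)
    'keepevents', 0);
% Shift the time axis by -4.2 ms so that t=0 is when the pulse reaches the wrist.
bst_process('CallProcess', 'process_timeoffset', [sFilesEpochs, sFilesAvg], [], ...
    'offset',    -0.0042, ...   % time offset, in seconds
    'overwrite', 1);
}}}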
{{http://neuroimage.usc.edu/brainstorm/Tutorials/TutRawAvg?action=AttachFile&do=get&target=offsetDone.gif|offsetDone.gif|class="attachment"}}

== Filters for visualization ==
As introduced in the previous tutorials, you can add a visualization filter to make the traces and the topographies look smoother. We suggest in this case a low-pass filter at 120Hz: it gives very smooth traces, while all the waves we are interested in remain clearly visible in the MEG time series figure.

{{http://neuroimage.usc.edu/brainstorm/Tutorials/TutRawAvg?action=AttachFile&do=get&target=onlineFilter.gif|onlineFilter.gif|class="attachment"}}

== Explore the average ==
Open the time series for "'''Avg: left'''". Then press '''Control+T''' to see, on the side, a spatial topography at the current time point. Observe what is happening between 0ms and 100ms: start at 0ms and play it like a movie using the left and right arrow keys, to follow the brain activity millisecond by millisecond:
 * '''16 ms''': First response, the sensory information reaches the right somatosensory cortex (S1)
 * '''30 ms''': Stronger and more focal activity in the same region, but with a source oriented in the opposite direction (S1)
 * '''60 ms''': Activity appearing progressively in a more lateral region (S1 + S2)
 * '''70 ms''': Activity in the same area in the left hemisphere (S2 left + S2 right)

{{http://neuroimage.usc.edu/brainstorm/Tutorials/TutRawAvg?action=AttachFile&do=get&target=avg16.gif|avg16.gif|class="attachment"}} {{http://neuroimage.usc.edu/brainstorm/Tutorials/TutRawAvg?action=AttachFile&do=get&target=avg30.gif|avg30.gif|class="attachment"}} {{http://neuroimage.usc.edu/brainstorm/Tutorials/TutRawAvg?action=AttachFile&do=get&target=avg60.gif|avg60.gif|class="attachment"}} {{http://neuroimage.usc.edu/brainstorm/Tutorials/TutRawAvg?action=AttachFile&do=get&target=avg70.gif|avg70.gif|class="attachment"}}

== Source analysis ==
Let's reproduce the same observations at the source level. The concepts related to source estimation are not discussed here; for more information, refer to the introduction tutorials #6 to #8. First, '''delete''' all the files related to the source estimation calculated in the previous tutorials, available in the ''(Common files)'' folder: the head model, the noise covariance and the inverse model. We can now provide a better estimate of the noise (which affects the inverse model), and we have defined new SSP operators (which affect the head model).

=== Head model ===
Right-click on any node that contains the channel file (including the channel file itself), and select "'''Compute head model'''". Leave all the default options: cortex source space, and overlapping spheres. The lead field matrix is saved in the file "Overlapping spheres" in ''(Common files)''.

{{http://neuroimage.usc.edu/brainstorm/Tutorials/TutRawAvg?action=AttachFile&do=get&target=forward.gif|forward.gif|class="attachment"}}

=== Noise covariance matrix ===
To estimate the sources properly, we need an estimation of the noise level for each sensor. A good way to do this is to compute the covariance matrix of the concatenation of the baselines from all the trials in both conditions.
 * Select the two groups of trials (right and left) at the same time. To do this, hold the Control key (or Cmd on Macs) and click successively on the Right and the Left trial lists.
 * Right-click on one of them and select: Noise covariance > Compute from recordings.
 * Set the baseline to '''[-104,-5] ms''', to consider as noise everything that happens before the beginning of the stimulation artifact. Leave the other options at their default values. Click on OK.
 * This operation computes the noise covariance matrix based on the baseline of all the good trials (199 files). The result is stored in a new file "Noise covariance" in the ''(Common files)'' folder.

{{http://neuroimage.usc.edu/brainstorm/Tutorials/TutRawAvg?action=AttachFile&do=get&target=noisecov.gif|noisecov.gif|class="attachment"}}

=== Inverse model ===
Right-click on ''(Common files)'', on the head model or on the subject node, and select "'''Compute sources'''". A shared inversion kernel is created in ''(Common files)''; a link node is now visible for each recordings file (single trials and averages). For more information about what these links mean and the operations performed to display them, please refer to the [[http://neuroimage.usc.edu/brainstorm/Tutorials/TutSourceEstimation|tutorial #8 "Source estimation"]].

{{http://neuroimage.usc.edu/brainstorm/Tutorials/TutRawAvg?action=AttachFile&do=get&target=inverseDb.gif|inverseDb.gif|class="attachment"}}

=== Explore the sources ===
Right-click on the sources for the left average > Cortical activations > '''Display on cortex''', or simply double-click on the file. Go to the main response peak at '''t = 30ms''', and increase the '''amplitude threshold''' to '''100%'''. You see strong activity around the right primary somatosensory cortex, but there are still many brain areas shown in plain red (value >= 100% of the colorbar maximum, ~280 pA.m).

{{http://neuroimage.usc.edu/brainstorm/Tutorials/TutRawAvg?action=AttachFile&do=get&target=inverse100.gif|inverse100.gif|class="attachment"}}

The colorbar maximum is set to an estimation of the maximum source amplitude over time. This estimation is done by finding the time instant with the highest global field power on the sensors (green GFP trace), estimating the sources for this time only, and then taking the maximum source value at this time point. It is a very fast estimate, but not a very reliable one; we use it because calculating the full source matrix (all the time points x all the sources) just to find the maximum value would take too long. In this case, the real maximum is probably higher than what is used by default.

To redefine the colorbar maximum: right-click on the 3D figure > '''Colormap: sources > Set colorbar max value'''. Set the maximum to '''480 pA.m''', or any other value that leads to just one very focal spot on the brain at 30ms.

{{http://neuroimage.usc.edu/brainstorm/Tutorials/TutRawAvg?action=AttachFile&do=get&target=inverse480.gif|inverse480.gif|class="attachment"}}

Go back to the first small peak at '''t=16ms''', and lower the threshold to '''10%'''.
Then do what you did at the sensor level: follow the information processing in the brain until 100ms, millisecond by millisecond, adapting the threshold and the camera position when needed:
 * '''16 ms''' (top-left): First response, primary somatosensory cortex (S1 right)
 * '''30 ms''' (top-right): S1 right
 * '''60 ms''' (bottom-left): Secondary somatosensory cortex (S2 right)
 * '''70 ms''' (bottom-right): Activity ipsilateral to the stimulus (S2 left + S2 right)

{{http://neuroimage.usc.edu/brainstorm/Tutorials/TutRawAvg?action=AttachFile&do=get&target=sources16.gif|sources16.gif|class="attachment"}} {{http://neuroimage.usc.edu/brainstorm/Tutorials/TutRawAvg?action=AttachFile&do=get&target=sources30.gif|sources30.gif|class="attachment"}} {{http://neuroimage.usc.edu/brainstorm/Tutorials/TutRawAvg?action=AttachFile&do=get&target=sources60.gif|sources60.gif|class="attachment"}} {{http://neuroimage.usc.edu/brainstorm/Tutorials/TutRawAvg?action=AttachFile&do=get&target=sources70.gif|sources70.gif|class="attachment"}}

== Scripting ==
The following script from the Brainstorm distribution reproduces the analysis presented in this tutorial page: '''brainstorm3/toolbox/script/tutorial_raw.m'''
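As an illustration of what that script contains, this is roughly how the source analysis steps above translate to the scripting interface. This is only a sketch: the process names ('process_noisecov', 'process_headmodel', 'process_inverse') exist in Brainstorm, but their option lists change between versions, so use tutorial_raw.m or a script generated by the pipeline editor as the reference; sFilesEpochs refers to the imported epochs from the earlier sketch.

{{{
% Noise covariance from the pre-stimulus baselines of all the good trials.
bst_process('CallProcess', 'process_noisecov', sFilesEpochs, [], ...
    'baseline', [-0.104, -0.005], ...   % same baseline as in the GUI, in seconds
    'dcoffset', 1, ...                  % remove the DC offset of each block
    'identity', 0);                     % 0 = full noise covariance matrix
% Forward model: cortex source space, overlapping spheres for MEG.
bst_process('CallProcess', 'process_headmodel', sFilesEpochs, [], ...
    'sourcespace', 1, ...               % 1 = cortex surface
    'meg',         3);                  % 3 = overlapping spheres
% Minimum-norm sources (shared inversion kernel), with default options.
bst_process('CallProcess', 'process_inverse', sFilesEpochs, []);
}}}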