MEG median nerve tutorial (CTF)

Authors: Francois Tadel, Elizabeth Bock, John C Mosher, Sylvain Baillet

The dataset presented in this tutorial was used in the previous generation of introduction tutorials. We kept it on the website as an additional illustration of data analysis, and as a comparison with median nerve experiments recorded with other MEG systems (Elekta-Neuromag, Yokogawa). No new concepts are presented in this tutorial. For in-depth explanations of the interface and theoretical foundations, please refer to the introduction tutorials.

License

This tutorial dataset (MEG and MRI data) remains the property of the MEG Lab, McConnell Brain Imaging Center, Montreal Neurological Institute, McGill University, Canada. Its use and transfer outside the Brainstorm tutorial, e.g. for research purposes, are prohibited without written consent from the MEG Lab.

If you reference this dataset in your publications, please acknowledge its authors (Elizabeth Bock, Esther Florin, Francois Tadel and Sylvain Baillet) and cite Brainstorm as indicated on the website. For questions, please contact us through the forum.

Presentation of the experiment

Experiment

  • One subject, one acquisition run of 6 minutes
  • Subject stimulated using Digitimer Constant Current Stimulator (model DS7A)
  • The run contains 200 electric stimulations randomly distributed between left and right:
    • 102 stimulations of the left hand
    • 98 stimulations of the right hand
  • Inter-stimulus interval: jittered between [1500, 2000]ms
  • Stimuli generated using PsychToolBox on Windows PC (TTL pulse generated with the parallel port connected to the Digitimer via the rear panel BNC)

MEG acquisition

  • Acquisition at 1200Hz, with a CTF 275 system, subject in seating position

  • Recorded at the Montreal Neurological Institute in November 2011
  • Anti-aliasing low-pass filter at 300Hz, files saved with 3rd order gradient compensation
  • Recorded channels (302):
    • 1 Stim channel indicating the presentation times of the stimuli: UPPT001 (#1)
    • 26 MEG reference sensors (#3-#28)
    • 272 MEG axial gradiometers (#29-#300)
    • 1 ECG bipolar (#301)
    • 1 vertical EOG bipolar (#302)
    • 1 unused channel (#2)
  • 1 dataset: subj001_somatosensory_20111109_01_AUX-f.ds

Stimulation delays

  • Delay #1: 4 samples @ 1200Hz = 3.3ms. The CTF MEG system applies an anti-aliasing filter (at 1/4 of the sampling rate) at the time of recording. This filter is applied only to the MEG channels, not to the stimulus channels. Therefore the MEG data have a 4-sample delay with respect to the stimulus channel.

Head shape and fiducial points

  • 3D digitization using a Polhemus Fastrak device (subj00111092011.pos)

  • Position of the anatomical references used for these recordings:
    the locations of the CTF head localization coils; pictures available in the file fiducials.jpg

  • Around 110 head points distributed on the hard parts of the head (no soft tissues)

Subject anatomy

  • Subject with 1.5T MRI
  • Processed with FreeSurfer 5.2

Download and installation

Import the anatomy

Review the recordings

Open the file

Right-click on the data file > MEG (all) > Display time series.

displayTsMenu.gif

You can see new information in the tab "Record" and a figure showing the recordings.

rawPanel.gif

Identify the unwanted frequencies

Drag and drop the "Link to raw file" into the Process1 tab, click on the [Run] button to open the Pipeline editor window, and select the process "Frequency > Power spectrum density (Welch)". This estimates the power spectrum of the signal using Welch's method. Set the time window to [0,50]s and the window length to 4s, with an overlap of 50%. This is more than enough to get a good estimate of the spectrum (an average of the Fourier transforms of 24 windows of 4800 samples each). Leave the other options at their default values.
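For readers curious about what this computation does under the hood, here is a minimal numpy-only sketch of Welch's method with the same parameters (4 s Hann windows, 50% overlap). Brainstorm is MATLAB-based and its implementation differs in options and normalization; this Python fragment is only an illustration, and the signal below is simulated.

```python
import numpy as np

def welch_psd(x, fs, win_len_s=4.0, overlap=0.5):
    """Estimate the power spectrum density by Welch's method (Hann window)."""
    n = int(win_len_s * fs)              # samples per window: 4800 at 1200 Hz
    step = int(n * (1 - overlap))        # 50% overlap -> 2400-sample step
    win = np.hanning(n)
    scale = fs * (win ** 2).sum()        # standard PSD normalization
    periodograms = []
    for start in range(0, len(x) - n + 1, step):
        seg = x[start:start + n] * win
        periodograms.append(np.abs(np.fft.rfft(seg)) ** 2 / scale)
    psd = np.mean(periodograms, axis=0)  # average over the windows
    psd[1:-1] *= 2                       # fold into a one-sided spectrum
    return np.fft.rfftfreq(n, 1 / fs), psd

# A simulated 50 s recording with a 60 Hz "power line" sinusoid:
fs = 1200.0
t = np.arange(0, 50, 1 / fs)
x = np.sin(2 * np.pi * 60 * t) + 0.1 * np.random.default_rng(0).standard_normal(t.size)
freqs, psd = welch_psd(x, fs)
print(round(freqs[np.argmax(psd)], 3))   # the spectrum peaks at 60.0 Hz
```

With 50 s of data, 4800-sample windows and a 2400-sample step, the loop indeed averages 24 periodograms, as stated above.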

psdRun.gif

Double-click on the "Power (MEG)" file that was created under the "Link to raw file" in the database explorer. This shows the estimate of the power spectrum for the first 50 seconds of the continuous file, for all the sensors, on a logarithmic scale. You can identify four peaks, at 60Hz, 120Hz, 180Hz and 300Hz. The first three are related to the power lines (the recordings were acquired in Canada, where electricity is delivered at 60Hz, plus its harmonics) and are expected to come from pure sinusoidal components in the signal. The last one is an artifact of the 300Hz low-pass filter that was applied to the recordings at acquisition time. We are going to ignore this one, as it is probably more complex and spans several frequency bins.

psdBefore.gif

Remove: 60Hz and harmonics

Leave the file "Link to raw file" in the Process1 list and now run "Pre-process > Notch filter".

sinRemoval.gif

The operation creates a new "Link to raw file" entry in the database, pointing to a new CTF dataset in the same folder as the original continuous file. To check where this file was created: right-click on the file "Raw | notch(60Hz 120Hz 180Hz)" > File > Show in file explorer. This file has the exact same properties as the original file, including the SSP projectors; only the values of the MEG sensors were updated.
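Brainstorm's notch filter is implemented as an IIR filter, but the idea of removing pure sinusoidal components can be illustrated more simply by least-squares regression of sine/cosine pairs at the line frequencies. The following numpy sketch is a conceptual illustration on simulated data, not Brainstorm's actual algorithm:

```python
import numpy as np

def remove_sinusoids(x, fs, line_freqs):
    """Project out stationary sinusoids at the given frequencies (least squares)."""
    t = np.arange(len(x)) / fs
    cols = []
    for f in line_freqs:
        cols.append(np.sin(2 * np.pi * f * t))
        cols.append(np.cos(2 * np.pi * f * t))
    A = np.column_stack(cols)                    # regressors: sin/cos pairs
    coef, *_ = np.linalg.lstsq(A, x, rcond=None)
    return x - A @ coef                          # residual = signal minus sinusoids

fs = 1200.0
t = np.arange(0, 10, 1 / fs)
brain = np.random.default_rng(1).standard_normal(t.size)   # simulated "brain" signal
noisy = brain + 5 * np.sin(2 * np.pi * 60 * t + 0.3)       # + 60 Hz contamination
clean = remove_sinusoids(noisy, fs, [60, 120, 180])
print(np.std(clean - brain) < 0.1)                         # line noise removed: True
```

This works well only for perfectly stationary sinusoids; a true notch filter is more robust when the line-noise amplitude drifts over time.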

sinRemovalDb.gif

Alternatives

If the notch filter does not give satisfying results, you can use two other processes:

Evaluate the results

Now run the same frequency analysis: process "Frequency > Power spectrum density (Welch)", time window = [0,50]s, window length = 4000ms, overlap = 50%. Double click on the new "Power (MEG)" file. As expected, the three first peaks due to the power lines are gone.

psdAfter.gif

You can run a few other pre-processing operations directly on the continuous files, including a band-pass filter. Frequency filters and sinusoid removals should be applied at the earliest stages of the analysis, before epoching the recordings. These operations do not behave well near the beginning and the end of the signals, where they may generate important artifacts. It is therefore much more accurate to filter the entire recordings from the original continuous file at once, rather than filtering small epochs after importing them into the database.

From now on, we are only going to work on this clean file "Raw | notch(60Hz 120Hz 180Hz)".

Signal Space Projection

The Signal-Space Projection (SSP) is one approach to rejection of external disturbances. Here is a short description of the method by Matti Hämäläinen, from the MNE 2.7 reference manual, section 4.16.

Unlike many other noise-cancellation approaches, SSP does not require additional reference sensors to record the disturbance fields. Instead, SSP relies on the fact that the magnetic field distributions generated by the sources in the brain have spatial distributions sufficiently different from those generated by external noise sources. Furthermore, it is implicitly assumed that the linear space spanned by the significant external noise patterns has a low dimension.

Without loss of generality we can always decompose any n-channel measurement b(t) into its signal and noise components as:

b(t) = b_s(t) + b_n(t)

Further, if we know that b_n(t) is well characterized by a few field patterns b_1...b_m, we can express the disturbance as

b_n(t) = U c_n(t) + e(t) ,

where the columns of U constitute an orthonormal basis for b_1...b_m, c_n(t) is an m-component column vector, and the error term e(t) is small and does not exhibit any consistent spatial distributions over time, i.e., C_e = E{e e^T} = I. Subsequently, we will call the column space of U the noise subspace. The basic idea of SSP is that we can actually find a small basis set b_1...b_m such that the conditions described above are satisfied. We can now construct the orthogonal complement operator

P = I - U U^T

and apply it to b(t), yielding

P b(t) ≈ P b_s(t) ,

since P b_n(t) = P U c_n(t) ≈ 0. The projection operator P is called the signal-space projection operator and generally provides considerable rejection of noise, suppressing external disturbances by a factor of 10 or more. The effectiveness of SSP depends on two factors:

  1. The basis set b_1...b_m should be able to characterize the disturbance field patterns completely, and

  2. The angles between the noise subspace spanned by b_1...b_m and the signal vectors b_s(t) should be as close to π/2 as possible.

If the first requirement is not satisfied, some noise will leak through because P b_n(t) ≠ 0. If any of the brain signal vectors b_s(t) is close to the noise subspace, not only the noise but also the signal will be attenuated by the application of P and, consequently, there might be little gain in signal-to-noise ratio.

Since the signal-space projection modifies the signal vectors originating in the brain, it is necessary to apply the projection to the forward solution in the course of inverse computations.
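The algebra above can be checked numerically in a few lines. This numpy sketch uses made-up dimensions (8 channels, a one-dimensional noise subspace); it only illustrates the equations, not any Brainstorm code:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8                                    # number of "channels"
b1 = rng.standard_normal(n)              # one noise field pattern
U, _ = np.linalg.qr(b1[:, None])         # orthonormal basis of the noise subspace
P = np.eye(n) - U @ U.T                  # orthogonal complement: P = I - U U^T

b_s = rng.standard_normal(n)             # a brain signal vector
b_n = 3.0 * U[:, 0]                      # a noise vector inside the subspace
b = b_s + b_n                            # measurement: b(t) = b_s(t) + b_n(t)

print(np.allclose(P @ b_n, 0))           # P b_n(t) = 0: disturbance removed (True)
print(np.allclose(P @ b, P @ b_s))       # P b(t) = P b_s(t) (True)
```

Note that P also removes the component of b_s(t) lying inside the noise subspace, which is why the angle condition #2 above matters.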

For more information on the SSP method, please consult the following publications:

Identify the artifacts

The first step is to identify several repetitions of the artifact (the vectors b_1...b_m). We need to set markers in the recordings that indicate when the events that we want to correct for occur. To help with this task, it is recommended to always record with bipolar electrodes the activity of the eyes (electro-oculogram or EOG, vertical and horizontal), the heart (electro-cardiogram or ECG), and possibly other sources of muscular contaminations (electromyogram or EMG). In this example, we are going to use the ECG and vertical EOG traces to mark the cardiac activity and the eye blinks. Two methods can be used, manual or automatic.

EOG/ECG channels

Manual marking

Create a new category of markers "blink_manual", using the menu Events > Add group. Select this new group. Review the file, and mark the peaks you observe on the vertical EOG trace, using the Ctrl+E keyboard shortcut. Do that for a few eye blinks.

markEog.gif

You could repeat the same operation for all the blinks, then for all the ECG peaks, then jump to the next chapter of the tutorial and compute the SSP. But this would be needlessly time-consuming, as there is a process that does it for you automatically. However, it is good to remember how to do it manually, because you may face cases where you don't have clean ECG/EOG traces, or where you want to correct for another type of artifact.

Automatic detection: EOG

In the Record tab, select the menu: SSP > Detect eye blinks. It automatically opens the pipeline editor, with the process "Detect eye blinks" selected:

sspMenu.gif detectEog.gif

Click on Run. After the process stops, you can see two new event categories "blink" and "blink2" in the Record tab. You can review a few of them, to make sure that they really indicate the EOG events. In the Record tab, click on the "blink" event category, then on a time occurrence to jump to it in the MEG and Misc time series figures.

Two types of events are created because this algorithm not only detects specific events in a signal, it also classifies them by shape. If you go through all the events detected in the two categories, you will see that the "blink" events are all round bumps, typical of eye blinks. In the "blink2" category, the morphologies are less uniform: it mixes small blinks with ramps or step functions followed by sharp drops that could indicate eye saccades. The saccades can be observed on the vertical EOG, but for a better characterization of them you should also record the horizontal EOG. The detection of saccades should be performed with a different set of parameters, using the process "Detect custom events" introduced later in this chapter.

detectEogDone.gif

Automatic detection: ECG

Now do the same thing for the heartbeats. In the Record tab, select the menu "SSP > Detect heartbeats". Configure the process to use the channel EEG057 (name of the ECG channel), and leave the other options to the default values.

sspMenu1.gif detectEcg.gif

Click on Run. After the process stops, you can see a new event category "cardiac" in the Record tab, with 346 occurrences. You can check a few of them, to make sure that the "cardiac" markers really indicate the ECG peaks, and that there are not too many peaks that are skipped.

detectEcgDone.gif

Automatic detection: Custom

The two previous processes are shortcuts for a generic process, "Detect custom events". We are not going to use it here, but it is interesting to introduce it to understand how the blink and heartbeat detections work. The logic is the following:

detectCustom.gif
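The core of such a detection process can be sketched as a simple amplitude threshold with a refractory period. This Python fragment is a simplified illustration on simulated data; the real process also band-pass filters the channel first and exposes many more options:

```python
import numpy as np

def detect_events(x, fs, threshold=2.0, min_sep_s=0.5):
    """Mark samples whose amplitude exceeds `threshold` standard deviations,
    keeping at most one event per burst (events at least `min_sep_s` apart)."""
    z = np.abs(x - x.mean()) / x.std()
    events, last = [], -np.inf
    for i in np.flatnonzero(z > threshold):
        if (i - last) / fs >= min_sep_s:
            events.append(i)
            last = i
    return np.array(events) / fs         # event times in seconds

# Simulated EOG: slow background plus three blink-like bumps at 2, 5 and 8 s
fs = 1200.0
t = np.arange(0, 10, 1 / fs)
x = 0.1 * np.sin(2 * np.pi * t)
for blink_t in (2.0, 5.0, 8.0):
    x += np.exp(-((t - blink_t) ** 2) / (2 * 0.05 ** 2))
print(len(detect_events(x, fs)))         # 3 events detected
```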

Calculate the projectors

Methodology

We now have 346 examples of heartbeats, 18 examples of blinks, and 12 examples of saccades. These are sufficient numbers of repetitions to calculate reliable signal-space projections. The logic of the computation is the following:

  1. Take a small time window around each marker to capture the full effect of the artifact, plus some clean brain signals before and after. The default time window is [-200,+200]ms for eye blinks, and [-40,+40]ms for the heartbeats.
  2. Filter the MEG or EEG signals in a frequency band of interest, in which the artifact is the most visible.
  3. Concatenate all these time blocks into a big matrix A = [b_1, ..., b_m]

  4. Compute the singular value decomposition of this matrix A: [U,S,V] = svd(A, 'econ')

  5. The singular vectors U_i with the highest singular values S_i form an orthonormal basis of the artifact subspace that we want to subtract from the recordings. The software selects by default the first vector only, but it is possible to redefine the selected components interactively.

  6. Calculate the projection operator: P_i^⊥ = I - U_i U_i^T

  7. Apply this projection to the MEG or EEG recordings F: F = P_i^⊥ F

  8. The process has to be repeated separately several times:

    • for each sensor type (EEG, MEG magnetometers, MEG gradiometers) and
    • for each artifact, starting with the one that leads to the strongest contamination in amplitude on the sensors (in this case: the eye blinks)
    • If you have difficulties removing one artifact or the other, you may try to process them in a different order. You may also try removing some of the artifact markers in the case of co-occurring artifacts. If a lot of the blinks happen at the same time as heartbeats, you may end up calculating projectors that mix both effects, but that do not remove efficiently one or the other. In this case, remove all the markers that happen in the segments contaminated by multiple artifacts.
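Steps #1 to #7 can be sketched with numpy on synthetic data (the channel count, epoch length and artifact topography below are made up for the illustration; Brainstorm's MATLAB implementation adds filtering, weighting and many options):

```python
import numpy as np

rng = np.random.default_rng(3)
n_chan, n_times, n_art = 20, 100, 50
topo = rng.standard_normal(n_chan)       # fixed spatial pattern of the artifact
topo /= np.linalg.norm(topo)

# Steps #1-#3: concatenate the artifact epochs into A = [b_1, ..., b_m]
epochs = [np.outer(topo, 5 * rng.standard_normal(n_times))
          + 0.5 * rng.standard_normal((n_chan, n_times))
          for _ in range(n_art)]
A = np.concatenate(epochs, axis=1)

# Steps #4-#5: SVD; normalized singular values = the percentages in the GUI
U, S, _ = np.linalg.svd(A, full_matrices=False)
percent = 100 * S / S.sum()
print(percent[0] > 3 * percent[1])       # the first component dominates: True

# Steps #6-#7: projector from the first singular vector, applied to the data
Ui = U[:, :1]
P = np.eye(n_chan) - Ui @ Ui.T
F = topo[:, None] * np.sin(np.arange(n_times))   # pure-artifact "recordings"
print(np.abs(P @ F).max() < 0.05)        # the artifact topography is removed: True
```

Because the artifact here has a single, stable spatial pattern, one component captures it almost entirely; real artifacts may need two or three.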

Steps #1 to #5 are done automatically by the "SSP > Compute SSP" processes in the Record tab: the results, the vectors U_i, are saved in the database link ("Link to raw file") and in the channel file.

Steps #6 and #7 are calculated on the fly when reading a block of recordings from the continuous file: when using the raw viewer, running a process on the continuous file, or importing epochs in the database.

Step #8 is the manual control of the process. Take some time to understand what you are trying to remove and how to do it. Never trust blindly any fully automated artifact cleaning algorithm, always check manually what is removed from the recordings, and do not give up if the first results are not satisfying.

We have multiple types of artifacts to correct for, and it can be tricky to decide which one to correct first. If you correct for artifact A and then for artifact B, you may get different results than if you correct for B first and then A. We propose here a default order for the two artifacts present in all the recordings: the blinks and the heartbeats.

If two artifacts happen at the same time, the SSP projectors calculated for the blink may contain some of the heartbeat topography and vice versa. When trying to remove the second artifact, we might not be able to isolate it clearly anymore. To avoid this behavior, we should try to avoid overlapping artifacts. Because the subject's heart beats at least once per second, there is a high chance of recording a heartbeat next to any eye blink, so a significant number of the blinks will be contaminated with heartbeats. If we decided to remove all the blink events that include some heart activity, we might not have enough blink samples left to build correct SSP projectors. To isolate these two common contaminations correctly, we recommend the following procedure:

  1. Remove all the cardiac peaks that occur next to a blink (we can afford to remove half of the heartbeats; we will still have enough data for calculating the projectors).
  2. Calculate the SSP to remove the cardiac artifact.

  3. Calculate the SSP to remove the eye blinks, based on the recordings already corrected for the cardiac artifact.

Remove simultaneous events

If you closed the file viewer, open the MEG recordings again:

In the Record tab, select the menu "SSP > Remove simultaneous". Set the options:

sspMenuRemove.gif removeSimult.gif

After executing this process, the number of "cardiac" events drops from 346 to 333. The deleted heartbeats were all less than 250ms away from a blink.
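The logic of this process is easy to express: drop every cardiac event that falls within 250 ms of a blink. The event times below are made up for the illustration:

```python
import numpy as np

def remove_simultaneous(cardiac, blinks, window_s=0.25):
    """Keep only the cardiac events farther than `window_s` from any blink."""
    cardiac, blinks = np.asarray(cardiac), np.asarray(blinks)
    # distance from each cardiac event to the nearest blink
    dist = np.abs(cardiac[:, None] - blinks[None, :]).min(axis=1)
    return cardiac[dist > window_s]

cardiac = [1.0, 2.1, 3.0, 4.05, 5.2]     # event times in seconds (made up)
blinks = [2.0, 4.0]
print(remove_simultaneous(cardiac, blinks))   # keeps the events at 1.0, 3.0, 5.2 s
```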

Compute SSP: Heartbeats

In the Record tab, select the menu "SSP > Compute SSP: Heartbeats". Set the options:

sspEogFile.gif sspEcg.gif

After the computation is done, a new figure is displayed, that lets you select the active projectors. On the left you have the projector categories (matrix U), where each row represents the result of one execution of a "Compute SSP" process (usually one for each sensor type and each artifact). On the right, you can activate independently the different components for the selected projector category. Each entry represents the projector created from a column vector U_i (a singular vector of the SVD decomposition), and the percentage is the singular value for this vector, normalized for this decomposition (percentage = S_i / sum(S_i)). When a projector is selected, it is applied to the recordings on the fly when reading from the continuous file, i.e. the corresponding spatial component is removed from the recordings.

The percentage indicates the amount of signal captured by this projector during the decomposition. The higher it is, the more representative the projector is of the artifact recordings used to calculate it. In good cases, you would typically see one to three components with values significantly higher than the others.

By default, the software selects the first component and leaves all the others unselected. Do not trust this automatic selection, you should always review manually the components that you want to remove.

A list of buttons is available to manipulate the projector categories: load, save, rename, delete, and display component topography. The last one is one of the most useful tools for understanding these results. Select the category ("cardiac"), click on the first component, then click on the toolbar button to display its topography. Do it again for components #2 and #3 (the components don't need to be active for that).

The other button [No interp] displays the same results without the magnetic field smoothing applied by default to MEG topographies. It may help in understanding some difficult cases.

sspEcgCheck.gif

sspEcgComp.gif

None of these topographies can be clearly related to a cardiac component. This can have two interpretations: either the cardiac artifact is not very strong for this subject and the influence of the heart activity on the MEG sensors is completely buried in the noise or the brain signals, or our characterization of the artifact was not good enough and we should refine it, for instance by selecting smaller time windows around the cardiac peaks. Here, it is probably due to the subject's morphology: some people generate strong cardiac artifacts in the MEG, some don't.

In this case, it is safer to unselect this "cardiac" category, rather than removing randomly spatial components that we are not sure to identify.

Let's try the same thing with the eye blinks. Select the process "SSP > Compute SSP: Eye blinks". Run it on the event type "blink", which indicates the peaks of the EOG signal.

sspEogFile.gif sspEog.gif

In this specific case, there is one singular value much higher than the others (21%). The first component is most likely a good representation of the artifact we are trying to remove.

sspEogSelect.gif

Select the blink category, and display the topographies for the first three components. Component #1 is clearly related to the ocular activity occurring during the blinks; the others are not as clear. Conclusion: the default proposition was good (only component #1 selected). Setting this first projector as active removes from the MEG signals the contributions related to the spatial topography we can see in the first figure, i.e. a spatial distribution of sensor values related specifically to the eye blinks.

sspEogComp.gif

While the recordings are displayed in the file viewer, the selection of the active projectors interactively updates the recordings. Go to the first detected "blink" event (t=33.600s), and select the left frontal sensors (Shift+B or right-click > Display setup > CTF LF). Then play with the checkboxes to change the active projectors, and see how well this first component of the blink category removes the ocular artifact.

sspEogMeg.gif

Select the projector category "blink", select the first component, and click on Save to validate your changes. From now on, all the successive operations you apply on this file will integrate this eye blink correction, including the calculation of other projectors. After you close this window, you can get back to it by selecting the menu "SSP > Select active projectors" in the Record tab, and change the selection of active projectors again.

We are not going to calculate the SSP for the other category of events identified on the EOG, the group "blink2". The events classified in this group could represent eye saccades or other types of blinks; it is not clear that they all correspond to the same physiological event. Most of them do not seem to create any obvious contamination in the MEG recordings, so there is probably no need to consider them now.

Compute SSP : Generic

The calculations of the SSP for the heartbeats and the eye blinks are shortcuts to a more generic process, "Compute SSP: Generic". We do not need it here, but it is interesting to review its options to understand how the correction of ocular and cardiac artifacts works:

sspGeneric.gif

Troubleshooting

You have calculated the SSP as indicated here, but you don't get good results: no matter what you do, the topographies don't look like the targeted artifact. You can try the following:

Evaluation

One efficient way of representing the impact of this artifact correction is to epoch the recordings around the artifacts before and after the correction: use the same time window as the one used in the SSP options around each marker, and average all the segments of recordings. These operations are detailed in the next tutorial; here we just present the results. You don't need to reproduce them at this time; just remember that it is doable, in case you need it with your own data. This is what the database would look like after these operations:

blinkDb.gif

The following image represents the file "blink_uncorrected / Avg: blink", the average of the 18 identified blinks before the SSP correction, [-200,+200]ms around the "blink" events: the time series, the 2D topography at the peak (t = 0ms), and the mapping of these fields on the cortex using the basic inverse model calculated in the previous tutorial.

blinkBefore.gif

The next image represents the file "blink_ssp / Avg: blink", which is the exact same thing, but after the SSP correction. The artifact is gone. If you do this and you can still see some clear evoked component in the time series figure, the SSP correction was not efficient: the artifact is not properly identified, or you should select different components using the "Select active projectors" window.

blinkAfter.gif

Heartbeats

We can do the same thing with the heartbeats: epoch [-40,+40]ms around the "cardiac" events, average the trials, and review the event-related fields at three instants:

The peak of the P wave (t = -30ms):

cardiacP.gif

The peak of the QRS complex (t = 0ms):

cardiacQRS.gif

The peak of the T wave (t = 25ms):

cardiacT.gif

These peaks may look big, but they are in fact much smaller (300 fT for the average of 346 repetitions) than what we observed for the eye blinks (1500 fT for 18 repetitions). You can notice that none of these topographies is similar to the components we obtained when calculating the SSP for the cardiac artifact. We don't know how to correct this artifact; since it doesn't contaminate the recordings too badly, we just leave it uncorrected.

C3. Epoching and averaging

Authors: Francois Tadel, Elizabeth Bock, John C Mosher, Sylvain Baillet

This tutorial fills the gap between the previous tutorial (review and clean continuous recordings), and the introduction tutorials (source analysis of evoked responses). It explains how to epoch the recordings, do some more pre-processing on the single trials, and then calculate their average. For this tutorial, we are going to use the protocol TutorialRaw created in the two previous tutorials: Review continuous recordings and edit markers and Artifact cleaning. If you have not followed these tutorials yet, please do it now.

Import in database

The raw file viewer provides rapid access to the recordings, but most operations cannot be performed directly on the continuous files: most of the pre-processing functions, averaging, time-frequency analysis and statistical tests can only be applied to blocks of data that are saved in the database (i.e. "imported"). After reviewing the recordings and editing the event markers, you have to "import" the recordings into the database to proceed with further analysis.

Right-click on the file with power line correction: Raw | notch(60Hz 120Hz 180Hz) > Import in database

importMenu.gif

Warning: If you right-click on the subject node > Import EEG/MEG and select the tutorial .ds dataset, you would be able to import blocks of the continuous file into the database, but you would not have access to the modified events list or the SSP projectors. Therefore, you would not import data cleaned of ocular and cardiac artifacts. The modified events list and the signal-space projectors are saved only in the "Link to raw file" in the Brainstorm database, not in the initial continuous file.

importOptions.gif

Set the import options as they are represented in this figure:

Click on Import and wait. At the end, you are asked whether you want to ignore one epoch that is shorter than the others. This happens because the acquisition of the MEG signals was stopped less than 300ms after the last stimulus trigger was sent. Therefore, the last epoch cannot have the full [-100,300]ms time definition. This shorter epoch would prevent us from averaging all the trials easily. As we already have enough repetitions in this experiment, we can just ignore it. Answer Yes to this question to discard the last epoch.
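The epoching logic, including the rejection of the truncated last epoch, can be sketched as follows (simulated data and made-up event times; Brainstorm additionally applies the SSP projectors and other import options while reading):

```python
import numpy as np

def epoch(data, fs, event_times, t_min=-0.100, t_max=0.300):
    """Cut [t_min, t_max] windows around each event; skip incomplete epochs."""
    i_min, i_max = int(round(t_min * fs)), int(round(t_max * fs))
    epochs, skipped = [], 0
    for t in event_times:
        i0 = int(round(t * fs))
        if i0 + i_min < 0 or i0 + i_max > data.shape[1]:
            skipped += 1                 # epoch runs past the file limits
            continue
        epochs.append(data[:, i0 + i_min:i0 + i_max])
    return np.array(epochs), skipped

fs = 1200.0
data = np.zeros((2, int(10 * fs)))       # 10 s of recordings, 2 channels
events = [1.0, 5.0, 9.9]                 # last event < 300 ms before the end
trials, skipped = epoch(data, fs, events)
print(trials.shape, skipped)             # (2, 2, 480) 1
```

The event at 9.9 s cannot provide the full 300 ms after the trigger, so it is skipped, just like the short epoch discarded above.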

importShortEpoch.gif

Two new conditions containing two groups of trials appeared in the database. To expand a group of trials and get access to the individual epochs: double-click on it or click on the "+" next to it.

The SSP projectors calculated in the previous tutorial were applied on the fly when reading from the continuous file. These epochs are clean from eye blinks and power line contamination.

All the files available in the (Common files) folder are also shared for these new folders "left" and "right".

Review the individual trials

Double-click on the first trial for the "left" condition. Then right-click on the figure > Navigator > Next data file, or use the keyboard shortcut F3 to jump to the next trial. This way you can quickly review all the trials, to make sure that there is no obvious problem in the recordings. If you haven't reviewed manually all the recordings in the continuous mode, and marked all the bad segments, it is a good time to do it now.

To mark a trial as bad manually, you have three methods:

rejectManual.gif

When a trial is tagged as bad, its icon in the database explorer shows a red mark.

rejectTree.gif

All the bad trials are going to be ignored in the rest of the analysis, because they are ignored by the Process1 and Process2 tabs. If you drag and drop the 101 left trials into the Process1 list with one trial marked as bad, the summary of the selected files at the top of the tab shows only 100 data files.

rejectProcess.gif

To learn how to mark only individual channels as bad instead of the entire trial, please go back to the introduction tutorial Exploring the recordings.

The process "Artifacts > Detect bad channels: peak-to-peak" can help you detect bad trials based on a peak-to-peak amplitude threshold. Drag and drop all the trials from the left condition into the Process1 list, and select this process. For each trial, it detects the channels whose peak-to-peak amplitude (maximum value - minimum value) is above the rejection threshold (too noisy) or below the detection threshold (no signal). You can define different thresholds for different channel types. The options are:

rejectOptions.gif
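The peak-to-peak criterion itself is straightforward; here is a sketch with illustrative thresholds (not Brainstorm's defaults) on simulated data:

```python
import numpy as np

def peak_to_peak_bad(trial, reject=2000e-15, detect=1e-15):
    """Flag channels whose max-min amplitude is above the rejection
    threshold (too noisy) or below the detection threshold (flat)."""
    p2p = trial.max(axis=1) - trial.min(axis=1)
    return np.flatnonzero((p2p > reject) | (p2p < detect))

rng = np.random.default_rng(4)
trial = 100e-15 * rng.standard_normal((5, 480))   # 5 channels of ~100 fT noise
trial[1] *= 100                                   # channel 1: huge artifact
trial[3] = 0                                      # channel 3: flat, dead channel
print(peak_to_peak_bad(trial))                    # flags channels 1 and 3
```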

For this dataset, the quality of the recordings is remarkably good; we don't need to mark any trial as bad. So right-click on the group of "left" trials > Accept trials, to set them all as good.

Averaging

Drag and drop the two lists of trials (or the two condition folders) in the Process1 list, and run the process "Average > Average files", with the option "Average by condition (subject average)". This will group the trials by condition, and calculate one average per condition and subject. The option "By condition (grand average)" would average each condition for all the subjects together, but as there is only one subject in this case, the result would be the same. The other options would generate one average file per subject ("by subject"), one average file per group of trials and per subject ("by trial group"), or only one average no matter what the input is ("everything").

The function to apply is the regular arithmetic average. The option "Keep all the events from the individual epochs" groups all the event markers present in all the epochs and saves them in the new averaged file. It can be useful to check the relative position of the artifacts or the subject responses, or to quickly detect an unwanted configuration such as a subject who blinks right after each visual stimulus. We don't really need this here; leave this option unselected.
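Conceptually, "average by condition" just groups the trials by their condition label and computes one arithmetic mean per group, as in this simulated sketch:

```python
import numpy as np

def average_by_condition(trials, labels):
    """Arithmetic mean of the trials sharing the same condition label."""
    return {cond: np.mean([t for t, l in zip(trials, labels) if l == cond], axis=0)
            for cond in sorted(set(labels))}

rng = np.random.default_rng(5)
evoked = np.sin(np.linspace(0, np.pi, 480))            # simulated evoked response
trials = [evoked + 0.5 * rng.standard_normal(480) for _ in range(100)]
labels = ["left"] * 50 + ["right"] * 50
avg = average_by_condition(trials, labels)
# Averaging 50 trials divides the noise level by sqrt(50) ~ 7:
print(np.abs(avg["left"] - evoked).std() < 0.2)        # True
```

This sqrt(N) noise reduction is why the averages below have such a high signal-to-noise ratio.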

avgOptions.gif

It creates two files: "Avg: left" and "Avg: right".

avgDb.gif

Double-click on the "Avg: left" file to display the MEG recordings for this file (or right-click > MEG > Display time series). It shows a very typical and clean evoked response, with a very high signal-to-noise ratio.

avgTs.gif

Just for the record, this is how it would look if we had selected the option "Keep all the events from the individual epochs". The summary of the markers from all the epochs is: 37 heartbeats and 2 blinks.

avgTsEvents.gif

Stimulation artifact

Now zoom around 4ms in time (mouse wheel, or two-finger swipe up/down on MacBooks) and in amplitude (Control + mouse wheel). Notice this very strong and sharp peak followed by fast oscillations. This is an artifact of the electric stimulation device. In the stimulation setup, the stimulus trigger is initiated by the stimulation computer and sent to the electric stimulator. This stimulator generates an electric pulse that is sent to electrodes on the subject's wrists. This electric current flows in the wires and in the body, so it also generates a small magnetic field that is captured by the MEG sensors. This is what we observe here at 4ms.

This means that whenever we decide to send an electric stimulus, there is a 4ms delay before the stimulus is actually received by the subject, due to all the electronics in between. Everything is therefore shifted by 4ms in the recordings. These hardware delays are unavoidable and should be quantified precisely before starting to scan subjects or patients.

avgStim.gif

Then you have two options: either you remember this delay and subtract it when you publish your results (with the risk of forgetting about it), or you correct the time reference in all the files right now (with the risk of forgetting to fix all the subjects/runs in the same way). Let's illustrate the second method now.

Close all the figures, clear the Process1 list (right-click > Clear, or select + Delete key), and drag and drop all the trials and all the averages (or simply the two left and right condition folders). Select the process "Pre-process > Add time offset". Set the time offset to -4.2 ms, to bring the stimulation peak back to 0ms. Select also the "Overwrite input files" option, to avoid cluttering the database with unnecessary duplicate files.
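The equivalent scripted call, taken from the tutorial_raw.m script reproduced at the bottom of this page, looks like this (note that the offset is expressed in seconds):

```matlab
% Process: Add time offset: -4.20ms
sFilesEpochs = bst_process('CallProcess', 'process_timeoffset', sFilesEpochs, [], ...
    'offset',    -0.0042, ...  % -4.2 ms, in seconds
    'overwrite', 1);           % Overwrite input files
```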

offsetOptions.gif

This fixes the individual trials and the averages. Double-click on the "Avg: left" file again to observe that the stimulation artifact is now occurring at 0ms exactly, which means that t=0s represents the time when the electric pulse is received by the subject.

offsetDone.gif

Filters for visualization

As introduced in the previous tutorials, you can add a visualization filter to make the traces and the topographies look smoother. We suggest in this case a low-pass filter at 120Hz: it produces very smooth traces, while all the waves we are interested in remain clearly visible in the MEG time series figure.

onlineFilter.gif

Explore the average

Open the time series for "Avg: left", then press Control+T to display, next to it, a spatial topography at the current time point. Observe what is happening between 0ms and 100ms: start at 0ms and play the recordings like a movie with the left and right arrow keys, to follow the brain activity millisecond by millisecond:

avg16.gif avg30.gif

avg60.gif avg70.gif

Source analysis

Let's reproduce the same observations at the source level. The concepts related to source estimation are not discussed here; for more information, refer to the introduction tutorials #6 to #8.

First, delete all the source estimation files calculated in the previous tutorials, available in the (Common files) folder: the head model, the noise covariance and the inverse model. We can now provide a better estimate of the noise (which changes the noise covariance), and we defined new SSP operators (which change the inverse model).

Head model

Right-click on any node that contains the channel file (including the channel file itself), and select: "Compute head model". Leave all the default options: cortex source space, and overlapping spheres. The lead field matrix is saved in file "Overlapping spheres" in (Common files).
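This step can also be scripted; the call below mirrors the one in the tutorial_raw.m script reproduced at the bottom of this page (sFilesAvg is assumed to be a list of files attached to the channel file of interest):

```matlab
% Process: Compute head model (cortex source space, overlapping spheres)
bst_process('CallProcess', 'process_headmodel', sFilesAvg, [], ...
    'comment',     '', ...
    'sourcespace', 1, ...   % Cortex surface
    'meg',         3);      % Overlapping spheres
```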

forward.gif

Noise covariance matrix

To estimate the sources properly, we need an estimation of the noise level for each sensor. A good way to do this is to compute the covariance matrix of the concatenation of the baselines from all the trials in both conditions.
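In a script, this corresponds to running process_noisecov on all the imported trials, as in the tutorial_raw.m script at the bottom of this page (the baseline window is slightly narrower than the epoch start, to exclude filter edge effects):

```matlab
% Process: Compute noise covariance over the pre-stimulus baselines of all trials
bst_process('CallProcess', 'process_noisecov', sFilesEpochs, [], ...
    'baseline',    [-0.104, -0.005], ...
    'sensortypes', 'MEG, EEG', ...
    'target',      1, ...  % Noise covariance (covariance over baseline time window)
    'dcoffset',    1, ...  % Remove the DC offset of each baseline
    'identity',    0, ...
    'replacefile', 1);     % Replace the existing noise covariance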

noisecov.gif

Inverse model

Right-click on (Common files), on the head model, or on the subject node, and select "Compute sources". A shared inversion kernel is created in (Common files), and a link node is now visible for each recordings file, single trials and averages alike. For more information about what these links mean and the operations performed when displaying them, please refer to the tutorial #8 "Source estimation".
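The scripted equivalent is shown below, abridged from the tutorial_raw.m script at the bottom of this page. Note the assumption that the 'wmne' option fields omitted here keep their default values; the full option structure is listed in the complete script.

```matlab
% Process: Compute sources (dSPM, shared kernel) -- abridged option structure
sFilesSrcAvg = bst_process('CallProcess', 'process_inverse', sFilesAvg, [], ...
    'comment', '', ...
    'method',  2, ...  % dSPM
    'wmne', struct('InverseMethod', 'dspm', 'SNR', 3, ...
                   'SourceOrient', {{'fixed'}}, 'depth', 1, 'pca', 1), ...
    'sensortypes', 'MEG, MEG MAG, MEG GRAD, EEG', ...
    'output',  1);     % Kernel only: shared
```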

inverseDb.gif

Explore the sources

Right-click on the sources for the left average > Cortical activations > Display on cortex, or simply double-click on the file. Go to the main response peak at t = 30ms and increase the amplitude threshold to 100%. You see a strong activity around the right primary somatosensory cortex, but many other brain areas are still shown in plain red (values >= 100% of the colorbar maximum, ~280 pA.m).

inverse100.gif

The colorbar maximum is set to an estimate of the maximum source amplitude over time. This estimate is obtained by finding the time instant with the highest global field power on the sensors (green GFP trace), estimating the sources for that time only, and taking the maximum source value at that time point. It is a very fast estimate, but not a very reliable one; we use it because computing the full source matrix (all the time points x all the sources) just to find the maximum value would take too long.

In this case, the real maximum is probably higher than the default value. To redefine the colorbar maximum: right-click on the 3D figure > Colormap: Sources > Set colorbar max value. Set the maximum to 480 pA.m, or any other value that leaves just one very focal point visible on the brain at 30ms.

inverse480.gif

Go back to the first small peak at t=16ms and lower the threshold to 10%. Then do what you did at the sensor level: follow the information processing in the brain until 100ms, millisecond by millisecond, adapting the threshold and the camera position when needed:

sources16.gif sources30.gif

sources60.gif sources70.gif

Scripting

The following script from the Brainstorm distribution reproduces the analysis presented in this tutorial page: brainstorm3/toolbox/script/tutorial_raw.m

function tutorial_raw(tutorial_dir)
% TUTORIAL_RAW: Script that reproduces the results of the online tutorial "MEG median nerve (CTF)".
%
% DESCRIPTION:
%     It is based on a median nerve stimulation experiment recorded at the Montreal Neurological Institute in 2011
%     with a CTF MEG 275 system. The sample dataset contains 6 minutes of recordings at 1200Hz for one subject
%     and includes 100 stimulations of each arm.
%
% CORRESPONDING ONLINE TUTORIALS:
%     https://neuroimage.usc.edu/brainstorm/Tutorials/MedianNerveCtf
%
% INPUTS:
%     tutorial_dir: Directory where the sample_raw.zip file has been unzipped

% @=============================================================================
% This function is part of the Brainstorm software:
% https://neuroimage.usc.edu/brainstorm
%
% Copyright (c) University of Southern California & McGill University
% This software is distributed under the terms of the GNU General Public License
% as published by the Free Software Foundation. Further details on the GPLv3
% license can be found at http://www.gnu.org/copyleft/gpl.html.
%
% FOR RESEARCH PURPOSES ONLY. THE SOFTWARE IS PROVIDED "AS IS," AND THE
% UNIVERSITY OF SOUTHERN CALIFORNIA AND ITS COLLABORATORS DO NOT MAKE ANY
% WARRANTY, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO WARRANTIES OF
% MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, NOR DO THEY ASSUME ANY
% LIABILITY OR RESPONSIBILITY FOR THE USE OF THIS SOFTWARE.
%
% For more information type "brainstorm license" at command prompt.
% =============================================================================@
%
% Author: Francois Tadel, 2013-2016


% ===== FILES TO IMPORT =====
% You have to specify the folder in which the tutorial dataset is unzipped
if (nargin == 0) || isempty(tutorial_dir) || ~file_exist(tutorial_dir)
    error('The first argument must be the full path to the tutorial dataset folder.');
end
% Build the path of the files to import
AnatDir = fullfile(tutorial_dir, 'sample_raw', 'Anatomy');
RawFile = fullfile(tutorial_dir, 'sample_raw', 'Data', 'subj001_somatosensory_20111109_01_AUX-f.ds');
% Check if the folder contains the required files
if ~file_exist(RawFile)
    error(['The folder ' tutorial_dir ' does not contain the folder from the file sample_raw.zip.']);
end

% ===== CREATE PROTOCOL =====
% The protocol name has to be a valid folder name (no spaces, no weird characters...)
ProtocolName = 'TutorialRaw';
% Start brainstorm without the GUI
if ~brainstorm('status')
    brainstorm nogui
end
% Delete existing protocol
gui_brainstorm('DeleteProtocol', ProtocolName);
% Create new protocol
gui_brainstorm('CreateProtocol', ProtocolName, 0, 0);
% Start a new report
bst_report('Start');


% ===== ANATOMY =====
% Subject name
SubjectName = 'Subject01';
% Process: Import anatomy folder
bst_process('CallProcess', 'process_import_anatomy', [], [], ...
    'subjectname', SubjectName, ...
    'mrifile',     {AnatDir, 'FreeSurfer'}, ...
    'nvertices',   15000, ...
    'nas', [127, 212, 123], ...
    'lpa', [ 55, 124, 119], ...
    'rpa', [200, 129, 114]);

% ===== LINK CONTINUOUS FILE =====
% Process: Create link to raw file
sFilesRaw = bst_process('CallProcess', 'process_import_data_raw', [], [], ...
    'subjectname',    SubjectName, ...
    'datafile',       {RawFile, 'CTF'}, ...
    'channelreplace', 1, ...
    'channelalign',   1);
% Process: Snapshot: Sensors/MRI registration
bst_process('CallProcess', 'process_snapshot', sFilesRaw, [], ...
    'target',   1, ...  % Sensors/MRI registration
    'modality', 1, ...  % MEG (All)
    'orient',   1, ...  % left
    'Comment',  'MEG/MRI Registration');


% ===== REMOVE 60/120/180 Hz =====
% Process: Notch filter: 60Hz 120Hz 180Hz
sFilesClean = bst_process('CallProcess', 'process_notch', sFilesRaw, [], ...
    'freqlist',    [60, 120, 180], ...
    'sensortypes', 'MEG', ...
    'read_all',    0);

% Process: Power spectrum density (Welch)
sFilesPsd = bst_process('CallProcess', 'process_psd', [sFilesRaw, sFilesClean], [], ...
    'timewindow',  [0, 50], ...
    'win_length',  4, ...
    'win_overlap', 50, ...
    'clusters',    {}, ...
    'sensortypes', 'MEG', ...
    'edit', struct(...
         'Comment',         'Power', ...
         'TimeBands',       [], ...
         'Freqs',           [], ...
         'ClusterFuncTime', 'none', ...
         'Measure',         'power', ...
         'Output',          'all', ...
         'SaveKernel',      0));

% Process: Snapshot: Frequency spectrum
bst_process('CallProcess', 'process_snapshot', sFilesPsd, [], ...
    'target',   10, ...  % Frequency spectrum
    'modality', 1, ...   % MEG (All)
    'Comment',  'Power spectrum density');


% ===== CORRECT BLINKS AND HEARTBEATS =====
% Process: Detect heartbeats
sFilesClean = bst_process('CallProcess', 'process_evt_detect_ecg', sFilesClean, [], ...
    'channelname', 'EEG057', ...
    'timewindow',  [], ...
    'eventname',   'cardiac');

% Process: Detect eye blinks
sFilesClean = bst_process('CallProcess', 'process_evt_detect_eog', sFilesClean, [], ...
    'channelname', 'EEG058', ...
    'timewindow',  [], ...
    'eventname',   'blink');

% Process: Remove simultaneous
sFilesClean = bst_process('CallProcess', 'process_evt_remove_simult', sFilesClean, [], ...
    'remove', 'cardiac', ...
    'target', 'blink', ...
    'dt',     0.25, ...
    'rename', 0);

% % Process: SSP ECG: cardiac
% sFilesClean = bst_process('CallProcess', 'process_ssp_ecg', sFilesClean, [], ...
%     'eventname',   'cardiac', ...
%     'sensortypes', 'MEG', ...
%     'usessp',      1, ...
%     'select',      []);  % Force selection of some components

% Process: SSP EOG: blink
sFilesClean = bst_process('CallProcess', 'process_ssp_eog', sFilesClean, [], ...
    'eventname',   'blink', ...
    'sensortypes', 'MEG', ...
    'usessp',      1, ...
    'select',      1);  % Force selection of some components

% Process: Snapshot: SSP projectors
bst_process('CallProcess', 'process_snapshot', sFilesClean, [], ...
    'target',  2, ...  % SSP projectors
    'Comment', 'SSP projectors');


% ===== IMPORT EVENTS =====
% Process: Import MEG/EEG: Events
sFilesEpochs = bst_process('CallProcess', 'process_import_data_event', sFilesClean, [], ...
    'subjectname', SubjectName, ...
    'condition',   '', ...
    'eventname',   'left, right', ...
    'timewindow',  [], ...
    'epochtime',   [-0.1, 0.3], ...
    'createcond',  0, ...
    'ignoreshort', 1, ...
    'usectfcomp',  1, ...
    'usessp',      1, ...
    'freq',        [], ...
    'baseline',    [-0.1, 0]);

% Process: Add time offset: -4.20ms
sFilesEpochs = bst_process('CallProcess', 'process_timeoffset', sFilesEpochs, [], ...
    'offset',    -0.0042, ...
    'overwrite', 1);

% Process: Average: By condition (subject average)
sFilesAvg = bst_process('CallProcess', 'process_average', sFilesEpochs, [], ...
    'avgtype',    6, ...  % By trial groups (subject average)
    'avg_func',   1, ...  % Arithmetic average: mean(x)
    'keepevents', 0);

% Process: Snapshot: Recordings time series
bst_process('CallProcess', 'process_snapshot', sFilesAvg, [], ...
    'target',   5, ...  % Recordings time series
    'modality', 1, ...  % MEG (All)
    'Comment',  'Evoked response');


% ===== SOURCE MODELING =====
% Process: Compute head model
bst_process('CallProcess', 'process_headmodel', sFilesAvg, [], ...
    'comment',     '', ...
    'sourcespace', 1, ...
    'meg',         3);  % Overlapping spheres

% Process: Compute noise covariance
bst_process('CallProcess', 'process_noisecov', sFilesEpochs, [], ...
    'baseline',    [-0.104, -0.005], ...
    'sensortypes', 'MEG, EEG', ...
    'target',      1, ...  % Noise covariance (covariance over baseline time window)
    'dcoffset',    1, ...
    'identity',    0, ...
    'copycond',    0, ...
    'copysubj',    0, ...
    'replacefile', 1);  % Replace

% Process: Snapshot: Noise covariance
bst_process('CallProcess', 'process_snapshot', sFilesAvg, [], ...
    'target',  3, ...  % Noise covariance
    'Comment', 'Noise covariance');

% Process: Compute sources
sFilesSrcAvg = bst_process('CallProcess', 'process_inverse', sFilesAvg, [], ...
    'comment', '', ...
    'method',  2, ...  % dSPM
    'wmne', struct(...
         'NoiseCov',      [], ...
         'InverseMethod', 'dspm', ...
         'ChannelTypes',  {{}}, ...
         'SNR',           3, ...
         'diagnoise',     0, ...
         'SourceOrient',  {{'fixed'}}, ...
         'loose',         0.2, ...
         'depth',         1, ...
         'weightexp',     0.5, ...
         'weightlimit',   10, ...
         'regnoise',      1, ...
         'magreg',        0.1, ...
         'gradreg',       0.1, ...
         'eegreg',        0.1, ...
         'ecogreg',       0.1, ...
         'seegreg',       0.1, ...
         'fMRI',          [], ...
         'fMRIthresh',    [], ...
         'fMRIoff',       0.1, ...
         'pca',           1), ...
    'sensortypes', 'MEG, MEG MAG, MEG GRAD, EEG', ...
    'output',      1);  % Kernel only: shared

% Process: Snapshot: Sources (one time)
bst_process('CallProcess', 'process_snapshot', sFilesSrcAvg, [], ...
    'target',   8, ...      % Sources (one time)
    'modality', 1, ...      % MEG (All)
    'orient',   3, ...      % top
    'time',     0.035, ...
    'Comment',  'Source maps at 35ms');


% Save and display report
ReportFile = bst_report('Save', sFilesSrcAvg);
bst_report('Open', ReportFile);

Tutorials/MedianNerveCtf (last edited 2016-08-01 16:24:31 by FrancoisTadel)