Tutorial - Head Motion

Every effort should be made to avoid movement, in particular head movement, during a MEG recording, as it can cause various issues: blurring of signals and loss of amplitude, mis-localization of source activity, and possibly motion artefacts. Yet it is important to evaluate and account for any head motion at the time of analysis. This is possible because the positions of the head tracking coils are saved in channels alongside the MEG data. It is important to note, however, that most analysis software, including Brainstorm, assumes a single fixed head position for most computations. This "reference" position is the one measured just before the recording starts, and it is saved separately from the continuous head localization channels.

This tutorial explores different options for dealing with head motion, using the sample_omega.zip dataset available on the download page. See MEG resting state & OMEGA database (CTF) for details on the BIDS specifications and how to load a MEG-BIDS dataset in Brainstorm. Note that importing all the subjects can take a while; for this tutorial, you can import only sub-0007.

Motion visualization

First, in order to evaluate the amount of motion, we will display a special montage based on the head localization coils. It computes, at each point in time, the distance from the initial/reference head position. This distance is based on a sphere (approximating the head) rigidly attached to the head coils, thus accounting equally for all types of motion of all parts of the head.
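To make this distance concrete, here is a minimal MATLAB sketch of the idea (not Brainstorm's actual implementation; the coil coordinates, sphere centre, and radius are arbitrary example values): fit a rigid transformation to the three coil positions, then report the largest displacement it produces over a sphere approximating the head.

    % Reference and current positions of the 3 head coils (rows: Na, LPA, RPA) [m]
    refCoils = [0.10 0 0; 0 0.07 0; 0 -0.07 0];      % example values
    curCoils = refCoils + 0.002 * randn(3);          % simulated current positions

    % Rigid (Kabsch) fit mapping the reference coils onto the current coils
    refC = mean(refCoils, 1);  curC = mean(curCoils, 1);
    [U, ~, V] = svd((refCoils - refC)' * (curCoils - curC));
    R = V * diag([1, 1, sign(det(V * U'))]) * U';    % proper rotation (det = +1)
    T = curC' - R * refC';                           % translation

    % Largest displacement over a sphere approximating the head
    r = 0.075;  c = [0; 0; 0.05];                    % example radius and centre [m]
    [x, y, z] = sphere(20);
    P = c + r * [x(:), y(:), z(:)]';                 % 3 x N points on the sphere
    fprintf('Head movement: %.1f mm\n', 1000 * max(vecnorm(R * P + T - P)));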

Adjust the reference head position

Given that the default reference position is measured at the very beginning of the recording, we can improve co-registration by replacing it with one that better represents the head position throughout the recording.
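As a hedged sketch of the idea (not Brainstorm's internals), a representative reference can be obtained as the median of the continuous coil positions over the whole recording. The variable names and simulated data below are illustrative:

    nTime = 10000;
    refOld = [0.10 0 0; 0 0.07 0; 0 -0.07 0]';   % 3 x 3, columns: Na, LPA, RPA [m]
    HLC = refOld(:) + 0.001 * randn(9, nTime);   % simulated head coil channels
    refNew = reshape(median(HLC, 2), 3, 3);      % median position of each coil
    fprintf('Reference moved by %.1f mm (max over coils)\n', ...
        1000 * max(vecnorm(refNew - refOld)));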

[screenshot]

The figures don't show much difference for this recording, but a small drop is noticeable at the LPA point and at the head point near the back edge of the helmet. A message in the Matlab command window also tells us that the position changed by 3.7 mm.

[screenshot]

[screenshot]

Mark head motion events

For long recordings, such as are typical in epilepsy, or when a large movement is present, a single head position may not be appropriate for the entire duration. In these cases, it may be preferable to split the recording into smaller segments where the head is stable and to reject segments with too much motion. To do this, we first need to detect and mark these segments. Note that this process does not depend in any way on the reference position, so there is no need to adjust it first as we did above.

There is not much motion in this dataset, so we will use extremely sensitive thresholds to illustrate the process.
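The detection itself amounts to thresholding the distance signal and enforcing a minimum duration. Here is a minimal sketch with simulated data and example parameter values (not Brainstorm's actual process):

    fs = 600;                                      % sampling rate [Hz], example
    dist = abs(cumsum(1e-5 * randn(1, 60 * fs)));  % simulated distance signal [m]
    thresh = 0.0015;                               % movement threshold: 1.5 mm, example
    minDur = 1;                                    % minimum segment duration [s], example

    isBad = dist > thresh;                         % samples exceeding the threshold
    edges = diff([0, isBad, 0]);
    onsets = find(edges == 1);
    offsets = find(edges == -1) - 1;
    keep = (offsets - onsets + 1) >= minDur * fs;  % enforce the minimum duration
    badSegments = [onsets(keep); offsets(keep)] / fs;  % [2 x nEvents] onset/offset [s]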

[screenshot]

[screenshot]

In this first half of the dataset, we see one bad segment and three stable segments, whose boundaries correspond well with the jumps in the distance time series. We can also see these two new event groups in the Record tab. Note that it took a few tries with different thresholds and minimum durations to get the segments to fit that well.

Split the recording based on head motion events

We can now proceed to split our raw recording into segments with a stable head position.
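Outside of Brainstorm's database, the same operation can be sketched in a few lines of plain MATLAB: given the bad-segment boundaries in samples, keep one data block per stable segment. All values below are simulated examples:

    nTime = 36000;
    data = randn(4, nTime);                        % simulated [nChannels x nTime]
    onsets = [12000, 25000];  offsets = [14000, 27000];  % bad segments [samples]

    goodOn  = [1, offsets + 1];                    % stable segments surround bad ones
    goodOff = [onsets - 1, nTime];
    valid = goodOff >= goodOn;                     % drop empty segments at the edges
    segments = arrayfun(@(a, b) data(:, a:b), ...
        goodOn(valid), goodOff(valid), 'UniformOutput', false);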

[screenshot]

A new condition appears in the database explorer for each extracted segment.

[screenshot]

We should now correct the reference position for each new raw file; otherwise, the splitting would not achieve much.

Nothing seems to happen, and the command window informs us that the head position was already adjusted. We must therefore undo the previous global adjustment (Remove head position adjustment) and apply it again to obtain an adjustment adapted to each segment. This can be done in one go by adding the Adjust head position process twice to our pipeline, as sketched below.
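In a script, this corresponds to chaining two calls through Brainstorm's generic bst_process('CallProcess', ...) scripting interface. The process name and option fields below are hypothetical placeholders for illustration; the exact call can be generated from the pipeline editor with "Generate .m script":

    % sFiles: the new raw segments, selected earlier in the script.
    % Process name and options are assumptions, not Brainstorm's verified API.
    sFiles = bst_process('CallProcess', 'process_adjust_head_position', sFiles, [], ...
        'action', 'remove');   % undo the previous global adjustment
    sFiles = bst_process('CallProcess', 'process_adjust_head_position', sFiles, [], ...
        'action', 'adjust');   % re-adjust using each segment's own coil data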

[screenshot]

[screenshot]

We now see that the head distance stays below half a millimetre until the end of the segment, where it jumps to about 1.5 mm, close to our threshold. You can check that the other segments show similarly little remaining motion.

Head motion correction with SSS [TO DO]

Signal Space Separation (SSS) is a data cleaning technique that uses a spherical harmonic decomposition of the MEG signal to separate it into components originating from inside or outside the sensor shell. By reconstructing the signal with only the "inside" components, environmental interference is reduced.

The same spherical harmonic expansion can be used to correct for head motion by reconstructing the signal with new sensor locations adapted to the head position at each instant. (In the head coordinate system, head motion translates to motion of the sensor array.) In other words, we correct for head motion by interpolating the field at new sensor locations, using spherical harmonics for the interpolation.

The SSS process can therefore be used both for motion correction and for signal cleaning, but here we will use it only for motion correction. In that case, the process takes care of selecting the expansion orders for the inside and outside harmonics, using in total as many harmonics as there are sensors.
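The underlying algebra can be sketched as follows: writing the measurements as B = [Sin, Sout] * x, where Sin and Sout are the inside/outside spherical harmonic basis matrices evaluated at the current sensor locations, the multipole moments x are estimated by least squares and the signal is rebuilt from the inside basis evaluated at the reference-position sensor locations. The sketch below assumes the basis matrices are already available (their construction is omitted) and uses random placeholders with illustrative dimensions; Sin, Sout, and SinRef are assumed names, not Brainstorm variables:

    nSens = 275;  nIn = 260;  nOut = 15;  nTime = 1000;  % nIn + nOut = nSens
    Sin  = randn(nSens, nIn);            % inside basis at current sensor locations
    Sout = randn(nSens, nOut);           % outside basis (placeholder, assumed given)
    SinRef = randn(nSens, nIn);          % inside basis at reference sensor locations
    B = randn(nSens, nTime);             % measured MEG data [nSensors x nTime]

    x = [Sin, Sout] \ B;                 % solve for the multipole moments
    Bcorr = SinRef * x(1:nIn, :);        % keep inside components, sensors moved back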
