Co-registration MEG run reference

Hi,
Has anyone here got a reference paper for the "Co-register MEG runs" method? I'm just curious about how it works.

(Method 3: Co-register properly the runs using the process Standardize > Co-register MEG runs: Can be a good option for displacements under 2cm.
Warning: This method has not been fully evaluated on our side, use at your own risk. Also, it does not work correctly if you have different SSP projectors calculated for multiple runs.)

Thank you!

The method used here is the same as the one used for creating the MEG 2D/3D topography views in Brainstorm:
http://neuroimage.usc.edu/brainstorm/Tutorials/ExploreRecordings#Magnetic_interpolation

This method is not published, and was never properly evaluated. The best documentation you can find is directly the code:
https://github.com/brainstorm-tools/brainstorm3/blob/master/toolbox/process/functions/process_megreg.m#L201
https://github.com/brainstorm-tools/brainstorm3/blob/master/toolbox/sensors/channel_extrapm.m
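
For illustration only, here is a rough MATLAB sketch of the general idea behind this kind of field interpolation: fit the recorded field with a forward model at the run's sensor positions, then project it back onto a reference sensor array. The variable names (G_run, G_ref, F_run) and the regularization are hypothetical; the actual algorithm is the one in the two files linked above.

```matlab
% Conceptual sketch only, NOT the Brainstorm implementation (see channel_extrapm.m).
% G_run: leadfield for the run's sensor positions   [nSensors x nSources]  (assumed given)
% G_ref: leadfield for the reference sensor array   [nSensors x nSources]  (assumed given)
% F_run: recordings of the run                      [nSensors x nTime]     (assumed given)
lambda = 1e-8 * trace(G_run * G_run') / size(G_run, 1);            % arbitrary regularization
W = G_ref * G_run' / (G_run * G_run' + lambda * eye(size(G_run, 1)));  % interpolation operator
F_ref = W * F_run;     % run recordings re-expressed on the reference sensor array
```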

Elekta MaxFilter does something similar to co-register multiple MEG runs to the same head position. If you use an Elekta MEG, you can ask them about this procedure.

Francois

Hello,

Just posting it here as it seems like the most recent discussion on this topic.

I have multiple MEG runs per subject (data from an Elekta scanner). I was wondering what the best way to co-register these runs is:

Option 1)

  • Preprocess individual runs
  • Estimate SSP
  • Import trials (trials are different, each trial is about 60 sec long)
  • Co-register runs

Option 2)

  • Co-register runs before any of the other steps

If I understand this process correctly, with option (2) I will have a shared SSP over multiple runs, which is not very accurate. Option (1) seems time-consuming but probably better?
Please let me know.

Thanks

I was wondering what the best way to co-register these runs is

I don't recommend you use the process "Standardize > Co-register MEG runs" in Brainstorm for Elekta recordings.
Use Elekta MaxFilter for preprocessing and coregistering the runs, as recommended by the company.

Note that co-registering the runs is not mandatory. But then you need to average the different runs at the source level rather than at the sensor level.

If I understand this process correctly, with option (2) I will have shared SSP over multiple runs, which is not very accurate. Option (1) seems time consuming but probably better?

No matter if you co-register the runs or not, use the option "Default channel file: No, use one channel file per acquisition run". Then estimate the SSP individually for each run.
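
If it helps, here is a minimal sketch of what "one SSP projector per run" means in practice. It is not the Brainstorm code (the SSP processes in Brainstorm handle this from the interface or the pipeline editor); ArtifactSegments and F_run are hypothetical variables for one run only.

```matlab
% Sketch of a per-run SSP projector (one run's data only, hypothetical variables):
% ArtifactSegments: [nChannels x nSamples] data around the artifacts detected in THIS run
% F_run:            [nChannels x nTime]    continuous recordings of the same run
[U, ~, ~] = svd(ArtifactSegments, 'econ');     % spatial decomposition of the artifact
U1 = U(:, 1);                                  % keep the first spatial component
P  = eye(size(F_run, 1)) - U1 * U1';           % projector specific to this run
F_clean = P * F_run;                           % apply only to this run, not to the others
```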

Thanks for the prompt reply Francois!

The data have already been preprocessed using temporal signal space separation (tSSS), but the co-registration function in MaxFilter hasn't worked well, so I was just exploring other options.
I want to analyse the data as one long continuous file per participant.
Would you then suggest processing each run separately and then joining the runs at the source level?

Thanks!

It depends what you are doing, and how much the subject moved during a run and between runs.
If there is very little movement between runs, maybe you can even consider there is no movement between runs. If it seems to be significant, then you need to take this into account and avoid considering that a given sensor records the same brain region in two runs.
https://neuroimage.usc.edu/brainstorm/Tutorials/Averaging#Averaging_across_runs

In source space, the differences in head positions are compensated: you can safely concatenate or average data from different runs.
If you are planning to do source analysis on continuous recordings, you will probably not be able to work with the full signals for 15,000 sources; you may need to go for an ROI analysis.
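
As a rough sketch of what "compensated in source space" means: each run keeps its own channel file and head position, so each run gets its own imaging kernel; you apply each kernel to its own run and only then concatenate or average. K1/K2/F1/F2 below are hypothetical variables; in Brainstorm the kernel is the ImagingKernel matrix stored in the kernel-based source files.

```matlab
% Sketch: combine runs in source space, each run with its OWN inverse kernel.
% K1, K2: per-run imaging kernels    [nSources x nChannels] (one per head position)
% F1, F2: per-run sensor recordings  [nChannels x nTime]
S1 = K1 * F1;                                % source time series, run 1
S2 = K2 * F2;                                % source time series, run 2
S_concat = [S1, S2];                         % same source space: concatenation is safe
N = min(size(S1, 2), size(S2, 2));           % or average over a common time window
S_mean = (S1(:, 1:N) + S2(:, 1:N)) / 2;
```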

Hi Francois,

I have now source localised data for multiple runs per participant. I want to concatenate the source localised data to make one long continuous recording (per participant).

I put the dspm files in process 1 (for one participant) and go to

Run > Standardize > Concatenate time. However, this option is not available (see screenshot).

Am I doing something wrong here?

Thanks

The concatenation in time is not available for source files, because of the complexity of handling the optimized data storage in these files. Also, it is usually not needed because of the linearity of the minimum norm solutions: concatenate the recordings and use the source link for the concatenated recording instead.

If for some reason you do need to concatenate post-processed source results, you can extract the scouts time series and concatenate these instead.
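
For example, if the scout time series of each run have been extracted and exported to MATLAB, the concatenation itself is trivial. ScoutRun1/ScoutRun2/TimeRun1/TimeRun2 are hypothetical names; in Brainstorm they would typically correspond to the Value and Time fields of the extracted matrix files.

```matlab
% Sketch: concatenate scout time series extracted per run (hypothetical variables).
% ScoutRun1, ScoutRun2: [nScouts x nTime] scout time series of each run
% TimeRun1,  TimeRun2 : corresponding time vectors, in seconds
dt       = TimeRun1(2) - TimeRun1(1);                                % sampling interval
ScoutAll = [ScoutRun1, ScoutRun2];                                   % concatenate along time
TimeAll  = [TimeRun1, TimeRun2 - TimeRun2(1) + TimeRun1(end) + dt];  % continuous time axis
```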

Thanks for the quick reply and the suggestion on using scout time series.

Re concatenating source link are the following steps correct?

  • Import the epochs/trials in database (pre-processed, SSP applied etc)
  • Concatenate these
  • Estimate head models for new continuous data
  • Estimate sources for new continuous data

Thanks

Why do you epoch (=cut in pieces) and then re-concatenate?
Why do you need to concatenate these signals in the first place?

Estimating sources for short or long segments is exactly the same. Both the forward modeling and the minimum norm estimation are computed independently of the MEG recordings.
https://neuroimage.usc.edu/brainstorm/Tutorials/SourceEstimation#Computing_sources_for_single_trials
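
The linearity argument can be checked numerically in a few lines. This is a toy example with random matrices, not real data: applying a kernel to concatenated recordings gives exactly the concatenation of the per-segment source estimates.

```matlab
% Toy check of the linearity of a kernel-based source estimate (random numbers only):
K  = randn(20, 5);            % stands in for an imaging kernel [nSources x nChannels]
F1 = randn(5, 100);           % recordings, segment 1
F2 = randn(5, 80);            % recordings, segment 2
A = K * [F1, F2];             % sources of the concatenated recordings
B = [K * F1, K * F2];         % concatenation of the per-segment source estimates
max(abs(A(:) - B(:)))         % ~0, up to numerical precision
```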

The data are from a listening experiment, with some breaks in the experimental block.
I want to get rid of the breaks (which are of varying duration), so that the data can be evaluated as a single recording (like a long resting state recording). I have imported the epochs based on trigger information and now I would like to concatenate them.
A typical epoch is 1-2 min in duration and there are about 5 of them per block. Each epoch is different, so I can't average them in source space either.
Hope that makes more sense?

Please let me know if you have any suggestions.

Thanks

If you have blocks of 1-2 min, this is long enough for getting stable estimates of spectral distribution or connectivity. You could compute the measure of interest (PSD, coherence, AEC...) independently for each epoch, and then average (or compute other statistics) across epochs.
You would avoid introducing discontinuities in the signals without losing accuracy.
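
For instance, a per-epoch Welch PSD averaged across epochs could be sketched as below. This uses plain MATLAB pwelch with hypothetical variables (SignalEpoch1, SignalEpoch2, fs); in Brainstorm you would normally compute the Welch PSD process on each imported epoch and then average the resulting files.

```matlab
% Sketch: Welch PSD per epoch, then average across epochs (hypothetical variables).
fs     = 1000;                               % sampling rate in Hz (assumption)
epochs = {SignalEpoch1, SignalEpoch2};       % each: [nSamples x nSignals], 1-2 min long
for iEp = 1:numel(epochs)
    [pxx, f] = pwelch(epochs{iEp}, hamming(2*fs), fs, 2*fs, fs);   % 2 s windows, 50% overlap
    if iEp == 1
        allPsd = zeros([size(pxx), numel(epochs)]);
    end
    allPsd(:, :, iEp) = pxx;
end
meanPsd = mean(allPsd, 3);                   % average PSD across epochs
```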