Importing MEG resting-state data into the database

Hi Team,
I was going through the tutorials and a few questions asked on the forum a few years ago. I would like to confirm that my understanding is right. Can you please help me with it?

  • For MEG resting-state data, I have band-passed the signal in the 0.2-100 Hz range. I believe I do not have to do DC correction while importing into the database (see the sketch after this list).
    Source: https://neuroimage.usc.edu/brainstorm/Tutorials/Epoching#Import_in_database
  • Bad channel interpolation: A few channels are marked as bad, and I will be focusing on analysing the reconstructed source time series. Thus, I don't have to consider bad channel interpolation, and I believe there is no such feature in Brainstorm either?
  • I am analysing resting-state MEG data and have marked some time segments as bad. I haven't clearly understood whether I need to interpolate or otherwise treat the trimmed boundaries so the remaining segments concatenate cleanly after I remove a bad time segment. Can you help me here?
  • Also, for the computation of the data covariance matrix, what would you advise on selecting the baseline time segment?
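
For concreteness, the band-pass / no-DC step in the first bullet could be sketched like this outside Brainstorm, e.g. with MNE-Python (the file name is a placeholder):

```python
# Illustration only (MNE-Python, not Brainstorm); "rest_raw.fif" is hypothetical.
import mne

raw = mne.io.read_raw_fif("rest_raw.fif", preload=True)

# Band-pass 0.2-100 Hz: the 0.2 Hz high-pass edge removes the DC offset,
# which is why no separate DC correction should be needed at import time.
raw.filter(l_freq=0.2, h_freq=100.0, picks="meg")
```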
Appreciate all your patience and support!
Thanks!

Hello,

As always, there is no generic answer to analysis questions without a better understanding, on our end, of your research questions.

  • But broadly speaking, it is always recommended to remove the DC offset from raw MEG recordings: these DC values are much larger than brain signals, and high-pass filtering will cause artifacts at the edges of the recording that will spoil your analyses, especially if you epoch the data into shorter segments.
  • No need for channel interpolation if you focus on source mapping.
  • Brainstorm will take care of removing the BAD segments from further analyses.
  • If you're looking at resting state, you can indicate that the data covariance be derived from the entire recording. If you do not have empty-room recordings, the noise covariance can be set to the identity matrix (no information about the noise); see the sketch below.
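
For illustration, here is a minimal sketch of those two covariance choices in MNE-Python syntax (not Brainstorm's API; the file name is hypothetical, and your bad segments are assumed to be annotated):

```python
import numpy as np
import mne

raw = mne.io.read_raw_fif("rest_raw.fif", preload=True)  # hypothetical file

# Data covariance from the entire recording; segments whose annotation
# description starts with "bad" are excluded automatically
# (reject_by_annotation=True is the default).
data_cov = mne.compute_raw_covariance(raw, tmin=0, tmax=None)

# No empty-room recording available: fall back to an identity noise
# covariance, i.e. "no information about the noise".
meg_picks = mne.pick_types(raw.info, meg=True)
ch_names = [raw.ch_names[p] for p in meg_picks]
noise_cov = mne.Covariance(
    data=np.eye(len(ch_names)),
    names=ch_names,
    bads=raw.info["bads"],
    projs=raw.info["projs"],
    nfree=1,
)
```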

Hi Sylvain,
I am a bit confused about the following Standardize > Baseline normalization process:
[screenshot of the Standardize > Baseline normalization process options]
The online tutorial at the bottom left redirects to https://neuroimage.usc.edu/brainstorm/Tutorials/SourceEstimation#Z-score.
So, does this standardise the sensor time series or the computed source time series?
Also, in the tutorials, baseline normalisation is done before epoching for activity-based recordings. I am analysing resting-state MEG data. Should I still do baseline normalisation, given that I do not intend to epoch my data and will analyse the source time series?
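
If I understand the linked section correctly, the z-score would standardise each time series against the mean and standard deviation of a baseline window, i.e. something like this numpy sketch (array names are hypothetical):

```python
import numpy as np

def zscore_baseline(stc, baseline):
    """z-score each time series against a baseline window.

    stc      : hypothetical (n_sources, n_times) array of source time series
    baseline : slice or boolean mask selecting the baseline samples
    """
    mu = stc[:, baseline].mean(axis=1, keepdims=True)
    sd = stc[:, baseline].std(axis=1, keepdims=True)
    return (stc - mu) / sd
```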

I had already performed high-pass filtering and removed the artefacts. I do not intend to epoch the data and will be trimming the first 10 s and the last 10 s for further analysis. In this case, I believe I wouldn't have to do the DC correction. Is that right?
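
(For reference, the trimming I mean, sketched in MNE-Python syntax for concreteness:)

```python
import mne

raw = mne.io.read_raw_fif("rest_raw.fif", preload=True)  # hypothetical file

# Drop the first and last 10 s of the recording before further analysis.
raw.crop(tmin=10.0, tmax=raw.times[-1] - 10.0)
```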

Great, so if you have already high-pass filtered this long recording, that will have taken care of the large DC offset at each sensor. You therefore do not need to apply further DC/baseline correction.
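
To see why, a tiny synthetic check (plain numpy/scipy, all values made up): a high-pass filter drives the mean of the signal to approximately zero, which is exactly what a DC correction would do.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000.0                                        # sampling rate in Hz (made up)
t = np.arange(0, 60, 1 / fs)                       # 60 s of data
sig = 5e-12 + 1e-13 * np.sin(2 * np.pi * 10 * t)   # large DC offset + 10 Hz "brain" signal

b, a = butter(2, 0.2, btype="highpass", fs=fs)     # 0.2 Hz high-pass, as above
hp = filtfilt(b, a, sig)

print(f"mean before: {sig.mean():.2e}   after: {hp.mean():.2e}")  # after ~ 0
```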