Workflows: When to filter? When to generate the subject average?

Hi everyone,

I have some questions about the workflow when there are bad segments due to head movements:

  1. Should filtering (e.g., notch and bandpass) be applied after cutting out the clean segments or before?
  2. At what point in the source reconstruction pipeline is it best to combine the data from the same subject?

Thanks a lot for any advice!

We have put together the Workflows page to address these and other questions:
https://neuroimage.usc.edu/brainstorm/Tutorials/Workflows

Below are the links to the specific sections of that page that address the posted questions:

Frequency filters should be applied to the continuous raw data rather than to the imported epochs. This avoids the filter's edge transient consuming a good part of each epoch.

https://neuroimage.usc.edu/brainstorm/Tutorials/Workflows#Common_pre-processing_pipeline
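To illustrate the point, here is a minimal Python sketch (using SciPy; the 600 Hz sampling rate, cutoff frequencies, and notch settings are hypothetical, not Brainstorm defaults) of filtering the continuous recording first and epoching afterwards, so the filter's edge transients fall at the ends of the recording rather than inside each epoch:

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

fs = 600.0  # sampling rate (Hz), hypothetical
t = np.arange(0, int(10 * fs)) / fs
raw = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(t.size)

# Band-pass 1-40 Hz, zero-phase; the low cutoff implies a long edge transient
b, a = butter(4, [1 / (fs / 2), 40 / (fs / 2)], btype="band")
filtered = filtfilt(b, a, raw)  # filter the CONTINUOUS data first

# 50 Hz power-line notch (hypothetical line frequency), also on continuous data
b_n, a_n = iirnotch(50.0, Q=30.0, fs=fs)
filtered = filtfilt(b_n, a_n, filtered)

# Epoch afterwards: the transients sit at the edges of the whole recording,
# not at the edges of every epoch
epoch = filtered[int(2 * fs):int(3 * fs)]  # 1-s epoch well inside the recording
```

If the epochs were cut first and filtered individually, each one would carry its own edge transient at both ends.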

Once source estimation is performed, the data are all in the same space, that is to say the subject anatomy (the cortical surface, if unconstrained sources are estimated). So they can be averaged without further pre-processing.

If there are multiple runs and the run averages are computed from different numbers of trials, it is recommended to combine the run averages with a weighted average, so that each trial has the same weight in the subject average.

https://neuroimage.usc.edu/brainstorm/Tutorials/Workflows#Constrained_cortical_sources
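As a minimal sketch of that weighted average (the array shapes and trial counts below are made up for illustration; Brainstorm does this internally via its averaging process):

```python
import numpy as np

# Hypothetical per-run source averages (n_sources x n_times) and trial counts
rng = np.random.default_rng(0)
run_avgs = [rng.standard_normal((100, 50)) for _ in range(3)]
n_trials = np.array([40, 25, 60])

# Weight each run by its number of trials, so every trial
# contributes equally to the subject-level average
weights = n_trials / n_trials.sum()
subject_avg = sum(w * avg for w, avg in zip(weights, run_avgs))
```

A plain (unweighted) mean of the three run averages would instead give the 25-trial run the same influence as the 60-trial run.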

Hi @Raymundo.Cassani

Thanks for your reply.

I have 5 minutes of continuous resting-state data. I noticed that the head position is not stable over time, so I keep only the periods in which the head was stable, and then process those segments separately. I have followed the Brainstorm tutorial step by step for segmenting periods with a stable head position. Now, if I apply the notch and band-pass filters to the continuous raw data before segmenting, a filter transient can coincide with the first stable segment, and I end up discarding part of that segment. I was wondering if this is correct.

For this first segment, you can just extract the data from 1 sample after the transient to the end of the segment.
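A minimal sketch of that trimming step (the sampling rate and transient duration below are hypothetical; in practice the transient length depends on the filter's impulse response):

```python
import numpy as np

fs = 600.0                              # sampling rate (Hz), hypothetical
segment = np.random.randn(int(5 * fs))  # first stable segment, already filtered

# Assumed transient length, here taken as 2 s purely for illustration
transient_samples = int(2 * fs)

# Keep the data from 1 sample after the transient to the end of the segment
clean = segment[transient_samples + 1:]
```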

Thank you @Raymundo.Cassani

I ended up using an LCMV beamformer to reconstruct the sources for the different stable-head segments of the same subject, with sources constrained normal to the cortex and median-eigenvalue regularization. What is the correct way to combine the source estimates from the different temporal segments of the same subject? The weighted average gives problems because the data covariance matrices have different dimensions.

What do you mean by combine?

Are channels being removed in the different time segments?

I have several raw files for the same subject because I split the continuous recording into segments of stable head position, using the 1.80 mm threshold suggested in the Brainstorm tutorial. Each segment is then analyzed separately (with the same bad channels across segments), but the segments have different durations. After estimating the sources for each segment, I would like to merge/aggregate the source estimates to obtain a single representation per subject.