Bad segments in resting-state data

Hi everyone

I'm working on resting-state MEG data. If I understand correctly, labeling bad segments in this kind of data doesn't affect any subsequent analysis, not even the data covariance matrix.
I'm going to extract scout time series and compute connectivity between them. I plan to use the extended events of the bad segments that I selected based on the sensor data, and write code to remove these time periods from the extracted scout time series before computing connectivity. Does that make sense, or is there another way to deal with bad segments?
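Concretely, I mean something like this minimal sketch (plain NumPy, not Brainstorm code; `bad_segments`, `sfreq` and the dummy data are placeholders for illustration):

```python
import numpy as np

def drop_bad_samples(scout_ts, bad_segments, sfreq):
    """Remove all samples that fall inside a bad segment.

    scout_ts     : (n_scouts, n_samples) array of scout time series
    bad_segments : list of (t_start, t_end) tuples, in seconds
    sfreq        : sampling frequency, in Hz
    """
    good = np.ones(scout_ts.shape[1], dtype=bool)
    for t_start, t_end in bad_segments:
        i0 = max(0, int(np.floor(t_start * sfreq)))
        i1 = min(scout_ts.shape[1], int(np.ceil(t_end * sfreq)))
        good[i0:i1] = False
    return scout_ts[:, good]

# Dummy example: 4 scouts, 60 s at 600 Hz, two bad segments
scout_ts = np.random.randn(4, 60 * 600)
ts_clean = drop_bad_samples(scout_ts, [(12.0, 13.5), (40.2, 41.0)], sfreq=600.0)
```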

Thanks in advance.

I'm working on resting-state MEG data. If I understand correctly, labeling bad segments in this kind of data doesn't affect any subsequent analysis, not even the data covariance matrix.

It depends on the process: the PSD calculation excludes the bad segments, and so do the noise covariance computation and the SSP computation.
I'm not sure how you would use a data covariance for processing resting-state data.

I'm going to extract scout time series and compute connectivity between them. I plan to use the extended events of the bad segments that I selected based on the sensor data, and write code to remove these time periods from the extracted scout time series before computing connectivity. Does that make sense, or is there another way to deal with bad segments?

You can import all your recordings in blocks of 1s (use the option "Split" in the import options). Any 1s block that includes part of a bad segment is tagged as bad, so if you select all the imported trials in Process1 and run the process "Standardize > Concatenate time", it produces a new file with all the blocks minus the bad ones.
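To illustrate the block logic outside Brainstorm, here is a minimal NumPy sketch (the function and variable names are mine, purely for illustration, not the actual Brainstorm implementation):

```python
import numpy as np

def concat_good_blocks(data, bad_segments, sfreq, block_dur=1.0):
    """Cut the recording into fixed-length blocks, drop every block that
    overlaps a bad segment, and concatenate the remaining blocks."""
    block = int(round(block_dur * sfreq))
    n_blocks = data.shape[1] // block
    kept = []
    for b in range(n_blocks):
        t0, t1 = b * block / sfreq, (b + 1) * block / sfreq
        # A block is bad as soon as it overlaps any bad segment
        if not any(s < t1 and e > t0 for s, e in bad_segments):
            kept.append(data[:, b * block:(b + 1) * block])
    return np.concatenate(kept, axis=1)

# Dummy example: 1-second blocks on a 60 s recording at 600 Hz
data = np.random.randn(4, 60 * 600)
clean = concat_good_blocks(data, [(12.0, 13.5), (40.2, 41.0)], sfreq=600.0)
```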
The approach you describe (creating "good" segments, importing them and concatenating them) also works, but it is typically less practical to interactively select long good segments than short bad ones.

If you need this re-concatenated file to be handled by Brainstorm as a continuous file, right-click on it > Review as raw.

Note that this is not always a recommended procedure: cutting out bad segments introduces discontinuities in the signals, which can have unpredictable effects on sensitive connectivity measures.
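If you want to avoid the discontinuities altogether, one common alternative is to compute the connectivity measure within each continuous good segment and average across segments, rather than concatenating. A minimal sketch, using Pearson correlation as a stand-in for your metric of choice (names and dummy data are assumptions):

```python
import numpy as np

def mean_connectivity(good_segments):
    """Compute a connectivity matrix within each continuous good segment,
    then average across segments (no concatenation, no jumps)."""
    mats = [np.corrcoef(seg) for seg in good_segments if seg.shape[1] > 1]
    return np.mean(mats, axis=0)

# Dummy example: three good segments of different lengths, 4 scouts
segs = [np.random.randn(4, n) for n in (3000, 5000, 2000)]
conn = mean_connectivity(segs)
```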

I need the data covariance matrix to compute sources using beamformers.

Thanks for your suggestions. Regarding the effect of discontinuities in the signal on the connectivity results, could you point me to some references for further reading?

Thanks again.

I need the data covariance matrix to compute sources using beamformers.

@John_Mosher @juangpc Would you recommend the use of beamformers for resting state data?

Could you point me to some references for further reading?

@Sylvain @hossein27en ?

Hi @Pedram,

I see no problem using beamformers for resting-state recordings.

And you would use all the recordings to compute both the noise and the data covariance?

I would not use the same data to define both the data and noise covariances.
But this is a tricky matter. I would test the hypothesis using empty-room recordings to define the noise covariance matrix, and the actual recording to define the data covariance.
Or try to replicate this method (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3382730/), which is well known and widely cited, before trying fancier things. In that paper there is even some discussion of the problems with the SNR of the output time series.
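For illustration, a minimal NumPy sketch of that split (array shapes and names are assumptions, not from the paper):

```python
import numpy as np

# Dummy stand-ins: (n_channels, n_samples) arrays
empty_room = np.random.randn(30, 60 * 300)   # empty-room recording
rec_clean  = np.random.randn(30, 60 * 300)   # resting data, bad samples already removed

# Noise covariance from the empty room, data covariance from the recording
noise_cov = np.cov(empty_room)
data_cov  = np.cov(rec_clean)

# An LCMV beamformer would then build the spatial filter from data_cov,
# w = C^-1 L / (L^T C^-1 L), and use noise_cov for normalization (e.g. NAI).
```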

Thank you for elaborating on that.
I use the empty room recordings to estimate the noise covariance and the actual data for the data covariance matrix. However, I want to know whether you exclude the bad segments before computing the data covariance matrix, and also how you manage the bad segments prior to connectivity analysis.

exclude bad segments before

Yes. In this universe, there is only one thing more important than data, and that is clean data.
