Keeping or removing bad segments in a short-time Fourier transform process (and subsequent visualization)?

Hi,

I figured I would make a discussion post here to include as many voices in the matter as possible.

I was looking through the various methods currently in Brainstorm for computing TF maps and noticed that most do not remove bad segments before processing the data. This makes some sense for kernel-based methods such as the Hilbert and wavelet transforms, but for a windowing process like the STFT it is very easy to identify the windows that overlap bad segments (and remove them if desired). If an STFT-based process were to be distributed in Brainstorm, what are your thoughts on keeping or removing timepoints containing bad segments?
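
For reference, here is a rough sketch of what I mean (plain MATLAB with hypothetical window parameters and bad-segment list, not the actual Brainstorm code):

```matlab
% Rough sketch: flag STFT windows that overlap any bad segment.
winLength   = 512;                     % window length in samples (hypothetical)
winOverlap  = 256;                     % overlap in samples (hypothetical)
nSamples    = 10000;                   % recording length in samples
badSegments = [2000 2500; 7000 7200];  % [nBad x 2] bad segments, in samples

winStarts = 1:(winLength - winOverlap):(nSamples - winLength + 1);
isBadWin  = false(size(winStarts));
for iWin = 1:numel(winStarts)
    winStop = winStarts(iWin) + winLength - 1;
    % A window is bad if it overlaps any bad segment
    isBadWin(iWin) = any(badSegments(:,1) <= winStop & ...
                         badSegments(:,2) >= winStarts(iWin));
end
```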

If we decide to remove bad timepoints, Brainstorm's TF visualization platform currently generates the wrong set of times for the data (it uses linspace over the full time range with the reduced number of points), so I would have to add a special case that provides the correct times (either leaving blank space at the missing times or displaying a linear interpolation across the bad times).
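
To illustrate the time-vector issue, a minimal sketch (hypothetical values; winStarts/isBadWin as in the sketch above):

```matlab
% Sketch of the time-vector problem (hypothetical values)
sfreq     = 1000;                        % sampling rate in Hz
winLength = 512;
winStarts = 1:256:9489;                  % starts of all STFT windows
isBadWin  = false(size(winStarts)); isBadWin([8 9 28]) = true;

tCenters = (winStarts + (winLength - 1)/2) / sfreq;   % true window-center times

% What the display currently does (wrong): respace the surviving points
% evenly over the full time range
tWrong = linspace(tCenters(1), tCenters(end), nnz(~isBadWin));

% What it should do: keep the true, non-uniform times of the good windows
tRight = tCenters(~isBadWin);
```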

What are your thoughts @Marc.Lalancette @Francois @Sylvain @Raymundo.Cassani ?

Hi Luc,

The PSD computation ignores both the bad files/trials (as they are not considered by the Process1 tab/pipeline manager) and the bad segments (any segment of recordings overlapping with an event whose name includes the tag "bad").

This impacts the number of windows that are averaged, but not the format of the output file.
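Schematically, the averaging amounts to something like this (a sketch with placeholder values, not the actual Brainstorm code):

```matlab
% Schematic only: Welch-style averaging that skips bad windows
spectra  = rand(20, 129);                      % [nWindows x nFreqs] per-window power
isBadWin = false(20, 1); isBadWin([5 6 15]) = true;
psd = mean(spectra(~isBadWin, :), 1);          % fewer windows averaged,
                                               % but output size is unchanged
```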
However, we can't use the same procedure for methods that create files with a time dimension, because it would result in missing data in the output file.
Brainstorm is not equipped to handle files with time discontinuities. Just modifying the Time vector and removing the corresponding samples would not do any good, as most functions assume that these values are strictly linearly spaced. It could make sense to exclude bad segments from a time-resolved method, but you can't do it simply by removing the corresponding samples.

I think you could either:

  • Keep the bad segments but report the bad events in the output structure. This would require some substantial interface work, as the event display is not supported by the time-frequency figures, and the timefreq files do not even include an 'Events' field...
  • Set all the values that can't be estimated to zero (not NaN, as NaN values are not handled correctly by the normalization or display functions); see the sketch after this list.
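
For the second option, a minimal sketch of zero-filling on a [channels x time x frequency] array (hypothetical variable names):

```matlab
% Sketch of the zero-filling option (hypothetical variable names).
% TF is [nChannels x nTimes x nFreqs]; isBadTime flags time points that
% fall in bad segments.
nChannels = 32; nTimes = 100; nFreqs = 40;
TF = randn(nChannels, nTimes, nFreqs);
isBadTime = false(1, nTimes); isBadTime(40:55) = true;
TF(:, isBadTime, :) = 0;   % zeros rather than NaN: NaN values would break
                           % the normalization and display functions
```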

To make it clear that these new functions work differently from the others, you could add an explicit input parameter to decide what to do with bad segments (like in the PAC process "Exclude bad channels and bad segments").

The decision was taken a while ago to keep the bad segments in time-frequency analysis, to avoid any confusion. The logic was that users have the opportunity to remove the bad epochs beforehand. Bad segments always cause the imported data segment to be marked as bad; if users nonetheless want to use these bad epochs, they have to explicitly mark them as good manually, so there is no risk of using bad data unintentionally.
This could be discussed again at a weekly meeting if needed.
