Baseline normalization for source localization

Hello,

We are currently working on a project in which we apply source-localization consensus mapping to simple spike events. In one sleep stage, we identified 47 spike events and aligned them in time such that the spike peak is located at zero seconds.

However, in the consensus-mapping EEG results, we observe an offset in the EEG signal that needs to be removed. For an additional sensor-level analysis, we already developed a baseline-detection algorithm to identify a stable baseline in the time window from −2 to −0.5 seconds before the spike. The algorithm uses a 500-ms sliding window with a 10-ms step size and selects the segment with the lowest variance. Because the sensor-level analysis uses only the Fp2 channel, the baseline window is currently determined based solely on that channel.
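In pseudocode terms, the selection rule is roughly the following MATLAB sketch (the 256-Hz sampling rate and the placeholder data are assumptions for illustration, not our actual script):

```matlab
% Sliding-window baseline search on Fp2: 500-ms window, 10-ms step, lowest variance wins.
fs      = 256;                                 % assumed sampling rate (Hz)
fp2     = randn(1, round(1.5 * fs));           % placeholder: Fp2 segment from -2 s to -0.5 s
winLen  = round(0.5  * fs);                    % 500-ms window, in samples
stepLen = round(0.01 * fs);                    % 10-ms step, in samples

starts = 1:stepLen:(numel(fp2) - winLen + 1);
winVar = zeros(size(starts));
for k = 1:numel(starts)
    seg       = fp2(starts(k) : starts(k) + winLen - 1);
    winVar(k) = var(seg);                      % variance of this candidate window
end
[~, best] = min(winVar);                       % lowest-variance segment = baseline
baseIdx   = starts(best) : starts(best) + winLen - 1;
```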

For the source-localization analysis, we intend to use this baseline window in the “baseline normalization – DC offset removal” option. However, applying a baseline derived from Fp2 to all channels may not be optimal, since it does not necessarily correspond to the lowest-variance window across all channels.

Therefore, I would like to ask whether this is the most appropriate method for offset correction in this context, or whether alternative approaches are recommended for baseline removal at the source-localization stage.

I have attached figures showing (1) the consensus EEG data in which the residual offset is visible, and (2) MATLAB plots demonstrating the baseline detected in Fp2 and applied across channels (most channels show good alignment, but a few exhibit large activity/variance in the selected baseline window).

Thanks!

Indeed, defining a baseline period for all the sensors based on only one sensor does not seem optimal. You could instead check the summed variance across channels and pick the window that minimizes it (quick sketch below).
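Something along these lines (a rough MATLAB sketch; the channel count, sampling rate, and placeholder data are only assumptions):

```matlab
% Same window search, but minimizing the variance summed over all channels,
% so one common baseline window is chosen for the whole montage.
fs = 256; winLen = round(0.5*fs); stepLen = round(0.01*fs);   % assumed parameters
data = randn(32, round(1.5*fs));               % placeholder: [channels x samples], -2 s to -0.5 s

starts = 1:stepLen:(size(data, 2) - winLen + 1);
sumVar = zeros(size(starts));
for k = 1:numel(starts)
    seg       = data(:, starts(k) : starts(k) + winLen - 1);
    sumVar(k) = sum(var(seg, 0, 2));           % per-channel variance, summed
end
[~, best] = min(sumVar);
baseIdx   = starts(best) : starts(best) + winLen - 1;   % common baseline window
```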

Regarding the selection of the baseline: what is the reason for choosing a specific baseline for each spike?
Wouldn't this approach complicate the across-spike analyses, since they will have different baselines?

Usually, DC correction happens at the sensor level, before computing sources.
However, if the sources are already computed, and there is a linear relationship between the sensor data and the sources, then DC-correcting the sources using the same baseline window is equivalent to DC-correcting the sensor data and then computing the sources.
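In other words (just sketching that argument, writing $x(t)$ for the sensor data, $K$ for the linear imaging kernel so that $s(t) = K\,x(t)$, and $\bar{x}_B$, $\bar{s}_B$ for the means over the baseline window $B$):

$$
K\bigl(x(t) - \bar{x}_B\bigr) \;=\; K\,x(t) - K\,\bar{x}_B \;=\; s(t) - \bar{s}_B .
$$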


@yashvakilna, any comments on this thread?

Ah sorry, perhaps I explained it wrong, but DC correction at the sensor level is exactly what I am trying to do. So we are taking a few steps back: first correct the DC offset –> correct the time offset –> compute the sources for the single spikes (with one baseline from the wake run for all spike events) –> consensus mapping.

The question is whether, within the DC correction at the sensor level, it is possible to use a different baseline window for each channel, or whether we have to use one baseline window for all channels within that spike. This matters because I only have the window from −2 s to −0.5 s before the spike annotation in which to find a 500-ms baseline.

I hope this makes my question more clear!

The goal is to remove the offset present at the time of the spike, so a low-variance time window may still carry an offset that differs from the one at your spike time if it is not close enough. Whatever is happening at the time of the spike is what you want to deal with, so I'd stick with the window nearest the onset of the signal of interest.

If your spikes are short in duration and have high-frequency content (spiky), you may benefit instead from a high-pass filter to remove the very slow, large-amplitude drifts in your channels. You could then still select a time window close to your spike for baseline removal, or, if your high-pass cutoff is high enough, it might not be necessary anymore.
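For example, a rough sketch assuming the Signal Processing Toolbox (the 0.5-Hz cutoff and placeholder data are only illustrative and should be tuned to your spikes):

```matlab
% Zero-phase high-pass filter applied to all channels to remove slow drifts.
fs   = 256;                                 % assumed sampling rate (Hz)
data = randn(32, 60 * fs);                  % placeholder: [channels x samples]
fc   = 0.5;                                 % high-pass cutoff (Hz), to be tuned
[b, a] = butter(2, fc / (fs/2), 'high');    % 2nd-order Butterworth high-pass
dataHP = filtfilt(b, a, data')';            % filtfilt filters each column, hence the transposes
```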

As for the follow-up question of what frequency to use for the high-pass, I'd say this is subjective and depends on your data. You want to balance removing as many uninteresting fluctuations as possible while keeping the spike mostly unaltered.

I haven't dealt with single-event mapping, however. I think there's no perfect solution here; it will always be noisy. If the spikes are very consistent in shape, you could also average them before localizing, which would reduce the need for cleaning: you could possibly get away with not filtering and just selecting a baseline window (rough sketch below).
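For example, something like this rough sketch (placeholder dimensions and baseline window, just to illustrate the idea):

```matlab
% Average the time-aligned spike epochs, then remove the per-channel DC offset.
fs       = 256;                              % assumed sampling rate (Hz)
epochs   = randn(32, 3 * fs, 47);            % placeholder: [channels x samples x spikes], peak-aligned
baseIdx  = 1:round(0.5 * fs);                % placeholder baseline window (samples)
avgSpike = mean(epochs, 3);                  % average across spikes
avgSpike = avgSpike - mean(avgSpike(:, baseIdx), 2);   % subtract baseline mean per channel
```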