Visualization bug: flat channel + DC + high pass

I already showed this to @Raymundo.Cassani today. I had perfectly flat EEG channels in a certain time window, and if I apply the visualization options DC and high-pass filter, the channels disappear. It's not a serious issue for me, but I thought it best to report it since it's still unexpected behavior.

Another thing that surprised me was that when applying DC only on this "flat" section, the DC offset is not removed perfectly as I can still make the channels move by zooming in enough. I would expect this setting to remove the offset of only the displayed part of the data, right?

Cheers,
Marc

Can you please share an example file?
(short if possible, and indicating the time window in which this behavior can be reproduced)

This example starts flat for about 8 seconds, then there are some changes.
https://box.bic.mni.mcgill.ca/s/Lldeg5alb4wpwdP

> I had perfectly flat EEG channels in a certain time window, and if I apply the visualization options DC and high-pass filter, the channels disappear. It's not a serious issue for me, but I thought it best to report it since it's still unexpected behavior.

This is expected: it makes it possible to distinguish between strictly 0 (hidden) and values close to zero (visible).

Would you say that seeing only the lighter gray line when the signal is strictly zero is a bad solution?
It's very easy to disable this behavior...

> Another thing that surprised me was that when applying DC only on this "flat" section, the DC offset is not removed perfectly as I can still make the channels move by zooming in enough. I would expect this setting to remove the offset of only the displayed part of the data, right?

This is indeed quite unexpected. It seems to be due to numerical approximations in the computation of the mean or the subtraction.
After computing the mean (~0.126) and removing it from the F matrix, there is some residual left (2.3315e-15), which is what we then see in the figure as non-zero values...
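This rounding effect is easy to reproduce outside Brainstorm. Here is a minimal sketch in plain Python (not Brainstorm code, and with an illustrative constant of 0.1 rather than the 0.126 above): the mean of a perfectly constant channel, computed as sum/length, accumulates rounding error and is not bit-identical to the constant itself, so subtracting it leaves a tiny residual.

```python
# Hypothetical flat channel: every sample holds the same DC offset.
offset = 0.1
channel = [offset] * 3           # perfectly flat "recording"

# Computing the mean introduces floating-point rounding error.
mean = sum(channel) / len(channel)
residual = [v - mean for v in channel]

print(mean == offset)                  # False: the mean is off by about 1 ulp
print(max(abs(r) for r in residual))   # tiny but non-zero
```

The residual here is on the order of 1e-17; with longer channels and other offsets it can grow to the 1e-15 range seen in the figure.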

It would be possible to test for strict equality of all the values in the F matrix and, for constant channels, subtract the value of the first time sample instead of the output of the mean function. But this would make the reading function significantly heavier just to handle some very marginal use cases...
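For reference, the check discussed here could be sketched as follows. This is a hypothetical illustration in plain Python, not Brainstorm's actual reading code; the function name `remove_dc` and the representation of F as a list of channels are made up for the example.

```python
def remove_dc(F):
    """Subtract the DC offset from each channel of F (list of lists)."""
    out = []
    for channel in F:
        if all(v == channel[0] for v in channel):
            # Strictly constant channel: subtracting the first sample
            # is exact and leaves no floating-point residual.
            baseline = channel[0]
        else:
            # Usual case: subtract the computed mean.
            baseline = sum(channel) / len(channel)
        out.append([v - baseline for v in channel])
    return out

print(remove_dc([[0.1] * 5])[0])   # strictly zero: [0.0, 0.0, 0.0, 0.0, 0.0]
```

The all-equal scan is what would make the reading function heavier: it adds a full pass over every channel even though constant channels are rare in real recordings.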

I'd prefer documenting this somewhere rather than slowing down some critical bottlenecks.
What do you think?

I'm not sure I see the advantage for users of distinguishing zero from almost zero, but I'm sure there was a reason. It made me think it was a bug, though; since it's not, it's fine.

I don't think we need to modify the DC offset computation either. I guess I didn't realize how fast the zoom works; it didn't feel like I was getting down to 1e-15 in just a few scrolls. But that's fine then. This is not "normal" data, and I just wanted to make sure it wasn't something more serious.

Thanks, François