Tutorial 11: Bad channels

Hello BST community,

I wish to ask some questions based on Tutorial 11: Bad channels.

  1. What actually happens after marking some channels as bad?
    According to the below statement when some channels are marked as bad, the remaining sensors around the missing sensors are re-interpolated. What does this actually mean and how does it work? Could you please point out some research articles that this re-interpolation is based on?

The selected channels disappear from the two views. In the time series figure, the signals are not visible anymore, in the topography the corresponding dots disappear and the values of the magnetic fields around the missing sensors get re-interpolated based on what is left.

  2. How would Brainstorm handle a situation where there is more than one bad sensor in the same region (e.g. P1, Pz and POz)? I imagine re-interpolation would still happen, but if the number of missing sensors exceeds a certain level the re-interpolation would become inaccurate, because it still relies on the remaining sensors: as fewer of them are available, re-interpolation becomes increasingly difficult.

  3. Based on the previous question, what would be your recommendation for an upper limit of the number of bad channels you can tolerate, after which the dataset is just not up to scratch and needs to be rejected overall? Might it be a different upper limit if you had 32 channels or 64 channels?

Many thanks for your time,
MinChul Park

When a channel #i is marked as bad, the corresponding entry in the list of channel flags is set to -1 (ChannelFlag(i) = -1;). The figures are then updated to reflect this modification.
https://neuroimage.usc.edu/brainstorm/Tutorials/Epoching#On_the_hard_drive
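Brainstorm itself manages ChannelFlag in MATLAB structures on disk; purely as an illustration of the flag mechanism described above, here is a minimal Python sketch (all names are hypothetical, not Brainstorm's API):

```python
import numpy as np

# Hypothetical sketch of the bad-channel flag mechanism.
# Brainstorm stores ChannelFlag as a MATLAB vector; this mimics the idea only.
n_channels = 64
channel_flag = np.ones(n_channels, dtype=int)   # 1 = good, -1 = bad

def mark_bad(flags, indices):
    """Set the flag of each listed channel to -1, like ChannelFlag(i) = -1;"""
    flags = flags.copy()
    flags[indices] = -1
    return flags

channel_flag = mark_bad(channel_flag, [12, 30, 31])  # e.g. three bad channels
good = np.flatnonzero(channel_flag == 1)             # channels kept for display
```

Figures would then be drawn from `good` only, which is why the bad traces and topography dots simply disappear.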

According to the below statement when some channels are marked as bad, the remaining sensors around the missing sensors are re-interpolated. What does this actually mean and how does it work?

The bad sensors are ignored and the topography projection is computed as if there were no missing channels.
In MEG: https://neuroimage.usc.edu/brainstorm/Tutorials/ExploreRecordings#Magnetic_interpolation
In EEG:
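To make "computed as if there were no missing channels" concrete: below is a toy inverse-distance-weighted interpolation over the good sensors only. This is NOT Brainstorm's actual method (the MEG case uses the magnetic-field model linked above, and EEG topographies typically use spline-based schemes); it only illustrates how a value at a bad sensor's 2D position can be estimated from what is left:

```python
import numpy as np

# Toy inverse-distance-weighted interpolation using good sensors only.
# Illustrative only; Brainstorm's real interpolation schemes differ.
def interpolate_at(pos, good_pos, good_val, power=2.0):
    d = np.linalg.norm(good_pos - pos, axis=1)
    if np.any(d == 0):                # query point sits exactly on a good sensor
        return good_val[np.argmin(d)]
    w = 1.0 / d**power                # closer sensors weigh more
    return np.sum(w * good_val) / np.sum(w)

# Toy 2D layout: 4 good sensors at unit distance around one bad sensor at (0,0).
good_pos = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]], dtype=float)
good_val = np.array([1.0, 1.0, 3.0, 3.0])
print(interpolate_at(np.array([0.0, 0.0]), good_pos, good_val))  # 2.0
```

Because all four neighbours are equidistant here, the estimate is just their mean; with an uneven layout, nearer sensors dominate.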

How would Brainstorm handle a situation when there is more than one bad sensor around a similar region?

The projection from the sensors to the 2D topography surface is performed regardless of the number of bad sensors. If you have many missing sensors in a region, the values displayed on the topography are interpolated from sensors that are far away, and are therefore much less reliable. Brainstorm does not show any warning: you are responsible for the interpretation you make of a topography with many adjacent missing sensors.
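A quick way to see why an adjacent patch of bad sensors degrades the interpolation: measure how far the nearest remaining good sensor is from the centre of the patch. This toy grid (not Brainstorm code) shows the distance growing once the neighbours are bad too:

```python
import numpy as np

# Toy 5x5 sensor grid on integer coordinates; illustrative only.
sensors = np.array([[x, y] for x in range(-2, 3) for y in range(-2, 3)], float)

def dist_to_nearest_good(query, bad_idx):
    """Distance from a query point to the closest sensor not marked bad."""
    good = np.delete(sensors, bad_idx, axis=0)
    return np.linalg.norm(good - query, axis=1).min()

centre = np.array([0.0, 0.0])
# Only the centre sensor bad: its 4 direct neighbours remain at distance 1.
print(dist_to_nearest_good(centre, [12]))                   # 1.0
# Centre plus its 4 direct neighbours bad: must reach the diagonals, sqrt(2).
print(dist_to_nearest_good(centre, [7, 11, 12, 13, 17]))    # ~1.414
```

The interpolated value at the patch centre is then driven entirely by these farther sensors, which is exactly the "less reliable" situation described above.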

Based on the previous question, what would be your recommendation for an upper limit of the number of bad channels you can tolerate, after which the dataset is just not up to scratch and needs to be rejected overall? Might it be a different upper limit if you had 32 channels or 64 channels?

It all depends on what you are expecting to do with the data.
If your objective is source estimation, then 64 channels is possibly already very low.
We are not EEG experts, I recommend you refer to the EEG literature for questions related to the EEG methodology.

Great - thanks for all these informative answers - always appreciated!