Noise Covariance help

I’m computing noise covariance at the moment, and I’m wondering if anyone might be able to clarify a couple of small things:

  1. I want my baseline to include only “noise”, so if my stimulus onset is at 0 ms, I imagine I would want a baseline of, say, -100 ms to 0 ms?

  2. I’m having a bit of difficulty wrapping my mind around what removing the DC offset actually does, so any clarification around this would be helpful. I’ve read that it “removes the average from each file: file by file, or globally”, but I’m not clear which option is the best bet for my data besides just leaving it at the default (block by block).

  3. I have plenty of data, and plenty of time points, so based on the tutorials I imagine the full noise covariance matrix is the best way to go? Moreover, what is the distinction between the two options (full vs. diagonal)?

Sorry in advance for the nitty-gritty, detail-oriented questions… one day I’ll have to present my thesis data, and I’d love to know as much as possible about what I’m doing and what I’m not doing :slight_smile:

Cheers!

Hello,

  1. The length of what you want to consider as a baseline depends only on your experiment. If you have very short inter-stimulus intervals, you may want to shorten it. If you always have a long period between two stimuli, you can probably increase it. The goal is to avoid having any of the brain processes of interest occurring during this noise/baseline period.

  2. The covariance of signals s1 and s2 is: sum((s1-mean(s1)) .* (s2-mean(s2))) ./ (Ntime-1)
    This subtraction of the average of the signal (mean(s)) is what is referred to as “Remove DC offset”; it is the same operation that is performed when importing the trials into the database.
    If you calculate the noise covariance from multiple trials, you can process all the trials independently and subtract an average calculated locally for each trial. Because the values of the MEG sensors can drift significantly over time, this is supposed to be more accurate. I don’t see any reason why you would choose a different option than “block by block”; the other option is there just for compatibility with other programs (see the first sketch below this list).
    If you have already removed this DC offset from your individual trials on the same baseline, or if you have applied a high-pass filter to your recordings before importing them, this option is not going to change anything.

Note that if you are processing MEG recordings, we recommend that you calculate the noise covariance matrix from empty-room recordings (e.g. 2 min of recordings acquired before you bring the subject inside the MEG shielded room):
http://neuroimage.usc.edu/brainstorm/Tutorials/TutNoiseCov#Discussion

  3. If there are not enough time points, the covariance cannot be estimated correctly, which can ultimately lead to instabilities in the calculation of the inverse model. In this case we keep only the most essential information of the noise covariance matrix: the variance of the sensors, i.e. the diagonal of the matrix.
    If you have enough data, there is no reason to use the truncated (diagonal-only) noise covariance matrix (see the second sketch below).
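
To make the “block by block” vs. “global” distinction in point 2 concrete, here is a minimal MATLAB-style sketch. This is not actual Brainstorm code: the variable names, the [channels x time x trials] layout, and the normalization are just assumptions for illustration.

```matlab
% Hypothetical example: noise covariance from Ntrials baseline epochs.
% F is assumed to be [Nchannels x Ntime x Ntrials] of baseline-only data.
Nchannels = 4; Ntime = 500; Ntrials = 20;
F = randn(Nchannels, Ntime, Ntrials) ...
    + repmat(10*randn(Nchannels,1), 1, Ntime, Ntrials);  % simulated per-channel offsets

% "Block by block": subtract each trial's own average (its local DC offset)
C_block = zeros(Nchannels);
for i = 1:Ntrials
    x = bsxfun(@minus, F(:,:,i), mean(F(:,:,i), 2));  % remove this trial's DC offset
    C_block = C_block + x * x';
end
C_block = C_block / (Ntrials*Ntime - 1);  % one simple normalization choice

% "Global": subtract a single average computed over all the trials at once
gMean = mean(mean(F, 2), 3);              % [Nchannels x 1] global mean
C_global = zeros(Nchannels);
for i = 1:Ntrials
    x = bsxfun(@minus, F(:,:,i), gMean);
    C_global = C_global + x * x';
end
C_global = C_global / (Ntrials*Ntime - 1);
```

If the sensor offsets drift between trials, the global option folds that drift into the covariance estimate, while the block-by-block option does not, which is why block-by-block is the safer default.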
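
And for point 3, the difference between the full and the diagonal options is simply whether the off-diagonal (cross-sensor) terms are kept. A short hypothetical sketch, continuing from C_block above:

```matlab
% Full noise covariance: sensor variances on the diagonal,
% cross-sensor covariances everywhere else.
C_full = C_block;

% Diagonal-only version: keep only the per-sensor variances, zero the cross terms.
% This is the fallback when there are too few time samples to estimate
% the full matrix reliably.
C_diag = diag(diag(C_full));
```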

Cheers,
Francois

Thank you! :slight_smile: I’m doing an N400 EEG study!