= Tutorial 6: Noise covariance matrix =
The source reconstruction process requires an estimation of the noise level in the recordings. Ideally, we want to represent only the noise of the sensors, but we can also use a baseline of resting recordings. The main problem is in fact to identify segments of recordings that we can consider as "noise", or at least that do not contain any of the brain activity of interest. This tutorial shows how to compute the noise covariance matrix from the pre-stimulation baseline of the two averaged files we have in the database.

== Compute from recordings ==
 * Select the protocol TutorialCTF and switch to the ''Functional data (sorted by subjects)'' view.
 * To estimate the noise covariance matrix, we need to use only the averages of the MEG recordings, not the standard deviations. Select the two ERF files at the same time (holding the ''Ctrl'' key, or ''Command'' on Macs), then right-click on one of them.<<BR>><<BR>> {{attachment:noiseCovMenu.gif}}
 * The "Noise covariance" menu includes:
  * '''Import from file''': Uses a matrix that was computed previously with the MNE software.
  * '''Import from Matlab''': Imports any [nChannels x nChannels] matrix from the Matlab workspace.
  * '''Compute from recordings''': Uses the selected recordings to estimate the noise covariance.
  * '''No noise modeling''': Does not estimate the noise covariance from noise recordings. Instead, it creates a file that contains an identity matrix. In the inverse modeling, this is equivalent to assuming that the noise in the recordings is homoskedastic, i.e. the same for all the sensors. This is the menu to use when you really don't have access to any noise information, or when you have no way to identify this noise (for instance if you are studying ongoing activity with EEG).
 * Select "Compute from recordings", and the options window appears:<<BR>><<BR>> {{attachment:noiseCovOptions.gif}}
 * The program takes the pre-stimulus values (= ''baseline'') of all the selected files, which are supposed to contain only noise, concatenates them into one big [nChannels x nTime] matrix F, and then computes the covariance matrix of F (see the sketch at the end of this section): <<BR>>!NoiseCov = (F - mean(F)) * (F - mean(F))'
 * The top part of this window shows a summary of the files that have been selected: 2 files at 1250Hz. Total: 126 time samples.
 * In the bottom part, you can define some options:
  * '''Baseline''': Specify what you consider as the pre-stimulus time window in your recordings. By default it takes all the time instants before 0ms, but you might need to redefine this depending on your experiment.
  * '''Remove DC offset''': Define how to remove the average from each file: file by file, or globally.
  * '''Output''': Compute either a full noise covariance matrix, or just a diagonal matrix (only the variances of the channels). A full matrix is better, but only when you have enough time samples to estimate it properly, which is not the case here. Keep the default selection unless you know exactly what you are doing: Brainstorm detects the preferred option for each case.
 * Click on OK. Observe that two new files appeared in the database, one for each condition. Each of them contains the same [nChannels x nChannels] noise covariance matrix. Right-click on one of them:<<BR>><<BR>> {{attachment:noiseCovFiles.gif}}
 * '''Display as image''': Shows the noise covariance matrix as an indexed image. This can be useful to quickly check the quality of the recordings: the noisier channels are indicated by red points.<<BR>><<BR>> {{attachment:noiseCov.gif}}
 * '''Apply to all conditions''': Copies this noise covariance matrix to all the other conditions of the same subject.
 * '''Apply to all subjects''': Copies this noise covariance matrix to all the conditions of all the subjects in the database.
 * You can also copy a noise covariance file to another condition/subject just like any other file: use the popup menus File > Copy/Paste, or the keyboard shortcuts Ctrl+C and Ctrl+V.
 * Note: The more time samples you have for the estimation of the noise covariance, the more accurate the source estimation. If you have many files available for the same subject / same run, always use all of them: select all the files at the same time (or the subject, when you don't have different types of files like our "Std" files here), and start the computation for all of them ''at the same time''. If you run the same process several times on different files for the same subjects/conditions, the noise covariance file is simply overwritten each time.
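
The following Matlab sketch makes the computation described above concrete. It is not the actual Brainstorm implementation: the variable names DataA and DataB are hypothetical (two averaged files exported to the workspace with File > Export to Matlab), and the normalization of the covariance is only indicative.

{{{
% Minimal sketch of the noise covariance estimation (illustrative, not the Brainstorm code).
% DataA and DataB: Brainstorm data structures exported to the workspace, with
%   F    = [nChannels x nTime] recordings
%   Time = [1 x nTime] time vector in seconds
Fa = DataA.F(:, DataA.Time < 0);              % Baseline: all time samples before 0ms
Fb = DataB.F(:, DataB.Time < 0);
Fa = Fa - mean(Fa,2) * ones(1, size(Fa,2));   % Remove DC offset: file by file
Fb = Fb - mean(Fb,2) * ones(1, size(Fb,2));
F  = [Fa, Fb];                                % Concatenate the baselines of the two files
NoiseCov     = (F * F') / size(F,2);          % Full [nChannels x nChannels] covariance
NoiseCovDiag = diag(diag(NoiseCov));          % "Diagonal" output: keep only the channel variances
NoiseCovIdentity = eye(size(F,1));            % What the "No noise modeling" option stores
}}}
In practice, let the interface do all of this for you: the sketch is only meant to clarify what the Baseline, Remove DC offset and Output options control.
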
== Noise covariance from another dataset ==
To get a good estimation of the noise, we need many more time samples than what is available in an averaged file. Also, the pre-stimulation baseline might not be a very good estimation of the noise of the sensors. You have several options to get better results, by using different segments of recordings than the ones you are analyzing. The only constraint: you need to apply the '''exact same pre-processing operations''' to the recordings you use for the estimation of the noise covariance and to the recordings for which you are reconstructing the sources: frequency filters, resampling, re-referencing...

 * '''Use single trials''': If you have the single trials from which the average file was computed:
  * Create another condition in your subject and import the single trials in it.
  * Estimate the noise covariance on these single trials only (right-click on the new condition node > Noise covariance > Compute...). Use only the pre-stimulus baseline.
  * Right-click on the new noise covariance file > Noise covariance > Apply to all conditions. This copies the noise covariance file to the conditions of your averaged files. Alternatively, use the Copy/Paste menus or shortcuts.
 * '''Use a real baseline''': If you have recorded some resting periods during the experiment, or some empty-room measurements, you can use them exactly as in the previous case:
  * Import the blocks of raw recordings in the database, in a new condition.
  * Compute the noise covariance using the entire time window (it is supposed to be noise everywhere).
  * Copy it to the other conditions.
 * '''Use the original continuous file''': Same thing as previously, but skipping the import step: you can compute the noise covariance directly from a RAW file.
  * Right-click on your subject node > Review RAW file (the management of continuous files in native format is detailed in an advanced tutorial: [[Tutorials/TutRawViewer|Review raw recordings and edit markers]]).
  * Right-click on the "Link to raw file" node > Noise covariance > Compute from recordings, then select the time window to use as a baseline (subject resting or empty room).
  * Copy it to the other conditions.
 * '''Import from another software''': If you have pre-processed your recordings with the MNE software, you might already have a nice noise covariance matrix available in a ''-cov.fif'' file. Just import it in the conditions of your averaged files (right-click on the condition > Noise covariance > Import from file).

'''Note for averaged files''': If you import or compute a noise covariance matrix based on a set of RAW recordings (not averaged), and then use it to estimate sources for averaged recordings, you may have to set manually the number of trials that were used to compute the average.
 * If you calculate the averages in Brainstorm, you don't have to worry about it: it is done automatically. This is only an issue for files that were averaged in another software and then imported in Brainstorm, because the number of averaged trials was probably not saved in the file.
 * To check whether this information is stored in the database: right-click on the averaged file for which you want to estimate the sources, menu ''File > View file contents''. If you see "nAvg = 1", the noise covariance matrix will not be used correctly on that file. To fix this problem (see the example below):
  * Right-click on the averaged file > File > Export to Matlab > "data"
  * In the Matlab command window, type: "data.nAvg = nTrials;" (replace nTrials with the actual number of trials from which the average was computed)
  * Right-click on the averaged file > File > Import from Matlab > "data"
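
For example, assuming the average imported from the other software was computed from 40 trials (a made-up number), the fix in the Matlab command window would look like this:

{{{
% After "File > Export to Matlab > data", the averaged file is available in the
% Matlab workspace as the structure "data".
nTrials = 40;          % hypothetical value: use the actual number of averaged trials
data.nAvg = nTrials;   % store the number of trials that were averaged
% Then bring the modified structure back with "File > Import from Matlab > data".
}}}
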
== Discussion ==
This matrix is very easy to calculate, and the Brainstorm interface offers a lot of flexibility in the choice of the files and time windows to process. The main difficulty with this noise covariance matrix is deciding what "noise" means. In your experiment, you want to use segments of recordings that contain only the noise of the sensors if possible, or at least segments that do not contain any of the brain signals of interest.

==== MEG ====
The MEG case is usually easier because we can have access to recordings that are real noise measurements: the MEG room just has to be empty. Record a few minutes right before bringing the subject into the MEG room, or after the experiment is done. If you acquire several runs successively, or even several subjects, you can assume that the state of the sensors did not change much. Therefore, you can re-use the same noise covariance matrix for several runs and subjects.

==== EEG ====
The EEG case is typically much more complicated. The noise level of the electrode recordings depends primarily on the quality of the connection with the skin, which varies a lot from one subject to another, and even during the acquisition of a single subject: the conductive gel or solution used on the electrodes tends to dry, and the electrode cap can move. Therefore, it is very important to use different channel files (hence different noise covariance matrices) for each subject, and possibly to split long recordings into different runs, each with its own noise covariance matrix.

==== Evoked responses ====
In the case of evoked response (event-related) studies, it can be a valid approach to use the pre-stimulation baseline to estimate the noise covariance. But keep in mind that in this case, everything present in the pre-stimulation baseline is going to be attenuated in the source reconstruction, noise and brain activity alike. Therefore, the stimuli have to be spaced far enough apart in time so that the response to one stimulus is not recorded in the "baseline" of the following one. For repetitive stimuli, randomized delays between stimuli can help avoid expectation effects in the baseline.

==== Resting state ====

==== Epilepsy ====

== Next ==
Next tutorial: [[Tutorials/TutSourceEstimation|source estimation]].