SNR

Dear All,

I am trying to use sLORETA on averaged (n=22) ECoG spikes with an ECoG BEM imported from OpenMEEG.

  1. I need to calculate the signal-to-noise ratio (SNR), both to estimate the regularization parameter and to characterize my EEG data. Since I am measuring the signal at a single time point (say, the peak of the spike), the signal can be taken as the corresponding column vector of DataMat.F. For the noise estimate, I used a 125-time-point prespike interval. I calculated the SNR in two ways: as the power ratio between signal and noise, and as the ratio of the traces of the covariance matrices computed over the signal (one time point) and the noise (125 time points, per Sylvain's advice). The two estimates are close, ~50 and ~70, respectively (a sketch of both follows the quote below). I would like to link the calculated SNR to the SNR implemented in BST and MNE, defined in MNE as:

“… SNR is the (power) signal-to-noise ratio of the whitened data.
Note: The definition of the signal-to-noise-ratio/λ² relationship given above works nicely for the whitened forward solution. In the un-whitened case, scaling with the trace ratio does not make sense, since the diagonal elements summed have, in general, different units of measure. For example, the MEG data are expressed in T or T/m, whereas the unit of EEG is Volts.”
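For reference, here is a minimal MATLAB sketch of the two estimates described in point 1 (the variable names iPeak and iBase are mine, not Brainstorm's):

    % Hypothetical sketch of the two SNR estimates above
    % F     : nChannels x nTime recordings (DataMat.F)
    % iPeak : column index of the spike peak
    % iBase : indices of the 125 prespike baseline samples
    s      = F(:, iPeak);                        % signal topography at one time point
    N      = F(:, iBase);                        % baseline (noise) segment
    Cnoise = cov(N');                            % noise covariance (channels x channels)
    snrPower = (s' * s) / mean(sum(N.^2, 1));    % power ratio: peak power vs mean noise power
    snrTrace = (s' * s) / trace(Cnoise);         % trace ratio (per Sylvain's suggestion)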

Since my SNR was calculated on the raw, unwhitened data, I would like to see how to generate a DataMat.F for the whitened measured data, using the same covariance matrix computed on the 125 baseline time points.
I could run the code in debug mode; please indicate where the whitened data is generated in BST.

  2. Related: I need to generate a measure of goodness of fit for the sLORETA sources, and since I am using patient data, I can calculate the residual variance after generating a simulated EEG trace in BST (sources x gain); see the sketch after this paragraph. The issue is that the EEG traces generated from the sLORETA solutions are several orders of magnitude too large in amplitude (range -7000 to +7000 microV), or are displayed with no unit scale on the y axis, apparently at random, whereas my raw data is displayed at -500 to +500 microV. Since lambda affects the calculated EEG amplitude, I tried a range of SNRs, from 0.07 to 700; strangely, except for SNR 0.07 (which is not close to the right SNR), and including the default of 3, all sLORETA-derived EEG traces are in the thousands of microvolts, which cannot be right. Using wMNE at SNR 70 gives amplitudes close to the raw data. I also tried different noise covariance regularization factors, from 0.001 to 1; these do not seem to affect the data much in my case. How should I get the calculated EEG traces/DataMat.F to approximate the amplitudes of the raw data?
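
A minimal sketch of the residual/explained variance computation I have in mind (Fsim is a hypothetical name for the simulated EEG, i.e., gain matrix times source time series):

    % Hypothetical sketch: residual/explained variance at selected time points
    % F    : measured EEG, nChannels x nTime (DataMat.F)
    % Fsim : simulated EEG (gain matrix * source time series), same size
    % iSel : samples of interest (e.g., mid-upswing and peak of the spike)
    res = F(:, iSel) - Fsim(:, iSel);
    RV  = sum(res(:).^2) / sum(sum(F(:, iSel).^2));   % residual variance
    EV  = 1 - RV;                                     % explained variance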

Thank you, and excuse my verbosity,

Octavian.

Related to point 2 above: I assume the uniform way BST calculates the simulated EEG from the sources produces traces of different scales/orders of magnitude between wMNE and sLORETA.
In any case, calculating the explained variance (1 - residual variance) between the measured and simulated traces using wMNE makes sense, as the traces are of the same magnitude. If I choose no noise model (identity matrix), the explained variance in my particular scenario reaches ~95%, depending on the SNR/regularization used; this is OK and expected. While there can be an infinity of source distributions with the same topography, so a high explained variance does not guarantee a good MNE solution, a low explained variance would nevertheless indicate a bad solution. This seems to be the case when I use wMNE with a full or diagonal noise covariance calculated on the prespike baseline: the highest explained variance calculated at the same time points (mid-upswing or peak of the spike) is ~50%, after trying a range of SNRs. This suggests that the goodness of fit of a wMNE solution calculated with a noise model should be measured not against the original measured data, but against the whitened measured data. I do not think BST calculates this per se. How could I generate the whitened measured data? I need the C^(-1/2) whitener to multiply with DataMat.F (a sketch of one way to build it follows below). Can I use part of the bst_wmne code for this? Please advise.
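
For concreteness, this is the kind of whitener construction I mean (a sketch only, assuming a full-rank or near-full-rank covariance; bst_wmne presumably handles rank deficiency more carefully):

    % Hypothetical sketch: build a C^(-1/2) whitener from the baseline covariance
    % Cnoise : nChannels x nChannels noise covariance (125 baseline samples)
    [V, D] = eig(Cnoise);                     % Cnoise = V * D * V'
    d      = diag(D);
    keep   = d > max(d) * 1e-10;              % drop near-zero eigenvalues (rank deficiency)
    W      = V(:, keep) * diag(1 ./ sqrt(d(keep))) * V(:, keep)';   % C^(-1/2)
    Fwhite = W * F;                           % whitened measured data (F = DataMat.F)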

Octavian.

Dear Octavian,

I would like to link the calculated SNR to the SNR implemented in BST and MNE.

Brainstorm and MNE do not calculate the SNR of the signal; they use a default fixed value (SNR = 3), which has been shown empirically to give stable results in many cases.
You are free to redefine this value for your specific case.

please indicate where the whitened data is generated in BST

The whitened data is never explicitly calculated. As you can see in function bst_wmne.m, the whitening operator is a linear operator (matrix W) that is multiplied to the right of the inverse operator (matrix Kernel).
Hence, the recordings are whitened only when the sources are calculated (ImageGridAmp = Kernel * W * DataMat.F).
The whitening operator is saved in the results files in the .Whitener field.
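So, to whiten the recordings yourself, you can load the two files and apply the saved operator (the file names below are just placeholders):

    % Sketch: apply the whitener saved in the results file
    DataMat    = load('data_average_spike.mat');   % recordings file, contains F
    ResultsMat = load('results_wMNE.mat');         % inverse solution, contains Whitener
    Fwhite     = ResultsMat.Whitener * DataMat.F;  % whitened recordings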

Cheers,
Francois

Dear Francois,

  1. Thank you for the answer. In my particular application, I need to derive a measure of goodness of fit, such as the explained variance (1 - residual variance), and this depends on the choice of regularization/lambda. Since the simulated data in BST represents whitened, regularized predicted data, I think measuring the goodness of fit against the whitened measured data, instead of the raw measured data, would make more sense; that is the reason I am wondering how to derive the former. If this is not correct, please let me know.

  2. Regardless, even if I use the raw measured data for the comparison, the wMNE-derived simulated EEG can be compared directly, giving residual variances of 0.08-0.4 depending on the regularization/noise model used. However, the sLORETA simulated EEG is on a different scale from the raw data, and the calculated RVs do not make any sense. Is there a workaround for this?

Thank you,

Octavian.

Hi Octavian,

I am forwarding the response from Rey Ramirez, who wrote the wMNE/dSPM/sLORETA functions in Brainstorm:
“Goodness-of-fit is always computed with the unwhitened data. SNR can be computed based on the whitened data. dSPM and sLORETA “estimates” will not explain the data in the same way as MNE. Actually, both of them are sort of MNE solutions that have been improved by doing a “depth bias compensation” after the MNE is computed. And it is this adjustment (think of it as additional regularization) that makes them not be feasible solutions, as opposed to MNE and MCE and SBL which are feasible solutions.”

“Both dSPM and sLORETA are not feasible inverse current solutions like for example MNE or MCE. Think of them more as inverse estimates of more abstract activity or as statistical maps. The same applies to beamformer maps of activity. Because of this you cannot simply multiply them with the leadfield matrix to get a forward field. So for modelling forward solutions, the MNE estimates are better. For dSPM and sLORETA, you could scale the simulated signals (i.e., compute the scalar number that optimally makes the “forward” vector norm match the norm of the measurement vector). But that will not produce a very good goodness-of-fit anyway, and is not supposed to.”
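
For illustration, the rescaling Rey describes could look like this (a sketch only; b and bSim are hypothetical names for the measured and simulated topographies):

    % Hypothetical sketch: rescale an sLORETA "forward" field to the measurements
    % b    : measured topography at one sample (nChannels x 1)
    % bSim : simulated field (gain matrix * sLORETA map), same size
    alpha   = norm(b) / norm(bSim);        % scalar matching the vector norms, per Rey
    % or the least-squares optimal scalar:
    % alpha = (bSim' * b) / (bSim' * bSim);
    bScaled = alpha * bSim;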

Therefore, I removed the menu “Model evaluation” for dSPM and sLORETA source maps.

Cheers,
Francois