Dear All,

I am trying to use sLORETA on averaged (n = 22) ECoG spikes, with an ECoG BEM imported from OpenMEEG.

- I need to calculate the signal-to-noise ratio (SNR), both for the regularization parameter estimation and to characterize my EEG data. Since I am measuring the signal at one time point (say, the peak of the spike), the signal can be taken from the corresponding column vector of DataMat.F. For the noise estimate, I used a 125-time-point period from the pre-spike interval. I calculated the SNR in two ways: as the power ratio between signal and noise, and as the ratio of the traces of the covariance matrices of the signal (one time point) and the noise (125 time points, per Sylvain's advice). The two estimates are close, ~50 and ~70, respectively. I would like to relate this calculated SNR to the SNR implemented in Brainstorm and MNE, defined in MNE as:
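For concreteness, the two SNR estimates described above can be sketched in NumPy as follows (random data and hypothetical array shapes standing in for DataMat.F; this is not Brainstorm code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for DataMat.F (channels x time), not Brainstorm code
n_chan, n_noise_samples = 64, 125
noise = rng.standard_normal((n_chan, n_noise_samples))  # pre-spike baseline
signal = 8.0 * rng.standard_normal((n_chan, 1))         # single spike-peak column

# Method 1: mean power ratio between the peak sample and the baseline
snr_power = np.mean(signal ** 2) / np.mean(noise ** 2)

# Method 2: ratio of the traces of the two sample covariance matrices
C_signal = signal @ signal.T / signal.shape[1]
C_noise = noise @ noise.T / noise.shape[1]
snr_trace = np.trace(C_signal) / np.trace(C_noise)
```

Note that with the 1/T normalization used here, the two definitions reduce algebraically to the same quantity, so a discrepancy like ~50 vs. ~70 presumably comes from differences in how the signal window or the covariance is normalized.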

“… SNR is the (power) signal-to-noise ratio of the whitened data.

Note

The definition of the signal to noise-ratio/ relationship given above works nicely for the whitened forward solution. In the un-whitened case scaling with the trace ratio does not make sense, since the diagonal elements summed have, in general, different units of measure. For example, the MEG data are expressed in T or T/m whereas the unit of EEG is Volts.”

Since my SNR was calculated on the raw, unwhitened data, I would like to know how to generate DataMat.F for the whitened measured data, using the same noise covariance matrix computed from the 125 baseline time points.
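My understanding of the whitening step, sketched in NumPy with random stand-in data (hypothetical shapes, no covariance regularization, and not the actual Brainstorm implementation):

```python
import numpy as np

rng = np.random.default_rng(1)
n_chan, n_noise_samples, n_times = 32, 125, 200

noise = rng.standard_normal((n_chan, n_noise_samples))  # pre-spike baseline
F = rng.standard_normal((n_chan, n_times))              # stand-in for DataMat.F

# Noise covariance estimated from the baseline
C = noise @ noise.T / n_noise_samples

# Whitener W = C^(-1/2) from the eigendecomposition of C
eigvals, eigvecs = np.linalg.eigh(C)
W = eigvecs @ np.diag(1.0 / np.sqrt(eigvals)) @ eigvecs.T

F_white = W @ F  # whitened data

# Sanity check: the whitened baseline noise has identity covariance
C_white = (W @ noise) @ (W @ noise).T / n_noise_samples
```

If this matches what Brainstorm does internally, the trace-ratio SNR computed on F_white should then be comparable across channel types, as the MNE note quoted above suggests.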

I can run the code in debug mode; could you please indicate where the whitened data is generated in Brainstorm?

- Relatedly, I need a measure of goodness of fit for the sLORETA sources. Since I am using patient data, I can calculate the residual variance after generating a simulated EEG trace in Brainstorm (gain x sources). The issue is that the EEG traces computed from the sLORETA solutions are several orders of magnitude too high in amplitude (range -7000 to +7000 microV), or are displayed with no unit scale on the y axis, apparently at random, whereas my raw data displays at -500 to +500 microV. Since lambda affects the calculated EEG amplitude, I tried a range of SNR values from 0.07 to 700; strangely, except for SNR = 0.07 (which is far from the correct SNR), all sLORETA-derived EEG traces, including with the default SNR of 3, are in the thousands of microvolts, which cannot be right. Using wMNE at SNR = 70 gives amplitudes close to the raw data. I also tried different noise covariance regularization factors, from 0.001 to 1; these do not seem to affect the results much in my case. How can I get the calculated EEG traces/DataMat.F to approximate the amplitudes of the raw data?
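For reference, the residual-variance measure I have in mind, sketched in NumPy with a random gain matrix and a plain least-squares estimate standing in for the inverse solution (all names and shapes hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)
n_chan, n_src = 32, 500

# Hypothetical gain matrix and a sparse "true" source configuration
G = rng.standard_normal((n_chan, n_src))
s_true = np.zeros(n_src)
s_true[10] = 1.0
d = G @ s_true + 0.05 * rng.standard_normal(n_chan)  # measured EEG, one time point

# Minimum-norm least-squares estimate standing in for an inverse solution
s_hat, *_ = np.linalg.lstsq(G, d, rcond=None)
d_hat = G @ s_hat  # predicted EEG trace (gain x sources)

# Residual variance: fraction of measured power left unexplained
rv = np.sum((d - d_hat) ** 2) / np.sum(d ** 2)
gof = 1.0 - rv
```

Because this toy system is underdetermined, the least-squares fit explains the data almost exactly (rv near zero); with a regularized estimate such as wMNE or sLORETA the residual variance would be nonzero, which is the quantity I want to report.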

Thank you, and please excuse the verbosity,

Octavian.