Several issues when trying to compute maximal source with sLORETA on EEG:
How can I find the coordinates in mm of the maximal source vertex indicated in ImageGridAmp? I think the cortex .mat file has vertex coordinates in SCS meters.
After importing the sMri in Matlab, if I use P_mri = cs_scs2mri(sMri, P_scs' .* 1000), it states that P_scs is not defined…
I am interested in a numerical solution, because clicking on the cortex does not highlight the maximum itself, only adjacent vertices.
If using constrained sLORETA, the vertex number and source amplitude of the maximal source (absolute value, independent of sign) as imaged on the individual patient's cortex and as taken from ImageGridAmp correspond frame to frame, as expected. This does not seem to be the case with the unconstrained version: both the maximal source amplitude and the vertex number (= ImageGridAmp row number / 3, +/- 1) fail to match the maximal source seen on the cortex, even with the colorbar source threshold increased so that a point source becomes visible.
I verified several time frames, and also made sure that the time frames of the cortex display and of the table column correspond.
For EEG (which is sensitive to both radial and tangential sources, weighted towards the former), should I use the unconstrained or the loose version? As there is no ground truth, which one would you recommend?
Coordinates of the maximum (for a constrained source reconstruction):
Right-click on the source file > File > Export to Matlab > "sResults"
Right-click on the cortex file > File > Export to Matlab > "sCortex"
Right-click on the MRI file > File > Export to Matlab > "sMri"
t = 1;                                             % Time index of interest
[~, iMax] = max(abs(sResults.ImageGridAmp(:,t)));  % Vertex with maximal absolute amplitude (independent of sign) at time t
p_scs = sCortex.Vertices(iMax,:);                  % Position of that vertex in SCS coordinates (meters)
p_mri = cs_scs2mri(sMri, p_scs' .* 1000)';         % Position in MRI coordinates (millimeters); divide by sMri.Voxsize to get the position in voxels
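For completeness, the voxel conversion mentioned in the last comment can be written as follows (a minimal sketch, assuming sMri.Voxsize holds the voxel dimensions in millimeters):
p_vox = p_mri ./ sMri.Voxsize;   % Position in voxel coordinates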
In unconstrained models (free or loose orientations), we keep one source value per orientation, so the field ImageGridAmp has [3*nVertices] rows. It is organized this way: [source1x, source1y, source1z, source2x, source2y, …]. What is displayed in the 3D figures is the norm of the values for the 3 orientations.
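As a sketch of how to locate the displayed maximum in that case (assuming the row layout described above, and sResults/sCortex exported to Matlab as before):
t = 1;                                  % Time index of interest
amp = sResults.ImageGridAmp(:,t);       % [3*nVertices x 1] signed amplitudes
amp3 = reshape(amp, 3, []);             % One column per vertex: [x; y; z]
normAmp = sqrt(sum(amp3.^2, 1));        % Norm over the 3 orientations, as displayed in the 3D figures
[maxVal, iVertex] = max(normAmp);       % Index of the vertex with the maximal displayed value
p_scs = sCortex.Vertices(iVertex,:);    % Its position in SCS coordinates (meters)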
Unconstrained models (free or loose) give more freedom to the model, and should be used when the results of the constrained models are definitely not capturing the effect you want to study. They are better indicated when using a standard anatomy, or when studying deeper structures, as the orientation constraint is then invalid. The problem is that those results are much more complicated to represent and analyze: you have three values at each point of the surface instead of one, and not all the features of Brainstorm are compatible with those source models yet. We will put some effort into making those unconstrained models more accessible in the next few months.
Thank you for your prompt and focused assistance. I am now clear on how to get the coordinates of maximal sources and find the corresponding vertices in FreeSurfer native surface files. I am going to post on this in a separate thread.
Issues remaining:
Since the solution norm amplitude is always positive (sqrt(x^2 + y^2 + z^2), where x, y, z are included for each vertex/time point in the ImageGridAmp file as you noted), I assume the orientation of the norm vector depends on the signs of x, y, z in the CTF coordinate system Brainstorm uses. If this is true, is there a way to visualize the norm orientation on the cortex?
I am not clear on why the solution units are pA.m (picoampere x meter); what does this represent?
Many papers present the F-distribution source probabilities specifically for sLORETA instead of absolute source amplitudes; is there a way to derive this from Brainstorm data?
See: Plummer C, Wagner M, Fuchs M, Vogrin S, Litewka L, Farish S, Bailey C, Harvey AS, Cook MJ. Clinical utility of distributed source modelling of interictal scalp EEG in focal epilepsy. Clin Neurophysiol. 2010 Oct;121(10):1726-39.
Can we use data from the noise covariance calculation for SNR calculations, and how?
The depth weighting option should be ignored with sLORETA and used only with wMNE, since the former claims zero dipole localization error; is this true?
Here are a few elements in response to your questions:
Since the solution norm amplitude is always positive (sqrt(x^2 + y^2 + z^2), where x, y, z are included for each vertex/time point in the ImageGridAmp file as you noted), I assume the orientation of the norm vector depends on the signs of x, y, z in the CTF coordinate system Brainstorm uses. If this is true, is there a way to visualize the norm orientation on the cortex?
Please note that the actual source currents are signed. It is only the display that can be shown in absolute values of the current amplitudes. This is indeed often preferable for visualization, as source orientation interacts with the sign of the current flow. You can switch between the two representations with a right-click over the figure window > Absolute Values. The default source orientation is the outward surface normal at each vertex. It is saved in the Results file under SourceOrientation. You can view the orientations separately in Matlab using the quiver3 command once you have created a surface patch from the Vertices and Faces of the cortical tessellation. This is all pretty advanced stuff: let us know if this is exactly what you are looking for before we help further.
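If that is indeed what you need, here is a minimal sketch of the quiver3 approach (assuming sResults and sCortex were exported to Matlab as above, and that SourceOrientation contains one [x y z] vector per vertex):
% Display the cortical surface as a semi-transparent patch
figure; hold on;
patch('Vertices', sCortex.Vertices, 'Faces', sCortex.Faces, ...
      'FaceColor', [.8 .8 .8], 'EdgeColor', 'none', 'FaceAlpha', .5);
% Overlay one arrow per vertex, along the default source orientation
quiver3(sCortex.Vertices(:,1), sCortex.Vertices(:,2), sCortex.Vertices(:,3), ...
        sResults.SourceOrientation(:,1), sResults.SourceOrientation(:,2), sResults.SourceOrientation(:,3));
axis equal; view(3);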
I am not clear on why the solution units are pA.m (picoampere x meter); what does this represent?
MEG and EEG source models are current dipoles, i.e. a current (A) flowing over a short distance (m), hence A.m, or pA.m for units better adapted to physiology.
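As a purely illustrative arithmetic example (not a physiological reference value): a current of 1 nA flowing across 1 mm of tissue corresponds to a dipole moment of 1 nA x 0.001 m = 1 pA.m.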
Many papers present the F-distribution source probabilities specifically for sLORETA instead of absolute source amplitudes; is there a way to derive this from Brainstorm data?
You can do that in Brainstorm using the provided sLORETA source model. You may also standardize minimum-norm source models using a z-transform with respect to a baseline, for instance (available in the Process tab). The resulting metric is indeed an F statistic.
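As a sketch of what that z-transform does (assuming ImageGridAmp is [nSources x nTime] and iBaseline contains hypothetical baseline sample indices; the actual process implementation may differ in details):
iBaseline = 1:100;                                       % Hypothetical baseline sample indices
mu    = mean(sResults.ImageGridAmp(:,iBaseline), 2);     % Baseline mean, per source
sigma = std(sResults.ImageGridAmp(:,iBaseline), 0, 2);   % Baseline standard deviation, per source
zMap  = bsxfun(@rdivide, bsxfun(@minus, sResults.ImageGridAmp, mu), sigma);  % z-score per source and time point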
Can we use data from the noise covariance calculation for SNR calculations, and how?
The trace of the noise covariance matrix will provide an estimate of the noise level. You can compute another covariance matrix over segments of signal where you have data = signal + noise and compute its trace. You can then compute the following ratio: data/noise = (signal + noise)/noise = 1 + SNR. Does that make sense?
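A minimal sketch of that computation, assuming noiseSegment and dataSegment are hypothetical [nChannels x nSamples] matrices for a noise-only segment and a signal+noise segment:
Cnoise = cov(noiseSegment');              % Noise covariance (cov expects samples in rows, hence the transpose)
Cdata  = cov(dataSegment');               % Covariance of the signal + noise segment
snr    = trace(Cdata)/trace(Cnoise) - 1;  % From data/noise = 1 + SNR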
The depth weighting option should be ignored with sLORETA and used only with wMNE, since the former claims zero dipole localization error; is this true?
Zero dipole localization error is a claim we haven't verified ourselves yet. It is true only when the actual source is a single dipole, which is never the case in real data.
Depth weighting and sLORETA: Indeed, you can observe that by default the depth weighting option is not selected for sLORETA, while it is for dSPM and wMNE (to see those options, click on the “Expert mode” button in the source modeling window).
How do I use the data (ImageGridAmp, data and noise covariance matrices) to express sLORETA results as F-distribution values instead of pA.m in BST (correct for the noise variance at each electrode)? Can I use the Process tab for this?
I generated a noise covariance matrix on an average spike imported from Cartool. I know it was the result of averaging 20 individual spikes, but I do not have the original spikes available. Should I adjust the noise covariance matrix C by L=20? If yes, how can I do it in BST?
The values that you see, or that you read from the ImageGridAmp field in the file, *are* the sLORETA results. They are never displayed as pA.m.
If you calculate the noise covariance from the same data for which you're doing the source reconstruction, you don't have to change anything: Brainstorm doesn't know it is an average, and it's all fine this way.
Note: I’m not sure that computing a noise covariance matrix from the time series of an averaged spike is really a good thing… If you cannot get a better estimation of the noise, maybe you should consider using an identity matrix (no information about the sensors noise).
Estimating noise in EEG is an issue, as we have both brain noise and measurement noise; I am not sure the latter conforms to the assumptions of Gaussian distribution and non-correlation.
As to the issue of noise covariance estimation on averaged EEG spikes or ERPs: it looks like the noise covariance matrix calculated on a group of concatenated individual spikes and applied to an average spike, and the one calculated directly on the averaged spike, are not identical. This is why it is important to know if/how to adjust the latter. From the MNE website: "In the MNE software the noise-covariance matrix is stored as the one applying to raw data. To reflect the decrease of noise due to averaging, this matrix, C0, is scaled by the number of averages, L, i.e., C = C0/L."
Outside of the EEG issue, please clarify whether any adjustments have to be made to the noise covariance matrix (say, if working on a MEG average spike).
As to the SNR calculation, I quote the following from earlier in this thread by Sylvain, to set the context of my next question: "The trace of the noise covariance matrix will provide an estimate of the noise level. You can compute another covariance matrix over segments of signal where you have data = signal + noise and compute its trace. You can then compute the following ratio: data/noise = (signal + noise)/noise = 1 + SNR."
If I have n electrodes and an n x t data matrix (t = time), the noise covariance matrix will be n x n. If I want to calculate the SNR at a time tj (corresponding, say, to the peak of a spike), the data will be an n x 1 column vector at tj.
What is the trace of this column vector, and how do I calculate it?
When computing the inverse solution, the default "SNR" (power signal-to-noise ratio of the whitened data) is 3. I am not sure how this SNR relates to, and affects, the SNR I want to calculate. Please clarify.
Also, related to 1): choosing the noise covariance matrix as identity vs. calculated from data would have a major impact on the SNR calculation, as its trace is the denominator. That is another reason to make sure I make the right choice on 1).
Thank you,
Thank you for all these great questions, Octavian:
Re: 1 - I think the weighting by the actual number of samples (trials) is taken care of in Brainstorm, but Francois will confirm.
Re: 2 - This is an approximation of how you can estimate SNR with EEG, which has conceptual issues because instrumental noise is not easily captured, unless one considers that spontaneous brain activity is ‘brain noise’, which is hard to defend.
To be pragmatic: the source models in Brainstorm are not overly sensitive to the covariance model of the noise and the SNR setting, within a reasonable range. More specifically, I don't anticipate that you would see a major difference in source images at the peak of a spike by optimizing the covariance of the noise statistics. Using the covariance of the ongoing signal, computed from data segments without too many visible spikes in the traces, is good enough in our experience. You can convince yourself on your data by computing source models using identity vs. data-estimated noise covariance models.
The trace for an n x 1 column vector x would be taken on its outer product x*x', which is the sum of squares of its entries. The SNR value of 3 could indeed be adjusted, although we never do it in practice, again because the source estimators we propose are very robust to changes in their parameter settings: we claim this is important, for the good reason that estimating SNR is not trivial in electrophysiology, as this discussion demonstrates. Qualitatively, increasing this SNR parameter on 'clean' data would possibly yield source maps of better spatial resolution, but if the SNR is set too high, the source maps become extremely sensitive to the slightest amount of noise.
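A hedged sketch of that single-time-point computation (dataMatrix and tj are hypothetical names; Cnoise is the n x n noise covariance):
x = dataMatrix(:, tj);                     % n x 1 data vector at the spike peak
snr_tj = trace(x*x') / trace(Cnoise) - 1;  % trace(x*x') equals sum(x.^2), the sum of squares of the entries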
So in practice, we recommend that you try to estimate the noise covariance from the data, use an identity-matrix model if its estimation is not straightforward, and keep the SNR parameter at 3, unless you have good reasons to think the noise levels are minimal (as in data from a very large number of trials with strong amplitude modulations above baseline).
The source estimation functions automatically scale the noise covariance matrix by the appropriate factor.
a) If the noise covariance is calculated from the same recordings for which you are estimating the sources: there is no scaling to perform. It doesn't matter whether those files are averages or single trials, or whether they are filtered or not.
b) If you import single trials and average them in Brainstorm: the average file will have a field nAvg set to the initial number of trials. If you compute the noise covariance from the individual trials and then estimate the sources for the average, the noise covariance will be divided by nAvg (see the sketch after this list).
c) A pragmatic note: we implemented this way of proceeding to be compatible with the MNE manual and software, but the minimum norm is not really sensitive to this scaling in practice. Try multiplying the values by 100, calculating the sources again, and observing the difference.
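For reference, a minimal sketch of the scaling described in b), following the MNE convention quoted earlier (C = C0/L); Brainstorm applies this internally, so it is not something you need to do by hand:
nAvg = 20;        % Number of trials in the average (L in the MNE manual)
C = C0 / nAvg;    % C0: noise covariance estimated from the single trials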