EEG forward and inverse simulations

Now that I have a free moment I am going back and running some EEG simulations to make sure I know the ins and outs of the whole process. I started by using the MRI of one of my subjects to create a cortical surface from the FreeSurfer pial surface files (full and reduced to 2503 vertices) and a BEM with particular parameters. I next imported a generic electrode location file and projected it to the head surface. Then I computed the forward solution and manually exported the lead field matrix (LFM) and the dipole orientations and locations.

In Matlab I created a “time course”, or source activation waveform, for each dipole (2503 vertices). Each activation simply consisted of a 1 for each of the three dipole components, weighted by the dipole’s orientation, so that at each time frame only a single dipole from the set was active. In future simulations I will create more interesting source activations such as sines or random values, but initially I want to simply look at the so-called “point-spread” of the inverse solution for individual dipoles.
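A minimal Matlab sketch of this construction, assuming the three components of each dipole occupy consecutive rows of the source matrix and that the exported orientations are available as a 3 x 2503 array (variable names here are illustrative, not Brainstorm's):

% Build the augmented source activation matrix: one "time frame" per dipole,
% with a unit activation along that dipole's orientation.
% Assumes Orient is 3 x 2503 (exported dipole orientations) and that the
% three components of dipole k occupy rows 3*(k-1)+1 : 3*k.
nSrc = 2503;
S = zeros(3 * nSrc, nSrc);          % 7509 components x 2503 "time frames"
for k = 1:nSrc
    rows = 3 * (k - 1) + (1:3);     % x/y/z rows of dipole k
    S(rows, k) = Orient(:, k);      % unit dipole weighted by its orientation
end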

So I have an augmented source activation matrix (7509 x 2503), which I multiplied by the LFM (256 x 7509) to get a simulated scalp EEG (256 x 2503). I then imported this EEG into BST and ran the inverse process, trying all three methods (dSPM, MN, sLORETA). Finally I looked at some of the inverse solutions on the cortical surface. I am confused by what I saw: only in a small fraction of time frames do I see anything on the cortical surface. I would have expected to see activation near the original vertex for each of the 2503 vertex dipoles.
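For reference, the forward projection itself is just the single matrix product described above (again with illustrative variable names, where G is the exported 256 x 7509 lead field matrix and S the source matrix from the previous sketch):

% Simulated scalp EEG: lead field (256 x 7509) times source matrix (7509 x 2503).
EEG = G * S;          % 256 channels x 2503 "time frames"
% Each column is the scalp topography produced by one unit dipole.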

Can you guess why I am not seeing what I expect? All the simulated dipoles should have the same activation strength.

-Jeff

Hi Jeff,

There are many parameters that can affect your model, including the noise covariance matrix and all the options of the minimum norm model (in “Expert mode”).
Have you paid attention to those parameters?
Does the simulated EEG look correct? (you should see clear effects on the EEG over the simulated areas)

One comment about your method:
You are using an unconstrained model, right? This means that you have 3 * 2503 dipoles (not 2503 dipoles with “three components”).
If you are using a model like that, you could even activate each dipole independently. The size of your source matrix (ImageGridAmp) would be: 7509 dipoles x 7509 "time points"
If not, use a constrained model instead (2503 dipoles x 2503 “time points”).
See how to convert your Gain matrix to a constrained orientation in this thread: http://neuroimage.usc.edu/forums/showthread.php?918
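For readers without access to the linked thread, here is a hedged sketch of the kind of conversion it describes, assuming the unconstrained Gain matrix stores the three components of each dipole in consecutive columns and that Orient holds the dipole orientations (3 x nSrc); check the thread and your exported files for the actual conventions:

% Collapse an unconstrained gain matrix (3 columns per vertex) to a
% constrained one by projecting each column triplet onto the dipole orientation.
nSrc = size(Gain, 2) / 3;
GainConstr = zeros(size(Gain, 1), nSrc);
for k = 1:nSrc
    cols = 3 * (k - 1) + (1:3);
    GainConstr(:, k) = Gain(:, cols) * Orient(:, k);
end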

If you really don’t understand what is happening, zip your protocol (File > Export protocol) and send it to me.

Cheers,
Francois

Hi Jeff,

As Francois suggests, how does the forward field look? Use “view topography” and see if each time slice has a nice dipolar pattern across the sensor array. If not, something is wrong elsewhere, and it won’t matter what inverse method you use.

If you’re using a cortically constrained source (which it sounds like you are), then I would expect that some dipole locations are pointing in inconvenient directions, yielding poor dipolar patterns on your EEG array. Others, however, should look great. As Francois suggests, try opening up the orientation to be unconstrained, so that at each location you effectively have an x-, y-, and z-oriented dipole, again looking for nice dipolar patterns in the array.

dSPM and sLORETA are significance scalings of the minimum norm. In other words, dSPM and sLORETA first run the MN implicitly, then calculate the relative scaling of each point with respect to a baseline. Your simulated data have no noise baseline, apparently, so I’m not sure what result you would get. Since you want to check point-spread functions in noiseless data, you might just stick with the MN for now. But, repeating myself, if the EEG spatial pattern doesn’t look dipolar, then the MN won’t be a meaningful interpretation of point-spread.

– John

Francois and John,
I looked at the scalp topography of the dipoles and they all look OK once I got the display settings right (no thresholding). I think I have the constrained/unconstrained details correct, but from now on I will use a number of time points that differs from the number of dipoles, to avoid confusion. I then started looking at the BST forward and inverse solutions and comparing them to the original source activations and simulated EEG. What concerns me most now is the comparison between the simulated EEG (which could just as well have been real data) and the EEG reconstructed by BST after the inverse and then the forward solution are applied. They are not as well matched as I would have expected. I am not at the right computer to create a picture to send you, but will try later today.

In general, min-norm solutions should be able to reconstruct the EEG almost exactly, since in this case 2503 parameters are being used to fit 256 scalp EEG potentials. The side constraints simply guide you to a particular solution out of the infinitely many possible ones. So I am puzzled as to why, in my simulated example, the reconstructed EEG is not as good as I would expect.

Looking next at the inverse solution, I understand that the result for each of my individual dipoles may be smeared to a certain extent, because the min-norm constraint is not predisposed to produce focal sources. But it does not seem right that some of the fits to single dipoles are so diffuse as to be virtually undetectable.

Regarding manipulation of the inverse solution parameters, there are not many that are relevant since I am using simulated noise-free data. I did try all three orientation constraints and turned off depth weighting, but none of that helped reduce the RMS error between the input EEG and the reconstructed EEG.

Do you know of any references I could look at that actually go into details on the concerns I am bringing up here?

Here is one example showing, for a single dipole, the scalp potential on the vertical axis and channel # on the horizontal axis. The red line is the original EEG, the green line is the reconstructed EEG after inverse and forward processes applied, and the blue line is the difference.

Forward model is 2503 constrained dipoles in default 3-layer BEM. Inverse based on MN with default parameters, zero noise covariance.

Am I mistaken to expect a better agreement between the original EEG and the reconstructed EEG?
-Jeff

Hi Jeff,

You are trying to use a classic “imaging resolution” approach to this kind of data, which in general does not work. To point to one of our papers:

S. Baillet, J.C. Mosher, R.M. Leahy, "Electromagnetic Brain Mapping," IEEE Signal Processing Magazine 18(6): 14-30 (p. 18), Nov 2001. http://dx.doi.org/10.1109/79.962275 (see also http://neuroimage.usc.edu/brainstorm/Pub)

Take Equation (23) as an example. Let’s eliminate considerations of “W” by setting it to the identity matrix, which, as you point out, is just one of the many flavors of minimum norm (depth weighting, LORETA, LAURA, etc.). Thus we are estimating S, the source time series, to which you want to reapply A, the lead field matrix, in order to reconstruct the original data.

In your example, A is the 256 x 2503 lead field matrix, for 256 channels of EEG data (256 lead fields) evaluated at 2,503 fixed-orientation dipolar points in the brain volume. In Equation (23) we have set the noise covariance matrix to identity, with the assumption that the user has already pre-whitened the modeling equations; thus the remaining “noise” parameter is “lambda”, also known as the Tikhonov regularization parameter.

In noiseless data (which do not exist, but let’s pretend), we set lambda to zero, and our estimate of the 2,503 time series is then

S_e^T = A^T (A A^T)^{-1} M

where M are your noiseless measurements. Thus, in a perfect world,

D_e = A S_e^T = A A^T (A A^T)^{-1} M

and therefore D_e = M, which is what you are expecting.

Note, however, that (1) it would not matter at all what A you used: you would always get a perfect estimate, and therefore we have not truly tested the correctness of the calculations in A, which was your original goal. This fault in testing is known as the “inverse crime”, where you use the same model matrix in both the forward and inverse procedures. You could just as easily use random matrices for A. To avoid the inverse crime, you need an independently derived forward model A_2 for the reconstruction,

D_e = A_2 S_e^T = A_2 A^T (A A^T)^{-1} M.
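Point (1) is easy to check numerically with random matrices (the snippet below is illustrative only): with lambda = 0 and the same A used for both simulation and estimation, the reconstruction is exact for any full row-rank A, but it degrades as soon as an independent A_2 is used.

nChan = 256; nSrc = 2503;
A  = randn(nChan, nSrc);          % "forward model" used for both directions
A2 = randn(nChan, nSrc);          % independently derived forward model
M  = A * randn(nSrc, 10);         % noiseless "measurements", 10 time points
Se = A' * ((A * A') \ M);         % minimum-norm estimate with lambda = 0
norm(A  * Se - M, 'fro')          % ~ machine precision: the "inverse crime"
norm(A2 * Se - M, 'fro')          % large: no longer a perfect reconstruction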

(2) This reconstruction requires the inversion (A A^T)^{-1} in order to form the initial time series estimate. Look at the eigenvalues of this “Gramian” A A^T in your lead field model: of the 256 eigenvalues, the last several will be extremely small, many orders of magnitude smaller than is physically plausible, and they are therefore generally rejected in the estimation process. We control this range of rejections through the noise covariance, whitening, and signal-to-noise ratio parameters. If, for example, you set our SNR parameter to 1e15, turn off whitening, and turn off regularization, you should find most of your sources passing undisturbed through the estimation, although we may still eliminate some lead fields simply because we detect them as unstable. But given note (1) above, this is overall not a good test anyway.
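To see this directly on an exported lead field (here called G, 256 x nSrc; the name is illustrative), one can plot the eigenvalue spectrum of the Gramian:

ev = sort(eig(G * G'), 'descend');    % 256 eigenvalues of the Gramian
semilogy(ev, '.-'); grid on;
xlabel('Eigenvalue index');
ylabel('Eigenvalue of G*G''');
% The trailing eigenvalues are typically many orders of magnitude below
% the leading ones; these are the components the regularization rejects.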

In our Equation (23), we control the instability of the Gramian through the parameter “lambda”, which in a Bayesian framework is tied to the SNR, or which more generally may simply be considered a Tikhonov regularizer. It automatically suppresses the tiny eigenvalues, so that the estimate is

S_e^T = A^T (A A^T + λ I)^{-1} M

and thus

D_e = A S_e^T = A A^T (A A^T + λ I)^{-1} M

The “imaging resolution kernel” is now A A^T (A A^T + λ I)^{-1}, which in general cannot reconstruct all EEG channels.

The theoretical issue is that the Gramian in quasistatic electromagnetics is fundamentally limited in the number of significant eigenvalues, i.e., you will never be able to reconstruct all 256 lead fields in your case in any practical sense.
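As a rough sketch (illustrative names again, and an arbitrary choice of λ), the regularized kernel and its imperfect reconstruction can be checked as follows:

lam = 0.1 * trace(G * G') / size(G, 1);              % arbitrary regularization level
Se  = G' * ((G * G' + lam * eye(size(G, 1))) \ M);   % regularized MN estimate
De  = G * Se;                                        % reconstructed EEG
relErr = norm(De - M, 'fro') / norm(M, 'fro')        % > 0: reconstruction no longer exact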

– John

John,

Thanks for the detailed response, which I will try to digest this weekend. For now I will trust the implementation and proceed with my other explorations.

-Jeff