Tutorial 22: Source estimation

[UNDER CONSTRUCTION]

Authors: Francois Tadel, Elizabeth Bock, Rey R Ramirez, John C Mosher, Richard Leahy, Sylvain Baillet

You have in your database a forward model that explains how the cortical sources determine the values recorded on the sensors. This is useful for simulations, but what we need now is the inverse information: how to estimate the sources when we have the recordings. This tutorial introduces the tools available in Brainstorm for solving this inverse problem.

Ill-posed problem

Our goal is to estimate the activity of the 45,000 dipoles described by our forward model. However, we only have a few hundred spatial measurements as input (the number of sensors). This inverse problem is ill-posed: an infinite number of combinations of source activity can generate exactly the same sensor topography. Inverting the forward problem directly is impossible, unless we add strong priors to our model.
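To make this more concrete, the problem can be summarized with the following simplified notation (ours, not necessarily the one used in the Brainstorm code; the noise covariance and depth weighting used by the actual implementation are omitted here):

    M = G * J + noise
    M: [Nsensors x Ntime] recordings,  G: [Nsensors x Nsources] forward model,  J: [Nsources x Ntime] source amplitudes

    Nsensors (a few hundred) << Nsources (45,000), so G has a huge null space:
    for any solution J, every J + J0 with G * J0 = 0 explains the recordings equally well.

    The minimum-norm prior selects, among all these solutions, the one with the smallest L2 norm:
    J_mn = argmin( |M - G*J|^2 + lambda*|J|^2 ) = G' * inv(G*G' + lambda*I) * M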

Wikipedia says: "Inverse problems are some of the most important and well-studied mathematical problems in science and mathematics because they tell us about parameters that we cannot directly observe. They have wide application in optics, radar, acoustics, communication theory, signal processing, medical imaging, computer vision, geophysics, oceanography, astronomy, remote sensing, natural language processing, machine learning, nondestructive testing, and many other fields."

Many solutions have been proposed in the literature, based on different assumptions about the way the brain works and on the amount of information we already have about the effects we are studying. Among the many methods available, two classes of inverse models have been widely used in MEG/EEG source imaging in recent years: minimum-norm solutions and beamformers.

Both approaches have the advantage of being linear: the activity of the sources is a linear combination of the MEG/EEG recordings. The inverse operator can therefore be computed independently of the recordings, which makes the data manipulation a lot easier and faster.
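To illustrate this linearity, here is a minimal NumPy sketch (toy dimensions and a plain Tikhonov regularization, not the actual Brainstorm wMNE implementation, which also includes noise whitening and depth weighting):

    import numpy as np

    rng = np.random.default_rng(0)
    n_sensors, n_sources, n_time = 30, 2000, 50          # toy sizes, not the real 45,000 sources

    G = rng.standard_normal((n_sensors, n_sources))       # forward (gain) matrix
    lam = 0.1                                              # regularization parameter

    # The inverse operator (imaging kernel) depends only on the forward model,
    # not on the recordings: it is computed once and can be reused.
    K = G.T @ np.linalg.inv(G @ G.T + lam * np.eye(n_sensors))   # [n_sources x n_sensors]

    # Apply the same kernel to any recordings acquired with the same sensors:
    M1 = rng.standard_normal((n_sensors, n_time))
    M2 = rng.standard_normal((n_sensors, n_time))
    J1, J2 = K @ M1, K @ M2

    # Linearity: the sources of an average equal the average of the sources.
    assert np.allclose(K @ ((M1 + M2) / 2), (J1 + J2) / 2)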

Both are available in Brainstorm, so you can use the one best adapted to your recordings or to your own expertise. Only the minimum norm estimates are described in this tutorial, but the other solutions work in exactly the same way.

Source estimation options [TODO]

Before we start estimating the sources for the recordings available in our database, let's start with an overview of the options available. The screen capture below represents the basic options for the minimum norm estimates. The options for the other methods will be described in advanced tutorials.

minnorm_options.gif

Method

Measure

The minimum norm estimate gives a measure of the current density flowing at the surface of the cortex. To visualize these results and compare them between subjects, we can normalize the MNE values to get a standardized level of activation with respect to the noise or baseline level (dSPM, sLORETA, MNp).

Source orientation

Computing sources for a single data file

Display: Cortex surface

  1. Double-click on recordings Right / ERF, to display the time series (always nice to have a time reference).

  2. Double-click on sources Right / ERF / MN: MEG.
    Equivalent to right-click > Cortical activations > Display on cortex.

  3. Go to the main peak around 46ms (by clicking on the time series figure)

    [ATTACH]

  4. Then you can manipulate the sources display exactly the same way as the surfaces and the 2D/3D recordings figures: rotation, zoom, Surface tab (smoothing, sulci, resection...), colormap, sensors, predefined orientations (keys 0 to 7)...

  5. Three new controls are available in the Surfaces tab, in panel Data options:

    • Amplitude: Only the sources that have a value greater than a given percentage of the colorbar maximum are displayed.

    • Min. size: Hide all the small activated regions, i.e. the connected color patches that contain fewer vertices than this "min. size" value.

    • Transparency: Change the transparency of the sources on the cortex.

  6. Take a few minutes to understand what this threshold value represents.

    • The colorbar maximum depends on the way you configured your Sources colormap. In case the colormap is NOT normalized to current time frame, and the maximum is NOT set to a specific value, the colorbar maximum should be around 68 pA.m.

    • On the screenshot above, the threshold value was set to 35%. It means that only the sources with a value over 0.35*68 = 23.8 pA.m were visible (see the short sketch after this list).
    • If you set the threshold to 0%, you display all the source values on the cortex surface; and as most of the sources have values close to 0, the brain is mainly blue.
    • Move the slider and look for a threshold value that gives you a really focal source. The following figures represent the source activations at t=46ms with thresholds of 0% and 90% respectively.

      [ATTACH] [ATTACH]

    • The figure on the right shows the most active area of the cortex 46ms after an electric stimulation of the right thumb. As expected, it is localized in the left hemisphere, in the middle of the postcentral gyrus (projection of the right hand in the primary somatosensory cortex).
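If you are curious about what the Amplitude slider does behind the scenes, the logic described in step 6 boils down to something like the following sketch (our own simplified version with hypothetical variable names, not the actual Brainstorm display code):

    import numpy as np

    rng = np.random.default_rng(0)
    source_values = rng.standard_normal(15000) * 20e-12   # toy current density map [A.m]

    colorbar_max = 68e-12     # ~68 pA.m, as in the example above
    amp_threshold = 0.35      # the 35% slider value

    # Only the vertices above 35% of the colorbar maximum are displayed:
    visible = np.abs(source_values) > amp_threshold * colorbar_max
    print("Threshold: %.1f pA.m (0.35 * 68), visible vertices: %d"
          % (amp_threshold * colorbar_max * 1e12, visible.sum()))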

Display: MRI 3D

  1. Close all the figures (Close all button). Open the time series view for Right / ERF.

  2. Right-click on Right / ERF / MN: MEG > Cortical activations > Display on MRI (3D).

  3. This view was also introduced in the tutorial about MRI and surface visualization. Try to rotate, zoom, move the slices, move in time, change the threshold.

    [ATTACH] [ATTACH]

  4. A new menu is available in the popup menu of this figure: MRI Display
    • MIP Anatomy: for each slice, display the maximum value over all the slices instead of the original value in the structural MRI (fig 1; see the sketch after this list)

    • MIP Functional: same thing but with the layer of functional values (fig 2)

    • Smooth level: The source values are smoothed after being re-interpolated in the volume. These menus define the size of the smoothing kernel (fig 2: smooth=2; fig 3: smooth=5).

      [ATTACH] [ATTACH] [ATTACH]
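To clarify what a maximum intensity projection (MIP) is, here is a toy NumPy illustration (not Brainstorm code): instead of showing the values of the current slice, every slice shows the maximum over all slices along the viewing axis.

    import numpy as np

    rng = np.random.default_rng(0)
    volume = rng.random((128, 128, 128))    # toy anatomical or functional volume

    normal_slice = volume[:, :, 50]         # regular display: values of slice #50 only
    mip_image = volume.max(axis=2)          # MIP display: maximum over all slices, same image for every slice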

Display: MRI Viewer

  1. Right-click on Right / ERF / MN: MEG > Cortical activations > Display on MRI (MRI Viewer).

  2. This view was also introduced in the tutorial about MRI and surface visualization. Try to move the slices (sliders, mouse wheel, click on the views), move in time, change the threshold.

    [ATTACH]

Display: Contact sheets and movies

Minimum norm values are not only positive

You should pay attention to a property of the current amplitudes returned by the wMNE method: they can be positive or negative, and they oscillate around zero. It is not easy to interpret the exact meaning of a negative value with respect to a positive one, and most of the time we are only interested in knowing what is activated at what time; therefore we often look only at the absolute values of the sources.

In some other cases, mainly when doing frequency analysis, we need to pay attention to the sign of these values. Because we cannot do a frequency decomposition of the absolute values of the sources, we need to keep the sign throughout our processing.
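Here is a small illustration of why the sign has to be kept (a generic NumPy example, not tied to any specific Brainstorm process): rectifying an oscillating time series moves its spectral energy away from the true frequency.

    import numpy as np

    fs = 1000                                  # sampling rate [Hz]
    t = np.arange(0, 1, 1 / fs)
    source = np.sin(2 * np.pi * 10 * t)        # a 10 Hz oscillation, positive and negative values

    freqs = np.fft.rfftfreq(t.size, 1 / fs)
    peak_signed = freqs[np.argmax(np.abs(np.fft.rfft(source)))]             # 10 Hz: the true frequency
    peak_rectified = freqs[np.argmax(np.abs(np.fft.rfft(np.abs(source))))]  # 0 Hz: the 10 Hz information is lost

    print(peak_signed, peak_rectified)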

Display again the sources for Right / ERF on the cortex surface (double-click on the source file), and uncheck the Absolute option for the colormap "Sources" (right-click on the figure > Colormap Sources > Absolute values). Decrease the threshold to observe the alternation between positive and negative values on the surface. Then double-click on the colorbar to reset it to its defaults.

[ATTACH]

Source map normalization

Typical recommendations:

wMNE: constrained (kernel and full), and unconstrained

[ATTACH] [ATTACH] [ATTACH]

dSPM, sLORETA:

[ATTACH] [ATTACH]

Now delete all these files when you're done, and keep only the initial solution: wMNE, Constrained.

Computing sources for multiple data files

UNCONSTRAINED MN

The source file we are observing was computed as an inversion kernel. It means that we can apply it to any similar recordings file (same subject, same run, same positions of sensors). But in our TutorialCTF database, the MN: MEG node only appears in the ERF file, not in the Std one. Why is it necessary to share an inversion kernel between different recordings?

  1. Compute another source estimation: but instead of selecting Compute sources from the popup menu of the ERF recordings (which would mean that you only want sources for this particular recordings file), get this menu from the Right condition. This means that you want the inversion model to be applied to all the data in the condition.

  2. Select "Minimum Norm Imaging", click on Run.
  3. Three new nodes are available in the tree:

    [ATTACH] [ATTACH]

  4. The inversion kernel you have just computed (1) contains the same information as the one from the previous section (Computing sources for a single data file). Note that you cannot do anything with this file directly: if you right-click on it, you can see that there is no Display menu for it.

    • Two links (2) allow you to apply this inversion kernel to the data files available in this condition (ERF and Std). Their popup menus contain all the display options introduced in the previous section.

    • These links are not saved as files but as specific strings in the database: "link|kernel_file|data_file". This means that to represent them, one has to load the shared kernel, load the recordings, and multiply them (see the sketch after this list).
    • The sources for the Std file do not have any meaning, so do not even try to open them. They were computed only to illustrate the way a kernel is shared.

  5. Double-click on both source files available for Right / ERF (link and non-link), and verify at many different times that the cortical maps are exactly the same in both cases.

  6. You can estimate the sources for many subjects or conditions at once, as explained for the head models in the previous tutorial: the Compute sources menu is available in the popup menus of all the subjects and conditions.

  7. Delete the shared kernel (1): we don't need redundant and confusing information for the next steps. Observe that both links disappear at the same time.
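For reference, resolving one of these links amounts to something like the following sketch (the file names are hypothetical, and the field names ImagingKernel and F are our assumptions about the file contents):

    from scipy.io import loadmat

    # A database link of the form "link|kernel_file|data_file" is resolved by
    # loading both files and multiplying the kernel by the recordings on the fly.
    link = "link|results_MN_MEG_KERNEL.mat|data_Right_ERF.mat"   # hypothetical file names
    _, kernel_file, data_file = link.split("|")

    kernel = loadmat(kernel_file)["ImagingKernel"]   # [n_sources x n_sensors] (assumed field name)
    recordings = loadmat(data_file)["F"]             # [n_sensors x n_time]    (assumed field name)

    sources = kernel @ recordings                    # [n_sources x n_time], computed only when needed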

Average in source space

Advanced

Advanced options

Let's briefly introduce the other options offered for the source estimation. Right-click again on Right / ERF > Compute sources. Click on "Expert mode": more options appear in the window. If you click on Run, you have access to all the options of the wMNE algorithm.

[ATTACH] [ATTACH]

Beamformer options

Dipole fitting options

Output mode

Signal properties

Noise covariance matrix

Depth weighting

The minimum-norm estimates have a bias towards superficial currents. This tendency can be alleviated by adjusting these parameters. To understand how to set these parameters, please refer to the MNE manual (options --depth, --weightexp and --weightlimit).
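As a reading aid, here is a rough sketch of the kind of depth weighting these parameters control (our own simplified version inspired by the --depth/--weightexp/--weightlimit options; it is not the exact MNE or Brainstorm implementation):

    import numpy as np

    def depth_weights(G, weight_exp=0.5, weight_limit=10.0):
        # G: [n_sensors x n_sources] gain matrix, fixed dipole orientations assumed.
        # Superficial sources have large lead-field norms, so a plain minimum norm
        # favors them; giving deep sources a larger prior variance compensates for this.
        col_norm2 = np.sum(G ** 2, axis=0)            # squared norm of each gain column
        w = col_norm2 ** (-weight_exp)                # deep sources get larger weights
        w = np.minimum(w, weight_limit * w.min())     # limit the dynamic range (one common convention)
        return w / w.max()                            # relative prior variances, one per source

    # These weights would form the diagonal of the source covariance matrix R
    # used by the weighted minimum norm (wMNE) operator.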

Advanced

Equations

TODO: John

Advanced

Rey on sLORETA

Yes, in sLORETA the noise covariance is not used at all for the standardization process. It can be used for modeling correlated noise and whitening, but that is optional.

I have noticed that a lot of folks are confused about this and I have seen many statements in papers spreading this awful confusion. sLORETA is standardized by the resolution matrix (diagonal for dipole orientation constraints, or block diagonals for free orientations).

That is why sLORETA has zero localization bias for ALL point-spread functions, and why I always prefer sLORETA over dSPM, MNE, or any beamformer. This is all in the math... but just so that you know, Fa-Hsuan Lin's paper comparing sLORETA with dSPM and MNE has a big mistake: the assumed source covariance matrix is not the identity matrix, and that violates the beauty of the math and results in non-zero localization bias. That's why in Brainstorm the prior source covariance matrix used for sLORETA automatically uses no depth bias compensation (identity matrix). sLORETA accomplishes depth bias compensation via the resolution matrix, NOT via the prior source covariance matrix. Trying to use a depth exponent of 0.7 or 0.8 like we do for MNE and dSPM will mess up sLORETA. You will not find this in a paper, but I checked it all out many years ago. This is critical.
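In compact form, for the constrained-orientation case and with the identity source prior mentioned above, the sLORETA standardization can be written as follows (our notation, as a reading aid only):

    J_mn = K * M               (minimum norm estimate, K: inverse kernel, M: recordings)
    R    = K * G               (resolution matrix)
    J_sloreta(j) = J_mn(j) / sqrt(R(j,j))

Each minimum-norm value is divided by the square root of the corresponding diagonal element of the resolution matrix, not by a projection of the noise covariance.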

Explain sLORETA units (see email exchanges from Feb 2015)

Advanced

Issues with dSPM average

Average(dSPM) is NOT equal to dSPM(Average).

There is no problem for the MNE and sLORETA solutions, because the scaling of the noise covariance matrix doesn't impact the results.
    wMNE(Data, NoiseCov) = wMNE(Data, NoiseCov / N)
So when we average we get:
    Average(wMNE(Trials, NoiseCov)) = wMNE(Average, NoiseCov) = wMNE(Average, NoiseCov / N)

But for dSPM we have:
    dSPM(Data, NoiseCov) = dSPM(Data, NoiseCov / N) ./ sqrt(N)
So when we average we get:
    Average(dSPM(Trials, NoiseCov)) = dSPM(Average, NoiseCov) = dSPM(Average, NoiseCov / N) ./ sqrt(N)
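A small numerical check of this sqrt(N) factor (a hypothetical NumPy sketch: it takes the wMNE kernel as fixed, which is exactly the invariance stated above, and uses a diagonal noise covariance):

    import numpy as np

    rng = np.random.default_rng(0)
    n_sensors, n_sources, n_trials = 30, 500, 40

    K = rng.standard_normal((n_sources, n_sensors))    # a fixed wMNE imaging kernel
    C = np.diag(rng.uniform(0.5, 2.0, n_sensors))      # noise covariance of the single trials

    def dspm(data, K, C):
        # dSPM = wMNE values divided by the projected noise standard deviation
        noise_proj = np.sqrt(np.einsum('ij,jk,ik->i', K, C, K))   # sqrt(diag(K * C * K'))
        return (K @ data) / noise_proj[:, None]

    trials = rng.standard_normal((n_trials, n_sensors, 1))
    avg = trials.mean(axis=0)

    avg_of_dspm = np.mean([dspm(t, K, C) for t in trials], axis=0)   # Average(dSPM(Trials, NoiseCov))
    dspm_of_avg = dspm(avg, K, C / n_trials)                         # dSPM(Average, NoiseCov / N)

    # The two maps differ exactly by a factor sqrt(N):
    assert np.allclose(dspm_of_avg, avg_of_dspm * np.sqrt(n_trials))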

Rey: "Basically, the dSPM value at each location is equal to the wMNE value divided by the projection of the estimated noise covariance matrix onto each source point. After whitening, the operational noise covariance matrix is by definition the identity matrix, and hence the projection of the noise is equal to the L2 norm of the row vector of the wMNE inverse operator (in the case of fixed dipole orientations). So, dSPM is what you get when the rows of the wMNE inverse operator all have unit norm (i.e., they all point in different directions but lie in a unit hyper-sphere)."

Rey: "dSPM is really a source mapping of SNR, not of activity. Hence, it's not all so surprising that the single trial SNR maps are smaller...

"Rey: "Perhaps, dSPM should be used only for averaged data (i.e., ERF, ERP), at least until it's all figure out. In a way, dSPM is just MNE followed by the noise normalization. Thus, you could do all the single trial processing with the MNE algorithm, and only do the noise normalization when needed (e.g., after averaging or on single trials only if they are not going to be averaged)."

On the hard drive

Document file tags

Document file structure

References

Additional discussions on the forum










Tutorials/SourceEstimation (last edited 2015-08-11 19:17:31 by FrancoisTadel)