Tutorial 22: Source estimation

Authors: Francois Tadel, Elizabeth Bock, Rey R Ramirez, John C Mosher, Richard M Leahy, Sylvain Baillet

This section describes how to estimate the brain activity that accounts for the scalp recordings.

Background

Estimating brain activity at potentially thousands of brain locations (determined by the forward head model) from far fewer sensor locations is a so-called ill-posed inverse problem. One implication is that an infinite number of source activity patterns can explain the sensor data equally well. These aspects are explained in detail elsewhere in the Brainstorm documentation.

Such ill-posedness is not specific to EEG/MEG. It is quite typical in many other fields of science and engineering.

There is a vast EEG/MEG literature on the question. Brainstorm features three well-documented types of approaches: minimum-norm imaging, beamforming, and dipole modeling.

One advantage shared by these approaches is that they are computationally efficient, even on large datasets. The estimates of brain source activity are derived via a linear recombination of sensor recordings. Brainstorm therefore computes a kernel (a large matrix) that is conveniently stored in the database and can be multiplied with sensor data arrays to obtain source time series, at specific brain locations or across the entire brain.
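
Concretely, because the estimator is linear, obtaining source time series reduces to a single matrix product. Below is a minimal MATLAB sketch with stand-in data and dimensions (274 sensors, 15,000 sources, 361 time samples); the variable names are hypothetical, not Brainstorm API:

    % Minimal sketch: linear source estimation as a matrix product.
    K = randn(15000, 274);   % imaging kernel: [Nsources x Nsensors]
    F = randn(274, 361);     % sensor recordings: [Nsensors x Ntime]
    S = K * F;               % source time series: [Nsources x Ntime]
    plot(S(1000, :));        % estimated current at one brain location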

Below we first describe the options of the minimum-norm imaging approach, then beamformers and dipole modeling. The latter two are technically similar.

Source estimation options

Method

minnorm_options.gif

Minimum-norm (MN) imaging

Beamforming

Dipole modeling

Recommended option

MN imaging variants

minnorm_options_measure.gif

By default, MN imaging estimates the amplitude of brain electrical currents at each grid location determined by the forward head model (i.e., either in volume or on the cortical surface). As discussed here, the currents are expressed in A-m. Brainstorm does not normalize by surface area (A/m, i.e., current surface density) or volume (A/m^2, i.e., current volume density). Nonetheless, we refer to this default setting as yielding a current density map.

To further compensate for the inhomogeneous sensitivity of EEG/MEG with depth and orientation of the current flow, we recommend that the current density maps obtained with this option be further standardized using a z-score transformation with respect to a specific time segment of no interest (e.g., pre-stimulus baseline) or experimental condition (e.g., resting-state).
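
For reference, the z-score transformation uses, at each brain location, the mean and standard deviation computed over the reference segment. A minimal MATLAB sketch with stand-in data and hypothetical variable names:

    % Z-score each source time series with respect to a baseline window.
    S = randn(15000, 361);               % source maps: [Nsources x Ntime]
    iBaseline = 1:90;                    % hypothetical pre-stimulus samples
    mu    = mean(S(:, iBaseline), 2);    % baseline mean, per source
    sigma = std(S(:, iBaseline), 0, 2);  % baseline std deviation, per source
    Z = (S - mu) ./ sigma;               % standardized maps (implicit expansion, R2016b+)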

Alternatively, such standardization can be achieved directly with respect to global noise and data covariance statistics via the dSPM and sLORETA options.

Source model: Dipole orientations [TODO]

The current flow of neural activity at each source location is modeled by the orientation of an equivalent current dipole. Brainstorm features the following options to determine this orientation:

minnorm_options_orient.gif

Sensors

minnorm_options_sensors.gif

Brainstorm automatically detects the types of sensors (MEG, EEG, etc.) available from the head model selected for source imaging. In the example above, only MEG sensors are available. Select the sensor type(s) you wish to include in the source estimation.

However, cross-modality calculations -- the fusion of MEG and EEG data to yield a joint source map -- are very sensitive to covariance calculations and head model approximations. As of spring 2018, we have elected not to account for cross-covariances between different sensor types. If you wish to obtain a joint, multimodal source model, we recommend that you compute each source map separately and then combine them visually or quantitatively.

Computing sources for an average

We describe here a basic example of how to use Brainstorm to obtain MN imaging maps of event-related average sensor data.

Display: Cortex surface

Note that these parameters only adjust the visualization of source maps. They have no effect on the actual source time series.

A few more words about the amplitude threshold parameter:

Recommended post-processing steps

The original source maps may look noisy or patchy. This is due to the strict orientation constraint used in the brain mapping procedure, which emphasizes the sensitivity of brain current strengths to the curvature of the cortex (this effect is more pronounced with MEG than EEG).

Please be cautious not to interpret disconnected colored patches as distinct brain activations without further post-processing. The absolute spatial resolution of MEG source mapping is limited (~5-10 mm, worse in EEG), although its relative resolution between experimental conditions, after post-processing, can be much finer (1 mm or less; see for instance this retinotopy study).

For now, you may generate smoother versions of the source maps by applying a spatial smoothing process (process "Sources > Spatial smoothing"), or using unconstrained source models, or standardizing source amplitude by applying a z-score transformation with respect to a time period of reference.

Brain maps obtained with dSPM or sLORETA are also standardized and more immune to orientation confounds (see below for more details).

Display: MRI Viewer

display_smooth.gif

Display: MRI 3D

display_mri3d.gif

Sign of constrained maps

Source brain maps consist of time series that fluctuate between positive and negative values.

You can visualize how the sign of source amplitudes is distributed across the cortex using the cortical display of sources: set the amplitude threshold to 0%, then make sure the colormap shows relative (i.e., both positive and negative) values. To do this, right-click on the colorbar > Colormap: Sources > uncheck the "Absolute values" option. At any time, you can double-click on the colorbar to reset the colormap options to their default values.

As shown below, a typical brain map shows stripes of positive and negative values, with sign changes around sulcal locations. This is another manifestation of the limited absolute spatial resolution of MEG/EEG source mapping. Sources on opposite sides of a sulcus are oriented in opposite directions by default, so when source mapping assigns them opposite signs, the respective neural currents are actually estimated as flowing in the same direction. We will see later how this sign ambiguity can be managed, for instance by processing rectified source time series (if you wish to map source amplitude effects only). It is crucial to preserve the sign, though, if you are interested in frequency-specific brain activity, such as spectral, time-frequency, and connectivity analyses.

More on sign ambiguity: on opposite walls of a sulcus, brain sources are very close to each other, with opposite orientations. If the true brain activity sits on only one side of a sulcus, as shown below with a green arrow, the lower spatial resolution of the MN imaging map will spread the estimated currents over multiple nearby locations, shown with the red and blue arrows below, whose default directions are imposed by anatomy (dipoles pointing outward from the cortical surface). The signs of the estimated current flows will therefore be opposite, with positive values (red arrows) on one side of the sulcus and negative values (blue arrows) on the other.

For visualization purposes, we are mostly interested at this stage in the magnitude of brain activity, hence the default colormap option "Absolute values" is selected.
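
As a minimal illustration of the two ways of handling the sign discussed above, the following MATLAB sketch (stand-in data, hypothetical variable names) contrasts rectified and signed source time series:

    % Rectification removes the anatomical sign ambiguity, at the cost of
    % discarding the polarity information needed for frequency-specific analyses.
    S = randn(15000, 361);   % constrained source maps: [Nsources x Ntime]
    S_rect = abs(S);         % rectified time series: amplitude effects only
    % Keep the signed S for spectral, time-frequency and connectivity analyses.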

Unconstrained orientations

The "loose constraints" and "unconstrained" options for source orientations yield 3 time series per brain location (from three orthogonal elementary sources), which increases the dimensionality of the source maps, hence complexify their interpretation, but produces smoother renderings of current flows. We recommend these options when using an MRI template instead of the individual MRI volume of study participants, or when studying subcortical brain structures. Unconstrained maps can subsequently be converted to "flat" maps, i.e. a single orientation per location, which may be needed for some processes.

Here we will illustrate the fully unconstrained case. The procedure for the loose constraints options is similar.

Standardization of source maps

Standardization procedures can compensate for some of the bias of MN imaging source maps towards superficial source locations (in both MEG and EEG) and radially oriented current flows (in MEG). They also enable a fairer comparison of brain activity between individuals, based on relative changes with respect to a reference data segment.

Reference data segments can be extracted from empty-room recordings (MEG), pre-stimulus baseline or resting state data (MEG and EEG).

dSPM and sLORETA perform such standardization within their respective source mapping procedures. Brainstorm also features a Z-score normalization process, which enables a versatile definition of the reference data segment.

Source map standardization does not alter the dynamics of the source time series and only scales their respective amplitude changes. The scaling factors are different at each brain location, hence the resulting source maps will look different than the original MN images, but with the same temporal dynamics.

dSPM, sLORETA (embedded standardization)

Z-score transformation of brain maps

Delete your experiments

Before we proceed with the rest of the tutorial, please delete the source maps computed so far.

Computing sources for single trials

MN imaging models are linear: their imaging kernels can be pre-computed from anatomy and sensor locations only, then applied on the fly to long, ongoing recordings or to multiple epoched data segments. Here we show how to obtain source maps across multiple single-trial epochs.
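
Because the same kernel applies to every epoch and the operation is linear, averaging single-trial source maps is equivalent to applying the kernel to the sensor-space average. A minimal MATLAB sketch with stand-in data:

    K = randn(15000, 274);                        % shared imaging kernel
    F = randn(274, 361, 40);                      % 40 single-trial epochs (stand-in)
    S = reshape(K * F(:, :), [15000, 361, 40]);   % source maps for all trials at once
    % Linearity: averaging source maps == imaging the sensor average.
    d = max(max(abs(mean(S, 3) - K * mean(F, 3))));   % numerically ~ 0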

Averaging in source space

Computing the average

Visualization filters

Low-pass filter

Z-score normalization

Note for beginners

Everything below is advanced documentation; you can skip it for now.




Advanced

Averaging normalized values

Averaging normalized source maps within a single subject requires more attention than averaging current density maps. Since averaging reduces variance, the resulting source maps will have a different statistical distribution than the nominal distribution of the individual maps.

For example, averaging z-score normalized maps will result in maps with variance less than 1; the same holds true for dSPM maps. Assuming independent samples, the variance of an average of N maps drops by 1/N. For this reason, it is generally recommended to select the "Weighted average" option in the "Average files" process when averaging trials or source maps (which computes mean(x) = (N1*mean(x1) + N2*mean(x2) + …) / (N1 + N2 + …)) in order to keep track of the number of samples and the actual variance of averaged statistical maps.
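
As a numerical illustration of the rule above (stand-in values, not the actual process code):

    N1 = 40; N2 = 60;
    avg1 = randn(15000, 361);    % average of N1 trials (stand-in)
    avg2 = randn(15000, 361);    % average of N2 trials (stand-in)
    grand = (N1 * avg1 + N2 * avg2) / (N1 + N2);   % weighted average
    % For z-scored maps of N independent trials, the variance of the average
    % drops to 1/N, i.e. its standard deviation is 1/sqrt(N).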

dSPM

Z-score

sLORETA

Advanced

Display: Contact sheets and movies

A good way to represent what is happening in time is to generate contact sheets or videos. Right-click on any figure and go to the menu Snapshot to check out all the possible options. For a nicer result, take some time to adjust the size of the figure, the amplitude threshold and the colormap options (hiding the colorbar can be a good option for contact sheets).

A time stamp is added to the captured figure. The size of the text font is fixed, so if you want it to be readable in the contact sheet, you should make your figure very small before starting the capture. The screen captures below were produced with the colormap "hot".

Advanced

Model evaluation

One way to evaluate the accuracy of the source reconstruction is to simulate recordings using the estimated source maps. This is done simply by multiplying the source time series with the forward model:
MEG_simulated [Nmeg x Ntime] = Forward_model [Nmeg x Nsources] * MN_sources [Nsources x Ntime]
Then you can compare visually the original MEG recordings with the simulated ones. More formally, you can compute an error measure from the residuals (recordings - simulated).
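
For instance, one simple error measure is a goodness of fit computed from the residual power. A minimal MATLAB sketch with stand-in data and hypothetical variable names:

    G = randn(274, 15000);              % forward model: [Nmeg x Nsources]
    S = randn(15000, 361);              % estimated sources: [Nsources x Ntime]
    F = G * S + 0.1 * randn(274, 361);  % recorded data (stand-in: model + noise)
    F_sim = G * S;                      % simulated recordings
    R = F - F_sim;                      % residuals
    gof = 1 - sum(R(:).^2) / sum(F(:).^2);   % goodness of fit (1 = perfect)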

To simulate MEG recordings from a minimum norm source model, right-click on the source file, then select the menu "Model evaluation > Simulate recordings".

Open side-by-side the original and simulated MEG recordings for the same condition:

Advanced

Advanced options: Minimum norm

Right-click on the deviant average for Run#01 > Compute sources [2018].
Click on the button [Show details] to bring up all the advanced minimum norm options.

minnorm_details.gif

Depth weighting

Briefly, the use of various depth weightings was far more debated in the 1990s, before the introduction of MNE normalization via dSPM, sLORETA, and other "z-scoring" methods, which mostly cancel the effects of depth weighting (put another way, after normalization min norm results tend to look quite similar whether depth weighting is used or not).

By modifying the source covariance model at each point in the source grid, deeper sources are "boosted" to increase their signal strength relative to the shallower dipoles; otherwise, the resulting MNE current density maps would be dominated by the shallower sources. When using dSPM or sLORETA, depth weighting makes little difference. To understand how to set these parameters, please refer to the MNE manual (options --depth, --weightexp and --weightlimit).
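
Schematically, depth weighting scales each source's prior variance by an inverse power of the norm of its gain-matrix column, with the spread of the weights clipped by a maximal amount. The MATLAB sketch below is a simplified illustration of this idea, not Brainstorm's exact implementation:

    G = randn(274, 15000);          % forward model: [Nsensors x Nsources]
    order  = 0.5;                   % depth-weighting order (exponent)
    maxAmt = 10;                    % maximal amount: limit on the weight spread
    w = sum(G.^2, 1) .^ (-order);   % weaker (deeper) sources get larger weights
    w = min(w, maxAmt * min(w));    % clip the dynamic range of the weights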

Noise covariance regularization [TODO]

MNE and dipole modeling are best done with an accurate model of the noise covariance, which is generally computed from experimental data. As such, these estimates are themselves prone to errors that arise from too few data points, weak sensors, and strange data dependencies that can cause the eigenspectrum of the covariance matrix to be ill-conditioned (i.e., a large eigenvalue spread, or matrix condition number). In Brainstorm, we provide several means to "stabilize" or "regularize" the noise covariance matrix, so that source estimation calculations are more robust to small errors.

Regularization parameter [TODO]

In minimum norm estimates, as mentioned above in the comparisons among methods, the data covariance matrix is essentially synthesized by adding the noise covariance matrix to a modeled signal covariance matrix. The signal covariance matrix is generated by passing the source prior through the forward model. The source prior is in turn prescribed by the source model orientation and the depth weighting.

A final regularization parameter, however, determines how much weight the signal model should be given relative to the noise model, i.e. the "signal to noise ratio" (SNR). In Brainstorm, we follow the definition of SNR as first defined in the original MNE software of Hamalainen. The signal covariance matrix is "whitened" by the noise covariance matrix, such that the whitened eigenspectrum has elements in terms of SNR (power). We find the mean of this spectrum, then take the square root to yield the average SNR (amplitude). The default in MNE and in Brainstorm is "3", i.e. the average SNR (power) is 9.
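
A sketch of this convention, with stand-in covariance matrices (the whitened eigenvalues play the role of SNR in power):

    Cn = cov(randn(5000, 274));               % noise covariance (stand-in)
    Cs = cov(randn(5000, 274));               % modeled signal covariance (stand-in)
    W  = inv(sqrtm(Cn));                      % whitening operator
    snrPower = mean(real(eig(W * Cs * W')));  % mean whitened eigenvalue: SNR (power)
    snrAmp   = sqrt(snrPower);                % average SNR (amplitude); default target: 3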

Output mode

As mentioned above, these methods create a convenient linear imaging kernel that is "tall" in the number of elemental dipoles (one or three per grid point) and "wide" only in the number of sensors. At subsequent visualization time, we efficiently multiply the kernel with the data matrix to compute the min norm images.

For some custom purposes, however, a user may find it convenient to pre-multiply the data matrix and generate the full source estimation matrix. This would only be recommended in small data sets, since the full results can become quite large.

Advanced options: LCMV beamformer

As mentioned in the introduction above, two other methods can be selected for source estimation, a beamformer and dipole modeling. In this section, we review the options for the beamformer. On top of the noise covariance matrix, you need to estimate a data covariance matrix in order to enable the option "LCMV beamformer" in the interface.

Note that pre-whitening with the noise covariance matrix has not yet been implemented for the LCMV beamformer, and only the data covariance is used in the current version. The noise covariance has no impact on the LCMV beamformer results. However, if there is no noise covariance file available in the database, the "Compute sources" interface returns an error: to work around this limitation, you may select the "No noise modeling (identity matrix)" option in the contextual menu for the noise covariance.

lcmv_options.gif

Measure

The only option, "Pseudo Neural Activity Index" (PNAI), is named after the definition of the Neural Activity Index (NAI). We have modified Van Veen's definition to rely strictly on the data covariance, without the need for a separate noise covariance matrix, but the basic premise is the same as in dSPM, sLORETA, and other normalizations. Viewing the resulting "map", in an identical manner to that with MNE, dSPM, and sLORETA described above, reveals possibly multiple sources as peaks in the map. Note that PNAI scores are unitless, because they correspond to standardized statistical fluctuations of neural currents; in this way, PNAI scores are analogous to z-scoring.

Dipole orientations

We recommend you choose "unconstrained" and let the later Dipole scanning process, which finds the best fitting dipole at each time point, optimize the orientation with respect to the data.

Data covariance regularization

Same definitions as in MNE, only applied to the data covariance matrix, rather than the noise covariance matrix. Our recommendation is to use median eigenvalue.
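
One plausible reading of such an eigenvalue-based regularization rule, as a MATLAB sketch (the exact rule used by Brainstorm may differ; see the Brainstorm code for details):

    C = cov(randn(1000, 274));     % data covariance (stand-in)
    [V, D] = eig((C + C') / 2);    % symmetrize, then eigendecompose
    d = diag(D);
    d = max(d, median(d));         % floor the eigenvalue spectrum at its median
    Creg = V * diag(d) * V';       % regularized covariance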

Advanced options: Dipole modeling

Dipole modeling fits a single dipole at each potential source location to produce a dipole scanning map. This map can be viewed as an indication of how well, and where, the dipole fits at each time point. However, we recommend using the subsequent best-dipole fitting routine (dipole scanning) to determine the final location and orientation of the dipole (one per time point). Please note that this function does not fit multiple simultaneous dipoles.

Although not widely recognized, dipole modeling and beamforming are more alike than they are different: when comparing the inverse operators required to compute the dipole scanning map (dipole modeling) and the beamformer output map (LCMV), we see that they differ only in that the former uses an inverse noise covariance matrix while the latter replaces this with the inverse of the data covariance.
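
Schematically, for a single grid location with unconstrained gain matrix L, the two operators share the same form and differ only in which covariance is inverted. A sketch with stand-in matrices and hypothetical notation:

    nSens = 274;
    L  = randn(nSens, 3);                 % forward field of one unconstrained dipole
    Cn = cov(randn(2000, nSens));         % noise covariance (stand-in)
    Cd = cov(randn(2000, nSens));         % data covariance (stand-in)
    W_scan = (L' / Cn * L) \ (L' / Cn);   % dipole scanning: inverse noise covariance
    W_lcmv = (L' / Cd * L) \ (L' / Cd);   % LCMV beamformer: inverse data covariance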

dipoles_options.gif

Measure

This field is no longer shown in the interface, but the resulting imaging kernel file is directly analogous to the PNAI result from LCMV beamforming. The user can display this scanning measure just as with the LCMV case, where again the normalization and units are a form of z-scoring.

Dipole orientations

Use "unconstrained source" modeling and let the process "dipole scanning" optimize the orientation of the dipole for every time instance.

Noise covariance regularization

Similarly, use "median eigenvalue".

The tutorial "MEG current phantom (Elekta)" demonstrates dipole modeling of 32 individual dipoles under realistic experimental noise conditions.

Advanced

Combining MEG+EEG for source estimation

Magnetoencephalography and EEG sensor data can be processed jointly to produce combined source estimates. Joint processing presents unique challenges because EEG and MEG use head models that exhibit differing sensitivities to modeling errors, which can in turn lead to inconsistencies between EEG and MEG with respect to the (common) source model. In practice joint processing is relatively rare (Baillet et al., 1999). However, these data are complementary, which means that joint processing can potentially yield insights that cannot be seen with either modality alone.

For example, in the evoked responses in the data set used here, the first peak over the occipital areas is observed in MEG (90 ms) slightly before EEG (110 ms). This delay is too large to be caused by acquisition imprecision. This indicates that we are not capturing the same brain processes with the two modalities, possibly because the orientation and type of activity in the underlying cortical sources is different.

MEG and EEG have different sensitivities to source orientation and depth. Given the challenges of joint processing, our advice is to first look at the source reconstructions for the two modalities separately before trying to use any type of fusion technique.

Advanced

On the hard drive

Constrained shared kernel

Right-click on a shared inverse file in the database explorer > File > View file contents.

Structure of the source files: results_*.mat

Mandatory fields:

Optional fields:

Full source maps

In Intra-subject, right-click on one of the normalized averages > File > View file contents.

This file has the same structure as a shared inverse kernel, with the following differences:

Filename tags

Useful functions

Additional documentation

Articles

Tutorials

Forum discussions







