''Authors: Francois Tadel, Elizabeth Bock, Rey R Ramirez, John C Mosher, Richard M Leahy, Sylvain Baillet''

You have in your database a forward model that explains how the cortical sources determine the values on the sensors. This is useful for simulations, but what we need next is to solve the inverse problem: how to estimate the sources when we have the recordings. This tutorial introduces the tools available in Brainstorm for solving this inverse problem. (For backward compatibility, see the [[Tutorials/TutSourceEstimation|old tutorials]].)
Our goal is to estimate the activity of the thousands of dipoles described by our forward model. However, we only have a few hundred spatial measurements as input (the number of sensors). This inverse problem is ill-posed: there are an '''infinite number''' of source activity patterns that could generate exactly the same sensor topography. Inverting the forward model directly is impossible, unless we add some strong priors to our model.
Many solutions to the inverse problem have been proposed in the literature, based on different assumptions on the way the brain works and depending on the amount of information we already have on the effects we are studying. Among the many methods available, Brainstorm offers three general approaches to the inverse problem that represent the most widely used methods in MEG/EEG source imaging: '''minimum-norm solutions''', '''beamformers''', and '''dipole modeling'''. These approaches have the advantage of being implemented in an efficient '''linear''' form: the activity of the sources is a linear recombination of the MEG/EEG recordings, such that it is possible to solve the inverse problem by applying a '''linear kernel''' (in the form of a matrix that multiplies the spatial data at each point in time) which is easily stored. Subsequent data manipulation and source visualization are then much simpler, as are comparisons among these techniques. Below we first describe the minimum norm imaging approach and its options, followed by the beamformer and dipole modeling, both of which are actually quite similar and only use a subset of the options available in the minimum norm approach.
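To make the "linear kernel" idea concrete, here is a minimal NumPy sketch (illustration only, not Brainstorm code; the variable names and dimensions are arbitrary). The inverse operator is a matrix of size [Nsources x Nchannels], and estimating the sources amounts to a single matrix multiplication applied to all time samples at once:

{{{#!python
import numpy as np

# Illustrative dimensions only (not taken from this dataset)
n_channels, n_sources, n_times = 274, 15000, 361

# Hypothetical imaging kernel and sensor recordings
imaging_kernel = np.random.randn(n_sources, n_channels)  # [Nsources x Nchannels]
recordings = np.random.randn(n_channels, n_times)        # [Nchannels x Ntime]

# Linear source estimation: one matrix product gives all the source time series
source_maps = imaging_kernel @ recordings                # [Nsources x Ntime]
print(source_maps.shape)                                 # (15000, 361)
}}}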
== Source estimation options ==
Before estimating the sources for the recordings available in our database, let's review the options available. This section focuses on the options for the minimum norm estimates. The other methods are described in advanced sections at the end of this page.

<<HTML(<div style="padding: 0px 0px 0px 10px; float: right;">)>> {{attachment:minnorm_options.gif}} <<HTML(</div>)>>

'''Minimum norm imaging'''
 * Estimates the sources as the solution to a linear imaging problem that can be interpreted in various ways ([[https://en.wikipedia.org/wiki/Tikhonov_regularization|Tikhonov regularization]], [[https://en.wikipedia.org/wiki/Maximum_a_posteriori_estimation|MAP estimation]]). The method finds a cortical current source density image that approximately fits the data when mapped through the forward model. The "ill-posedness" is dealt with by introducing a regularizer or prior in the form of a source covariance that favors solutions of minimum energy (or L2 norm).
 * Min norm requires specification of a noise and a source covariance matrix. Users can estimate a noise covariance matrix directly from recordings (for example, using pre-stim recordings in event-related studies) or simply assume a white-noise identity matrix covariance as [[http://neuroimage.usc.edu/brainstorm/Tutorials/SourceEstimation#Noise_covariance_regularization|described below]]. The source covariance prior is generated from the options discussed in detail below (a schematic sketch of the resulting imaging kernel is shown at the end of this overview).
 * In contrast to the LCMV beamformer, in which the data covariance is estimated directly from the data, for minimum norm the data covariance is determined by the choice of source and noise covariances and the forward model.

'''LCMV beamformer'''
 * Linearly constrained minimum variance (LCMV) beamformers compute an estimate of source activity at each location through spatial filtering. The spatial data are linearly combined with weights (the spatial filter) chosen separately for each location to ensure that the strength of a dipolar source at that location is correctly estimated (assuming a perfect head model).
 * The remaining degrees of freedom in selecting the weights are used to minimize the total output power. This has the effect of suppressing contributions of sources from other locations to the estimated signal at the location of interest.
 * It should be noted, however, that correlation between sources can at times lead to partial or full signal cancellation, and the method can be sensitive to the accuracy of the head model.
 * LCMV beamformers require specification of the data covariance matrix, which is assumed to include contributions from background noise and the brain signals of interest. In practice, the data covariance is estimated directly from the recordings. A linear kernel (matrix) is formed from this data covariance matrix and the forward model. This kernel defines the spatial filters applied at each location. Multiplying by the data produces an output beamformer scanning image. These images can either be used directly, as is common practice with LCMV methods, or the largest peak(s) can be fit with a dipolar model at every time instance. [[http://neuroimage.usc.edu/brainstorm/Tutorials/SourceEstimation#Advanced_options:_LCMV_beamformer|See section below on LCMV Beamformer Modeling]].

'''Dipole modeling [TODO]'''
 * In some sense this is the simplest model: we fit a single current dipole at each point in time to the data. We do this by computing a linear kernel (similar to the min norm and LCMV methods) which, when multiplied by the data, produces a dipole scanning image whose strongest peak represents the most likely location of a dipolar source.
 * As with LCMV, the dipole scanning images can be viewed directly, or the single best dipole fit (location and orientation) computed, as described in ('''LINK ?'''). [[http://neuroimage.usc.edu/brainstorm/Tutorials/SourceEstimation#Advanced_options:_Dipole_modeling|More details]].

__'''Recommended option'''__
 * Still under much debate, even among our Brainstorm team. In cases where sources are expected to be focal (e.g. interictal spikes in epileptic patients, or early components of sensory evoked responses) the single dipole can be precise in terms of localization. For cases where sources are expected to be distributed, the min norm method makes the least restrictive source assumptions. LCMV beamformers fall somewhere between these two cases.
 * One advantage of Brainstorm is that all three approaches can be easily run and compared. If the results are concordant among all three techniques, then our underlying assumptions of source modeling, head modeling, and data statistics are confirmed. If the results are disparate, then a more in-depth study is needed to understand the consequences of our assumptions and therefore which technique may be preferred.

The next several sections discuss in detail the options associated with the "minimum norm imaging" method.
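As a schematic illustration of the minimum-norm computation described above, the imaging kernel can be assembled from the forward model and the noise/source covariance priors. This is a sketch of the classical Tikhonov-regularized formula, not Brainstorm's actual implementation (which adds whitening, depth weighting and SNR-based regularization); all names and values are illustrative.

{{{#!python
import numpy as np

def min_norm_kernel(G, C_noise, R_source, lam=0.1):
    """Schematic Tikhonov-regularized minimum-norm imaging kernel.
    G        : forward model, [Nchannels x Nsources]
    C_noise  : noise covariance, [Nchannels x Nchannels]
    R_source : source covariance prior, [Nsources x Nsources]
    lam      : regularization weight (related to the assumed SNR)
    Returns the kernel K, [Nsources x Nchannels], such that sources = K @ data.
    """
    gram = G @ R_source @ G.T + lam * C_noise       # modeled data covariance
    return R_source @ G.T @ np.linalg.inv(gram)

# Toy example with random matrices (illustration only)
n_channels, n_sources = 50, 200
G = np.random.randn(n_channels, n_sources)
K = min_norm_kernel(G, np.eye(n_channels), np.eye(n_sources))
print(K.shape)   # (200, 50)
}}}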
=== Measure [TODO] ===
<<HTML(<div style="padding: 0px 0px 0px 10px; float: right;">)>> {{attachment:minnorm_options_measure.gif}} <<HTML(</div>)>>

The minimum norm estimate computed by Brainstorm represents a measure of the current found in each point of the source grid (either volume or surface). As discussed [[http://neuroimage.usc.edu/forums/showthread.php?1246-Doubt-about-current-density-units-pA.m-or-pA-m2|on the user forum]], the units are strictly kept in A-m, i.e. we do not normalize by area (yielding A/m, i.e. a surface density) or volume (yielding A/m^2, i.e. a volume density). Nonetheless, it is common to refer to these units as "source density" or "current density" maps when they are displayed directly.

More commonly, however, current density maps are normalized. The value of the estimated current density is normalized at each source location by a function of either the noise or data covariance. Practically, this normalization compensates for the depth-dependent sensitivity and resolution of both EEG and MEG: current density maps tend to preferentially place source activity in superficial regions of the cortex, and resolution drops markedly for sources in deeper sulci. Normalization tends to reduce these effects, as nicely shown by ('''LINK ?'''). We have implemented the two most common normalization methods: dSPM and sLORETA.

 * '''Current density map''': Produces a "depth-weighted" linear L2-minimum norm estimate of the current density, using the method also implemented in Matti Hamalainen's MNE software. For a full description of this method, please refer to the [[http://www.nmr.mgh.harvard.edu/meg/manuals/MNE-manual-2.7.pdf|MNE manual]], section 6, "The current estimates". Units: picoampere-meter (pA-m).
 * '''dSPM''': Implements dynamical Statistical Parametric Mapping (Dale, 2000). The MNE is computed as above. The noise covariance and linear inverse kernel are then used to also compute estimates of the noise variance at each location in the current density map. The MNE current density map is normalized by the square root (standard deviation) of these variance estimates. As a result, dSPM gives a z-score statistical map (a schematic sketch of this normalization is shown after this list). Units: unitless "z".
 * '''sLORETA''': Standardized LOw Resolution brain Electromagnetic TomogrAphy (Pascual-Marqui, 2002). As with dSPM, the MNE current density map is normalized at each point. While dSPM computes the normalization based on the noise covariance, sLORETA replaces the noise covariance with the theoretical data covariance, as assumed in the minimum norm estimation. The theoretical data covariance is the noise covariance plus the theoretical signal covariance. As discussed in (Pascual-Marqui, 2002), this theoretical data covariance simplifies sLORETA to an alternative form that results in a "resolution" kernel (eq. (17) of Pascual-Marqui, 2002). (Note that the theoretical data covariance is not the experimental data covariance estimated directly from the data, as used in beamformers.) Units: unitless.
 * '''__Recommended option__''': Discussed in the section [[http://neuroimage.usc.edu/brainstorm/Tutorials/SourceEstimation#Source_map_normalization|Source map normalization]] below.
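The following NumPy sketch illustrates the dSPM idea described in the list above (illustration only, not Brainstorm's code; it assumes constrained sources, whereas for unconstrained sources the variances of the three components at each location would be combined):

{{{#!python
import numpy as np

def dspm_from_mne(K, C_noise, recordings):
    """Schematic dSPM normalization of a minimum-norm solution (constrained case).
    K          : minimum-norm imaging kernel, [Nsources x Nchannels]
    C_noise    : noise covariance, [Nchannels x Nchannels]
    recordings : sensor data, [Nchannels x Ntime]
    """
    mne_maps = K @ recordings                            # current density maps
    # Noise variance projected at each source location: diag(K * C_noise * K')
    noise_var = np.einsum('ij,jk,ik->i', K, C_noise, K)
    return mne_maps / np.sqrt(noise_var)[:, None]        # z-score-like maps

# Toy example with random matrices (illustration only)
n_channels, n_sources, n_times = 50, 200, 100
K = np.random.randn(n_sources, n_channels)
dspm_maps = dspm_from_mne(K, np.eye(n_channels), np.random.randn(n_channels, n_times))
print(dspm_maps.shape)   # (200, 100)
}}}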
=== Source model: Dipole orientations [TODO] ===
At each point in the source grid, the current dipole may point arbitrarily in three directions. In this section of the options, we describe alternatives for constraining the orientation:

<<HTML(<div style="padding: 0px 0px 0px 10px; float: right;">)>> {{attachment:minnorm_options_orient.gif}} <<HTML(</div>)>>

 * '''Constrained: Normal to cortex''': Only for "surface" grids. At each grid point, we model only one dipole, oriented normally to the cortical surface. This is based on the anatomical observation that in the cortex, the pyramidal neurons are mainly organized in macro-columns that are perpendicular to the cortex surface.<<BR>>Size of the inverse operator: [Nvertices x Nchannels].
 * '''Loose''': Only for "surface" grids. As introduced by ('''LINK ?'''), at each point in the surface grid the dipole direction is constrained to be normal to the local cortical surface. Two additional elemental dipoles are also allowed, in the two directions tangential to the cortical surface. In contrast with "unconstrained", these two tangential elemental dipoles are constrained to have an amplitude that is a fraction of the normal dipole, recommended to be between 0.1 and 0.6. Thus the dipole is only "loosely" constrained to be normal to the local cortical surface (see the sketch after this list for one way to encode this prior).<<BR>>Size of the inverse operator: [3*Nvertices x Nchannels].
 * '''Unconstrained''': Either "surface" or "volume" grids. At each grid point, we leave the assumed orientation of the source undefined, such that three "elemental" dipoles are needed to model the source. In Brainstorm, our elemental dipoles are in the x, y, and z ("Cartesian") directions, as compared to other software that may employ polar coordinates. Thus for "N" vertices, we are calculating the estimate for "3*N" elemental dipoles. <<BR>>Size of the inverse operator: [3*Nvertices x Nchannels].
 * '''__Recommended option__''': The constrained options use one dipole per grid point instead of three, therefore the source files are smaller, faster to compute and display, and more intuitive to process because we don't have to think about recombining the three values into one. On the other hand, in cases where its physiological assumptions are not verified, typically when using an MNI template instead of the anatomy of the subject, the normal orientation constraint may fail to represent certain activity patterns. Unconstrained models can help in those cases. See further discussion on constrained vs unconstrained solutions below in the section [[http://neuroimage.usc.edu/brainstorm/Tutorials/SourceEstimation#Why_does_it_look_so_noisy.3F|Why does it look so noisy?]].
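For illustration, here is one common way to encode a loose orientation constraint in the source covariance prior: the tangential components at each vertex are given a reduced prior variance. This is a hedged sketch in the spirit of the MNE-style loose constraint, not Brainstorm's exact implementation, and the component ordering is an assumption.

{{{#!python
import numpy as np

def loose_source_prior(n_vertices, loose=0.2):
    """Schematic diagonal source covariance for a loose-orientation model.
    At each vertex the components are assumed ordered (tangential, tangential, normal).
    The normal component keeps unit prior variance; the two tangential components
    are down-weighted by the loose factor (0 < loose <= 1; loose=1 ~ unconstrained).
    Returns the diagonal of the [3*Nvertices x 3*Nvertices] covariance matrix.
    """
    return np.tile([loose, loose, 1.0], n_vertices)

print(loose_source_prior(2))   # [0.2 0.2 1.  0.2 0.2 1. ]
}}}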
=== Sensors ===
<<HTML(<div style="padding: 0px 0px 0px 10px; float: right;">)>> {{attachment:minnorm_options_sensors.gif}} <<HTML(</div>)>>

We automatically detect and display the sensors found in your head model. In the example above, only one type of sensors is found ("MEG"). You can select one or all of the sensor types found in your model, such as MEG and EEG.

However, cross-modality calculations depend heavily on the accuracy of the covariance matrices you have provided and on the consistency of the head models across sensor types. As of Spring 2018, we have also elected NOT to account for cross-covariances between different sensor types, since regularization and stability of cross-modalities is quite involved. For multiple sensor types, the recommendation is that you try each individually and then combined, to test for discordance.

== Computing sources for an average ==
Using the above selections, we now discuss explicit directions on how to compute and visualize.

 * In Run#01, right-click on the average response for the '''deviant''' stim > '''Compute sources [2018]'''.<<BR>>Select the options: '''Minimum norm''' imaging, '''Current density''' map, '''Constrained''': Normal to cortex. <<BR>><<BR>> {{attachment:minnorm_single.gif||width="529",height="433"}}
 * The other menu "Compute sources" launches the interface that was used previously in Brainstorm. We are going to keep maintaining the two implementations in parallel for a while for compatibility and cross-validation purposes.
 * The result of the computation is displayed as a dependent file of the deviant average because it is related only to this file. In the file comment, "MN" stands for minimum norm and "Constr" stands for "Constrained: normal orientation". <<BR>><<BR>> {{attachment:minnorm_single_tree.gif}}
 * Right-click on the sources for the deviant average > Cortical activations > '''Display on cortex'''.<<BR>><<BR>> {{attachment:minnorm_single_popup.gif||width="380",height="167"}}
 * Double-click on the '''recordings''' for the deviant average to have a time reference. <<BR>>In the Filter tab, add a '''low-pass filter at 40Hz'''.<<BR>><<BR>> {{attachment:display_cortex.gif||width="482",height="163"}}
 * You can edit the display properties in the Surface tab:
  * '''Amplitude''': Only the sources with a value above the given percentage of the colorbar maximum are displayed.
  * '''Min size''': Hide all the small activated regions, i.e. the connected color patches that contain fewer vertices than this "min size" value.
  * '''Transparency''': Change the transparency of the source activity on the cortex surface.
 * The colorbar maximum depends on the way you configured your ''Sources'' colormap. If the option "Maximum: Global" is selected, the maximum should be around 150 pA.m. This value is a rough estimate of the maximum amplitude, and this default value is not always adapted to your figure. To edit the maximum value, use the colormap option "Maximum: Custom".
 * On the screen capture below, the threshold value is set to 20%. It means that only the sources that have a value over 0.20*150 = 30 pA.m are visible. <<BR>>The threshold level is indicated in the colorbar with a horizontal white line.
 * At the first response peak (91ms), the sources with high amplitudes are located around the primary auditory cortex, bilaterally, which is what we are expecting for an auditory stimulation. <<BR>><<BR>> {{attachment:display_sliders.gif||width="509",height="215"}}

== Why does it look so noisy? ==
The source maps look very noisy and '''discontinuous''': they show a lot of disconnected patches. This is due to the '''orientation constraint''' we imposed on the dipole orientations. Each value on the cortex should be interpreted as a vector, oriented perpendicular to the surface. Because of the brain's circumvolutions, neighboring sources can have significantly different orientations, which also causes the forward model response to change quickly with position. As a result, the orientation-constrained minimum norm solution can produce maps that vary rapidly with position on the cortex, resulting in a noisy and disjointed appearance.

It is therefore important '''not''' to always interpret disconnected colored patches as independent sources. You cannot expect high spatial resolution with this technique (~5-10mm at best). Most of the time, a cluster of disconnected source patches in the same neighborhood that show the same evolution in time can be interpreted as "there is some significant activity around here, but with some uncertainty as to its precise location".

To get more continuous maps for visualization or publication purposes, you can either smooth the values explicitly on the surface (process "'''Sources > Spatial smoothing'''") or use '''unconstrained source models'''.

For data exploration, orientation-constrained solutions may be a good enough representation of brain activity, mostly because they are fast and efficient. You can often get a better feeling of the underlying brain activity patterns by making '''short interactive movies''': click on the figure, then hold the left or right arrow keys. Activity patterns will also look sharper when we compute dSPM or sLORETA normalized measures (later in this tutorial). In most of the screen captures in the following sections, the contrast of the figures has been enhanced for illustration purposes. Don't worry if it looks a lot less colorful on your screen. Of course, ultimately statistical analysis of these maps is required to make scientific inferences from your data.
 * Right-click on the sources for the deviant average > Cortical activations > '''Display on MRI (MRI Viewer)'''.
 * This figure shows the sources computed on the cortical surface and re-interpolated in the MRI volume. If you set the amplitude threshold to 0%, you would see the thin layer of cortex in which the dipoles were estimated. <<BR>><<BR>> {{attachment:display_mriviewer.gif||width="330",height="356"}}
 * '''MIP Functional''': Same as for MIP Anatomy, but with the layer of functional values.
 * '''Smooth level''': The source values can be smoothed after being re-interpolated in the volume. Right-click on the figure to define the size of the smoothing kernel (in number of slices).
 * {{attachment:display_smooth.gif||width="363",height="356"}}
 * Right-click on the sources for the deviant average > Cortical activations > '''Display on MRI (3D)'''.
 * This view was also introduced in the tutorials about MRI and surface visualization.<<BR>>Right-click and move your mouse to move the slices (or use the Resect panel of the Surface tab). <<BR>><<BR>> {{attachment:display_mri3d.gif||width="405",height="203"}}

== Sign of constrained maps ==
You should pay attention to the sign of the current amplitudes that are given by the minimum norm method: they can be positive or negative, and they oscillate around zero. Display the sources on the surface, set the amplitude threshold to 0%, then configure the colormap to show relative values (uncheck the "Absolute values" option), and you will see the typical '''stripes of positive and negative values''' around the sulci. Double-click on the colorbar after testing this to reset the colormap.

 . {{attachment:display_negative.gif||width="452",height="173"}}

This pattern is due to the '''orientation constraint''' imposed on the dipoles. On both sides of a sulcus, we have defined dipoles that are very close to each other, but with opposite orientations. If we have a pattern of activity on one side of a sulcus that can be modeled as a current dipole (green arrow), the limited spatial resolution of the minimum norm model will blur this source using the dipoles that are available in the head model (red and blue arrows). Because of the dipoles' orientations, the minimum norm image produces positive values (red arrows) on one side of the sulcus and negative values on the other side (blue arrows).

 . {{attachment:minnorm_sketch.gif||width="467",height="155"}}

When displaying the cortical maps at one time point, we are usually not interested in the sign of the minimum norm values but rather in their amplitude. This is why we always display them by default with the colormap option "'''absolute values'''" selected. However, we cannot simply discard the sign of these values because we need them for other types of analysis, typically time-frequency decompositions and connectivity analysis. For estimating frequency measures on the source maps it is essential that we retain the sign of the time course at each location so that the correct oscillatory frequencies are identified.

== Unconstrained orientations ==
In cases where the orientation constraint imposed on the dipole orientations produces implausible results, it is possible to relax it partially (option "loose constraints") or completely (option "unconstrained"). This produces a vector (3-component) current source at each location, which can complicate interpretation, but avoids some of the noisy and discontinuous features in the current map that are often seen in the constrained maps. Unconstrained solutions are particularly appropriate when using the MNI template instead of the subject's anatomy, or when studying deeper or non-cortical brain regions for which the normal to the cortical surface obtained with FreeSurfer or BrainSuite is unlikely to match any physiological reality.

In terms of data representation, the options "unconstrained" and "loose constraints" are very similar. Instead of using one dipole at each cortical location, a base of three orthogonal dipoles is used. Here we will only illustrate the fully unconstrained case.

 * In Run#01, right-click on the average response for the '''deviant''' stim > '''Compute sources [2018]'''.<<BR>>Select the options: '''Minimum norm''' imaging, '''Current density''' map, '''Unconstrained'''.
 * Double-click on the new source file for the deviant average and open the time series simultaneously. The two brain maps below represent the same file at 91ms, with different colormap options (absolute values on the left, relative values on the right). Explanations below. <<BR>><<BR>> {{attachment:minnorm_unconstr_all.gif||width="652",height="413"}}
 * We have to be careful with the visual comparisons of constrained and unconstrained source maps displayed on the cortex surface, because they are very different types of data. In unconstrained source maps, we have '''three dipoles with orthogonal orientations at each cortex location''', therefore we cannot represent all the information at once. To display them as an activity map, Brainstorm computes the '''norm of the vectorial sum of the three orientations at each vertex'''. <<BR>>S = sqrt(Sx^2^ + Sy^2^ + Sz^2^) (see the sketch after this list) <<BR>><<BR>> {{attachment:minnorm_unconstr_sketch.gif||width="476",height="158"}}
 * This explains why we only observe '''positive values''' (no blue values when the colormap is set to display positive and negative values): the norm displayed at each vertex is always positive. The underlying values along each orientation (x,y,z) can be positive or negative and oscillate around zero in time, but we cannot get access to this information with these static cortical maps.
 * The maps we observe here look a lot '''smoother''' than the constrained sources we computed earlier. This can be explained by the fact that there is no sharp discontinuity in the forward model between two adjacent points of the grid for a vector dipole represented in Cartesian coordinates, while the normal to the surface for two nearby points can be very different, resulting in rapidly changing forward models for the constrained case.
 * '''Delete''' the unconstrained file; we will not explore this option in the introduction tutorials. You can refer to the tutorial [[http://neuroimage.usc.edu/brainstorm/Tutorials/Epilepsy|EEG and epilepsy]] for an example of analysis using unconstrained sources.
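A minimal NumPy sketch of this flattening of unconstrained maps (illustration only; it assumes the 3*Nvertices rows are stored as consecutive x, y, z triplets per vertex, which may differ from Brainstorm's internal ordering):

{{{#!python
import numpy as np

def unconstrained_norm(source_maps, n_vertices):
    """Collapse an unconstrained source map to one positive value per vertex.
    source_maps : [3*Nvertices x Ntime], rows assumed grouped as (x, y, z) per vertex
    Returns     : [Nvertices x Ntime], with S = sqrt(Sx^2 + Sy^2 + Sz^2)
    """
    s = source_maps.reshape(n_vertices, 3, -1)   # [Nvertices x 3 x Ntime]
    return np.sqrt((s ** 2).sum(axis=1))

# Toy example (illustration only): 4 vertices, 10 time samples
maps = np.random.randn(3 * 4, 10)
print(unconstrained_norm(maps, 4).shape)         # (4, 10)
}}}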
 * They depend a lot on the SNR of the signal, which may vary significantly between subjects. Their amplitude is therefore difficult to interpret directly.
 * The values tend to be higher at the surface of the brain (close to the sensors).
Normalizing the current density maps with respect to a reference level (estimated from noise recordings, pre-stimulus baseline or resting state recordings) can help with all these issues at the same time. In the case of dSPM and sLORETA, the normalizations are computed as part of the inverse routine and are based on the noise and data covariances, respectively. While dSPM does produce a Z-score map, we also provide an explicit Z-score normalization that offers the user more flexibility in defining the baseline period over which Brainstorm computes the standard deviation for normalization.

The normalization options do not change the temporal dynamics of your results when considering a single location, but they do alter the relative scaling of each point in the min norm map. If you look at the time series associated with one given source, it will be exactly the same for all normalizations, except for a scaling factor. Only the relative weights change between the sources, and these weights do not change over time.

==== dSPM, sLORETA ====
 * In Run#01, right-click on the average recordings for the '''deviant''' stim > '''Compute sources [2018]'''.<<BR>>Select successively the two normalization options: dSPM, sLORETA ('''constrained''').<<BR>><<BR>> {{attachment:minnorm_normfiles.gif}}
 * Double-click on all of them to compare them (screen capture at '''143ms'''): <<BR>><<BR>> {{attachment:minnorm_normalized.gif||width="628",height="156"}}
 * '''dSPM''': Tends to correct this behavior and may give higher values in deeper areas. The values obtained are unitless and similar to Z-scores, therefore they are easier to interpret. They are by default not scaled with the number of averages. To obtain correctly scaled dSPM values, one has to call the process "Sources > '''Scale averaged dSPM'''", as explained in the advanced section [[https://neuroimage.usc.edu/brainstorm/Tutorials/SourceEstimation#Averaging_normalized_values|Averaging normalized values]].
 * '''sLORETA''': Produces smoother maps, where all the potentially activated area of the brain (given the low spatial resolution of the source localization with MEG/EEG) is shown as connected, regardless of the depth of the sources. The maps are unitless, but unlike dSPM they cannot be interpreted as Z-scores, so they are more difficult to interpret.
 * The '''Z-transformation''' converts the current density values to a score that represents the number of standard deviations with respect to a baseline period. We define a baseline period in our file (in this case, the pre-stimulus baseline) and compute the average and standard deviation for this segment. Then for every time point we subtract the baseline average and divide by the baseline standard deviation: '''Z = (Data - <<HTML(μ)>>) / <<HTML(σ)>>''' (see the sketch after this list).
 * This measure tells how much a value deviates from the baseline average, in number of times the standard deviation. This is done independently for each source location, so the sources with a low variability during baseline will be more salient in the cortical maps post-stimulus.
 * In Process1: Select the constrained current density maps (file ''MN: MEG(Constr)'').
 * Run process "'''Standardize > Baseline normalization'''", '''[-100,-1.7]ms''', '''Z-score transformation'''. <<BR>>Do not select "Use absolute values": we want to keep the sign of the current values.<<BR>><<BR>> {{attachment:zscore_process.gif||width="558",height="530"}}
 * Double-click on the new normalized file to display it on the cortex (file with the "| zscore" tag). <<BR>><<BR>> {{attachment:zscore_cortex.gif||width="657",height="153"}}
 * You can see that the cortical maps obtained in this way are '''very similar''' to the other normalization approaches, especially to the dSPM maps.
 * A value of 3 in this figure means: at this vertex, the value is 3 times higher than the standard deviation from zero during the baseline. If the values during the baseline follow a normal distribution '''N(<<HTML(μ)>>,<<HTML(σ<SUP>2</SUP>)>>)''', then the values we computed follow a N(0,1)='''Z distribution'''. We can get a level of significance from this well-known distribution, for instance a value Z=1.96 corresponds to a p-value of 0.05. These questions will be discussed in more detail in the statistics tutorial.
 * The Z-normalized source maps are '''not impacted by the visualization filters'''. If you open simultaneously the time series and all the files you have now (MN, dSPM, sLORETA, Z-score) and modify the options in the Filter tab, all the figures are updated except for the Z-score one. We can easily filter all the linear models (MN, dSPM, sLORETA), but we would lose the interesting properties of the Z-values if we were filtering them (the values would not follow a Z-distribution anymore).
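A small numerical sketch of this baseline normalization (illustration only, not Brainstorm code; the baseline window matches the one used in this tutorial, all other names and dimensions are arbitrary):

{{{#!python
import numpy as np

def zscore_baseline(source_maps, time, t_start=-0.100, t_stop=-0.0017):
    """Schematic Z-score normalization with respect to a baseline window.
    source_maps : [Nsources x Ntime]
    time        : [Ntime] time vector in seconds
    Each source is normalized by the mean and standard deviation of its own
    baseline segment: Z = (data - mu) / sigma.
    """
    baseline = (time >= t_start) & (time <= t_stop)
    mu = source_maps[:, baseline].mean(axis=1, keepdims=True)
    sigma = source_maps[:, baseline].std(axis=1, keepdims=True)
    return (source_maps - mu) / sigma

# Toy example (illustration only)
time = np.linspace(-0.1, 0.5, 601)
maps = np.random.randn(100, time.size)
print(zscore_baseline(maps, time).shape)   # (100, 601)
}}}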
 * Use non-normalized '''current density maps''' for:
  * Averaging files across MEG runs.
 * Use '''normalized maps''' (dSPM, sLORETA, Z-score) for:
  * Exploring visually the average response (ERP/ERF) at the source level.
  * Normalizing the subject averages before a group analysis.
  * Avoid averaging normalized maps (or computing any additional statistics).
 * Recommended normalization approach:
  * It is difficult to declare that one normalization technique is better than another. They have different advantages and may be used in different cases. Ideally, they should all converge to similar observations and inferences. If you obtain results with one method that you cannot reproduce with the others, you should question your findings.
  * dSPM and sLORETA are linear measures and can be expressed as imaging kernels, therefore they are easier to manipulate in Brainstorm. sLORETA maps can be smoother, but they are difficult to interpret. dSPMs, as z-score maps, are much easier to understand and interpret.
  * Z-normalized current density maps are also easy to interpret. They represent explicitly a "deviation from experimental baseline" as defined by the user. In contrast, dSPM indicates the deviation from the data that was used to define the noise covariance used in computing the min norm map.

== Delete your experiments ==
 * Select all the source files you computed until now and '''delete''' them. <<BR>><<BR>> {{attachment:delete_norm.gif||width="474",height="172"}}

== Computing sources for single trials ==
Because the minimum norm model is linear, we can compute an inverse model independently from the recordings and apply it on the recordings when needed. We will now illustrate how to compute a shared inverse model for all the imported epochs.

 * Right-click on the '''head model''' or the '''folder''' for Run#01 > '''Compute sources [2018]'''.<<BR>>Select: '''Minimum norm''' imaging, '''Current density''' map, '''Constrained''': Normal to cortex.<<BR>><<BR>> {{attachment:minnorm_shared_popup.gif||width="500",height="273"}}
 * Because we did not request to compute an inverse model for a specific block of recordings, it computed a '''shared inverse model'''. If you right-click on this new file, you get a warning message: "Inversion kernel". It does not contain any source map, but only the inverse operator that will allow us to convert the recordings into source maps.<<BR>><<BR>> {{attachment:minnorm_shared_kernel.gif}}
 * The database explorer now shows one '''source link''' to this inverse model for each block of recordings available in this folder, single trials and averages. These links are not real files saved on the hard drive, but you can use them exactly like the previous source files we calculated for the deviant average. If you load a link, Brainstorm loads the corresponding MEG recordings, loads the inverse kernel and multiplies the two on the fly before displaying it. This optimized approach saves a lot of computation time and a lot of space on the hard drive.<<BR>><<BR>> {{attachment:minnorm_links.gif||width="513",height="176"}}
== Averaging in source space ==
==== Computing the average ====
 * First compute the same source model for the second acquisition run.<<BR>>In Run#02, right-click on the '''head model''' or the '''folder''' > '''Compute sources [2018]'''.<<BR>>Select: '''Minimum norm''' imaging, '''Current density''' map, '''Constrained''': Normal to cortex.<<BR>><<BR>> {{attachment:minnorm_run02.gif||width="245",height="242"}}
 * Now that we have the source maps available for all the recordings, we can '''average them in source space''' across runs. This allows us to average MEG recordings that were recorded with different head positions (in this case Run#01 and Run#02 have different channel files, so they could potentially have different head positions, preventing direct averaging at the sensor level).
 * Thanks to the linearity of the minimum norm model: MN(Average(trials)) = Average(MN(trials))<<BR>>The two following approaches are equivalent:
  1. Averaging the sources of all the individual trials across runs,
  1. Averaging the sources for the sensor averages that we already computed for each run.
 * We will use the second option: using the sources for the sensor-level averages. It is a lot faster because it needs to read 4 files (one average file per run and per condition) instead of 456 files (the total number of good trials in the two runs).
 * Drag and drop to the Process1 tab the average recordings for '''Run01''' and '''Run02''', then press the ['''Process sources'''] button on the left to select the source files instead of the MEG recordings.
 * Run process "'''Average > Average files'''": <<BR>>Select "'''By trial group (subject average)'''" to average together files with similar names. <<BR>>Select the "'''Arithmetic average'''" function. <<BR>>Check "'''Weighted average'''" to account for the different numbers of trials in both runs (see the sketch after this list).<<BR>><<BR>> {{attachment:average_process.gif||width="526",height="565"}}
 * The two averages that are produced (one for each condition) are saved in the folder '''Intra-subject'''. This is where all the files computed using information from multiple folders within the same subject are saved. If you prefer to have them somewhere else, you can create new folders and move them there, just like you would do with a regular file explorer. <<BR>><<BR>> {{attachment:average_files.gif||width="268",height="159"}}
 * The file comments say "2 files" because they were computed from two averages each (one from each run), but the number of corresponding trials is correctly updated in the file structure. <<BR>>Right-click on each of them > File > View file contents, and check the '''Leff''' field: <<BR>>78 trials for the deviant condition, 378 trials for the standard condition.
 * Double-click on the source averages to display them (deviant=top, standard=bottom). <<BR>>Open the sensor-level averages as a time reference. <<BR>>Use the predefined view "Left, Right" for looking at the two sides at the same time (shortcut: "7").<<BR>><<BR>> {{attachment:average_display_mn.gif||width="530",height="296"}}
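The "Weighted average" option can be illustrated with a small NumPy sketch (illustration only, not Brainstorm code; the trial counts used here are arbitrary). Weighting each run-level average by its number of trials gives the same result as averaging all the individual trials together, thanks to the linearity of the minimum norm model:

{{{#!python
import numpy as np

def weighted_average(averages, n_trials):
    """Schematic weighted average of per-run source averages.
    averages : list of arrays [Nsources x Ntime], one average per run
    n_trials : list of trial counts used to compute each run average
    """
    n = np.asarray(n_trials, dtype=float)
    stacked = np.stack(averages)                              # [Nruns x Nsources x Ntime]
    return (n[:, None, None] * stacked).sum(axis=0) / n.sum()

# Toy example with arbitrary trial counts (illustration only)
run1_avg = np.random.randn(100, 50)
run2_avg = np.random.randn(100, 50)
print(weighted_average([run1_avg, run2_avg], [40, 38]).shape)  # (100, 50)
}}}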
If we want to display Z-score source maps that are smoothed in time, we will have to apply explicitly the filters on the file, with the Process1 tab. ==== Low-pass filter ==== * Clear the Process1 list, then drag and drop the new averages in it. * Run process "'''Pre-process > Band-pass filter'''": [0,40] Hz<<BR>><<BR>> {{attachment:average_filter.gif||width="476",height="338"}} * '''Epochs are too short''': Look at the filter response, the expected transient duration is at least 78ms. The first and last 78ms of the average should be discarded after filtering. However, doing this would get rid of almost all the 100ms baseline, which we need for normalization. As mentioned [[http://neuroimage.usc.edu/brainstorm/Tutorials/Epoching#Epoch_length|here]], we should have been importing longer epochs in order to filter and normalize the averages properly. <<BR>><<BR>> {{attachment:average_filter2.gif||width="378",height="323"}} ==== Z-score normalization ==== * In Process1, select the two filtered averages. * Run process "'''Standardize > Baseline normalization'''", baseline='''[-100,-1.7]ms, ''''''Z-score.'''<<BR>><<BR>> {{attachment:average_zscore.gif||width="539",height="292"}} * Four new files are accessible in the database: two filtered and two filtered+normalized. <<BR>><<BR>> {{attachment:average_zscore_files.gif||width="341",height="205"}} * Double-click on the source averages to display them (deviant=top, standard=bottom).<<BR>><<BR>> {{attachment:average_display_zscore.gif||width="530",height="295"}} * The Z-score source values at 90ms are higher for the standard condition (~25) than for the deviant condition (~15). We observe this because the two conditions have very different signal-to-noise ratios. The standard condition has about 5x more trials, therefore the standard deviation over the baseline is a lot lower, leading to higher Z-score. * '''Delete''' the non-normalized filtered files, we will not use them in the following tutorials. <<BR>><<BR>> {{attachment:average_zscore_files2.gif||width="294",height="147"}} == Note for beginners == Everything below is advanced documentation, you can skip it for now. <<EmbedContent("http://neuroimage.usc.edu/bst/get_prevnext.php?skip=Tutorials/Scouts")>> |
Line 174: | Line 265: |
== Averaging normalized values == Averaging normalized source maps within a single subject requires more attention than averaging current density maps. Since averaging reduces variance, the resulting source maps will have a different statistical distribution than the nominal distribution of the individual maps. For example, averaging z-score normalized maps will result in maps with variance less than 1. The same holds true for dSPM maps. Assuming independent samples, the variance of an average of N maps drops to 1/N of the individual variance. For this reason, it is generally recommended to select the "Weighted average" option in the ‘Average files’ process when averaging trials or source maps (which computes mean(x) = (N1*mean(x1) + N2*mean(x2) + …) / (N1+N2+…)) in order to keep track of the number of samples and the actual variance of averaged statistical maps. '''dSPM''' * An averaged dSPM map has variance equal to 1/N (and thus standard deviation equal to 1/sqrt(N)). Therefore one could multiply the averaged dSPM map by sqrt(N) in order to maintain variance 1 under the null hypothesis. In previous versions of Brainstorm, this was done automatically when visualizing the files, and when averaging source maps with the option "Adjust normalized source maps for SNR increase". To simplify the interface and make the interpretation of maps more intuitive and consistent with other cases (min-norm, z-scored), we have now dropped this option. Thus averaging dSPM maps now results in maps with variance less than 1, consistent with the handling of min-norm, z-scored and sLORETA maps. * Adjusting an averaged dSPM file by this sqrt(N) factor is still possible manually, e.g. in order to visualize cortical maps that can be interpreted as Z values. Select the average dSPM files in Process1 and run the process "Sources > '''Scale averaged dSPM'''". This should be used only for visualization and interpretation; scaled dSPM maps should never be averaged or used for any other statistical analysis. <<BR>><<BR>> {{attachment:dspm_scale.gif||width="547",height="307"}} '''Z-score''' * The same SNR issues arise when averaging Z-scores: the average of the Z-scores is lower than the Z-score of the average. * When computing averages at the subject level: Always '''avoid averaging Z-score maps'''. <<BR>>Average the current density maps, then normalize. '''sLORETA''' * This normalization is not based on the SNR of the signal, but rather on the spatial smoothness of the maps. Managing these maps is similar to min-norm maps: the variance of the individual maps is not explicitly modeled or known analytically. * As in other cases, sLORETA(Average(trials)) = Average(sLORETA(trials)), and this relationship also holds when averaging runs with unequal numbers of trials, provided the option "Weighted average" is used. <<TAG(Advanced)>>
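As a numerical illustration of the two points above (the weighted average across runs and the sqrt(N) adjustment of an averaged dSPM map), here is a small sketch with synthetic per-run maps. This is only an illustration of the formulas, not Brainstorm code, and the variable names and sizes are made up.
{{{
% Sketch (synthetic data): weighted average across runs and sqrt(N) scaling of dSPM.
N1 = 40;  N2 = 38;                          % number of trials in each run (illustrative)
x1 = randn(100, 1);  x2 = randn(100, 1);    % per-run average maps (100 sources)

xAvg = (N1*x1 + N2*x2) / (N1 + N2);         % "Weighted average" across the two runs

% For an unweighted average of N independent dSPM maps (each with variance 1),
% the variance drops to 1/N; multiplying by sqrt(N) restores unit variance:
N = 2;
dspmAvgScaled = sqrt(N) * (x1 + x2) / N;    % interpretable again as Z values
}}}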
|
Line 175: | Line 289: |
* '''Standard:''' (Right-click on the 3D figures > Snapshot > Time contact sheet) . Explain what to do to make nice contact sheets this way * '''Deviant:''' . {{http://neuroimage.usc.edu/brainstorm/Tutorials/Auditory?action=AttachFile&do=get&target=average_source_deviant_left.gif|average_source_deviant_left.gif|height="263",width="486",class="attachment"}} * Contact sheets: in time or in space, for each orientation. You can try all the menus. Example: Right-click on the figure > Snapshot > Volume contact sheet: axial: <<BR>><<BR>> {{attachment:popupSnapshot.gif}} {{attachment:contactAxial.gif||height="288px",width="322px"}} * '''Movies''': Right-click on any figure > Snapshot > '''Movie (time): All figures''' (click to download video) [[http://neuroimage.usc.edu/wikidocs/average_sources.avi|{{http://neuroimage.usc.edu/brainstorm/Tutorials/Auditory?action=AttachFile&do=get&target=average_source_video.gif|http://neuroimage.usc.edu/brainstorm/wikidocs/average_sources.avi|height="258",width="484",class="attachment"}}]]
A good way to represent what is happening in time is to generate contact sheets or videos. Right-click on any figure and go to the menu Snapshot to check out all the possible options. For a nicer result, take some time to adjust the size of the figure, the amplitude threshold and the colormap options (hiding the colorbar can be a good option for contact sheets). A time stamp is added to the captured figure. The size of the text font is fixed, so if you want it to be readable in the contact sheet, you should make your figure very small before starting the capture. The screen captures below were produced with the colormap "hot". * '''Contact sheet:''' Right-click on any figure > Snapshot > '''Time contact sheet: Figure'''<<BR>><<BR>> {{attachment:contact_options.gif||width="128",height="209"}} {{attachment:contact_image.gif||width="405",height="249"}} * '''Movies''': Right-click on any figure > Snapshot > '''Movie (time): All figures'''<<BR>><<BR>> {{attachment:video_options.gif||width="126",height="263"}} [[http://neuroimage.usc.edu/wikidocs/average_sources_unconstr.avi|{{attachment:video_vlc.gif|http://neuroimage.usc.edu/wikidocs/average_sources_unconstr.avi|width="409",height="262"}}]]
Line 184: | Line 298: |
== Advanced options == Let's introduce briefly the other options offered for the source estimation. Right-click again on Right / ERF > Compute sources. Click on "Expert mode", you see more options appearing in the window. If you click on Run, you have access to all the options of the wMNE algorithm. {{attachment:panelExpert.gif}} {{attachment:panelOptions.gif}} === Beamformer options === === Dipole fitting options === |
== Model evaluation == One way to evaluate the accuracy of the source reconstruction is to simulate recordings using the estimated source maps. This is done simply by multiplying the source time series with the forward model:<<BR>>MEG_simulated [Nmeg x Ntime] = Forward_model [Nmeg x Nsources] * MN_sources [Nsources x Ntime]<<BR>>Then you can compare visually the original MEG recordings with the simulated ones. More formally, you can compute an error measure from the residuals (recordings - simulated). To simulate MEG recordings from a minimum norm source model, right-click on the source file, then select the menu "'''Model evaluation > Simulate recordings'''". . {{attachment:model_popup.gif||width="597",height="151"}} Open side-by-side the original and simulated MEG recordings for the same condition: . {{attachment:model_results.gif||width="332",height="218"}} <<TAG(Advanced)>> == Advanced options: Minimum norm == Right-click on the '''deviant average''' for '''Run#01''' > '''Compute sources [2018]'''. <<BR>>Click on the button ['''Show details'''] to bring up all the advanced minimum norm options. <<HTML(<div style="padding: 0px 0px 0px 10px; float: right;">)>> {{attachment:minnorm_details.gif||width="390",height="392"}} <<HTML(</A></div>)>> === Depth weighting === Briefly, the use of various depth weightings was far more debated in the 1990s, before the introduction of MNE normalization via dSPM, sLORETA, and other "z-scoring" methods, which mostly cancel the effects of depth weighting (put another way, after normalization min norm results tend to look quite similar whether depth weighting is used or not). By modifying the source covariance model at each point in the source grid, deeper sources are "boosted" to increase their signal strength relative to the shallower dipoles; otherwise, the resulting MNE current density maps are too dominated by the shallower sources. If using dSPM or sLORETA, you should notice little difference whether depth weighting is used or not. To understand how to set these parameters, please refer to the [[http://www.nmr.mgh.harvard.edu/meg/manuals/MNE-manual-2.7.pdf|MNE manual]] (options --depth, --weightexp and --weightlimit). === Noise covariance regularization [TODO] === MNE and dipole modeling are best done with an accurate model of the noise covariance, which is generally computed from experimental data. As such, these estimates are themselves prone to errors that arise from relatively too few data points, weak sensors, and strange data dependencies that can cause the eigenspectrum of the covariance matrix to be ill-conditioned (i.e. a large eigenvalue spread or matrix condition number). In Brainstorm, we provide several means to "stabilize" or "regularize" the noise covariance matrix, so that source estimation calculations are more robust to small errors. * '''Regularize noise covariance''': The L2 matrix norm is defined as the largest eigenvalue of its eigenspectrum. This option adds to the covariance matrix a diagonal matrix whose entries are a fraction of the matrix norm. The default is 0.1, such that the covariance matrix is stabilized by adding to it an identity matrix that is scaled to 10% of the largest eigenvalue. * '''Median eigenvalue''': The eigenspectrum of MEG data can often span many decades, due to highly colored spatial noise, but this broad spectrum is generally confined to the first several modes only. Thus the L2 norm is many times greater than the majority of the eigenvalues, and it is difficult to prescribe a conventional regularization parameter. 
Instability in the inverse is dominated by defects found in the smallest eigenvalues. This approach stabilizes the eigenspectrum by replicating the median (middle) eigenvalue for the remainder of the small eigenvalues. * '''Diagonal noise covariance''': Deficiencies in the eigenspectrum often arise from numerical inter-dependencies found among the channels, particularly in covariance matrices computed from relatively short sequences of data. One common method of stabilization is to simply take the diagonal of the covariance matrix and zero-out the cross-covariances. Each channel is therefore modeled as independent of the other channels. The eigenspectrum is now simply the (sorted) diagonal values. * '''No covariance regularization''': We simply use the noise covariance matrix as computed or provided by the user. * '''Automatic shrinkage''': Stabilization method of Ledoit and Wolf (2004), still under testing in the Brainstorm environment. Basically tries to estimate a good tradeoff between no regularization and diagonal regularization, using a "shrinkage" factor. See Brainstorm code "bst_inverse_linear_2018.m" for notes and details. * '''Recommended option''': This author (Mosher) votes for the '''median eigenvalue '''as being generally effective. The other options are useful for comparing with other software packages that generally employ similar regularization methods. '''[TODO]''' === Regularization parameter [TODO] === In minimum norm estimates, as mentioned above in the comparisons among methods, the data covariance matrix is essentially synthesized by adding the noise covariance matrix to a modeled signal covariance matrix. The signal covariance matrix is generated by passing the source prior through the forward model. The source prior is in turn prescribed by the source model orientation and the depth weighting. A final regularization parameter, however, determines how much weight the signal model should be given relative to the noise model, i.e. the "signal to noise ratio" (SNR). In Brainstorm, we follow the definition of SNR as first defined in the original MNE software of Hamalainen. The signal covariance matrix is "whitened" by the noise covariance matrix, such that the whitened eigenspectrum has elements in terms of SNR (power). We find the mean of this spectrum, then take the square root to yield the average SNR (amplitude). The default in MNE and in Brainstorm is "3", i.e. the average SNR (power) is 9. * '''Signal-to-noise ratio''': Use SNR of 3 as the classic recommendation, as discussed above. * '''RMS source amplitude''': An alternative definition of SNR, but still under test and may be dropped. '''[TODO]''' |
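To make the noise covariance regularization options above more concrete, here is a toy sketch of the first three strategies applied to a synthetic covariance matrix. The matrices and sizes are made up; see bst_inverse_linear_2018.m for the actual implementation.
{{{
% Toy sketch of three noise covariance regularization strategies (not Brainstorm code).
nChannels = 274;
C = cov(randn(300, nChannels));              % a possibly ill-conditioned noise covariance

% 1) "Regularize noise covariance": add 10% of the largest eigenvalue on the diagonal
Creg = C + 0.1 * max(eig(C)) * eye(nChannels);

% 2) "Median eigenvalue": replace the smallest eigenvalues by the median eigenvalue
[V, D] = eig((C + C')/2);                    % symmetrize for numerical safety
d = diag(D);
d(d < median(d)) = median(d);
Cmed = V * diag(d) * V';

% 3) "Diagonal noise covariance": keep only the channel variances
Cdiag = diag(diag(C));
}}}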
Line 192: | Line 340: |
* '''Full results''': Saves in one big matrix the values of all the sources (15000) for all the time samples (375). The size in memory of such a matrix is about 45Mb for 300ms of recordings. This is still reasonable, so you may use this option in this case. But if you need to process longer recordings, you may have some "Out of memory" errors in Matlab, or fill your hard drive quickly. * '''Kernel only''': Saves only the ''Inversion kernel'', a matrix that describes how to compute the sources when you know the values at the sensor level. So its size is: number of sources (15000) x number of sensors (151). This is possible because these minimum norm methods are linear methods. * To get the sources time series, you just need to multiply this kernel by the MEG recordings. * '''Full results '''=''' Inversion kernel '''*''' Recordings''' * The size of this matrix is about 18Mb. In this case, the difference is not very important because we only process 375 time samples. But this inversion kernel is independent from the recordings length, so you can easily scale its computation to much longer recordings. * '''Default ?''' * Probably "''Kernel only''", as it is faster and produces smaller files. * All the following operations in Brainstorm will be exactly the same either way. Each time you access the sources values, the program has to do the multiplication Kernel * Recordings, but this is done in a totally transparent way. * The only reason that would make you choose the "''Full results''" options would be any interest in having the full matrix in one file, in case you want to process the sources values by yourself (filtering, statistics, display...). === Signal properties === * '''Signal to noise ratio (SNR)''': An estimate of the amplitude SNR of the recordings, as defined in MNE (--snr option in MNE), used to compute the regularization parameter (lambda^2 = 1/SNR^2). The default value is SNR = 3. Automatic selection of the regularization parameter is currently not supported. * '''PCA Whitening''': Parameter introduced by Rey Ramirez. For more information, see the code of bst_wmne function. === Noise covariance matrix === * '''Full noise covariance''': Use the full noise covariance matrix available in Brainstorm database. If the noise covariance file previously computed in is a diagonal matrix (as it is the case in this tutorial), this value is ignored, and the "diagonal noise covariance" option is used instead. * '''Diagonal noise covariance''': Discard the off-diagonal elements of the noise covariance matrix (assuming heteroscedastic uncorrelated noise). Corresponds in MNE to the --diagnoise option. * '''Regularize noise covariance''': Regularize the noise-covariance matrix by the given amount for each type of sensor individually (value is restricted to the range 0...1). For more information, please refer to the MNE manual, section 6.2.4 (options --magreg, --gradreg and --eegreg). === Depth weighting === The minimum-norm estimates have a bias towards superficial currents. This tendency can be alleviated by adjusting these parameters. To understand how to set these parameters, please refer to the MNE manual (options --depth, --weightexp and --weightlimit). |
As mentioned above, these methods create a convenient linear imaging kernel that is "tall" in the number of elemental dipoles (one or three per grid point) and "wide" only in the number of sensors. At subsequent visualization time, we efficiently multiply the kernel with the data matrix to compute the min norm images. For some custom purposes, however, a user may find it convenient to pre-multiply the data matrix and generate the full source estimation matrix. This would only be recommended in small data sets, since the full results can become quite large. * '''Kernel only''': Saves only the linear inverse operator, a model that converts sensor values into source values. The size of this matrix is: number of sources (15000) x number of MEG sensors (274). The multiplication with the recordings is done on the fly by Brainstorm in a transparent way. For long recordings or numerous epochs, this form of compact storage helps save a lot of disk space and computation time, and it significantly speeds up the display. Always select this option when possible. * '''Full results''': Saves in one big matrix the values of all the sources (15,000) for all the time samples (361). The size in memory of such a matrix is about 45Mb for 600ms of recordings. This is still reasonable, so you may use this option in this case. But if you need to process longer recordings, you may face "Out of memory" errors in Matlab, or fill your hard drive quickly. * Full results [15000x361] = Inverse kernel [15000x274] * Recordings [274x361] == Advanced options: LCMV beamformer == As mentioned in the introduction above, two other methods can be selected for source estimation: a beamformer and dipole modeling. In this section, we review the options for the beamformer. On top of the noise covariance matrix, you need to estimate a [[http://neuroimage.usc.edu/brainstorm/Tutorials/NoiseCovariance#Data_covariance|data covariance matrix]] in order to enable the option "LCMV beamformer" in the interface. <<HTML(<div style="padding: 0px 0px 0px 10px; float: right;">)>> {{attachment:lcmv_options.gif||width="390",height="350"}} <<HTML(</A></div>)>> '''Measure''' The only option, "Pseudo Neural Activity Index" (PNAI), is named after the definition of the Neural Activity Index (NAI). We have modified Van Veen’s definition to rely strictly on the data covariance, without the need for a separate noise covariance matrix, but the basic premise is the same as in dSPM, sLORETA, and other normalizations. Viewing the resulting "map," in an identical manner to that with MNE, dSPM, and sLORETA described above, reveals possibly multiple sources as peaks in the map. The PNAI scores analogously to z-scoring. '''Dipole orientations''' We recommend you choose "'''unconstrained'''" and let the later [[Tutorials/TutDipScan|Dipole scanning]] process, which finds the best fitting dipole at each time point, optimize the orientation with respect to the data. '''Data covariance regularization''' Same definitions as in MNE, only applied to the data covariance matrix, rather than the noise covariance matrix. Our recommendation is to use '''median eigenvalue'''. == Advanced options: Dipole modeling == Dipole modeling fits a single dipole at each potential source location to produce a dipole scanning map. This map can be viewed as an indication of how well, and where, the dipole fits at each time point. 
However, we recommend using the subsequent best-dipole fitting routine ([[Tutorials/TutDipScan|dipole scanning]]) to determine the final location and orientation of the dipole (one per time point). Please note that '''this function does not fit multiple simultaneous dipoles'''. Although not widely recognized, dipole modeling and beamforming are more alike than they are different – when comparing the inverse operators required to compute the dipole scanning map (dipole modeling) and the beamformer output map (LCMV), we see that they differ only in that the former uses an inverse noise covariance matrix while the latter replaces this with the inverse of the data covariance. <<HTML(<div style="padding: 0px 0px 0px 10px; float: right;">)>> {{attachment:dipoles_options.gif||width="390",height="314"}} <<HTML(</A></div>)>> '''Measure''' This field is now missing, but the resulting imaging kernel file is directly analogous to the PNAI result from LCMV beamforming. The user can display this scanning measure just as with the LCMV case, where again the normalization and units are a form of z-scoring. '''Dipole orientations''' Use "unconstrained source" modeling and let the process "[[Tutorials/TutDipScan|dipole scanning]]" optimize the orientation of the dipole for every time instance. '''Noise covariance regularization''' Similarly, use "median eigenvalue". <<HTML(<div style="clear: right;"></div>)>> The tutorial "[[http://neuroimage.usc.edu/brainstorm/Tutorials/PhantomElekta#Dipole_source_estimation|MEG current phantom (Elekta)]]" demonstrates dipole modeling of 32 individual dipoles under realistic experimental noise conditions. |
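To illustrate the comparison made above, here is a hedged sketch of a normalized scanning value at one location, computed once with the inverse data covariance (beamformer-like) and once with the inverse noise covariance (dipole-scan-like). It only illustrates the general principle of swapping one covariance matrix for the other; it is not the exact formula used by Brainstorm, and all variables are synthetic.
{{{
% Sketch: scanning value at one location, data vs noise covariance (illustration only).
nChannels = 274;  nTime = 361;
g = randn(nChannels, 1);                  % forward field of a dipole at this location
b = randn(nChannels, nTime);              % recordings
Cdata  = cov(randn(1000, nChannels));     % data covariance (estimated from recordings)
Cnoise = eye(nChannels);                  % noise covariance

% One value per time sample; large when the data projects strongly onto this location.
scanLCMV   = (g' * (Cdata  \ b)).^2 / (g' * (Cdata  \ g));   % beamformer-like
scanDipole = (g' * (Cnoise \ b)).^2 / (g' * (Cnoise \ g));   % dipole-scan-like
}}}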
Line 216: | Line 392: |
== Equations == TODO: John <<TAG(Advanced)>> == Rey on sLORETA == Yes in sLORETA the noise covariance is not used at all for the standardization process. It can be used modeling correlated noise and whitening, but that is optional. I have noticed that a lot of folks are confused about this and I have seen many statements in papers spreading this awful confusion. The sLORETA is standardized by the resolution matrix (diagonal for dipole orientations constraints, or block diagonals for free orientations). That is why sLORETA has zero localization bias for ALL point-spread functions, and why I always prefer sLORETA over dSPM, MNE, or any beamformer. This is all in the math .... but ..... just so that you know Fas Hsuan Lin's paper comparing sLORETA with dSPM, and MNE has a big mistake, the assumed source covariance matrix is not the identity matrix, and that violates the beauty of the math and results in non-zero localization bias. That's why in Brainstorm the prior source covariance matrix used for sLORETA automatically uses no depth bias compensation (identity matrix). sLORETA accomplishes depth bias compensation via the resolution matrix, NOT via the prior source covariance matrix. Trying to use a depth exponent of 0.7 or 0.8 like we do for MNE and dSPM will mess up sLORETA.You will not find this in a paper, but I checked it all out many years ago. This is critical. '''Explain sLORETA units (see email exchanges from Feb 2015) ''' <<TAG(Advanced)>> == Issues with dSPM average == Average(dSPM) is NOT equal to dSPM(Average). {{{ There is no problem for the MNE and sLORETA solutions, because the scaling of the noise covariance matrix doesn't impact the results. wMNE(Data, NoiseCov) = wMNE(Data, NoiseCov / N) So when we average we get: Average(wMNE(Trials, NoiseCov)) = wMNE(Average, NoiseCov) = wMNE(Average, NoiseCov / N) But for dSPM we have: dSPM(Data, NoiseCov) = dSPM(Data, NoiseCov / N) ./ sqrt(N) So when we average we get: Average(dSPM(Trials, NoiseCov)) = dSPM(Average, NoiseCov) = dSPM(Average, NoiseCov / N) ./ sqrt(N) }}} Rey: "Basically, the dSPM value at each location is equal to the wMNE value divided by the projection of the estimated noise covariance matrix onto each source point. After whitening, the operational noise covariance matrix is by definition the identity matrix, and hence the projection of the noise is equal to the L2 norm of the row vector of the wMNE inverse operator (in the case of fixed dipole orientations). So, dSPM is what you get when the rows of the wMNE inverse operator all have unit norm (i.e., they all point in different directions but lie in a unit hyper-sphere)." Rey: "dSPM is really a source mapping of SNR, not of activity. Hence, it's not all so surprising that the single trial SNR maps are smaller... "Rey: "Perhaps, dSPM should be used only for averaged data (i.e., ERF, ERP), at least until it's all figure out. In a way, dSPM is just MNE followed by the noise normalization. Thus, you could do all the single trial processing with the MNE algorithm, and only do the noise normalization when needed (e.g., after averaging or on single trials only if they are not going to be averaged)." |
|
Line 253: | Line 393: |
Document file tags Document file structure Differences for kernel vs. sources Differences for constrained vs. unconstrained in_bst_results to get the full sources or apply any process * You can have a look to the corresponding matrix file (right-click > File > View file contents). You would find all the options of forward and inverse modeling, and only one interesting field : '''ImagingKernel''', which contains the inversion kernel. It is a [nVertices x nChannels] matrix that has to be multiplied with the recordings matrix in order to get the activity for each source at all the time samples. * The minimum norm solution being a linear operation (the time series for each source is a linear combination of all the time series recorded by the sensors), we make this economy of saving only this linear operator instead of the full source matrix (nVertices x nTime) Links: These links are not saved as files but as specific strings in the database: "link|kernel_file|data_file". This means that to represent them, one should load the shared kernel, load the recordings, and multiply them. == References == * Dale AM, Liu AK, Fischl BR, Buckner RL, Belliveau JW, Lewine JD, Halgren E<<BR>>[[http://www.ncbi.nlm.nih.gov/pubmed/10798392|Dynamic statistical parametric mapping: combining fMRI and MEG for high-resolution imaging of cortical activity]]. Neuron 2000 Apr, 26(1):55-67 * Pascual-Marqui RD, [[http://www.ncbi.nlm.nih.gov/pubmed/12575463|Standardized low-resolution brain electromagnetic tomography (sLORETA): technical details]], Methods Find Exp Clin Pharmacol 2002, 24 Suppl D:5-12 == Additional discussions on the forum == |
==== Constrained shared kernel ==== Right-click on a shared inverse file in the database explorer > File > '''View file contents'''. . {{attachment:kernel_contents.gif||width="669",height="342"}} ==== Structure of the source files: results_*.mat ==== Mandatory fields: * '''ImagingKernel''': [Nsources x Nchannels] Linear inverse operator that must be multiplied by the recordings in order to get the full source time series. If defined, ImageGridAmp must be empty. * '''ImageGridAmp''': [Nsources x Ntime] Full source time series, in Ampere.meter. If this field is defined, ImagingKernel must be empty. If you want this field to be set instead of ImagingKernel, make sure you select the advanced option ''Full results'' when estimating the sources. * '''Time''': [1 x Ntime] Time values for each sample recorded in F, in seconds. * '''nComponents''': Number of dipoles per grid point: 1=Constrained, 3=Unconstrained, 0=Mixed. In the case of mixed head models, the atlas GridAtlas documents region by region how the list of grid points matches the list of dipoles. * '''Function''': Type of values currently saved in the file: 'mn', 'mnp', 'dspm', 'dspm2018', 'dspm2018sc', 'sloreta', 'lcmv', 'lcmvp', 'lcmvnai', 'lcmvpow', 'gls', 'glsp', 'glsfit', 'glschi', 'zscore', 'ersd'... * '''HeadModelType''': Type of source space used for this head model ('surface', 'volume', 'mixed'). * '''HeadModelFile''': Relative path to the head model used to compute the sources. * '''SurfaceFile''': Relative path to the cortex surface file related to this head model. * '''Atlas''': Used only by the process "Sources > Downsample to atlas". * '''GridLoc''': [Nvertices x 3], (x,y,z) positions of the grid of source points. In the case of a surface head model, it is empty and you read the positions directly from the surface file. * '''GridOrient''': [Nvertices x 3], direction of the normal to the surface for each vertex point (copy of the 'VertNormals' matrix of the cortex surface). Empty in the case of a volume head model or unconstrained sources. * '''GridAtlas''': Atlas "Source model" used with mixed source models. * '''GoodChannel''': [1 x Nchannels] Array of channel indices used to estimate the sources. * '''DataFile''': Relative path to the recordings file for which the sources were computed. If this field is set, the source file appears as a dependent of the DataFile. * '''Comment''': String displayed in the database explorer to represent this file. * '''History''': Operations performed on the file since it was created (menu "View file history"). Optional fields: * '''Options''': Structure that contains all the options of the inverse calculation. This is saved in the file only for bookkeeping. * '''Whitener''': Noise covariance whitener computed in bst_inverse_linear_2018.m * '''DataWhitener''': Data covariance whitener computed in bst_inverse_linear_2018.m * '''SourceDecompVa''': [3 x Nsources] Concatenated right singular vectors from the SVD decomposition of the whitened leadfield for each source (only for unconstrained sources). * '''SourceDecompSa''': [3 x Nvertices] Vector diagonal of the singular values from the SVD decomposition of the whitened leadfield for each source (only for unconstrained sources). * '''Std''': For averaged files, standard deviation across the trials or files that were averaged. * '''DisplayUnits''': String, force the display of this file using a specific type of units. * '''ChannelFlag''': [Nchannels x 1] Copy of the ChannelFlag field from the original data file. * '''Leff''': Effective number of averages. 
For averaged files, number of trials that were used to compute this file. For source files that are attached to a data file, we use the Leff field from the data file. ==== Full source maps ==== In ''Intra-subject'', right-click on one of the normalized averages > File > '''View file contents'''. . {{attachment:zscore_contents.gif||width="426",height="377"}} This file has the same structure as a shared inverse kernel, with the following differences: * It contains the full time series (ImageGridAmp) instead of an inverse operator (ImagingKernel). * The Z-score process updated the field Function ('mn' => 'zscore') ==== Source links ==== * The links are not real files on the hard drive; if you select the menu "View file contents" for any of them, it would display the structure of the corresponding shared kernel. * They are saved in the database as strings with a specific structure: "'''link|kernel_file|data_file'''". This string associates a shared inverse operator with some recordings. The two files have to be available to load this file. All the functions in Brainstorm are equipped to reconstruct the full source matrix dynamically. ==== Filename tags ==== * '''_KERNEL_''': Indicates that the file contains only an inverse kernel; it needs to be associated with recordings to be opened. ==== Useful functions ==== * '''in_bst_results'''(ResultsFile, LoadFull, FieldsList): Load a source file and optionally reconstruct the full source time series on the fly (ImagingKernel * recordings). * '''in_bst'''(FileName, TimeBounds, LoadFull): Load any Brainstorm data file with the possibility to load only a specific part of the file. * '''bst_process'''('LoadInputFile', FileName, Target, TimeWindow, OPTIONS): The highest-level function for reading data files; it can compute scout values on the fly. == Additional documentation == ==== Articles ==== * '''Minimum norm''': Baillet S, Mosher JC, Leahy RM<<BR>>[[http://neuroimage.usc.edu/paperspdf/BailletMosherLeahy_IEEESPMAG_Nov2001.pdf|Electromagnetic brain mapping]], IEEE SP MAG 2001. * '''dSPM''': Dale AM, Liu AK, Fischl BR, Buckner RL, Belliveau JW, Lewine JD, Halgren E<<BR>>[[http://www.ncbi.nlm.nih.gov/pubmed/10798392|Dynamic statistical parametric mapping: combining fMRI and MEG for high-resolution imaging of cortical activity]]. Neuron 2000 Apr, 26(1):55-67 * '''sLORETA''': Pascual-Marqui RD<<BR>>[[http://www.ncbi.nlm.nih.gov/pubmed/12575463|Standardized low-resolution brain electromagnetic tomography (sLORETA): technical details]], Methods Find Exp Clin Pharmacol 2002, 24 Suppl D:5-12 ==== Tutorials ==== * Tutorial: [[Tutorials/TutVolSource|Volume source estimation]] * Tutorial: [[Tutorials/DeepAtlas|Deep cerebral structures]] * Tutorial: [[Tutorials/TutDipScan|Computing and displaying dipoles]] * Tutorial: [[Tutorials/DipoleFitting|Dipole fitting with FieldTrip]] * Tutorial: [[Tutorials/TutBEst|Maximum Entropy on the Mean (MEM)]] ==== Forum discussions ====
Line 275: | Line 469: |
* Forum: Spatial smoothing of sources: http://neuroimage.usc.edu/forums/showthread.php?1409 * Forum: Units for dSPM and sLORETA: [[http://neuroimage.usc.edu/forums/showthread.php?1535-Dipole-strength-units-for-dSPM-and-sLORETA|http://neuroimage.usc.edu/forums/showthread.php?1535]] |
* Forum: Spatial smoothing: http://neuroimage.usc.edu/forums/showthread.php?1409 * Forum: Units for dSPM/sLORETA: [[http://neuroimage.usc.edu/forums/showthread.php?1535-Dipole-strength-units-for-dSPM-and-sLORETA|http://neuroimage.usc.edu/forums/showthread.php?1535]] |
Line 278: | Line 472: |
* Forum: Sign of the MNE values: http://neuroimage.usc.edu/forums/showthread.php?1649#post7014 * Forum: Combining magneto+gradiometers: http://neuroimage.usc.edu/forums/showthread.php?1900 |
* Forum: Sign of the MNE values: [[http://neuroimage.usc.edu/forums/showthread.php?1649#post7014|http://neuroimage.usc.edu/forums/showthread.php?1649]] * Forum: Combine mag+gradiometers: http://neuroimage.usc.edu/forums/showthread.php?1900 * Forum: Combine EEG+fMRI: http://neuroimage.usc.edu/forums/showthread.php?2679 |
Line 281: | Line 476: |
* Forum: Dipole fitting: http://neuroimage.usc.edu/forums/showthread.php?2400 * Forum: Simulate recordings from sources: http://neuroimage.usc.edu/forums/showthread.php?2563 * Forum: Simulate recordings from simulated signals: https://neuroimage.usc.edu/forums/t/simulate-scalp-recording/2421/3 * Forum: Pre-whitening: https://neuroimage.usc.edu/forums/t/10459 * Forum: Debugging weird sLORETA results: https://neuroimage.usc.edu/forums/t/dont-trust-the-source-power-spectrum-results/21265 |
Tutorial 22: Source estimation
Authors: Francois Tadel, Elizabeth Bock, Rey R Ramirez, John C Mosher, Richard M Leahy, Sylvain Baillet
You have in your database a forward model that explains how the cortical sources determine the values on the sensors. This is useful for simulations, but what we need next is to solve the inverse problem: how to estimate the sources when we have the recordings. This tutorial introduces the tools available in Brainstorm for solving this inverse problem. (For backward compatibility, see the old tutorials)
Contents
- Ill-posed problem
- Source estimation options
- Computing sources for an average
- Display: Cortex surface
- Why does it look so noisy?
- Display: MRI Viewer
- Display: MRI 3D
- Sign of constrained maps
- Unconstrained orientations
- Source map normalization
- Delete your experiments
- Computing sources for single trials
- Averaging in source space
- Note for beginners
- Averaging normalized values
- Display: Contact sheets and movies
- Model evaluation
- Advanced options: Minimum norm
- Advanced options: LCMV beamformer
- Advanced options: Dipole modeling
- On the hard drive
- Additional documentation
Ill-posed problem
Our goal is to estimate the activity of the thousands of dipoles described by our forward model. However we only have a few hundred spatial measurements as input (the number of sensors). This inverse problem is ill-posed, meaning there are an infinite number of source activity patterns that could generate exactly the same sensor topography. Inverting the forward model directly is impossible, unless we add some strong priors to our model.
Wikipedia says: "Inverse problems are some of the most important and well-studied mathematical problems in science and mathematics because they tell us about parameters that we cannot directly observe. They have wide application in optics, radar, acoustics, communication theory, signal processing, medical imaging, computer vision, geophysics, oceanography, astronomy, remote sensing, natural language processing, machine learning, nondestructive testing, and many other fields."
Many solutions to the inverse problem have been proposed in the literature, based on different assumptions on the way the brain works and depending on the amount of information we already have on the effects we are studying. Among the many methods available, in Brainstorm, we present three general approaches to the inverse problem that represent the most widely used methods in MEG/EEG source imaging: minimum-norm solutions, beamformers, and dipole modeling.
These approaches have the advantage of being implemented in an efficient linear form: the activity of the sources is a linear recombination of the MEG/EEG recordings, such that it is possible to solve the inverse problem by applying a linear kernel (in the form of a matrix that multiplies the spatial data at each point in time) which is easily stored. Subsequent data manipulation and source visualization is then much simpler, as are comparisons among these techniques.
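To make this concrete, here is a minimal sketch (with made-up dimensions and random numbers, not Brainstorm code) of what applying such a linear kernel looks like in Matlab:
{{{
% Minimal sketch: applying a linear imaging kernel to sensor recordings.
% Dimensions are illustrative: 15000 sources, 274 MEG sensors, 361 time samples.
nSources  = 15000;
nChannels = 274;
nTime     = 361;

ImagingKernel = randn(nSources, nChannels);   % inverse operator [Nsources x Nchannels]
Recordings    = randn(nChannels, nTime);      % sensor data      [Nchannels x Ntime]

% The full source time series are obtained with a simple matrix product:
SourceMaps = ImagingKernel * Recordings;      % [Nsources x Ntime]
}}}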
Below we first describe the minimum norm imaging approach and its options, followed by the beamformer and dipole modeling, both of which are actually quite similar and only use a subset of the options available in the minimum norm approach.
Source estimation options
Before we start estimating the sources for the recordings available in our database, let's start with an overview of the options available. This section focuses on the options for the minimum norm estimates. The other methods are described in advanced sections at the end of this page.
Method
Minimum norm imaging
Estimates the sources as the solution to a linear imaging problem that can be interpreted in various ways (Tikhonov regularization, MAP estimation). The method finds a cortical current source density image that approximately fits the data when mapped through the forward model. The "ill-posedness" is dealt with by introducing a regularizer or prior in the form of a source covariance that favors solutions that are of minimum energy (or L2 norm).
Min norm requires specification of a noise and a source covariance matrix. Users can estimate a noise covariance matrix directly from recordings (for example, using pre-stim recordings in event-related studies) or simply assume a white-noise identity matrix covariance as described below. The source covariance prior is generated from the options discussed in detail below.
- In contrast to the LCMV beamformer, in which the data covariance is estimated directly from the data, for minimum norm the data covariance is determined by the choice of source and noise covariances and the forward model.
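For reference, the textbook form of a Tikhonov-regularized minimum norm kernel can be sketched as follows. This is a toy example with arbitrary matrices, not the actual Brainstorm implementation (bst_inverse_linear_2018.m), which adds whitening, depth weighting and the other options described further down this page.
{{{
% Toy sketch of a Tikhonov-regularized minimum norm imaging kernel (not Brainstorm code).
% G: forward model, R: source covariance prior, C: noise covariance.
nChannels = 274;  nSources = 500;             % illustrative sizes
G = randn(nChannels, nSources);               % forward (gain) matrix
R = eye(nSources);                            % source covariance prior
C = eye(nChannels);                           % noise covariance
lambda = 1/3;                                 % regularization, related to the assumed SNR

K = R * G' / (G * R * G' + lambda^2 * C);     % imaging kernel [Nsources x Nchannels]
% Source estimates are then K * Recordings.
}}}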
LCMV beamformer
- Linearly constrained minimum variance (LCMV) beamformers compute an estimate of source activity at each location through spatial filtering. The spatial data are linearly combined with weights (the spatial filter) chosen separately for each location to ensure that the strength of a dipolar source at that location is correctly estimated (assuming a perfect head model).
- The remaining degrees of freedom in selecting the weights are used to minimize the total output power. This has the effect of suppressing contributions of sources from other locations to the estimated signal at the location of interest.
- It should be noted, however, that correlation between sources can at times lead to partial or full signal cancellation, and the method can be sensitive to the accuracy of the head model.
LCMV beamformers require specification of the data covariance matrix, which is assumed to include contributions from background noise and the brain signals of interest. In practice, the data covariance is estimated directly from the recordings. A linear kernel (matrix) is formed from this data covariance matrix and the forward model. This kernel defines the spatial filters applied at each location. Multiplying by the data produces an output beamformer scanning image. These images can either be used directly, as is common practice with LCMV methods, or the largest peak(s) can be fit with a dipolar model at every time instance. See section below on LCMV Beamformer Modeling.
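As an illustration of the principle, here is a toy sketch of the classic unit-gain LCMV weights for a single location. The variables are synthetic and this is not the exact Brainstorm code.
{{{
% Toy sketch of an LCMV spatial filter for one source location (not Brainstorm code).
% Cdata: data covariance estimated from the recordings, g: forward field of one dipole.
nChannels = 274;
Cdata = cov(randn(1000, nChannels));                              % data covariance
Cdata = Cdata + 0.05 * trace(Cdata)/nChannels * eye(nChannels);   % light regularization
g = randn(nChannels, 1);                                          % forward model of the dipole

w = (Cdata \ g) / (g' * (Cdata \ g));   % spatial filter with unit gain at this location
% w' * Recordings gives the estimated source time series at this location.
}}}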
Dipole modeling [TODO]
- In some sense this is the simplest model: we fit a single current dipole at each point in time to the data. We do this by computing a linear kernel (similar to the min norm and LCMV methods) which when multiplied by the data produces a dipole scanning image whose strongest peak represents the most likely location of a dipolar source.
As with LCMV, the dipole scanning images can be viewed directly, or the single best dipole fit (location and orientation) computed, as described in (LINK ?). More details.
Recommended option
- Still under much debate, even among our Brainstorm team. In cases where sources are expected to be focal (e.g. interictal spikes in epileptic patients, or early components of sensory evoked responses), the single dipole can be precise in terms of localization. For cases where sources are expected to be distributed, the min norm method makes the least restrictive source assumptions. LCMV beamformers fall somewhere between these two cases.
- One advantage of Brainstorm is that all three approaches can be easily run and compared. If the results are concordant among all three techniques, then our underlying assumptions of source modeling, head modeling, and data statistics are confirmed. If the results are disparate, then a more in-depth study is needed to understand the consequences of our assumptions and therefore which technique may be preferred. The next several sections discuss in detail the options associated with the "minimum norm imaging" method.
Measure [TODO]
The minimum norm estimate computed by Brainstorm represents a measure of the current found in each point of the source grid (either volume or surface). As discussed on the user forum, the units are strictly kept in A-m, i.e. we do not normalize by area (yielding A/m, i.e. a surface density) or volume (yielding A/m^2, i.e. a volume density). Nonetheless, it is common to refer to these units as "source density" or "current density" maps when displayed directly.
More commonly, however, current density maps are normalized. The value of the estimated current density is normalized at each source location by a function of either the noise or data covariance. Practically, this normalization has the effect of compensating for the effect of depth dependent sensitivity and resolution of both EEG and MEG. Current density maps tend to preferentially place source activity in superficial regions of cortex, and resolution drops markedly with sources in deeper sulci. Normalization tends to reduce these effects as nicely shown by (LINK ?). We have implemented the two most common normalization methods: dSPM and sLORETA.
Current density map: Produces a "depth-weighted" linear L2-minimum norm estimate current density using the method also implemented in Matti Hamalainen's MNE software. For a full description of this method, please refer to the MNE manual, section 6, "The current estimates". Units: picoampere-meter (pA-m).
dSPM: Implements dynamical Statistical Parametric Mapping (Dale, 2000). The MNE is computed as above. The noise covariance and linear inverse kernel are then used to also compute estimates of noise variance at each location in the current density map. The MNE current density map is normalized by the square root (standard deviation) of these variance estimates. As a result, dSPM gives a z-score statistical map. Units: unitless "z".
sLORETA: Standardized LOw Resolution brain Electromagnetic TomogrAphy (Pascual-Marqui, 2002). As with dSPM, the MNE current density map is normalized at each point. While dSPM computes the normalization based on the noise covariance, sLORETA replaces the noise covariance with the theoretical data covariance, as is assumed in the minimum norm estimation. The theoretical data covariance is the noise covariance plus the theoretical signal covariance. As discussed in (Pascual-Marqui 2002), this theoretical data covariance simplifies sLORETA to an alternative form that results in a "resolution" kernel (eq. (17) of Pascual-Marqui 2002). (We note that the theoretical data covariance is not the experimental data covariance estimated directly from the data, as is used in beamformers). Units: unitless.
Recommended option: Discussed in the section Source map normalization below.
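Schematically, the dSPM normalization described above amounts to dividing the minimum norm map by the noise standard deviation projected through the imaging kernel. The following is a toy sketch of that idea with synthetic matrices; the actual computation is performed inside the inverse routine.
{{{
% Toy sketch of dSPM-style noise normalization of a minimum norm map (not Brainstorm code).
nSources = 500;  nChannels = 274;  nTime = 361;
K = randn(nSources, nChannels);               % minimum norm imaging kernel
C = eye(nChannels);                           % noise covariance
Recordings = randn(nChannels, nTime);

mne = K * Recordings;                         % current density map
noiseVar = sum((K * C) .* K, 2);              % diag(K*C*K'): noise variance per source
dspm = bsxfun(@rdivide, mne, sqrt(noiseVar)); % z-score-like normalized map
}}}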
Source model: Dipole orientations [TODO]
At each point in the source grid, the current dipole may point arbitrarily in three directions. In this section of the options, we describe alternatives for constraining orientation:
Constrained: Normal to cortex: Only for "surface" grids. At each grid point, we model only one dipole, oriented normally to the cortical surface. This is based on the anatomical observation that in the cortex, the pyramidal neurons are mainly organized in macro-columns that are perpendicular to the cortex surface.
Size of the inverse operator: [Nvertices x Nchannels].
Loose: Only for "surface" grids. As introduced by (LINK ?), at each point in the surface grid the dipole direction is constrained to be normal to the local cortical surface. Two additional elemental dipoles are also allowed, in the two directions tangential to the cortical surface. As contrasted with "unconstrained," these two tangential elemental dipoles are constrained to have an amplitude that is a fraction of the normal dipole, recommended to be between 0.1 and 0.6. Thus the dipole is only "loosely" constrained to be normal to the local cortical surface.
Size of the inverse operator: [3*Nvertices x Nchannels].
Unconstrained: Either "surface" or "volume" grids. At each grid point, we leave undefined the assumed orientation of the source, such that three "elemental" dipoles are needed to model the source. In Brainstorm, our elemental dipoles are in the x, y, and z ("Cartesian") directions, as compared to other software that may employ polar coordinates. Thus for "N" vertices, we are calculating the estimate for "3*N" elemental dipoles.
Size of the inverse operator: [3*Nvertices x Nchannels].
Recommended option: The constrained options use one dipole per grid point instead of three, therefore the source files are smaller, faster to compute and display, and more intuitive to process because we don't have to think about recombining the three values into one. On the other hand, in the cases where its physiological assumptions are not verified, typically when using an MNI template instead of the anatomy of the subject, the normal orientation constraint may fail to represent certain activity patterns. Unconstrained models can help in those cases. See further discussion on constrained vs unconstrained solutions below in the section Why does it look so noisy.
Sensors
We automatically detect and display the sensors found in your head model. In the example above, only one type of sensors is found ("MEG"). You can select one or all of the sensors found in your model, such as MEG and EEG.
However, cross-modality calculations are quite dependent on the accuracy by which you have provided adequate covariance calculations and consistency of the head models across sensor types. As of Spring of 2018, we have also elected to NOT account for cross-covariances between different sensor types, since regularization and stability of cross-modalities is quite involved. For multiple sensor types, the recommendation is that you try each individually and then combined, to test for discordance.
Computing sources for an average
Using the above selections, we now give explicit directions on how to compute and visualize the sources.
In Run#01, right-click on the average response for the deviant stim > Compute sources [2018].
Select the options: Minimum norm imaging, Current density map, Constrained: Normal to cortex.
- The other menu "Compute sources" launches the interface that was used previously in Brainstorm. We are going to keep maintaining the two implementations in parallel for a while for compatibility and cross-validation purposes.
The result of the computation is displayed as a dependent file of the deviant average because it is related only to this file. In the file comment, "MN" stands for minimum norm and "Constr" stands for "Constrained: normal orientation".
Display: Cortex surface
Right-click on the sources for the deviant average > Cortical activations > Display on cortex.
Double-click on the recordings for the deviant average to have a time reference.
In the filter tab, add a low-pass filter at 40Hz.
- Change the current time (click on the time series figure or use the keyboard arrows) and note it updates the source maps in the 3D figure. You can also use all the menus and shortcuts introduced in the anatomy tutorial (like setting the view with the keys from 0 to 6).
- You can edit the display properties in the Surface tab:
Amplitude: Only the sources that have a value superior to a given percentage of the colorbar maximum are displayed.
Min size: Hide all the small activated regions, ie. the connected color patches that contain a number of vertices smaller than this "min size" value.
Transparency: Change the transparency of the source activity on the cortex surface.
Take a few minutes to understand what the amplitude threshold represents.
The colorbar maximum depends on the way you configured your Sources colormap. If the option "Maximum: Global" is selected, the maximum should be around 150 pA.m. This value is a rough estimate of the maximum amplitude, and this default value is not always adapted to your figure. To edit the maximum value, use the colormap option "Maximum: Custom".
On the screen capture below, the threshold value is set to 20%. It means that only the sources that have a value over 0.20*150 = 30 pA.m are visible.
The threshold level is indicated in the colorbar with a horizontal white line.
At the first response peak (91ms), the sources with high amplitudes are located around the primary auditory cortex, bilaterally, which is what we are expecting for an auditory stimulation.
Why does it look so noisy?
The source maps look very noisy and discontinuous, they show a lot of disconnected patches. This is due to the orientation constraint we imposed on the dipoles orientations. Each value on the cortex should be interpreted as a vector, oriented perpendicular to the surface. Because of the brain’s circumvolutions, neighboring sources can have significantly different orientations, which also causes the forward model response to change quickly with position. As a result, the orientation-constrained minimum norm solution can produce solutions that vary rapidly with position on the cortex resulting in the noisy and disjointed appearance.
It is therefore important not to always interpret disconnected colored patches as independent sources. You cannot expect high spatial resolution with this technique (~5-10mm at best). Most of the time, a cluster of disconnected source patches in the same neighborhood that show the same evolution in time can be interpreted as "there is some significant activity around here, but with some uncertainty as to its precise location".
To get more continuous maps for visualization or publication purposes, you can either smooth the values explicitly on the surface (process "Sources > Spatial smoothing") or use unconstrained source models.
For data exploration, orientation-constrained solutions may be a good enough representation of brain activity, mostly because it is fast and efficient. You can often get a better feeling of the underlying brain activity patterns by making short interactive movies: click on the figure, then hold the left or right arrows of your keyboard.
Activity patterns will also look sharper when we compute dSPM or sLORETA normalized measures (later in this tutorial). In most of the screen captures in the following sections, the contrast of the figures has been enhanced for illustration purposes. Don't worry if it looks a lot less colorful on your screen. Of course, ultimately statistical analysis of these maps is required to make scientific inferences from your data.
Display: MRI Viewer
Right-click on the sources for the deviant average > Cortical activations > Display on MRI (MRI Viewer).
The MRI viewer was introduced in tutorials #2 and #3.
Additionally you can change the current time and amplitude threshold from the Brainstorm window. This figure shows the sources computed on the cortical surface and re-interpolated in the MRI volume. If you set the amplitude threshold to 0%, you would see the thin layer of cortex in which the dipoles were estimated.
- You can configure this figure with the following options:
MIP Anatomy: Checkbox in the MRI Viewer figure. For each slice, display the maximum value over all the slices instead of the original value in the structural MRI ("glass brain" view).
MIP Functional: Same as for MIP Anatomy, but with the layer of functional values.
Smooth level: The sources values can be smoothed after being re-interpolated in the volume. Right-click on the figure to define the size of the smoothing kernel (in number of slices).
Amplitude threshold: In the Surface tab of the Brainstorm window.
Current time: At the top-right of the Brainstorm window (or use the time series figure).
Display: MRI 3D
Right-click on the sources for the deviant average > Cortical activations > Display on MRI (3D).
This view was also introduced in the tutorials about MRI and surface visualization.
Right-click and move your mouse to move the slices (or use the Resect panel of the Surface tab).
Sign of constrained maps
You should pay attention to the sign of the current amplitudes that are given by the minimum norm method: they can be positive or negative and they oscillate around zero. Display the sources on the surface, set the amplitude threshold to 0%, then configure the colormap to show relative values (uncheck the "Absolute values" option), you will see those typical stripes of positive and negative values around the sulci. Double-click on the colorbar after testing this to reset the colormap.
This pattern is due to the orientation constraint imposed on the dipoles. On both sides of a sulcus, we have defined dipoles that are very close to each other, but with opposite orientations. If we have a pattern of activity on one side of a sulcus that can be modeled as a current dipole (green arrow), the limited spatial resolution of the minimum norm model will blur this source using the dipoles that are available in the head model (red and blue arrows). Because of the dipoles’ orientations, the minimum norm image produces positive values (red arrows) on one side of the sulcus and negative values on the other side (blue arrows).
When displaying the cortical maps at one time point, we are usually not interested in the sign of the minimum norm values but rather by their amplitude. This is why we always display them by default with the colormap option "absolute values" selected.
However, we cannot simply discard the sign of these values because we need these for other types of analysis, typically time-frequency decompositions and connectivity analysis. For estimating frequency measures on the source maps it is essential that we retain the sign of the time course at each location so that the correct oscillatory frequencies are identified.
Unconstrained orientations
In cases where the orientation constraint imposed on the dipole orientations produces implausible results, it is possible to relax it partially (option "loose constraints") or completely (option "unconstrained"). This produces a vector (3 component) current source at each location which can complicate interpretation, but avoids some of the noisy and discontinuous features in the current map that are often seen in the constrained maps. Unconstrained solutions are particularly appropriate when using the MNI template instead of the subject's anatomy, or when studying deeper or non-cortical brain regions for which the normal to the cortical surface obtained with FreeSurfer or BrainSuite is unlikely to match any physiological reality.
In terms of data representation, the options "unconstrained" and "loose constraints" are very similar. Instead of using one dipole at each cortical location, a basis of three orthogonal dipoles is used. Here we will only illustrate the fully unconstrained case.
In Run#01, right-click on the average response for the deviant stim > Compute sources [2018].
Select the options: Minimum norm imaging, Current density map, Unconstrained.
Double-click on the new source file for the deviant average, open the time series simultaneously. The two brain maps below represent the same file at 91ms, with different colormap options (absolute values on the left, relative values on the right). Explanations below.
We have to be careful with the visual comparisons of constrained and unconstrained source maps displayed on the cortex surface, because they are very different types of data. In unconstrained source maps, we have three dipoles with orthogonal orientations at each cortex location, therefore we cannot represent all the information at once. To display them as an activity map, Brainstorm computes the norm of the vectorial sum of the three orientations at each vertex.
S = sqrt(Sx^2 + Sy^2 + Sz^2)
This explains why we only observe positive values (no blue values when the colormap is set to display positive and negative values): the norm displayed at each vertex is always positive. The underlying values along each orientation (x,y,z) can be positive or negative and oscillate around zero in time, but we cannot access this information with these static cortical maps.
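As a rough illustration of this norm (this is not the Brainstorm code; all variable names and sizes are hypothetical, and we assume the three orientations of each vertex are stored as consecutive rows of the unconstrained source matrix):
{{{
% Stand-in unconstrained source matrix: (3 x Nvertices) rows, Ntime columns,
% with the x, y, z components of each vertex stored as consecutive rows (assumption).
S3 = randn(3*15000, 361);
Sx = S3(1:3:end, :);                      % x component of every vertex
Sy = S3(2:3:end, :);                      % y component
Sz = S3(3:3:end, :);                      % z component
Snorm = sqrt(Sx.^2 + Sy.^2 + Sz.^2);      % norm displayed on the cortex: always positive
}}}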
The maps we observe here look a lot smoother than the constrained sources we computed earlier. This is because there is no sharp discontinuity in the forward model between two adjacent grid points for a vector dipole represented in Cartesian coordinates, while the normals to the surface at two nearby points can be very different, resulting in rapidly changing forward models in the constrained case.
Delete the unconstrained file: we will not explore this option in the introduction tutorials. You can refer to the tutorial EEG and epilepsy for an example of analysis using unconstrained sources.
== Source map normalization ==
The current density values returned by the minimum norm method have a few problems:
- They depend a lot on the SNR of the signal, which may vary significantly between subjects. Their amplitude is therefore difficult to interpret directly.
- The values tend to be higher at the surface of the brain (close to the sensors).
- The maps are sometimes patchy and difficult to read.
Normalizing the current density maps with respect to a reference level (estimated from noise recordings, pre-stimulus baseline or resting state recordings) can help with all these issues at the same time. In the case of dSPM and sLORETA, the normalizations are computed as part of the inverse routine and based on noise and data covariances, respectively. While dSPM does produce a Z-score map, we also provide an explicit Z-score normalization that offers the user more flexibility in defining a baseline period over which Brainstorm computes the standard deviation for normalization.
The normalization options do not change the temporal dynamics of your results when considering a single location but they do alter the relative scaling of each point in the min norm map. If you look at the time series associated with one given source, it will be exactly the same for all normalizations, except for a scaling factor. Only the relative weights change between the sources, and these weights do not change over time.
=== dSPM, sLORETA ===
In Run#01, right-click on the average recordings for the deviant stim > Compute sources [2018].
Select successively the two normalization options: dSPM, then sLORETA (constrained).
Double-click on all of them to compare them (screen capture at 143ms):
Current density maps: Tend to highlight the top of the gyri and the superficial sources.
dSPM: Tends to correct this behavior and may give higher values in deeper areas. The values obtained are unitless and similar to Z-scores, therefore they are easier to interpret. They are by default not scaled with the number of averages. To obtain correctly scaled dSPM values, one has to call the process "Sources > Scale averaged dSPM", as explained in the advanced section Averaging normalized values.
sLORETA: Produces smoother maps where all the potentially activated areas of the brain (given the low spatial resolution of MEG/EEG source localization) are shown as connected, regardless of the depth of the sources. The maps are unitless but, unlike dSPM, cannot be interpreted as Z-scores, which makes them more difficult to interpret.
=== Z-score ===
The Z-transformation converts the current density values to a score that represents the number of standard deviations with respect to a baseline period. We define a baseline period in our file (in this case, the pre-stimulus baseline) and compute the average and standard deviation for this segment. Then for every time point we subtract the baseline average and divide by the baseline standard deviation:
Z = (Data - μ) / σ
- This measure tells how much a value deviates from the baseline average, expressed as a multiple of the baseline standard deviation. This is done independently for each source location, so sources with low variability during the baseline will be more salient in the post-stimulus cortical maps.
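For reference, here is a minimal Matlab sketch of this computation (the process "Standardize > Baseline normalization" does all of this for you; S and Time are hypothetical stand-ins for the source time series and the time vector in seconds):
{{{
% S: source time series [Nsources x Ntime], Time: [1 x Ntime] in seconds (hypothetical names)
S = randn(15000, 361);  Time = linspace(-0.1, 0.5, 361);
iBaseline = (Time >= -0.100) & (Time <= -0.0017);   % baseline window [-100, -1.7] ms
mu    = mean(S(:, iBaseline), 2);                   % baseline average, per source
sigma = std(S(:, iBaseline), 0, 2);                 % baseline standard deviation, per source
Z = (S - mu) ./ sigma;                              % Z-score, computed independently for each source
}}}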
In Process1: Select the constrained current density maps (file MN: MEG(Constr)).
Run process "Standardize > Baseline normalization", [-100,-1.7]ms, Z-score transformation
Do not select "Use absolute values": We want the sign of the current values.
Double-click on the new normalized file to display it on the cortex (file with the "| zscore" tag).
You can see that the cortical maps obtained in this way are very similar to those obtained with the other normalization approaches, especially the dSPM maps.
A value of 3 in this figure means: at this vertex, the value is 3 times higher than the standard deviation from zero during the baseline. If the values during the baseline follow a normal distribution N(μ,σ^2), then the values we computed follow a N(0,1)=Z distribution. We can get a level of significance from this well-known distribution: for instance, a value Z=1.96 corresponds to a p-value of 0.05. These questions will be discussed in more detail in the statistics tutorial.
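As a quick check of this statement, the two-sided p-value associated with a Z value under N(0,1) can be computed in Matlab with the complementary error function (no toolbox needed):
{{{
z = 1.96;
p = erfc(abs(z) / sqrt(2));    % two-sided p-value: approximately 0.05 for z = 1.96
}}}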
The Z-normalized source maps are not impacted by the visualization filters. If you open simultaneously the time series and all the files you have now (MN, dSPM, sLORETA, Z-score) and modify the options in the Filter tab, all the figures are updated except for the Z-score one. We can easily filter all the linear models (MN, dSPM, sLORETA), but we would lose the interesting properties of the Z-values if we filtered them (the values would no longer follow a Z-distribution).
If the baseline and the active state are not in the same file, you can use the Process2 tab: place the baseline in the left list (Files A) and the file to normalize in the right list (Files B).
=== Typical recommendations ===
Use non-normalized current density maps for:
- Computing shared kernels applied to single trials.
- Averaging files across MEG runs.
- Computing time-frequency decompositions or connectivity measures on the single trials.
Use normalized maps (dSPM, sLORETA, Z-score) for:
- Estimating the sources for an average response.
- Exploring visually the average response (ERP/ERF) at the source level.
- Normalizing the subject averages before a group analysis.
Avoid averaging normalized maps (or computing any additional statistics on them).
Recommended normalization approach:
- It is difficult to declare that one normalization technique is better than another. They have different advantages and may be used in different cases. Ideally, they should all converge to similar observations and inferences. If you obtain results with one method that you cannot reproduce with the others, you should question your findings.
- dSPM and sLORETA are linear measures and can be expressed as imaging kernels, therefore they are easier to manipulate in Brainstorm. sLORETA maps can be smoother but they are difficult to interpret. dSPM maps, like Z-score maps, are much easier to understand and interpret.
- Z-normalized current density maps are also easy to interpret. They represent explicitly a "deviation from the experimental baseline" as defined by the user. In contrast, dSPM indicates the deviation from the data that was used to define the noise covariance used in computing the minimum norm maps.
== Delete your experiments ==
Select all the source files you computed until now and delete them.
== Computing sources for single trials ==
Because the minimum norm model is linear, we can compute the inverse model independently from the recordings and apply it to the recordings when needed. We will now illustrate how to compute a shared inverse model for all the imported epochs.
Right-click on the head model or the folder for Run#01 > Compute sources [2018].
Select: Minimum norm imaging, Current density map, Constrained: Normal to cortex
Because we did not request the computation for a specific block of recordings, a shared inverse model was created. If you right-click on this new file, you get a warning message: "Inversion kernel". It does not contain any source map, but only the inverse operator that will allow us to convert the recordings into source maps.
The database explorer now shows one source link to this inverse model for each block of recordings available in this folder, single trials and averages. These links are not real files saved on the hard drive, but you can use them exactly like the previous source files we calculated for the deviant average. If you load a link, Brainstorm loads the corresponding MEG recordings, loads the inverse kernel, and multiplies the two on the fly before displaying the result. This optimized approach saves a lot of computation time and a lot of space on the hard drive.
== Averaging in source space ==
=== Computing the average ===
First compute the same source model for the second acquisition run.
In Run#02, right-click on the head model or the folder > Compute sources [2018].
Select: Minimum norm imaging, Current density map, Constrained: Normal to cortex
Now that we have the source maps available for all the recordings, we can average them in source space across runs. This allows us to average MEG recordings that were acquired with different head positions (in this case Run#01 and Run#02 have different channel files, so they could potentially have different head positions, preventing direct averaging at the sensor level).
Thanks to the linearity of the minimum norm model: MN(Average(trials)) = Average(MN(trials))
The two following approaches are equivalent:
- Averaging the sources of all the individual trials across runs,
- Averaging the sources for the sensor averages that we already computed for each run.
We will use the second option: using the sources for the sensor-level averages. It is a lot faster because it needs to read 4 files (one average file per run and per condition) instead of 456 files (total number of good trials in the two runs).
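This equivalence can be checked numerically with a small sketch using random stand-ins for the imaging kernel and the single trials (all names and sizes below are hypothetical, not taken from the database):
{{{
K = randn(100, 32);                    % stand-in imaging kernel: sources x sensors
trials = randn(32, 200, 50);           % stand-in recordings: sensors x time x trials
mn_of_avg = K * mean(trials, 3);                               % MN(Average(trials))
avg_of_mn = mean(reshape(K * trials(:, :), 100, 200, 50), 3);  % Average(MN(trials))
max(abs(mn_of_avg(:) - avg_of_mn(:)))  % zero up to floating-point rounding
}}}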
Drag and drop to the Process1 tab the average recordings for Run01 and Run02, then press the [Process sources] button on the left to select the source files instead of the MEG recordings.
Run process "Average > Average files":
Select "By trial group (subject average)" to average together files with similar names.
Select "Arithmetic average" function.
Check "Weighted average" to account for the different numbers of trials in both runs.
The two averages that are produced (one for each condition) are saved in the folder Intra-subject. This is where all the files computed using information from multiple folders within the same subject are saved. If you prefer to have them somewhere else, you can create new folders and move them there, just like you would do with a regular file explorer.
The file comments say "2 files" because they were computed from two averages each (one from each run), but the number of corresponding trials is correctly updated in the file structure.
Right-click on each of them > File > View file contents, and check the Leff field:
78 trials for the deviant condition, 378 trials for the standard condition.
Double-click on the source averages to display them (deviant=top, standard=bottom).
Open the sensor-level averages as a time reference.
Use the predefined view "Left, Right" for looking at the two sides at the same time (shortcut: "7").
=== Visualization filters ===
Note that opening the source maps can take a long time because of the visualization filters. Check the Filter tab: you may have a filter applied with the option "Filter all results" selected. In the case of averaged source maps, the 15,000 source signals are filtered on the fly when you load a source file. This filtering of the full source files can take a significant amount of time; consider unchecking this option if the display is too slow on your computer.
- It was not a problem until now because the source files were saved in the compact form (Kernel*recordings) and the visualization filters were applied on the recordings, then projected to the source space. This fast option is not available anymore with these averages across runs.
- The visualization filters will not be available anymore after we apply a Z-score normalization. If we want to display Z-score source maps that are smoothed in time, we will have to apply explicitly the filters on the file, with the Process1 tab.
=== Low-pass filter ===
- Clear the Process1 list, then drag and drop the new averages in it.
Run process "Pre-process > Band-pass filter": [0,40] Hz
Epochs are too short: look at the filter response; the expected transient duration is at least 78ms. The first and last 78ms of the average should be discarded after filtering. However, doing this would remove almost all of the 100ms baseline, which we need for the normalization. As mentioned here, we should have imported longer epochs in order to filter and normalize the averages properly.
=== Z-score normalization ===
- In Process1, select the two filtered averages.
Run process "Standardize > Baseline normalization", baseline=[-100,-1.7]ms, Z-score.
Four new files are accessible in the database: two filtered and two filtered+normalized.
Double-click on the source averages to display them (deviant=top, standard=bottom).
- The Z-score source values at 90ms are higher for the standard condition (~25) than for the deviant condition (~15). We observe this because the two conditions have very different signal-to-noise ratios. The standard condition has about 5x more trials, therefore the standard deviation of its average over the baseline is a lot lower, leading to higher Z-score values.
Delete the non-normalized filtered files: we will not use them in the following tutorials.
== Note for beginners ==
Everything below is advanced documentation; you can skip it for now.
== Averaging normalized values ==
Averaging normalized source maps within a single subject requires more attention than averaging current density maps. Since averaging reduces variance, the resulting source maps will have a different statistical distribution than the nominal distribution of the individual maps.
For example, averaging z-score normalized maps will result in maps with variance less than 1. The same holds true for dSPM maps. Assuming independent samples, the variance of an average of N maps drops to 1/N. For this reason, it is generally recommended to select the "Weighted average" option in the "Average files" process when averaging trials or source maps (which computes mean(x) = (N1*mean(x1) + N2*mean(x2) + ...) / (N1+N2+...)), in order to keep track of the number of samples and the actual variance of averaged statistical maps.
=== dSPM ===
- An averaged dSPM map has variance equal to 1/N (and thus standard deviation equal to 1/sqrt(N)). Therefore one could multiply the averaged dSPM map by sqrt(N) in order to maintain variance 1 under the null hypothesis. In previous versions of Brainstorm, this was done automatically when visualizing the files, and when averaging source maps with the option "Adjust normalized source maps for SNR increase". To simplify the interface and make the interpretation of maps more intuitive and consistent with other cases (min-norm, z-scored), we have now dropped this option. Thus averaging dSPM maps now results in maps with variance less than 1, which is consistent with the handling of min-norm, z-scored and sLORETA maps.
Adjusting an averaged dSPM file by this sqrt(N) factor is still possible manually, e.g. in order to visualize cortical maps that can be interpreted as Z values. Select the average dSPM files in Process1 and run the process "Sources > Scale averaged dSPM". This should be used only for visualization and interpretation; scaled dSPM maps should never be averaged or used for any other statistical analysis.
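To make the variance argument concrete, here is a minimal sketch of the sqrt(N) rescaling, using a random stand-in for an averaged dSPM map (the process "Sources > Scale averaged dSPM" is the supported way to do this in Brainstorm):
{{{
N = 78;                                    % effective number of trials behind the average (Leff)
dspm_avg = randn(15000, 361) / sqrt(N);    % stand-in: average of N unit-variance dSPM maps => variance 1/N
dspm_scaled = sqrt(N) * dspm_avg;          % rescaled map: variance 1 under the null hypothesis
var(dspm_scaled(:))                        % close to 1, so values can again be read as Z scores
}}}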
=== Z-score ===
- The same SNR issues arise while averaging Z-scores: the average of the Z-scores is lower than the Z-score of the average.
When computing averages at the subject level: Always avoid averaging Z-score maps.
Average the current density maps, then normalize.
=== sLORETA ===
- This normalization is not based on the SNR of the signal, but rather on the spatial smoothness of the maps. Managing these maps is similar to min-norm maps: the variance of the individual maps is not explicitly modeled or known analytically.
- As in other cases, sLORETA(Average(trials)) = Average(sLORETA(trials)), and this relationship is guaranteed to hold when averaging an uneven number of samples, provided the option "Weighted average" is used.
== Display: Contact sheets and movies ==
A good way to represent what is happening in time is to generate contact sheets or videos. Right-click on any figure and go to the menu Snapshot to check out all the possible options. For a nicer result, take some time to adjust the size of the figure, the amplitude threshold and the colormap options (hiding the colorbar can be a good option for contact sheets).
A time stamp is added to the captured figure. The size of the text font is fixed, so if you want it to be readable in the contact sheet, you should make your figure very small before starting the capture. The screen captures below were produced with the colormap "hot".
Contact sheet: Right-click on any figure > Snapshot > Time contact sheet: Figure
Movies: Right-click on any figure > Snapshot > Movie (time): All figures
== Model evaluation ==
One way to evaluate the accuracy of the source reconstruction is to simulate recordings using the estimated source maps. This is done simply by multiplying the source time series by the forward model:
MEG_simulated [Nmeg x Ntime] = Forward_model [Nmeg x Nsources] * MN_sources [Nsources x Ntime]
Then you can compare visually the original MEG recordings with the simulated ones. More formally, you can compute an error measure from the residuals (recordings - simulated).
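Conceptually this is a single matrix product. The sketch below uses random stand-ins (hypothetical names and values, not the actual Brainstorm code) just to show the simulation and a simple relative error measure:
{{{
G = randn(274, 15000);                                 % stand-in forward model: sensors x sources
S = randn(15000, 361);                                 % stand-in estimated source time series
F = G * S + 0.1 * randn(274, 361);                     % stand-in "measured" recordings with additive noise
F_sim = G * S;                                         % simulated recordings from the source model
rel_error = norm(F - F_sim, 'fro') / norm(F, 'fro');   % relative residual error
}}}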
To simulate MEG recordings from a minimum norm source model, right-click on the source file, then select the menu "Model evaluation > Simulate recordings".
Open side-by-side the original and simulated MEG recordings for the same condition:
== Advanced options: Minimum norm ==
Right-click on the deviant average for Run#01 > Compute sources [2018].
Click on the button [Show details] to bring up all the advanced minimum norm options.
=== Depth weighting ===
Briefly, the use of various depth weightings was far more debated in the 1990s, before the introduction of MNE normalization via dSPM, sLORETA, and other "z-scoring" methods, which mostly cancel the effects of depth weighting (put another way, after normalization min norm results tend to look quite similar whether depth weighting is used or not).
By modifying the source covariance model at each point in the source grid, deeper sources are "boosted" to increase their signal strength relative to the shallower dipoles; otherwise, the resulting MNE current density maps are too dominated by the shallower sources. If using dSPM or sLORETA, little difference should be noticed whether depth weighting is used or not. To understand how to set these parameters, please refer to the MNE manual (options --depth, --weightexp and --weightlimit).
=== Noise covariance regularization [TODO] ===
MNE and dipole modeling are best done with an accurate model of the noise covariance, which is generally computed from experimental data. As such, these estimates are themselves prone to errors that arise from relatively too few data points, weak sensors, and strange data dependencies that can cause the eigenspectrum of the covariance matrix to be ill-conditioned (i.e. a large eigenvalue spread or matrix condition number). In Brainstorm, we provide several means to "stabilize" or "regularize" the noise covariance matrix, so that source estimation calculations are more robust to small errors.
Regularize noise covariance: The L2 matrix norm is defined as the largest eigenvalue of its eigenspectrum. This option adds to the covariance matrix a diagonal matrix whose entries are a fraction of the matrix norm. The default is 0.1, such that the covariance matrix is stabilized by adding to it an identity matrix scaled to 10% of the largest eigenvalue (see the sketch after this list).
Median eigenvalue: The eigenspectrum of MEG data can often span many decades, due to highly colored spatial noise, but this broad spectrum is generally confined to the first several modes only. Thus the L2 norm is many times greater than the majority of the eigenvalues, and it is difficult to prescribe a conventional regularization parameter. Instability in the inverse is dominated by defects found in the smallest eigenvalues. This approach stabilizes the eigenspectrum by replicating the median (middle) eigenvalue for the remainder of the small eigenvalues.
Diagonal noise covariance: Deficiencies in the eigenspectrum often arise from numerical inter-dependencies found among the channels, particularly in covariance matrices computed from relatively short sequences of data. One common method of stabilization is to simply take the diagonal of the covariance matrix and zero-out the cross-covariances. Each channel is therefore modeled as independent of the other channels. The eigenspectrum is now simply the (sorted) diagonal values.
No covariance regularization: We simply use the noise covariance matrix as computed or provided by the user.
Automatic shrinkage: Stabilization method of Ledoit and Wolf (2004), still under testing in the Brainstorm environment. Basically tries to estimate a good tradeoff between no regularization and diagonal regularization, using a "shrinkage" factor. See Brainstorm code "bst_inverse_linear_2018.m" for notes and details.
Recommended option: This author (Mosher) votes for the median eigenvalue as being generally effective. The other options are useful for comparing with other software packages that generally employ similar regularization methods. [TODO]
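To illustrate the first two options only (a simplified sketch, not the actual implementation found in bst_inverse_linear_2018.m), the diagonal loading and the median-eigenvalue replacement could be written as:
{{{
C = cov(randn(1000, 274));                   % stand-in noise covariance matrix [Nchannels x Nchannels]
[V, D] = eig((C + C') / 2);                  % eigendecomposition of the symmetrized matrix
d = diag(D);                                 % eigenspectrum

% "Regularize noise covariance": add 10% of the largest eigenvalue to the diagonal
C_reg = C + 0.1 * max(d) * eye(size(C, 1));

% "Median eigenvalue": replicate the median eigenvalue for all smaller eigenvalues
C_med = V * diag(max(d, median(d))) * V';
}}}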
=== Regularization parameter [TODO] ===
In minimum norm estimates, as mentioned above in the comparisons among methods, the data covariance matrix is essentially synthesized by adding the noise covariance matrix to a modeled signal covariance matrix. The signal covariance matrix is generated by passing the source prior through the forward model. The source prior is in turn prescribed by the source model orientation and the depth weighting.
A final regularization parameter, however, determines how much weight the signal model should be given relative to the noise model, i.e. the "signal to noise ratio" (SNR). In Brainstorm, we follow the definition of SNR as first defined in the original MNE software of Hamalainen. The signal covariance matrix is "whitened" by the noise covariance matrix, such that the whitened eigenspectrum has elements in terms of SNR (power). We find the mean of this spectrum, then take the square root to yield the average SNR (amplitude). The default in MNE and in Brainstorm is "3", i.e. the average SNR (power) is 9.
Signal-to-noise ratio: Use SNR of 3 as the classic recommendation, as discussed above.
RMS source amplitude: An alternative definition of SNR, but still under test and may be dropped. [TODO]
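The paragraph above can be summarized in a short sketch, which reflects a simplified reading of this SNR definition rather than the actual Brainstorm implementation (stand-in leadfield G and noise covariance C, identity source prior):
{{{
G = randn(274, 15000);                             % stand-in leadfield: sensors x sources
C = cov(randn(1000, 274));                         % stand-in noise covariance
SNR = 3;                                           % default "signal-to-noise ratio" option
iW = inv(sqrtm((C + C') / 2));                     % noise whitener
Gw = iW * G;                                       % whitened leadfield
scale = SNR^2 / mean(eig(Gw * Gw'));               % scale the source prior so the mean whitened SNR (power) is SNR^2
K = scale * Gw' / (scale * (Gw * Gw') + eye(274)); % whitened minimum norm kernel
K = K * iW;                                        % kernel to apply directly to the recordings
}}}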
=== Output mode ===
As mentioned above, these methods create a convenient linear imaging kernel that is "tall" in the number of elemental dipoles (one or three per grid point) and "wide" only in the number of sensors. At subsequent visualization time, we efficiently multiply the kernel with the data matrix to compute the min norm images.
For some custom purposes, however, a user may find it convenient to pre-multiply the data matrix and generate the full source estimation matrix. This would only be recommended in small data sets, since the full results can become quite large.
Kernel only: Saves only the linear inverse operator, a model that converts sensor values into source values. The size of this matrix is: number of sources (15000) x number of MEG sensors (274). The multiplication with the recordings is done on the fly by Brainstorm in a transparent way. For long recordings or numerous epochs, this compact form of storage helps save a lot of disk space and computation time, and it significantly speeds up the display. Always select this option when possible (see the sketch below).
Full results: Saves in one big matrix the values of all the sources (15,000) for all the time samples (361). The size in memory of such a matrix is about 45 MB for 600ms of recordings. This is still reasonable, so you may use this option in this case. But if you need to process longer recordings, you may face "Out of memory" errors in Matlab, or fill up your hard drive quickly.
- Full results [15000x361] = Inverse kernel [15000x274] * Recordings [274x361]
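A short sketch of what the "Kernel only" option saves and of the multiplication that Brainstorm performs on the fly (sizes taken from this tutorial, variable names hypothetical):
{{{
K = randn(15000, 274);        % ImagingKernel: sources x sensors (what is saved on disk)
F = randn(274, 361);          % recordings: sensors x time samples
FullResults = K * F;          % full source maps [15000 x 361], reconstructed only when needed
whos FullResults              % roughly the ~45 MB quoted above, in double precision
}}}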
== Advanced options: LCMV beamformer ==
As mentioned in the introduction above, two other methods can be selected for source estimation, a beamformer and dipole modeling. In this section, we review the options for the beamformer. On top of the noise covariance matrix, you need to estimate a data covariance matrix in order to enable the option "LCMV beamformer" in the interface.
=== Measure ===
The only option, "Pseudo Neural Activity Index" (PNAI), is named after the definition of the Neural Activity Index (NAI). We have modified Van Veen's definition to rely strictly on the data covariance, without the need for a separate noise covariance matrix, but the basic premise is the same as in dSPM, sLORETA, and other normalizations. Viewing the resulting map, in an identical manner to the MNE, dSPM, and sLORETA maps described above, reveals possibly multiple sources as peaks in the map. The PNAI scales analogously to z-scoring.
=== Dipole orientations ===
We recommend you choose "unconstrained" and let the later Dipole scanning process, which finds the best fitting dipole at each time point, optimize the orientation with respect to the data.
=== Data covariance regularization ===
Same definitions as in MNE, only applied to the data covariance matrix, rather than the noise covariance matrix. Our recommendation is to use median eigenvalue.
== Advanced options: Dipole modeling ==
Dipole modeling fits a single dipole at each potential source location to produce a dipole scanning map. This map can be viewed as an indication of how well, and where, the dipole fits at each time point. However, we recommend using the subsequent best-dipole fitting routine (dipole scanning) to determine the final location and orientation of the dipole (one per time point). Please note that this function does not fit multiple simultaneous dipoles.
Although not widely recognized, dipole modeling and beamforming are more alike than they are different: when comparing the inverse operators required to compute the dipole scanning map (dipole modeling) and the beamformer output map (LCMV), we see that they differ only in that the former uses an inverse noise covariance matrix while the latter replaces this with the inverse of the data covariance.
=== Measure ===
This field is now missing, but the resulting imaging kernel file is directly analogous to the PNAI result from LCMV beamforming. The user can display this scanning measure just as with the LCMV case, where again the normalization and units are a form of z-scoring.
=== Dipole orientations ===
Use "unconstrained source" modeling and let the process "dipole scanning" optimize the orientation of the dipole for every time instance.
=== Noise covariance regularization ===
Similarly, use "median eigenvalue".
The tutorial "MEG current phantom (Elekta)" demonstrates dipole modeling of 32 individual dipoles under realistic experimental noise conditions.
== On the hard drive ==
=== Constrained shared kernel ===
Right-click on a shared inverse file in the database explorer > File > View file contents.
Structure of the source files: results_*.mat
Mandatory fields:
ImagingKernel: [Nsources x Nchannels] Linear inverse operator that must be multiplied by the recordings in order to get the full source time series. If defined, ImageGridAmp must be empty.
ImageGridAmp: [Nsources x Ntime] Full source time series, in Ampere.meter. If this field is defined, ImagingKernel must be empty. If you want this field to be set instead of ImagingKernel, make sure you select the advanced option Full results when estimating the sources.
Time: [1 x Ntime] Time values for each sample recorded in F, in seconds.
nComponents: Number of dipoles per grid point: 1=Constrained, 3=Unconstrained, 0=Mixed. In the case of mixed head models, the atlas GridAtlas documents region by region how the list of grid points matches the list of dipoles.
Function: Type of values currently saved in the file: 'mn', 'mnp', 'dspm', 'dspm2018', 'dspm2018sc', 'sloreta', 'lcmv', 'lcmvp', 'lcmvnai', 'lcmvpow', 'gls', 'glsp', 'glsfit', 'glschi', 'zscore', 'ersd'...
HeadModelType: Type of source space used for this head model ('surface', 'volume', 'mixed').
HeadModelFile: Relative path to the head model used to compute the sources.
SurfaceFile: Relative path to the cortex surface file related with this head model.
Atlas: Used only by the process "Sources > Downsample to atlas".
GridLoc: [Nvertices x 3], (x,y,z) positions of the grid of source points. In the case of a surface head model, it is empty and you read directly the positions from the surface file.
GridOrient: [Nvertices x 3], direction of the normal to the surface for each vertex point (copy of the 'VertNormals' matrix of the cortex surface). Empty in the case of a volume head model or unconstrained sources.
GridAtlas: Atlas "Source model" used with mixed source models.
GoodChannel: [1 x Nchannels] Array of channel indices used to estimate the sources.
DataFile: Relative path to the recordings file for which the sources were computed. If this field is set, the source file appears as a dependent of the DataFile.
Comment: String displayed in the database explorer to represent this file.
History: Operations performed on the file since it was created (menu "View file history").
Optional fields:
Options: Structure that contains all the options of the inverse calculation. This is saved in the file only for bookkeeping.
Whitener: Noise covariance whitener computed in bst_inverse_linear_2018.m
DataWhitener: Data covariance whitener computed in bst_inverse_linear_2018.m
SourceDecompVa: [3 x Nsources] Concatenated right singular vectors from the SVD decomposition of the whitened leadfield for each source (only for unconstrained sources).
SourceDecompSa: [3 x Nvertices] Vector diagonal of the singular values from the SVD decomposition of the whitened leadfield for each source (only for unconstrained sources).
Std: For averaged files, standard deviation across the trials that were averaged to compute this file.
DisplayUnits: String, force the display of this file using a specific type of units.
ChannelFlag: [Nchannels x 1] Copy of the ChannelFlag field from the original data file.
Leff: Effective number of averages. For averaged files, number of trials that were used to compute this file. For source files that are attached to a data file, we use the Leff field from the data file.
=== Full source maps ===
In Intra-subject, right-click on one of the normalized averages > File > View file contents.
This file has the same structure as a shared inverse kernel, with the following differences:
It contains the full time series (ImageGridAmp) instead of an inverse operator (ImagingKernel).
The Z-score process updated the field Function ('mn' => 'zscore')
=== Source links ===
The links are not real files on the hard drive: if you select the menu "View file contents" for any of them, it displays the structure of the corresponding shared kernel.
They are saved in the database as strings with a specific structure: "link|kernel_file|data_file". This string associates a shared inverse operator with some recordings. The two files have to be available to load this link. All the functions in Brainstorm are equipped to reconstruct the full source matrix dynamically.
=== Filename tags ===
_KERNEL_: Indicates that the file contains only an inverse kernel, it needs to be associated with recordings to be opened.
=== Useful functions ===
in_bst_results(ResultsFile, LoadFull, FieldsList): Load a source file and optionally reconstruct the full source time series on the fly (ImagingKernel * recordings).
in_bst(FileName, TimeBounds, LoadFull): Load any Brainstorm data file with the possibility to load only a specific part of the file.
bst_process('LoadInputFile', FileName, Target, TimeWindow, OPTIONS): The most high-level function for reading data files, can compute scout values on the fly.
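A minimal usage sketch for the first function, to be run from the Brainstorm Matlab environment; the file name below is hypothetical and should be replaced with a results file from your own database, and we assume the optional FieldsList argument can be omitted:
{{{
% Load a source file and reconstruct the full source matrix (ImagingKernel * recordings)
ResultsFile = 'Subject01/S01_AEF_20131218_01_600Hz_notch/results_MN_MEG_KERNEL_XXXXXX.mat';  % hypothetical path
sResults = in_bst_results(ResultsFile, 1);      % LoadFull = 1: reconstruct the full time series
size(sResults.ImageGridAmp)                     % expected: [Nsources x Ntime]
}}}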
== Additional documentation ==
=== Articles ===
Minimum norm: Baillet S, Mosher JC, Leahy RM, Electromagnetic brain mapping, IEEE Signal Processing Magazine 2001.
dSPM: Dale AM, Liu AK, Fischl BR, Buckner RL, Belliveau JW, Lewine JD, Halgren E, Dynamic statistical parametric mapping: combining fMRI and MEG for high-resolution imaging of cortical activity. Neuron 2000 Apr, 26(1):55-67.
sLORETA: Pascual-Marqui RD, Standardized low-resolution brain electromagnetic tomography (sLORETA): technical details, Methods Find Exp Clin Pharmacol 2002, 24 Suppl D:5-12.
=== Tutorials ===
Tutorial: Volume source estimation
Tutorial: Deep cerebral structures
Tutorial: Computing and displaying dipoles
Tutorial: Dipole fitting with FieldTrip
Tutorial: Maximum Entropy on the Mean (MEM)
=== Forum discussions ===
Forum: Minimum norm units (pA.m): http://neuroimage.usc.edu/forums/showthread.php?1246
Forum: Imaging resolution kernels: http://neuroimage.usc.edu/forums/showthread.php?1298
Forum: Spatial smoothing: http://neuroimage.usc.edu/forums/showthread.php?1409
Forum: Units for dSPM/sLORETA: http://neuroimage.usc.edu/forums/showthread.php?1535
Forum: EEG reference: http://neuroimage.usc.edu/forums/showthread.php?1525#post6718
Forum: Sign of the MNE values: http://neuroimage.usc.edu/forums/showthread.php?1649
Forum: Combine mag+gradiometers: http://neuroimage.usc.edu/forums/showthread.php?1900
Forum: Combine EEG+fMRI: http://neuroimage.usc.edu/forums/showthread.php?2679
Forum: Residual ocular artifacts: http://neuroimage.usc.edu/forums/showthread.php?1272
Forum: Dipole fitting: http://neuroimage.usc.edu/forums/showthread.php?2400
Forum: Simulate recordings from sources: http://neuroimage.usc.edu/forums/showthread.php?2563
Forum: Simulate recordings from simulated signals: https://neuroimage.usc.edu/forums/t/simulate-scalp-recording/2421/3
Forum: Pre-whitening: https://neuroimage.usc.edu/forums/t/10459
Forum: Debugging weird sLORETA results: https://neuroimage.usc.edu/forums/t/dont-trust-the-source-power-spectrum-results/21265