> Require an estimate of the noise at the level of the sensors (noise covariance matrix). * '''Dipole modeling''': __?__ * '''LCMV beamformer''': __?__<

>Require both a noise covariance matrix and a data covariance matrix (a representation of the effect we are trying to localize in the brain, i.e. the covariance over the latencies of interest). * __'''Recommended option'''__: Provided that we know at which latencies to look, we can compute an appropriate data covariance matrix and may obtain better spatial accuracy with a beamformer. However, in many cases we don't know exactly what we are looking for, and the risk of misinterpreting badly designed beamforming results is high. Brainstorm tends to favor minimum norm solutions, which have the advantage of needing less manual tuning to produce acceptable results. === Measure === The minimum norm estimates are a measure of the current density flowing at the surface of the cortex. To visualize these results and compare them between subjects, we can normalize the MNE values to get a standardized level of activation with respect to the noise or baseline level (dSPM, sLORETA, MNp). * '''Current density map''': Whitened and depth-weighted linear L2-minimum norm estimates algorithm inspired by Matti Hamalainen's MNE software. For a full description of this method, please refer to the [[http://www.nmr.mgh.harvard.edu/meg/manuals/MNE-manual-2.7.pdf|MNE manual]], section 6, "The current estimates". <

>Units: picoampere-meter (pA.m). * '''dSPM''': Noise-normalized estimate (dynamic Statistical Parametric Mapping [Dale, 2000]). Its computation is based on the MNE solution. <

>Units: Unitless ratio [ '''???''' ] * '''sLORETA''': Noise-normalized estimate using the sLORETA approach (standardized LOw Resolution brain Electromagnetic TomogrAphy [Pascual-Marqui, 2002]). sLORETA solutions have in general a smaller location bias than either the expected current (MNE) or the dSPM. The noise covariance is not used at all for the standardization process; it is purely based on the smoothness of the maps. <

>Units: Square root of units of current (MNE/sqrt(MNE) => (pA.m)^1/2^). * '''MNp''': ? <

>Units: Unitless ratio [ '''???''' ] * __'''Recommended option'''__: Discussed in the section "Source map normalization". === Source orientation === * '''Constrained: Normal to cortex''': Only one dipole at each vertex of the cortex surface, oriented normally to the surface. This is based on the anatomical observation that in the cortex, the neurons are mainly organized in macro-columns that are perpendicular to the cortex surface.<

>Size of the inverse operator: [nVertices x nChannel]. * '''Constrained: Optimal orientation''': Only one dipole at each vertex of the cortex surface, but its orientation is estimated from the data instead of being fixed to the normal to the surface.<

>Size of the inverse operator: [nVertices x nChannel]. * '''Unconstrained''': At each vertex of the cortex surface, we define a basis of three orthogonal dipoles, then we estimate the sources for the three orientations independently. <

>Size of the inverse operator: [3*nVertices x nChannel]. * '''Loose''': A version of the "unconstrained" option with a weak orientation constraint that emphasizes the importance of the sources with orientations close to the normal to the cortex. The value associated with this option sets how loose the orientation constraint is (recommended values in MNE are between 0.1 and 0.6, --loose option). <

>Size of the inverse operator: [3*nVertices x nChannel]. * __'''Recommended option'''__: ?<

>The constrained options use one dipole per vertex instead of three, therefore the source maps are smaller, faster to compute and display, and much more intuitive to process, because we don't have to think about recombining the three values into one. However, the normal orientation constraint is most of the time too strong to be realistic (see the sketch below). <
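To make the difference concrete, here is a minimal Python/numpy sketch of the corresponding inverse operator sizes, together with the usual way a loose value is expressed as a prior on the source variance (variable names and the weighting scheme are illustrative, following the MNE convention rather than Brainstorm's exact implementation):

{{{#!python
import numpy as np

n_vertices, n_channels = 15000, 274   # values used in this tutorial

# Constrained (normal or optimal orientation): one dipole per vertex.
kernel_constrained = np.zeros((n_vertices, n_channels))        # [nVertices x nChannel]

# Unconstrained or loose: three orthogonal dipoles per vertex.
kernel_unconstrained = np.zeros((3 * n_vertices, n_channels))  # [3*nVertices x nChannel]

# Loose constraint: weight 1 for the normal orientation and a smaller
# "loose" value (0.1 to 0.6) for the two tangential orientations.
loose = 0.2
orientation_weights = np.tile([1.0, loose, loose], n_vertices)
}}}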

>Unconstrained sources look smoother and nicer but are not necessarily more accurate. == Computing sources for a single average == * In Run#01, right-click on the average response for the '''deviant''' stim > '''Compute sources [2015]'''.<

>Select the options: '''Minimum norm''' imaging, '''Current density''' map, '''Constrained''': Normal to cortex. <

><

> {{attachment:minnorm_single.gif||height="462",width="492"}} * The other menu "Compute sources" brings up the interface that was used previously in Brainstorm. We are going to keep maintaining the two implementations in parallel for a while, for compatibility and cross-validation purposes. * The result of the computation is displayed as a dependent of the deviant average because it is related only to this file. In the file comment, "MN" stands for minimum norm and "Constr" stands for "Constrained: normal orientation". <

><

> {{attachment:minnorm_single_tree.gif}} == Display: Cortex surface == * Right-click on the sources for the deviant average > Cortical activations > '''Display on cortex'''.<

><

> {{attachment:minnorm_single_popup.gif||height="167",width="380"}} * Double-click on the '''recordings''' for the deviant average to have a time reference. <

>In the filter tab, add a '''low-pass filter at 100Hz'''.<

><

> {{attachment:display_cortex.gif||height="163",width="482"}} * Change the current time (click on the time series figure or use the keyboard arrows) and note that it updates the source maps in the 3D figure. You can also use all the menus and shortcuts introduced in the anatomy tutorial (like setting the view with the keys from 0 to 6). * You can edit many display properties from the Surface tab: * '''Amplitude''': Only the sources that have a value greater than a given percentage of the colorbar maximum are displayed. * '''Min. size''': Hide all the small activated regions, i.e. the connected color patches that contain fewer vertices than this "min size" value. * '''Transparency''': Change the transparency of the sources on the cortex. * Take a few minutes to understand what the '''amplitude threshold''' represents. * The colorbar maximum depends on the way you configured your ''Sources ''colormap. If the option "Maximum: Global" is selected, the maximum should be around 30 pA.m. This value is a rough estimate of the maximum amplitude; sometimes you may have to redefine it manually. * On the screen capture below, the threshold value is set to 90%. It means that only the sources that have a value over 0.90*30 = 27 pA.m are visible (see the code sketch below). <

>The threshold level is indicated in the colorbar with a horizontal white line. * At the first response peak (91ms), the sources with high amplitudes are located around the primary auditory cortex, which is what we expect for an auditory stimulation. <
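The arithmetic of the amplitude threshold translates directly into code. A minimal numpy sketch with a made-up source map (all names are illustrative, this is not Brainstorm's code):

{{{#!python
import numpy as np

colorbar_max = 30e-12   # ~30 pA.m, colormap option "Maximum: Global"
threshold = 0.90        # "Amplitude" slider set to 90%

source_values = 40e-12 * np.random.rand(15000)   # toy map: one value per vertex
visible = np.abs(source_values) >= threshold * colorbar_max
# Only the vertices above 0.90 * 30 = 27 pA.m remain displayed.
}}}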

><

> {{attachment:display_sliders.gif||height="211",width="449"}} == Display: MRI Viewer == * Right-click on the source file > Cortical activations > '''Display on MRI (MRI Viewer)'''. * The MRI viewer was introduced in tutorials [[Tutorials/ImportAnatomy|#2]] and [[Tutorials/ExploreAnatomy|#3]]. <

>Additionally you can change the current time and amplitude threshold from the Brainstorm window. * This figure shows the sources computed on the cortex surface and re-interpolated in the MRI volume. If you set the amplitude threshold to 0%, you will see the thin layer of cortex in which the dipoles were estimated. <

><

> {{attachment:display_mriviewer.gif||height="356",width="330"}} * You can configure this figure with the following options: * '''MIP Anatomy''': Checkbox in the MRI Viewer figure. For each slice, display the maximum value over all the slices instead of the original value in the structural MRI ("glass brain" view). * '''MIP Functional''': Same thing but with the layer of functional values. * '''Smooth level''': The source values can be smoothed after being re-interpolated in the volume. Right-click on the figure to define the size of the smoothing kernel. * '''Amplitude threshold''': In the Surface tab of the Brainstorm window. * '''Current time''': At the top-right of the Brainstorm window (or use the time series figure). * {{attachment:display_smooth.gif||height="356",width="363"}} == Display: MRI 3D == * Right-click on the source file > Cortical activations > '''Display on MRI (3D)'''. * This view was also introduced in the tutorials about MRI and surface visualization.<

>Right-click and move your mouse to move the slices. <

><

> {{attachment:display_mri3d.gif||height="203",width="405"}} == Sign of constrained minimum norm values == You should pay attention to a property of the current amplitudes that are given by the minimum norm method: they can be positive or negative, and they oscillate around zero. If you display the sources on the surface again, then configure the colormap to show relative values (uncheck the "Absolute values" option), you will see the typical '''stripes of positive and negative values''' around the sulci. Double-click on the colorbar after testing this to reset the colormap. . {{attachment:display_negative.gif||height="173",width="452"}} This pattern is due to the '''orientation constraint''' imposed on the dipoles. On both sides of a sulcus, we have defined dipoles that are very close to each other, but with opposite orientations. If we observe a pattern of activity on one side of a sulcus that can be modeled as an electric dipole (green arrow), the minimum norm model will try to explain it with the dipoles that are available in the head model (red and blue arrows). Because of the dipole orientations, this translates into positive values (red arrows) on one side of the sulcus and negative values on the other side (blue arrows). . {{attachment:minnorm_sketch.gif||height="155",width="467"}} When displaying the cortical maps at one time point, we are usually not interested in the sign of the minimum norm values but rather in their amplitude. This is why we display them by default with the colormap option "'''absolute values'''" selected. However, we cannot simply discard the sign of these values, because we need them for other types of analysis, typically time-frequency decompositions and connectivity analysis. For estimating frequency measures on those source maps, we need to keep the oscillations around zero. == Computing sources for multiple trials == Because the minimum norm model is linear, we can compute an inverse model independently from the recordings and apply it to the recordings when needed. We will now illustrate how to compute a shared inverse model for all the imported epochs. For illustration purposes, this time we will use an '''unconstrained''' source model. * Right-click on the '''head model''' or the '''folder''' for Run#01 > '''Compute sources [2015]'''.<

>Select the options: '''Minimum norm''' imaging, '''Current density''' map, '''Unconstrained'''<

><

> {{attachment:minnorm_shared_popup.gif||height="305",width="496"}} * Because we did not request to compute an inverse model for a specific block of recordings, Brainstorm computed a '''shared inverse model'''. If you right-click on this new file, you get a warning message: "Inversion kernel". It does not contain any source map, but only the inverse operator that will allow us to convert the recordings into source maps.<

><

> {{attachment:minnorm_shared_kernel.gif}} * The database explorer now shows one '''source link''' to this inverse model for each block of recordings available in this folder, single trials and averages. These links are not real files saved on the hard drive, but you can use them exactly like the first source file we calculated for the deviant average. If you load a link, Brainstorm loads the corresponding MEG recordings, loads the inverse kernel and multiplies the two on the fly before displaying it. This optimized approach saves a lot of computation time and a lot of space on the hard drive.<
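What happens when you load a link can be sketched in a few lines of numpy (illustrative variable names, random data standing in for the real kernel and recordings):

{{{#!python
import numpy as np

n_sources, n_channels, n_time = 15000, 274, 361

kernel = np.random.randn(n_sources, n_channels)    # shared inversion kernel, saved once
recordings = np.random.randn(n_channels, n_time)   # any epoch or average in the folder

# Resolving a "source link" boils down to a single matrix product:
source_map = kernel @ recordings                   # [nSources x nTime]
}}}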

><

> {{attachment:minnorm_links.gif||height="197",width="534"}} * Double-click on the new link for the deviant average to see what '''unconstrained source maps''' look like. The first obvious observation is that the maps look a lot smoother. <

><

> {{attachment:minnorm_unconstr.gif||height="152",width="459"}} * We have to be careful with visual comparisons of constrained and unconstrained source maps displayed on the cortex surface, because they are very different types of data. In unconstrained source maps, we have '''three dipoles with orthogonal orientations at each cortex location''', therefore we cannot represent all the information at once. To display them as an activity map, Brainstorm computes the '''norm of the vectorial sum of the three orientations at each vertex'''. <
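A minimal sketch of this flattening step, assuming the three orientations of each vertex are stored as consecutive rows (an assumption made here for illustration):

{{{#!python
import numpy as np

n_vertices, n_time = 15000, 361
unconstrained = np.random.randn(3 * n_vertices, n_time)   # 3 orientations per vertex

# Group the three orientations of each vertex and take the vector norm:
xyz = unconstrained.reshape(n_vertices, 3, n_time)
flat_map = np.sqrt((xyz ** 2).sum(axis=1))                # [nVertices x nTime]
}}}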

><

> {{attachment:minnorm_unconstr_sketch.gif||height="158",width="476"}} == Source map normalization [TODO] == The current density values returned by the minimum norm method have a few problems: * They depend a lot on the SNR of the signal, which may vary significantly between subjects. * Their amplitude is therefore difficult to interpret directly. * The values tend to be higher at the surface of the brain (close to the sensors). * The maps are sometimes patchy and difficult to read. Normalizing the current density maps with respect to a baseline (noise recordings or resting state) can help with all these issues at the same time. Some normalizations can be computed independently from the recordings and saved as part of the linear source model (dSPM, sLORETA, MNp). Another way of proceeding is to compute a Z-score baseline correction from the current density maps. None of these normalization options changes your results; they are just different ways of looking at the same minimum norm maps. If you look at the time series associated with one source, it is exactly the same for all the normalizations, up to a scaling factor. What changes is only the relative weights between the sources, and these weights do not change over time. ==== dSPM, sLORETA, MNp ==== * In Run#01, right-click on the average recordings for the '''deviant''' stim > '''Compute sources [2015]'''.<

>Select successively the three normalization options: dSPM, sLORETA, MNp ('''unconstrained''').<

><

> {{attachment:minnorm_normfiles.gif}} * Double-click on all of them to compare them: <

><

> {{attachment:minnorm_normalized.gif||height="264",width="374"}} * '''Current density maps''': Tend to highlight the tops of the gyri and the superficial sources. * '''dSPM''': Tends to correct that behavior and may give higher values in deeper areas. * '''sLORETA''': Produces a very smooth solution where all the potentially activated areas of the brain (given the low spatial resolution of source localization with MEG/EEG) are shown as connected, regardless of the depth of the sources. * '''MNp''': ??? ==== Z-score ==== * The '''Z-transformation''' converts the current density values to a score of deviation from a baseline. For each source separately, we define a baseline and compute the average and standard deviation over this segment. Then for every time point we subtract the baseline average and divide by the baseline standard deviation. It tells how much a value deviates from the average at rest (in multiples of the standard deviation). * Drag and drop the unconstrained current density maps (the only link) to the Process1 list. * Run process "'''Standardize > Z-score normalization'''", baseline = '''[-100,-2]ms.''' <

>Select the option "'''Use the norm of the three orientations'''"; otherwise the process computes the Z-score normalization separately for each direction and then takes the norm of the three Z-scored orientations for display, which doesn't make much sense.<

> The option "dynamic" offers an optimization in the storage of the Z-scored file that can save a lot of disk space in some cases, but it is not compatible with the "norm" option for unconstrained sources.<
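The Z-transformation itself is only a few lines of numpy. A toy sketch (the baseline mask below is illustrative; the real one is derived from the [-100,-2]ms window):

{{{#!python
import numpy as np

flat_map = np.random.randn(15000, 361)   # one signal per vertex, after the "norm" step
is_baseline = np.zeros(361, dtype=bool)
is_baseline[:60] = True                  # toy mask for the samples falling in the baseline

mu = flat_map[:, is_baseline].mean(axis=1, keepdims=True)
sigma = flat_map[:, is_baseline].std(axis=1, keepdims=True)
zscore = (flat_map - mu) / sigma         # deviation from baseline, per source
}}}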

><

> {{attachment:zscore_process.gif||height="328",width="524"}} * The option "absolute values / norm of the three orientations" triggered a call to the intermediate process "'''Sources > Unconstrained to flat map''': norm". This is why the comment of the output file includes the tag "'''norm'''". <

>Note that this process brought down the number of signals in the file from 45,000 (the number of dipoles) to 15,000 (the number of grid points). There is now only one normalized value for each vertex of the cortex surface. {{attachment:zscore_cortex.gif||height="143",width="408"}} * If the baseline and the active state are not in the same file, you can use the '''Process2 tab''': place the baseline in the left list (Files A) and the file to normalize in the right list (Files B). ==== Delete your experiments ==== * Select all the normalized source maps (everything but the link) and '''delete''' them. <

><

> {{attachment:delete_norm.gif||height="134",width="447"}} ==== Typical recommendations ==== * Use non-normalized current density maps for: * Computing shared kernels applied to single trials. * Averaging single trials across MEG runs. * Computing time-frequency decompositions or connectivity measures on the single trials. * Use normalized maps (dSPM, sLORETA, MNp, Z-score) for: * Estimating the sources for an average response. * Exploring visually the ERP/ERF at the source level. * Normalizing the subject averages before a group analysis. == Averaging in source space == * First compute the same source model for the second acquisition run.<

>In Run#02, right-click on the '''head model''' or the '''folder '''> '''Compute sources [2015]'''.<

>Select the options: '''Minimum norm''' imaging, '''Current density''' map, '''Unconstrained'''<

><

> {{attachment:minnorm_run02.gif||height="227",width="240"}} * Now that we have the source maps available for all the trials, we can '''average them in source space''' across runs. This allows us to average MEG recordings that were recorded with different head positions (in this case Run#01 and Run#02 have different channel files, so they could have different head positions, which prevents direct averaging at the sensor level). * Thanks to the linearity of the minimum norm model, the two following approaches are equivalent: * Averaging the sources of all the individual trials across runs, * Averaging the sources for the sensor averages that we already computed for each run. For non-standardized current density maps: MN(Average(trials)) = Average(MN(trials)) (checked numerically in the sketch below) * The second solution is a lot faster because it needs to read 4 files (one file per run and per condition) instead of 480 (the total number of trials in the two runs). * Drag and drop to the Process1 tab the average recordings for '''Run01 '''and '''Run02''', then press the ['''Process sources'''] button on the left. * Run process "'''Average > Average files'''": Select "'''By trial group (subject average)'''". <

>The "trial group" options average together the files that have similar comments.<
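This equivalence is a direct consequence of the linearity of the inverse model and is easy to check numerically with toy data:

{{{#!python
import numpy as np

rng = np.random.default_rng(0)
kernel = rng.standard_normal((100, 30))      # toy inversion kernel [nSources x nChannels]
trials = rng.standard_normal((10, 30, 50))   # 10 toy epochs [nChannels x nTime]

avg_then_mn = kernel @ trials.mean(axis=0)                    # MN(Average(trials))
mn_then_avg = np.mean([kernel @ t for t in trials], axis=0)   # Average(MN(trials))
assert np.allclose(avg_then_mn, mn_then_avg)                  # identical up to rounding
}}}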

><

> {{attachment:average_process.gif||height="446",width="449"}} * The two averages that are produced (one for each condition) are saved in the folder '''Intra-subject'''. This is where all the files that are computed using information from multiple folders within the same subject are saved. If you prefer to have them in different folders, you can always create new folders and move them there. <

><

> {{attachment:average_files.gif}} * Clear the Process1 list, then drag and drop the new averages in it.<

>Run process "'''Standardize > Z-score normalization'''", baseline = '''[-100,-2]ms.''' <

>Select the option "'''Use the norm of the three orientations'''"<

><

> {{attachment:average_zscore.gif||height="301",width="487"}} * Double-click on the source averages to display them (standard=top, deviant=bottom). <

>You can open the average recordings of one run or the other to add a time reference. <

>Use the pre-defined view "Left, Right" for looking at the two sides at once (shortcut: "7").<

><

> {{attachment:average_display_zscore.gif|average_display.gif|height="269",width="547"}} * Note that opening the source maps can take a long time because of the online filters. Check the '''Filter''' '''tab''': you may have a filter applied with the option "'''Filter full source files'''" selected. In the case of averaged source maps, the 15,000 source signals are filtered on the fly when you load a source file. This can take a significant amount of time; consider unchecking this option if the display is too slow on your computer. <

><

> {{attachment:filter_sources.gif||height="191",width="202"}} <

>dSPM(Average(trials)) = '''sqrt(Ntrials)''' * Average(dSPM(trials)) When computing the average of dSPM or other normalized values, we also have to multiply the average by the square root of the number of files averaged together (a code sketch of this scaling follows the screenshots below). To illustrate this, we can compute dSPM values of the averages for each run, and then average the two runs together. * Drag and drop to the Process1 tab the average recordings for '''Run01 '''and '''Run02''', then press the ['''Process recordings'''] button on the left. * Select process "'''Sources > Compute sources [2015]'''"<

>Select the option "'''Kernel only: One per file'''", then click on the ['''Edit'''] button.<

>Select Method='''Minimum norm''', Measure='''dSPM''', Source model='''Unconstrained'''.<

>Do NOT run the process immediately.<

><

>{{attachment:dspm_process.gif}} * Add process "'''Average > Average files'''": Select "'''By trial group (subject average)'''". <

>Select the option "'''Adjust normalized source maps for SNR increase'''".<

><

>{{attachment:dspm_average.gif}} * At the end of the computation, the report viewer shows the warning: "Averaging normalized maps (dspm): scaling the values by 1.414 to match the number of trials averaged (39 => 78)". Nothing went wrong; this is just to make it clear that the final averaged values have been scaled. * Rename the last two files to remember that they correspond to dSPM averages, not current density maps. <

><

>{{attachment:dspm_rename.gif}} * Double-click on the files to display them.<

><

>{{attachment:dspm_display.gif}} <
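The scaling applied by the option "Adjust normalized source maps for SNR increase" can be sketched as follows (toy data; 39 trials per run in this tutorial, hence the 1.414 factor reported above):

{{{#!python
import numpy as np

n_before, n_after = 39, 78                 # trials per file => trials in the grand average
dspm_run01 = np.random.randn(15000, 361)   # toy dSPM maps, one per run
dspm_run02 = np.random.randn(15000, 361)

scale = np.sqrt(n_after / n_before)        # sqrt(2) = 1.414, as reported by Brainstorm
dspm_avg = scale * (dspm_run01 + dspm_run02) / 2
}}}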

><

> * '''Contact sheets''': Right-click on any figure > Snapshot > '''Time contact sheet'''. {{attachment:contact_options.gif||height="209",width="128"}} {{attachment:contact_image.gif||height="250",width="405"}} * '''Movies''': Right-click on any figure > Snapshot > '''Movie (time): All figures''' (click to download the video) {{attachment:video_options.gif||height="263",width="126"}} [[http://neuroimage.usc.edu/wikidocs/average_sources_unconstr.avi|{{attachment:video_vlc.gif|http://neuroimage.usc.edu/wikidocs/average_sources_unconstr.avi|height="262",width="409"}}]] <

>Click on the button ['''Show details'''] to bring up all the advanced minimum norm options. . {{attachment:minnorm_details.gif||height="503",width="504"}} === Depth weighting === The minimum-norm estimates have a bias towards superficial currents. This tendency can be alleviated by adjusting these parameters. To understand how to set these parameters, please refer to the MNE manual (options --depth, --weightexp and --weightlimit). === Noise covariance regularization === Explain the influence of this parameter. * '''Automatic shrinkage''': * '''Regularize noise covariance''': Regularize the noise-covariance matrix by the given amount for each type of sensor individually (the value is restricted to the range 0...1). For more information, please refer to the MNE manual, section 6.2.4 (options --magreg, --gradreg and --eegreg). * '''Diagonal noise covariance''': * '''None''': === Signal-to-noise ratio === Explain the influence of this parameter. * '''RMS source amplitude''': * '''Use fixed SNR''': * '''Estimate SNR from data''': === Output mode === * '''Full results''': Saves in one big matrix the values of all the sources (45,000) for all the time samples (361). The size in memory of such a matrix is about 130 MB for 600 ms of recordings. This is still reasonable, so you may use this option in this case. But if you need to process longer recordings, you may face "Out of memory" errors in Matlab, or fill your hard drive quickly. * '''Kernel only''': Saves only the linear inverse operator, a model that converts sensor values into source values. The size of this matrix is: number of sources (45,000) x number of MEG sensors (274). The multiplication with the recordings is done on the fly by Brainstorm in a transparent way. For long recordings or numerous epochs, this compact form of storage saves a lot of disk space and computation time, and it significantly speeds up the display. Always select this option when possible. * Full results [45000x361] = Inverse kernel [45000x274] * Recordings [274x361] (see the size check after this section) === Dipole modeling === Explain the method briefly. * '''Best dipole fit''': * '''Goodness-of-fit map''': * '''Chi-square error map''': * '''NP performance index''': === LCMV beamformer === Explain the method briefly. * '''Beamformer time-series''': * '''Beamformer power''': * '''Neural activity index''': * '''NP performance index''': <
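A quick back-of-the-envelope check of the sizes quoted in the "Output mode" section above (double precision, 8 bytes per value):

{{{#!python
n_sources, n_channels, n_time, n_epochs = 45000, 274, 361, 480

full_per_file = n_sources * n_time * 8 / 1e6       # ~130 MB for one 600ms file
kernel_shared = n_sources * n_channels * 8 / 1e6   # ~99 MB, reused for every epoch

print(full_per_file, kernel_shared)
print(full_per_file * n_epochs / 1e3, "GB if full results were saved for all epochs")
}}}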

>* Dale AM, [[http://www.ncbi.nlm.nih.gov/pubmed/10798392|Dynamic statistical parametric mapping: combining fMRI and MEG for high-resolution imaging of cortical activity]], Neuron 2000 Apr, 26(1):55-67 * Pascual-Marqui RD, [[http://www.ncbi.nlm.nih.gov/pubmed/12575463|Standardized low-resolution brain electromagnetic tomography (sLORETA): technical details]], Methods Find Exp Clin Pharmacol 2002, 24 Suppl D:5-12 == Additional discussions on the forum == * Forum: Minimum norm units (pA.m): [[http://neuroimage.usc.edu/forums/showthread.php?1246-Doubt-about-current-density-units-pA.m-or-pA-m2|http://neuroimage.usc.edu/forums/showthread.php?1246]] * Forum: Imaging resolution kernels: http://neuroimage.usc.edu/forums/showthread.php?1298 * Forum: Spatial smoothing of sources: http://neuroimage.usc.edu/forums/showthread.php?1409 * Forum: Units for dSPM and sLORETA: [[http://neuroimage.usc.edu/forums/showthread.php?1535-Dipole-strength-units-for-dSPM-and-sLORETA|http://neuroimage.usc.edu/forums/showthread.php?1535]] * Forum: EEG reference: http://neuroimage.usc.edu/forums/showthread.php?1525#post6718 * Forum: Sign of the MNE values: http://neuroimage.usc.edu/forums/showthread.php?1649#post7014 * Forum: Combining magneto+gradiometers: http://neuroimage.usc.edu/forums/showthread.php?1900 * Forum: Residual ocular artifacts: http://neuroimage.usc.edu/forums/showthread.php?1272 <