BEst: Brain Entropy in space and time

Authors: Zerouali Y, Lacourse K, Chowdhury R, Hedrich T, Afnan J, Delaire E, Lina JM, Grova C.

Are you interested in estimating the spatial extent of EEG/MEG sources?

Are you interested in localizing oscillatory patterns?

Are you interested in localizing synchronous cortical sources?

BEst.jpg

This tutorial introduces the BEst toolbox – "Brain Entropy in space and time" – which implements several EEG/MEG source localization techniques within the “Maximum Entropy on the Mean (MEM)” framework. These methods are particularly dedicated to accurately estimating the sources of EEG/MEG generators together with their spatial extent along the cortical surface. Assessing the spatial extent of the sources can be very important in some application contexts, notably when localizing spontaneous epileptic discharges. We also propose two other extensions of the MEM framework in the time-frequency domain, dedicated to localizing oscillatory patterns in specific frequency bands and synchronous sources.

Introduction

The BEst toolbox is the result of collaborative work between two research teams:

ETS.png

LATIS Lab, directed by Dr. Jean-Marc Lina, Professor in the Electrical Engineering Department at Ecole de Technologie Supérieure, Montréal. Main students involved: Younes Zerouali and Karine Lacourse

MFIL.png

Multimodal Functional Imaging Lab, directed by Dr. Christophe Grova, Professor in the Biomedical Engineering Department and the Neurology and Neurosurgery Department, McGill University. Main students involved: Rasheda Chowdhury, Tanguy Hedrich, Jawata Afnan and Edouard Delaire

The package proposes three methods:

cMEM: standard MEM with stable clustering, for source localization in the time domain. The originality of this method is its ability to recover the spatial extent of the underlying sources. To do so, the solution is estimated using a unique, but optimal, parcellation of the cortex as spatial prior. These parcels are estimated assuming brain activity to be spatially stable. (Chowdhury et al. Plos One 2013)

wMEM: wavelet-based MEM dedicated to performing source localization of oscillatory patterns in the time-frequency domain. The data and the solution are expressed in a discrete wavelet expansion. The spatial prior is a parcellation of the cortex specific to each time-frequency piece of the data (Lina et al. IEEE TBME 2012).

rMEM: ridge MEM dedicated to localizing cortical sources exhibiting synchrony. Continuous complex-valued wavelet analysis of the data allows the detection of data segments exhibiting synchrony between sensors. rMEM consists in localizing the generators of these detected synchrony patterns (Zerouali et al. IEEE TBME 2013).

See below for more information on the BEst license and the associated publications.

Principle of the “Maximum Entropy on the Mean”

The MEM solver relies on a probabilistic (Bayesian) approach in which inference on the current source intensities is drawn from the informational content of the data (notion of maximum entropy): “The maximum entropy distribution is uniquely determined as the one which is maximally noncommittal with regard to missing information” (E.T. Jaynes, Information and statistical mechanics, Phys. Rev. 106(4), 1957). The first applications of MEM to electromagnetic source localization were proposed by Clarke and Janday (The solution of biomagnetic inverse problem by maximum statistical entropy, Inv. Problem 5, 1989). A concomitant work entitled “If realistic neurophysiological constraints are imposed, then maximum entropy is the most probable solution of the EEG inverse problem” (Inv. Problem 6, 1990) was published by Rice.

The main contribution of our group is the prior model that we consider within the MEM regularization framework. It started with the idea of describing brain activity as being organized into cortical parcels, each parcel having the possibility of being active or not, while MEM offers the possibility to estimate a contrast of current density within each active parcel. This idea was first introduced within the classical MEM framework by Amblard et al. (Biomagnetic source detection by Maximum Entropy and Graphical Models, IEEE TBME, 2004), using notably data-driven approaches to estimate the cortical parcels (Lapalme et al., Data-driven parceling and entropic inference in MEG, NeuroImage, 2006). The use of such a spatial model within the MEM framework is the key idea allowing MEM to be sensitive to the spatial extent of the sources along the cortical surface (Grova et al. Evaluation of EEG localization methods using realistic simulations of interictal spikes. Neuroimage 2006). The most recent developments consist in the extension of MEM within the time-frequency domain in order to localize oscillatory and synchronous generators.
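To fix ideas, the MEM principle can be summarized as follows (a schematic formulation in simplified notation; noise modeling and the exact constraints follow the references cited above): among all probability laws p on the current density j that explain the data on average, MEM selects the one with maximal entropy relative to a reference law ν, which encodes the prior model described below.

{{{
% Schematic MEM formulation (simplified notation):
%   m : measured data,  G : lead field (gain) matrix,
%   j : current density, d\nu : reference (prior) law, dp : law to be estimated.

\hat{p} = \arg\max_{p \in \mathcal{C}_m} S_\nu(p),
\qquad
S_\nu(p) = -\int \log\!\left(\frac{dp}{d\nu}(j)\right) dp(j),
\qquad
\mathcal{C}_m = \left\{ p \;:\; m = G \, \mathbb{E}_p[j] \right\}

% The estimated current density is the mean under the optimal law:
\hat{j} = \mathbb{E}_{\hat{p}}[j]
}}}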

The BEst toolbox offers three different kinds of information to be localized on the cortex: generators with time series stable within parcels, generators of oscillatory patterns, and generators of synchrony.

MEM: dataset and preliminary steps

Detailed information about the Brainstorm database used for source localization is provided at the following link: https://neuroimage.usc.edu/brainstorm/DatasetIntroduction

Brainstorm database

This dataset corresponds to the standard auditory stimulation dataset provided with Brainstorm (sample_introduction.zip), available at https://neuroimage.usc.edu/bst/download.php.

To use this dataset, follow the Brainstorm introduction tutorials (2-16) to prepare the data for source localization.

Launch BEst

figure1.png MEM_panel2023.png

When asked "Download the latest version of BrainEntropy now?", answer Yes. This will download the latest version of BrainEntropy and install it as a Brainstorm plugin.

plugin1.png

plugin2.png

Warning: please note that the "MEM: Max entropy on the mean" option is disabled when using a volume head model. Please use a head model based on the cortical surface instead (right-click on the condition > Compute head model > Cortex surface).

The MEM option panel appears. You have the choice between 3 methods: cMEM is the standard MEM in the time domain; wMEM focuses on oscillatory patterns by computing MEM in the wavelet domain; rMEM is dedicated to finding synchronous sources using continuous wavelets.

figure_panelMEMoptions.png

cMEM

You have clicked on cMEM. The panel then displays all the parameters associated with this method. Parameters are defined in two sets:

The left column proposes basic mandatory parameters.

The right column proposes more subtle parameters, for more advanced users. To access this set of parameters, you need to click on the ‘expert’ option. The ‘OK’ button will then become available once all mandatory parameters have been specified.

Using the BEst_tutorial_auditory_stimulation dataset, we propose the following first experiment:

1. Selection of a specific time window may be useful for long data epochs (default: localize whole data window).

Suggested time window for this dataset: 0.05 to 0.15 s.

2. Three choices are available for baseline definition. cMEM will consider such a baseline to estimate the noise covariance matrix in the sensor space.

Within data: the baseline is a time window to be selected within the recordings.

Find: this option allows you to find a specific baseline present in the protocol, by typing its name. For example, if your baseline is named ‘Baseline’, type ‘Baseline’ and select Find.

Import: the user can import baseline data from a file using Brainstorm import tools. In all cases, the user can then define a time window within the baseline.

Suggested baseline for this dataset:

within data, with interval -0.1 to 0 s.

3. Neighborhood order: this parameter defines the size of the parcels that will be constructed (see the 'Neighborhood order' section below). Here, we suggest using 4.

All other parameters can be kept with their default values:

figure_resultscMEM2.png

4. Hitting 'OK' will launch the cMEM estimation. At the end, a new result file appears in the database under the appropriate dataset. Simply double-click it to visualize.

We suggest looking at the sources around t = 91.7 ms.

figure_resultscMEM3.png

Parameters of cMEM

wMEM

You have clicked on wMEM. The panel then displays all the parameters associated with this method. Parameters are defined in two sets:

The left column proposes basic mandatory parameters.

The right column proposes more subtle parameters, for more advanced users. To access this set of parameters, you need to click on the ‘expert’ option. The ‘OK’ button will then become available once all mandatory parameters have been specified.

Principle: The data are decomposed following a discrete multiresolution scheme using real Daubechies wavelets. The time-scale plane can then be thresholded to keep only significant coefficients, up to a fraction of the total signal power.

wMEM then localizes the sources associated with each wavelet coefficient of the time-scale plane, thus yielding an estimate of the wavelet coefficients for each source. As we keep only significant oscillatory modes from the time-scale plane, this method efficiently targets oscillating neural sources. Interestingly, the Daubechies wavelets have a stable inverse transform, which allows us to efficiently recover the time course associated with each source.
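As an illustration of this principle only (this is not the BEst implementation; the sampling rate, the 'db4' wavelet, the 99% power fraction and the toy signal below are arbitrary assumptions), the following Python sketch decomposes a single sensor signal with a real Daubechies wavelet, keeps the strongest time-scale coefficients up to a fraction of the total power, and inverts the transform:

{{{#!python
import numpy as np
import pywt

fs = 600.0                                   # assumed sampling rate (Hz)
t = np.arange(0, 0.6, 1.0 / fs)
signal = np.sin(2 * np.pi * 25 * t) * np.exp(-((t - 0.1) / 0.05) ** 2)  # toy burst

# Discrete multiresolution decomposition with a real Daubechies wavelet
coeffs = pywt.wavedec(signal, 'db4', level=5)

# Keep only the strongest coefficients, up to 99% of the total signal power
flat = np.concatenate(coeffs)
order = np.argsort(np.abs(flat))[::-1]
cum_power = np.cumsum(flat[order] ** 2) / np.sum(flat ** 2)
keep = order[: np.searchsorted(cum_power, 0.99) + 1]
mask = np.zeros(flat.shape, dtype=bool)
mask[keep] = True
flat_thr = np.where(mask, flat, 0.0)

# Rebuild the coefficient list and invert the (stable) transform
sizes = np.cumsum([len(c) for c in coeffs])[:-1]
signal_rec = pywt.waverec(list(np.split(flat_thr, sizes)), 'db4')
}}}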

With the BEst_tutorial_auditory_stimulation dataset, we propose the following first experiment (similar to the previous cMEM experiment):

figure_panelwMEM.png

1. Selection of a specific time window may be useful to localize specific time-frequency components of the recordings (default: localize whole data window). Suggested time window for this dataset: 0.05 to 0.15 s.

2. Three choices are available for baseline definition.

3. Selection of scales to be localized. Suggested scale = 4 (corresponding to the frequency band 18.75 to 37.5 Hz).

WMEM_TF_BOX_TUT_BEST.png

This figure, which you can get by selecting MEM display in the ‘expert parameters’ column, shows the global multi-resolution power (mean wavelet power over all MEG sensors). Each box corresponds to a wavelet at some scale (index j) and located in time. The scale (j) increases with the inverse of the frequency: the lower j, the faster the local oscillation. The darker a box, the more power is associated with it. As expected, the evoked response recruits high frequencies between 0 and 0.2 s. With the above scale selected (j = 4, frequency: 18.75-37.5 Hz), we pick only a piece of the oscillation that composes the response in this particular time window.

4. Neighborhood order: this parameter defines the size of the parcels that will be constructed, independently in each time-frequency box. The default value is 4.

5. Hitting 'OK' will launch the wMEM method. At the end, a new result file appears in the database under the appropriate dataset. Simply double-click it to visualize.

figure_resultswMEM1.png

You will notice in the command window that 4 time-frequency boxes have been used (at scale j = 4, i.e. oscillations in the 18.75-37.5 Hz frequency band, with the peak of the wavelet located at 0.020 s). This oscillation is part of the short-duration oscillatory pattern occurring at 91.7 ms.
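For reference, the correspondence between the dyadic scale j and the frequency bands quoted above can be reproduced with the following sketch (assuming a sampling rate of 600 Hz for this dataset; the bands reported by BEst may be rounded differently):

{{{#!python
fs = 600.0                       # assumed sampling rate of the tutorial dataset (Hz)
for j in range(1, 7):
    f_low, f_high = fs / 2 ** (j + 1), fs / 2 ** j
    print(f"scale j={j}: {f_low:.2f}-{f_high:.2f} Hz")
# scale j=4 -> 18.75-37.50 Hz, matching the band mentioned above
}}}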

Other suggestions:

figure_resultswMEM_allscales1.png

Parameters of wMEM

License

BEst is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. BEst is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with BEst. If not, get it here.

References

The references to be acknowledged for each method are:

Additional technical references:

Validation and applications:

MEM Parameters

Clustering

The major prior of the MEM is that cortical activity is compartmentalized into discrete parcels. The activity of a given source within a patch is thus linked to the activity of its neighbors. This step is therefore important for the MEM approach, and we propose several strategies to optimize it, depending on the type of activity targeted for source localization. All these approaches are data-driven and consist mainly of two steps: source scoring and labelling.

Source scoring: MSP

We first attribute a score to each source indicating its probability of explaining a segment of data. These scores are estimated using the multivariate source prelocalization technique (MSP, Mattout et al. Neuroimage 2005). The MSP proceeds by computing a signal space projector using the SVD of the normalized data segment. The lead fields of each source are then normalized and projected onto that space. The L2 norm of that projection gives a probability-like index between 0 and 1 weighting the contribution of each source to the data.
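The following Python sketch illustrates an MSP-like score (a minimal sketch, not the exact BEst or SPM code; the Frobenius normalization and the choice of 95% of the data variance for the signal subspace are simplifying assumptions):

{{{#!python
import numpy as np

def msp_scores(M, G, n_components=None):
    """M: data segment (n_sensors x n_times); G: lead field (n_sensors x n_sources)."""
    # Signal space projector from the SVD of the normalized data segment
    Mn = M / np.linalg.norm(M, 'fro')
    U, s, _ = np.linalg.svd(Mn, full_matrices=False)
    if n_components is None:
        n_components = int(np.sum(np.cumsum(s ** 2) / np.sum(s ** 2) < 0.95)) + 1
    Us = U[:, :n_components]
    P = Us @ Us.T

    # Normalize each source's lead field and project it onto the signal subspace;
    # the squared norm of the projection lies in [0, 1] (P is an orthogonal projector)
    Gn = G / np.linalg.norm(G, axis=0, keepdims=True)
    return np.sum((P @ Gn) * Gn, axis=0)
}}}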

The MSP quantifies the probability of each source to explain some variability in a data segment. Implicitly, we expect MSP scores to discriminate active regions from inactive ones with respect to the analyzed data window. The MSP aims at identifying good candidate dipolar sources contributing to the localization, with great sensitivity but relatively low specificity. In this regard, the size of the data window on which the MSP is applied is an important issue. If it is too short, the variability to be explained is low and all sources will have low scores. Conversely, if it is too long, all sources will have high MSP scores. We thus propose three strategies to obtain appropriate MSP score distributions:

Source labelling: spatial clustering

The aim of this step is to group cortical sources into spatially homogeneous cortical parcels based on their MSP scores. The clustering algorithm selects the sources with maximal MSP score, which are considered as seed points. The parcels are then constructed using a region-growing algorithm starting from those seed points. It proceeds as follows (a sketch is provided after the list):

  1. Select the source with maximal MSP score: it is the first seed point

  2. Select all the sources around the seed point to form a cluster. The size of the cluster is determined by the 'neighborhood order' value

  3. Amongst the sources that do not yet belong to a cluster, select the one with maximal MSP score: it will be the next seed point

  4. We repeat steps 2-3 until all the sources belong to a cluster
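The following Python sketch illustrates this region-growing scheme on a cortical mesh adjacency list (grow_parcels is a hypothetical helper written for this tutorial, not the BEst implementation):

{{{#!python
import numpy as np
from collections import deque

def grow_parcels(neighbors, scores, neighborhood_order=4):
    """neighbors: adjacency list of the cortical mesh; scores: MSP scores per source."""
    n = len(scores)
    label = -np.ones(n, dtype=int)               # -1 = not yet assigned to a parcel
    parcel = 0
    while np.any(label < 0):
        # 1./3. next seed = unassigned source with maximal MSP score
        free = np.where(label < 0)[0]
        seed = free[np.argmax(scores[free])]
        # 2. grow a cluster up to `neighborhood_order` connections from the seed
        label[seed] = parcel
        queue = deque([(seed, 0)])
        while queue:
            v, d = queue.popleft()
            if d == neighborhood_order:
                continue
            for w in neighbors[v]:
                if label[w] < 0:
                    label[w] = parcel
                    queue.append((w, d + 1))
        parcel += 1                               # 4. repeat until all sources are labelled
    return label
}}}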

MSP scores threshold

It is possible to apply a threshold on the MSP scores before applying the spatial clustering. All the sources with an MSP score below this threshold are excluded: they will not be part of any cluster nor of the reconstructed sources. The threshold can be set arbitrarily between 0 and 1, with 0 excluding none of the sources.

The MSP threshold can also be derived from the baseline using an FDR criterion (Lina et al. IEEE TBME 2012).

Neighborhood order

The neighborhood order determines the size of the parcels in the spatial clustering. It corresponds to the maximal number of spatial connections, along the geodesic cortical surface, between an element of a cluster and the seed of that cluster. Therefore, the higher the neighborhood order, the larger the clusters and the lower the number of parcels. (Chowdhury et al. Plos One 2013, page 3)

Spatial smoothing

We can apply a spatial smoothing constraint (LORETA-like) within each parcel during the MEM regularization. The smoothing parameter can vary from 0 (no spatial smoothing) to 1 (high spatial diffusion inside the parcels). (Chowdhury et al. Plos One 2013, pages 3-4; Harrison et al. Neuroimage 2007)
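As an illustration only (the operator actually used in BEst follows the references above; the construction below is a simple, assumed variant), a LORETA-like smoothness prior within a parcel can be built from the vertex adjacency matrix of that parcel:

{{{#!python
import numpy as np

def smoothing_covariance(A, rho, sigma2=1.0):
    """A: binary vertex adjacency matrix of one parcel; rho in [0, 1]: smoothing level."""
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1)
    W = (1.0 - rho) * np.eye(A.shape[0]) + rho * (A / deg)   # local averaging operator
    return sigma2 * W @ W.T           # smooth source covariance within the parcel
}}}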

Expert parameters

Model prior

For the definition of the MEM prior model (based on the spatial clustering), the distribution of source intensities within each active parcel is modeled using a Gaussian distribution with mean (mk) and covariance (Σk). The proposed prior model consists in using a hidden variable encoding the active state of each parcel. The probability of each parcel being active (αk) has to be initialized as well. The parameters of the active parcels can be initialized in different ways.
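With these notations, the reference (prior) law used by MEM can be written parcel by parcel as follows (a compact restatement of the model described in the cited references):

{{{
% K parcels; alpha_k = probability that parcel k is active;
% delta = Dirac measure (inactive parcel, null intensities);
% N(m_k, Sigma_k) = Gaussian law of the intensities j_k of an active parcel.

d\nu(j) = \prod_{k=1}^{K} \Big[ (1 - \alpha_k)\,\delta(j_k)
          \;+\; \alpha_k\,\mathcal{N}(m_k, \Sigma_k)(j_k) \Big]\, dj_k
}}}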

Solver options

SPECIFIC TO wMEM:

If an emptyroom dataset exists and the option “Use emptyroom” is selected, wMEM computes a scale-specific covariance matrix and a scale-dependent shrinkage (if selected). In this case, only options 4 and 5 are available, and they take a different meaning (a sketch is provided after this list):

Option 4: diagonal scale-specific covariance matrix – different values along the diagonal

Option 5 (default – forced if neither 4 nor 5 is selected): diagonal scale-specific covariance matrix – a unique value along the diagonal, which is the average of the diagonal terms
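The following Python sketch illustrates what a scale-specific diagonal covariance estimated from emptyroom recordings could look like (illustrative only, not the exact BEst computation; the 'db4' wavelet and the helper name are assumptions):

{{{#!python
import numpy as np
import pywt

def scale_covariance(emptyroom, scale, wavelet='db4', unique_value=True):
    """emptyroom: array (n_sensors x n_times); scale: wavelet scale j of interest."""
    n_sensors = emptyroom.shape[0]
    variances = np.empty(n_sensors)
    for i in range(n_sensors):
        coeffs = pywt.wavedec(emptyroom[i], wavelet, level=scale)
        variances[i] = np.var(coeffs[1])   # detail coefficients at scale j
    if unique_value:
        # option 5: unique value along the diagonal (average of the diagonal terms)
        return np.mean(variances) * np.eye(n_sensors)
    # option 4: sensor-specific values along the diagonal
    return np.diag(variances)
}}}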

Wavelet Processing
