Tutorial 28: Connectivity

[TUTORIAL UNDER DEVELOPMENT: NOT READY FOR PUBLIC USE]

Authors: Hossein Shahabi, Mansoureh Fahimi, Francois Tadel, Esther Florin, Sergul Aydore, Syed Ashrafulla, Elizabeth Bock, Sylvain Baillet

Introduction

Over the past few years, the research focus in brain imaging has moved from localizing functional regions to understanding how different regions interact. It is now widely accepted that many brain functions are supported not by isolated regions but by a dense network of nodes interacting in various ways.

The study of brain networks (connectivity) is a recently developed field of neuroscience that investigates the interactions among regions of the brain. These networks can be identified with a wide range of connectivity measures applied to neurophysiological signals, in either the time or the frequency domain. This knowledge provides a more comprehensive view of brain function and mechanisms.

This module of Brainstorm facilitates the computation of brain networks and the representation of the corresponding graphs. Figure 1 illustrates a general framework for analyzing brain networks. Preprocessing and source localization of the neural data are thoroughly described in the previous sections of these tutorials. The connectivity module is designed to carry out the remaining steps, including the computation of connectivity measures, statistical analysis, and the visualization of networks.

[ATTACH]

General terms/considerations for a connectivity analysis

Sensors vs. sources: The connectivity analysis can be performed either on sensor data (e.g., EEG/MEG time series) or on reconstructed sources.

Nature of the signals:

Point-based connectivity vs. full network: Most connectivity functions give you the option to compute connectivity either between one point (channel) and the rest of the network (1 x N) or across the entire network (N x N). While the latter computes the full graph, the former is faster and is more useful when you are interested in the connectivity between a region of interest (ROI) and the other regions of the brain.

Temporal resolution: Connectivity networks can be computed in two ways: static and dynamic. In Table 1, the metrics are classified based on this feature. Dynamic networks can capture the time-varying behavior of brain connectivity. In contrast, static graphs illustrate a general … which is also helpful in many conditions. The user needs to decide which type of network is more informative for their study.

Time-frequency transformation:

Consider how the choice of window (length and overlap) depends on the frequency bands of interest; for example, estimating interactions at low frequencies requires longer time windows.

Output data structure:

Consequently, the connectivity matrices computed in this toolbox can have up to four dimensions: channels x channels x frequency bands x time.
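As a purely illustrative sketch of this layout (the array sizes below are assumptions), the four dimensions can be pictured as follows:

    import numpy as np

    # Illustrative only: a connectivity array with dimensions
    # channels x channels x frequency bands x time windows (sizes assumed).
    n_channels, n_bands, n_windows = 3, 4, 10
    conn = np.zeros((n_channels, n_channels, n_bands, n_windows))
    # Connectivity between channels 0 and 1, in frequency band 2, at time window 5:
    value = conn[0, 1, 2, 5]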

Simulated data (AR model)

In order to compare different connectivity measures, we use simulated data with known ground truth. Three channels are

[ATTACH]
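To make the idea concrete, here is a minimal sketch of such a simulation in Python; the coupling pattern (channel 0 driving channels 1 and 2), the coefficients, and the noise levels are assumptions chosen for illustration only, not the exact parameters used to generate the figures in this tutorial.

    import numpy as np

    # Minimal sketch: simulate three channels with a multivariate AR(2) model.
    # The coupling pattern and all coefficients are illustrative assumptions.
    rng = np.random.default_rng(0)
    n_samples, n_channels, order = 10000, 3, 2

    # A[k] holds the AR coefficients at lag k+1 (n_channels x n_channels).
    A = np.zeros((order, n_channels, n_channels))
    A[0] = [[0.5, 0.0, 0.0],
            [0.4, 0.5, 0.0],    # channel 1 depends on the past of channel 0
            [0.0, 0.0, 0.5]]
    A[1] = [[-0.2, 0.0, 0.0],
            [0.0, -0.2, 0.0],
            [0.3, 0.0, -0.2]]   # channel 2 depends on the past of channel 0

    x = np.zeros((n_samples, n_channels))
    noise = rng.standard_normal((n_samples, n_channels))
    for t in range(order, n_samples):
        for k in range(order):
            x[t] += A[k] @ x[t - k - 1]
        x[t] += noise[t]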

Coherence (FFT-based)

[ATTACH]

In general, after running a connectivity process, you will find a multi-dimensional connectivity matrix in the database. There are several options to represent this matrix.

Right-click on the file and select Power spectrum and Display as image. The two resulting figures are shown below.

[ATTACH]

[ATTACH]
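For readers who want to cross-check the FFT-based coherence outside Brainstorm, the sketch below estimates magnitude-squared coherence between two synthetic signals with SciPy; the sampling rate, window length, and the shared 10 Hz component are assumptions used only for illustration.

    import numpy as np
    from scipy.signal import coherence

    # Two noisy signals sharing a 10 Hz component, then magnitude-squared
    # coherence via Welch's method. fs and nperseg are assumed values.
    rng = np.random.default_rng(0)
    fs, n = 200.0, 10000
    t = np.arange(n) / fs
    common = np.sin(2 * np.pi * 10 * t)
    sig_a = common + 0.5 * rng.standard_normal(n)
    sig_b = common + 0.5 * rng.standard_normal(n)

    f, Cxy = coherence(sig_a, sig_b, fs=fs, nperseg=512)
    print(f[np.argmax(Cxy)])   # the coherence peak should lie near 10 Hz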


Similarly, we can run this process and select "Imaginary coherence", which gives us the following representation:

[ATTACH]

[ATTACH]
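The imaginary coherence keeps only the imaginary part of the normalized cross-spectrum, which suppresses zero-lag coupling such as volume conduction. Continuing from the synthetic signals sig_a and sig_b of the previous sketch (window settings remain assumptions), it can be estimated as follows:

    import numpy as np
    from scipy.signal import csd, welch

    # Imaginary part of coherency from cross- and auto-spectra, reusing
    # sig_a, sig_b and fs from the coherence sketch above (assumed values).
    f, Sab = csd(sig_a, sig_b, fs=fs, nperseg=512)
    _, Saa = welch(sig_a, fs=fs, nperseg=512)
    _, Sbb = welch(sig_b, fs=fs, nperseg=512)
    imcoh = np.imag(Sab / np.sqrt(Saa * Sbb))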

Granger Causality

Granger Causality (GC) is a measure of functional connectivity, formalized by Clive Granger in the 1960s and later refined by John Geweke into the form used today. Granger causality was originally formulated in economics but has caught the attention of the neuroscience community in recent years. Before this, neuroscience traditionally relied on stimulating or lesioning one part of the nervous system to study its effect on another part. Granger causality made it possible to estimate this statistical influence without requiring direct intervention (ref: Wiener-Granger causality: a well established methodology).

Granger causality is a measure of linear dependence: it tests whether the error variance of a linear autoregressive model of a signal A can be reduced by adding a linear model of a second signal B. If this is true, signal B has a Granger-causal effect on signal A, i.e., independent information from the past of B improves the prediction of A above and beyond the information contained in the past of A alone. The term independent is emphasized because it gives GC some interesting properties, such as invariance under rescaling of A and B, as well as under the addition of a multiple of A to B. The measure of Granger causality is nonnegative, and zero when there is no Granger causality (Geweke, 1982).

The main advantage of Granger causality is that it is an asymmetric measure, in that it can dissociate A->B from B->A. It is important to note, however, that although the directionality of Granger causality is a step closer to measuring effective connectivity than symmetric measures, it should not be confused with "true causality". Effective connectivity estimates the underlying mechanism generating the observed data (a model-based approach), whereas GC is a measure of causal effect based on prediction, i.e., how much the prediction of one variable improves when interacting variables are taken into account (a data-driven approach) (Barrett and Barnett, 2013). The difference from true causality is best illustrated when more variables interact in the system than are considered in the model. For example, if a variable C causes both A and B, but with a smaller delay for B than for A, then a GC analysis between A and B would show a non-zero GC for B->A, even though B is not truly causing A (Bressler and Seth, 2011).

Coherence and envelope (Hilbert/Morlet)

This process

[ATTACH]
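As a hedged sketch of the envelope part of this process, assuming the inputs are already band-limited (the synthetic signals sig_a and sig_b from the coherence sketch stand in for band-passed data), the Hilbert transform gives the analytic signal, whose magnitude is the amplitude envelope:

    import numpy as np
    from scipy.signal import hilbert

    # Amplitude envelopes via the analytic signal, then their correlation.
    # sig_a and sig_b are assumed to be band-limited (see earlier sketch).
    env_a = np.abs(hilbert(sig_a))
    env_b = np.abs(hilbert(sig_b))
    envelope_corr = np.corrcoef(env_a, env_b)[0, 1]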

Simulated data (phase synchrony)
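As a hedged illustration only (the frequency, noise level, and amount of phase jitter are assumptions), two signals with partially synchronized phases can be simulated as follows:

    import numpy as np

    # Two 10 Hz oscillations whose phases are partially locked: phi_b follows
    # phi_a plus phase noise. All parameters are illustrative assumptions.
    rng = np.random.default_rng(0)
    fs, n = 200.0, 10000
    t = np.arange(n) / fs
    phi_a = 2 * np.pi * 10 * t + 0.5 * np.cumsum(rng.standard_normal(n)) / np.sqrt(fs)
    phi_b = phi_a + 0.5 * rng.standard_normal(n)   # jittered copy of phi_a
    s1 = np.cos(phi_a)
    s2 = np.cos(phi_b)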

Correlation

Correlation is the most basic approach for showing the dependence or association between two random variables or MEG/EEG signals. While this method has been widely used in electrophysiology, it should not be considered the best technique for estimating connectivity. By its nature, correlation does not alleviate the problem of volume conduction and cannot describe associations in specific frequency bands. However, it can still provide valuable information when dealing with a few narrow-band signals.

[ATTACH]
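For reference, a static N x N correlation matrix across channels amounts to the following; the data array is a placeholder assumption standing in for a channels x samples recording:

    import numpy as np

    # Static N x N correlation matrix across channels.
    # `data` is assumed to be a (channels x samples) array; random values
    # here are only a placeholder for real sensor or source time series.
    rng = np.random.default_rng(0)
    data = rng.standard_normal((3, 10000))
    corr_matrix = np.corrcoef(data)    # shape: (3, 3)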

Phase locking value
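As a hedged sketch (reusing the partially synchronized signals s1 and s2 simulated above), the phase locking value is the magnitude of the time-averaged phase-difference vector on the unit circle:

    import numpy as np
    from scipy.signal import hilbert

    # PLV = |mean over time of exp(i * (phase1 - phase2))|, with the
    # instantaneous phases obtained from the Hilbert transform.
    phase1 = np.angle(hilbert(s1))
    phase2 = np.angle(hilbert(s2))
    plv = np.abs(np.mean(np.exp(1j * (phase1 - phase2))))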

Method selection and comparison


Advanced

Granger Causality - Mathematical Background

Even though GC has been extended to nonlinear, multivariate, and time-varying conditions, in this tutorial we will stick to the basic case: a linear, bivariate, stationary model defined in both the time and the spectral domain. In the time domain, this can be represented as follows. Let x be a signal that can be modeled with a linear autoregressive (AR) model in the following two ways:

x(t) = \sum_{k=1}^{p} A_k x(t-k) + e_1(t)

x(t) = \sum_{k=1}^{p} [ A_k x(t-k) + B_k y(t-k) ] + e_2(t)

where p represents the amount of past information included in the prediction of the future sample and is called the model order. The first equation models x using the past (and present) of only itself, whereas the second also includes the past (and present) of a second signal y. Note that when only past samples of the signals are taken into account (k ≥ 1), the model ignores simultaneous connectivity, which makes it less susceptible to volume conduction (Cohen, 2014).
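The variance comparison above translates directly into a simple estimator. The following sketch fits the restricted and full models by ordinary least squares and returns Geweke's logarithmic measure ln(var(e_1)/var(e_2)); it is a bare-bones illustration of the definition, not the estimator implemented in Brainstorm.

    import numpy as np

    def granger_causality(target_sig, source_sig, p):
        # Time-domain, bivariate GC from source_sig to target_sig,
        # with model order p, estimated by ordinary least squares.
        n = len(target_sig)
        # Lagged design matrices for t = p .. n-1.
        X_past = np.column_stack([target_sig[p - k:n - k] for k in range(1, p + 1)])
        Y_past = np.column_stack([source_sig[p - k:n - k] for k in range(1, p + 1)])
        resp = target_sig[p:]

        # Restricted model: target predicted from its own past only.
        beta_r, *_ = np.linalg.lstsq(X_past, resp, rcond=None)
        var_r = np.var(resp - X_past @ beta_r)

        # Full model: target predicted from the past of both signals.
        XY = np.hstack([X_past, Y_past])
        beta_f, *_ = np.linalg.lstsq(XY, resp, rcond=None)
        var_f = np.var(resp - XY @ beta_f)

        return np.log(var_r / var_f)

With the simulated AR data from the earlier sketch, granger_causality(x[:, 1], x[:, 0], p=2) should be clearly positive (channel 0 drives channel 1), while granger_causality(x[:, 0], x[:, 1], p=2) should stay close to zero.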

Additional documentation

References

Articles

Forum discussions







