Tutorial 28: Connectivity

[TUTORIAL UNDER DEVELOPMENT: NOT READY FOR PUBLIC USE]

Authors: Hossein Shahabi, Raymundo Cassani, Takfarinas Medani

Cognitive and perceptual functions are the result of the coordinated activity of functionally specialized regions in the brain. Brain connectivity investigates how these different regions (or nodes) interact as a network, with the goal of better understanding how the brain processes information. Depending on which connectivity characteristic is studied, a distinction is made between structural (fiber pathways), functional (non-directed statistical dependency) and effective (causal interaction) connectivity between regions. Effective connectivity is often referred to as directed functional connectivity. In this tutorial we will see how to compute different connectivity metrics for non-directed and directed functional analyses in Brainstorm, first with simulated data and later with real data.

General considerations in connectivity analysis

Connectivity analyses are commonly performed by computing a bivariate connectivity metric for all possible pairs of time series or signals. The result of such an approach can be presented as a connectivity graph (left image), where each signal is represented as a node and the value of the connectivity metric is the weight of the edge between the corresponding nodes. This graph representation becomes overwhelming when too many nodes are considered; in such cases, the connectivity graph can be represented by its connectivity matrix, aka adjacency matrix (right image).

cnx_graph_matrix.png

Sensors or sources: The signals used for the connectivity analysis can be derived from the sensor data (EEG/MEG signals) or from the reconstructed sources (voxels or scouts).

Directed and non-directed: The direction of the interaction between signals (as statistical causation) can be measured with directed metrics. This is not possible with non-directed metrics; as a result, for those metrics the connectivity "from Signal $$x$$ to Signal $$y$$" is equal to the connectivity "from Signal $$y$$ to Signal $$x$$".

Recording condition: While connectivity analysis can be performed on resting-state (spontaneous) and event-related (trials) recordings, the appropriate connectivity method depends on the recording condition.

Full network vs point-based connectivity: In the full-network case, the connectivity metric is computed for all possible node pairs in the network (N×N approach), yielding a detailed connectivity graph. Alternatively, the analysis can be performed solely between one node (aka seed) and the rest of the nodes in the network (1×N approach); this approach is faster to compute and is useful when you are interested in the connectivity of a specific sensor or source.

Temporal resolution: Connectivity analyses can be performed in two ways: static and dynamic. Time-varying networks capture the dynamics of brain connectivity, whereas static graphs give a general overview of brain connectivity that is helpful in specific conditions. Users need to decide which type of network is more informative for their study.

Time-frequency transformation: Several connectivity metrics rely on the time-frequency representation of the data, which is obtained with approaches such as the short time Fourier transform, Hilbert transform, and Morlet wavelet.

Simulated data (MAR model)

To compare different connectivity metrics, we use simulated data with known ground truth. Consider three channels constructed using the following multivariate autoregressive (MAR) process of 4th order.

\begin{eqnarray*}
x_1(n) & = & \sum_{k=1}^{4} A_{(1,1,k)}x_1(n-k) + e_1(n) \\
x_2(n) & = & \sum_{k=1}^{4} A_{(2,2,k)}x_2(n-k) + e_2(n) \\
x_3(n) & = & \sum_{k=1}^{4} A_{(3,3,k)}x_3(n-k) + \sum_{k=1}^{d} A_{(1,3,k)}x_1(n-k) + e_3(n)\\
\end{eqnarray*}

where $$A_{(i,i,:)}$$ with $$i = 1, 2 \textrm{ and } 3$$ are coefficients of 4th order all-pole filters.

To compute these coefficients, we can define a frequency response with the desired pole and zero locations and use the MATLAB zp2tf function to find them.

Here, these coefficients were calculated in a way that the first channel has a dominant peak in the beta band (25 Hz), the second channel shows the highest power in the alpha band (10 Hz), and the third channel a similar level of energy in both bands. Additionally, the signal in the third channel is influenced by the signal in the first channel by the filter $$A_{(1,3,:)}$$.

We simulate the data using the ARfit process. To run it, first clear the process panel, then select Simulate » Simulate AR signals (ARfit) and use the following coefficients:

box for code Screenshot
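Brainstorm's ARfit process runs in MATLAB; for readers who want to reproduce the generative model outside Brainstorm, here is a minimal Python/NumPy sketch of a three-channel MAR process. The coefficients below are illustrative placeholders chosen for stability (including a lagged influence of channel 1 on channel 3, as in the model above), not the tutorial's actual values:

```python
import numpy as np

def simulate_mar(A, n_samples, noise_std=1.0, seed=0):
    """Simulate a multivariate autoregressive (MAR) process.

    A : array of shape (order, n_channels, n_channels);
        A[k, i, j] is the influence of channel j at lag k+1 on channel i.
    Returns an array of shape (n_channels, n_samples).
    """
    rng = np.random.default_rng(seed)
    order, n_ch, _ = A.shape
    x = np.zeros((n_samples + order, n_ch))
    for n in range(order, n_samples + order):
        for k in range(order):
            x[n] += A[k] @ x[n - k - 1]       # lagged MAR terms
        x[n] += rng.normal(0.0, noise_std, n_ch)  # innovation noise
    return x[order:].T

# Illustrative (NOT the tutorial's) coefficients: stable AR(2) diagonals
# plus a lagged influence of channel 1 on channel 3.
order, n_ch = 4, 3
A = np.zeros((order, n_ch, n_ch))
A[0] = np.diag([0.5, 0.5, 0.5])
A[1] = np.diag([-0.4, -0.3, -0.3])
A[0, 2, 0] = 0.4                              # x1 -> x3 at lag 1

x = simulate_mar(A, n_samples=2000)
print(x.shape)  # (3, 2000)
```

Because of the `A[0, 2, 0]` term, channel 3 should show a positive correlation with the one-sample-lagged channel 1, which is the ground-truth interaction we will try to recover below.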

For a MAR model, the transfer function (or frequency response) is defined as:

$$H_{i,j}(f) = \frac{1}{A_{i,j}(f)}$$

[ATTACH]

The diagonal elements show the auto-transfer function, which in our specific case is the spectrum of the signals. The off-diagonal terms represent the interactions between different signals. Here, we see the transfer function from channel 1 to channel 3. These transfer functions are our ground truth for connectivity values.
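In matrix form, the MAR transfer function is the inverse of the spectral coefficient matrix, $$H(f) = \bar{A}(f)^{-1}$$ with $$\bar{A}(f) = I - \sum_k A_k e^{-i 2\pi f k / f_s}$$; the element-wise expression above is the scalar special case. A sketch, assuming coefficients stored as `A[k, i, j]` and a sampling rate `fs` (both assumptions, not Brainstorm internals):

```python
import numpy as np

def mar_transfer(A, freqs, fs):
    """Frequency response H(f) = inv(Abar(f)) of a MAR model,
    with Abar(f) = I - sum_k A[k] * exp(-i 2*pi*f*(k+1) / fs)."""
    order, n_ch, _ = A.shape
    H = np.zeros((len(freqs), n_ch, n_ch), dtype=complex)
    for idx, f in enumerate(freqs):
        Af = np.eye(n_ch, dtype=complex)
        for k in range(order):
            Af -= A[k] * np.exp(-2j * np.pi * f * (k + 1) / fs)
        H[idx] = np.linalg.inv(Af)
    return H

# Sanity check on a scalar AR(1) with coefficient 0.5:
A1 = np.array([[[0.5]]])
H = mar_transfer(A1, np.array([0.0]), fs=100)
print(abs(H[0, 0, 0]))  # 2.0 = 1 / (1 - 0.5) at f = 0
```

The diagonal of `abs(H)**2` (times the innovation variance) gives the model spectra, and the off-diagonal terms the cross-channel transfer, i.e. the ground truth discussed above.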

In the next sections we will compute different connectivity metrics for these simulated signals. As such, place the simulated data in the Process1 tab, select recordings, click on [Run] ( https://neuroimage.usc.edu/moin_static198/brainstorm1/img/iconRun.gif ) to open the Pipeline editor, and select the connectivity metric.

Correlation

Correlation is a non-directed connectivity metric that can be used to show similarity, dependence or association between two random variables or signals. While this metric has been widely used in electrophysiology, it should not be considered the best technique to evaluate connectivity: by its nature, correlation fails to alleviate the problem of volume conduction and cannot characterize the association in different frequency bands. However, it can still provide valuable information when dealing with a few narrow-band signals.

Let's compute the correlation for the simulated signals. Select the Connectivity » Correlation NxN process.

gui_corr1n.png

Process options

Result visualization

After running a N×N connectivity process, the results are stored as a N×N connectivity file (with the icon https://neuroimage.usc.edu/moin_static198/brainstorm1/img/iconConnectN.gif ). Right-click on this file to see its display options:

In Display as image, the value of the connectivity metric between a signal and itself is set to zero, so that it does not force the colormap to scale to 1 when the other values are much smaller.
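The N×N correlation computation and the display convention just described can be sketched in a few lines of Python/NumPy (Brainstorm itself runs in MATLAB; the signals here are synthetic assumptions):

```python
import numpy as np

# x: (n_signals, n_samples) array, e.g. the simulated channels
rng = np.random.default_rng(1)
x = rng.standard_normal((3, 1000))
x[2] += 0.8 * x[0]            # make channels 0 and 2 correlated

R = np.corrcoef(x)            # N x N correlation matrix
np.fill_diagonal(R, 0.0)      # zero the diagonal, as in the image display
print(np.round(R, 2))
```

The off-diagonal entry `R[0, 2]` is large while the unrelated pairs stay near zero, and the matrix is symmetric, as expected for a non-directed metric.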

Coherence

Coherency or complex coherence, $$C_{xy}(f)$$, is a complex-valued metric that measures the linear relationship of two signals in the frequency domain. Its squared magnitude, the magnitude-squared coherence (MSC), $$C^2_{xy}(f)$$, often referred to simply as coherence, measures the covariance of two signals in the frequency domain. For a pair of signals $$x(t)$$ and $$y(t)$$, with auto-spectra $$S_{xx}(f)$$ and $$S_{yy}(f)$$ and cross-spectrum $$S_{xy}(f)$$, they are defined as:

\begin{eqnarray*}
C_{xy}(f) & = & \frac{S_{xy}(f)}{\sqrt{S_{xx}(f)\,S_{yy}(f)}} \\
C^2_{xy}(f) & = & \left | C_{xy}(f) \right |^2 = \frac{\left | S_{xy}(f) \right |^2}{S_{xx}(f)\,S_{yy}(f)} \\
\end{eqnarray*}

Two related measures, which alleviate the problem of volume conduction, are the imaginary coherence, $$IC_{xy}(f)$$, and the lagged coherence, $$LC_{xy}(f)$$, which are defined as:

\begin{eqnarray*}
IC_{xy}(f) & = & \left | \mathrm{Im}(C_{xy}(f)) \right | \\
LC_{xy}(f) & = & \frac{\left | \mathrm{Im}(C_{xy}(f)) \right |}{\sqrt{1-\left [ \mathrm{Re}(C_{xy}(f)) \right ]^{2}}} \\
\end{eqnarray*}

where $$\mathrm{Im()}$$ and $$\mathrm{Re()}$$ describe the imaginary and real parts of a complex number.
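These definitions can be illustrated numerically. The following Python/SciPy sketch (not Brainstorm's MATLAB implementation) estimates the coherency from Welch-averaged auto- and cross-spectra for two noisy 10 Hz signals with a constant phase lag; the sampling rate, segment length and signals are illustrative assumptions:

```python
import numpy as np
from scipy.signal import csd, welch

fs = 250
rng = np.random.default_rng(2)
t = np.arange(fs * 20) / fs
x = np.sin(2 * np.pi * 10 * t) + rng.standard_normal(t.size)
y = np.sin(2 * np.pi * 10 * t + 0.5) + rng.standard_normal(t.size)  # phase-lagged copy

f, Sxy = csd(x, y, fs=fs, nperseg=512)       # cross-spectrum
_, Sxx = welch(x, fs=fs, nperseg=512)        # auto-spectra
_, Syy = welch(y, fs=fs, nperseg=512)

C = Sxy / np.sqrt(Sxx * Syy)                 # complex coherency
msc = np.abs(C) ** 2                         # magnitude-squared coherence
ic = np.abs(C.imag)                          # imaginary coherence
lc = np.abs(C.imag) / np.sqrt(1 - C.real ** 2)  # lagged coherence

peak = f[np.argmax(msc)]
print(peak)  # ~10 Hz
```

The MSC is high at the 10 Hz bin and drops toward zero elsewhere; because the interaction has a nonzero phase lag, the imaginary and lagged coherence also peak there instead of being suppressed.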

To calculate coherence values in Brainstorm, select the Connectivity » Coherence NxN process.

gui_cohere1n.png

Process options

Result visualization

Coherence is a function of frequency, as such, for each frequency point there is a connectivity graph and a connectivity matrix. Right-click on the coherence result file to see its display options:

Clicking on the spectral representation of the coherence changes the frequency displayed in the connectivity graph and matrix. This frequency can also be changed in the Time panel.

res_cohere1n.png

In the same way, we can compute the other types of coherence. The figure below presents the spectra for the imaginary coherence (left) and the lagged coherence (right).

res_cohere1n_b.png

We see that the last two measures are similar, although their values differ at several frequencies. Both imaginary and lagged coherence are more robust to volume conduction than ordinary coherence.

Granger Causality

Granger causality (GC) is a method of directed functional connectivity, based on the Wiener-Granger causality methodology. GC is a measure of linear dependence, which tests whether the prediction of signal A (using a linear autoregressive model) is improved by adding the past of signal B (also using a linear autoregressive model). If this is true, signal B has a Granger-causal effect on signal A: independent information in the past of signal B improves the prediction of signal A beyond what is obtained with the past of signal A alone. GC is nonnegative, and zero when there is no Granger causality. As only the past of the signals is considered, the GC metric is directional. The term independent is emphasized because it gives GC some interesting properties, such as invariance under rescaling of A and B, as well as under the addition of a multiple of A to B. See Granger causality - mathematical background for a complete formulation of the method.

Despite the name, Granger causality indicates directionality but not true causality.
For example, if a variable C is causing both A and B, but with a smaller delay for B than for A, then a GC measure between A and B would show a non-zero GC for B->A, even though B is not truly causing A (Bressler and Seth, 2011).
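The idea above can be sketched numerically (this is a didactic least-squares version, not Brainstorm's actual implementation): fit a restricted AR model using only the past of the target signal, a full model that adds the past of the candidate driver, and compare residual variances. The model order, coupling coefficient and signals are illustrative assumptions:

```python
import numpy as np

def granger_causality(x, y, order=4):
    """Time-domain bivariate Granger causality from x to y:
    GC = ln(variance of restricted residuals / variance of full residuals)."""
    n = len(y)
    Y = y[order:]
    # Lagged design matrices: column k holds the signal delayed by k+1 samples
    lags_y = np.column_stack([y[order - k:n - k] for k in range(1, order + 1)])
    lags_x = np.column_stack([x[order - k:n - k] for k in range(1, order + 1)])
    # Restricted model: past of y only
    res_r = Y - lags_y @ np.linalg.lstsq(lags_y, Y, rcond=None)[0]
    # Full model: past of y and past of x
    full = np.column_stack([lags_y, lags_x])
    res_f = Y - full @ np.linalg.lstsq(full, Y, rcond=None)[0]
    return np.log(np.var(res_r) / np.var(res_f))

# Ground truth: x drives y with a one-sample delay, y does not drive x
rng = np.random.default_rng(3)
n = 5000
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + 0.6 * x[t - 1] + 0.5 * rng.standard_normal()

print(granger_causality(x, y))  # clearly > 0: x Granger-causes y
print(granger_causality(y, x))  # near 0: no flow from y to x
```

Note the asymmetry of the result, which is what makes GC a directed metric.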

To compute the Granger causality values in Brainstorm, select the Connectivity » Bivariate Granger causality NxN process.

gui_granger1n.png

Process options

Result visualization

The GC results for the simulated signals are shown below. As the GC metric is not symmetric, the connectivity matrix (right) is not symmetric either. The upper-right element of this matrix shows that there is a signal flow from channel 1 to channel 3. In the connectivity graph, the directionality is shown as GRADIENT (TO BE UPDATED WITH THE NEW GRAPH LIBRARY).



[ATTACH]

Spectral Granger causality

GC lacks resolution in the frequency domain; to address this, spectral Granger causality was introduced in [REF]. This metric is also found in the Connectivity » Bivariate Granger causality NxN process.

gui_spgranger1n.png

Process options

With respect to GC, spectral GC presents two extra parameters:

Result visualization

As with coherence, spectral GC can be plotted as a function of frequency. The plot below clearly shows a peak at 25 Hz, as expected.

res_spgranger1n.png

Envelope Correlation (2020)

In the time-frequency tutorial, Morlet wavelets and the Hilbert transform were introduced as methods to decompose signals into the time-frequency (TF) domain. The result of this TF transformation can be seen as a set of analytic signals associated with narrowband signals (the bands being defined by the TF transformation method). The analytic signal is a complex temporal representation of a real signal that is useful in signal processing because of its characteristics: its modulus and phase correspond to the instantaneous amplitude (or envelope) and instantaneous phase of the associated real signal.

The instantaneous amplitude (or envelope) of these band-limited analytic signals can be used to carry out pairwise connectivity analyses with metrics such as correlation and coherence (including lagged coherence). When computing the envelope correlation, an optional step is to orthogonalize the envelopes by removing the real part of their coherence before computing the correlation. This orthogonalization alleviates the effect of volume conduction in MEG/EEG signals. These connectivity metrics can be computed with the Connectivity » Envelope Correlation N×N [2020] process.
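The core computation can be sketched in Python/SciPy (a didactic sketch without the orthogonalization step, not Brainstorm's implementation): band-pass filter, take the Hilbert envelope, and correlate the envelopes. The band limits, sampling rate and signals are illustrative assumptions:

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

fs = 250
rng = np.random.default_rng(4)
n = fs * 30
# Two 10 Hz carriers sharing a common slow (0.3 Hz) amplitude modulation
envelope = 1.0 + 0.5 * np.sin(2 * np.pi * 0.3 * np.arange(n) / fs)
x = envelope * np.sin(2 * np.pi * 10 * np.arange(n) / fs) \
    + 0.3 * rng.standard_normal(n)
y = envelope * np.sin(2 * np.pi * 10 * np.arange(n) / fs + 1.0) \
    + 0.3 * rng.standard_normal(n)

# Band-pass around the alpha band, then take the Hilbert envelope
b, a = butter(4, [8 / (fs / 2), 12 / (fs / 2)], btype="band")
env_x = np.abs(hilbert(filtfilt(b, a, x)))
env_y = np.abs(hilbert(filtfilt(b, a, y)))

aec = np.corrcoef(env_x, env_y)[0, 1]   # amplitude envelope correlation
print(aec)  # high: the two envelopes co-vary
```

Note that the envelope correlation is high even though the carriers have different phases: it captures co-modulation of amplitude, not phase coupling.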

[ATTACH]

Process options

Phase locking value

An alternative class of connectivity metrics considers only the relative instantaneous phase between the two signals, i.e., phase-locking or synchronization (Tass et al., 1998). Phase locking is a fundamental concept in dynamical systems that has been used in control systems (the phase-locked loop) and in the analysis of nonlinear, chaotic and non-stationary systems. Since the brain is a nonlinear dynamical system, phase locking is an appropriate approach to quantifying connectivity. A more pragmatic argument for its use in studies of LFPs, EEG, and MEG is that it is robust to fluctuations in amplitude that may contain less information about interactions than does the relative phase (Lachaux et al., 1999; Mormann et al., 2000).

The most commonly used phase connectivity metric is the phase-locking value (PLV), which is defined as the length of the average vector of many unit vectors whose phase angle corresponds to the phase difference between two signals (Lachaux et al., 1999; Mormann et al., 2000). If the distribution of the phase difference between the two signals is uniform, the length of such an average vector will be zero. Conversely, if the phases of the two signals are strongly coupled, the length of the average vector will approach unity. For event-related studies, we would expect the marginal distributions to be uniform across trials unless the phase is locked to a stimulus. In that case, we may have nonuniform marginals, which could in principle lead to false indications of phase locking between two signals.

PHASE TIME SERIES AND ROSE PLOT

Considering a pair of narrow-band analytic signals $$z_1(t)$$ and $$z_2(t)$$, obtained from the TF transformation using the Hilbert transform, the PLV is defined as:

\begin{eqnarray*}
PLV(t)= \left | E\left [ e^{j\Delta \phi (t)} \right ] \right | \\
\end{eqnarray*}

with:

\begin{eqnarray*}
\Delta \phi (t) = \phi_1 (t) - \phi_2 (t) = arg\left ( \frac{z_{1}(t)z_{2}^{*}(t)}{\left | z_{1}(t) \right |\left | z_{2}(t) \right |} \right ) \\
\end{eqnarray*}
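The two equations above can be sketched directly in Python/SciPy (an illustrative sketch, not Brainstorm's implementation; the 10 Hz test signals and noise level are assumptions):

```python
import numpy as np
from scipy.signal import hilbert

fs = 250
t = np.arange(fs * 10) / fs
rng = np.random.default_rng(5)

# Phase-locked pair: same frequency, constant phase lag, plus a little noise
x = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
y = np.sin(2 * np.pi * 10 * t + np.pi / 4) + 0.1 * rng.standard_normal(t.size)

def plv(a, b):
    """Phase-locking value: modulus of the mean unit vector
    at the instantaneous phase difference of the analytic signals."""
    za, zb = hilbert(a), hilbert(b)
    dphi = np.angle(za * np.conj(zb))       # phase difference time series
    return np.abs(np.mean(np.exp(1j * dphi)))

print(plv(x, y))                            # close to 1: phases are locked
print(plv(x, rng.standard_normal(t.size)))  # much lower: no phase locking
```

A constant nonzero phase lag still gives a PLV near 1, since PLV measures the consistency of the phase difference, not its value.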

[ATTACH]

Process options

Result visualization

IMAGE

Method selection and comparison

The table below provides a brief comparison of the different connectivity metrics.

[ATTACH]

Comparing the different approaches with the ground truth, we find that the HCorr function performs slightly better than the other coherence functions.

[ATTACH]

Sensor- and source-domain connectivity analyses

As mentioned above, the different connectivity metrics can be computed with sensor or source data. However, sensor-domain connectivity analyses present two important limitations:

  1. They are not interpretable, as the relation between the estimated connectivity and the underlying neuroanatomy is not straightforward.

  2. Sensor data is severely corrupted by effects of field spread and volume conduction. Due to these effects the activity of a single brain area could cause a spurious connectivity between MEG/EEG sensors.

Despite these limitations, sensor-domain connectivity analysis is commonly used. One approach to reduce the negative impact of field spread on connectivity analyses is to perform them in the source domain. In addition, source-domain connectivity analyses are easier to interpret, as the neuroanatomy is taken into account. As a consequence, findings in this domain can be more readily used in group studies after normalization / registration. Regardless of the domain, it is highly recommended to have a clear hypothesis to test before starting the connectivity analysis. Although sensor- and source-domain connectivity analyses rely on different assumptions, the outcomes regarding the topology of the underlying networks should be consistent [Lai 2018].

Depending on the source estimation method, the number of obtained sources can be in the order of tens of thousands, making full connectivity analyses (N×N) impractical. As such, brain sources are often grouped into regions of interest, or ROIs (scouts in Brainstorm). The most critical step in a source-domain connectivity analysis is the definition of the ROIs; this is not a trivial procedure, as it depends on the source estimation method, the experimental task, and the data available [Schoffelen 2009]. Common approaches found in the literature to select the ROIs for connectivity analysis are:

Being an exploratory analysis, full-brain connectivity analysis can help to get a better understanding of the acquired data and to develop hypotheses to test. However, its outcomes should not be considered conclusive, as they may be the result of circular analysis [Kriegeskorte 2009].

The optimal selection of ROIs to perform source-domain connectivity analysis is still an open question.

Connectivity measure on real data : MEG/EEG data

Let's go back to our auditory oddball dataset. With this experiment, we would like to explore the temporal dynamics of the auditory network, the deviant detection, and the motor response. According to the literature, we expect to observe at least the following effects in their respective areas of the brain:

  1. From 0 to 150 ms: bilateral activity in the primary auditory cortex (P50, N100), in both experimental conditions (standard and deviant beeps).

  2. From 100 to 300 ms: bilateral activity in the inferior frontal gyrus and the auditory cortex, corresponding to the detection of an abnormality (latency: 150-250 ms), for the deviant beeps only.

  3. From 300 to 550 ms: activation of frontal regions related to decision making and motor preparation, for the deviant beeps only (after 300 ms).

In the Scouts tutorial, we created 4 scouts for these regions of interest: A1L and A1R for the left and right primary auditory cortices, IFGL for the left inferior frontal gyrus, and M1L for the left primary motor cortex.

For this data we select three main time windows to compute the connectivity:

time 1 : 0-150 ms: bilateral response in the primary auditory cortex (P50, N100), in both experimental conditions (standard and deviant beeps).

time 2 : 100-300 ms: bilateral activity in the inferior frontal gyrus and the auditory cortex, corresponding to the detection of an abnormality (latency: 150-250 ms), for the deviant beeps only.

time 3 : 300-550 ms: activation of frontal regions related to decision making and motor preparation, for the deviant beeps only (after 300 ms).

The computations are done here only for the second recording.


Source level

Connectivity is computed at the source points (dipoles) or over defined brain regions, also called scouts. The signal used at this level is obtained from the inverse problem process, in which each source-level node (dipole) is assigned an activation value at each time point.

Therefore, we can compute the connectivity matrix between all pairs of nodes. This is possible only if the inverse problem has been computed (ref to tuto here).

To run this in Brainstorm, drag and drop the source files into the Process1 tab, select the sources option, click on the [Run] button, and then select the connectivity measure that you want to compute.

As in the previous section, we can compute the source connectivity matrix for each trial, then average over all trials. However, this process is time and memory consuming: for each trial, a matrix of 15002x15002 elements is computed and saved on the hard disk (~0.9 GB per trial). In the case of unconstrained sources, the size is 45006x45006.

[ATTACH]

These are obviously very large matrices, and computing them dipole by dipole does not really make sense. Therefore, the strategy is to reduce the dimensionality of the source space by adopting a parcellation scheme, in other words, by using scouts. Instead of computing connectivity values between pairs of individual dipoles, we compute them between sets of dipoles that belong to given cortical areas.
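This dimensionality reduction can be sketched in Python/NumPy (the shapes and the scout vertex lists below are hypothetical; in Brainstorm the scouts come from the surface atlas): the 'mean' scout function averages the source time series within each scout before the connectivity metric is computed.

```python
import numpy as np

rng = np.random.default_rng(6)
n_sources, n_times = 15002, 500                # constrained source map
sources = rng.standard_normal((n_sources, n_times))

# Hypothetical parcellation: each scout is a list of vertex indices
scouts = {"A1L": np.arange(0, 100), "A1R": np.arange(100, 220)}

# 'Mean' scout function: average the source time series within each scout
scout_ts = np.vstack([sources[idx].mean(axis=0) for idx in scouts.values()])

R = np.corrcoef(scout_ts)                      # connectivity on scouts, not dipoles
print(scout_ts.shape, R.shape)                 # (2, 500) (2, 2)
```

The connectivity matrix shrinks from n_sources x n_sources to n_scouts x n_scouts, which is what makes the source-domain analysis tractable.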

The choice of the optimal parcellation scheme for the source space is not easy. A sensible choice is a parcellation based on anatomy, for example a Brodmann parcellation. These atlases are imported in Brainstorm as scouts (cortical regions of interest) and saved directly in the surface files, as explained in this tutorial.

In this tutorial, we will use the scouts of the Destrieux atlas (following figure). [ATTACH]

To use this atlas, in the connectivity process options, check the box 'Use scouts', select the scout function 'Mean', choose to apply the scout function 'Before' the connectivity metric, and save individual results.

In this tutorial, we select correlation as an example; the same procedure applies to the other methods.

[ATTACH]

For more details on these options, please refer to this forum thread.

The following matrix is the result obtained with these scouts; its size is 148x148 for this atlas (~400 KB).

[ATTACH]



Coherence

Correlation


For time 1, we find high correlation values in both hemispheres over the temporal areas.

This connectivity is observed between areas 99 and 41, and between areas 42 and 100.

Corresponding to: name of the areas here

Similar results are observed for both the deviant and the standard sounds.

[ATTACH] [ATTACH]

For the time 3,


PLV

PLV


Use the option: right-click on the figure > Graphic Options > Display Region max, or simply press the M key.

[ATTACH] [ATTACH]

TODO : Sensors level

Connectivity is computed at the sensor (electrode) level from the recorded time series.

PLV

[ATTACH] [ATTACH]





Advanced

TODO : discuss

- Explain or give more information about the methods and how to choose the best parameters

ex: plv better with 100 samples & narrow bands


- Explain the choice either with ERP or without, and why (link to the cited paper, can't find it)

- Show/add other relevant measures of statistics to separate the two conditions

- Add the option : checkbox remove the erp for PLV and CORR and PTE

- ...

Sections to add

On the hard drive

TODO: Document data storage.

Additional documentation

References

1. Reference #1

2. Reference #2

3. Reference #3

Articles

Forum discussions










Tutorials/Connectivity (last edited 2021-04-30 00:31:58 by RaymundoCassani)