I’m new to functional connectivity with EEG, but quite proficient with its fMRI counterpart.
First, could someone suggest some bibliographic references explaining the rationale/principles of this analysis in EEG, particularly of the algorithms implemented in Brainstorm?
Second, some general reasoning… I suppose that the main difference (if we refer to source connectivity, not sensor connectivity) concerns the temporal resolution. In MRI I have to provide minutes (7-10) of continuous processed data. In EEG it looks like you can provide a source reconstruction based on the ERP…
The output for a 1 (scout/vertex) x N (vertices) analysis is always a spatial map containing correlating and anti-correlating voxels (if I run Connectivity > Correlation). My question is:
Are there any rules concerning the number of time points needed for this analysis? I mean, can I statistically compare connectivity matrices of:
2-3-4 hundred samples (e.g. the early components of an evoked potential), in order to compute connectivity differences among conditions/tasks;
20-30 samples, in order to separately analyze different components of an ERP?
I would recommend the following article as an entry point to connectivity metrics with EEG and MEG:
Schoffelen, J.-M. & Gross, J. (2009). Source connectivity analysis with MEG and EEG. Human Brain Mapping, 30, 1857-1865.
Because of the high temporal resolution, the choice of a connectivity metric is quite open. Another reason is that the mechanisms ruling functional connectivity between neural assemblies are still not entirely understood: each method therefore corresponds to a hypothesis about a possible mechanism. I would suggest you first consider one hypothesis that you would like to test with your data.
In a nutshell, temporal correlation would be an appropriate metric for low-frequency (<5 Hz) activity, which is the dominant mode in MEG/EEG source time series. The minimum number of time samples required (or trials, if the metric is computed across trials rather than time) should correspond to at least a couple of cycles of the dominant low frequency in the data. If that is, say, 2 Hz at a 600 Hz sampling rate, you'd need at least 1 s of data (600 samples). The generic issue with connectivity metrics is that the signals are transient and non-stationary, hence functional connectivity is itself non-stationary; this is why people have considered evaluating connectivity statistics across trials rather than across time.
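As a back-of-the-envelope check of that arithmetic (a minimal sketch; the 2 Hz, 600 Hz and two-cycle figures are just the example values above):

% Minimum data length: at least a couple of cycles of the dominant low frequency
fLow    = 2;                           % dominant low frequency (Hz)
fs      = 600;                         % sampling rate (Hz)
nCycles = 2;                           % minimum number of cycles to cover
minDuration = nCycles / fLow;          % = 1 s
minSamples  = ceil(minDuration * fs);  % = 600 samples
fprintf('Need >= %g s of data (%d samples)\n', minDuration, minSamples);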
Phase-locking values (PLV) and coherence are metrics that quantify similarities in phase or amplitude changes between oscillatory components of the source time series.
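For illustration, here is a minimal PLV sketch computed across trials (not the Brainstorm implementation; the example data are synthetic, and hilbert requires the Signal Processing Toolbox):

% x, y: [nTrials x nTime] source signals, band-pass filtered around the
% frequency of interest (synthetic placeholders here)
nTrials = 100; nTime = 600; fs = 600; t = (0:nTime-1)/fs;
x = cos(2*pi*10*t + 0.1*randn(nTrials,1)) + 0.5*randn(nTrials,nTime);
y = cos(2*pi*10*t + 0.1*randn(nTrials,1)) + 0.5*randn(nTrials,nTime);
phix = angle(hilbert(x')');    % hilbert() works along columns, hence the transposes
phiy = angle(hilbert(y')');
plv  = abs(mean(exp(1i*(phix - phiy)), 1));  % [1 x nTime]: 0 = no locking, 1 = perfect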
The Granger model is an attempt to evaluate causality (effective connectivity) between regions; it is supposed to be computed on unfiltered data.
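To make the Granger idea concrete, a minimal bivariate sketch (again not the Brainstorm implementation; the model order and the synthetic data are hypothetical):

% Does the past of x improve the prediction of y? Synthetic example where x drives y:
N = 2000; x = randn(1,N); y = zeros(1,N);
for n = 3:N, y(n) = 0.5*y(n-1) + 0.4*x(n-2) + randn; end
p  = 5;                               % AR model order (arbitrary choice)
Y  = y(p+1:N)';
Ly = zeros(N-p,p); Lx = zeros(N-p,p);
for k = 1:p
    Ly(:,k) = y(p+1-k:N-k)';          % past of y
    Lx(:,k) = x(p+1-k:N-k)';          % past of x
end
e_r = Y - Ly*(Ly\Y);                  % residuals using the past of y only
e_f = Y - [Ly Lx]*([Ly Lx]\Y);        % residuals using the past of y and x
gc_xy = log(var(e_r)/var(e_f))        % > 0 suggests x "Granger-causes" y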
All these metrics are available in Brainstorm. Before we can help further, you first need to identify which type of effects/mechanisms you wish to probe with your data and we'll move from here.
I’m sorry, I’m editing the connectivity code intensely these days… I am changing many things in the way those connectivity measures are calculated and saved. This code is not considered stable, which is why it is not documented on the website.
I just did an update that could fix some of your problems. If it doesn’t, please report the full error messages.
There is no documentation yet, the methods are still under development/evaluation.
You can run the processes in the Connectivity section from the Process1 tab:
[1xN] estimates the connectivity measure between one signal (sensor or source) and all the others.
[NxN] estimates all the possible pairs of interactions.
A more flexible [1xN] interface is available from the Process2 tab (FilesA=1, FilesB=N). This allows you to estimate the interaction between different types of signals (e.g. FilesA=recordings, FilesB=sources). See the scripting sketch below.
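These processes can also be called from a script. A minimal sketch, assuming input files already selected in sFiles; the process name and options below are assumptions, to be checked against the script generated by the pipeline editor (Run > Generate .m script):

% Hypothetical scripted [NxN] correlation call (verify name/options in your version)
sResult = bst_process('CallProcess', 'process_corr1n', sFiles, [], ...
    'timewindow', []);        % empty = use the whole time window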
Regarding functional connectivity: is there a way to compute this analysis for deeper brain structures (nucleus accumbens, amygdala, etc.)? The FreeSurfer atlas does not include these regions. I also tried to create these scouts on the MRI, but I got an error saying they are not on the cortex. Any recommendations on how to implement this?
Hi Yagna,
We are currently working on those questions; the FreeSurfer ASEG atlases are now imported by the menu “Import FreeSurfer folder”. But it will probably take a few months before we release something that you can use for the connectivity measures.
Francois
Thanks Francois. Just one more question: is there a way to import the sub-cortical atlas from FreeSurfer without importing the entire folder? I am already in the middle of my analysis, and importing the whole folder would imply repeating my analysis on the newly imported surfaces with the atlas.
Also, when I imported the FreeSurfer folder, it only seems to import the Desikan-Killiany and Destrieux atlases, which do not include the subcortical labels.
To import the subcortical surfaces from FreeSurfer in an existing folder:
Right-click on the subject > Import surfaces > Select the “Volume mask” file type, and pick the file mri/aseg.mgz
The Desikan-Killiany and Destrieux atlases from FreeSurfer are surface-based atlases defined only on the cortical surface. They are imported as scouts on the cortex surface.
The aseg atlas defines other surfaces.
Is there a way to calculate NxN connectivity between scouts? I am looking at connectivity between two relatively broad regions (DLPFC and anterior cingulate). In theory, I want to map the coherence between each node in the DLPFC and each node in the anterior cingulate. I set my scout function to ALL for both scouts and extracted the time series from the cortical surface, so I have a matrix of signals for each ROI. I now want to calculate connectivity for each possible pair.
Is there a way to implement this in Brainstorm, or can I work around it with the mscohere function in MATLAB? I still want to group the coherence by frequency band. For my results, I’m hoping to get a 3D matrix of size N1 x N2 x 6, where N1 is the number of nodes in the DLPFC, N2 is the number of nodes in the RAC, and 6 is the number of frequency bands.
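For the mscohere workaround mentioned above, a minimal sketch (the sampling rate, band limits and Welch parameters are example choices, and s1/s2 stand in for the extracted scout time series):

% s1: [N1 x nTime] DLPFC sources, s2: [N2 x nTime] RAC sources (placeholders)
fs = 600; nTime = 10*fs;
s1 = randn(279, nTime); s2 = randn(58, nTime);
bands = [1 4; 4 8; 8 12; 12 30; 30 60; 60 100];   % 6 example bands (Hz)
win = hamming(fs); nov = fs/2; nfft = fs;         % 1 s windows, 50% overlap, 1 Hz bins
C = zeros(size(s1,1), size(s2,1), size(bands,1)); % N1 x N2 x 6
for i = 1:size(s1,1)
    for j = 1:size(s2,1)
        [cxy, f] = mscohere(s1(i,:), s2(j,:), win, nov, nfft, fs);
        for b = 1:size(bands,1)
            C(i,j,b) = mean(cxy(f >= bands(b,1) & f < bands(b,2)));
        end
    end
end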
Thanks Francois. It works. However, I have a few follow-up questions.
When I try to display the coherence, it gives me an error stating, “Connectivity matrix is 279x58. The graph display can be used only for square matrices.” So I exported the connectivity solution to MATLAB. The TF data is 16182x1x6 instead of 279x58x6. I am just wondering what the order of these elements is. I reshaped the matrix to 279x58x6 in MATLAB. Is that okay?
Is there a way to figure out which index number corresponds to which element, or should I assume the computation was conducted in increasing order (i.e. the first element corresponds to the coherence between the lowest index in the DLPFC and the lowest index in the RAC)?
Can I export the cortex indices/coordinates included in each ROI into MATLAB?
For one frequency bin (dimension 3), all the connectivity values are saved as a column vector in dimension 1, so that dimension 2 can be kept for time (some metrics change over time).
Yes, use the reshape command to convert this to an N1xN2xNf matrix. If you export the file to a structure “s”:
TF = reshape(s.TF, length(s.RefRowNames), length(s.RowNames), size(s.TF,3));
The name of the scout for each row (respectively column) is indicated in the field RefRowNames (respectively RowNames).
Within a scout, the indices of the sources are the same as in the scout definition (they should be sorted from lower to higher index anyway).
To export the indices of the vertices of the cortex: select the scout(s) in the “Scout” tab, and click on the menu “Scout > Export to Matlab”.
It exports the selected scout structures to MATLAB and displays the vertex indices at the same time.
Francois
Are you sure the indices within a scout are sorted? When I export the scout indices as described above, they aren’t listed in increasing order. Should I use the order in which the indices are exported, or should I assume the coherence matrix does in fact use the increasing order of indices for both ROIs?
You’re right, in the case of the imported anatomical atlases, the indices are not sorted.
It doesn’t matter much: all the functions using scouts use the list of vertices in the order they are listed in the scout structure.
Always use the scout.Vertices list; this is what is always used for indexing the ImageGridAmp or ImagingKernel matrices.
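As a minimal sketch of that indexing (assuming a scout structure sScout exported from the Scout tab, and a source file exported to sResults):

% Time series of one scout's sources, from a full results file:
scoutSources = sResults.ImageGridAmp(sScout.Vertices, :);        % [nVertices x nTime]
% For kernel-only files, apply the imaging kernel to the recordings F instead:
% scoutSources = sResults.ImagingKernel(sScout.Vertices, :) * F; % F: [nChannels x nTime]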
I know the coherence (MSC) tools have changed in this latest version of Brainstorm, making them more stable. The results for the same set of signals are definitely different from the older versions.
Can you explain how exactly the window length and overlap are computed, since the only parameters I can change are the max frequency and the frequency resolution? Also, can you provide any guidance on how to choose optimal parameters? I’m looking for changes in the mean coherence of z-scored signals in source space, at different time points throughout treatment, between two scouts (DLPFC and anterior cingulate). I’m interested in the characteristic frequency bands (delta, theta, alpha, beta, gamma).
For now, I’ve set my frequency resolution to 1 Hz and max frequency to 100 Hz.
The Z-score process should not change anything in the coherence values, as it is just an amplitude scaling+centering. If you want to skip it to go faster, you can.
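A quick numerical check of that point, with synthetic signals (zscore requires the Statistics Toolbox):

% MSC is invariant to per-signal rescaling; centering only affects bins near DC
fs = 600; t = (0:10*fs-1)/fs;
x = sin(2*pi*10*t) + randn(size(t));
y = sin(2*pi*10*t + 0.5) + randn(size(t));
[c1, f] = mscohere(x, y, hamming(fs), fs/2, fs, fs);
c2 = mscohere(zscore(x), zscore(y), hamming(fs), fs/2, fs, fs);
max(abs(c1(f > 2) - c2(f > 2)))   % negligible away from the DC bin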
This coherence process is still not fully tested and documented. Coherence is a very noisy measure; you have to be aware that it is difficult to get meaningful results with it.
You have to understand precisely what you can expect from this measure, its limits of application, and how it relates to your own recordings.
You will find all the technical details and references to the measure we use directly in the code of the function brainstorm3/toolbox/connectivity/bst_cohn.m
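As a rule of thumb for Welch-type estimators (a generic sketch of the usual relation between the parameters; check bst_cohn.m for what is actually implemented):

% The requested frequency resolution df fixes the window length: winLen = fs/df
fs = 600;                        % sampling rate (Hz)
df = 1;                          % requested frequency resolution (Hz)
winLen   = round(fs/df);         % 600 samples = 1 s windows
nOverlap = round(winLen/2);      % 50% overlap (a common default, assumed here)
nSamples = 60*fs;                % e.g. 60 s of data
nWindows = floor((nSamples - nOverlap)/(winLen - nOverlap))  % estimates averaged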