Help for connectivity pipeline

Brainstorm questions
Dear Brainstorm Expert:
I am wondering if I may get some feedback and help with my first MEG analysis using Brainstorm? I downloaded Brainstorm last week (the latest version). The online tutorials are very helpful and I have followed them carefully, but I still need some help to achieve my goal. The ultimate goal is to look at hippocampal connectivity with the rest of the brain, and to compare the connectivity difference between two experimental conditions (condition A and B) between Group 1 and Group 2.

Below are my pipeline and questions. Please also let me know if each step looks fine. This is much appreciated!

(1) Each subject's anatomy is processed through the FreeSurfer reconstruction pipeline, and each subject has a single run of MEG recording (CTF format). The MEG data are first band-pass filtered at 1-50 Hz on the continuous recordings, and the transient time windows at the beginning and end of the continuous data (a few seconds, based on the information given by "View filter response") were manually marked as bad segments.

(2) The stimulus delay is first corrected, then the power spectrum is inspected and a notch filter applied, followed by bad channel inspection and removal, and artifact removal with SSP projectors (for eye blinks, heartbeats, etc.). Lastly, head motion is corrected (shifting to the center position of the run), and the entire recording is inspected again and a few additional bad segments are marked. Then, epochs are imported for the two experimental conditions (-200 ms to 600 ms). => When importing epochs, should I uncheck the "remove DC offset" option (given that I have filtered the continuous data at 1-50 Hz in the beginning)?

The steps below are based on the tutorial for event-related source localization, and I wonder whether they should still be done before I can do any connectivity analysis?

(3) Computing the head model => For the source space, should I select "Cortex surface" or "MRI volume"? (This is regarding precise localization and connectivity estimates for deep-brain sources like the hippocampus.) For the forward modeling method, I used "MEG: Overlapping spheres".

(4) Computing the noise covariance: Compute from recordings using a baseline of -200 ms to 0 ms, with "Remove DC offset: Block by block, to avoid effects of slow shifts in data". => Are these good? There is no additional noise recording available.
Computing sources: Method: "Minimum norm imaging", Measure: "Current density map"; for the source model, I used "Unconstrained" because my focus for this dataset is on hippocampal activity. => Would this be fine for later connectivity processing? I then normalize the current density maps and generate z-scored source maps for each subject and each condition.

(5) For the connectivity analysis, my question is which files should I drag and drop into the Process1 box? The links to all the trials of each experimental condition (and not the averaged link), right?

(6) Somehow the options for the type of connectivity do not seem to match the online tutorial documentation, so I don't know which connectivity analysis is best for me. If I am interested in examining the degree of phase synchronisation, or something like "lagged coherence" or "phase lag index", which one should I go for? And what parameters should I use?

(7) Finally, how can I subtract connectivity between trial conditions? I assume I can then average these differences for each subject, right?
And then, how do I compute the connectivity sources? Just like the event-related source localization above? And after this, is there any normalization, like the normalization of the event-related sources above using the pre-stimulus window of -200 ms to 0 ms, that should be done before group analysis?

(8) Before group analysis, I should project the connectivity sources to a standard brain template, right? I wonder how? Should I follow the tutorial "Group analysis: Subject coregistration"? (And which file should I use for the projection?)

(9) Finally, I imagine I will end up with something like a Group folder containing the (normalized?) and projected 1xN connectivity sources, with the connectivity difference files [connectivity(A) - connectivity(B)] per subject. How do I then run a t-test on them between Group 1 and Group 2? And is it also possible to control for covariates (e.g., age and IQ) in this group comparison (a simple ANCOVA), and how?

(10) Regarding the ROI, is there any documentation on how to generate it and use it for connectivity analysis? E.g., should I de-project the standard-space ROI back to the individual anatomy space to do the connectivity analysis? Or can this be done directly in standard (i.e., MNI) space? Somehow, in all the connectivity processing tabs, I didn't see any place to load an ROI file...

Thank you so much for your time. Your expert feedback and information would be greatly appreciated.

Yuwen
University of Toronto

Hello,

I switched your message to a public topic because it can be useful for other users.

Then, epochs are imported for the two experimental conditions (-200 ms to 600 ms).

This might be a bit short for time-frequency or connectivity analysis. If you can, think about increasing the duration of your epochs, pre- and post-stimulus.

When importing epochs, should I uncheck the "remove DC offset" option (given that I have filtered the continuous data at 1-50 Hz in the beginning)?

Indeed, if you have removed all the components of the signal under 1 Hz, you don't need to remove the signal mean again.
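
For reference, the scripted equivalent of this import step looks roughly like the sketch below (MATLAB, Brainstorm's bst_process API). The option names follow scripts generated with the Pipeline editor's "Generate .m script" menu and may vary between versions; the subject and event names are placeholders:

```matlab
% sFilesRaw: link(s) to the filtered continuous recordings in the database.
% Import epochs around an event, without removing the DC offset.
sEpochs = bst_process('CallProcess', 'process_import_data_event', sFilesRaw, [], ...
    'subjectname', 'Subject01', ...     % placeholder subject name
    'condition',   '', ...
    'eventname',   'conditionA', ...    % placeholder event name
    'timewindow',  [], ...
    'epochtime',   [-0.2, 0.6], ...     % epoch window in seconds
    'createcond',  1, ...
    'usectfcomp',  1, ...               % apply the CTF compensation
    'usessp',      1, ...               % apply the SSP projectors
    'freq',        [], ...
    'baseline',    []);                 % [] = do NOT remove the DC offset
```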

The steps below are based on the tutorial for event-related source localization, and I wonder whether they should still be done before I can do any connectivity analysis?

This is up to you: connectivity analysis can be done in sensor space or in source space.

(3) Computing the head model => For the source space, should I select "Cortex surface" or "MRI volume"? (This is regarding precise localization and connectivity estimates for deep-brain sources like the hippocampus.)

To my knowledge, there are still no tools working correctly for connectivity analysis on unconstrained source models (3 dipoles with orthogonal directions for each 3D location). Therefore, I recommend doing your source analysis with constrained locations and constrained orientations.

@hossein27en Can you confirm?

(4) Computing the noise covariance: Compute from recordings using a baseline of -200 ms to 0 ms, with "Remove DC offset: Block by block, to avoid effects of slow shifts in data". => Are these good?

Looks good. If you increase the baseline in your epochs, increase it here as well.
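
In script form, with the same caveats as above (option names taken from GUI-generated scripts, to be checked against your version), this step could look like:

```matlab
% Noise covariance from the pre-stimulus baseline of the imported epochs,
% with block-by-block DC offset removal.
bst_process('CallProcess', 'process_noisecov', sEpochs, [], ...
    'baseline',    [-0.2, 0], ...  % noise window: increase if epochs get longer
    'sensortypes', 'MEG', ...
    'target',      1, ...          % 1 = noise covariance (2 = data covariance)
    'dcoffset',    1, ...          % 1 = remove DC offset block by block
    'identity',    0, ...          % 0 = compute from recordings, not identity
    'copycond',    0, ...
    'copysubj',    0);
```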

I used "Unconstrained" because my focus for this dataset is on the hippocampal activity=>Would this be fine for later connectivity processing?

No, this is not a good idea, because I think the connectivity processes in Brainstorm are not well equipped to handle unconstrained sources.
Additionally, Attal et al. showed that hippocampal sources are best modelled with constrained sources:
https://neuroimage.usc.edu/brainstorm/Tutorials/DeepAtlas#Location_and_orientation_constraints
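
For completeness, here is a hedged sketch of a constrained minimum-norm computation. The 'inverse' structure follows scripts generated by recent Brainstorm versions (process_inverse_2018); field names may differ in older releases:

```matlab
% Minimum norm, current density maps, with dipole orientations constrained
% to the cortex normal ('SourceOrient' = 'fixed').
bst_process('CallProcess', 'process_inverse_2018', sEpochs, [], ...
    'output',  1, ...                        % 1 = one shared kernel per folder
    'inverse', struct(...
        'Comment',        'MN: MEG', ...
        'InverseMethod',  'minnorm', ...     % minimum norm imaging
        'InverseMeasure', 'amplitude', ...   % current density map
        'SourceOrient',   {{'fixed'}}, ...   % constrained: normal to cortex
        'Loose',          0.2, ...
        'UseDepth',       1, ...
        'WeightExp',      0.5, ...
        'WeightLimit',    10, ...
        'NoiseMethod',    'reg', ...
        'NoiseReg',       0.1, ...
        'SnrMethod',      'fixed', ...
        'SnrRms',         1e-6, ...
        'SnrFixed',       3, ...
        'ComputeKernel',  1, ...
        'DataTypes',      {{'MEG'}}));
```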

(5) For the connectivity analysis, my question is which files should I drag and drop into the Process1 box? The links to all the trials of each experimental condition (and not the averaged link), right?

Connectivity and time-frequency analyses require long signals and fine signal dynamics, so it is not advised to use averaged responses. Use continuous recordings or individual trials instead.

(6) Somehow the options for the type of connectivity do not seem to match the online tutorial documentation, so I don't know which connectivity analysis is best for me. If I am interested in examining the degree of phase synchronisation, or something like "lagged coherence" or "phase lag index", which one should I go for? And what parameters should I use?
(7) Finally, how can I subtract connectivity between trial conditions? I assume I can then average these differences for each subject, right?

This part is still under construction.
@hossein27en might be able to address some of your questions.

And then, how do I compute the connectivity sources?

Run your connectivity analysis either on sensor or on source signals.

And after this, is there any normalization, like the normalization of the event-related sources above using the pre-stimulus window of -200 ms to 0 ms, that should be done before group analysis?

Most connectivity measures will not have any time resolution anymore, so there is no pre-stimulus baseline left to normalize with.

(8) Before group analysis, I should project the connectivity sources to a standard brain template, right? I wonder how? Should I follow the tutorial "Group analysis: Subject coregistration"? (And which file should I use for the projection?)

This is needed only if you want to average your results in source space.
If your goal is to compute one connectivity measure for each subject and condition, and then test for the significance of the differences between the two conditions, you don't necessarily have to project your full source maps to a template.
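
If you do decide to project, the scripted version is essentially a one-liner (assuming the process name 'process_project_sources', as produced by the Pipeline editor; check your version):

```matlab
% Project subject-level source results to the default anatomy template.
sProjected = bst_process('CallProcess', 'process_project_sources', sFiles, [], ...
    'headmodeltype', 'surface');  % 'surface' = cortex-surface source space
```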

(9) Finally, I imagine I will end up with something like a Group folder containing the (normalized?) and projected 1xN connectivity sources, with the connectivity difference files [connectivity(A) - connectivity(B)] per subject. How do I then run a t-test on them between Group 1 and Group 2? And is it also possible to control for covariates (e.g., age and IQ) in this group comparison (a simple ANCOVA), and how?

Everything you can do in terms of statistics is described in the Statistics tutorial. There are no multivariate tests available yet. You can export your subject-level results and do your statistical analysis outside of Brainstorm if needed.
https://neuroimage.usc.edu/brainstorm/Tutorials/Statistics
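
For the covariates question specifically, one option is to export one connectivity value (e.g., a scout-averaged [A-B] difference) per subject and run the model in plain MATLAB. A minimal sketch with made-up numbers; it requires the Statistics and Machine Learning Toolbox, and all variable names are hypothetical:

```matlab
% Hypothetical example: one [connectivity(A) - connectivity(B)] value per
% subject, exported from Brainstorm, for two groups of 15 subjects each.
g1 = randn(15,1);            % replace with Group 1 exported values
g2 = randn(15,1) + 0.3;      % replace with Group 2 exported values
conn  = [g1; g2];
group = categorical([zeros(15,1); ones(15,1)]);
age   = randn(30,1);         % replace with the real covariates
iq    = randn(30,1);

% Two-sample t-test between groups:
[~, pTtest] = ttest2(g1, g2);

% ANCOVA-style model: group effect adjusted for age and IQ;
% look at the p-value of the 'group' coefficient.
mdl = fitlm(table(conn, group, age, iq), 'conn ~ group + age + iq');
disp(mdl.Coefficients)
```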

(10) Regarding the ROI, is there any documentation on how to generate it and use it for connectivity analysis?

No, sorry. In general, there are two strategies: starting with a strong anatomical hypothesis and designing your region of interest independently from the data, or guiding your ROI analysis based on the results you obtain.
Keep them relatively small, especially with constrained sources. Many of the ROIs available in the FreeSurfer anatomical atlases are too big and mix multiple functional regions.

E.g., should I de-project the standard-space ROI back to the individual anatomy space to do the connectivity analysis?

You can design your ROIs for each subject individually, or you can design them on the template and then project them to each subject (menu Scout > Project in the Scout tab).


@yuwenh

I confirm.

As @Francois said, we are working on the tutorial for connectivity analysis. We might add some general guidelines for choosing a method, but it completely depends on your project. Based on what you said, I would suggest starting with the "lagged coherence".

Once you have one connectivity matrix for each condition (the average of the matrices of all the trials) and each subject, you can subtract the two conditions to obtain a difference matrix for each subject, and you can run the statistical analysis on that.
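
In script form, that two-step logic could look roughly like the sketch below. Strong caveat: the connectivity processes and their options change between Brainstorm versions (the 'lcohere2019' lagged-coherence option, for instance, only exists in recent releases), so build the pipeline once in the GUI and use "Generate .m script" to get the exact syntax for your installation:

```matlab
% One averaged connectivity matrix per condition (all trials of condition A
% dropped in Process1), then the A-B difference in the Process2 tab.
sCohA = bst_process('CallProcess', 'process_cohere1n', sTrialsA, [], ...
    'timewindow', [], ...
    'cohmeasure', 'lcohere2019', ... % lagged coherence (recent versions only)
    'outputmode', 3);                % 3 = one matrix averaged over input trials
sCohB = bst_process('CallProcess', 'process_cohere1n', sTrialsB, [], ...
    'timewindow', [], ...
    'cohmeasure', 'lcohere2019', ...
    'outputmode', 3);

% Difference matrix [A - B] for this subject:
sDiff = bst_process('CallProcess', 'process_diff_ab', sCohA, sCohB);
```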

  • Hossein

Thank you very much for your helpful information, Hossein.
I'll follow your direction and see if I can make sense of my processing pipeline!

Thank you & talk soon,
Iris

Dear Francois:
Thank you for your previous responses regarding my first Brainstorm analysis pipeline.
I have further questions that I need your help with so I can straighten out my pipeline. Right now my processing steps include all the pre-processing steps adhering to the tutorials, followed by importing epochs (-1000 ms to 1000 ms around each event, for connectivity analysis), then head model computation, then noise covariance computation, followed by source estimation. Does the order of these steps look correct to you?

For source estimation, I select the following options:
Method: Minimum norm imaging
Measure: Current density map
Source Model: Constrained: Normal to cortex
=> Since I don't use the LCMV beamformer method, I assume I don't need to calculate the "data covariance", but I do need to calculate the noise covariance (before source estimation), am I correct?

I am planning to conduct the amplitude envelope correlation and phase locking connectivity analyses in Brainstorm, and I have an ROI generated from another study (a DTI study) which includes both white matter and grey matter areas surrounding the hippocampus. I want to conduct the connectivity analysis using the ROI/scout in two ways: one is to apply the ROI mask generated from this other study as the scout for the current connectivity analysis, and the other is to directly use an atlas that Brainstorm provides (e.g., the FreeSurfer aseg atlas; the anatomical data were processed through the FreeSurfer pipeline).

My first question: Earlier you mentioned that I can design my scout on the standard brain (template) space and project it to each subject using the "Scout > Project" function in the Scout tab. I wonder whether Brainstorm accepts .nii mask files (binarised) as the scout region, either in MNI space or already in the subject's anatomical/native space (I have both), for the current connectivity analysis?

I assume an ROI/scout that includes white matter regions is permitted for Brainstorm's connectivity analyses, is that right? In this case, my second question is whether, during the head model step, I should select "MRI volume" instead of "Cortex surface"?

On the Source estimation tutorial page, it says the "Constrained: Normal to cortex" option is "only for surface grids" and the "Unconstrained" option can be used for either "surface" or "volume" grids. Does this mean that if I need to use non-surface grids (volume-based grids) in my head model step (by selecting "MRI volume" instead of "Cortex surface"), I will have to use "Unconstrained" in my source estimation? I remember both you and Hossein have advised that it's better not to use an unconstrained source model for connectivity analysis... So how do I achieve the goal of using a deep-brain structure ROI (hippocampus) that includes both white matter and grey matter regions?

If I use the FreeSurfer aseg subcortical atlas as the scout, which is a volume-based atlas, does that also mean I have to use the "MRI volume" option for my head model step instead of "Cortex surface"? Or are they unrelated, so that I can still use "Cortex surface" in my head model and use the aseg atlas as scouts (this way I can keep the source estimation model as "constrained")?

Finally, after I straighten out these steps, if I am only interested in the connectivity between my ROI/scout and the rest of the brain, what would be the best way to run the analysis? Should I still calculate the full NxN matrix for the whole brain first, or can I just run 1xN using the ROI/scout as the "1" seed? But the scout will contain more than one vertex/grid point, so I wonder how it would be treated as a single seed in a 1xN connectivity analysis.

Thank you very much for your attention and your most helpful answers.

Best,
Yuwen

Right now my processing steps include all the pre-processing steps adhering to the tutorials, followed by importing epochs (-1000 ms to 1000 ms around each event, for connectivity analysis), then head model computation, then noise covariance computation, followed by source estimation. Does the order of these steps look correct to you?

Yes, this looks OK.

Since I don't use the LCMV beamformer method, I assume I don't need to calculate the "data covariance", but I do need to calculate the noise covariance (before source estimation), am I correct?

Correct: you don't need a data covariance, only the noise covariance.

Earlier you mentioned that I can design my scout on the standard brain (template) space and project it to each subject using the "Scout > Project" function in the Scout tab. I wonder whether Brainstorm accepts .nii mask files (binarised) as the scout region, either in MNI space or already in the subject's anatomical/native space (I have both), for the current connectivity analysis?

If you work with surfaces (i.e., source space = cortex surface), the ROIs (= scouts) are subsets of the vertices of the cortex surface, not .nii volumes. It is recommended to work only with ROIs defined on the cortex surface (either scouts created manually, or scouts available in one of the surface anatomical atlases).

You can import volume masks and convert them into surface scouts (Scout tab, menu Atlas > Load atlas, file format "Volume mask or atlas"), but this is not very accurate and leads to weird-looking scouts. You can choose whether the input volume is in subject or MNI space (just change the file format accordingly).

I assume an ROI/scout that includes white matter regions is permitted for Brainstorm's connectivity analyses, is that right?

There is no notion of grey or white matter if you work with surface-based source models: you just have one surface (which can be either the grey/white interface or the pial surface).

In this case, my second question is whether, during the head model step, I should select "MRI volume" instead of "Cortex surface"?

You could. But then you wouldn't be able to constrain the orientation of the dipoles in the source estimation, you would obtain unconstrained source maps with three dipoles at each location. This is a problem for the connectivity analysis, as I think the connectivity processes are still not working for unconstrained sources.
@hossein27en @Sylvain @pantazis @John_Mosher : Am I wrong?

So how do I achieve the goal of using a deep-brain structure ROI (hippocampus) that includes both white matter and grey matter regions?

Do not focus so much on this grey/white matter localization. Source imaging with MEG or EEG has a poor spatial resolution (on the order of a centimeter)... Anything in the surroundings of your region of interest should give you similar time series, no matter what type of source model you are using (surface/volume, constrained/unconstrained orientations). For complex processing of the source signals, use a simple source model, otherwise the complexity of your analysis pipeline will grow exponentially...

If you are not sure about the implications of the different source modeling approaches, I'd recommend you start by testing and comparing them on a single subject. Ideally, if your results are robust enough, you should make similar observations with all the approaches: they are essentially different ways of representing the same information.

If I use the FreeSurfer aseg subcortical atlas as the scout, which is a volume-based atlas, does that also mean I have to use the "MRI volume" option for my head model step instead of "Cortex surface"? Or are they unrelated, so that I can still use "Cortex surface" in my head model and use the aseg atlas as scouts (this way I can keep the source estimation model as "constrained")?

If you really want to have the hippocampus represented in your model, you could maybe use a "mixed source model", with only constrained orientations for all the regions: http://neuroimage.usc.edu/brainstorm/Tutorials/DeepAtlas
But again, this will make the connectivity analysis more complicated...

Finally, after I straighten out these steps, if I am only interested in the connectivity between my ROI/scout and the rest of the brain, what would be the best way to run the analysis? Should I still calculate the full NxN matrix for the whole brain first, or can I just run 1xN using the ROI/scout as the "1" seed? But the scout will contain more than one vertex/grid point, so I wonder how it would be treated as a single seed in a 1xN connectivity analysis.

If you are interested in the connectivity values between one ROI and other ROIs, select a "1xN" process. The values for the different vertices within one ROI are grouped in a similar way as for the time-frequency analysis: the scout function is applied before or after the connectivity measure.
https://neuroimage.usc.edu/brainstorm/Tutorials/TimeFrequency#Scouts
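
As an illustration, a seed-based call on the source files might look like the sketch below; the atlas and scout names are placeholders, and as before the option names should be verified against a GUI-generated script for your version:

```matlab
% 1xN phase locking value, using one scout as the seed: the scout function
% ('mean') collapses all the vertices of the scout into one seed signal.
sPlv = bst_process('CallProcess', 'process_plv1', sSources, [], ...
    'timewindow', [], ...
    'scouts',     {'aseg', {'Hippocampus L'}}, ... % placeholder atlas/scout names
    'scoutfunc',  1, ...   % 1 = mean over the scout vertices
    'scouttime',  1, ...   % apply the scout function BEFORE the measure
    'freqbands',  {'theta', '4, 8', 'mean'}, ...   % placeholder frequency band
    'mirror',     1, ...
    'keeptime',   0, ...
    'outputmode', 3);      % average over the input trials
```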

Dear Francois:
Thank you very much for your detailed responses to my questions. I remember you previously mentioned that hippocampal sources are best modeled using a constrained approach. Also, the hippocampus scout can be generated from the surface data (detailed below). So, to keep things simple and straightforward, would it be better to use a "Cortex surface" based head model and "constrained" source estimation? And when generating the scouts, based on the following information, I assume the subcortical scouts can be generated as surface vertices for the connectivity analyses?

https://neuroimage.usc.edu/brainstorm/Tutorials/LabelFreeSurfer

See section:

Subcortical structures: aseg atlas

Do I then still need to follow:
https://neuroimage.usc.edu/brainstorm/Tutorials/DeepAtlas
?

Thank you,
Yuwen


Yes, that would probably work.
Try it on one subject and see what you get.