I get the connectivity (PLV) of only some scouts of the volume atlas

Hi. I'm new to Brainstorm so apologies if I ask something obvious.

I'm trying to compute the brain connectivity source estimation using epilepsy EEG data.

To do this, I've computed the head model using the MRI volume as the source space. Then I used the DKT atlas offered by Brainstorm to compute a volume atlas. For comparison, I did the same with the AAL3 atlas.

As expected, I obtained the correct volume parcellation, as can be seen in the figures (around 135 scouts for AAL3, around 98 for DKT). However, when I then computed the connectivity using PLV, only some of the volume scouts seem to have been used for this computation: DKT resulted in 24 connected scouts and AAL3 in 18.

Could you please help me understand if I'm doing something wrong?

Thank you!

  1. If you have only a subset of the scouts in the output files, it is most likely because you selected only some of them in the process options. In the PLV process options, make sure you select all the scouts (click on the list of scouts and press CTRL+A to select them all).

  2. Your screen captures show connectivity matrices [Nscouts x Nsources]. Is this really what you expected to obtain? Have you used the Process2 tab for computing these files?
    To get a [Nscouts x Nscouts] connectivity matrix, use the PLV [NxN] process from Process1 (see the sketch after this list).

  3. The option "Scout function: after" might be removed in a future release. To save some computation time, you can use the option "before" instead...
    @Raymundo.Cassani Is this correct?
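
For reference, here is a minimal sketch of what the PLV [NxN] process produces conceptually when the scout signals are aggregated before connectivity ("scout function: before"): one phase-locking value per pair of scout time series. This is generic Python, not Brainstorm code, and the scout_ts array is only a placeholder:

```python
# Minimal sketch (not Brainstorm code): PLV between scout time series,
# i.e. the "before" strategy where each scout signal is aggregated across
# its sources BEFORE computing connectivity.
import numpy as np
from scipy.signal import hilbert

def plv_nxn(scout_ts):
    """Return the [Nscouts x Nscouts] PLV matrix for band-limited signals."""
    phases = np.angle(hilbert(scout_ts, axis=1))        # instantaneous phase per scout
    n_scouts = scout_ts.shape[0]
    plv = np.zeros((n_scouts, n_scouts))
    for i in range(n_scouts):
        for j in range(n_scouts):
            dphi = phases[i] - phases[j]                 # phase difference over time
            plv[i, j] = np.abs(np.mean(np.exp(1j * dphi)))
    return plv

# Example with placeholder data: 98 scouts (DKT-like), 2000 time samples
rng = np.random.default_rng(0)
scout_ts = rng.standard_normal((98, 2000))
print(plv_nxn(scout_ts).shape)                           # (98, 98): one value per scout pair
```

With the "after" option, the same metric is instead computed between all pairs of elementary sources and only then reduced to scouts, which is where the extra computation comes from.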


I thought the "after" option was the correct one and that the "before" option would lead to noisy data... (???)

Well, yes, this is what we initially promoted without testing it much.
We're now in the process of evaluating this more thoroughly, and it doesn't look like the "after" option brings much improvement compared to the considerable extra computation time.
@Raymundo.Cassani @Sylvain Is this correct?


ERRATUM:
After discussing this today with the rest of the team, I realized that it is still not clear what we will recommend in the tutorial... this is still under evaluation.

At the moment:

  • it is preferable to compute connectivity only on CONSTRAINED source maps (the unconstrained case might still be handled in a suboptimal way)
  • Scout function before/after: no clear recommendation for the moment...

We should reach a consensus and finish writing a tutorial for connectivity before the end of the year.


:open_mouth: thanks!

Thank you very much @Francois.

  1. Indeed, I must have used only some of the scouts.
  2. Gotcha. Although I'm not sure at which point I selected the [Nscouts x Nsources] option. Anyway, I'll sort it out.
  3. Very important: My laptop cannot finish the computation (I get the error 'array exceeds maximum array size preference') if I use the "Scout function: after" option. Please keep the "Scout function: before" option; it would be greatly appreciated.

Have a lovely day!

For some reason, the CONSTRAINED source maps were unavailable to me (grayed out). Could you please tell me why this might be?

Indeed, the "after" option makes the complexity of the problem explode...
This is under evaluation; I hope we'll find a good compromise between the quality of the outputs and the computation time.
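
To give a rough idea of why (the numbers below are assumptions for illustration, not taken from this dataset): with "after", the connectivity is first evaluated across all pairs of elementary sources before being reduced to scouts, whereas "before" only ever handles the scout-level matrix.

```python
# Back-of-the-envelope sketch: storage for a dense connectivity matrix
# (grid size and scout count are illustrative assumptions only)
n_grid   = 15000                     # hypothetical volume grid points
n_src    = n_grid * 3                # 3 orientations per point (unconstrained)
n_scouts = 98                        # e.g. a DKT volume parcellation

bytes_after  = n_src ** 2 * 8        # "after": full [Nsources x Nsources] in float64
bytes_before = n_scouts ** 2 * 8     # "before": only [Nscouts x Nscouts]

print(f"after : {bytes_after / 1e9:.1f} GB")    # ~16.2 GB for this example
print(f"before: {bytes_before / 1e3:.1f} kB")   # ~76.8 kB
```

Orders of magnitude like these are consistent with the 'array exceeds maximum array size preference' error reported above.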

For some reason, the CONSTRAINED source maps were unavailable to me (grayed out).

Because you have a volume head model selected as the default head model in your database explorer.


Thank you @Francois. Indeed, I wanted to compute the volume head model so I guess I'll stay with the Unconstrained option.

I have one more question if you don't mind :slight_smile:

I am working at the moment on estimating the epileptic brain activity at the source level. I have used the volume head model to estimate the activity sources using MNE and dSPM. Afterwards, I want to project my sources onto the template grid and compute the connectivity.

However, the source estimation using unscaled dSPM shows very high levels of synchronisation when visualised. I see that this is probably simply because the activation scale is computed for each window. Therefore, I decided to normalise the results and I computed the scale-average dSPM. The source estimation looks great now as the activation scale is normalised.

I am confused however by the message I got when scaling the dSPM source estimation:

Do I understand correctly that the resulting scaled dSPM cannot be projected onto the template grid then used for connectivity computation (since it can be used for visualisation and interpretation only)?

Apologies if this question is trivial; I googled it before deciding to ask. Thank you so much again for your invaluable help.

I have one more question if you don't mind

When discussing a new topic, please create a new thread on the forum, otherwise other users won't be able to access the answers easily.

I have used the volume head model to estimate the activity sources using MNE and dSPM. Afterwards, I want to project my sources onto the template grid and compute the connectivity.

Make sure you are doing this using this procedure:
https://neuroimage.usc.edu/brainstorm/Tutorials/CoregisterSubjects#Volume_source_models

However, the source estimation using unscaled dSPM shows very high levels of synchronisation when visualised. I see that this is probably simply because the activation scale is computed for each window.

I'm not sure I understand this.
If you are not sure that your source maps are correct, post some screen captures here (eg. MRI Viewer at the peak of a spike, scout time series around the seizure onset zone).

Therefore, I decided to normalise the results and I computed the scale-average dSPM. The source estimation looks great now as the activation scale is normalised.

The "unscaled" and "scaled" dSPM maps should look exactly the same visually, except for the values in the colorbar. The only difference is a fixed scaling factor applied uniformly to all the sources and all the time points. Therefore I'm not sure what you did is correct.
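
If I read the tutorial linked below correctly, that fixed factor is sqrt(Navg), where Navg is the number of files entering the average. Here is a minimal sketch of the relationship, with placeholder array sizes (not Brainstorm code):

```python
# Sketch of the scaling relationship described in the tutorial linked below
# (trial count and array sizes are placeholders, not values from this dataset)
import numpy as np

rng = np.random.default_rng(0)
dspm_trials = rng.standard_normal((20, 500, 200))          # [Ntrials x Nsources x Ntime]

dspm_avg    = dspm_trials.mean(axis=0)                     # "unscaled" average of dSPM maps
dspm_scaled = np.sqrt(dspm_trials.shape[0]) * dspm_avg     # "scaled": same map times sqrt(Navg)

# The two maps differ only by a constant factor, so connectivity metrics that
# ignore global amplitude (e.g. PLV, correlation) give the same result for both.
```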

Do I understand correctly that the resulting scaled dSPM cannot be projected onto the template grid then used for connectivity computation (since it can be used for visualisation and interpretation only)?

I think the message we give in the process is pretty clear: use the unscaled dSPM values for everything except for final plots where you want the range of values to reflect correctly the statistical significance of the results.

https://neuroimage.usc.edu/brainstorm/Tutorials/SourceEstimation#Averaging_normalized_values


Hi Francois,

I have a question that may or may not have a clear answer.

I am interested in functional connectivity. However, I am using an anatomical template, and thus have been using unconstrained sources as recommended in the tutorials. I then flatten the sources using PCA to get a single time series per vertex, and then run functional connectivity on these data.

Given what you posted, if one is using a template and wants to compute functional connectivity, would you recommend unconstrained or constrained?

Thank you,
Paul

When the main goal of the analysis is the precise localization of the brain sources, the recommendation is clear when using an anatomy template: use unconstrained source orientations.

When the goal is connectivity analysis, we are at the present time investigating the outcome of the different options. The PCA flattening of the unconstrained source maps seems to lead to the loss of some obvious interactions, therefore we are not sure we will keep this as our main recommendation. We might end up recommending constrained source orientations, but we will need a few more weeks (or months...) before being able to answer this question.
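
To illustrate what this PCA flattening amounts to, here is a generic sketch (not Brainstorm's exact implementation; array names and sizes are hypothetical): for each grid point, the three orientation time series are reduced to their dominant principal component.

```python
# Generic sketch of PCA flattening of unconstrained sources (illustrative only):
# reduce the [3 x Ntime] block of each grid point to one time series along the
# dominant orientation.
import numpy as np

def flatten_pca(src):                         # src: [Npoints x 3 x Ntime]
    flat = np.empty((src.shape[0], src.shape[2]))
    for i, block in enumerate(src):           # block: [3 x Ntime] for one grid point
        block = block - block.mean(axis=1, keepdims=True)
        u, s, vt = np.linalg.svd(block, full_matrices=False)
        flat[i] = s[0] * vt[0]                # time course along the first principal direction
    return flat                               # [Npoints x Ntime]

rng = np.random.default_rng(0)
src = rng.standard_normal((500, 3, 1000))     # 500 grid points, 3 orientations, 1000 samples
print(flatten_pca(src).shape)                 # (500, 1000)
```

One possible contributor to the lost interactions (an assumption here, not something stated above) is that the sign of the retained component is arbitrary for each grid point.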

Therefore: no clear answer for the moment. I apologize for this lack of information; there is no consensus in the literature on these topics yet, as this is still an active field of research.