Resting state EEG functional connectivity - epoch length and envelope correlation options

Dear all,
we want to calculate functional connectivity (at the source level) from resting-state EEG recordings using envelope correlation / Hilbert transform. We are still unsure about some details of the analysis pipeline in Brainstorm:
Currently, we have clean rsEEG data in 2s-epochs (~100 epochs) that we import into Brainstorm.
The first question is whether it is better to concatenate the epochs before the connectivity analysis or to run the analysis on the 2 s data segments.
There are already many similar discussions in the forum, and apparently there is no clear answer. The main points raised in those discussions are: concatenating the epochs introduces discontinuities, and therefore large artifacts in the spectral domain; however, 2 s data segments might be too short for meaningful results with some connectivity measures.
Any updates or further opinions on that issue, regarding envelope correlation using Hilbert transform?
Our go-to solution right now is to concatenate the data. (An alternative would be to redo the preprocessing with longer time windows, e.g. 5 s; would that be a better solution?)

Another question concerns the options "Estimation window length" and "Sliding window overlap". To avoid introducing discontinuities into the signal, our idea is to use the epoch length (2 s) as the estimation window length, with 0% overlap. Is there any expert opinion on whether these parameters make sense?
Does it even make a difference whether the connectivity analysis is performed on the 2 s data segments or on the concatenated data, if the estimation window equals the epoch length and the overlap is set to 0%?
The only difference is that the scout function (in our case: PCA) is performed once on the concatenated data instead of on each 2s-epoch, right?

I would appreciate any input, thanks a lot in advance for your time and consideration!

Best regards,
Susanne

> Currently, we have clean rsEEG data in 2s-epochs (~100 epochs) that we import into Brainstorm.

Why did you split your data into 2 s blocks in the first place?
If this is resting-state data, you would probably benefit from using much longer blocks of recordings, or even the original continuous recordings.

> Concatenating the epochs introduces discontinuities and therefore large artifacts in the spectral domain, however 2s data segments might be too short for meaningful outcomes for some connectivity measures.

Exactly.
The longer your epochs, the better. 2 s is indeed very short, and concatenation is not a good idea.
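To see the concatenation artifact concretely, here is a toy sketch (made-up numbers, not your actual data; the sampling rate and oscillation frequency are assumptions): a 10 Hz oscillation cut into 2 s epochs with random starting phases and then concatenated, compared with a truly continuous recording of the same total length.

```python
import numpy as np

fs = 250                      # sampling rate in Hz (assumed)
t = np.arange(0, 2, 1 / fs)   # one 2 s epoch
rng = np.random.default_rng(0)

# 10 epochs of a 10 Hz oscillation, each with a random starting phase,
# as happens when epochs are cut from different parts of a recording
epochs = [np.sin(2 * np.pi * 10 * t + rng.uniform(0, 2 * np.pi))
          for _ in range(10)]
concatenated = np.concatenate(epochs)

# A continuous 20 s signal at the same frequency, for comparison
t_long = np.arange(0, 20, 1 / fs)
continuous = np.sin(2 * np.pi * 10 * t_long)

def spectral_leakage(x, fs, f0=10.0, half_width=2.0):
    """Fraction of total power falling outside f0 +/- half_width Hz."""
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    power = np.abs(np.fft.rfft(x)) ** 2
    outside = np.abs(freqs - f0) > half_width
    return power[outside].sum() / power.sum()

print(spectral_leakage(concatenated, fs))  # substantial leakage
print(spectral_leakage(continuous, fs))    # essentially none
```

The phase jumps at the epoch boundaries smear power well away from 10 Hz, which is exactly the kind of broadband artifact that then contaminates band-limited envelopes.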
If you really don't have access to the original data, you might average the results across trials with the option "Average among output files (only for trials)".
@hossein27en @Sylvain @Raymundo.Cassani Would you consider that 2s is too short for this method?

> an alternative would be to do the preprocessing again and choose larger time windows, e.g. 5s, would that be a better solution?

Why not even longer blocks?
The upper limit is the amount of data your computer can process in RAM at a time.
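As a rough back-of-the-envelope check of that RAM limit (all numbers here are assumptions, not your actual montage or source grid):

```python
# Size of one copy of a full-resolution source time-series matrix
n_sources = 15000        # dipoles in a typical cortical source grid (assumption)
fs = 250                 # sampling rate in Hz (assumption)
duration = 200           # seconds of data, e.g. ~100 x 2 s epochs
bytes_per_sample = 8     # float64
gb = n_sources * fs * duration * bytes_per_sample / 1e9
print(f"{gb:.1f} GB")
```

So even a few minutes of source-level data can occupy several gigabytes per copy, and connectivity processes may need more than one copy in memory at once.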

> Does it even make a difference to perform the connectivity analysis on the 2s-data segments or on the concatenated data if the estimation window equals the epoch length (and setting the overlap to 0%)?

I don't think it makes any difference.
But try it yourself; it's easy to compare on a few trials.
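A quick way to convince yourself outside Brainstorm is a toy comparison in Python (a sketch of the general idea, not Brainstorm's exact implementation; the signals here are synthetic): compute a Hilbert-envelope correlation per 2 s segment and average, then repeat on the concatenated data with a 2 s estimation window and 0% overlap.

```python
import numpy as np
from scipy.signal import hilbert

fs = 250
n_epochs, n_samp = 50, 2 * fs
rng = np.random.default_rng(1)

# Two correlated noisy signals per epoch (toy data)
epochs = []
for _ in range(n_epochs):
    shared = rng.standard_normal(n_samp)
    x = shared + 0.5 * rng.standard_normal(n_samp)
    y = shared + 0.5 * rng.standard_normal(n_samp)
    epochs.append((x, y))

def env_corr(x, y):
    """Correlation of Hilbert amplitude envelopes."""
    ex, ey = np.abs(hilbert(x)), np.abs(hilbert(y))
    return np.corrcoef(ex, ey)[0, 1]

# (a) one value per 2 s epoch, then average across epochs
per_epoch = np.mean([env_corr(x, y) for x, y in epochs])

# (b) concatenate, then slide a 2 s window with 0% overlap
xc = np.concatenate([x for x, _ in epochs])
yc = np.concatenate([y for _, y in epochs])
windows = [env_corr(xc[i:i + n_samp], yc[i:i + n_samp])
           for i in range(0, len(xc), n_samp)]
concat = np.mean(windows)

print(per_epoch, concat)  # identical up to floating point
```

Since the 2 s windows line up exactly with the epoch boundaries, both approaches see exactly the same samples; differences would only appear with a nonzero overlap, or with windows that straddle epoch boundaries.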

> The only difference is that the scout function (in our case: PCA) is performed once on the concatenated data instead of on each 2s-epoch, right?

This is correct, and this is one of the reasons we don't really recommend using this PCA option.
