Statistical analysis of continuous EEG data

Hi Brainstorm-team and Brainstorm community!

I’m struggling to find a proper way to statistically analyze my EEG data. The more I read in the forum or the tutorials, the more confused I get. I was hoping to find some advice in this forum, since it combines both knowledge and experience – and to be honest I’m lacking in both.

I want to compare the effects of two meditation styles (S, N) versus a reference condition (R) on EEG band power (delta to gamma).

I have 20 subjects. Each trial consists of 10 min reference, 10 min of meditation style S and 25 min of meditation style N – it is one continuous EEG measurement with markers delimiting the segments. Eyes were closed throughout and no movements were involved.

After data cleaning, I epoched the data into 1-minute chunks in order to account for bad segments and to create ‘clean’ PSDs.
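Just to make the input data concrete, this is roughly what I mean by the per-minute PSDs, as a plain NumPy/SciPy sketch outside of Brainstorm (sampling rate, epoch length and the signal are made up for illustration):

```python
# Illustrative sketch only (not my actual pipeline): one PSD per 1-minute epoch
# computed with Welch's method. All values here are made up.
import numpy as np
from scipy.signal import welch

fs = 250                                  # hypothetical sampling rate in Hz
epoch_len = 60 * fs                       # 1-minute epochs
eeg = np.random.randn(10 * epoch_len)     # fake continuous signal, one channel, 10 minutes

psds = []
for start in range(0, len(eeg), epoch_len):
    segment = eeg[start:start + epoch_len]
    freqs, pxx = welch(segment, fs=fs, nperseg=4 * fs)   # 4-second windows
    psds.append(pxx)

psds = np.array(psds)                     # shape: (n_epochs, n_frequencies)
print(psds.shape)
```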

1.) Primarily, I wanted to compare the cross-trial effects: S vs. R and N vs. R. Would the independent permutation test (t-test, unequal variance) over the PSDs be appropriate in your opinion, with all of the S-PSDs or N-PSDs in Process1 and the R-PSDs in Process2?
Would you recommend baseline-standardizing the PSDs (e.g. ERS/ERD) prior to the test, or any other data normalization?

2.) Secondarily, I wanted to compare the intra-trial effects for each subject individually. I would calculate the relative band power and then perform a paired permutation test (S vs. R and N vs. R with a paired t-test).

Any thoughts or recommendations on my novice ideas/approach would be highly appreciated.
Thanks, Martin


For effects within a subject: use an independent permutation test, with all the PSDs of condition S in FilesA and all the PSDs of condition R in FilesB.
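Conceptually, this independent permutation test does something like the sketch below for each sensor and frequency bin (plain NumPy with made-up values, only to illustrate the logic; the Brainstorm process applies it across all sensors and frequency bins for you):

```python
# Minimal sketch of an independent permutation test with an unequal-variance
# (Welch) t statistic, for a single sensor/frequency bin. Values are made up.
import numpy as np

rng = np.random.default_rng(0)
psd_S = rng.gamma(shape=2.0, scale=1.2, size=10)   # per-minute PSD values, condition S
psd_R = rng.gamma(shape=2.0, scale=1.0, size=10)   # per-minute PSD values, condition R

def welch_t(a, b):
    """Two-sample t statistic without assuming equal variances."""
    return (a.mean() - b.mean()) / np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))

t_obs = welch_t(psd_S, psd_R)

# Build the null distribution by shuffling the condition labels
pooled = np.concatenate([psd_S, psd_R])
n_perm = 5000
t_perm = np.empty(n_perm)
for i in range(n_perm):
    shuffled = rng.permutation(pooled)
    t_perm[i] = welch_t(shuffled[:len(psd_S)], shuffled[len(psd_S):])

# Two-sided p-value
p = (np.sum(np.abs(t_perm) >= np.abs(t_obs)) + 1) / (n_perm + 1)
print(f"t = {t_obs:.3f}, p = {p:.4f}")
```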

For testing the group effects: average the PSDs within each subject and condition, then use a paired permutation test (all the subject-level averages of condition S in FilesA, all the subject-level averages of condition R in FilesB).
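For the group-level paired test, the logic is the same but the exchangeability is within subjects: under the null hypothesis, each subject's S minus R difference can have its sign flipped. A minimal sketch with made-up subject-level averages:

```python
# Minimal sketch of a paired permutation test on subject-level averages
# (one sensor/frequency bin, 20 subjects, made-up values).
import numpy as np

rng = np.random.default_rng(1)
n_subjects = 20
avg_S = rng.gamma(shape=2.0, scale=1.2, size=n_subjects)   # subject averages, condition S
avg_R = rng.gamma(shape=2.0, scale=1.0, size=n_subjects)   # subject averages, condition R
diff = avg_S - avg_R

def paired_t(d):
    """One-sample t statistic on the paired differences."""
    return d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))

t_obs = paired_t(diff)

# Null distribution: randomly flip the sign of each subject's difference
n_perm = 5000
t_perm = np.empty(n_perm)
for i in range(n_perm):
    signs = rng.choice([-1.0, 1.0], size=n_subjects)
    t_perm[i] = paired_t(signs * diff)

p = (np.sum(np.abs(t_perm) >= np.abs(t_obs)) + 1) / (n_perm + 1)
print(f"paired t = {t_obs:.3f}, p = {p:.4f}")
```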

PSDs would behave just like time-frequency maps, except that there is no time dimension: https://neuroimage.usc.edu/brainstorm/Tutorials/Workflows#Time-frequency_maps

Would you recommend baseline-standardizing the PSDs (e.g. ERS/ERD) prior to the test, or any other data normalization?

You can't apply any baseline normalization process (ERS/ERD or Z-score), because these files do not contain any time dimension: there is no baseline over which to average the values. One possibility to normalize the values between subjects is to compute the relative power. This measure was used by @Sylvain and @Guiomar in the resting state tutorial:
https://neuroimage.usc.edu/brainstorm/Tutorials/RestingOmega#Power_maps
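In practice this simply means dividing the power in each band by the total power of the spectrum, so the values become comparable across subjects. A rough NumPy sketch (frequency axis, band edges and PSD values are made up; in Brainstorm you would use the processes described in the tutorial above):

```python
# Minimal sketch of relative band power: power in each band divided by the
# total power of the PSD (one channel, made-up spectrum and band edges).
import numpy as np

rng = np.random.default_rng(2)
freqs = np.arange(1, 45.5, 0.5)                            # hypothetical frequency axis in Hz
psd = rng.gamma(shape=2.0, scale=1.0, size=freqs.shape)    # hypothetical PSD values

bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

total_power = np.trapz(psd, freqs)
rel_power = {}
for name, (lo, hi) in bands.items():
    mask = (freqs >= lo) & (freqs < hi)
    rel_power[name] = np.trapz(psd[mask], freqs[mask]) / total_power

# Each value is the fraction of the total power contained in that band
print(rel_power)
```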

@Sylvain Do you think this is a good idea to use this measure here?

Hi Francois,

thank you very much for your help!

best regards,
Martin