Pipeline for source estimation and statistical analysis of EEG data

Hello Brainstorm community!

I am trying to estimate the source activity of EEG data for two different experimental conditions and then run some statistical analysis.

Since I am new to Brainstorm and the results I am obtaining are not straightforward to interpret, I would greatly appreciate any feedback on the pipeline I have adopted! Below is a schematic overview of the procedure I have implemented so far:

  1. I preprocessed and epoched the EEG data of 25 subjects in EEGLAB. Then I created a new protocol in Brainstorm, selecting "Yes, use protocol's default anatomy" and "Yes, use one channel file per subject" for the default anatomy and channel file, respectively.

  2. I have imported the EEG data, separating the trials into two folders according to the experimental condition.

  3. EEG channel locations were adjusted by visual inspection via MRI registration -> Edit...

  4. I computed the head model, selecting "Cortex surface" as source space and OpenMEEG BEM as the forward modelling method, with the default OpenMEEG options.

  5. I averaged the trials of each subject within each experimental condition.

  6. After setting the noise covariance to an identity matrix, I computed the sources (creating a shared imaging kernel for each subject): minimum norm imaging as method, current density map as measure, and a constrained source model. Further, I used neither depth weighting nor noise covariance regularization.

  7. Finally, I moved the reconstructed sources of condition A (one average per subject) into the left window of Process2 and all the sources of condition B into the right window. Then I performed both paired parametric tests (FDR-corrected) and non-parametric tests using the ft_sourcestatistics option.
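For reference, the FDR correction used in this kind of analysis is commonly the Benjamini-Hochberg procedure. A minimal sketch of that procedure in Python (an illustration of the general idea, not Brainstorm's actual implementation):

```python
import numpy as np

def fdr_bh(pvals, alpha=0.05):
    """Benjamini-Hochberg FDR: return a boolean mask of rejected hypotheses."""
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    # Compare the sorted p-values against the BH thresholds alpha * k / m
    thresh = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresh
    reject = np.zeros(m, dtype=bool)
    if below.any():
        # Reject all hypotheses up to the largest rank passing its threshold
        k = np.max(np.nonzero(below)[0])
        reject[order[: k + 1]] = True
    return reject

print(fdr_bh([0.001, 0.008, 0.039, 0.041, 0.6]))
# → [ True  True False False False]
```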

Results are similar between the two analyses. The unexpected result is twofold:

  • The main difference between the two conditions is located very deep in the brain (which is weird considering that the solution adopted for the inverse problem, i.e., the unweighted MNE, should emphasize cortical activations).
  • That brain area shows both positive and negative t-values within a very small region (meaning it would be both more and less activated at the same time for the same experimental condition).

I have attached an image of the result obtained with the parametric test. Do you have suggestions about the pipeline or comments about the plausibility of the results?

Thank you very much for the help!

Your processing pipeline looks OK.
Some comments below.

After setting the noise covariance to an identity matrix

When only trials are available, computing noise covariance from the concatenated pre-stimulus baselines is advised: https://neuroimage.usc.edu/brainstorm/Tutorials/NoiseCovariance#Variations_on_how_to_estimate_sample_noise_covariance
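The idea behind that option can be sketched as follows (a toy illustration with made-up array names and shapes, not Brainstorm's code): concatenate the pre-stimulus baseline segments of all trials and compute the channel-by-channel sample covariance of the result.

```python
import numpy as np

def baseline_noise_covariance(trials, baseline_samples):
    """trials: (n_trials, n_channels, n_times); baseline_samples: index array."""
    # Stack the baseline segments of all trials along the time axis
    baselines = np.concatenate([t[:, baseline_samples] for t in trials], axis=1)
    # Remove each channel's mean before computing the covariance
    baselines -= baselines.mean(axis=1, keepdims=True)
    n = baselines.shape[1]
    return baselines @ baselines.T / (n - 1)

# Example with synthetic data: 10 trials, 4 channels, 100 samples each,
# the first 30 samples of each trial being the pre-stimulus baseline
rng = np.random.default_rng(0)
trials = rng.standard_normal((10, 4, 100))
cov = baseline_noise_covariance(trials, np.arange(30))
print(cov.shape)  # → (4, 4)
```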

Further, I used neither depth weighting nor noise covariance regularization.

Unless you are doing simulation work to evaluate the source modeling algorithms in Brainstorm, I'd recommend you keep all the advanced settings to their default values.
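For intuition about what those two defaults control: in the minimum-norm family of estimators (written here in the generic textbook form, not Brainstorm's exact parametrization), the source estimate is

```latex
\hat{\mathbf{J}} = \mathbf{R}\,\mathbf{L}^{\top}\!\left(\mathbf{L}\,\mathbf{R}\,\mathbf{L}^{\top} + \lambda^{2}\,\mathbf{C}\right)^{-1}\mathbf{M}
```

where \(\mathbf{M}\) are the sensor data, \(\mathbf{L}\) the leadfield, \(\mathbf{C}\) the noise covariance, and \(\mathbf{R}\) the source covariance prior. Depth weighting scales the diagonal of \(\mathbf{R}\) as \(R_{ii} \propto \lVert \mathbf{l}_i \rVert^{-2\gamma}\) to compensate for the bias of unweighted MNE toward superficial sources, and covariance regularization adds a small multiple of the identity to \(\mathbf{C}\) before inversion to stabilize it.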

The main difference between the two conditions is located very deep in the brain (which is weird considering that the solution adopted for the inverse problem, i.e., the unweighted MNE, should emphasize cortical activations)

These figures do not make sense. There may be an error somewhere in the exchange of spatial information between Brainstorm and FieldTrip.
Please start by running the non-parametric tests available in Brainstorm, and post the figures you obtain.

Then post some more figures illustrating the input data you provided to the stat process (both a view of the database explorer showing the selected files, and the figures obtained when you double-click on those files), as well as all the options you used as input to the various processes.

Thanks for your quick reply!

I tried to implement your suggestions (computing the noise covariance from the concatenated pre-stimulus baselines and keeping all the advanced settings of the inverse solution at their default values). Below are the input options for the various processes, along with figures of the results:

Step 1: Create new protocol (figure1)

Step 2: Create new subject: (figure2)

Step 3: Import EEG data (already preprocessed and epoched) (figure3)

Step 4: Adjusting electrode position (figure4)

Step 5: Compute head model (figure5)

Step 6: Compute noise covariance matrix (figure6)

Step 7: Import all the 25 subjects (channel locations and head model are the same for all subjects, noise covariance matrix is computed for each subject separately) (figure7)

Step 8: Compute the sources for each subject (figure8)

Step 9: Average trials for each subject within each experimental condition

For each subject I obtained the following structure: (figure9)

Step 10: Preparing files for the statistic (figure10)

Step 11: Run non-parametric paired test between 200 and 400 ms (figure11)


Example of AVG data of one subject for the two conditions (input of the statistical analysis)
(figure12 figure13 figure14)

  1. Non-parametric paired test: no comparisons were significant after correction (either Bonferroni or FDR); here I show the results without correction for multiple comparisons

  2. Parametric paired test: some comparisons were significant after FDR correction (figure16)

  3. ft_sourcestatistics, significant negative cluster: (figure17 figure18)
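For what it's worth, the core of such a non-parametric paired test can be sketched as follows: a sign-flip permutation test with max-statistic correction, run here on synthetic data (all names and shapes are hypothetical; this is not Brainstorm's or FieldTrip's code).

```python
import numpy as np
from scipy import stats

# Paired t-test across subjects at every source vertex, with a sign-flip
# permutation null and max-|t| correction over vertices.
rng = np.random.default_rng(42)
n_subjects, n_vertices = 25, 50
cond_a = rng.standard_normal((n_subjects, n_vertices))
cond_b = rng.standard_normal((n_subjects, n_vertices))
cond_b[:, :5] += 1.5  # a true effect in the first 5 vertices

diff = cond_a - cond_b
t_obs = stats.ttest_rel(cond_a, cond_b, axis=0).statistic

# Under the paired null hypothesis, the sign of each subject's difference
# is exchangeable: flip signs at random and record the maximum |t|
n_perm = 1000
t_max = np.empty(n_perm)
for i in range(n_perm):
    d = diff * rng.choice([-1.0, 1.0], size=(n_subjects, 1))
    t = d.mean(axis=0) / (d.std(axis=0, ddof=1) / np.sqrt(n_subjects))
    t_max[i] = np.abs(t).max()

# Corrected p-value per vertex: fraction of permutations with max |t| >= |t_obs|
p_corr = (1 + (t_max[None, :] >= np.abs(t_obs)[:, None]).sum(axis=1)) / (1 + n_perm)
```

The max-statistic correction controls the family-wise error rate across vertices; cluster-based correction (as in ft_sourcestatistics) replaces the per-vertex statistic with a cluster-level one but follows the same permutation logic.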

Finally, I report the simple difference between the source activity in the two conditions, which seems to be larger at the cortical level, not in deep regions (figure19)

Basically, we are trying to identify the cortical generators of a P200 amplitude difference that emerged between the two conditions at the sensor level, between 200 and 400 ms after stimulus presentation (figure20)

Here is the link with all the figures (WeTransfer).

Thanks for any further help you will provide!

Step 2: Create new subject: (figure2)

Using shared channel files is not recommended: it can be confusing and lead to errors. Be extra careful if you do it this way (e.g., you shouldn't use any SSP, ICA, or re-referencing).

Step 4: Adjusting electrode position (figure4)

Project the electrodes on the scalp.

Except for that, it all looks good.
Figures 15-19: I really don't recommend displaying these results in the volume; these displays can be very misleading, as we could see in your first post. If you work with surface-based models, display the results on the surface, at least for the first exploration.

If you want to work in the volume, that is also a valid alternative, especially when not much spatial precision is expected; it may produce results that are more intuitive for you to interpret.

Thanks for your reply!

I do not have the exact electrode positions for each subject, so I used the same channel file for all subjects (I fixed the positions for one subject and then copied the "EEGLAB channel" file into all the other subjects' folders). And I am not planning to perform any ICA or re-referencing on the data, since they are already preprocessed.

I made the adjustment you suggested (projecting the electrodes onto the scalp by moving the EEG data files into the Process1 window -> Run -> Import -> Channel file -> Project electrodes on scalp).

Now I find some significant effects on the cortex surface when I run paired permutation t-tests (corrected with FDR).

I have one final (hopefully!) question. Is there a way to restrict the statistical analysis to only the superficial dipoles, excluding for instance those in deeper brain regions (without using scouts)?

Thank you

Hi Paolo, from your previous comments, I think you're using the cortex surface as the source space in the head model, right? If so, the dipoles are only assigned to the vertices of the cortical surface, so there are no dipoles in deep brain regions.


Hi Raymundo, thanks for your reply.

Yes, you were right; maybe my reference to "deep brain regions" was misleading. I'll try to explain better.

I am using the cortex surface; what I want to do is restrict the analysis to fewer dipoles (without using scouts in the analysis), excluding the cortical dipoles that belong to brain areas whose activity is more difficult to estimate with source reconstruction techniques (e.g., cingulate, insular, or peri-/subcallosal cortex).

I hope this clarifies the point.


In your experiment, if you have hypotheses related to specific brain regions, you can restrict the statistical analysis to specific scouts: in the options of the statistical test, select "Use scouts" and choose the atlas/scouts you are interested in.

If your goal is to remove from your final display some results that you don't like: you're not supposed to do that. Testing your data again after observing the complete results should be avoided: Circular analysis - Wikipedia
The ROI selection should be hypothesis-driven.