Some questions about resting-state EEG source localization

Dear Brainstorm team,

I am trying to perform a functional connectivity analysis of resting-state EEG data in source space (ROI level). I plan to use two datasets: dataset 1 (with individual T1 MRI) and dataset 2 (with individual defaced T1 MRI).

Preprocessing pipeline of dataset 1 (brief):

Filtering: 1-45 Hz -> downsampling to 250 Hz -> removal of bad segments and channels -> interpolation of bad channel signals -> average reference -> ICA -> removal of artifactual components

Preprocessing pipeline of dataset 2 (brief):

Downsampling to 250 Hz -> filtering: 1-45 Hz -> removal of bad segments and channels -> ICA -> removal of artifactual components.

The duration of each recording is about 3 minutes for dataset 1 and 8 minutes for dataset 2.

I am trying to use Brainstorm for resting-state EEG source localization and am encountering several problems:

  1. I first used the default anatomy and tried the whole process, and I wonder whether the overall workflow is reasonable (as shown in the figure below).

  2. Dataset 1 does not include the true electrode positions. Dataset 2 (i.e., the public dataset) provides digitized EEG channel locations ("Polhemus PATRIOT Motion Tracking System (Polhemus, Colchester, VT, USA) localizer together with the Brainstorm toolbox was used to digitize the exact location of each electrode on a participant's head relative to three fiducial points."). After importing the individual T1, the fiducial points (AC, PC, IH) were determined automatically, but the precision did not seem to be sufficient (as shown in the figure below), so I wonder if I need to set them manually again. For dataset 1, is manual alignment the only available method when using the individual T1? When importing the digitized EEG channel locations for Dataset2Subject1, all electrodes are above the scalp rather than attached to it. In addition, only the defaced T1 is available, which seems to adversely affect the setting of fiducial points, the co-registration of electrodes with the MRI, MRI segmentation, and source localization.
    Figure 2, please click the following link (since new users can only put one embedded media item in a post):
    Tsinghua University Cloud Drive (清华大学云盘)

  3. After the MNI parcellation (i.e., the Brodmann atlas) is downloaded from Brainstorm, is the atlas automatically aligned to individual space by Brainstorm?

  4. As mentioned in the preprocessing pipeline, the bad channel signals are interpolated in dataset 1; the interpolation does not add new information, but will it adversely affect source localization? A related issue concerns dataset 2: ICA removed a large number of independent components, e.g., 60 components in the original data and only 20 remaining in the end, so the rank of the recording is only 20 after preprocessing. I wonder whether the source localization results of such data are reliable.

  5. Dataset 1 is re-referenced to the average, and the reference electrode of dataset 2 is FCz. Is there a recommended reference method? In addition, EEG data are sometimes spatially filtered (surface Laplacian) to reduce the volume conduction effect when calculating channel-level connectivity; I wonder whether such spatially filtered data can be used for source localization.

  6. In the source files, there is a variable ImagingKernel. Can the source time series be obtained directly as ImagingKernel (dimension: Nsources x Nchannels) × the recording (dimension: Nchannels x Timepoints)? Is the Whitener variable involved in this?

  7. One more important question concerns the sign of the sources. For ROI-based analysis, there seem to be two commonly used methods: one is to flip the sign of the dipoles whose orientations are opposite, and then average the signals within the ROI; the other is to take the first principal component of the source signals within the ROI. Does the Brainstorm team have any suggestions or recommendations on this issue? In addition, due to the sign uncertainty of the source signal, although the ROI-level signal can be obtained by the above two methods, it still seems uncertain whether the true signs of the signals are the same or opposite between different ROIs. Is it then still possible to calculate phase-synchronization-based functional connectivity between ROIs (e.g., phase-locking value or phase lag index)? A reference mentions that most phase information can be preserved after taking the absolute value of the source signals (see the figure below); I am not able to judge this. Can the source signals (dimension: 246×120000) obtained in steps 8-10 of the pipeline be used directly to calculate functional connectivity (e.g., PLI)?
    Figure 3, please click the link in question 2 (since new users can only put one embedded media item in a post).

  8. The number of channels in both EEG datasets is about 60. Does the Brainstorm team have any recommendation on whether to use a BEM or FEM head model, and whether to construct constrained or unconstrained sources? In addition, based on the Brainstorm team's responses to others, it seems that the current density map and dSPM are recommended over sLORETA for source localization.

  9. If all subjects use the default anatomy, then the source signals are in standard MNI space and no registration between subjects is involved. However, if individual T1s are used, the current process seems to end with the alignment of the atlas (e.g., Brodmann) to the individual T1 (individual space), after which the signal of each ROI is extracted. If a group analysis is to be performed, the step of mapping each subject's source signals to the standard space seems to be missing. Another method I thought of is to first align all subjects' T1s to the standard space and then perform EEG source localization. I wonder whether "source localization with individual T1 -> alignment to standard space" and "alignment of individual T1 to standard space -> source localization" are equivalent.

  10. Can the source localization process shown in Figure 1 be fully implemented with MATLAB code?

Example data for both datasets can be downloaded directly from the following link:
https://cloud.tsinghua.edu.cn/f/433d489f944c43ecacd2/?dl=1

Thank you very much for your help!

Best regards,
Milton

Preprocessing pipeline of dataset 1 (brief):
Preprocessing pipeline of dataset 2 (brief):

If you want to use information from both datasets, you should use the same processing steps in the same order.

After importing the individual T1, the fiducial points (AC, PC, IH) were determined automatically, but the precision did not seem to be sufficient (as shown in the figure below), so I wonder if I need to set them manually again.

The AC/PC/IH points have very little importance. No, you don't need to make any extra effort to make them precise.
The NAS/LPA/RPA points must be precise only if you use them for the MRI-sensor coregistration with the subject's anatomy (mostly for MEG). If you use a template anatomy or template electrode positions, you need to edit the electrode positions manually anyway, so there is no need to define them precisely.

For dataset 1, is manual alignment the only available method when using the individual T1?

Various scenarios are described in the tutorials:

If you don't have real 3D coordinates for the electrodes, you have to use template positions: either adjust them manually on the individual head, or use default positions on a default head.

When importing the digitized EEG channel locations for Dataset2Subject1, all electrodes are above the scalp rather than attached to it.

They can be adjusted automatically only if the NAS/LPA/RPA points are defined in exactly the same way in the MRI and in the positions file. See the introduction tutorials.
Otherwise, align them manually as well as you can, and project the electrodes onto the scalp.

After the MNI parcellation (i.e., the Brodmann atlas) is downloaded from Brainstorm, is the atlas automatically aligned to individual space by Brainstorm?

If you add it to a subject with a non-linear MNI registration computed with SPM Segment, it should be transferred correctly to the subject space.

As mentioned in the preprocessing pipeline, the bad channel signals are interpolated in dataset 1; the interpolation does not add new information, but will it adversely affect source localization?

We don't recommend interpolating bad channels before source modeling. It can only add unwanted bias to the model.

A related issue concerns dataset 2: ICA removed a large number of independent components, e.g., 60 components in the original data and only 20 remaining in the end, so the rank of the recording is only 20 after preprocessing.

Why do you remove that many components?
You should not remove more than a few components, corresponding to the artifacts you want to target.
If you are referring to the PCA that is applied before the ICA, that is a different topic: you would need a good reason to reduce the dimensionality of the data before the ICA.
https://neuroimage.usc.edu/brainstorm/Tutorials/Epilepsy#Artifact_cleaning_with_ICA

I wonder whether the source localization results of such data are reliable.

You could even do the source localization of one single IC, if you wanted,
as long as the noise covariance is computed from recordings that have been preprocessed together with the data of interest.

Dataset 1 is re-referenced to the average, and the reference electrode of dataset 2 is FCz. Is there a recommended reference method?

The reference should not impact the results much. Brainstorm always converts the EEG data to an average reference before inversion anyway.
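For intuition, average re-referencing is just a per-time-point mean subtraction across channels. The sketch below (illustrative pure Python, not Brainstorm code) shows the operation on a toy Nchannels x Ntime array:

```python
# Minimal sketch of average re-referencing: at each time point, subtract
# the mean over channels, so the channel-wise sum becomes zero.

def average_reference(data):
    """data: Nchannels x Ntime list of lists; returns re-referenced copy."""
    n_chan = len(data)
    # Mean across channels at each time point.
    means = [sum(col) / n_chan for col in zip(*data)]
    return [[x - m for x, m in zip(row, means)] for row in data]

eeg = [[1.0, 2.0],
       [3.0, 4.0],
       [5.0, 9.0]]          # 3 channels, 2 samples
reref = average_reference(eeg)
# Each column of reref now sums to zero.
```

Because the operation only subtracts a common signal from all channels, any monopolar reference (FCz, linked mastoids, ...) is mapped to the same average-referenced data, which is why the original reference matters little here.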

In addition, EEG data are sometimes spatially filtered (surface Laplacian) to reduce the volume conduction effect when calculating channel-level connectivity; I wonder whether such spatially filtered data can be used for source localization.

Never use a Laplacian before minimum norm estimation.

In the source files, there is a variable ImagingKernel. Can the source time series be obtained directly as ImagingKernel (dimension: Nsources x Nchannels) × the recording (dimension: Nchannels x Timepoints)? Is the Whitener variable involved in this?

Sources = ImagingKernel * recordings.
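As a sanity check of the dimensions, the product is an ordinary matrix multiplication, (Nsources x Nchannels) times (Nchannels x Ntime). The toy example below is a plain-Python illustration (in Brainstorm this is a one-line MATLAB matrix product; the whitening is already incorporated when the kernel is built, so no separate step is applied here):

```python
# Illustration only: Sources = ImagingKernel * Recordings as a plain
# matrix product on toy data (3 sources, 2 channels, 4 time points).

def matmul(A, B):
    """Multiply an (m x k) list-of-lists matrix by a (k x n) one."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

imaging_kernel = [[1.0, 0.0],
                  [0.0, 2.0],
                  [1.0, 1.0]]          # Nsources x Nchannels

recordings = [[1.0, 2.0, 3.0, 4.0],
              [5.0, 6.0, 7.0, 8.0]]    # Nchannels x Ntime

sources = matmul(imaging_kernel, recordings)   # Nsources x Ntime
```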

One more important question concerns the sign of the sources. For ROI-based analysis, there seem to be two commonly used methods: one is to flip the sign of the dipoles whose orientations are opposite, and then average the signals within the ROI; the other is to take the first principal component of the source signals within the ROI. Does the Brainstorm team have any suggestions or recommendations on this issue?

At the moment we flip the signs: https://neuroimage.usc.edu/brainstorm/Tutorials/Scouts#Sign_flip
The PCA approach is under investigation; see the open pull requests in the brainstorm-tools/brainstorm3 repository on GitHub.
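The idea behind the sign flip can be sketched as follows (an illustrative simplification, not Brainstorm's exact implementation, which is described in the tutorial linked above): within an ROI, dipoles whose orientation points against a reference direction get their time series multiplied by -1 before averaging, so that opposite-facing sources do not cancel out.

```python
# Hedged sketch of sign-flip averaging within one ROI.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def sign_flip_average(orientations, timeseries):
    """orientations: list of 3-vectors; timeseries: list of equal-length lists."""
    ref = orientations[0]                 # reference direction (here: first dipole)
    flipped = []
    for ori, ts in zip(orientations, timeseries):
        sign = -1.0 if dot(ori, ref) < 0 else 1.0   # flip anti-aligned dipoles
        flipped.append([sign * x for x in ts])
    n = len(flipped)
    return [sum(col) / n for col in zip(*flipped)]

# Two dipoles facing opposite ways, carrying the same underlying signal:
oris = [(0.0, 0.0, 1.0), (0.0, 0.0, -1.0)]
sigs = [[1.0, -2.0, 3.0], [-1.0, 2.0, -3.0]]
roi = sign_flip_average(oris, sigs)   # recovers the common signal
# Without the flip, the plain average of these two series would be all zeros.
```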

In addition, due to the sign uncertainty of the source signal, although the ROI-level signal can be obtained by the above two methods, it still seems uncertain whether the true signs of the signals are the same or opposite between different ROIs.

You should not try to compare the signs obtained between two ROIs.

Is it then still possible to calculate phase-synchronization-based functional connectivity between ROIs (e.g., phase-locking value or phase lag index)? A reference mentions that most phase information can be preserved after taking the absolute value of the source signals (see the figure below); I am not able to judge this. Can the source signals (dimension: 246×120000) obtained in steps 8-10 of the pipeline be used directly to calculate functional connectivity (e.g., PLI)?

@Sylvain @Marc.Lalancette @Raymundo.Cassani ?

The number of channels in both EEG datasets is about 60. Does the Brainstorm team have any recommendation on whether to use a BEM or FEM head model,

https://neuroimage.usc.edu/brainstorm/Tutorials/HeadModel#Forward_model

and whether to construct constrained or unconstrained sources? In addition, based on the Brainstorm team's responses to others, it seems that the current density map and dSPM are recommended over sLORETA for source localization.

If all subjects use the default anatomy, then the source signals are in standard MNI space and no registration between subjects is involved. However, if individual T1s are used, the current process seems to end with the alignment of the atlas (e.g., Brodmann) to the individual T1 (individual space), after which the signal of each ROI is extracted. If a group analysis is to be performed, the step of mapping each subject's source signals to the standard space seems to be missing. Another method I thought of is to first align all subjects' T1s to the standard space and then perform EEG source localization. I wonder whether "source localization with individual T1 -> alignment to standard space" and "alignment of individual T1 to standard space -> source localization" are equivalent.

If you have access to the subjects' anatomy, always use it for the source localization.
If you work at the ROI level only, you don't even need to map anything to a template space: just extract the scout time series for all the subjects.
If you want to average (or compute other statistics) on full brain maps, project them all to a template first: https://neuroimage.usc.edu/brainstorm/Tutorials/CoregisterSubjects

Can the source localization process shown in Figure 1 be fully implemented with MATLAB code?

Many of the phase-based connectivity measures have been defined or used with either the cross-spectrum or the instantaneous phase (from the Hilbert transform of band-passed signals). I looked back at the math and I believe both cases are unaffected by a source sign flip. However, if you do not apply the absolute value ("magnitude") to the PLV, you will get a sign flip in the result.
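The point above can be checked numerically: flipping the sign of a signal shifts its instantaneous phase by pi, which flips the sign of the complex mean phase-difference vector but leaves its magnitude unchanged. A small sketch (illustrative only, using synthetic phases rather than Hilbert-transformed EEG):

```python
# Sign-flip invariance of the PLV magnitude, on synthetic phase data.
import cmath
import math
import random

def plv(phases1, phases2):
    """Magnitude of the mean phase-difference vector (the usual PLV)."""
    n = len(phases1)
    return abs(sum(cmath.exp(1j * (p1 - p2))
                   for p1, p2 in zip(phases1, phases2)) / n)

random.seed(0)
ph1 = [random.uniform(-math.pi, math.pi) for _ in range(1000)]
ph2 = [p + 0.3 + random.gauss(0, 0.2) for p in ph1]   # roughly phase-locked to ph1

v = plv(ph1, ph2)
# A sign flip of the first source is a constant +pi shift of its phase:
v_flipped = plv([p + math.pi for p in ph1], ph2)
# v and v_flipped are equal (up to floating-point error); without the
# absolute value in plv(), the complex mean would flip sign instead.
```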

Thank you very much for your quick response, @Francois @Marc.Lalancette. That's incredible!

Dataset 2 is an open EEG dataset that provides pre-processed EEG data, so its pre-processing pipeline differs from that of dataset 1. Based on your hint, I rechecked the pre-processing details for dataset 2: PCA was used to reduce the data dimensionality before ICA, keeping the PCs (N > 30) that explain 95% of the total data variance. This leads to a data rank of only about 20 after removing the artifact components. Is re-processing necessary in this case?

In addition, following Marc.Lalancette's reply, I confirmed that the sign flip of the scout time series does not affect the final results. Instead, the phase signal may be distorted when obtaining the signal at the brain region or ROI level (average activity or PCA). However, there seems to be no better method to obtain the ROI-level source signals, so I intend to continue with the average-based approach.

I verified that if the T1 image is imported and the MNI normalization is computed, then when a new brain atlas is imported, the atlas is automatically aligned with the T1 in Brainstorm.

Finally, is it necessary to apply a fine-grained brain atlas to obtain signals at the ROI level, considering the precision of the source signals?

Thank you very much!

For source localisation I'd definitely keep all the electrodes. If you have the PCA and ICA components, you could combine them to remove the artefact components from the full data. But ideally, you'd run the ICA on all the data if you want to keep all of it.

That's correct, and it's expected to lose some information with data reduction. A finer atlas will keep more information and distort less, but keep in mind that you can't recover more independent source time courses than the number of components kept. You can judge how fine to make your atlas based on that and on how smooth your brain maps look.
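The rank limit can be made concrete with a toy example (an illustration I added, not Brainstorm code): if the data are built from only two independent components, then no matter how many ROI averages you extract, they span at most a two-dimensional space, so any three ROI time courses are linearly dependent.

```python
# Rank-limited data: 6 "sources" mixed from only 2 components, averaged
# into 3 ROIs. The Gram matrix of the 3 ROI time courses is singular,
# i.e. at most 2 of them are linearly independent.
import random

random.seed(1)
T = 200
comp1 = [random.gauss(0, 1) for _ in range(T)]
comp2 = [random.gauss(0, 1) for _ in range(T)]

# Each source is a random mixture of the same 2 components (rank-2 data).
mix = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(6)]
sources = [[a * x + b * y for x, y in zip(comp1, comp2)] for a, b in mix]

# Average pairs of sources into 3 ROIs.
rois = [[(s1 + s2) / 2 for s1, s2 in zip(sources[i], sources[i + 1])]
        for i in (0, 2, 4)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Gram matrix of the ROI time courses; zero determinant = rank < 3.
g = [[dot(r1, r2) for r2 in rois] for r1 in rois]
det = (g[0][0] * (g[1][1] * g[2][2] - g[1][2] * g[2][1])
       - g[0][1] * (g[1][0] * g[2][2] - g[1][2] * g[2][0])
       + g[0][2] * (g[1][0] * g[2][1] - g[1][1] * g[2][0]))
# det is ~0 relative to the Gram entries: the 3 ROI averages carry at
# most 2 independent time courses, mirroring the rank-20 situation above.
```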