Averaging sources [dSPM]

I am using Brainstorm for EEG source localization. I have already cleaned and epoched my data in EEGLAB. My experiment had 3 conditions, and data were collected from 33 subjects.

After reading the tutorials on the website, I am unsure which of two possible pipelines to follow:

Pipeline 1:
For each condition:

  1. For each subject, average trials within a run.
  2. Calculate sources for each run (using dSPM).
  3. Average the sources across runs (weighted average), giving one source map per subject (see the sketch after this list).
  4. Average sources across subjects.
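
For the weighted average in step 3, this is a minimal sketch of the arithmetic I assume is applied (each run's average weighted by its number of trials; all numbers below are made up):

```python
import numpy as np

# Hypothetical example: 3 runs for one subject, each already averaged
# across trials; each row is one run's source map (here only 3 sources).
run_averages = np.array([
    [1.2, 0.8, 0.5],   # run 1 average (e.g. 40 trials)
    [1.0, 0.9, 0.4],   # run 2 average (e.g. 35 trials)
    [1.4, 0.7, 0.6],   # run 3 average (e.g. 25 trials)
])
n_trials = np.array([40, 35, 25])  # trials kept in each run (assumed)

# Weighted average: each run contributes proportionally to its trial count.
subject_average = (n_trials[:, None] * run_averages).sum(axis=0) / n_trials.sum()
print(subject_average)
```

If this is right, it should be equivalent to averaging all of the subject's trials in one go.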

Pipeline 2:

  1. For each subject, average trials within a run.
  2. Calculate sources for each run (using dSPM).
  3. Apply a baseline correction (z-score normalization; see the sketch after this list).
  4. Take the absolute values of the sources (rectify).
  5. Average sources across subjects.
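
To check that I understand steps 3 and 4, this is the per-source arithmetic I assume is meant (toy values and a made-up baseline window):

```python
import numpy as np

# Hypothetical time series for a single source.
source_ts = np.array([0.10, 0.12, 0.09, 0.11, 0.50, -0.45, 0.30])
baseline = source_ts[:4]  # assumed pre-stimulus baseline samples

# Z-score with respect to the baseline: subtract the baseline mean and
# divide by the baseline standard deviation.
z = (source_ts - baseline.mean()) / baseline.std()

# Rectify: keep only magnitudes, since the sign of constrained sources
# depends on the orientation of the dipoles.
z_rectified = np.abs(z)
print(z_rectified)
```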

Pipeline 2 follows the workflow described here: https://neuroimage.usc.edu/brainstorm/Tutorials/Workflows#Constrained_cortical_sources

However, I am not sure which source estimation method this pipeline is meant for. Maybe it is for the current density maps (MNE) and not for dSPM? dSPM results are already normalized with respect to the baseline, so maybe Pipeline 1 is the way to go?

Any lead on this is very much appreciated.

Indeed, normalization is only needed for minimum-norm (current density) solutions, as the range of the current density may vary across subjects. In the case of dSPM, the current densities are already normalized with the noise covariance and are provided as z-scores.

Please pay attention to the step of averaging normalized maps:
https://neuroimage.usc.edu/brainstorm/Tutorials/SourceEstimation#Averaging_normalized_values
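
In case it helps, here is a rough sketch of what that normalization does for a single source (my own notation with random numbers, not Brainstorm code):

```python
import numpy as np

# Rough sketch of the dSPM idea for one source i: the minimum-norm
# estimate is divided by the noise standard deviation predicted for that
# source from the noise covariance, so the output behaves like a z-score
# with respect to the noise.
rng = np.random.default_rng(0)
n_channels = 32
w_i = rng.standard_normal(n_channels)   # inverse-operator row for source i
C = np.eye(n_channels) * 0.01           # noise covariance (assumed)
b = rng.standard_normal(n_channels)     # sensor data at one time point

mne_i = w_i @ b                         # minimum-norm current density
noise_std_i = np.sqrt(w_i @ C @ w_i)    # noise projected through the operator
dspm_i = mne_i / noise_std_i            # noise-normalized value
print(dspm_i)
```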

Thank you, Raymundo, for your response and for sharing the link with me.

I am slightly confused about scaling normalized maps.

It says in the article: "This should be used only for visualization and interpretation, scaled dSPM should never be averaged or used for any other statistical analysis."

  1. If I average normalized maps within each subject and then want to further average across subjects, should I apply the scaling step in between or not?

  2. Does this mean the scaling is only applied when I want to visualize? For example, after averaging across my 33 subjects, I would multiply the average by sqrt(33) and visualize that, but for statistical analysis I would stick with the non-scaled version? (My understanding of where the sqrt(33) comes from is sketched below.)
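
For reference, this toy example is how I currently understand the sqrt(N) factor (purely simulated noise, not my data):

```python
import numpy as np

# Averaging N independent unit-variance z-score maps shrinks the noise
# standard deviation to 1/sqrt(N); multiplying the average by sqrt(N)
# restores a z-score scale for display.
rng = np.random.default_rng(0)
n_subjects = 33
z_maps = rng.standard_normal((n_subjects, 15000))  # fake dSPM maps under the null

avg = z_maps.mean(axis=0)
print(avg.std())                           # ≈ 1/sqrt(33) ≈ 0.17
print((np.sqrt(n_subjects) * avg).std())   # ≈ 1, back on a z-score scale
```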