Ambiguity on a pipeline for TF source estimation

Dear BS community,

I am using BS for source localization of EEG data. I am learning how to do this from the excellent documentation available, but I would like to gain confidence in the pipeline I am following and to ask the experts in case I am making any mistakes in the process.

Description of the dataset:
EEG data were collected from 33 subjects under three motor imagery conditions. Each subject performed N trials of motor imagery for each condition, each trial lasting three seconds. A cue signaled participants to start the motor imagery at every trial; however, we cannot guarantee that they started immediately, so we cannot claim that the trials are perfectly time-locked.

For this reason, and because motor imagery is generally characterized by mu/beta oscillations, we are interested in time-frequency analysis.

Below is the pipeline I am following:

MAIN PIPELINE:
For each condition:

  1. Estimate the source for each trial using MNE (output is in pA.m)
  2. Average sources, as is, within subject (output in pA.m)
  3. Perform z-score normalization of the averaged source with respect to a baseline period preceding the motor imagery trial.
  4. Take the absolute value of the normalized source.
  5. Average the rectified sources across all subjects.
  6. Visualize brain maps across time (say, 6 brain maps for the 3 seconds)

[Question 1: Is step 4 in the appropriate order? Should it be done earlier?]
[Question 2: How would the above pipeline change if I am using dSPM? I am assuming point 3) will be skipped?]
[Question 3: I read that I need to multiply by sqrt(n) when visualizing z-score-normalized results. I am assuming this is irrelevant when it comes to performing any statistical test, further averaging/processing, etc.?]
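A minimal numerical sketch of steps 2-5 of the main pipeline (the array names, shapes, and baseline length are illustrative assumptions, not Brainstorm's actual data structures):

```python
import numpy as np

rng = np.random.default_rng(0)

n_subjects, n_vertices, n_times = 5, 100, 300
baseline_samples = 50  # pre-cue samples used as baseline (assumption)

# Per-subject trial-averaged sources in pA.m (step 2) -- synthetic data here
subject_sources = rng.normal(size=(n_subjects, n_vertices, n_times))

normalized = []
for src in subject_sources:
    base = src[:, :baseline_samples]
    # Step 3: z-score each vertex with respect to its own baseline period
    z = (src - base.mean(axis=1, keepdims=True)) / base.std(axis=1, keepdims=True)
    # Step 4: rectify, since source sign is ambiguous across subjects
    normalized.append(np.abs(z))

# Step 5: grand average of rectified z-scores across subjects
grand_average = np.mean(normalized, axis=0)
print(grand_average.shape)  # (100, 300)
```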

From those visualizations, we are able to confirm our scouts/ROIs. This selection can also be reinforced by sensor-level time-frequency analysis.

SCOUTS:
For each scout:

  1. Extract the time series of source activity. This is extracted for each subject from step 4) above (i.e., after rectification).
  2. Conduct statistical analysis on the time series across conditions. The average time series across subjects can also be plotted.
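One way the statistics in step 2 could look, assuming two of the conditions are compared with a paired test (the `scipy.stats.ttest_rel` call is real; the scout time series are synthetic, and multiple-comparison correction is omitted):

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(1)
n_subjects, n_times = 33, 300

# Per-subject scout time series (rectified z-scores) for two conditions
cond_a = np.abs(rng.normal(loc=1.0, size=(n_subjects, n_times)))
cond_b = np.abs(rng.normal(loc=1.2, size=(n_subjects, n_times)))

# Paired t-test at every time sample (correction for multiple comparisons
# across time samples would still be needed)
t_vals, p_vals = ttest_rel(cond_a, cond_b, axis=0)

# Grand-average time series across subjects, for plotting
mean_a, mean_b = cond_a.mean(axis=0), cond_b.mean(axis=0)
```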

TIME-FREQUENCY ANALYSIS:
For each condition and for each SCOUT:

  1. Use the source estimates from step 1) of the main pipeline to estimate the TF map for each trial.
  2. Average TF maps within a subject.
  3. Normalize the averaged map using the ERD/ERS method.
  4. Average maps across subjects.
  5. Plot the average time-frequency map.
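For reference, ERD/ERS normalization expresses power as a percent change relative to baseline power. A minimal sketch of step 3, under the assumption that the baseline is the pre-cue window (the TF map here is synthetic):

```python
import numpy as np

rng = np.random.default_rng(2)
n_freqs, n_times = 40, 300
baseline_samples = 50  # pre-cue samples used as baseline (assumption)

# Trial-averaged TF power map for one subject and one scout (step 2)
tf_power = rng.random((n_freqs, n_times)) + 0.1

# Step 3: ERD/ERS -- percent change relative to mean baseline power per frequency
baseline_power = tf_power[:, :baseline_samples].mean(axis=1, keepdims=True)
erd_ers = 100.0 * (tf_power - baseline_power) / baseline_power
```

By construction, the mean of the normalized map over the baseline window is zero at every frequency.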

I would very much appreciate your feedback on this. Please let me know if you find any issue or mistake in the above pipeline.

This is the right moment: rectification (abs) should happen only when averaging (or comparing) across subjects, as the signs are ambiguous across subjects.

That is right, baseline normalization is not needed.

That is right. It's just for visualization.

As described, the Scout analysis will disregard the sign of the z-scores within the scout. Since the z-scores are rectified before the Scout function, they cannot cancel out within the scout. I.e., if a scout has two vertices that present opposite deviations (+3z and -3z) with respect to baseline, the Scout function will show +3z. Whether the rectification of z-scores happens before or after the Scout function depends on your analysis design.
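A two-line numerical illustration of that point (using a plain mean as the scout function):

```python
import numpy as np

# Two vertices in a scout with opposite deviations from baseline
z_scores = np.array([+3.0, -3.0])

# Scout mean without prior rectification: the deviations cancel out
print(z_scores.mean())          # 0.0

# Scout mean after rectification: the deviation magnitude is preserved
print(np.abs(z_scores).mean())  # 3.0
```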

The TimeFreq analysis seems alright.

Not sure if you have taken a look at the Workflows page in the Brainstorm wiki:
https://neuroimage.usc.edu/brainstorm/Tutorials/Workflows

Thank you so much @Raymundo.Cassani , your confirmations are quite helpful and comforting.

I have implemented the pipeline on my data. Given a scout, I noticed that the TF maps for some subjects have a range of values much higher than 100 (for example, 900 or 3000), while for other subjects the values of the TF maps lie between -100 and 100.

As described above, I calculated the TF maps for a particular scout at the trial level, then averaged within each subject, and then applied ERD/ERS normalization.

I have two questions:

  1. Is it okay to have such high ERS values?
  2. How should I handle this range difference across subjects before averaging over subjects?

I read through the time-frequency analysis tutorial, and I saw one of the figures showing a range above 200.
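For what it's worth, since ERD/ERS is a percent change relative to baseline power, ERD is bounded below at -100% (power cannot go below zero) but ERS has no upper bound, so values well above 100 are arithmetically possible. A worked example with made-up power values:

```python
baseline_power = 2.0   # arbitrary units
task_power = 20.0      # a tenfold power increase during the task

ers = 100.0 * (task_power - baseline_power) / baseline_power
print(ers)  # 900.0 -- a value like 900 simply means a 10x power increase
```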

Thanks again @Raymundo.Cassani for the priceless contributions to the Brainstorm community!