Tutorial 27: Workflows

Authors: Francois Tadel, Elizabeth Bock, Dimitrios Pantazis, John Mosher, Richard Leahy, Sylvain Baillet

This page provides general recommendations for event-related analysis. It is not directly related to the auditory dataset, but offers guidelines you should consider for any MEG/EEG experiment.
We do not provide standard analysis pipelines for resting-state or steady-state recordings yet, but we will add a few examples soon in the section Other analysis scenarios of the tutorials page.

What is your question?

The most appropriate analysis pipeline for your data depends on the question you are trying to answer. Before defining the main steps of your analysis, you should be able to state clearly the question you want your recordings to answer.

What dimension?

What kind of experiment?

What level of precision?

What statistical test?

Design considerations

Common pre-processing pipeline

Most event-related studies can start with the pipeline we've introduced in these tutorials.

  1. Import the anatomy of the subject (or use a template for all the subjects).
  2. Access the recordings:
    • Link the continuous recordings to the Brainstorm database.
    • Prepare the channel file: co-register the sensors with the MRI, edit the types and names of the channels.
    • Edit the event markers: fix trigger delays, mark additional events.
  3. Pre-process the signals:
    • Evaluate the quality of the recordings with a power spectral density plot (PSD); see the sketch after this list.
    • Apply frequency filters (low-pass, high-pass, notch).
    • Identify bad channels and bad segments.
    • Correct for artifacts with SSP or ICA.
  4. Import the recordings in the database: epochs around some markers of interest.
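
As an illustration of the PSD evaluation in step 3, here is a minimal Python sketch using scipy.signal.welch. It is not a Brainstorm script (in Brainstorm these steps are run from the interface), and the variable names, sizes and thresholds (eeg, sfreq, the median-based cutoffs) are hypothetical.

```python
# Hypothetical sketch: evaluate recording quality with a Welch PSD.
import numpy as np
from scipy.signal import welch

sfreq = 600.0                                # sampling rate in Hz (assumed)
eeg = np.random.randn(64, int(60 * sfreq))   # 64 channels x 60 s (fake data)

# One spectrum per channel, 4-second windows: power-line peaks (50/60 Hz)
# and flat or noisy channels stand out when all spectra are plotted together.
freqs, psd = welch(eeg, fs=sfreq, nperseg=int(4 * sfreq))

# Crude stand-in for visual inspection: flag channels whose broadband
# power deviates strongly from the median across channels.
power = psd.mean(axis=1)
suspicious = np.where((power > 10 * np.median(power)) |
                      (power < 0.1 * np.median(power)))[0]
print("Channels to review:", suspicious)
```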

How many trials to include?

EEG recordings

Average

Statistics: Single subject

Statistics: Group analysis, within subject

Statistics: Group analysis, between subjects

MEG recordings

Average

Statistics: Single subject

Statistics: Group analysis

Constrained cortical sources

Average: Single subject

  1. Sensor average: Compute one sensor-level average per acquisition run and per condition.

  2. Sources: Estimate sources for each average (constrained, no normalization).

  3. Source average: Average the source-level run averages to get one subject average (see the sketch after this list).
    Compute a weighted average to balance for the different numbers of trials across runs.

  4. Low-pass filter your evoked responses (optional).

    • If you filter the average before normalizing wrt baseline, you will underestimate the baseline variance and therefore overestimate the Z-scores computed in the next step, especially if the baseline is too short (typically fewer than 200 time points). The filter increases the autocorrelation of the time series and therefore biases the estimation of the signal variance (Wikipedia).

    • You also have to take into account possible edge effects due to the filter: either extract a smaller time window or exclude the beginning of the baseline from the normalization.

  5. Normalize the subject min-norm averages: Z-score wrt baseline (no absolute value).
    Justification: The amplitude range of current densities may vary between subjects because of anatomical or experimental differences. This normalization helps bring the different subjects into the same range of values.

  6. Do not rectify the cortical maps, but display them as absolute values if needed.
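
A minimal numpy sketch of steps 3 to 5, assuming hypothetical arrays (run_avgs, n_trials) rather than the Brainstorm database:

```python
# Hypothetical sketch: weighted average across runs, then Z-score wrt baseline.
import numpy as np

n_sources, n_times = 15000, 600
run_avgs = [np.random.randn(n_sources, n_times),   # run 1 average (fake)
            np.random.randn(n_sources, n_times)]   # run 2 average (fake)
n_trials = np.array([80, 120])                     # trials kept in each run

# Step 3: weighted average, so that each trial contributes equally.
weights = n_trials / n_trials.sum()
subj_avg = sum(w * r for w, r in zip(weights, run_avgs))

# Step 5: Z-score wrt baseline (here the first 200 samples), no absolute
# value. If a low-pass filter was applied first (step 4), exclude the
# filter edge at the start of the baseline to avoid biasing the variance.
baseline = subj_avg[:, :200]
z = (subj_avg - baseline.mean(axis=1, keepdims=True)) \
    / baseline.std(axis=1, keepdims=True)
```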

Average: Group analysis

  1. Subject averages: Compute within-subject averages for all the subjects, as described above.

  2. Rectify the cortical maps (process: Pre-process > Absolute value); see the sketch after this list.
    Justification: Cortical maps have ambiguous signs across subjects: the reconstructed sources depend heavily on the orientation of the true cortical sources. Given that the folding patterns of individual cortical anatomies vary considerably, cortical maps have subject-specific amplitude and sign ambiguities. This is true even if a standard anatomy is used for the reconstruction.

  3. Project the individual source maps onto a template (only when using the individual brains).
    For more details, see the tutorial Group analysis: Subject coregistration.

  4. Group average: Compute grand averages of all the subjects.
    Do not use a weighted average: all the subjects should have the same weight in this average.

  5. Smooth the source maps spatially (optional).
    Smooth after step #3 if you plan to compute non-parametric statistics with the subject averages. For a simple group average, it is equivalent to smooth before or after computing the average.
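
A minimal numpy sketch of steps 2 and 4, assuming the subject averages have already been projected onto the same template (step 3); all arrays are hypothetical:

```python
# Hypothetical sketch: rectification, then unweighted grand average.
import numpy as np

n_subjects, n_sources, n_times = 12, 15000, 600
subj_avgs = np.random.randn(n_subjects, n_sources, n_times)  # one map/subject

rectified = np.abs(subj_avgs)       # step 2: remove the sign ambiguity
grand_avg = rectified.mean(axis=0)  # step 4: all subjects weighted equally
```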

Difference of averages: Within subject

  1. Sensor average: Compute one sensor-level average per acquisition run and condition.

  2. Sources: Estimate sources for each average (constrained, no normalization).

  3. Source average: Average the source-level run averages to get one subject average.

  4. Subject difference: Compute the difference between conditions for each subject #i: (Ai-Bi)

  5. Low-pass filter the difference (optional)

  6. Normalize the difference: Z-score wrt baseline (no absolute value): Z(Ai-Bi)

  7. Rectify the difference (apply an absolute value): |Z(Ai-Bi)|

  8. Project the individual differences onto a template (only when using the individual brains).

  9. Group average: Compute the grand average of all the subjects: avg(|Z(Ai-Bi)|) (see the sketch after this list).

  10. Smooth spatially the source maps (optional).
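
A minimal numpy sketch of steps 4, 6, 7 and 9, with hypothetical per-subject source averages A and B for the two conditions:

```python
# Hypothetical sketch: avg(|Z(Ai-Bi)|) across subjects.
import numpy as np

n_subjects, n_sources, n_times = 12, 15000, 600
A = np.random.randn(n_subjects, n_sources, n_times)  # condition A, per subject
B = np.random.randn(n_subjects, n_sources, n_times)  # condition B, per subject

def zscore_baseline(x, n_baseline=200):
    # Z-score wrt the first n_baseline samples (step 6).
    base = x[..., :n_baseline]
    return (x - base.mean(axis=-1, keepdims=True)) \
           / base.std(axis=-1, keepdims=True)

diff = A - B                          # step 4: Ai - Bi (sign still meaningful)
z = zscore_baseline(diff)             # step 6: Z(Ai - Bi)
rectified = np.abs(z)                 # step 7: |Z(Ai - Bi)|
grand_avg = rectified.mean(axis=0)    # step 9: avg(|Z(Ai - Bi)|)
```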

Difference of averages: Between subjects

  1. Grand averages: Compute the averages for groups #1 and #2 as in Average: Group analysis.

  2. Difference: Compute the difference between group-level averages: avg(|G1|)-avg(|G2|)

  3. Limitations: Because we rectify the source maps before computing the difference, we lose the ability to detect differences between equal values of opposite signs. And we cannot keep the sign, because we are averaging across subjects. Therefore, many effects are not detected correctly (see the worked example below).
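
A tiny worked example of this limitation, with made-up numbers: two groups with sources of equal amplitude but opposite signs become indistinguishable after rectification.

```python
import numpy as np

g1 = np.array([+5.0, +5.0])   # group 1 source values (made up)
g2 = np.array([-5.0, -5.0])   # group 2: same amplitude, opposite sign

print(np.abs(g1).mean() - np.abs(g2).mean())  # 0.0  -> effect is invisible
print(g1.mean() - g2.mean())                  # 10.0 -> true signed effect
```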

Statistics: Single subject

Statistics: Group analysis, within subject

Statistics: Group analysis, between subjects

Unconstrained cortical sources

Unconstrained source models give three values for each grid point, corresponding to the three dipole orientations (X, Y, Z).
We want only one statistic and one p-value per grid point in the output; one common approach is to collapse the three orientations into their vector norm, as in the sketch below.
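
A minimal sketch of this flattening, assuming a hypothetical (grid point, orientation, time) array layout:

```python
# Hypothetical sketch: collapse the X/Y/Z orientations into one value
# per grid point with the vector norm sqrt(x^2 + y^2 + z^2).
import numpy as np

n_grid, n_times = 15000, 600
src = np.random.randn(n_grid, 3, n_times)  # (grid point, X/Y/Z, time)

flat = np.linalg.norm(src, axis=1)         # one value per grid point and time
print(flat.shape)                          # (15000, 600)
```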

Averages

Statistics: Single subject

Statistics: Group analysis, within subject

Statistics: Group analysis, between subjects

Regions of interest (scouts)

Statistics: Single subject

Statistics: Group analysis, within subject

Time-frequency maps

Average: Single subject

Average: Group analysis

  1. Subject averages: Compute within-subject averages for all the subjects, as described above.

  2. Normalize the subject averages: ERD/ERS or Z-score wrt baseline (see the sketch after this list).

  3. Group average: Compute grand averages of all the subjects.

  4. Difference of averages: Simply compute the difference of the group averages.
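
A minimal numpy sketch of the ERD/ERS option in step 2, with a hypothetical time-frequency array: power is expressed as the percent change relative to the baseline mean, per sensor and frequency.

```python
# Hypothetical sketch: ERD/ERS normalization of a time-frequency map.
import numpy as np

n_sensors, n_freqs, n_times = 64, 40, 600
tf = np.abs(np.random.randn(n_sensors, n_freqs, n_times))  # fake power map

# Percent change wrt the baseline mean (here the first 200 samples):
# negative values = desynchronization (ERD), positive = synchronization (ERS).
base = tf[..., :200].mean(axis=-1, keepdims=True)
erd_ers = 100.0 * (tf - base) / base
```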

Statistics: Single subject

Statistics: Group analysis, within subject

Statistics: Group analysis, between subjects







