Tutorial 27: Workflows

[TUTORIAL UNDER DEVELOPMENT: NOT READY FOR PUBLIC USE]

Authors: Francois Tadel, Elizabeth Bock, Dimitrios Pantazis, Richard Leahy, Sylvain Baillet

This page provides general recommendations for event-related analysis. It is not directly related to the auditory dataset, but describes guidelines you should consider for any MEG/EEG experiment.
We do not provide standard analysis pipelines for resting-state or steady-state recordings yet, but we will add a few examples soon in the section Other analysis scenarios of the tutorials page.

What is your question?

The most appropriate analysis pipeline for your data depends on the question you are trying to answer. Before defining the main steps of your analysis, you should be able to state clearly the question you want to answer with your recordings.

What dimension?

What kind of experiment?

What level of precision?

What statistical test?

Design considerations

Common pre-processing pipeline

Most event-related studies can start with the pipeline we've introduced in these tutorials.

  1. Import the anatomy of the subject (or use a template for all the subjects).
  2. Access the recordings:
    • Link the continuous recordings to the Brainstorm database.
    • Prepare the channel file: co-register sensors and MRI, edit the types and names of the channels.
    • Edit the event markers: fix the delays of the triggers, mark additional events.
  3. Pre-process the signals:
    • Evaluate the quality of the recordings with a power spectral density (PSD) plot.
    • Apply frequency filters (low-pass, high-pass, notch).
    • Identify bad channels and bad segments.
    • Correct for artifacts with SSP or ICA.
  4. Import the recordings in the database: epochs around some markers of interest.
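As an illustration of step 3 (quality check and frequency filters), here is a minimal sketch in Python with NumPy/SciPy on a simulated channel. This is not Brainstorm code; the sampling rate, filter settings and signal below are hypothetical:

```python
import numpy as np
from scipy import signal

# Hypothetical example: one channel sampled at 600 Hz for 10 s,
# with a 10 Hz "brain" oscillation and 60 Hz power-line contamination.
fs = 600.0
t = np.arange(0, 10, 1.0 / fs)
rng = np.random.default_rng(0)
x = (np.sin(2 * np.pi * 10 * t)
     + 0.5 * np.sin(2 * np.pi * 60 * t)
     + 0.1 * rng.standard_normal(t.size))

# Step 3a: evaluate the quality of the recording with a PSD (Welch estimate).
freqs, psd = signal.welch(x, fs=fs, nperseg=2048)

# Step 3b: notch filter at 60 Hz, then band-pass 0.5-100 Hz.
b_notch, a_notch = signal.iirnotch(w0=60.0, Q=30.0, fs=fs)
x_notched = signal.filtfilt(b_notch, a_notch, x)
sos = signal.butter(4, [0.5, 100.0], btype="bandpass", fs=fs, output="sos")
x_clean = signal.sosfiltfilt(sos, x_notched)
```

In a real pipeline the PSD is inspected visually (to spot bad channels and line noise) before choosing the filters.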

How many trials to include?

EEG recordings

Average

Statistics: Single subject

Statistics: Group analysis, within subject

Statistics: Group analysis, between subjects

MEG recordings

Average

Statistics: Single subject

Statistics: Group analysis

Constrained cortical sources

Average: Single subject

  1. Sensor average: Compute one sensor-level average per acquisition run and per condition.

  2. Sources: Estimate sources for each average (constrained, no normalization).

  3. Source average: Average the source-level run averages to get one subject average.
    Compute a weighted average to account for the different numbers of trials across runs.

  4. Normalize the subject min-norm averages: Z-score wrt baseline (no absolute value).
    Justification: The amplitude range of current densities may vary between subjects because of anatomical or experimental differences. This normalization helps bring the different subjects into the same range of values.

  5. Low-pass filter your evoked responses (optional).
    If you filter your data, do it after the noise normalization so the variance is not underestimated.

  6. Do not rectify the cortical maps, but display them as absolute values if needed.
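Steps 3, 4 and 6 above can be sketched numerically. A hedged illustration with NumPy, using made-up dimensions (runs, sources, time samples) and random data in place of real source maps:

```python
import numpy as np

# Hypothetical numbers: 3 runs with different trial counts; each run average
# is a (sources x time) matrix; the baseline is the first 100 samples.
rng = np.random.default_rng(1)
n_trials = np.array([60, 80, 40])              # trials kept in each run
run_avgs = rng.standard_normal((3, 500, 300))  # 3 runs, 500 sources, 300 samples

# Step 3: weighted average across runs, weights = number of trials per run.
subj_avg = np.tensordot(n_trials, run_avgs, axes=1) / n_trials.sum()

# Step 4: Z-score with respect to the baseline (no absolute value).
baseline = subj_avg[:, :100]
z = (subj_avg - baseline.mean(axis=1, keepdims=True)) \
    / baseline.std(axis=1, keepdims=True)

# Step 6: keep the signed maps; take the absolute value only for display.
display_map = np.abs(z)
```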

Average: Group analysis

  1. Subject averages: Compute within-subject averages for all the subjects, as described above.

  2. Rectify the cortical maps (apply an absolute value).
    Justification: Cortical maps have ambiguous signs across subjects: reconstructed sources depend heavily on the orientation of the true cortical sources. Because the folding patterns of individual cortical anatomies vary considerably, cortical maps have subject-specific amplitude and sign ambiguities. This is true even if a standard anatomy is used for the reconstruction.

  3. Project the individual source maps on a template (only when using the individual brains).
    For more details, see tutorial: Group analysis: Subject coregistration.

  4. Group average: Compute grand averages of all the subjects.
    Do not use a weighted average: all the subjects should have the same weight in this average.

  5. Smooth spatially the source maps (optional).
    Smoothing after step #4 is useful if you later compute non-parametric statistics with the subject averages. For a simple group average, smoothing before or after computing the average is equivalent.
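Steps 2 and 4 amount to taking the absolute value of each subject's map and then an unweighted mean. A small NumPy sketch with hypothetical dimensions:

```python
import numpy as np

# Hypothetical subject-level normalized source maps:
# 5 subjects x 500 sources x 300 time samples.
rng = np.random.default_rng(2)
subj_maps = rng.standard_normal((5, 500, 300))

# Step 2: rectify each subject's map (the sign is ambiguous across subjects).
rectified = np.abs(subj_maps)

# Step 4: unweighted grand average -- every subject counts the same,
# regardless of how many trials contributed to its average.
grand_avg = rectified.mean(axis=0)
```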

Difference of averages: Within subject

  1. Sensor average: Compute one sensor-level average per acquisition run and condition.

  2. Sources: Estimate sources for each average (constrained, no normalization).

  3. Source average: Average the source-level run averages to get one subject average.

  4. Subject difference: Compute the difference between conditions for each subject #i: (Ai-Bi)

  5. Normalize the difference: Z-score wrt baseline (no absolute value): Z(Ai-Bi)

  6. Low-pass filter the difference (optional)

  7. Rectify the difference (apply an absolute value): |Z(Ai-Bi)|

  8. Project the individual difference on a template (only when using the individual brains).

  9. Group average: Compute grand averages of all the subjects: avg(|Z(Ai-Bi)|).

  10. Smooth spatially the source maps (optional).
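The whole chain avg(|Z(Ai-Bi)|) can be sketched with NumPy. The subject count, dimensions and the zscore_baseline helper below are hypothetical, not part of Brainstorm:

```python
import numpy as np

# Hypothetical per-subject source averages for conditions A and B:
# 4 subjects x 200 sources x 300 samples; baseline = first 100 samples.
rng = np.random.default_rng(3)
A = rng.standard_normal((4, 200, 300))
B = rng.standard_normal((4, 200, 300))

def zscore_baseline(x, n_base=100):
    """Z-score each source with respect to its baseline samples."""
    base = x[..., :n_base]
    return (x - base.mean(-1, keepdims=True)) / base.std(-1, keepdims=True)

# Steps 4-7: subject-level difference, normalization, rectification.
diff = zscore_baseline(A - B)     # Z(Ai - Bi), sign still preserved here
rect_diff = np.abs(diff)          # |Z(Ai - Bi)|

# Step 9: grand average across subjects: avg(|Z(Ai - Bi)|).
group_diff = rect_diff.mean(axis=0)
```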

Difference of averages: Between subjects

  1. Grand averages: Compute averages for groups #1 and #2 as in Average:Group analysis.

  2. Difference: Compute the difference between group-level averages: avg(|G1|)-avg(|G2|)

  3. Limitations: Because we rectify the source maps before computing the difference, we lose the ability to detect differences between values of equal amplitude but opposite signs. We cannot keep the sign because we are averaging across subjects. As a result, some effects may not be detected correctly.
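The limitation in step 3 can be shown with toy numbers: two groups whose responses have equal amplitude but opposite signs produce a null rectified difference, even though the signed effect is real. The values below are made up:

```python
import numpy as np

# Two groups whose responses at one source are equal in amplitude but
# opposite in sign (e.g. opposite dipole orientations across groups).
g1 = np.array([ 2.0,  2.1,  1.9])   # group 1 subject values
g2 = np.array([-2.0, -1.9, -2.1])   # group 2 subject values

# Rectifying before averaging discards the sign...
diff_rectified = np.abs(g1).mean() - np.abs(g2).mean()
# ...so a real A = -B effect looks like "no difference" (close to 0).
```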

Statistics: Single subject

  1. Sources: Compute source maps for each trial (constrained, no normalization).

  2. A = B:

    • Parametric or non-parametric two-sample t-test, two-tailed.
    • Identifies where and when the conditions differ (the sign of the difference is not meaningful).

  3. Directionality: Additional step to know which condition has higher values.

    • Compute the difference of rectified averages: |avg(Ai)|-|avg(Bi)|
    • Combine the significance level (t-test) with the direction (difference): See details.
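A sketch of these two steps with SciPy at a single source/time point; the trial counts and effect sizes below are made up:

```python
import numpy as np
from scipy import stats

# Hypothetical single-subject trials at one source/time point:
# condition A has a stronger (more negative) response than condition B.
rng = np.random.default_rng(4)
A = rng.normal(loc=-3.0, scale=1.0, size=80)   # 80 trials, condition A
B = rng.normal(loc=-1.0, scale=1.0, size=80)   # 80 trials, condition B

# Step 2: two-sample t-test, two-tailed -> where/when A and B differ.
t_stat, p_val = stats.ttest_ind(A, B)

# Step 3: direction from the difference of rectified averages:
# |avg(Ai)| - |avg(Bi)| > 0 means A has the stronger response.
direction = np.abs(A.mean()) - np.abs(B.mean())
```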

Statistics: Group analysis, within subject [???]

Statistics: Group analysis, between subjects [???]

Unconstrained cortical sources [???]

Unconstrained models produce three values for each grid point, corresponding to the three dipole orientations (X, Y, Z).
We want only one statistic and one p-value per grid point in the output.
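The reduction to one value per grid point is the Euclidean norm of the three orientations. In NumPy, with hypothetical dimensions:

```python
import numpy as np

# Hypothetical unconstrained source map:
# 3 orientations x 200 grid points x 300 time samples.
rng = np.random.default_rng(5)
xyz = rng.standard_normal((3, 200, 300))

# "Flattening": one value per grid point = norm of the X, Y, Z dipoles.
flat = np.linalg.norm(xyz, axis=0)   # shape (200, 300), all values >= 0
```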

Average: Single subject [???]

  1. Sensor average: Compute one sensor-level average per acquisition run and per condition.

  2. Sources: Estimate sources for each average (unconstrained, no normalization).

  3. Source average: Average the source-level run averages to get one subject average.

  4. Low-pass filter your evoked responses (optional).

  5. Normalize the subject min-norm averages: Z-score wrt baseline (no absolute value).
    [???] HOW TO NORMALIZE UNCONSTRAINED MAPS WRT BASELINE?

Average: Group analysis [???]

  1. Subject averages: Compute within-subject averages for all the subjects, as described above.

  2. Flatten the cortical map: compute the norm of the three orientations at each grid point.

  3. Project the individual source maps on a template (only when using the individual brains).

  4. Group average: Compute grand averages of all the subjects.

Difference of averages: Within subject [???]

  1. Subject averages: Compute within-subject averages for conditions A and B, as described above.

  2. Subject difference: Compute the difference between conditions for each subject (A-B).

  3. Flatten the cortical map: compute the norm of the three orientations at each grid point.

  4. Project the individual difference on a template.

  5. Group average: Compute grand averages of all the subjects: average_subjects(|Ai-Bi|).

Difference of averages: Between subjects [???]

  1. Subject averages: Compute within-subject averages for conditions A and B, as described above.

  2. Grand averages: Compute the group-level averages for groups #1 and #2 as described in "Average: Group analysis".

  3. Difference: Compute the difference between group-level averages: avg(|G1|)-avg(|G2|)

  4. Limitations: Because we rectify the source maps before computing the difference, we lose the ability to detect differences between values of equal amplitude but opposite signs. We cannot keep the sign because we are averaging across subjects. As a result, some effects may not be detected correctly.

Statistics: Single subject [???]

  1. Sources: Compute source maps for each trial (unconstrained, no normalization)

  2. Statistics: Compare all the trials of condition A vs all the trials of condition B.

  3. |A| = |B|

    • Non-parametric tests only, independent, test norm, two-tailed.

    • Indicates which condition corresponds to a stronger brain response (for a known effect).
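A non-parametric comparison of |A| vs |B| can be sketched as a simple permutation test on the norm values. This is only an illustration with made-up data and a single grid point, not the exact procedure implemented in Brainstorm:

```python
import numpy as np

# Hypothetical per-trial norms |A| and |B| at one grid point.
rng = np.random.default_rng(6)
normA = np.abs(rng.normal(2.0, 1.0, 60))   # 60 trials, condition A
normB = np.abs(rng.normal(1.0, 1.0, 60))   # 60 trials, condition B

# Observed statistic: difference of mean norms.
obs = normA.mean() - normB.mean()

# Permutation null: shuffle trial labels, recompute the statistic.
pooled = np.concatenate([normA, normB])
n_perm, count = 2000, 0
for _ in range(n_perm):
    perm = rng.permutation(pooled)
    d = perm[:60].mean() - perm[60:].mean()
    if abs(d) >= abs(obs):       # two-tailed
        count += 1
p_val = (count + 1) / (n_perm + 1)
```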

Statistics: Group analysis, within subject [???]

Statistics: Group analysis, between subjects [???]

Regions of interest (scouts) [???]

Statistics: Single subject [???]

Statistics: Group analysis, within subject [???]

Time-frequency maps [???]

Average: Single subject [???]

  1. Time-frequency maps: Compute time-frequency maps for each trial.

    • Apply the default measure: magnitude for Hilbert transform, power for Morlet wavelets.
    • Do not normalize the source maps: no Z-score or ERS/ERD.
    • The values are all strictly positive: there is no sign ambiguity, as there is for recordings or sources.
  2. Average all the time-frequency maps together, for each condition separately.

    • If you are averaging time-frequency maps computed on sensor-level data, the same limitations apply as for averaging sensor-level data (see the sections about MEG and EEG recordings above).
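A hedged sketch of steps 1-2 for the Hilbert-based measure mentioned above, on simulated trials. The frequency band, trial count and sampling rate are hypothetical:

```python
import numpy as np
from scipy import signal

# Hypothetical: 20 trials of one channel, 2 s at 250 Hz,
# with a 10 Hz oscillation plus noise.
fs = 250.0
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(7)
trials = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal((20, t.size))

# Step 1: band-pass around 10 Hz, then Hilbert magnitude
# (the "default measure" for the Hilbert transform, no normalization).
sos = signal.butter(4, [8.0, 12.0], btype="bandpass", fs=fs, output="sos")
envelopes = np.abs(signal.hilbert(signal.sosfiltfilt(sos, trials, axis=1),
                                  axis=1))

# Step 2: average the strictly positive maps across trials, per condition.
avg_envelope = envelopes.mean(axis=0)
```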

Average: Group analysis [???]

  1. Subject averages: Compute within-subject averages for all the subjects, as described above.

  2. Normalize: [???] Zscore, ERD/ERS, or FieldTrip?
    Justification: The amplitude range of current densities may vary between subjects because of anatomical or experimental differences. This normalization helps bring the different subjects into the same range of values.

  3. Group average: Compute grand averages of all the subjects.

Difference of averages [???]

  1. Group average: Compute the averages for conditions A and B as in Average: Group analysis.

  2. Difference: Compute the difference between group-level averages: avg(A)-avg(B).

Statistics: Single subject [???]

  1. Time-frequency maps: Compute time-frequency maps for each trial.

    • Apply the default measure: magnitude for Hilbert transform, power for Morlet wavelets.
    • Do not normalize the source maps: no Z-score or ERS/ERD.
    • The values are all strictly positive: there is no sign ambiguity, as there is for recordings or sources.
  2. Statistics: Compare all the trials of condition A vs all the trials of condition B.

  3. A = B [???]

    • Parametric or non-parametric t-test, independent, two-tailed. [???]

    • Indicates both where there is a significant effect and its direction (no sign ambiguity).

Statistics: Group analysis, within subject [???]

Advanced

Workflow: Current problems [TODO]

The following inconsistencies are still present in the documentation. We are actively working on these issues and will update this tutorial as soon as we find solutions.










Tutorials/Workflows (last edited 2016-05-18 22:39:03 by FrancoisTadel)