Tutorial 27: Workflows

[TUTORIAL UNDER DEVELOPMENT: NOT READY FOR PUBLIC USE]

Authors: Francois Tadel, Elizabeth Bock, Dimitrios Pantazis, Richard Leahy, Sylvain Baillet

This page provides general recommendations for event-related analysis. It is not directly related to the auditory dataset, but lists guidelines you should consider for any MEG/EEG experiment. We do not provide standard analysis pipelines for resting-state or steady-state recordings yet, but we will soon add a few examples to the section Other analysis scenarios of the tutorials page.


What is your question?

The most appropriate analysis pipeline for your data depends on the question you are trying to answer.

What is your objective with these data?

What are the dimensions you want to explore?

What level of precision do you want to reach?

[TODO: WHEN TO USE WHAT]

Common pre-processing pipeline

All event-related studies can start with the pipeline introduced in these beginner tutorials.

  1. Import the anatomy of the subject (or use a template for all the subjects).
  2. Access the recordings:
    • Link the continuous recordings to the Brainstorm database.
    • Prepare the channel file: co-register sensors and MRI, edit type and name of channels.
    • Edit the event markers: fix the delays of the triggers, mark additional events.
  3. Pre-process the signals:
    • Evaluate the quality of the recordings with a power spectral density (PSD) estimate (see the sketch after this list).
    • Apply frequency filters (low-pass, high-pass, notch).
    • Identify bad channels and bad segments.
    • Correct for artifacts with SSP or ICA.
  4. Import the recordings into the database: extract epochs around the markers of interest.
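As an illustration of step 3, here is a minimal sketch in Python with NumPy/SciPy (not Brainstorm code) of a PSD evaluation using Welch's method; the sampling rate, duration and placeholder signals are assumptions.

```python
# Minimal sketch in Python/SciPy (not Brainstorm code): evaluate recording quality
# with a power spectral density estimate (Welch's method). The sampling rate,
# duration and signals below are placeholders.
import numpy as np
from scipy.signal import welch

fs = 600.0                                   # assumed sampling rate (Hz)
n_channels, n_samples = 4, int(60 * fs)      # hypothetical 60 s of recordings
rng = np.random.default_rng(0)
data = rng.standard_normal((n_channels, n_samples))   # placeholder signals

# One PSD per channel; strong peaks at the power-line frequency and its harmonics
# suggest a notch filter is needed, flat or noisy channels may be marked as bad.
freqs, psd = welch(data, fs=fs, nperseg=int(4 * fs), axis=-1)
print(psd.shape)                             # (4, 1201): channels x frequency bins
```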

EEG recordings

Average

Within-subject statistics

Between-subjects statistics

MEG recordings

Average

Within-subject statistics

Between-subjects statistics

Constrained cortical sources

Within-subject average

  1. Sensor average: Compute one sensor-level average per acquisition session and condition (MEG/EEG).
    Use the same number of trials for all the averages.

  2. Sources: Estimate sources for each session average (constrained or unconstrained, no normalization).

  3. Source average: Average the source-level session averages to get one subject average.

  4. Low-pass filter below 40 Hz for evoked responses (optional).

  5. Normalize the subject min-norm averages: Z-score with respect to the baseline (no absolute value), as illustrated in the sketch after this list.

  6. Do not rectify the cortical maps, but display them in absolute values.
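Here is a minimal sketch in Python/NumPy (not Brainstorm code) of the baseline Z-score normalization in step 5; the epoch definition, sampling rate and number of sources are hypothetical.

```python
# Minimal sketch in Python/NumPy (not Brainstorm code): Z-score a subject-level
# source average against its pre-stimulus baseline, keeping the sign.
import numpy as np

fs = 600.0                                    # assumed sampling rate (Hz)
times = np.arange(-0.100, 0.500, 1.0 / fs)    # hypothetical epoch: -100 ms to +500 ms
n_sources = 15000                             # hypothetical number of cortical sources
rng = np.random.default_rng(0)
sources = rng.standard_normal((n_sources, times.size))  # placeholder current density maps

baseline = times < 0                          # pre-stimulus samples
mu = sources[:, baseline].mean(axis=1, keepdims=True)
sigma = sources[:, baseline].std(axis=1, keepdims=True)
z = (sources - mu) / sigma                    # signed Z-score: no absolute value here

# For display only, the maps can be shown in absolute values; the stored maps stay signed.
```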

Between-subjects average

  1. Subject averages: Compute the within-subject averages for all the subjects, as described just above.

  2. Rectify the cortical maps (apply an absolute value).
    Justification: Cortical maps have ambiguous signs across subjects: the sign of the reconstructed sources depends heavily on the orientation of the true cortical sources. Because the folding patterns of individual cortical anatomies vary considerably, cortical maps have subject-specific amplitude and sign ambiguities. This is true even if a standard anatomy is used for the reconstruction.

  3. Project the individual source maps on a template.

  4. Spatially smooth the sources.
    Justification: The effects observed with constrained cortical maps may be artificially focal and may not overlap well across subjects. Smoothing the cortical maps helps the activated regions overlap across subjects.

  5. Group average: Compute the grand average over all the subjects (see the sketch after this list).
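A minimal sketch in Python/NumPy (not Brainstorm code) of steps 2 and 5, rectification followed by a grand average; the dimensions are hypothetical, and the template projection (step 3) and surface smoothing (step 4) rely on Brainstorm's anatomy tools, so they appear only as comments.

```python
# Minimal sketch in Python/NumPy (not Brainstorm code): rectify the subject maps,
# then average across subjects. All dimensions are hypothetical.
import numpy as np

n_subjects, n_sources, n_times = 12, 1000, 361
rng = np.random.default_rng(0)
subject_maps = rng.standard_normal((n_subjects, n_sources, n_times))  # placeholder subject averages

rectified = np.abs(subject_maps)          # step 2: remove the subject-specific sign ambiguity
# step 3: project each rectified map onto the template anatomy (done in Brainstorm)
# step 4: smooth spatially on the cortical surface (done in Brainstorm)
grand_average = rectified.mean(axis=0)    # step 5: one value per source and time point
```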

Between-subject difference of average

  1. Subject averages: For each subject, compute the within-subject averages for conditions A and B, as described just above.

  2. Subject difference: Compute the difference between conditions for each subject (A-B).

  3. Rectify the difference of the source maps (apply an absolute value). Note that the difference is computed on the signed maps before rectification (see the toy example after this list).

  4. Project the individual difference on a template.

  5. Spatially smooth the sources.

  6. Group average: Compute grand averages of all the subjects.
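The order of steps 2 and 3 matters, as this toy Python/NumPy example (not Brainstorm code) shows.

```python
# Toy Python/NumPy example (not Brainstorm code): |A - B| is not |A| - |B|,
# so the difference must be computed on the signed maps before rectification.
import numpy as np

a = np.array([2.0, -3.0])        # toy signed source values, condition A
b = np.array([-1.0, 1.0])        # toy signed source values, condition B
print(np.abs(a - b))             # [3. 4.] -> the quantity used in this workflow
print(np.abs(a) - np.abs(b))     # [1. 2.] -> a different (and misleading) quantity
```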

Within-subject statistics

  1. Sources: Compute source maps for each trial (constrained or unconstrained, no normalization)

  2. Statistics: Compare all the trials of condition A vs all the trials of condition B.
    Use as many trials as possible for A and B: No need to have an equal number of trials.

  3. A = B

    • Parametric or non-parametric tests, independent, two-tailed, FDR-corrected (see the sketch after this list).

    • Correct effect size: we correctly identify where and when the conditions differ.
    • Ambiguous sign: we cannot say which condition has the stronger response.
  4. |A| = |B|

    • Non-parametric tests only, independent, two-tailed, FDR-corrected.

    • Incorrect effect size: the effects are not detected correctly when A and B have opposite signs.
    • Correct sign: we can correctly identify which condition has the stronger response.
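Here is a minimal sketch in Python/SciPy (not Brainstorm code) of the parametric version of the A = B test: an independent two-sample t-test over trials with FDR correction over all sources and time points. The trial counts and map dimensions are hypothetical, and scipy.stats.false_discovery_control requires SciPy 1.11 or later.

```python
# Minimal sketch in Python/SciPy (not Brainstorm code): parametric A = B test,
# independent two-sample t-test over trials, two-tailed, FDR-corrected.
import numpy as np
from scipy import stats

n_trials_a, n_trials_b, n_sources, n_times = 60, 75, 200, 100   # hypothetical
rng = np.random.default_rng(0)
trials_a = rng.standard_normal((n_trials_a, n_sources, n_times))  # placeholder source maps
trials_b = rng.standard_normal((n_trials_b, n_sources, n_times))

# Equal-variance t-test across trials; unequal trial counts are fine.
t, p = stats.ttest_ind(trials_a, trials_b, axis=0)

# Benjamini-Hochberg FDR correction over the whole source x time grid.
p_adjusted = stats.false_discovery_control(p.ravel()).reshape(p.shape)
significant = p_adjusted < 0.05
```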

Between-subject statistics

  1. Sources: Compute source maps for each trial (constrained or unconstrained, no normalization)

  2. First-level statistic: Compute a t-statistic for the source maps of all the trials A vs. all the trials B.

    • Process2: "Test > Parametric test: Independent": t-test with equal variance

    • Use as many trials as possible for A and B: No need to have an equal number of trials.
    • With a relatively high number of trials, we can consider that the t-values follow a Z-distribution.
  3. Second-level statistic: Compute a one-sample power test based on the subject t-statistics.

    • Process1: "Test > Parametric test against zero": One-sample Chi-square test

    • This tests for |A-B| = 0 using a power test: X = sum(t_i^2) follows a Chi-square distribution (see the sketch after this list).

    • Correct effect size, no sign (cannot detect which condition has the strongest response).
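A minimal sketch in Python/SciPy (not Brainstorm code) of the second-level power test: X = sum of the squared per-subject t-values, compared to a Chi-square distribution with one degree of freedom per subject. The per-subject t-maps are placeholders here; in practice they come from the first-level A vs. B t-tests described above.

```python
# Minimal sketch in Python/SciPy (not Brainstorm code): second-level power test.
import numpy as np
from scipy import stats

n_subjects, n_sources, n_times = 12, 200, 100    # hypothetical dimensions
rng = np.random.default_rng(0)
# Placeholder first-level t-maps; with enough trials, they are approximately Z-distributed.
t_maps = rng.standard_normal((n_subjects, n_sources, n_times))

x = np.sum(t_maps ** 2, axis=0)                  # power statistic per source and time point
p = stats.chi2.sf(x, df=n_subjects)              # upper-tail p-value (no sign information)
```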

Design consideration

Regions of interest (scouts)

Within-subject statistics

For a single subject, test for significant differences between two experimental conditions:

Unconstrained source maps

Regions of interest (scouts)

Time-frequency maps

Between-subject statistics [TODO]

Subject averages

You first need to process the data separately for each subject:

  1. Compute the subject-level averages, using the same number of trials for each subject.
    Sources: Average the non-normalized minimum norm maps (current density maps, no Z-score).

  2. Sources and time-frequency: Normalize the data to bring the different subjects to the same range of values (Z-score normalization with respect to a baseline - never apply an absolute value here).

  3. Sources computed on individual brains: Project the individual source maps on a template (see the coregistration tutorial). Not needed if the sources were estimated directly on the template anatomy.
    Note: We evaluated the alternative order (project the sources, then normalize): it does not seem to make a significant difference. It is therefore more practical to normalize at the subject level before projecting the sources on the template, so that we have normalized maps to look at for each subject in the database.

  4. Constrained sources: Spatially smooth the sources, to make sure the brain responses are aligned across subjects. Problem: this is only possible after applying an absolute value; smoothing the signed values does not make sense, as the positive and negative signals on the two sides of a sulcus would cancel out (see the toy example after this list). [TODO]
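A toy Python/NumPy example (not Brainstorm code) of the cancellation problem described in step 4:

```python
# Toy Python/NumPy example (not Brainstorm code): two neighboring sources on opposite
# walls of a sulcus have opposite signs, so smoothing the signed values cancels the
# activity, while smoothing the rectified values preserves it.
import numpy as np

wall_a, wall_b = 5.0, -5.0                        # toy constrained source values across a sulcus
print(np.mean([wall_a, wall_b]))                  # 0.0 -> signed smoothing cancels out
print(np.mean([np.abs(wall_a), np.abs(wall_b)]))  # 5.0 -> rectified smoothing keeps the activity
```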

Paired tests

Averages

Advanced

Workflow: Current problems [TODO]

The following inconsistencies are still present in the documentation. We are actively working on these issues and will update this tutorial as soon as we find solutions.







