Tutorial 27: Group analysis

[TUTORIAL UNDER DEVELOPMENT: NOT READY FOR PUBLIC USE]

Authors: Francois Tadel, Elizabeth Bock, Dimitrios Pantazis, Richard Leahy, Sylvain Baillet

This page provides general recommendations for group analysis. It is not directly related to the auditory dataset, but presents guidelines that should be considered for any MEG/EEG experiment.

Important physical limitations and implications

Recommendations for averaging/contrasting different types of data.

MEG sensor data

Cortical maps

Regions of interest (scouts)

Design considerations

Summary of the analysis

Single-subject workflow (single-trial analysis)

  1. Compute source map for each trial (constrained/unconstrained, no normalization)
  2. Estimate differences between two conditions A/B for which we have multiple trials
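For illustration only, the two steps above could be sketched as follows, using hypothetical arrays of per-trial source maps (the array names and sizes are invented, and a two-sample t-test is just one possible way to contrast conditions A and B):

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

# Hypothetical source maps, one per trial: [n_trials x n_vertices x n_times],
# no normalization applied (current density values keep their sign).
n_vertices, n_times = 200, 50
trials_A = rng.normal(0.0, 1.0, size=(30, n_vertices, n_times))
trials_B = rng.normal(0.5, 1.0, size=(30, n_vertices, n_times))

# Two-sample t-test at each (vertex, time) point between conditions A and B
t_values, p_values = ttest_ind(trials_A, trials_B, axis=0)

print(t_values.shape)  # (200, 50)
```

The test is applied independently at each vertex and time sample; correcting the resulting p-values for multiple comparisons is a separate question, covered in the statistics tutorial.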

Single-subject workflow (for group analysis)

  1. Compute sensor average per acquisition session => Session-level average for each condition

  2. Compute source map for each session average (constrained or unconstrained, no normalization)

  3. Average source maps across sessions => Subject-level average for each condition

  4. Optional: Low-pass filter < 40Hz for evoked responses

  5. Normalize the subject min-norm averages: Z-score vs. baseline

  6. Absolute value or norm for display
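Steps 5 and 6 (Z-score normalization against a baseline, then absolute value for display) can be sketched in NumPy; `baseline_zscore` is a hypothetical helper written for this illustration, not a Brainstorm function:

```python
import numpy as np

def baseline_zscore(source_map, times, baseline=(-0.1, 0.0)):
    """Z-score each source signal with respect to its baseline interval.

    source_map: [n_sources x n_times] subject-level average (signed values).
    times: time vector in seconds, same length as the second dimension.
    """
    mask = (times >= baseline[0]) & (times < baseline[1])
    mu = source_map[:, mask].mean(axis=1, keepdims=True)
    sigma = source_map[:, mask].std(axis=1, keepdims=True)
    return (source_map - mu) / sigma

rng = np.random.default_rng(1)
times = np.linspace(-0.1, 0.4, 251)
avg = rng.normal(size=(100, times.size))   # stand-in for a subject average

z = baseline_zscore(avg, times)   # signed Z-score maps (step 5)
z_display = np.abs(z)             # absolute value for display only (step 6)
```

Note that the absolute value is taken only for display: the signed Z-scored maps are what should be carried forward into the group analysis below.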

Group analysis workflow

  1. Compute the subject-level average min-norm maps, using the same number of trials for each subject

  2. Normalize the subject min-norm averages: Z-score vs. baseline (no absolute value)

  3. Project the individual source maps on a template (no absolute value)

  4. Constrained sources: Spatially smooth the sources (no absolute value)

  5. Compute grand averages or other group-level statistics (signed or absolute)
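Step 5 could look like this for a grand average, assuming a hypothetical stack of subject maps that have already been normalized and projected on the same template (array names and sizes are invented):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical stack of subject-level Z-scored maps, already projected on the
# same template anatomy: [n_subjects x n_vertices x n_times], signed values.
subject_maps = rng.normal(size=(12, 500, 100))

# Grand average of the signed values (preserves cancellation across subjects)
grand_avg_signed = subject_maps.mean(axis=0)

# Grand average of the absolute values (ignores sign flips across subjects)
grand_avg_abs = np.abs(subject_maps).mean(axis=0)
```

Whether to average signed or absolute values depends on whether the sign of the source orientation is expected to be consistent across subjects; both options are mentioned in step 5.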

Within-subject statistics

For a single subject, test for significant differences between two experimental conditions:

Sensor recordings:

Constrained source maps (one value per vertex):

Unconstrained source maps (three values per vertex):

Regions of interest (scouts):

Time-frequency maps:

Between-subject averages [TODO]

Between-subject statistics [TODO]

Subject averages

First, process the data separately for each subject:

  1. Compute the subject-level averages, using the same number of trials for each subject.
    Sources: Average the non-normalized minimum norm maps (current density maps, no Z-score).

  2. Sources and time-frequency: Normalize the data to bring the different subjects into the same range of values (Z-score normalization with respect to a baseline; never apply an absolute value here).

  3. [???] Absolute value? [???]
  4. Sources computed on individual brains: Project the individual source maps on a template (see the coregistration tutorial). This is not needed if the sources were estimated directly on the template anatomy.
    Note: We evaluated the alternative order (project the sources, then normalize): it does not seem to make a significant difference. It is therefore more practical to normalize at the subject level before projecting the sources on the template, so that normalized maps are available for each subject in the database.

  5. Constrained sources: Spatially smooth the sources, to compensate for residual misalignment of the brain responses across subjects. Problem: This is only possible after applying an absolute value; smoothing signed values does not make sense, as the positive and negative signals on the two sides of a sulcus would cancel out. [TODO]
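The sign-cancellation problem described in step 5 can be demonstrated with a minimal sketch: a toy adjacency-list mesh and simple iterative neighbor averaging (this is an illustration, not Brainstorm's actual smoothing algorithm):

```python
import numpy as np

def smooth_on_mesh(values, neighbors, n_iter=3):
    """Iteratively average each vertex with its mesh neighbors.

    values: [n_vertices x n_times]; neighbors: adjacency list of the mesh.
    """
    smoothed = values.copy()
    for _ in range(n_iter):
        out = smoothed.copy()
        for v, nbrs in enumerate(neighbors):
            out[v] = (smoothed[v] + smoothed[nbrs].sum(axis=0)) / (1 + len(nbrs))
        smoothed = out
    return smoothed

# Toy 4-vertex chain mesh and signed source values: alternating signs,
# as on the two walls of a sulcus with constrained orientations.
neighbors = [[1], [0, 2], [1, 3], [2]]
signed = np.array([[1.0], [-1.0], [1.0], [-1.0]])

print(smooth_on_mesh(np.abs(signed), neighbors))  # magnitudes survive (stays 1.0)
print(smooth_on_mesh(signed, neighbors))          # signed values shrink toward zero
```

Smoothing the absolute values leaves the response magnitude intact, while smoothing the signed values averages opposite polarities together and destroys the signal, which is why the absolute value has to be applied first.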

Group statistic

Two group analysis scenarios are possible:

Paired tests

Regions of interest (scouts)

Advanced

Workflow: Current problems [TODO]

The following inconsistencies are still present in the documentation. We are actively working on these issues and will update this tutorial as soon as we find solutions.



Tutorials/Workflows (last edited 2016-02-02 00:34:15 by FrancoisTadel)