Tutorial 27: Workflows
[TUTORIAL UNDER DEVELOPMENT: NOT READY FOR PUBLIC USE]
Authors: Francois Tadel, Elizabeth Bock, Dimitrios Pantazis, Richard Leahy, Sylvain Baillet
This page provides some general recommendations for your event-related analysis. It is not directly related to the auditory dataset, but provides guidelines you should consider for any MEG/EEG experiment.
We do not provide standard analysis pipelines for resting-state or steady-state recordings yet, but we will soon add a few examples in the section Other analysis scenarios of the tutorials page.
What is your question?
The most appropriate analysis pipeline for your data depends on the question you are trying to answer. Before defining the main steps of your analysis, you should be able to state clearly the question you want to answer with your recordings.
What dimension?
- MEG/EEG recordings
- Cortical sources
- Individual anatomy or template
- Constrained (one value per vertex) or unconstrained (three values per grid point)
- Full cortex or regions of interest
- Frequency or time-frequency maps
What kind of experiment?
Single subject: Contrast two experimental conditions across trials, for one single subject.
- Files A: Single trials for condition A.
- Files B: Single trials for condition B.
Group analysis, within subject: Contrast two conditions A and B measured for each subject.
- Files A: Subject-level averages for condition A (all the subjects).
- Files B: Subject-level averages for condition B (all the subjects).
Group analysis, between subjects: Contrast two groups of subjects for one condition.
- Files A: Subject-level averages for group #1.
- Files B: Subject-level averages for group #2.
What level of precision?
- Difference of averages
- Statistically significant differences between conditions or groups
What statistical test?
A = B
Tests the null hypothesis H0:(A=B) against the alternative hypothesis H1:(A≠B)
Correct detection: Identify correctly where and when the conditions are different.
- Ambiguous sign: We cannot say which condition is stronger.
Power(A) = Power(B)
Tests the null hypothesis H0:(Power(A)=Power(B)) against the alternative hypothesis H1:(Power(A)≠Power(B))
- Incorrect detection: Not sensitive to the cases where A and B have opposite signs.
Meaningful sign: We can identify correctly which condition has a stronger response.
Power(x) = |x|², where |x| represents the modulus of the values:
- Absolute value for scalar values (recordings, constrained sources, time-frequency)
- Norm of the three orientations for unconstrained sources.
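The practical difference between the two null hypotheses can be sketched with synthetic data. The snippet below is purely illustrative (random numbers, not Brainstorm output): condition B is a sign-flipped copy of condition A, so a signed test H0:(A=B) detects the difference, while a test on power H0:(Power(A)=Power(B)) is blind to it.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic single-sample data: A and B have the same power but
# opposite signs (e.g. a dipole orientation flip between conditions).
A = rng.normal(loc=+2.0, scale=1.0, size=100)
B = rng.normal(loc=-2.0, scale=1.0, size=100)

# H0:(A=B), signed two-sample t-test: detects the difference,
# but the significant result alone does not say which condition
# has the stronger response.
t_signed, p_signed = stats.ttest_ind(A, B)

# H0:(Power(A)=Power(B)), test on |x|^2: the sign flip is invisible,
# so this difference typically goes undetected.
t_power, p_power = stats.ttest_ind(A**2, B**2)

print(p_signed, p_power)
```

The same trade-off applies to the rectified tests on |x|: they recover a meaningful sign (which condition is stronger) at the cost of missing opposite-sign effects.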
Common pre-processing pipeline
Most event-related studies can start with the pipeline we've introduced in these tutorials.
- Import the anatomy of the subject (or use a template for all the subjects).
- Access the recordings:
- Link the continuous recordings to the Brainstorm database.
- Prepare the channel file: co-register sensors and MRI, edit type and name of channels.
- Edit the event markers: fix the delays of the triggers, mark additional events.
- Pre-process the signals:
- Evaluate the quality of the recordings with a power spectral density plot (PSD).
- Apply frequency filters (low-pass, high-pass, notch).
- Identify bad channels and bad segments.
- Correct for artifacts with SSP or ICA.
- Import the recordings in the database: epochs around some markers of interest.
How many trials to include?
Single subject: Include all the good trials (unless you have a very low number of trials). See the averaging tutorial.
Group analysis: Use a similar number of trials for all the subjects (no need for them to be strictly equal), and reject the subjects with far fewer good trials.
EEG recordings
Average
- Average the epochs across acquisition runs: OK.
- Average the epochs across subjects: OK.
- Electrodes are in the same standard positions for all the subjects (e.g. 10-20).
- Never use an absolute value for averaging or contrasting sensor-level data.
Statistics: Single subject
A = B
Parametric or non-parametric t-test, independent, two-tailed, FDR-corrected.
Statistics: Group analysis, within subject
A = B
First-level statistic: Average
- For each subject, compute the sensor average for conditions A and B.
Second-level statistic: t-test
Parametric or non-parametric t-test, paired, two-tailed, FDR-corrected.
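The two-level scheme above (first level: subject averages; second level: paired t-test with FDR correction) can be sketched as follows. This is an illustrative re-implementation on random arrays, not Brainstorm's own code; the array shapes, the injected effect, and the Benjamini-Hochberg procedure are assumptions for the demo.

```python
import numpy as np
from scipy import stats

# Hypothetical subject-level sensor averages: Nsubj x Nsensors x Ntime.
n_subj, n_sens, n_time = 16, 32, 100
rng = np.random.default_rng(1)
avg_A = rng.normal(size=(n_subj, n_sens, n_time))
avg_B = rng.normal(size=(n_subj, n_sens, n_time))
avg_B[:, 10, 40:60] += 3.0          # inject an effect at one sensor

# Second-level statistic: paired t-test across subjects,
# one test per (sensor, time) sample, two-tailed.
t, p = stats.ttest_rel(avg_A, avg_B, axis=0)

# Benjamini-Hochberg FDR correction over all tests (q = 0.05):
# largest k with p(k) <= q*k/m, then reject all p <= p(k).
p_sorted = np.sort(p.ravel())
m = p_sorted.size
below = p_sorted <= 0.05 * np.arange(1, m + 1) / m
thresh = p_sorted[below].max() if below.any() else 0.0
significant = p <= thresh
```

The paired test is what makes this a within-subject design: each subject serves as their own control, which absorbs between-subject amplitude differences.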
Statistics: Group analysis, between subjects
A = B
First-level statistic: Average
- For each subject, compute the sensor average for the condition to test.
Second-level statistic: t-test
Parametric or non-parametric t-test, independent, two-tailed, FDR-corrected.
MEG recordings
Average
- Average the epochs within each acquisition run: OK.
- Average across runs: Not advised because the head of the subject may move between runs.
- Average across subjects: Strongly discouraged because the shapes of the heads vary while the sensors are fixed. One sensor does not correspond to the same brain region for different subjects.
- Tolerance for data exploration: Averaging across runs and subjects can be useful for identifying time points and sensors with interesting effects but should be avoided for formal analysis.
- Note for Elekta/MaxFilter users: You can align all acquisition runs to a reference run; this allows direct channel comparisons and averaging across runs. Not recommended across subjects.
- Never use an absolute value for averaging or contrasting sensor-level data.
Statistics: Single subject
A = B
Parametric or non-parametric t-test, independent, two-tailed, FDR-corrected.
Statistics: Group analysis
- Not recommended with MEG recordings: do your analysis in source space.
Constrained cortical sources
Average: Single subject
Sensor average: Compute one sensor-level average per acquisition run and per condition.
Sources: Estimate sources for each average (constrained, no normalization).
Source average: Average the source-level run averages to get one subject average.
Normalize the subject min-norm averages: Z-score wrt baseline (no absolute value).
Justification: The amplitude range of current densities may vary between subjects because of anatomical or experimental differences. This normalization helps bring the different subjects to the same range of values.
Low-pass filter your evoked responses (optional).
If you filter your data, do it after the noise normalization so the variance is not underestimated.
Do not rectify the cortical maps, but display them as absolute values if needed.
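The baseline z-score used in the normalization step can be written compactly: for each source signal, subtract the mean of its own baseline and divide by the baseline standard deviation, keeping the sign. A minimal sketch (illustrative function and array layout, not Brainstorm's API):

```python
import numpy as np

def baseline_zscore(sources, time, t_baseline=(-0.1, 0.0)):
    """Z-score each source signal with respect to its own baseline
    window, without taking an absolute value:
    z = (x - mean_baseline) / std_baseline."""
    mask = (time >= t_baseline[0]) & (time < t_baseline[1])
    mu = sources[:, mask].mean(axis=1, keepdims=True)
    sigma = sources[:, mask].std(axis=1, keepdims=True)
    return (sources - mu) / sigma

# Minimal check on random data: within the baseline window,
# the z-scored signals have zero mean and unit standard deviation.
time = np.linspace(-0.1, 0.4, 500)
x = np.random.default_rng(2).normal(5.0, 2.0, size=(10, 500))
z = baseline_zscore(x, time)
```

Because the sign is preserved, the normalized maps remain suitable for signed contrasts; rectification, if needed, happens later in the group pipeline.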
Average: Group analysis
Subject averages: Compute within-subject averages for all the subjects, as described above.
Rectify the cortical maps (apply an absolute value).
Justification: Cortical maps have ambiguous signs across subjects: reconstructed sources depend heavily on the orientation of the true cortical sources. Given that the folding patterns of individual cortical anatomies vary considerably, cortical maps have subject-specific amplitude and sign ambiguities. This is true even if a standard anatomy is used for reconstruction.
Project the individual source maps on a template (only when using the individual brains).
For more details, see tutorial: Group analysis: Subject coregistration.
Group average: Compute grand averages of all the subjects.
Smooth spatially the source maps (optional).
Difference of averages: Within subject
Sensor average: Compute one sensor-level average per acquisition run and condition.
Sources: Estimate sources for each average (constrained, no normalization).
Source average: Average the source-level run averages to get one subject average.
Subject difference: Compute the difference between conditions for each subject #i: (Ai-Bi)
Normalize the difference: Z-score wrt baseline (no absolute value): Z(Ai-Bi)
Low-pass filter the difference (optional)
Rectify the difference (apply an absolute value): |Z(Ai-Bi)|
Project the individual difference on a template (only when using the individual brains).
Group average: Compute grand averages of all the subjects: average_subjects(|Z(Ai-Bi)|).
Smooth spatially the source maps (optional).
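The steps above (subject difference, baseline z-score, rectification, grand average) can be sketched end to end. This runs on random arrays with illustrative shapes, so it only shows the order of operations, not real source maps:

```python
import numpy as np

rng = np.random.default_rng(3)
n_subj, n_src, n_time = 12, 2000, 300
time = np.linspace(-0.1, 0.5, n_time)
base = time < 0.0                        # baseline window

group = []
for i in range(n_subj):
    Ai = rng.normal(size=(n_src, n_time))  # subject-level source averages
    Bi = rng.normal(size=(n_src, n_time))  # (random placeholders here)
    d = Ai - Bi                            # subject difference (Ai - Bi)
    mu = d[:, base].mean(axis=1, keepdims=True)
    sd = d[:, base].std(axis=1, keepdims=True)
    z = (d - mu) / sd                      # Z-score wrt baseline, signed
    group.append(np.abs(z))                # rectify: |Z(Ai - Bi)|
grand = np.mean(group, axis=0)             # average_subjects(|Z(Ai-Bi)|)
```

Note that the rectification happens after the within-subject difference, so opposite-sign effects between A and B are still captured; only the across-subject sign ambiguity is discarded.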
Difference of averages: Between subjects
Subject averages: Compute within-subject averages for conditions A and B, as described above.
Grand averages: Compute the group-level averages for groups #1 and #2 as described in "Average: Group analysis"
Difference: Compute the difference between group-level averages: avg(|A1|)-avg(|A2|)
Limitations: Because we rectify the source maps before computing the difference, we lose the ability to detect the differences between equal values of opposite signs. And we cannot keep the sign because we are averaging across subjects. Therefore, many effects are not detected correctly.
Statistics: Single subject
Sources: Compute source maps for each trial (constrained, no normalization)
Statistics: Compare all the trials of condition A vs all the trials of condition B.
A = B
Parametric or non-parametric t-test, independent, two-tailed, FDR-corrected.
- Indicates when and where there is a significant effect (but not in which direction).
After identifying the significant effects, you may want to know which condition is stronger:
Compute and plot power maps at the time points of interest: average(Ai²) - average(Bi²)
Statistics: Group analysis, within subject
A = B : Parametric
Sources: Compute source maps for each trial (constrained, no normalization)
First-level statistic: Compute a t-statistic for the source maps of all the trials A vs B.
Process2 "Test > Compute t-statistic": no absolute values, independent, equal variance.
With a high number of trials (n>30), t-values follow approximately a N(0,1) distribution.
Low-pass filter your evoked responses (optional). [??? This should be done earlier in the pipeline, but when? The analysis may need to be split in two: ERP vs. frequency/resting-state.]
Rectify the individual t-statistic (we're giving up the sign across subjects).
Project the individual t-statistic on a template (only when using the individual brains).
Smooth spatially the t-statistic maps.
Second-level statistic: Compute a one-sampled chi-square test based on the t-statistics.
Process1: "Test > Parametric test against zero": One-sampled Chi-square test
This tests for |A-B|=0 using a Chi-square test: X = sum(|ti|^2) ~ Chi2(Nsubj)
- Indicates when and where there is a significant effect (but not in which direction).
After identifying the significant effects, you may want to know which condition is stronger:
Compute and plot power maps at the time points of interest: average(Ai²) - average(Bi²)
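The second-level chi-square step can be sketched numerically. Under H0 each per-subject t-statistic is approximately N(0,1) (first level computed on more than ~30 trials), so the sum of the squared t-values follows a chi-square distribution with Nsubj degrees of freedom. Random data stand in for real t-maps here:

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(4)
n_subj = 14
# Illustrative per-subject t-statistic maps (H0 true: t_i ~ N(0,1)),
# one row per subject, one column per source/time sample.
t_i = rng.normal(size=(n_subj, 2000))

X = np.sum(t_i**2, axis=0)     # X = sum(|ti|^2) ~ Chi2(Nsubj) under H0
p = chi2.sf(X, df=n_subj)      # upper-tail (one-sided) p-value per sample
```

Squaring the t-values is what makes the test sign-insensitive: it detects where an effect exists, but not its direction, which is why the power maps above are needed afterwards.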
A = B : Non-parametric
Rectified differences: Proceed as described in Difference of averages: Between subjects, but stop before the computation of the grand averages (#6) and compute a test instead.
You obtain one |Ai-Bi| value for each subject; test these values against zero.
Non-parametric one-sample test, one-tailed, FDR-corrected.
- Indicates when and where there is a significant effect (but not in which direction).
|A| = |B|
Rectified subject averages: Proceed as described in Average: Group analysis, but stop before the grand average (#5). You obtain two averages per subject (Ai and Bi).
Non-parametric two-sample test, paired, test absolute values, two-tailed, FDR-corrected.
- Indicates which condition corresponds to a stronger brain response (for a known effect).
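One common non-parametric choice for the paired two-sample test above is a permutation (sign-flip) test: swapping the A/B labels within a subject is equivalent to flipping the sign of that subject's paired difference. A minimal sketch on made-up per-subject rectified averages (the values and function are illustrative, not Brainstorm's implementation):

```python
import numpy as np

def paired_permutation_test(a, b, n_perm=5000, seed=0):
    """Two-sided paired permutation test: randomly flip the sign of
    each paired difference d_i = a_i - b_i, and compare the observed
    |mean(d)| with its permutation distribution."""
    rng = np.random.default_rng(seed)
    d = a - b
    t_obs = abs(d.mean())
    flips = rng.choice([-1.0, 1.0], size=(n_perm, d.size))
    t_null = np.abs((flips * d).mean(axis=1))
    # +1 correction so the p-value is never exactly zero
    return (1 + np.sum(t_null >= t_obs)) / (n_perm + 1)

# Hypothetical rectified subject averages |Ai| and |Bi| at one sample:
absA = np.array([2.1, 1.9, 2.4, 2.2, 2.0, 2.3, 1.8, 2.5])
absB = np.array([1.2, 1.0, 1.4, 1.1, 1.3, 0.9, 1.2, 1.0])
p = paired_permutation_test(absA, absB)
```

In practice this test is run independently at every source/time sample, followed by FDR correction across all the resulting p-values.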
Statistics: Group analysis, between subjects
|A| = |B|
Subject averages: Compute within-subject averages for A and B, as described above.
You obtain two averages per subject (Ai and Bi).
Non-parametric two-sample test, independent, test absolute values, two-tailed, FDR-corrected.
- Indicates which condition corresponds to a stronger brain response (for a known effect).
Design considerations
- Use within-subject designs whenever possible (i.e. collect two conditions A and B for each subject), then contrast data at the subject level before comparing data between subjects.
- Such designs are not only statistically optimal, but also ameliorate the between-subject sign ambiguities as contrasts can be constructed within each subject.
Unconstrained cortical sources
Three values for each grid point, corresponding to the three dipole orientations (X,Y,Z).
We want only one statistic and one p-value per grid point in output.
Average: Single subject [???]
Sensor average: Compute one sensor-level average per acquisition run and per condition.
Sources: Estimate sources for each average (unconstrained, no normalization).
Source average: Average the source-level run averages to get one subject average.
Low-pass filter your evoked responses (optional).
Normalize the subject min-norm averages: Z-score wrt baseline (no absolute value).
[???] How to normalize unconstrained maps with respect to the baseline?
Average: Group analysis [???]
Subject averages: Compute within-subject averages for all the subjects, as described above.
Flatten the cortical map: compute the norm of the three orientations at each grid point.
Project the individual source maps on a template (only when using the individual brains).
Group average: Compute grand averages of all the subjects.
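The flattening step above reduces the three orientations to one positive value per grid point. A numpy sketch; the array shape (grid points x orientations x time) is an assumption for illustration:

```python
import numpy as np

# Simulated unconstrained source map: 100 grid points, 3 orientations, 50 samples.
rng = np.random.default_rng(0)
sources = rng.standard_normal((100, 3, 50))

# Flatten: norm of the three orientations at each grid point and time sample.
# The result is strictly non-negative, so the sign information is gone.
flat = np.linalg.norm(sources, axis=1)   # shape (100, 50)
```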
Difference of averages: Within subject [???]
Subject averages: Compute within-subject averages for conditions A and B, as described above.
Subject difference: Compute the difference between conditions for each subject (A-B).
Flatten the cortical map: compute the norm of the three orientations at each grid point.
Project the individual difference on a template.
Group average: Compute grand averages of all the subjects: average_subjects(|Ai-Bi|).
Difference of averages: Between subjects [???]
Subject averages: Compute within-subject averages for conditions A and B, as described above.
Grand averages: Compute the group-level averages for groups #1 and #2 as described in "Average: Group analysis"
Difference: Compute the difference between group-level averages: avg(|A1|)-avg(|A2|)
Limitations: Because we rectify the source maps before computing the difference, we lose the ability to detect the differences between equal values of opposite signs. And we cannot keep the sign because we are averaging across subjects. Therefore, many effects are not detected correctly.
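A two-line numeric example shows why rectification hides opposite-sign effects:

```python
import numpy as np

# Two conditions with equal amplitude but opposite signs at the first source:
a = np.array([ 2.0, 1.0, -3.0])
b = np.array([-2.0, 1.0, -3.0])

# The true effect at the first source is large: A - B = [4, 0, 0].
true_diff = a - b

# After rectification the effect vanishes: |A| - |B| = [0, 0, 0] everywhere.
rectified_diff = np.abs(a) - np.abs(b)
```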
Statistics: Single subject [???]
Sources: Compute source maps for each trial (unconstrained, no normalization)
Statistics: Compare all the trials of condition A vs all the trials of condition B.
|A| = |B|
Non-parametric tests only, independent, test norm, two-tailed, FDR-corrected.
- Indicates which condition corresponds to a stronger brain response (for a known effect).
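The test above can be sketched as a label-shuffling permutation test on the trial norms. This is a simplified illustration with assumed array shapes, not Brainstorm's exact implementation, and it omits the FDR correction:

```python
import numpy as np

def perm_test_norm(trials_a, trials_b, n_perm=1000, seed=0):
    """Two-sample permutation test on the norm of unconstrained sources.

    trials_*: arrays of shape (n_trials, n_sources, 3).
    Returns one p-value per source for H0: |A| = |B| (two-tailed),
    obtained by shuffling the condition labels across trials.
    """
    rng = np.random.default_rng(seed)
    norm_a = np.linalg.norm(trials_a, axis=2)    # (nA, n_sources)
    norm_b = np.linalg.norm(trials_b, axis=2)    # (nB, n_sources)
    observed = norm_a.mean(0) - norm_b.mean(0)
    pooled = np.concatenate([norm_a, norm_b])
    n_a = norm_a.shape[0]
    count = np.zeros(observed.shape)
    for _ in range(n_perm):
        perm = rng.permutation(pooled.shape[0])
        d = pooled[perm[:n_a]].mean(0) - pooled[perm[n_a:]].mean(0)
        count += np.abs(d) >= np.abs(observed)
    return (count + 1) / (n_perm + 1)
```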
Statistics: Group analysis, within subject [???]
|A - B| = 0 : Parametric
Sources: Compute source maps for each trial (unconstrained, no normalization)
First-level statistic: Compute a t-statistic for the source maps of all the trials A vs B.
Process2 "Test > Compute t-statistic": no absolute values, independent, equal variance.
With a high number of trials (n>30), t-values follow approximately a N(0,1) distribution.
Low-pass filter your evoked responses (optional).
Rectify the individual t-statistic (we're giving up the sign across subjects).
Project the individual t-statistic on a template (only when using the individual brains).
Smooth spatially the t-statistic maps.
Second-level statistic: Compute a one-sample Chi-square test based on the t-statistics.
Process1: "Test > Parametric test against zero": One-sample Chi-square test
This tests for |A-B|=0 using a Chi-square test: X = sum(|ti|^2) ~ Chi2(Nsubj)
- Indicates when and where there is a significant effect (but not in which direction).
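The second-level statistic above can be computed with scipy at a single source/time point; the per-subject t-values below are made-up example numbers:

```python
import numpy as np
from scipy.stats import chi2

# One rectified first-level t-value per subject (example data, assumed values).
# Under H0 each t is approximately N(0,1) for large trial counts, so the sum
# of squares follows a Chi-square distribution with Nsubj degrees of freedom.
t_subjects = np.array([1.8, -2.5, 2.1, 0.4, -1.9, 2.7])
n_subj = t_subjects.size

X = np.sum(np.abs(t_subjects) ** 2)   # X = sum(|ti|^2) ~ Chi2(Nsubj) under H0
p_value = chi2.sf(X, df=n_subj)       # upper-tail p-value
```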
|A - B| = 0 : Non-parametric
Rectified differences: Proceed as described in Difference of averages: Between subjects, but stop before the computation of the grand averages (#6) and compute a test instead.
You obtain one |Ai-Bi| value for each subject, test these values against zero.
Non-parametric one-sample test, one-tailed, FDR-corrected.
- Indicates when and where there is a significant effect (but not in which direction).
|A| = |B|
Subject averages: Compute within-subject averages for A and B, as described above.
You obtain two averages per subject (Ai and Bi).
Non-parametric two-sample test, paired, test absolute values, two-tailed, FDR-corrected.
- Indicates which condition corresponds to a stronger brain response (for a known effect).
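The paired non-parametric test on rectified subject averages can be sketched as a sign-flip permutation of the within-subject differences |Ai|-|Bi|. A simplified illustration, without the FDR correction:

```python
import numpy as np

def paired_perm_test(abs_a, abs_b, n_perm=2000, seed=0):
    """Paired permutation (sign-flip) test on rectified subject averages.

    abs_a, abs_b: arrays of shape (n_subjects,), one |Ai| and one |Bi| value
    per subject at a given source/time point. Tests H0: |A| = |B| (two-tailed)
    by randomly swapping the condition labels within each subject.
    """
    rng = np.random.default_rng(seed)
    d = abs_a - abs_b
    observed = d.mean()
    count = 0
    for _ in range(n_perm):
        signs = rng.choice([-1.0, 1.0], size=d.size)  # within-subject swap
        if abs((signs * d).mean()) >= abs(observed):
            count += 1
    return (count + 1) / (n_perm + 1)
```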
Statistics: Group analysis, between subjects [???]
|A| = |B|
Subject averages: Compute within-subject averages for A and B, as described above.
You obtain two averages per subject (Ai and Bi).
Non-parametric two-sample test, independent, test absolute values, two-tailed, FDR-corrected.
- Indicates which condition corresponds to a stronger brain response (for a known effect).
Regions of interest (scouts) [???]
Statistics: Single subject
- Even within-subject cortical maps have sign ambiguities. MEG has limited spatial resolution, and sources in opposing sulcal/gyral areas are reconstructed with inverted signs (constrained orientations only). Averaging activity in cortical regions of interest (scouts) would thus lead to signal cancellation. To avoid this, Brainstorm uses algorithms to manipulate the sign of individual sources before averaging within a cortical region. Unfortunately, this introduces an amplitude and sign ambiguity in the time course when summarizing scout activity.
As a result, perform any within-subject average or contrast of interest before computing an average scout time series.
- Then treat the resulting scout time series as constrained or unconstrained source maps.
Statistics: Group analysis, within subject
- Comparing scout time series between subjects is tricky because there is no way to avoid sign ambiguity across subjects, so there are no clear recommendations. Rectifying the scout time series before comparing them between subjects may or may not be a good idea, depending on the case. A good understanding of the data (repeated inspections across channels, sources and subjects) can offer hints as to whether rectifying is appropriate. Using unconstrained cortical maps to create the scout time series can ameliorate these ambiguity concerns.
Time-frequency maps
Average: Single subject
Time-frequency maps: Compute time-frequency maps for each trial.
- Apply the default measure: magnitude for Hilbert transform, power for Morlet wavelets.
- Do not normalize the source maps: no Z-score or ERS/ERD.
- The values are all strictly positive, there is no sign ambiguity as for recordings or sources.
Average all the time-frequency maps together, for each condition separately.
- If you are averaging time-frequency maps computed on sensor-level data, the same limitations apply as for averaging sensor level data (see sections about MEG and EEG recordings above).
Average: Group analysis [???]
Subject averages: Compute within-subject averages for all the subjects, as described above.
Normalize: [???] Zscore, ERD/ERS, or FieldTrip?
Justification: The amplitude range of current densities may vary between subjects because of anatomical or experimental differences. This normalization helps bring the different subjects to the same range of values.
Group average: Compute grand averages of all the subjects.
Difference of averages
Group average: Compute the averages for conditions A and B as in Average: Group analysis.
Difference: Compute the difference between group-level averages: avg(A)-avg(B).
Statistics: Single subject [???]
Time-frequency maps: Compute time-frequency maps for each trial.
- Apply the default measure: magnitude for Hilbert transform, power for Morlet wavelets.
- Do not normalize the source maps: no Z-score or ERS/ERD.
- The values are all strictly positive, there is no sign ambiguity as for recordings or sources.
Statistics: Compare all the trials of condition A vs all the trials of condition B.
A = B [???]
Parametric or non-parametric t-test, independent, two-tailed, FDR-corrected. [???]
- Indicates both where there is a significant effect and what is its direction (no sign ambiguity).
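Because time-frequency power values are strictly positive and carry no sign ambiguity, a plain independent t-test applies directly. A sketch with simulated trial power values at one sensor/frequency/time bin (the gamma-distributed data is an assumption for illustration):

```python
import numpy as np
from scipy.stats import ttest_ind

# Simulated single-trial power values for one time-frequency bin.
rng = np.random.default_rng(0)
power_a = rng.gamma(shape=2.0, scale=1.5, size=40)  # 40 trials, condition A
power_b = rng.gamma(shape=2.0, scale=1.0, size=40)  # 40 trials, condition B

# Two-tailed independent t-test; the sign of t_stat tells which condition
# has more power, since the values themselves are unambiguous in sign.
t_stat, p_value = ttest_ind(power_a, power_b)
```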
Statistics: Group analysis, within subject [???]
A = B [???]
Subject averages: Compute within-subject averages for all subjects, as described above.
Parametric or non-parametric t-test, independent, two-tailed, FDR-corrected. [???]
- Indicates both where there is a significant effect and what is its direction (no sign ambiguity).
Workflow: Current problems [TODO]
The following inconsistencies are still present in the documentation. We are actively working on these issues and will update this tutorial as soon as we find solutions.
- [Group analysis] Unconstrained sources: How to normalize wrt baseline with a Z-score?
- Zscore(A): Normalizes each orientation separately, we cannot take the norm of it after.
- Zscore(|A|): Gets rid of the signs, forbids the option of a signed test H0:(Norm(A-B)=0)
See also the tutorial: Source estimation
- We need a way to normalize across the three orientations at the same time.
- [Single subject] Unconstrained sources: How do we compare two conditions with multiple trials?
- |A|-|B|: Cannot detect correctly the difference.
- |A-B|: Cannot be computed because the trials are not paired.
- We need a test for the three orientations at the same time.
- [Group analysis] Unconstrained sources: Can we use parametric tests?
- Time-frequency maps:
- Can we use parametric tests for (A-B=0) ? Does (A-B) ~ normal distribution?
- Do we need to normalize the time-frequency maps when testing across subjects?
- If yes, how to normalize the time-frequency maps? (Z-score, ERS/ERD, divide by std)