Tutorial 27: Workflows
[TUTORIAL UNDER DEVELOPMENT: NOT READY FOR PUBLIC USE]
Authors: Francois Tadel, Elizabeth Bock, Dimitrios Pantazis, Richard Leahy, Sylvain Baillet
This page provides some general recommendations for your event-related analysis. It is not directly related with the auditory dataset, but provides guidelines you should consider for any MEG/EEG experiment.
We do not provide standard analysis pipelines for resting-state or steady-state recordings yet, but we will add a few examples soon in the section Other analysis scenarios of the tutorials page.
Contents
What is your question?
The most appropriate analysis pipeline for your data depends on the question you are trying to answer. Before defining the main steps of your analysis, you should be able to state clearly the question you want to answer with your recordings.
What dimension?
- MEG/EEG recordings
- Cortical sources
- Individual anatomy or template
- Constrained (one value per vertex) or unconstrained (three values per grid point)
- Full cortex or regions of interest
- Frequency or time-frequency maps
What kind of experiment?
Within subject: Contrast two experimental conditions across trials, for one single subject.
- Files A: Single trials for condition A.
- Files B: Single trials for condition B.
Between subjects: Contrast two experimental conditions across multiple subjects.
- Files A: Subject-level averages for condition A (all the subjects).
- Files B: Subject-level averages for condition B (all the subjects).
Between groups: Contrast two groups of subjects for one given experimental condition.
- Files A: Subject-level averages for group #1.
- Files B: Subject-level averages for group #2.
What level of precision?
- Difference of averages
- Statistically significant differences between conditions or groups
What statistical test?
A = B
Tests the null hypothesis H0:(A=B) against the alternative hypothesis H1:(A≠B)
Significance level obtained with two-sided tests.
Correct effect size: We identify correctly where and when the conditions are different.
- Ambiguous sign: We cannot say which condition is stronger.
|A - B| = 0
Tests the null hypothesis H0:(|A-B|=0) against the alternative hypothesis H1:(|A-B|>0)
Significance level obtained with one-sided tests (upper tail).
Correct effect size: We identify correctly where and when the conditions are different.
- No sign: We cannot say which condition is stronger.
|A| = |B|
Tests the null hypothesis H0:(|A|=|B|) against the alternative hypothesis H1:(|A|≠|B|)
Significance level obtained with two-sided tests.
- Incorrect effect size: Does not correctly detect effects when A and B have opposite signs.
Correct sign: We can identify correctly which condition has a stronger response.
- |x| represents the modulus of the values:
- Absolute value for scalar values (recordings, constrained sources, time-frequency maps)
- Norm of the three orientations for unconstrained sources.
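The practical consequences of these contrasts can be illustrated with a minimal Python sketch (illustrative only, not Brainstorm code) on simulated scalar data where A and B have the same magnitude but opposite signs. A parametric t-test is used here only to keep the example short; the workflows below recommend non-parametric tests whenever rectified values are tested.

{{{
# Toy illustration: the choice of contrast determines what you can detect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
A = rng.normal(loc=+2.0, scale=1.0, size=50)   # condition A
B = rng.normal(loc=-2.0, scale=1.0, size=50)   # condition B: same magnitude, opposite sign

# H0: A = B (two-sided, signed values): the difference is detected,
# but the sign of the effect remains ambiguous in source space.
t_ab, p_ab = stats.ttest_ind(A, B)

# H0: |A| = |B| (two-sided, rectified values): misses effects of opposite signs,
# but a significant result indicates which condition has the stronger response.
t_abs, p_abs = stats.ttest_ind(np.abs(A), np.abs(B))

print(f"A=B:     p={p_ab:.3g}")    # very small: difference detected
print(f"|A|=|B|: p={p_abs:.3g}")   # large: opposite-sign effect is missed
# The |A-B|=0 contrast is addressed later with one-sided non-parametric tests
# or a chi-square combination of subject-level t-statistics.
}}}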
Common pre-processing pipeline
Most event-related studies can start with the pipeline we've introduced in these tutorials.
- Import the anatomy of the subject (or use a template for all the subjects).
- Access the recordings:
- Link the continuous recordings to the Brainstorm database.
- Prepare the channel file: co-register sensors and MRI, edit type and name of channels.
- Edit the event markers: fix the delays of the triggers, mark additional events.
- Pre-process the signals:
- Evaluate the quality of the recordings with a power spectral density plot (PSD).
- Apply frequency filters (low-pass, high-pass, notch).
- Identify bad channels and bad segments.
- Correct for artifacts with SSP or ICA.
- Import the recordings in the database: epochs around some markers of interest.
EEG recordings
Average
- Average the epochs across sessions and subjects: OK.
- Electrodes are in the same standard positions for all the subjects (e.g. 10-20).
- Never use an absolute value for averaging or contrasting sensor-level data.
- Group averages: Use the same number of trials for all the subjects.
Statistics: Within subject
A = B
Parametric or non-parametric t-test, independent, two-tailed, FDR-corrected.
- Use as many trials as possible for A and B: No need to have an equal number of trials.
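As an illustration of this recipe, here is a minimal Python sketch (hypothetical trial arrays, not Brainstorm code) of an independent t-test computed at every channel/time sample, followed by a Benjamini-Hochberg FDR correction.

{{{
# Independent two-sample t-test per channel/time sample, then FDR correction.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
trials_A = rng.normal(size=(80, 32, 200))   # (n_trials, n_channels, n_times)
trials_B = rng.normal(size=(60, 32, 200))   # unequal trial counts are fine

t, p = stats.ttest_ind(trials_A, trials_B, axis=0)   # one test per channel/time

def fdr_bh(pvals, q=0.05):
    """Benjamini-Hochberg procedure: boolean mask of significant tests."""
    p_flat = pvals.ravel()
    order = np.argsort(p_flat)
    m = p_flat.size
    thresh = q * np.arange(1, m + 1) / m
    below = p_flat[order] <= thresh
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    mask = np.zeros(m, dtype=bool)
    mask[order[:k]] = True
    return mask.reshape(pvals.shape)

significant = fdr_bh(p, q=0.05)
print(significant.sum(), "significant channel/time samples")
}}}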
Statistics: Between subjects
A = B
First-level statistic: Average
- For each subject, compute the sensor average for conditions A and B.
- Use the same number of trials for all the averages.
Second-level statistic: t-test
Parametric or non-parametric t-test, paired, two-tailed, FDR-corrected.
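A minimal Python sketch of this two-level approach (hypothetical subject-average arrays, not Brainstorm code): one sensor-level average per subject and condition at the first level, then a paired t-test across subjects at the second level.

{{{
# Paired t-test across subjects on the subject-level averages.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_subjects, n_channels, n_times = 16, 32, 200
avg_A = rng.normal(size=(n_subjects, n_channels, n_times))   # subject averages, condition A
avg_B = rng.normal(size=(n_subjects, n_channels, n_times))   # subject averages, condition B

t, p = stats.ttest_rel(avg_A, avg_B, axis=0)   # paired across subjects
# Then FDR-correct p over all channel/time samples, e.g. with a Benjamini-Hochberg
# procedure as sketched in the previous section.
}}}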
Statistics: Between groups
A = B
First-level statistic: Average
- For each subject, compute the sensor average for conditions A and B.
- Use the same number of trials for all the averages.
Second-level statistic: t-test
Parametric or non-parametric t-test, independent, two-tailed, FDR-corrected.
MEG recordings
Average
- Average the epochs within each session: OK.
- Averaging across sessions: Not advised because the head of the subject may move between runs.
- Averaging across subjects: Strongly discouraged because head shapes vary while the sensor positions are fixed. One sensor does not correspond to the same brain region for different subjects.
- Tolerance for data exploration: Averaging across runs and subjects can be useful for identifying time points and sensors with interesting effects but should be avoided for formal analysis.
- Note for Elekta/MaxFilter users: You can align all sessions to a reference session; this allows direct channel comparisons within subject. Not recommended across subjects.
- Never use an absolute value for averaging or contrasting sensor-level data.
- Group averages: Use the same number of trials for all the sessions.
Statistics: Within subject
A = B
Parametric or non-parametric t-test, independent, two-tailed, FDR-corrected.
- Use as many trials as possible for A and B: No need to have an equal number of trials.
Statistics: Between subjects
- Not recommended with MEG recordings: do your analysis in source space.
Statistics: Between groups
- Not recommended with MEG recordings: do your analysis in source space.
Constrained cortical sources
Average: Within subject
Sensor average: Compute one sensor-level average per acquisition session and condition.
Use the same number of trials for all the averages.
Sources: Estimate sources for each average (constrained, no normalization).
Source average: Average the source-level session averages to get one subject average.
Low-pass filter your evoked responses (optional).
Normalize the subject min-norm averages: Z-score vs. baseline (no absolute value).
Justification: The amplitude range of current densities may vary between subjects because of anatomical or experimental differences. This normalization helps bring the different subjects to the same range of values.
Do not rectify the cortical maps, but display them as absolute values if needed.
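As an illustration of the normalization step, here is a minimal Python sketch (hypothetical source array with a reduced number of sources, not Brainstorm code) of a signed Z-score with respect to the pre-stimulus baseline.

{{{
# Z-score each source time series with respect to its pre-stimulus baseline (no absolute value).
import numpy as np

rng = np.random.default_rng(3)
n_sources, n_times = 2000, 600
times = np.linspace(-0.2, 0.4, n_times)                # seconds
source_map = rng.normal(size=(n_sources, n_times))     # current density map (subject average)

baseline = source_map[:, times < 0]                    # pre-stimulus samples
mu = baseline.mean(axis=1, keepdims=True)
sigma = baseline.std(axis=1, ddof=1, keepdims=True)
zscore_map = (source_map - mu) / sigma                 # signed Z-score map
}}}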
Average: Between subjects
Subject averages: Compute within-subject averages for all the subjects, as described above.
Rectify the cortical maps (apply an absolute value).
Justification: Cortical maps have ambiguous signs across subjects: reconstructed sources depend heavily on the orientation of the true cortical sources. Because the folding patterns of individual cortical anatomies vary considerably, cortical maps have subject-specific amplitude and sign ambiguities. This is true even if a standard anatomy is used for reconstruction.
Project the individual source maps on a template (only when using the individual brains).
For more details, see tutorial: Group analysis: Subject coregistration.
Smooth spatially the sources.
Justification: The effects observed with constrained cortical maps may be artificially very focal, not overlapping very well between subjects. Smoothing the cortical maps may help the activated regions overlap between subjects.
Group average: Compute grand averages of all the subjects.
Average: Between groups
- Same as average between subjects.
Difference of averages: Between subjects
Subject averages: Compute within-subject averages for conditions A and B, as described above.
Subject difference: Compute the difference between conditions for each subject (A-B).
Rectify the difference of source maps (apply an absolute value).
Project the individual difference on a template (only when using the individual brains).
Smooth spatially the sources.
Group average: Compute grand averages of all the subjects: average_subjects(|Ai-Bi|).
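A minimal Python sketch of steps 2, 3 and 6 (hypothetical subject-level source arrays, already normalized, projected and smoothed; not Brainstorm code): the difference is computed within subject, rectified, then averaged across subjects.

{{{
# Rectified-difference grand average: average_subjects(|Ai - Bi|).
import numpy as np

rng = np.random.default_rng(4)
subj_A = rng.normal(size=(12, 2000, 300))   # (n_subjects, n_sources, n_times), condition A
subj_B = rng.normal(size=(12, 2000, 300))   # same subjects, condition B

diff = subj_A - subj_B                      # signed difference, computed within subject
grand_avg = np.abs(diff).mean(axis=0)       # rectify, then average across subjects
}}}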
Difference of averages: Between groups
Subject averages: Compute within-subject averages for conditions A and B, as described above.
Grand averages: Compute the group-level averages for groups #1 and #2, as described in "Average: Between subjects".
Difference: Compute the difference between group-level averages: avg(|A1|)-avg(|A2|).
Limitations: Because we rectify the source maps before computing the difference, we lose the ability to detect differences between values of equal magnitude but opposite signs, and we cannot keep the sign because we average across subjects. Therefore, many effects are not detected correctly.
Statistics: Within subject
Sources: Compute source maps for each trial (constrained, no normalization)
Statistics: Compare all the trials of condition A vs all the trials of condition B.
Use as many trials as possible for A and B: No need to have an equal number of trials.
A = B
Parametric or non-parametric t-test, independent, two-tailed, FDR-corrected.
- Indicates when and where there is a significant effect (but not in which direction).
|A| = |B|
Non-parametric tests only, independent, test absolute values, two-tailed, FDR-corrected.
- Indicates which condition corresponds to a stronger brain response (for a known effect).
Statistics: Between subjects
|A - B| = 0 : Parametric
Sources: Compute source maps for each trial (constrained, no normalization)
First-level statistic: Compute a t-statistic for the source maps of all the trials A vs B.
Process2 "Test > Compute t-statistic": no absolute values, independant, equal variance.
- Use as many trials as possible for A and B: No need to have an equal number of trials.
With a high number of trials (n>30), t-values follow approximately a N(0,1) distribution.
Low-pass filter your evoked responses (optional).
Rectify the individual t-statistic (we're giving up the sign across subjects).
Project the individual t-statistic on a template (only when using the individual brains).
Smooth spatially the t-statistic maps.
Second-level statistic: Compute a one-sampled chi-square test based on the t-statistics.
Process1: "Test > Parametric test against zero": One-sampled Chi-square test
This tests for |A-B|=0 using a Chi-square test: X = sum(|ti|^2) ~ Chi2(Nsubj). A numerical sketch of this two-level procedure is given at the end of this section.
- Indicates when and where there is a significant effect (but not in which direction).
|A - B| = 0 : Non-parametric
Rectified differences: Proceed as described in Difference of averages: Between subjects, but stop before the computation of the grand averages (#6) and compute a test instead.
You obtain one |Ai-Bi| value for each subject, test these values against zero.
Non-parametric one-sample test, one-tailed, FDR-corrected.
- Indicates when and where there is a significant effect (but not in which direction).
|A| = |B|
Rectified subject averages: Proceed as described in Average: Between subjects, but stop before the grand average (#5). You obtain two averages per subject (Ai and Bi).
Non-parametric two-sample test, paired, test absolute values, two-tailed, FDR-corrected.
- Indicates which condition corresponds to a stronger brain response (for a known effect).
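Here is the numerical sketch announced above for the parametric |A - B| = 0 procedure (hypothetical trial matrices, reduced number of sources, projection and smoothing omitted; not Brainstorm code): a per-subject independent t-statistic at the first level, combined across subjects as a chi-square statistic at the second level.

{{{
# Two-level parametric test for |A - B| = 0 on constrained source maps.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n_subjects, n_sources = 14, 2000
trials_A = [rng.normal(size=(rng.integers(60, 100), n_sources)) for _ in range(n_subjects)]
trials_B = [rng.normal(size=(rng.integers(60, 100), n_sources)) for _ in range(n_subjects)]

# First level: one t-value per source and subject; approximately N(0,1) under H0
# when the number of trials is large.
t_subj = np.stack([stats.ttest_ind(a, b, axis=0).statistic
                   for a, b in zip(trials_A, trials_B)])      # (n_subjects, n_sources)

# Second level: combine the rectified t-values across subjects.
chi2_stat = np.sum(t_subj ** 2, axis=0)                        # X = sum(ti^2), per source
p = stats.chi2.sf(chi2_stat, df=n_subjects)                    # upper tail of Chi2(Nsubj)
}}}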
Statistics: Between groups
|A| = |B|
Subject averages: Compute within-subject averages for A and B, as described above.
You obtain two averages per subject (Ai and Bi).
Non-parametric two-sample test, independent, test absolute values, two-tailed, FDR-corrected.
- Indicates which condition corresponds to a stronger brain response (for a known effect).
Design considerations
- Use within-subject designs whenever possible (i.e. collect two conditions A and B for each subject), then contrast data within subject before comparing data between subjects.
- Such designs are not only statistically optimal, but also ameliorate the between-subject sign ambiguities as contrasts can be constructed within each subject.
Unconstrained cortical sources
Three values for each grid point, corresponding to the three dipole orientations (X,Y,Z).
We want only one statistic and one p-value per grid point in output.
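The "flattening" operation used below simply replaces the three orientations at each grid point with their norm, giving one non-negative value per grid point. A minimal Python sketch (hypothetical array shape, not Brainstorm code):

{{{
# Flatten an unconstrained source map: norm of the three orientations at each grid point.
import numpy as np

rng = np.random.default_rng(6)
n_points, n_times = 2000, 300
unconstrained = rng.normal(size=(n_points, 3, n_times))   # (grid point, orientation X/Y/Z, time)

flattened = np.linalg.norm(unconstrained, axis=1)         # (n_points, n_times), all values >= 0
}}}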
Average: Within subject [???]
Sensor average: Compute one sensor-level average per acquisition session and condition.
Use the same number of trials for all the averages.
Sources: Estimate sources for each average (unconstrained, no normalization).
Source average: Average the source-level session averages to get one subject average.
Low-pass filter your evoked responses (optional).
Normalize the subject min-norm averages: Z-score vs. baseline (no absolute value).
[TODO] HOW TO NORMALIZE UNCONSTRAINED MAPS WRT BASELINE?
Average: Between subjects [???]
Subject averages: Compute within-subject averages for all the subjects, as described above.
Flatten the cortical map: compute the norm of the three orientations at each grid point.
Project the individual source maps on a template (only when using the individual brains).
Group average: Compute grand averages of all the subjects.
Average: Between groups
- Same as average between subjects.
Difference of averages: Between subjects [???]
Subject averages: Compute within-subject averages for conditions A and B, as described above.
Subject difference: Compute the difference between conditions for each subject (A-B).
Flatten the cortical map: compute the norm of the three orientations at each grid point.
Project the individual difference on a template.
Group average: Compute grand averages of all the subjects: average_subjects(|Ai-Bi|).
Difference of averages: Between groups [???]
Subject averages: Compute within-subject averages for conditions A and B, as described above.
Grand averages: Compute the group-level averages for groups #1 and #2, as described in "Average: Between subjects".
Difference: Compute the difference between group-level averages: avg(|A1|)-avg(|A2|).
Limitations: Because we rectify the source maps before computing the difference, we lose the ability to detect differences between values of equal magnitude but opposite signs, and we cannot keep the sign because we average across subjects. Therefore, many effects are not detected correctly.
Statistics: Within subject [???]
Sources: Compute source maps for each trial (unconstrained, no normalization)
Statistics: Compare all the trials of condition A vs all the trials of condition B.
Use as many trials as possible for A and B: No need to have an equal number of trials.
|A| = |B|
Non-parametric tests only, independent, test norm, two-tailed, FDR-corrected.
- Indicates which condition corresponds to a stronger brain response (for a known effect).
Statistics: Between subjects [???]
|A - B| = 0 : Parametric
Sources: Compute source maps for each trial (unconstrained, no normalization)
First-level statistic: Compute a t-statistic for the source maps of all the trials A vs B.
Process2 "Test > Compute t-statistic": no absolute values, independant, equal variance.
- Use as many trials as possible for A and B: No need to have an equal number of trials.
With a high number of trials (n>30), t-values follow approximately a N(0,1) distribution.
Low-pass filter your evoked responses (optional).
Rectify the individual t-statistic (we're giving up the sign across subjects).
Project the individual t-statistic on a template (only when using the individual brains).
Smooth spatially the t-statistic maps.
Second-level statistic: Compute a one-sampled Chi-square test based on the t-statistics.
Process1: "Test > Parametric test against zero": One-sampled Chi-square test
This tests for |A-B|=0 using a Chi-square test: X = sum(|ti|^2) ~ Chi2(Nsubj)
- Indicates when and where there is a significant effect (but not in which direction).
|A - B| = 0 : Non-parametric
Rectified differences: Proceed as described in Difference of averages: Between subjects, but stop before the computation of the grand averages (#6) and compute a test instead.
You obtain one |Ai-Bi| value for each subject, test these values against zero.
Non-parametric one-sample test, one-tailed, FDR-corrected.
- Indicates when and where there is a significant effect (but not in which direction).
|A| = |B|
Subject averages: Compute within-subject averages for A and B, as described above.
You obtain two averages per subject (Ai and Bi).
Non-parametric two-sample test, paired, test absolute values, two-tailed, FDR-corrected.
- Indicates which condition corresponds to a stronger brain response (for a known effect).
Statistics: Between groups [???]
|A| = |B|
Subject averages: Compute within-subject averages for A and B, as described above.
You obtain two averages per subject (Ai and Bi).
Non-parametric two-sample test, independent, test absolute values, two-tailed, FDR-corrected.
- Indicates which condition corresponds to a stronger brain response (for a known effect).
Regions of interest (scouts)
- Even within-subject cortical maps have sign ambiguities. MEG has limited spatial resolution, and sources in opposing sulcal/gyral areas are reconstructed with inverted signs (constrained orientations only). Averaging activity in cortical regions of interest (scouts) would thus lead to signal cancellation. To avoid this, Brainstorm uses algorithms to manipulate the sign of individual sources before averaging within a cortical region. Unfortunately, this introduces an amplitude and sign ambiguity in the time course when summarizing scout activity.
As a result, perform any interesting within-subject average/contrast before computing an average scout time series.
Statistics: Within subject
- Average/contrast cortical maps before summarizing scout activity.
- Then proceed as described for constrained or unconstrained source maps.
Statistics: Between subjects
- Comparison of scout time series between subjects is tricky because there is no way to avoid sign ambiguity for different subjects, so there are no clear recommendations. Rectifying the scout time series before comparing them between subjects may or may not be appropriate, depending on the case. A good understanding of the data (multiple inspections across channels, sources and subjects) can offer hints as to whether rectifying the scout time series is a good idea. Using unconstrained cortical maps to create the scout time series can ameliorate these ambiguity concerns.
Time-frequency maps
Statistics: Within subject
- Test the non-normalized time-frequency maps for all the trials (no Z-score or ERS/ERD).
- The values tested are power or magnitudes, all positive, so (A=B) and (|A|=|B|) are equivalent.
|A| vs |B|:
- Null hypothesis H0: (|A|=|B|)
Non-parametric tests only, independent, two-tailed, FDR-corrected.
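A minimal Python sketch of such a non-parametric test (a two-tailed permutation test on a single frequency/time/sensor bin, hypothetical power values, not Brainstorm code); in practice the test is repeated for every bin and the p-values are FDR-corrected.

{{{
# Two-tailed permutation test on time-frequency power for one bin.
import numpy as np

rng = np.random.default_rng(7)
power_A = rng.gamma(shape=2.0, scale=1.2, size=80)   # power values, condition A trials
power_B = rng.gamma(shape=2.0, scale=1.0, size=60)   # power values, condition B trials

observed = power_A.mean() - power_B.mean()
pooled = np.concatenate([power_A, power_B])
n_perm = 5000
count = 0
for _ in range(n_perm):
    perm = rng.permutation(pooled)                    # random reassignment of trials
    diff = perm[:power_A.size].mean() - perm[power_A.size:].mean()
    if abs(diff) >= abs(observed):                    # two-tailed
        count += 1
p_value = (count + 1) / (n_perm + 1)
print(f"permutation p-value: {p_value:.4f}")
}}}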
Statistics: Between subjects
(|A|-|B|=0): Non-parametric tests, two-tailed, FDR-corrected.
Statistics: Between groups
[TODO]
Workflow: Current problems [TODO]
The following inconsistencies are still present in the documentation. We are actively working on these issues and will update this tutorial as soon as we find solutions.
- [Group analysis] Unconstrained sources: How to compute a Z-score?
- Zscore(A): Normalizes each orientation separately, which doesn't make much sense.
- Zscore(Norm(A)): Gets rid of the signs and rules out the option of a signed test H0:(Norm(A-B)=0)
See also the tutorial: Source estimation
- We need a way to normalize across the three orientations at the same time.
- [Group analysis] Constrained sources: How do we smooth?
- Group analysis benefits a lot from smoothing the source maps before computing statistics.
- However, this requires applying an absolute value first. How should we proceed?
- [Single subject] Unconstrained sources: How do we compare two conditions with multiple trials?
- Norm(A)-Norm(B): Does not detect the differences correctly.
- (A-B): We test each orientation individually, which doesn't make much sense.
- We would need a test for the three orientations at once.
- [Group analysis] Rectify source maps?
- Recommended in Dimitrios' guidelines, which is inconsistent with the rest of the page.