Tutorial 27: Workflows
Authors: Francois Tadel, Elizabeth Bock, Dimitrios Pantazis, John Mosher, Richard Leahy, Sylvain Baillet
This page provides some general recommendations for your event-related analysis. It is not directly related to the auditory dataset, but provides guidelines you should consider for any MEG/EEG experiment.
We do not provide standard analysis pipelines for resting-state or steady-state recordings yet, but we will soon add a few examples in the section Other analysis scenarios of the tutorials page.
What is your question?
The most appropriate analysis pipeline for your data depends on the question you are trying to answer. Before defining the main steps of your analysis, you should be able to state clearly the question you want your recordings to answer.
What dimension?
- MEG/EEG recordings
- Cortical sources
- Individual anatomy or template
- Constrained (one value per vertex) or unconstrained (three values per grid point)
- Full cortex or regions of interest
- Frequency or time-frequency maps
What kind of experiment?
Single subject: Contrast two experimental conditions across trials, for a single subject.
- Files A: Single trials for condition A.
- Files B: Single trials for condition B.
Group analysis, within subject: Contrast two conditions A and B measured for each subject.
- Files A: Subject-level averages for condition A (all the subjects).
- Files B: Subject-level averages for condition B (all the subjects).
Group analysis, between subjects: Contrast two groups of subjects for one condition.
- Files A: Subject-level averages for group #1 (G1).
- Files B: Subject-level averages for group #2 (G2).
What level of precision?
- Difference of averages
- Statistically significant differences between conditions or groups
What statistical test?
A = B
Tests the null hypothesis H0:(A=B) against the alternative hypothesis H1:(A≠B)
Correct detection: Identify correctly where and when the conditions are different.
- Ambiguous sign: We cannot say which condition is stronger (a numeric illustration of both tests follows this list).
Power(A) = Power(B)
Tests the null hypothesis H0:(Power(A)=Power(B)) against the alternative hypothesis H1:(Power(A)≠Power(B))
- Incorrect detection: Not sensitive to the cases where A and B have opposite signs.
Meaningful sign: We can identify correctly which condition has a stronger response.
Power(x) = |x|^2, where |x| represents the modulus of the values:
- Absolute value for scalar values (recordings, constrained sources, time-frequency)
- Norm of the three orientations for unconstrained sources.
Multiple comparisons: FDR is a good choice for correcting p-values for multiple comparisons.
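To make this distinction concrete, here is a minimal numeric sketch (plain NumPy/SciPy, not a Brainstorm process; all values are invented): the test of H0:(A=B) detects a difference that the power test misses when the signs are opposite, while the power test indicates which condition is stronger when the signs agree.

```python
# Toy contrast of the two null hypotheses above; illustrative values only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 200  # trials per condition (arbitrary)

# Case 1: opposite signs, equal power.
A = rng.normal(+1.0, 1.0, n)
B = rng.normal(-1.0, 1.0, n)
print(stats.ttest_ind(A, B).pvalue)                   # H0:(A=B) rejected: difference detected
print(stats.ttest_ind(np.abs(A), np.abs(B)).pvalue)   # H0:(|A|=|B|) not rejected: blind to sign flips

# Case 2: same sign, different amplitudes.
A = rng.normal(2.0, 1.0, n)
B = rng.normal(1.0, 1.0, n)
res = stats.ttest_ind(np.abs(A), np.abs(B))
print(res.pvalue, np.sign(np.abs(A).mean() - np.abs(B).mean()))  # significant, and A is stronger
```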
Design considerations
- Use within-subject designs whenever possible (i.e. collect two conditions A and B for each subject), then contrast data at the subject level before comparing data between subjects.
- Such designs are not only statistically optimal, but also ameliorate the between-subject sign ambiguities as contrasts can be constructed within each subject.
Common pre-processing pipeline
Most event-related studies can start with the pipeline we've introduced in these tutorials.
- Import the anatomy of the subject (or use a template for all the subjects).
- Access the recordings:
- Link the continuous recordings to the Brainstorm database.
- Prepare the channel file: co-register sensors and MRI, edit type and name of channels.
- Edit the event markers: fix the delays of the triggers, mark additional events.
- Pre-process the signals:
- Evaluate the quality of the recordings with a power spectral density plot (PSD).
- Apply frequency filters (low-pass, high-pass, notch).
- Identify bad channels and bad segments.
- Correct for artifacts with SSP or ICA.
- Import the recordings into the database: epochs around the markers of interest (a scripted sketch of the whole pipeline follows this list).
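Brainstorm runs these steps through its interface and pipeline editor. For readers who prefer scripting, here is a rough equivalent sketched with MNE-Python; the file name, bad-channel name, event IDs and filter settings are placeholders, not values from this tutorial.

```python
# Rough MNE-Python equivalent of the pre-processing pipeline above.
# All file names, event IDs and settings below are placeholders.
import mne

raw = mne.io.read_raw_fif("subject01_run01_raw.fif", preload=True)  # link the recordings
raw.compute_psd().plot()                         # evaluate quality with a PSD plot
raw.filter(l_freq=0.3, h_freq=40.0)              # high-pass and low-pass filters
raw.notch_filter(freqs=60.0)                     # notch filter (power line)
raw.info["bads"] = ["MEG 2443"]                  # mark bad channels (example name)
projs, _ = mne.preprocessing.compute_proj_ecg(raw)  # SSP projectors for cardiac artifacts
raw.add_proj(projs)
events = mne.find_events(raw)                    # event markers from the trigger channel
epochs = mne.Epochs(raw, events, event_id={"condA": 1, "condB": 2},
                    tmin=-0.2, tmax=0.5, baseline=(None, 0))  # import epochs
```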
How many trials to include?
Single subject: Include all the good trials (unless you have a very low number of trials). See the averaging tutorial.
Group analysis: Use a similar number of trials for all the subjects (they do not need to be strictly equal); reject subjects with many fewer good trials.
EEG recordings
Average
- Average the epochs across acquisition runs: OK.
- Average the epochs across subjects: OK.
- Electrodes are in the same standard positions for all the subjects (e.g. 10-20).
- Never use an absolute value for averaging or contrasting sensor-level data.
Statistics: Single subject
A = B: Parametric or non-parametric t-test, independent, two-tailed.
Statistics: Group analysis, within subject
A = B
First-level statistic: For each subject, sensor average for conditions A and B.
Second-level statistic: Parametric or non-parametric t-test, paired, two-tailed.
Statistics: Group analysis, between subjects
G1 = G2
First-level statistic: For each subject, sensor average for the conditions to test.
Second-level statistic: Parametric/non-parametric t-test, independent, two-tailed.
MEG recordings
Average
- Average the epochs within each acquisition run: OK.
- Average across runs: Not advised because the head of the subject may move between runs.
- Average across subjects: Strongly discouraged because head shapes vary while the sensors are fixed. One sensor does not correspond to the same brain region for different subjects.
- Tolerance for data exploration: Averaging across runs and subjects can be useful for identifying time points and sensors with interesting effects but should be avoided for formal analysis.
- Note for Elekta/MaxFilter users: You can align all acquisition runs to a reference run, which allows direct channel comparisons and averaging across runs. Not recommended across subjects.
- Never use an absolute value for averaging or contrasting sensor-level data.
Statistics: Single subject
A = B: Parametric or non-parametric t-test, independent, two-tailed.
Statistics: Group analysis
- Not recommended with MEG recordings: do your analysis in source space.
Constrained cortical sources
Average: Single subject
Sensor average: Compute one sensor-level average per acquisition run and per condition.
Sources: Estimate sources for each average (constrained, no normalization).
Source average: Average the source-level run averages to get one subject average. Compute a weighted average to balance for different numbers of trials across runs.
Low-pass filter your evoked responses (optional).
If you filter the average before normalizing with respect to the baseline, you will underestimate the baseline variance, and therefore overestimate the Z-scores computed in the next step, especially if the baseline is too short (typically less than 200 time points). The filter increases the autocorrelation of the time series, and therefore biases the estimation of the signal variance (Wikipedia).
You have to take into account the possible edge effects due to the filter. You can either extract a small time window or exclude the beginning of the baseline for the normalization.
Normalize the subject min-norm averages: Z-score with respect to the baseline (no absolute value). Justification: The amplitude range of current densities may vary between subjects because of anatomical or experimental differences; this normalization brings the different subjects to the same range of values (see the sketch after this list).
Do not rectify the cortical maps, but display them as absolute values if needed.
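Here is a minimal sketch of the two numerical steps above, weighted run averaging and baseline Z-scoring, assuming sources is an (n_vertices, n_times) array and times the matching time vector in seconds; the function names, array layout and baseline window are illustrative, not Brainstorm's API.

```python
import numpy as np

def weighted_run_average(run_means, n_trials):
    # run_means: list of (n_vertices, n_times) run-level source averages.
    # Weight each run by its trial count so every trial counts equally.
    return np.average(np.stack(run_means), axis=0, weights=np.asarray(n_trials, float))

def baseline_zscore(sources, times, bl_start=-0.2, bl_end=0.0):
    # Z-score each vertex with respect to its own baseline statistics.
    bl = (times >= bl_start) & (times < bl_end)
    mu = sources[:, bl].mean(axis=1, keepdims=True)
    sd = sources[:, bl].std(axis=1, ddof=1, keepdims=True)
    return (sources - mu) / sd   # signed Z-scores: no absolute value here
```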
Average: Group analysis
Subject averages: Compute within-subject averages for all the subjects, as described above.
Rectify the cortical maps (process: Pre-process > Absolute value).
Justification: Cortical maps have ambiguous signs across subjects: reconstructed sources depend heavily on the orientation of the true cortical sources. Given that the folding patterns of individual cortical anatomies vary considerably, cortical maps have subject-specific amplitude and sign ambiguities. This is true even if a standard anatomy is used for reconstruction.
Project the individual source maps on a template (only when using individual brains). For more details, see tutorial: Group analysis: Subject coregistration.
Group average: Compute grand averages of all the subjects. Do not use a weighted average: all the subjects should have the same weight in this average.
Smooth spatially the source maps (optional). You can smooth after step #3 for computing non-parametric statistics with the subject averages. For a simple group average, smoothing before or after averaging is equivalent.
Difference of averages: Within subject
Sensor average: Compute one sensor-level average per acquisition run and condition.
Sources: Estimate sources for each average (constrained, no normalization).
Source average: Average the source-level run averages to get one subject average.
Subject difference: Compute the difference between conditions for each subject i: (A_i - B_i).
Low-pass filter the difference (optional).
Normalize the difference: Z-score with respect to the baseline (no absolute value): Z(A_i - B_i).
Rectify the difference (apply an absolute value): |Z(A_i - B_i)|.
Project the individual difference on a template (only when using individual brains).
Group average: Compute grand averages of all the subjects: avg(|Z(A_i - B_i)|).
Smooth spatially the source maps (optional).
Difference of averages: Between subjects
Grand averages: Compute averages for groups #1 and #2 as in Average: Group analysis.
Difference: Compute the difference between the group-level averages: avg(|G_1|) - avg(|G_2|)
Limitations: Because we rectify the source maps before computing the difference, we lose the ability to detect differences between equal values of opposite signs. And we cannot keep the sign, because we are averaging across subjects. Therefore, many effects are not detected correctly.
Statistics: Single subject
A = B: Parametric or non-parametric
- Compute source maps for each trial (constrained, no normalization).
Parametric or non-parametric two-sample t-test, independent, two-tailed.
Identifies correctly where and when the conditions are different (sign not meaningful).
Directionality: Additional step to know which condition has higher values. Compute the difference of rectified averages: |avg(A_i)| - |avg(B_i)|. Combine the significance level (t-test) with the direction (difference): See details.
|mean(A)| = |mean(B)|: Non-parametric
- Compute source maps for each trial (constrained, no normalization).
Non-parametric independent two-sample "absolute mean test", two-tailed: T = (|mean(A)| - |mean(B)|) / sqrt(|var(A)|/N_A + |var(B)|/N_B)
Interesting alternative that provides at the same time a correct estimation of the difference (where and when) and its direction (which condition has higher values); a permutation sketch of this statistic follows this list.
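Below is a hedged sketch of this absolute mean test run as a permutation test, implementing the T statistic above; A and B are assumed to be (n_trials, n_vertices) arrays of unnormalized source values, and all names are illustrative.

```python
import numpy as np

def abs_mean_T(A, B):
    # T = (|mean(A)| - |mean(B)|) / sqrt(var(A)/N_A + var(B)/N_B), per vertex
    return (np.abs(A.mean(0)) - np.abs(B.mean(0))) / np.sqrt(
        A.var(0, ddof=1) / len(A) + B.var(0, ddof=1) / len(B))

def abs_mean_perm_test(A, B, n_perm=1000, seed=0):
    rng = np.random.default_rng(seed)
    X = np.concatenate([A, B])
    T0 = abs_mean_T(A, B)
    count = np.zeros_like(T0)
    for _ in range(n_perm):
        rng.shuffle(X)                            # relabel trials under H0
        Tp = abs_mean_T(X[:len(A)], X[len(A):])
        count += np.abs(Tp) >= np.abs(T0)         # two-tailed
    return (count + 1) / (n_perm + 1)             # permutation p-value per vertex
```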
Statistics: Group analysis, within subject
Power(A-B) = 0: Parametric
First-level statistic: Rectified difference of normalized averages. Proceed as in Difference of averages: Within subject, but stop before the group average (after step #8). You obtain one measure |A_i - B_i| per subject; test these values against zero.
Second-level statistic: Parametric one-sample Chi^2 test: Power = sum(|A_i - B_i|^2), i=1..N_subj ~ Chi^2(N_subj) (see the sketch after this list).
Identifies where and when the conditions are different (sign not meaningful).
- Warning: Very sensitive test, with lots of false positives (the whole brain can come out "significant").
|A| = |B|: Parametric or non-parametric
First-level statistic: Rectified and normalized subject averages. Proceed as in Average: Group analysis to obtain two averages per subject: A_i and B_i.
Second-level statistic: Parametric or non-parametric two-sample t-test, paired, two-tailed.
This test does not consider the sign difference within a subject, and therefore cannot detect correctly when A and B have opposite signs. It works well and indicates which condition has higher values when A and B have the same sign within a subject.
A = B: Parametric or non-parametric [anatomy template only]
First-level statistic: Normalized subject averages (not rectified, no projection needed). Proceed as in Average: Single subject to obtain two averages per subject: A_i and B_i.
Second-level statistic: Parametric or non-parametric two-sample t-test, paired, two-tailed.
Applies only if all the subjects share the same template anatomy. Not recommended when using individual anatomies because of the sign issue between subjects (the signs might be opposite between two subjects, and the projection of non-rectified values to a template might be inaccurate).
Power(A) = 0: Parametric
First-level statistic: Rectified and normalized subject averages. Proceed as in Average: Group analysis to obtain one average per subject i: |A_i|.
Second-level statistic: Parametric one-sample Chi^2 test: Power_A = sum(|A_i|^2), i=1..N_subj ~ Chi^2(N_subj).
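Here is a sketch of the parametric Chi^2 power test used in the parametric cases above, assuming Z is an (n_subjects, n_vertices) array of rectified, baseline Z-scored subject measures (|A_i - B_i| or |A_i|); under H0 each entry is approximately standard normal, so the sum of squares follows a Chi^2 distribution. Names and layout are illustrative.

```python
import numpy as np
from scipy import stats

def chi2_power_test(Z, dof_per_subject=1):
    # Z: (n_subjects, n_vertices) rectified, normalized subject measures.
    power = (Z ** 2).sum(axis=0)            # sum of squared z-scores per vertex
    dof = dof_per_subject * Z.shape[0]      # use 3 per subject for unconstrained maps
    return stats.chi2.sf(power, dof)        # one-tailed p-value per vertex
```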
Statistics: Group analysis, between subjects
|G1| = |G2|: Non-parametric
First-level statistic: Rectified and normalized subject averages. Proceed as in Average: Group analysis to obtain one average per subject.
Second-level statistic: Non-parametric two-sample t-test, independent, two-tailed.
Power(G1) = Power(G2): Parametric
First-level statistic: Rectified and normalized subject averages. Proceed as in Average: Group analysis to obtain one average per subject: |A_i|.
Second-level statistic: Parametric two-sample power F-test (see the sketch after this list):
Power_G1 = sum(A_i^2), i=1..N_1 ~ Chi^2(N_1)
Power_G2 = sum(A_j^2), j=1..N_2 ~ Chi^2(N_2)
F(N_1, N_2) = (Power_G1 / N_1) / (Power_G2 / N_2)
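And a sketch of the two-sample power F-test, assuming Z1 and Z2 are (n_subjects, n_vertices) arrays of rectified, normalized subject averages for G1 and G2 (illustrative names, not a Brainstorm API):

```python
import numpy as np
from scipy import stats

def power_f_test(Z1, Z2, dof_per_subject=1):
    d1 = dof_per_subject * Z1.shape[0]   # use 3 per subject for unconstrained maps
    d2 = dof_per_subject * Z2.shape[0]
    F = ((Z1 ** 2).sum(0) / d1) / ((Z2 ** 2).sum(0) / d2)
    # Two-tailed p-value: either group may have the larger power.
    p = 2.0 * np.minimum(stats.f.sf(F, d1, d2), stats.f.cdf(F, d1, d2))
    return F, np.minimum(p, 1.0)
```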
Unconstrained cortical sources
Three values for each grid point, corresponding to the three dipole orientations (X, Y, Z).
We want only one statistic and one p-value per grid point in output.
Averages
- Proceed as indicated above for constrained cortical sources.
Just replace the Rectify step with Flatten (process: Sources > Unconstrained to flat map).
The operator |A| has to be interpreted as the norm of the three orientations (see the sketch below):
|A| = sqrt(A_x^2 + A_y^2 + A_z^2)
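As a one-line illustration of the flattening step, assuming sources is an (n_grid_points, 3, n_times) array holding the X/Y/Z dipole components (the layout is an assumption):

```python
import numpy as np

def flatten_unconstrained(sources):
    # Norm across the orientation axis: |A| = sqrt(Ax^2 + Ay^2 + Az^2)
    return np.linalg.norm(sources, axis=1)   # -> (n_grid_points, n_times)
```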
Statistics: Single subject
|mean(A)| = |mean(B)|: Non-parametric
- Compute source maps for each trial (unconstrained, no normalization).
Non-parametric two-sample "absolute mean test", independent, two-tailed: T = (|mean(A)| - |mean(B)|) / sqrt(|var(A)|/N_A + |var(B)|/N_B)
Provides at the same time a correct estimation of the difference (where and when) and its direction (which condition has higher values).
Statistics: Group analysis, within subject
Power(A-B) = 0: Parametric
First-level statistic: Flattened difference of normalized averages.
Proceed as in Difference of averages: Within subject, but stop before the group average (after step #8). You obtain one measure |A_i - B_i| per subject; test these values against zero.
Second-level statistic: Parametric one-sample Chi^2 test for unconstrained sources: Power = sum(|A_i - B_i|^2), i=1..N_subj ~ Chi^2(3*N_subj)
Identifies where and when the conditions are different (sign not meaningful).
- Warning: Very sensitive test, with lots of false positives (the whole brain can come out "significant").
|A| = |B|: Parametric or non-parametric
First-level statistic: Flattened and normalized subject averages.
Proceed as in Average: Group analysis to obtain two averages per subject: A_i and B_i.
Second-level statistic: Parametric or non-parametric two-sample t-test, paired, two-tailed.
This test does not consider the sign difference within a subject, and therefore cannot detect correctly when A and B have opposite signs. It works well and indicates which condition has higher values when A and B have the same sign within a subject.
Power(A) = 0: Parametric
First-level statistic: Flattened and normalized subject averages.
Proceed as in Average: Group analysis to obtain one average per subject i: |A_i|.
Second-level statistic: Parametric one-sample Chi^2 test for unconstrained sources: Power_A = sum(|A_i|^2) = sum(A_ix^2 + A_iy^2 + A_iz^2), i=1..N_subj ~ Chi^2(3*N_subj).
Statistics: Group analysis, between subjects
|G1| = |G2|: Non-parametric
First-level statistic: Flattened and normalized subject averages.
Proceed as in Average: Group analysis to obtain one average per subject.
Second-level statistic: Non-parametric two-sample t-test, independent, two-tailed.
Power(G1) = Power(G2): Parametric
First-level statistic: Flattened and normalized subject averages.
Proceed as in Average: Group analysis to obtain one average per subject: |A_i|.
Second-level statistic: Parametric two-sample power F-test (unconstrained sources; the F-test sketch above applies with 3 degrees of freedom per subject):
Power_G1 = sum(A_ix^2 + A_iy^2 + A_iz^2), i=1..N_1 ~ Chi^2(3*N_1)
Power_G2 = sum(A_jx^2 + A_jy^2 + A_jz^2), j=1..N_2 ~ Chi^2(3*N_2)
F = (Power_G1 / N_1) / (Power_G2 / N_2) ~ F(3*N_1, 3*N_2)
Regions of interest (scouts)
Statistics: Single subject
- Even within-subject cortical maps have sign ambiguities. MEG/EEG source mapping has limited spatial resolution, and sources in opposing sulcal/gyral areas are typically reconstructed with inverted signs (constrained orientations only). Averaging activity in cortical regions of interest (scouts) would thus lead to signal cancellation. To mitigate this issue, Brainstorm uses algorithms to change the polarity of individual source time series before averaging across a scout region (a sign-flipping sketch follows this list). Please note, however, that the polarity of the resulting time series remains ambiguous, as it is determined by statistical considerations, not by physiological information.
We therefore recommend that you perform subject-wise averaging/contrasting before averaging scout time series. You may also consider aggregating scout source time series with PCA instead of averaging, computed across all files (all epochs and conditions) of a subject together, so that the same component is applied everywhere, further mitigating some of the ambiguities concerning the polarity of the source time series.
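For illustration, here is a minimal sign-flipping sketch: it aligns the polarity of each vertex with the scout's dominant spatial component before averaging. This is one common approach, not necessarily the exact algorithm Brainstorm implements, and the array layout is assumed.

```python
import numpy as np

def scout_mean_signflip(scout_sources):
    # scout_sources: (n_vertices, n_times) time series of one scout.
    # Dominant spatial pattern = first left singular vector.
    U, _, _ = np.linalg.svd(scout_sources, full_matrices=False)
    flips = np.sign(U[:, 0])
    flips[flips == 0] = 1.0
    # Flip each vertex to agree with the dominant polarity, then average.
    # The overall sign of the result remains arbitrary.
    return (flips[:, None] * scout_sources).mean(axis=0)
```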
Statistics: Group analysis, within subject
- Comparison of scout time series between subjects is tricky because there is no way to avoid sign ambiguity across subjects, so there are no clear recommendations. Rectifying before comparing scout time series between subjects may or may not be a good idea, depending on the case.
- A good understanding of the data (multiple inspections across channels/sources/subjects) can offer hints as to whether rectifying the scout time series is a good idea. Using unconstrained cortical maps to create the scout time series can ameliorate ambiguity concerns.
Time-frequency maps
Average: Single subject
Single trials: Compute time-frequency maps for each trial (magnitude, no normalization). Taking the square root of power (i.e. the magnitude) before a t-test is the more standard analysis.
Subject average: Average the time-frequency maps together, separately for each condition. This can be done automatically when computing the TF decompositions (option "Save averaged time-frequency maps" in the process options).
- The values are all strictly positive, there is no sign ambiguity: you can directly subtract the averages of the two conditions and interpret the sign of the difference.
- If you average time-frequency maps computed on sensor-level data, the same limitations apply as for averaging sensor level data (see sections about MEG and EEG recordings above).
Average: Group analysis
Subject averages: Compute within-subject averages for all the subjects, as described above.
Normalize the subject averages: ERD/ERS or Z-score with respect to the baseline (see the sketch after this list).
Group average: Compute grand averages of all the subjects.
Difference of averages: Simply compute the difference of the group averages.
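For reference, a short sketch of ERD/ERS baseline normalization, assuming tf is an (n_channels, n_freqs, n_times) power array and times its time vector in seconds; names and the baseline window are illustrative.

```python
import numpy as np

def ers_erd(tf, times, bl_start=-0.5, bl_end=0.0):
    bl = (times >= bl_start) & (times < bl_end)
    base = tf[..., bl].mean(axis=-1, keepdims=True)
    return 100.0 * (tf - base) / base   # percent change relative to baseline
```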
Statistics: Single subject
A = B: Parametric or non-parametric
Compute time-frequency maps for each trial (magnitude, no normalization).
- Parametric or non-parametric two-sample t-test, independent, two-tailed.
Statistics: Group analysis, within subject
A = B: Parametric or non-parametric [anatomy template only]
First-level statistic: Normalized subject averages (ERS/ERD or Z-score).
Proceed as in Average: Group analysis to obtain two averages per subject: A_i and B_i.
Second-level statistic: Parametric or non-parametric two-sample t-test, paired, two-tailed.
Statistics: Group analysis, between subjects
G1 = G2: Non-parametric
First-level statistic: Normalized subject averages (ERS/ERD or Z-score).
Proceed as in Average: Group analysis to obtain one average per subject.
Second-level statistic: Parametric or non-parametric two-sample t-test, independent, two-tailed.