Hi
I'm trying to replicate an analysis from another study. In this study they compared two groups using a combination of the Monte Carlo test and non-parametric bootstrapping running 4,000 permutations. Permutations were calculated following a component-free approach across the entire array of electrodes for every millisecond. Electrodes with significant results (p < 0.01) were placed into regions of interest (ROIs), and the activity within such regions was averaged out. They considered six ROIs and assigned fourteen electrodes to each ROI. Then, they compared the average activity from the six ROIs using 4,000 bootstrapping permutations (p < 0.05).
I'm unsure how to run this analysis in Brainstorm. I can run "Test > Permutation test: Independent" and set the number of Monte-Carlo randomisations to 4,000, but I'm not sure how to add the bootstrapping component. Do I understand correctly that it is required to account for the multiple-comparisons problem?
Best
David
Hi David,
While Brainstorm probably has the tools to do the analysis you need (averaging electrodes, permutations, etc.), your message describes a very convoluted analysis, so it is difficult to provide useful comments. One important point is that bootstrap and permutation are two different resampling procedures (bootstrap is resampling with replacement, permutation is resampling without replacement). You appear to use them interchangeably, or together, in your message (e.g. 'bootstrapping permutations'), so I am not sure how we can help.
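To illustrate the distinction, here is a minimal Python sketch (the data and group sizes are made up, and this is not Brainstorm code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: one value per subject in each of two groups
group_a = rng.normal(0.0, 1.0, size=20)
group_b = rng.normal(0.5, 1.0, size=20)
pooled = np.concatenate([group_a, group_b])

# Permutation: resample WITHOUT replacement -- shuffle the pooled data
# and split it back into two groups of the original sizes.
perm = rng.permutation(pooled)
perm_a, perm_b = perm[:20], perm[20:]

# Bootstrap: resample WITH replacement -- draw from the pooled data,
# so the same observation can appear more than once in a resample.
boot_a = rng.choice(pooled, size=20, replace=True)
boot_b = rng.choice(pooled, size=20, replace=True)

# A permutation resample is an exact rearrangement of the pooled data;
# a bootstrap resample generally is not.
print(sorted(perm) == sorted(pooled))   # True
print(sorted(np.concatenate([boot_a, boot_b])) == sorted(pooled))
```

This is only meant to show why "bootstrapping permutations" is ambiguous: the two procedures draw different resamples and give different null distributions.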
Best,
Dimitrios
Hi Dimitrios,
Thank you for your reply. Yes, I wasn't sure how to replicate this statistical analysis. What you say about permutations and bootstrapping both being resampling techniques makes sense. Below is an edited copy of the original text describing the statistical analysis from the paper I'm trying to replicate. I initially thought it involved running the permutation test in Brainstorm, selecting the significant sensors (p < 0.01) within each ROI, and averaging those in another permutation test. But they describe using both permutations and bootstrapping, which they claim removes the need for multiple-comparisons correction. I can try asking the authors for more information.
First, to identify significant between-group differences, we used a combination of the Monte Carlo test and non-parametric bootstrapping running 4,000 permutations. The data were later analyzed by applying 4,000 permutation draws to generate a histogram called the Monte-Carlo approximation of the permutation distribution. To calculate the differences between our data and this distribution, we used the Monte-Carlo estimation of the permutation p-value, which is the proportion of random partitions in which the observed test statistic is larger than the value drawn from the permutation distribution. If this p-value is smaller than the critical alpha-level, then it is concluded that the data between the two groups are significantly different. This method offers a straightforward solution for multiple comparison problems and does not depend on multiple comparisons correction or Gaussian assumptions about the probability distribution of the data. Permutations were calculated following a component-free approach across the entire array of electrodes for every millisecond. Electrodes with significant results (p < 0.01) were placed into regions of interest (ROIs), and the activity within such regions was averaged out. For each ROI we assigned fourteen electrodes. Then, we compared the average activity from the six ROIs using 4,000 bootstrapping permutations (p < 0.05). Such contrasts were independently carried out in three time-windows (early: 100–250 ms; intermediate: 250–500 ms; late: 500–900 ms).
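To make the quoted procedure concrete for myself, here is a minimal Python sketch of a Monte-Carlo estimate of a permutation p-value (made-up data and group sizes; note the usual definition is the proportion of permutations whose statistic is at least as extreme as the observed one):

```python
import numpy as np

rng = np.random.default_rng(42)

def perm_pvalue(x, y, n_perm=4000, rng=rng):
    """Monte-Carlo estimate of the permutation p-value for an unpaired
    difference in group means (two-sided). Illustration only."""
    observed = abs(x.mean() - y.mean())
    pooled = np.concatenate([x, y])
    n_x = len(x)
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)      # resample without replacement
        stat = abs(perm[:n_x].mean() - perm[n_x:].mean())
        if stat >= observed:                # at least as extreme as observed
            count += 1
    # p-value = proportion of permutations at least as extreme as the data
    return count / n_perm

# Made-up example data: two groups of 25 "subjects"
x = rng.normal(0.0, 1.0, 25)
y = rng.normal(1.0, 1.0, 25)
print(perm_pvalue(x, y))
```

This only covers the permutation part of the paragraph; it does not reproduce the electrode selection, the ROI averaging, or the bootstrap step.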
Hi David,
The paragraph you shared appears tightly connected with the rest of the analyses described in that paper, so you should try contacting the authors to find out more about their specific procedures. I am sorry I cannot help more at this point.
Best,
Dimitrios
Hi Dimitrios,
I contacted the author and he kindly sent the following information. In the script they set the 'mode' parameter to 'bootstrap'; from the statcond.m help: 'bootstrap' = non-parametric bootstrap made by permuting the input data {default: 'param'}.
"To do the contrasts we use the matlab's function "statcond.m" (which I send as an attachment). In this function we used as optional inputs the parameters "mode", "paired" and "naccu" and set them as "bootstrap", "on" and "4000", respectively. I understand that permutation test and bootstrap are two forms of resampling, but I think that the implemented terminology was based on the help explanation of the statcond.m function.
example: [t df pvals] = statcond(DATA, 'mode', 'bootstrap', 'paired', 'on', 'naccu', 4000);"
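For my own understanding, here is a rough Python sketch of what I think a paired resampling t-test along those lines does. This is purely my interpretation (a sign-flip resampling of paired differences), not statcond.m's actual implementation, and the data are made up:

```python
import numpy as np

rng = np.random.default_rng(1)

def paired_resample_ttest(a, b, naccu=4000, rng=rng):
    """Paired t-test with a resampling-based null distribution.
    Conceptual sketch only -- NOT the actual statcond.m code.
    Under the null, each subject's two conditions are exchangeable,
    so we resample by randomly swapping the conditions per subject
    (equivalently, flipping the sign of each paired difference)."""
    d = a - b
    n = len(d)
    def tstat(v):
        return v.mean() / (v.std(ddof=1) / np.sqrt(n))
    t_obs = tstat(d)
    count = 0
    for _ in range(naccu):
        signs = rng.choice([-1.0, 1.0], size=n)   # swap conditions at random
        if abs(tstat(d * signs)) >= abs(t_obs):
            count += 1
    pval = count / naccu
    return t_obs, n - 1, pval

# Usage mirroring the statcond call ('paired', 'on', 'naccu', 4000),
# with made-up data for 30 "subjects" in two conditions:
cond1 = rng.normal(1.0, 1.0, 30)
cond2 = rng.normal(0.0, 1.0, 30)
t, df, p = paired_resample_ttest(cond1, cond2, naccu=4000)
print(t, df, p)
```

Strictly speaking, sign-flipping is a permutation scheme rather than a bootstrap, which may be exactly the terminology mix-up the author mentions.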
He also sent the statcond.m script which is used to "compare two or more data conditions statistically using standard parametric or nonparametric permutation-based ANOVA (1-way or 2-way) or t-test methods". I'm not sure how to adapt it to Brainstorm for the purposes of my analysis. Would you be able to help?
Best regards,
David
Unfortunately, we won't be able to help you adapt custom scripts to your data. The purpose of this forum is to provide help and support for the methods we develop and distribute through Brainstorm. For other methods, the only thing we can do is explain how to export the data of interest out of Brainstorm so you can work on it with your own Matlab scripts (or other languages and environments).
The data structures are all documented in the introduction tutorials, with a summary in the Scripting tutorial, which you might find useful to read before starting to code any custom function:
https://neuroimage.usc.edu/brainstorm/Tutorials/Scripting#File_structures
Please let us know if there are structures or types of data you can't find or don't know how to use.