Source Statistics - Warning: Cannot determine t-test side

Dear Brainstorm community,

I was hoping to calculate source differences for some ERPs in a pre-post (within-subject) design, in which each test session had two conditions I would like to compare in a pre-post fashion.

After trying my best to follow the tutorials, the workflow page, and some guidance from previous posts here on the discussion forum (e.g. Statistics on sources), here is the general pipeline I ended up with:

  • used the default anatomy for each subject
  • EEG data was already preprocessed beforehand, so I simply imported the .set files for each subject, separating each file into its appropriate condition
  • created an average for each condition within each subject
  • generated BEM surfaces with adaptive integration
  • calculated the noise covariance matrix for each subject from the individual trial file, using the pre-stimulus period of the condition with the most trials (this may be wrong, but I don't think it would explain the errors I eventually run into)
  • I then estimated the sources with the unconstrained option
  • With these sources, I then applied a Z-score normalization with respect to the baseline
  • I then applied the Flatten process (process: Sources > Unconstrained to flat map), since I had used the unconstrained option earlier

At this point, I add the '...| zscore | norm' files to Process2, the pre files to Files A and the post files to Files B, trying to compare one condition in a pre-post fashion. I then select 'Parametric test: Paired', limit my time window to 200-300 ms while leaving the rest of the options at their defaults, and run.
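In script form, those last two steps correspond roughly to the calls below (a sketch only; I am quoting the process and option names from memory, and the sZscorePre/sZscorePost variables are just placeholders for the normalized source files, so the exact names may differ from what the Pipeline editor's generated .m script would show):

    % Flatten the unconstrained sources (norm of the three orientations) -- names approximate
    sFlatA = bst_process('CallProcess', 'process_source_flat', sZscorePre, [], 'method', 1);   % 1 = norm
    sFlatB = bst_process('CallProcess', 'process_source_flat', sZscorePost, [], 'method', 1);
    % Paired parametric t-test on 200-300 ms, other options left at their defaults
    sStat = bst_process('CallProcess', 'process_test_parametric2p', sFlatA, sFlatB, ...
        'timewindow', [0.200, 0.300], ...
        'test_type',  'ttest_paired', ...
        'tail',       'two');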

Here, I get a resulting 'stat' file titled 't-test paired', but after clicking it to see the results, I see a flat line for the t-test values, and the warning pops up: 'Warning: Cannot determine t-test side'. When running different types of analyses, I also run into 'All Values are null. Please check your input file'.

A couple of questions: a) is my pipeline correct in the sense that I should be able to do source statistics on it, and b) what am I doing wrong that I both receive this warning and see a flat line of zeros / all values being null?

EDIT: I just saw this post by Francois (P value of ERD analysis). Is it fair to assume that my second question is simply due to a lack of significant results, or is there something wrong with my preprocessing?

Any help would be greatly appreciated.

Best,
Paul

Hi Paul,

calculated the noise covariance matrix for each subject from the individual trial file, using the pre-stimulus period of the condition with the most trials (this may be wrong, but I don't think it would explain the errors I eventually run into)

Why didn't you decide to compute the noise covariance using the pre-stim baselines of all the trials?

At this point, I add the '...| zscore | norm' files to Process2, the pre files to Files A and the post files to Files B, trying to compare one condition in a pre-post fashion.

I'm not sure I understand what you mean with "pre" and "post".
For one given subject, you have two consecutive experimental conditions with the same duration and that can be compared time sample by time sample?
In a typical ERP analysis, within-subject tests between conditions are independent tests, not paired tests.
https://neuroimage.usc.edu/brainstorm/Tutorials/Statistics#Example_4:_Parametric_test_on_sources

A piece of general advice: constrained source maps look uglier, but they give similar information and are much easier to manipulate, especially for handling the statistical tests.

Here, I get a resulting 'stat' file titled 't-test paired', but after clicking it to see the results, I see a flat line for the t-test values

If you see flat lines, this is a time-series figure, which suggests you might be running the tests at the sensor level, not at the source level. Have you perhaps forgotten to click the "process sources" buttons in Process2?
Please post a few screen captures to illustrate your questions.

the warning pops up: 'Warning: Cannot determine t-test side'.

You can ignore this warning; it is related to the adjustment of the colormap to the range of significant values, to give more contrast to the images. If you don't have any significant data to display, this function doesn't work...
To see whether the file you wanted was computed successfully, in the Stat tab, set the p-value threshold to 1 and the correction for multiple comparisons to "uncorrected". This will display the value of the t-statistic for each time sample and each space point (sensor or dipole).

When running different types of analyses, I also run into 'All Values are null. Please check your input file'.

This is because there is nothing significant in your comparison.
To explore the differences between datasets, before trying to localize them in space, you can use the machine learning tools available in Brainstorm (the two new functions are not documented yet, but that will come soon):
https://neuroimage.usc.edu/brainstorm/Tutorials/Decoding

a) is my pipeline correct in the sense that I should be able to do source statistics on it

What you describe seems to match what is recommended in the online tutorial.


@pantazis @Sylvain @John_Mosher @leahy @hossein27en
This is related to this issue: Group analysis: recommended workflow and statistics · Issue #141 · brainstorm-tools/brainstorm3
Can you please give your point of view on these questions?

Hi Francois,

thank you very much for the insight.

  1. When I first imported my .set files, I did so in a manner in which each condition (4 in total) became a separate file in Brainstorm. From my understanding, the noise covariance could only be calculated from a single file, so for each subject, I ended up choosing the imported condition with the most trials for the calculation of the noise covariance. In hindsight, I suppose it would be better to import the entire file with all the trials together (not separated into the 4 conditions), and use this file for the noise covariance calculation?

  2. Apologies for the confusing language; I have a pre-post design in the sense that I have a group at baseline, and the same group then goes through a treatment and thus has a post-treatment EEG session. So I'm comparing each condition separately, in the same subjects, between their pre-treatment and post-treatment sessions. This is why I used paired tests; would that still be incorrect?

  3. Hmm, I did use the process sources buttons, but under the Stat tab it lets me view the t-test values across time, and it is these values that I find are flat. I will post pictures if I can't manage to fix it.

  4. Perhaps this can't be answered, but I have strong significant effects at the sensor level (using FieldTrip and their cluster method), but don't get anything significant in the source domain (using the same cluster method as implemented in Brainstorm). Is there any reason to be concerned if one can see strong effects at the sensor level but not at the source level?

Thank you again for all the help.
Paul

From my understanding, the noise covariance could only be calculated from a single file

No, you can use as many files as needed; the more the better. Select all the imported trials at the same time in the database explorer before right-clicking on one of them (or use the Process1 tab). Avoid using averages to estimate this noise covariance matrix. Example:
https://neuroimage.usc.edu/brainstorm/Tutorials/MedianNerveCtf#Noise_covariance_matrix
https://neuroimage.usc.edu/brainstorm/Tutorials/NoiseCovariance#Variations_on_how_to_estimate_sample_noise_covariance
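If you script it, the equivalent call is roughly the one below (a sketch only; the option names are from memory and the sTrialsCond1...sTrialsCond4 variables are placeholders for your imported trials, so check against the script generated by the Pipeline editor):

    % Concatenate the lists of imported trials from all conditions for one subject (placeholder variables)
    sAllTrials = [sTrialsCond1, sTrialsCond2, sTrialsCond3, sTrialsCond4];
    % Estimate the noise covariance from the pre-stimulus baseline of every trial
    bst_process('CallProcess', 'process_noisecov', sAllTrials, [], ...
        'baseline',    [-0.200, 0], ...   % adapt to your actual pre-stim window
        'dcoffset',    1, ...             % remove the DC offset of each block (value from memory)
        'replacefile', 1);                % replace any existing noise covariance file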

So I'm comparing each condition separately, in the same subjects, between their pre-treatment and post-treatment sessions. This is why I used paired tests; would that still be incorrect?

At the single-subject level, use an independent test to compare the multiple trials of condition "pre" vs. the multiple trials of condition "post".
At the group level, use a paired test to compare the subject-level averages of condition "pre" vs. condition "post".
https://neuroimage.usc.edu/brainstorm/Tutorials/Workflows#Constrained_cortical_sources
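In script form, this two-level logic would look roughly like this (a sketch only; the process names and test_type values are quoted from memory and should be checked against the scripts in the Statistics tutorial):

    % Single-subject level: independent test between the trials of the two conditions
    bst_process('CallProcess', 'process_test_parametric2', sTrialsPre, sTrialsPost, ...
        'test_type', 'ttest_unequal', ...   % independent t-test (unequal variance), name approximate
        'tail',      'two');
    % Group level: paired test between the subject-level averages (one file per subject in A and in B)
    bst_process('CallProcess', 'process_test_parametric2p', sAvgPre, sAvgPost, ...
        'test_type', 'ttest_paired', ...
        'tail',      'two');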

Hmm, I did use the process sources buttons, but under the Stat tab it lets me view the t-test values across time, and it is these values that I find are flat. I will post pictures if I can't manage to fix it.

Screen captures would help us understand what you did and what you got.
Press Alt+PrintScreen on your keyboard to make a screen capture of the active window (unless you are using a Mac...), then paste it directly into the text editor of the forum.

Perhaps this can't be answered, but I have strong significant effects at the sensor level (using FieldTrip and their cluster method), but don't get anything significant in the source domain (using the same cluster method as implemented in Brainstorm). Is there any reason to be concerned if one can see strong effects at the sensor level but not at the source level?

Cluster-based corrections are very sensitive to the sampling along the dimensions that can be clustered. The types of correction you get with this cluster approach in sensor space and in source space are completely different...

Statistics at the source level are complicated, especially for unconstrained sources.
I will let the experts reply: @pantazis @Sylvain @John_Mosher @leahy @hossein27en ?

Hi Paul,

A few more comments to add to the conversation.

-You should use as many trials as possible to estimate the noise covariance; more trials give a better estimate. Also, avoid any biases (e.g. estimating the noise covariance from only one condition and then applying it to both).

-In different sessions, noise conditions may be different, so you may want to estimate 2 noise covariance matrices, one for pre- and one for post-. Then do inverse imaging separately for each condition.

-Indeed as you say, you should be using a paired test for pre- vs. post- comparisons of same participants.

-I agree with Francois that using constrained source maps will be simpler to use. Unconstrained sources combined with z-score and flattening seems to be a more convoluted approach than necessary (I have not used it myself).

-Perhaps you may want to apply some spatial smoothing to your cortical maps to improve statistical power, in case subjects do not precisely align?

-Important advice: Cortical activation maps tend to vary a lot depending on local shape, and large clusters are difficult to get because activity changes as we move from neighbouring gyri to sulci, etc. I would encourage you to use orientation-constrained cortical maps and compute region-of-interest (scout) time series for interesting brain areas. Then compare these time series between conditions and you may have higher chances of finding experimental effects.
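For example, extracting the scout time series could be scripted roughly like this (a sketch only; the atlas name, scout names, and option values are placeholders from memory, to be adapted to your data):

    % Extract ROI (scout) time series from each subject's source average (placeholder scouts)
    sScoutsPre = bst_process('CallProcess', 'process_extract_scout', sAvgPre, [], ...
        'scouts',      {'Desikan-Killiany', {'precentral L', 'precentral R'}}, ...
        'scoutfunc',   1, ...    % 1 = mean across vertices (value from memory)
        'isnorm',      0, ...
        'concatenate', 0);
    sScoutsPost = bst_process('CallProcess', 'process_extract_scout', sAvgPost, [], ...
        'scouts',      {'Desikan-Killiany', {'precentral L', 'precentral R'}}, ...
        'scoutfunc',   1, 'isnorm', 0, 'concatenate', 0);
    % Then compare the two sets of scout time series with a paired test in Process2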

Best,
Dimitrios

Thank you both, Francois and Dimitrios, for your very valuable feedback. If I may build on this conversation, I had a few more questions I was hoping for your input on:

  1. Regarding the stats-related question about finding effects at the sensor level but not in source space, I came across this paper (https://www.sciencedirect.com/science/article/pii/S1053811912009895?via%3Dihub) and, if I may, quote it:

Due to differential sensitivities of sensor and source space analyses it is sometimes the case that a particular effect is significant in one but not the other. When an effect is significant at the sensor level with all the proper corrections for multiple comparisons and the hypothesis is about the existence of an effect rather than about a specific area being involved, it could be acceptable to only report the peaks of a statistical map at the source level without requiring correction for multiple comparisons over the whole brain. When an effect is significant at the source level corrected for multiple comparisons and the choice of time and frequency windows for the source analysis can be motivated a-priori, a sensor-level test is not necessary. What should be avoided is doing a sensor-level test without proper MCP correction and using it to motivate a source-level test that achieves significance. This would constitute double-dipping (Kriegeskorte et al., 2009) similar to using peaks in the data to constrain a sensor-level test.

For my analysis, because I first did an EEG sensor analysis (corrected for multiple comparisons), would it be okay then to not correct for multiple comparisons in my case? What would constitute 'the peaks of a statistical map at the source level'?

  2. I'm a bit confused as to when to apply absolute values and smoothing. I'm working on a different dataset where I have the MRIs of the participants and thus (at your recommendation) have used constrained sources. If I may briefly describe my workflow (following the Brainstorm workflow as best as I could for 'Constrained cortical sources', with the ultimate goal of doing statistical testing between two independent groups): once I got the average source file for each participant, I:
  • normalized with Z-score with respect to the baseline (no absolute value)
  • applied the default smoothing option to each individual's average file
  • I then rectified (absolute value) each individual's file
  • I then projected each individual's rectified source file to the template anatomy
  • I then use these resulting files (under the 'intrasubject' tab) for statistical testing (two groups; independent t-tests in my case).

I have a feeling I've done the rectification and smoothing at the wrong stage...

Thank you again for all the help.
Paul

@pantazis?

normalized with Z-score with respect to the baseline (no absolute value)
applied the default smoothing option to each individual's average file
I then rectified (absolute value) each individual's file

I have a feeling I've done the rectification and smoothing at the wrong stage...

The smoothing step you are talking about is a spatial smoothing along the surface? This should be done on rectified source maps, as indicated here:
https://neuroimage.usc.edu/brainstorm/Tutorials/Workflows#Constrained_cortical_sources

Why have you decided to do it before?
Is there something confusing in the documentation?

Hi Francois,

That was my mistake; I think I misread this post (Statistics on sources) and did not see the 'use absolute values' option checked in the smoothing process, and thus thought smoothing was done prior to rectification. So my workflow now is:

  • normalized with Z-score with respect to the baseline (no absolute value)
  • I then rectified (absolute value) each individual's file
  • I then projected each individual's rectified source file to the template anatomy
  • applied the default smoothing option to each individual's projected average file
  • I then use these resulting files (under the 'intrasubject' tab) for statistical testing (two groups; independent t-tests in my case).

That looks to be similar to the workflow page.
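For the record, the corrected order as a rough script (the process and option names are quoted from memory, so treat them as approximate and compare with the script the Pipeline editor can generate):

    % 1. Z-score normalization with respect to the baseline (no absolute value)
    sZ = bst_process('CallProcess', 'process_baseline_norm', sAvg, [], ...
        'baseline', [-0.200, 0], ...   % adapt to the actual baseline window
        'method',   'zscore');
    % 2. Rectify (absolute value)
    sAbs = bst_process('CallProcess', 'process_absolute', sZ, []);
    % 3. Project the rectified maps onto the template anatomy
    sProj = bst_process('CallProcess', 'process_project_sources', sAbs, [], ...
        'headmodeltype', 'surface');
    % 4. Spatial smoothing on the projected, rectified maps
    sSmooth = bst_process('CallProcess', 'process_ssmooth_surfstat', sProj, [], ...
        'fwhm', 3, 'overwrite', 0);
    % 5. Then: independent t-test between the two groups in Process2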

Thank you again!
Paul