Questions on the tPAC process

Hello,

I am trying to run tPAC analyses in Brainstorm using MEG data, and I am stuck at a few points. I hope you can help me with some steps, as I am fairly new to this field.

In my analysis, I am splitting my data into two time windows (same condition, different times): one epoch is 5 s long, the other is 9 s long. My first question concerns the baseline time for the DC offset correction: the tutorials suggest using the first 100 ms as a baseline, but I don't have a clear baseline (no pre-stimulus period). Is it OK to use the whole epoch duration instead?

My second question is whether the data covariance matrix used for the beamformer needs to be different for each split. I also want to compare the 5 s epoch between tasks (different conditions). Do I need a different data covariance matrix for each condition, or is it valid to use a single matrix estimated from all the conditions pooled together?

My next question concerns the constrained versus unconstrained options when computing sources with a beamformer. How important is it to use the unconstrained option (which was recommended in the tutorials) when I have individual anatomy files? For the purpose of my project I don't think it is necessary, but I would like to hear your opinion.

Once I have computed the sources, I want to run tPAC in source space using scouts, and I have been advised to use surrogate data to z-score the output values. However, I think this option has been removed in a recent update (?), and I cannot find how to generate surrogate values. If it works, my next steps would be, in this order:

  • run the tPAC analysis in source space
  • generate surrogate data (from the same data used for this analysis)
  • z-score the tPAC output values with respect to the surrogate distribution (roughly as sketched below)
  • perform statistical tests
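
To make sure I understand the surrogate step, here is a rough sketch of what I have in mind (illustrative Python with made-up placeholder inputs, not Brainstorm's actual code or API):

```python
# Rough sketch of the surrogate/z-scoring plan (illustrative placeholders,
# not Brainstorm's API). The usual trick is to time-shift the amplitude
# envelope relative to the phase signal: this keeps both spectra intact
# but destroys any genuine phase-amplitude alignment.
import numpy as np

rng = np.random.default_rng(0)

def coupling(phase, amp):
    """Mean-vector-length PAC measure (Canolty-style)."""
    return np.abs(np.mean(amp * np.exp(1j * phase)))

# Made-up inputs standing in for a scout's low-frequency phase and
# high-frequency amplitude time series (e.g. from Hilbert transforms).
n = 5 * 600                                  # 5 s at 600 Hz, assumed
phase = rng.uniform(-np.pi, np.pi, n)
amp = rng.random(n)

pac_obs = coupling(phase, amp)
pac_surr = np.array([coupling(phase, np.roll(amp, rng.integers(1, n)))
                     for _ in range(200)])

# z-score the observed coupling against the surrogate distribution,
# then feed the z values into the statistical tests.
z = (pac_obs - pac_surr.mean()) / pac_surr.std()
```

Is that roughly what the process is expected to do?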

Lastly, I am wondering about the comodulogram generated from the tPAC maps: what does the measure "level of coupling" represent, and how can I access it in the output file?

Many thanks!
Linx

> However, I don't have a clear baseline (no pre-stimulus period). Is it OK to use the whole epoch duration instead?

It could be an option.
Another option: apply no DC correction and instead use a high-pass filter at a low frequency.
https://neuroimage.usc.edu/brainstorm/Tutorials/ArtifactsFilter#What_filters_to_apply.3F
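
Outside of Brainstorm, the two options look roughly like this (a generic numpy/scipy sketch; the sampling rate and epoch are placeholders):

```python
# Two ways to remove the DC offset when there is no pre-stimulus baseline
# (generic illustration, not Brainstorm code). `epoch` is assumed to be a
# (n_channels, n_times) array sampled at `fs` Hz.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 600.0                                   # sampling rate in Hz (assumed)
epoch = np.random.randn(5, int(5 * fs))      # placeholder 5 s MEG epoch

# Option 1: baseline = the whole epoch, i.e. subtract the epoch mean.
epoch_dc = epoch - epoch.mean(axis=1, keepdims=True)

# Option 2: no DC correction, but a high-pass filter at a low frequency
# (e.g. 0.3 Hz), which removes the offset and slow drifts together.
b, a = butter(2, 0.3 / (fs / 2), btype="highpass")
epoch_hp = filtfilt(b, a, epoch, axis=1)
```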

> My second question is whether the data covariance matrix used for the beamformer needs to be different for each split?

You need to specify what your data of interest is, in order to create the appropriate spatial filter.
If you're not sure, use a minimum-norm approach: it does not require the definition of a data covariance and will probably be easier to handle. This is what we recommend in all our tutorials.
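
For reference, the data covariance is simply estimated from the sensor time series in the window you declare as being of interest. In a plain numpy sketch (not Brainstorm's implementation; `epochs_cond_a`/`epochs_cond_b` are hypothetical arrays), pooling the conditions gives one covariance, hence one spatial filter applied identically to all conditions:

```python
# Sketch of a data covariance estimate for a beamformer (numpy only,
# not Brainstorm's actual code). `epochs` has shape
# (n_epochs, n_channels, n_times), each epoch already baseline-corrected.
import numpy as np

def data_covariance(epochs):
    """Covariance of the sensor time series, averaged across epochs."""
    n_epochs, n_channels, n_times = epochs.shape
    cov = np.zeros((n_channels, n_channels))
    for ep in epochs:
        cov += ep @ ep.T / n_times
    return cov / n_epochs

# Per-condition filters: one covariance per condition.
#   cov_a = data_covariance(epochs_cond_a)
#   cov_b = data_covariance(epochs_cond_b)

# Common filter: one covariance pooled over both conditions, so the same
# spatial filter is applied to each condition before comparing them.
#   cov_common = data_covariance(np.concatenate([epochs_cond_a, epochs_cond_b]))
```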

> How important is it to use the unconstrained option (which was recommended in the tutorials) when I have individual anatomy files?

We mostly recommend constrained models, especially if you have the individual anatomy of your subjects. Unconstrained source maps are difficult to process and are not handled correctly in many parts of the software.
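
To illustrate why they are harder to handle: with an unconstrained model, each vertex carries three orientation components that must be collapsed into a single time series before computing a measure like tPAC, typically with a vector norm, which rectifies the signal (a hypothetical numpy sketch with placeholder data):

```python
# Unconstrained sources: three dipole components (x, y, z) per vertex,
# which must be reduced to one time series before tPAC. Illustrative only.
import numpy as np

n_vertices, n_times = 1000, 3000
src_unconstrained = np.random.randn(n_vertices, 3, n_times)   # placeholder

# A common reduction: the norm across the three orientation components.
# The result is non-negative (rectified), which can bias phase estimates.
src_flat = np.linalg.norm(src_unconstrained, axis=1)          # (n_vertices, n_times)

# A constrained model gives one signed time series per vertex directly.
src_constrained = np.random.randn(n_vertices, n_times)        # placeholder
```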

> Once I have computed the sources, I want to run tPAC in source space using scouts, and I have been advised to use surrogate data to z-score the output values. However, I think this option has been removed in a recent update (?), and I cannot find how to generate surrogate values.

> Lastly, I am wondering about the comodulogram generated from the tPAC maps: what does the measure "level of coupling" represent, and how can I access it in the output file?

For this part, we need help from @Samiee and @Sylvain.

Hi Francois,

Thanks for your clear answers! I'll apply your recommendations for now :slight_smile:

Linx