Computing statistics in source space

I have EEG data from 13 subjects and 2 conditions. All my analysis so far has been at the sensor level. Now I want to perform a time-frequency power analysis on source data and run group-level statistics between the two conditions. I have read the relevant tutorials on statistics, time-frequency and source modelling, and I have computed the head model and dSPM unconstrained sources for all the subjects.

Now, do I compute the within-subject source average first and then run the time-frequency process? Or do I take the sources directly and compute the time-frequency decomposition, to obtain one averaged source time-frequency file for each subject? After that, I project on the default anatomy, right? And then what pipeline do I follow to perform statistics between the two conditions? I know these steps are documented somewhere or other, and I have read them all. I have some idea as well: scouts have to be computed and their sources looked into, etc. But can someone help me with a pipeline I should follow, given my purpose? I am still not certain how I should go about it. Please help. Thanks a lot in advance!

You need to be careful with the way you handle unconstrained sources in the context of statistical analysis. See the tutorial Difference: https://neuroimage.usc.edu/brainstorm/Tutorials/Difference#Unconstrained_sources

Now, do I compute the within-subject source average first and then run the time-frequency process?

No, never compute time-frequency decompositions on averages: the non-phase-locked (induced) activity cancels out when averaging the time series, so it would be lost. Average the TF maps of the single trials instead:
https://neuroimage.usc.edu/brainstorm/Tutorials/TimeFrequency#MEG_recordings:_Single_trials

Or do I take the sources directly and compute the time-frequency decomposition, to obtain one averaged source time-frequency file for each subject?

Source maps (links) of the individual trials => Time-frequency => Average. Use the online average option of the Morlet wavelets or Hilbert processes: it avoids creating hundreds of useless intermediate files.

One option is to process only some specific scouts instead of the full source maps, in order to keep the size of the data generated (and the number of multiple comparisons to correct for!) reasonable.
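If you want to script this, here is a minimal sketch using Brainstorm's bst_process. The subject and condition names are placeholders, and the option names are assumptions: the reliable way to get the exact call is to run the pipeline once in the GUI and use Pipeline editor > Generate .m script.

% Sketch only: select the source links of all single trials for one
% subject/condition, then compute the Morlet TF keeping only the average.
sTrials = bst_process('CallProcess', 'process_select_files_results', [], [], ...
    'subjectname', 'Subject01', ...    % placeholder subject name
    'condition',   'Object');          % placeholder condition name
sAvgTF = bst_process('CallProcess', 'process_timefreq', sTrials, [], ...
    'clusters', {}, ...                % fill with scouts to reduce the data size
    'edit', struct(...                 % option names: check a generated script
        'Output',  'average', ...      % online average: one TF file per condition
        'Measure', 'power'));          % keep the complex values instead if you
                                       % plan to project the TF maps later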

After that, I project on the default anatomy, right?

If processing full source maps: projecting to the template first and then doing the TF analysis might be easier to handle, but I'm not sure. If you want to project the TF values themselves on the template, you need to keep the complex values instead of saving only the power.
Both approaches should give the same results: the projection is a linear combination of the source signals and the wavelet transform is linear in time, so the two operations are permutable (as long as you keep the complex TF values).

If you use TF of scouts: no need to project to a standard space, you can directly compare the values between subjects.

And then what pipeline do I follow to perform statistics between the two conditions?

This can be tricky, especially with unconstrained sources.
Non-parametric permutation tests should work with any kind of data as input, but the effects of interest still have to be present in the data you are testing.


Thank you so much Francois!
Following your suggestion, I deleted the unconstrained sources and computed constrained ones. Then I took the source files of all the trials and computed one averaged multitaper time-frequency file for each condition in each subject. I normalised those files and then tried to project these within-subject TF averages onto the default anatomy, but kept getting the following error:

Error using tess_interp_tess2tess (line 106)
One surface has an atlas "Structures", the other does not. You need to use surfaces coming from the same software to project sources.

Error in bst_project_sources (line 163)
[WmatSurf, sSrcSubj, sDestSubj, srcSurfMat, destSurfMat, isStopWarped] = tess_interp_tess2tess(srcSurfFile, destSurfFile, isInteractive, isStopWarped);

Error in tree_callbacks>@(h,ev)bst_project_sources(ResultFiles,sDefCortex(iCort).FileName) (line 2638)
gui_component('MenuItem', jMenuDef, [], sDefCortex(iCort).Comment, IconLoader.ICON_CORTEX, [], @(h,ev)bst_project_sources(ResultFiles, sDefCortex(iCort).FileName));

Then I read up and realised that since I want to perform group analyses and had used the individual subjects' MRIs, I might have to do a coregistration using FreeSurfer. But I didn't have any FreeSurfer sphere, and when I tried to download the FSAverage file, I was shown a warning that I would lose the individual MRIs and surfaces, so I didn't take the risk.
As an alternative, I tried to compute a template source grid inside the group analysis folder, then compute new head models with the MRI volume option (my previous head models used the cortex surface option), estimate sources, compute the TF, and project on the template source grid in order to perform the statistics.
Either way, I'll need to compute the head models and estimate the sources again for each subject and each condition. I've referred to the following links to work out the approaches above:

https://neuroimage.usc.edu/brainstorm/Tutorials/DefaultAnatomy

https://neuroimage.usc.edu/brainstorm/Tutorials/CoregisterSubjects

I'd like to know if there's any way to get rid of the initial error and project the estimated TF sources on the default or a standard anatomy for group statistical analyses.
If not, which way would you suggest I take: start from scratch with FreeSurfer and reanalyse everything, or go ahead with the source grid approach and project my sources on the MRI volume model? My ultimate purpose is to perform statistics and compare my two conditions.
Please guide me through this. Thanks a lot!

One surface has an atlas "Structures", the other does not. You need to use surfaces coming from the same software to project sources.

I think the error message is very clear.
The list of templates in Brainstorm, and the programs they were processed with:
https://neuroimage.usc.edu/brainstorm/Tutorials/DefaultAnatomy

Then I read up and realised that since I want to perform group analyses and had used the individual subjects' MRIs, I might have to do a coregistration using FreeSurfer.

Or CAT, or BrainVISA.

But I didn't have any FreeSurfer sphere

?
If you imported the FreeSurfer output, the registered spheres (?h.sphere.reg) are loaded into the cortex files by the menu "Import anatomy folder".

I tried to download the FSAverage file

We don't really recommend using the FSAverage subject in Brainstorm. Prefer using the ICBM152 template for group analysis.

I'd like to know if there's any way to get rid of the initial error

Use only compatible templates.
(Note that FreeSurfer and CAT are compatible: you can project any anatomy processed with CAT on the FreeSurfer-based MNI ICBM152 template - and CAT is much faster than FreeSurfer.)

Alternatively, do only ROI-based analysis, which dramatically reduces the complexity of the analysis and the volume of data generated. And the group registration problem is gone.

Hello Francois,
Thank you so much for the suggestions! I've resolved the issue and successfully projected the subjects' TF files on the default anatomy. Then I performed a paired permutation test using Process2, with the subject files of condition 'Object' under A and those of condition 'Nonobject' under B, using the options to average over time and frequency, and FDR correction for multiple comparisons. I obtained the result as a cortex surface with certain regions highlighted, and these results are in accordance with what I expected.

But how can I get more specific details about the stats: what are the p-values, which time points or time ranges show the most significant differences between the two conditions, and in which regions? I haven't used scouts so far. Now that I can see activity in my desired region, can I specify a scout in that region in order to get more details about my stats? For example, would that show me which time points show significant activity in my region of interest?

My next query is regarding the coherence between the two hemispheres in my region of interest. I'm interested in the temporoparietal region only: I want to look at significant interhemispheric coherence in the temporoparietal regions under the two conditions. Basically, I need to find the time points at which the coherence differs between the two conditions. I had computed imaginary coherence at the sensor level, extracted the values corresponding to my time window of interest and my electrode pairs of interest in MATLAB, and then performed the stats separately. Now, I want to validate the same at the source level and get the exact sources of such activity. How do I go about it?
I cannot proceed the same way as for the TF source analysis, as there isn't enough memory to compute such big files. I want to define one scout in one hemisphere and look at the coherence between that scout and the same region in the opposite hemisphere (scout 2), and find the time points of significant coherence increase or decrease between the two conditions by performing group-level statistics. Do I take the constrained sources of all the trials and then run the coherence process? At which level should I define the scout? At the source level, how do I specify the electrode (around which I'll define my scout) for 1×N coherence? How do I perform the stats and manage my output? I've read all the relevant tutorials and forum discussions but didn't find much about source-level connectivity, so I'm feeling quite clueless about how to proceed. Please help.
Thanks a lot in advance! 🙂

You can change the p-value threshold interactively with the Stat tab, or set it permanently with the process Test > Apply statistic threshold.
The t-statistics are the values you see in the colormap. You can get more precise graphs showing the values over time by creating scouts and displaying their time series.

For all the other measures, you need to do it manually in Matlab, from the pmap and tmap fields.
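For example, a minimal sketch, assuming the stat file is loaded directly from the Brainstorm database folder (the pmap, tmap and Time field names are standard in Brainstorm stat files; the file name is a placeholder, and the matrix dimensions depend on the test you ran):

% Sketch only: read p-values and t-values from a Brainstorm stat file.
sStat = load('ptest_example.mat');         % placeholder file name
alpha = 0.05;
sigMask = sStat.pmap < alpha;              % significant tests (uncorrected p-values)
% Time points where at least one source is significant
% (pmap is [nSources x nTime], or [nSources x nTime x nFreq] for TF stats):
sigTimes = sStat.Time(squeeze(any(any(sigMask, 1), 3)));
tMax = max(abs(sStat.tmap(:)));            % strongest effect overall
fprintf('Max |t| = %.2f, %d samples with p < %g\n', tMax, nnz(sigMask), alpha);

Note that pmap typically stores the uncorrected p-values: the FDR correction is applied at display time, so re-apply your correction when working from these fields directly.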

I want to validate the same at the source level and get the exact sources of such activity. How do I go about it?

Use scout time series instead of the sensor signals, and repeat exactly the same analysis.
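For example, a hedged sketch with process_extract_scout, assuming sTrials is the list of single-trial source links as in the earlier sketch (the scout names are placeholders, and the option values should be checked against a script generated with Pipeline editor > Generate .m script):

% Sketch only: turn single-trial source links into scout time series
% ("matrix" files), so the sensor-level pipeline can be repeated on them.
sScoutTs = bst_process('CallProcess', 'process_extract_scout', sTrials, [], ...
    'scouts',    {'User scouts', {'TP_left', 'TP_right'}}, ... % placeholder ROIs
    'scoutfunc', 1, ...   % 1 = mean of the sources within each scout (assumption)
    'save',      1);      % save the extracted signals in the database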

I want to define one scout in one hemisphere

If you find a couple of ROIs in one of the atlases that correspond to what you want: use those (not too large, otherwise the effect of interest might get lost in the averaging of all the signals within the scout). Otherwise, draw them on the default anatomy, project them on the individual brains, and adjust them for each subject individually if some points are not projected correctly.

Do I take the constrained sources of all the trials and then run the coherence process?

If you are strictly interested in the coherence between two ROIs: use the Process2 tab with the same list of source files on the left and on the right (all the individual trials for one subject and one condition, constrained sources), and run the process Coherence AxB, selecting the left ROI for FilesA and the right ROI for FilesB (or vice versa). Repeat (or script) this for all the conditions and all the subjects.
Then run your paired non-parametric stat on the resulting files.
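A hedged sketch of that loop for one subject and one condition, using process_cohere2 (the function behind Coherence AxB) and the same placeholder scouts as above; the option names are assumptions to be checked against a generated pipeline script:

% Sketch only: imaginary coherence between two homologous scouts,
% computed across the single trials of one subject/condition.
sCoh = bst_process('CallProcess', 'process_cohere2', sTrials, sTrials, ...
    'scoutsa',    {'User scouts', {'TP_left'}}, ...   % placeholder left ROI
    'scoutsb',    {'User scouts', {'TP_right'}}, ...  % placeholder right ROI
    'cohmeasure', 'icohere');   % imaginary coherence, as in your sensor-level analysis
% Loop over subjects and conditions, then run the paired non-parametric
% test on the resulting connectivity files.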

At the source level, how do I specify the electrode (around which I'll define my scout) for 1×N coherence?

If you want to compute the coherence between one electrode and multiple scouts: use Process2 / Coherence AxB as well, selecting the recordings on the left, and the sources on the right. But I'm not sure I understand why you'd want to do this in this context.


Apologies for the confusing sentence. No, I don't intend to look at coherence between an electrode and scouts. I mentioned electrodes because, when computing coherence at the sensor level, I used 1×N coherence, where I had the option of specifying the electrode with respect to which the coherence with all the other electrodes was computed. Here in source space, I want to compute the coherence between two specific scouts in homologous regions of the two hemispheres.
Thank you so much for the guidance, I'll go ahead and see how it goes.