Compute Sources in different conditions

Hi Francois,

Since a recent update (2018), the workflow for computing sources seems to have changed. It used to be possible to use the following workflow:

  1. right-click the subject and select ‘Compute head model’ (overlapping spheres)
  2. compute the noise covariance from a single raw file
  3. compute sources on this single file; the little brain with ‘dSPM’ would then propagate to all files. So, for instance, if I had a “speak” condition and a “listen” condition, and I had averages of each of those conditions, then each average would also get a little dSPM brain with the sources computed.

It now appears that the recommended protocol is to compute noise covariance and sources for each average file (i.e. to repeat all steps for every condition). Is this actually the case?

Thanks!

Sarah

What makes you think this is the case?
It is recommended to estimate the noise covariance matrix and the sources separately for each subject, but within a single subject you should use as much data as you can to estimate the noise covariance matrix (and you should estimate it from the continuous recordings or the epoched trials, not from the averages).
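The point about not using averages can be illustrated numerically. This is a toy sketch (not Brainstorm code; all array shapes and variable names are made up for the example): averaging N trials shrinks the sensor noise variance by a factor of about N, so a covariance estimated from an averaged file badly underestimates the true noise level that the inverse model needs.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 100, 4, 500

# Simulated baseline segments: pure sensor noise, no evoked signal.
trials = rng.standard_normal((n_trials, n_channels, n_times))

# Recommended: pool all baseline samples from the single trials.
samples = trials.transpose(1, 0, 2).reshape(n_channels, -1)
cov_trials = samples @ samples.T / samples.shape[1]

# Not recommended: the covariance of the averaged trials
# underestimates the noise roughly n_trials-fold.
avg = trials.mean(axis=0)
cov_avg = avg @ avg.T / n_times

print(np.trace(cov_trials) / np.trace(cov_avg))  # ≈ n_trials
```

The ratio printed at the end is close to 100 here, which is why a noise covariance computed from an average would make the source maps look far less noisy than they really are.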

We do not recommend sharing the channel file across multiple runs, because the cleaning procedure (SSP/ICA/bad channel selection) might change between runs. Even in this context, though, you can compute the noise covariance from multiple runs: just select all the files at once before picking the menu "Noise covariance > Compute from recordings".

If you have doubts about your database organization, please post screen captures of your Brainstorm database explorer.

Cheers,
Francois

What makes you think this is the case?

The tutorial (https://neuroimage.usc.edu/brainstorm/Tutorials/SourceEstimation?highlight=(dspm)) online shows computing sources on an average file and does not show a single computation that has automatically propagated to the average.

It used to be the case that I would only need to compute the noise covariance and then compute sources on one raw file, and that little brain would propagate down through any average I had computed. I've attached a screenshot here. You can see that the sources were computed for a raw file, but the dSPM link doesn't appear under the other raw files (first screenshot) or under the average (second screenshot). I now need to compute sources for every raw file and every average separately. I should note that all of the raw files in my database come from a single, very long recording.

Thanks!

Sarah

I was only able to include one image in the previous post. Here's the second.

Hello

The behavior depends only on the way you configure your subjects when you create them. If you select the option “Use default channel file: Yes, use one channel file per subject”, it behaves the way you describe. If the channel file is shared across folders within one subject, the forward and inverse models are also shared.

But I don’t recommend doing this. You might lose information that is important for the source modeling at the level of the run (if you have SSP projectors or ICA components selected), and you might mix different head positions. In previous versions of the online documentation, we were not that cautious about these issues.

The most accurate option is to average the source maps across runs, not to compute the sources of the average, as illustrated in these new tutorials about group analysis:
https://neuroimage.usc.edu/brainstorm/Tutorials/VisualSingle#Source_estimation
https://neuroimage.usc.edu/brainstorm/Tutorials/VisualGroup#Subject_averages:_Famous.2C_Unfamiliar.2C_Scrambled
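The reason the two orders of operations differ can be sketched with a toy linear inverse (this is not Brainstorm code; the kernels and data below are random placeholders). For a single linear inverse operator, inverting-then-averaging and averaging-then-inverting would give the same result; they diverge precisely because each run gets its own inverse kernel (different noise covariance, SSP/ICA projectors, head position):

```python
import numpy as np

rng = np.random.default_rng(1)
n_sources, n_channels, n_times = 6, 4, 50

# Hypothetical per-run inverse kernels: they differ across runs
# because noise covariance, projectors and head position differ.
K_run1 = rng.standard_normal((n_sources, n_channels))
K_run2 = rng.standard_normal((n_sources, n_channels))

data_run1 = rng.standard_normal((n_channels, n_times))
data_run2 = rng.standard_normal((n_channels, n_times))

# Recommended: invert each run with its own kernel, then average sources.
avg_of_sources = (K_run1 @ data_run1 + K_run2 @ data_run2) / 2

# Shortcut: average the sensor data, then apply a single kernel.
sources_of_avg = K_run1 @ (data_run1 + data_run2) / 2

# The two agree only when the kernels are identical across runs.
print(np.allclose(avg_of_sources, sources_of_avg))  # False
```

With one shared kernel the two pipelines would be algebraically identical, which is exactly the (rarely satisfied) condition listed below for when sources of an across-run average are acceptable.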

You can compute averages of recordings across runs, but it is not recommended to estimate the sources for these averages, unless:

  1. you didn’t use any SSP/ICA cleaning,
  2. the list of bad channels is the same for all the files, and
  3. you used MaxFilter to register all the acquisition runs to the same head position (check this by selecting all the channel files > Display).

Is this a bit clearer?
Please ask if you need further explanations.

Francois

Oh I see now! Thanks so much; this cleared everything up.

Sarah