Extracted Values don't correspond to what is displayed in the scout time series (SMA)


Hello,

I'm working on MEG data and currently processing statistical tests.
I have 4 scouts: SMA and cerebellum, both bilateral. The recorded period runs from -1000 ms to 999 ms for all scouts.
As Brainstorm can't perform many of the statistical tests I need, I decided to extract the values from these 4 scouts for all my participants (2 groups of 16), and stored them in an Excel file (there was probably a much faster way to do it...) to then process them in Python.
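For reference, here is a minimal Python sketch of the second half of this workflow. The file name, column names, and zero-filled values are all hypothetical stand-ins for the real export; in practice the DataFrame would come from `pd.read_excel(...)`:

```python
import numpy as np
import pandas as pd

# Hypothetical stand-in for the exported sheet; in practice this would be
# something like pd.read_excel("scout_values.xlsx"). One row per time sample
# (-1000 to 999 ms at 1 kHz), one column per scout -- all names here are
# assumptions, not Brainstorm's actual export format.
time_ms = np.arange(-1000, 1000)
df = pd.DataFrame({
    "time_ms": time_ms,
    "SMA_L": np.zeros(time_ms.size),
    "SMA_R": np.zeros(time_ms.size),
    "Cerebellum_L": np.zeros(time_ms.size),
    "Cerebellum_R": np.zeros(time_ms.size),
})

# Average one scout over the post period (0 to 350 ms):
post = (df["time_ms"] >= 0) & (df["time_ms"] <= 350)
print(df.loc[post, "SMA_R"].mean())
```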

At first glance, the values extracted for both the SMA and the cerebellum seemed to be exactly the ones displayed in the time series (in Brainstorm) for all participants.
In the same way, the average of the time series for one group of participants also seemed to match the corresponding extracted values.
However, when it came to averaging over a time period, e.g. from 0 to 350 ms, the result of the averaging clearly didn't correspond to what was displayed in the time series. For example, a simple average over this period in Excel for the right SMA gave a value of 2.66 pA·m, whereas in the time series graph the amplitude never goes above 1.
Surprisingly, the values for the cerebellum seemed to correspond perfectly.
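One quick sanity check for this kind of mismatch (a sketch with synthetic data; the sine wave is just a stand-in for the exported right-SMA values): the mean over any time window is bounded by that window's min and max, so if Excel reports a window mean of 2.66 pA·m while the plotted curve never exceeds 1, the two sets of numbers cannot come from the same signal.

```python
import numpy as np

# Synthetic stand-in for one exported scout time series (1 sample per ms).
time_ms = np.arange(-1000, 1000)
sma_r = np.sin(time_ms / 200.0)

# The 0-350 ms averaging period.
window = (time_ms >= 0) & (time_ms <= 350)
window_mean = sma_r[window].mean()

# The window mean can never exceed the window's own extrema.
assert sma_r[window].min() <= window_mean <= sma_r[window].max()
print(window_mean)
```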

I tried extracting the values with various parameter changes, but my problem remains the same: the extracted values for the SMA don't seem to correspond to the actual activity.

Hence I'd like some clarification on why I get this odd output for the SMA and not the cerebellum (I assume it's something related to the source type?), and how I can fix it.

Thanks in advance for your help,

Mihoby

Please post screen captures illustrating what you describe:

  • all the source signals you refer to,
  • the database explorer showing the corresponding source files,
  • a 3D view showing all your scouts
  • the option window of all the brainstorm processes involved
  • a description of how you extracted and exported the scouts time series

Thanks

Hello, and thanks for your quick answer François (and sorry for my delayed reply...)

I actually followed the deep brain atlas tutorial almost exactly, except that I only selected the bilateral cerebellum, which I merged with the cortex.
The imported MEG data were already pre-processed; it's a MEG recording from -1000 ms to 999 ms, where 0 ms corresponds to the execution of the movement.
The forward problem was solved using the overlapping-spheres method based on the mixed source model. The noise covariance matrix was computed from recordings over an interval from -1000.0 ms to -500.0 ms (where theoretically nothing should happen). A minimum norm method was used to solve the inverse problem. Sources were then projected to the mixed model.
Here are the 4 scouts I defined:

  • bilateral SMA
  • bilateral Supplementary motor area (SMA)
  • bilateral cerebellum
  • bilateral cerebellum

I averaged the scout time series over 3 periods (baseline: -1000 to -650 ms; pre: -350 to 0 ms; post: 0 to 350 ms) for each participant individually (16 per group) and also for each group (average of the 16, then averages by period). For example, here is what this gives for one participant:
[screenshot: subject3ex]
In the displayed order:

  • the whole scout time series [screenshot: s3period]
  • averaged on the baseline period
  • averaged on the pre period
  • averaged on the post period

And here is what this gives for the whole group (16 participants averaged):
[screenshot: avgdi]
In the displayed order:

  • the whole scout time series [screenshot: s3period]
  • averaged on the baseline period
  • averaged on the pre period
  • averaged on the post period
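For reference, the three period averages described above can be sketched like this in Python (the time axis assumes 1 kHz sampling, so -1000 to 999 ms in 1 ms steps; the scout values are a zero-filled placeholder for the real time series):

```python
import numpy as np

time_ms = np.arange(-1000, 1000)   # 1 sample per ms, -1000 to 999 ms
scout = np.zeros(time_ms.size)     # stand-in for one scout time series

# The three averaging periods: baseline, pre, post.
periods = {"baseline": (-1000, -650), "pre": (-350, 0), "post": (0, 350)}
means = {name: scout[(time_ms >= lo) & (time_ms <= hi)].mean()
         for name, (lo, hi) in periods.items()}
print(means)
```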

At this point, it appeared to me that for the SMA (left and right) the averaged values might not reflect the real averaged activity of the corresponding period, at least for the individual output (the group average seemed OK), whereas for the cerebellum the values always seem pretty much correct.

I then decided to extract the values manually for each participant, condition, and scout from -1000 ms to 999 ms, using Extract values and then Export to Matlab:
[screenshot: gamma3value]

I then stored them in an Excel file.

Now, if I average as I did in Brainstorm, for each scout (bilateral SMA and cerebellum), here's what I get:
[screenshot: excelvalueswrong]

If I compare this with the Brainstorm output of the average values, it corresponds perfectly for the left and right cerebellum, but it doesn't for the left and right SMA:
[screenshot: avghvperiod]

That's my problem: the extracted SMA values don't seem to correspond to what is displayed in Brainstorm.

Thanks again for your help,
Mihoby

I'm sorry, there's been a mistake; here's what is displayed for the whole group:

I accidentally uploaded twice what is displayed for a single subject

The imported MEG data were already pre-processed; it's a MEG recording from -1000 ms to 999 ms, where 0 ms corresponds to the execution of the movement.
The noise covariance matrix has been computed from recordings with an interval starting from -1000.0ms to -500.0ms

Note that you can't estimate the noise covariance correctly from an averaged file; therefore you can't expect to obtain optimal results with a minimum norm solution.
https://neuroimage.usc.edu/brainstorm/Tutorials/NoiseCovariance

If you want to improve your model: import/epoch/average your data in Brainstorm. Otherwise, it's probably not worth the effort of trying to use the complicated mixed head models. Try volume head models instead:
https://neuroimage.usc.edu/brainstorm/Tutorials/TutVolSource

bilateral cerebellum

You show surface scouts for the cerebellum. If you really applied a surface constraint to the cerebellum, then there is no point in using a mixed head model. Simply use the default surface source model presented in the introduction tutorials; your analysis will be a lot simpler.

At this point, it appeared to me that for the SMA (left and right) the averaged values might not reflect the real averaged activity of the corresponding period, at least for the individual output (the group average seemed OK), whereas for the cerebellum the values always seem pretty much correct.

I'm sorry, this is still too fuzzy for me to be able to provide any help. In order to fix any bug or investigate unexpected results, we need to be able to reproduce what you show. At the moment, your examples mix individual and group data, and when you talk about "average" it's difficult to understand whether you refer to the average within the scout, the average across subjects, or the average over time. And it's still unclear how you obtained the values you show in the tables ("Now, if I average as I did in Brainstorm").

I suspect the following possible tracks for further debugging:

  1. There is some misunderstanding with the types of constraints you used in your mixed head model. I strongly recommend, at least as a first step, that you drop this approach. Start with a surface source model with unconstrained orientations, which is much simpler to handle. If you later think the cerebellum is not correctly represented, or simply to confirm your results, you can try a volume head model. (Obtaining similar results with two completely different approaches is always a good sign.)
  2. There is some confusion in the order of the different steps leading to the scalar values you show in your tables: extraction of the norm of the 3 orientations, normalization of the source maps, averaging of multiple sources within a scout, averaging over time. These steps are NOT interchangeable: different processing streams may apply them in different orders and lead to different final results.
  3. You have some display modifiers (absolute values, filters...) that you did not take into account when comparing the Brainstorm graphs with the values read directly from the exported files.
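Points 2 and 3 can be illustrated numerically with a small sketch (random numbers standing in for source signals; this is not Brainstorm code): for unconstrained sources with 3 orientations, taking the norm before averaging over time always gives a value greater than or equal to averaging each orientation first and then taking the norm, and averaging absolute values differs from averaging signed values.

```python
import numpy as np

rng = np.random.default_rng(0)
xyz = rng.standard_normal((3, 500))   # 3 orientations x 500 time samples

# Point 2: norm-then-average vs average-then-norm are not interchangeable.
norm_then_mean = np.linalg.norm(xyz, axis=0).mean()  # norm per sample, then time average
mean_then_norm = np.linalg.norm(xyz.mean(axis=1))    # time average per orientation, then norm
assert norm_then_mean >= mean_then_norm              # triangle inequality: order matters

# Point 3: displaying absolute values also changes the average.
signed_mean = xyz[0].mean()
abs_mean = np.abs(xyz[0]).mean()
assert abs_mean >= signed_mean

print(norm_then_mean, mean_then_norm, signed_mean, abs_mean)
```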

Please try to simplify your problem and come up with a very simple and reproducible example.
If you can reproduce your observations in a reliable way, then reproduce them on the dataset of the example tutorials. Finally, post here the exact sequence of actions needed to reproduce your results. This way, we will have some stable ground and we will be able to evaluate your results step by step.