Visualization of ICBM152 template and smoothing

Hi BrainStorm-Users,

I recently changed the standard brain template for my source localization from Colin27 to ICBM152. I now realized that, if I smooth the brain for visual inspection of my sources (Surface tab, smooth = 100%), the smoothing does not work properly. There are some edges/nooks spread over the cortex, especially near the borders (I attached a picture). I am wondering if I did something wrong or if this is a visualization problem in the software.
Does that have any influence on my source localization?

Thanks for any suggestions!
Maren

Hi Maren,

This smoothing algorithm does not work well for some configurations of triangles. It causes those spikes you see when you smooth a lot.
You would see a few of them on most of the surfaces generated with FreeSurfer and downsampled with Matlab.
At some point, I will try to add a correction step to fix the normals of the vertices with this singular topology.
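
To give an idea of where the spikes come from, here is a minimal sketch of an iterative Laplacian-style smoothing of the kind used for this display. This is an assumption about the general idea, not the actual Brainstorm code, and the variable names are only illustrative: each vertex is pulled toward the average of its neighbors, so a vertex with a degenerate neighborhood can fail to follow the local surface and end up as a spike.

```matlab
% Minimal sketch of iterative Laplacian-style smoothing (NOT the actual
% Brainstorm implementation). V: vertices (nVert x 3), A: sparse vertex
% adjacency matrix (nVert x nVert, 1 where two vertices share an edge).
function V = smooth_surface(V, A, alpha, nIter)
    nNeighbors = full(sum(A, 2));              % number of neighbors of each vertex
    for i = 1:nIter
        Vavg = (A * V) ./ nNeighbors;          % average position of the neighbors
        V = (1 - alpha) * V + alpha * Vavg;    % pull each vertex toward that average
    end
end
```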

This is not causing any problem in the source reconstruction. The smoothed surfaces are only used for the visualization of the sources.
For now just ignore these spikes when exploring data, and try to find view angles and smoothing factors that minimize them to produce nice figures.

To improve the quality of your figures, you can also re-project your source maps on higher-resolution surfaces.
With the 150,000- or 300,000-vertex surfaces, you would not see those spikes anymore.

Cheers,
Francois

Hi Francois,

Thanks for this fast response!

Maren

Hi Francois,

I have another question. I was reading a lot in the threads here, but I just want to clarify something. I used dSPM for my source localization, and now I want to extract scout time series to statistically compare my two groups in SPSS. If I extract the scout time series, the algorithm already flips (most of) the non-dominant signs, right? So is it necessary to use the absolute values (the option “mean_norm” in the Scout tab), or can I use the relative values?

For MNE you suggest calculating the z-scores of the absolute values. But z-scoring or normalization in general is not necessarily useful for dSPM, am I right?

Thanks a lot!
Maren

Hi Maren,

If I extract the scout time series, the algorithm already flips (most of) the non-dominant signs, right? So is it necessary to use the absolute values (the option "mean_norm" in the Scout tab), or can I use the relative values?

Those two questions are relatively independent.

  1. Brainstorm flips the sign according to the dominant orientation of the scout, not according to the sign. If a scout spreads over the two sides of a sulcus, the values of the sources are positive on one side and negative on the other, so without this flipping the average of those values would be zero. The sign flipping performed in bst_scout_value.m extracts the dominant orientation of the scout, then changes the sign of all the sources that have the opposite orientation (the orientation being the normal to the cortex); see the sketch after this list.
  2. The function you select for each scout defines how the time series of multiple sources are combined into one. "Mean" directly averages the values and returns signals oscillating around zero. "Mean_norm" averages the absolute values of the signals and returns signals that are strictly positive. I would not recommend using this function for constrained sources.
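
To make this more concrete, here is a minimal sketch of the two steps, assuming F holds the scout source signals (nSources x nTime) and N the corresponding cortex normals (nSources x 3). It illustrates the logic only; it is not the actual code from bst_scout_value.m.

```matlab
% 1) Sign flip: find the dominant orientation of the scout, then flip the
%    sources whose normal points the other way (e.g., the other wall of a sulcus)
[U, ~, ~] = svd(N', 'econ');           % first singular vector = dominant orientation
dominant  = U(:, 1);
flipMask  = (N * dominant) < 0;        % sources oriented opposite to the dominant one
F(flipMask, :) = -F(flipMask, :);

% 2) Scout function: combine the source signals into one time series
scoutMean     = mean(F, 1);            % "mean": oscillates around zero
scoutMeanNorm = mean(abs(F), 1);       % "mean_norm": strictly positive
```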

For MNE you suggest calculating the z-scores of the absolute values. But z-scoring or normalization in general is not necessarily useful for dSPM, am I right?

Correct. The dSPM solution is a normalized version of the wMNE inverse operator based on the noise covariance.
If you calculate only the wMNE solution, you need to normalize it yourself, based on a baseline you define.
The dSPM results should be similar to (wMNE + Z-score).
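
As a minimal sketch of that manual normalization, assuming F is the wMNE source matrix (nSources x nTime) and iBaseline the baseline sample indices (both names are illustrative):

```matlab
% Z-score each source with respect to its own baseline (implicit expansion
% requires MATLAB >= R2016b). dSPM applies a comparable per-source
% normalization, built from the noise covariance instead of an explicit baseline.
mu    = mean(F(:, iBaseline), 2);      % baseline mean of each source
sigma = std(F(:, iBaseline), 0, 2);    % baseline standard deviation of each source
Fz    = (F - mu) ./ sigma;             % z-scored source time series
```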

Cheers,
Francois

Hi Francois,

Sorry for all the questions, this topic is confusing for me.

When you say that you would not recommend the mean_norm function for constrained sources, do you mean that I should not use absolute values at all to proceed with my statistics? Or was this specific to that function?

If I extract the values oscillating around zero, do I proceed with those values, or do I need the absolute values to compare my groups? This makes quite a difference in magnitude, and I am not sure which is the right way.
I couldn’t find an answer to this problem so far.

Thanks!
Maren

No need to apologize, this is a very confusing topic.
The main problem is that the meaning of the sign of the minimum norm values is difficult to understand. It represents the “current orientation”, a measure we are not used to working with and that is, most of the time, not very relevant to the analysis.
However, we need to keep the signs all along the analysis to preserve the dynamics of the signal (the oscillations around zero / the phase information), and to average out the noise between trials.

Within one scout, for constrained sources, there is no reason to use an absolute value to group the source time series. It would damage the dynamics of your signal for no good reason.
Then, after you calculate the scout time series, it can be interesting to apply an absolute value to it, to convert its interpretation from “current orientation” to “activation”. You can do this only if you are not planning to do any frequency or connectivity analysis on the resulting signals.
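
For instance, assuming scoutMean is a signed scout time series (1 x nTime, as in the sketch above), this is all it amounts to:

```matlab
% Rectify only AFTER the sources have been combined into one scout signal,
% and only if no frequency or connectivity analysis will follow.
scoutActivation = abs(scoutMean);      % "current orientation" -> "activation"
```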

For processing sources in general, there are many different answers, depending on the case and what you are trying to do.
To compare between subjects for which you have the individual MRIs, you have to re-project all the source maps on the same default anatomy. Because the shape of the brain varies across subjects, some subjects might show a negative value where others show a positive value. It doesn’t mean that there is a difference in brain activity; it could be just due to differences in the folding configuration. In this case, you want to average the absolute values of the sources, to prevent the activity patterns of interest from canceling each other out in the average.
Now, if all the subjects use the same template anatomy, you don’t have this problem anymore, and you can keep the relative values.

If you calculate differences or t-tests between subject groups or conditions, you may want to test the absolute values of the averages (abs(mean(A)) - abs(mean(B))). This is the correct way to reveal what is “more activated” in group/condition A than in group/condition B.
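
A minimal sketch of that comparison, assuming A and B hold the signed scout time series of the two groups (nSubjects x nTime; the names are only illustrative):

```matlab
% The rectification is applied to the group averages, not to the individual
% signed signals, so the within-group dynamics still cancel out the noise.
diffAB = abs(mean(A, 1)) - abs(mean(B, 1));   % "more activated" in A than in B
```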

If you are planning to run any form of time-frequency decomposition, or any frequency or connectivity analysis, you cannot take the absolute value of your sources, because you would completely lose the information of interest.
Instead, you should run the analysis trial by trial on the sources (a TF decomposition for instance), then average/test the power or the magnitude of the metric you used across groups/subjects.
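
For example, a rough sketch of the trial-by-trial approach, assuming a cell array trials of signed scout time series and a hypothetical tf_decompose() that returns the complex TF coefficients of one trial (both names are assumptions, not Brainstorm functions):

```matlab
avgPower = 0;
for iTrial = 1:numel(trials)
    tf = tf_decompose(trials{iTrial});   % run on the SIGNED signal: no abs() before this
    avgPower = avgPower + abs(tf).^2;    % power is taken after the decomposition
end
avgPower = avgPower / numel(trials);     % average power across trials, ready for stats
```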

Does this help?
Francois

Thank you, this helps me a lot! I really appreciate your fast and detailed answers!

I think I understood that I can go with either the relative or the absolute values, because I use the standard anatomy for all my subjects and because I want to focus on activity differences (and, right now, not on frequency-specific differences). But I might go with your reasoning and use the abs(mean)!

Thanks!