Source modelling and subcortical structures

Mattiher,

I did not go through the long message history, but here are some thoughts that may answer some of your questions.

In Brainstorm, when you compute a dSPM solution, by default you obtain what we call ‘dSPM-unscaled’. This lets us easily manipulate the data by linearly combining trials (e.g. averaging) while still keeping track of the underlying distributions. When you want to visualize the actual dSPM, you should run the process “sources->scale averaged dSPM”, which multiplies the source maps by sqrt(Leff), where Leff is the effective number of trials. This scaling converts the maps so that they follow a z-distribution in the baseline.
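
As a minimal sketch of what that scaling does (not actual Brainstorm code; function name is my own, and I assume a plain average of independent trials so that Leff simply equals the number of trials):

```python
import numpy as np

def scale_averaged_dspm(dspm_unscaled, n_trials):
    """Scale an averaged 'dSPM-unscaled' map back to actual dSPM units.

    Assumption: a simple average of n_trials independent trials, so the
    effective number of averages Leff is just n_trials. Weighted averages
    or averages of averages would need a more general Leff.
    """
    leff = n_trials
    # Multiplying by sqrt(Leff) makes the maps ~z-distributed in the baseline
    return dspm_unscaled * np.sqrt(leff)
```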

wMNE with z-score normalization should give you something similar to dSPM, but not exactly the same. This is because the z-score normalization uses the baseline time series to estimate the standard deviation, so it depends on how many baseline samples you have and is not as precise as the dSPM estimation, which uses the noise covariance. Further differences can arise from manipulating the data, for example with low-pass filters: in that case the baseline time series will have reduced variance and the z-normalization will give you different results.
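
For illustration only, here is a hedged numpy sketch of what the baseline z-score normalization amounts to (the function name and the sources-by-time array layout are my own, not Brainstorm's):

```python
import numpy as np

def zscore_baseline(source_maps, baseline_idx):
    """Z-score each source time series against its own baseline.

    source_maps  : array of shape (n_sources, n_times)
    baseline_idx : indices (or boolean mask) of the baseline samples
    """
    baseline = source_maps[:, baseline_idx]
    mu = baseline.mean(axis=1, keepdims=True)
    # The std is estimated from a limited number of baseline samples,
    # which is why this is noisier than the dSPM normalization that
    # relies on the noise covariance.
    sigma = baseline.std(axis=1, ddof=1, keepdims=True)
    return (source_maps - mu) / sigma
```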

When you extract scout time series, there are different ways to summarize the activity within the scout. The default is ‘mean’, which averages (with some care for opposite signs) the time series of all the sources within the scout. Such an average reduces the variance, so your scout time series is no longer z-distributed in the baseline. Because sources within a scout are heavily correlated in time, we cannot simply rescale by the number of averaged sources to restore the z-scoring. The only way to normalize them again is to explicitly z-score the scout time series afterwards.
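
Again just as an illustrative sketch (hypothetical names, and the sign-flipping step is omitted), the re-normalization boils down to:

```python
import numpy as np

def scout_mean_rezscored(source_maps, scout_idx, baseline_idx):
    """'mean' scout time series, re-z-scored against its baseline.

    Averaging correlated sources shrinks the baseline variance by an
    unknown factor, so we estimate it again from the scout time series
    itself rather than dividing by the number of averaged sources.
    """
    scout_ts = source_maps[scout_idx].mean(axis=0)   # average the scout sources
    baseline = scout_ts[baseline_idx]
    return (scout_ts - baseline.mean()) / baseline.std(ddof=1)
```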

Best,
Dimitrios