Best source localisation method: tutorial vs literature

Dear all,

I have a concern about the "Source estimation" tutorial. In the "Source map normalization" paragraph, it is recommended to use non-normalized current density maps in scenarios such as single-trial analysis or computing a shared kernel. However, there is strong evidence in the literature (as also stated at the beginning of that paragraph) of the inaccuracy of this method compared with, for instance, sLORETA. Is the suggestion therefore to accept lower accuracy in exchange for a more computationally efficient approach?

Thank you in advance for your help

Ramtin

The idea is that normalization/standardization can be applied a posteriori, for instance after several runs have been averaged at the source level. dSPM and sLORETA can also be expressed as linear kernel operators, but the issue is the proper estimation of the data covariance from single-trial data: not impossible, but not entirely convenient either.
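To illustrate why a posteriori standardization is possible: both the imaging kernel and a dSPM/sLORETA-style standardization are linear operators, so they commute with trial averaging. This is only a minimal numerical sketch with made-up matrices (the kernel `K` and the per-source factors `s` are random placeholders, not values produced by any real inverse solver):

```python
import numpy as np

rng = np.random.default_rng(0)
n_sources, n_channels, n_times, n_trials = 5, 4, 10, 8

# Hypothetical minimum-norm imaging kernel (n_sources x n_channels).
K = rng.standard_normal((n_sources, n_channels))

# Hypothetical per-source standardization factors (e.g. a dSPM-like
# noise normalization); a diagonal operator, hence also linear.
s = 1.0 / np.sqrt(np.abs(rng.standard_normal(n_sources)) + 0.1)

# Simulated single-trial sensor data (n_trials x n_channels x n_times).
trials = rng.standard_normal((n_trials, n_channels, n_times))

# Standardize each single-trial source map, then average...
per_trial = np.mean([s[:, None] * (K @ x) for x in trials], axis=0)

# ...versus average the raw current-density maps, then standardize.
post_hoc = s[:, None] * (K @ trials.mean(axis=0))

# Both operators are linear and identical across trials, so the two
# orders of operations give the same result.
assert np.allclose(per_trial, post_hoc)
```

Note that this equivalence holds only when the standardization factors are the same for all trials; the practical difficulty mentioned above is precisely that estimating them reliably from a single trial is awkward.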

So, in summary, we do recommend that some form of standardization be applied to the source maps; the latter can be performed at different stages of the source-mapping pipeline.

Many thanks for your help.

The data covariance is the matrix used to standardize the signal, is that correct?

In the case of EEG resting state, that would correspond to an identity matrix. Therefore, standardizing at the single-trial level or after any averaging process would not affect the outcome... Am I wrong?
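One practical point behind the covariance discussion: even when the true covariance has a simple form, the *estimate* obtained from a single trial is much noisier than one pooled over many trials, which is what makes single-trial standardization fragile. A small simulation sketch (all dimensions and the "true" covariance are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n_channels, n_times = 8, 500

# Hypothetical "true" channel covariance, for illustration only.
A = rng.standard_normal((n_channels, n_channels))
true_cov = A @ A.T / n_channels
L = np.linalg.cholesky(true_cov)

def sample_cov(n_trials):
    """Empirical covariance pooled over n_trials simulated epochs."""
    X = L @ rng.standard_normal((n_channels, n_times * n_trials))
    return X @ X.T / X.shape[1]

# Estimation error from one trial vs. from a pool of 100 trials.
err_single = np.linalg.norm(sample_cov(1) - true_cov)
err_pooled = np.linalg.norm(sample_cov(100) - true_cov)

# Pooling shrinks the estimation error roughly as 1/sqrt(n_samples),
# so the pooled estimate is markedly more accurate.
assert err_pooled < err_single
```

So even if one expects the covariance to be close to identity, the single-trial estimate of it will deviate from that expectation much more than an estimate pooled across trials or runs.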

In the tutorial it also seems to be suggested that connectivity analysis be performed before standardization (i.e. using MNE). Is that still related to the data-covariance issue?

EDIT: in the case of connectivity analysis, the averaging step would occur only at the network-parameter computation. I would therefore end up with an analysis based on non-standardized values, and I am not sure how that could lead to reliable results...
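One reason running connectivity on non-standardized MNE maps can still be reliable: many connectivity metrics are invariant to a per-source rescaling, and a dSPM/sLORETA-style standardization is exactly such a diagonal scaling. A toy check with correlation-based connectivity (the source time series and scaling factors are random placeholders, not real data):

```python
import numpy as np

rng = np.random.default_rng(2)
n_sources, n_times = 3, 300

# Hypothetical non-standardized source time series (e.g. raw MNE maps).
maps = rng.standard_normal((n_sources, n_times))

# Hypothetical positive per-source standardization factors.
s = np.array([0.5, 2.0, 7.0])

# Pearson correlation is invariant to positive per-row scaling, so the
# connectivity matrix is identical before and after standardization.
conn_raw = np.corrcoef(maps)
conn_std = np.corrcoef(s[:, None] * maps)

assert np.allclose(conn_raw, conn_std)
```

This invariance holds for scale-free metrics such as correlation or coherence; metrics that depend on absolute amplitudes would of course be affected by the choice.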