Source estimation options

Hi everyone,

I am new to Brainstorm. I have several questions regarding source estimation options.

I have run dSPM, sLORETA and beamformers on my own MEG data to compare all three approaches. The aim is to perform a group study and statistics for the main effects.
There are some options described in the tutorials after the solution to the inverse problem and I would like to know whether these apply to all methods or just dSPM and sLORETA.

First of all, regarding LCMV estimation: if one performs it on only one file, then the suggested noise covariance regularization option is "Diagonal noise covariance". However, the tutorial recommends "Median eigenvalue", which is only the pre-selected option when working with batch processing.

Also, as far as I understand, Z-scoring (Standardize > Baseline normalization, [-100,-1.7]ms, Z-score transformation) is not required for dSPM and sLORETA. If so, is this also true (i.e. does it not apply) for LCMV? I read that the PNAI, which I applied, normalizes analogously to z-scoring.

Also, I skipped filtering the source files, since I am not entirely sure whether this should be applied.

Finally, I flattened the cortical maps (process: Sources > Unconstrained to flat map).

I would consider smoothing the cortical maps if needed.

Are these steps ok?

I tried performing statistics in Brainstorm right after flattening the cortical maps, but I always get error messages, e.g. "The dimensions of file #2 do not match file #1" or "No data read from FilesA". Can this be fixed somehow? Does this depend on the previous steps, or on something that I did not do?

Finally, I exported them to SPM for statistical analysis, where I resliced them. If I do not smooth the cortical maps in Brainstorm, is it ok to smooth them in SPM after the reslicing?

Thank you in advance
Haris.

First of all, regarding LCMV estimation: if one performs it on only one file, then the suggested noise covariance regularization option is "Diagonal noise covariance". However, the tutorial recommends "Median eigenvalue", which is only the pre-selected option when working with batch processing.

This depends on how many time points you have for the estimation of this noise covariance. Below a certain number of samples, it is not possible to estimate a proper noise covariance, and therefore it is safer to use only the diagonal values. If the number of samples is sufficient, use the default option.
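To illustrate the point about sample counts (a minimal sketch in NumPy, not Brainstorm code; all names here are illustrative): when there are fewer time samples than channels, the full covariance estimate is rank-deficient and cannot be inverted, whereas keeping only the diagonal (per-channel variances) always yields a well-conditioned matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_samples = 100, 50            # fewer samples than channels

noise = rng.standard_normal((n_channels, n_samples))
full_cov = np.cov(noise)                   # full estimate: rank-deficient here

rank = np.linalg.matrix_rank(full_cov)     # rank < n_channels -> not invertible

# Safer with few samples: keep only the diagonal (per-channel variances)
diag_cov = np.diag(np.diag(full_cov))      # full rank, trivially invertible
```

With enough samples (well above the number of channels), the full estimate becomes usable and the default regularization option applies.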

Also, as far as I understand, Z-scoring (Standardize > Baseline normalization, [-100,-1.7]ms, Z-score transformation) is not required for dSPM and sLORETA. If so, is this also true (i.e. does it not apply) for LCMV? I read that the PNAI, which I applied, normalizes analogously to z-scoring.

I am not familiar with this LCMV measure.
Maybe we can get an answer from the author of the code: @John_Mosher, @Sylvain, @leahy?
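For reference, the baseline z-scoring discussed above can be sketched as follows (an illustrative NumPy implementation, not Brainstorm's own code; the function name and default window are assumptions matching the [-100,-1.7]ms window mentioned in the question):

```python
import numpy as np

def baseline_zscore(sources, times, t_min=-0.100, t_max=-0.0017):
    """Z-score each source time series against its own baseline window.

    sources: array of shape (n_sources, n_times)
    times:   array of shape (n_times,) in seconds
    """
    # Select the baseline samples (here: -100 ms to -1.7 ms, as in the post)
    mask = (times >= t_min) & (times <= t_max)
    baseline = sources[:, mask]
    # Center and scale each source by its baseline mean and std
    mu = baseline.mean(axis=1, keepdims=True)
    sigma = baseline.std(axis=1, keepdims=True)
    return (sources - mu) / sigma
```

After this transformation, each source has zero mean and unit standard deviation within the baseline window, so post-stimulus values read as deviations from baseline in standard-deviation units.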

Also, I skipped filtering the source files, since I am not entirely sure whether this should be applied.

Low-pass filter? Don't filter if you don't need to.

Finally, I flattened the cortical maps (process: Sources > Unconstrained to flat map).
I would consider smoothing the cortical maps if needed.

There is probably no smoothing needed with unconstrained sources.

Are these steps ok?

For anatomical localization of sources, yes. For frequency or connectivity analysis, prefer constrained source maps, as the methods for processing unconstrained sources are not ready.

I tried performing statistics in Brainstorm right after flattening the cortical maps, but I always get error messages, e.g. "The dimensions of file #2 do not match file #1" or "No data read from FilesA". Can this be fixed somehow? Does this depend on the previous steps, or on something that I did not do?

All the files need to have exactly the same dimensions (same number of time samples, same anatomy).
If you can average the files, you should be able to run statistics on them.
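As a quick sanity check before running statistics, you can verify the dimensions yourself (an illustrative sketch mirroring the error message, not Brainstorm's internal check; the function name is hypothetical):

```python
import numpy as np

def check_same_dimensions(source_maps):
    """Raise if any source map's shape differs from that of the first file.

    source_maps: list of arrays, each of shape (n_sources, n_times)
    """
    ref_shape = source_maps[0].shape
    for i, s in enumerate(source_maps[1:], start=2):
        if s.shape != ref_shape:
            raise ValueError(
                f"The dimensions of file #{i} do not match file #1: "
                f"{s.shape} vs {ref_shape}")
```

If this check fails, the files were likely computed on different anatomies or with different time windows, and they need to be recomputed or projected to a common template first.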

Finally, I exported them to SPM for statistical analysis, where I resliced them. If I do not smooth the cortical maps in Brainstorm, is it ok to smooth them in SPM after the reslicing?

This is up to you; smooth as needed by your analysis.

At the present moment, is the method for processing unconstrained sources ready for connectivity analysis? I need it for my cerebro-cerebellar connectivity analysis.

Kind regards,
Konstantinos Tsilimparis

We are currently revisiting the way unconstrained sources are handled in the connectivity functions.
@Raymundo.Cassani is working on a new connectivity tutorial that will explain all these aspects.

@Raymundo.Cassani I kind of lost track of the recent conclusions, and I'm not sure whether we still need to update things in the code or not...
Can you please explain here how to obtain the optimal results with current functions?
Thanks

Also, for connectivity analysis, does it help to flatten the cortical maps (process: Sources > Unconstrained to flat map), since you get one positive parameter out of the three orientation parameters? Or is this flattening only needed for the statistical analysis (e.g. to check between-subject differences: norm(z_score(A))-norm(z_score(B))=0), and does it not affect the connectivity analysis (e.g. using metrics such as mutual information to measure connectivity)?

I appreciate your help!

Also, for connectivity analysis does it help to flatten the cortical maps

This was part of the questions we explored in the past few months.
It seems now clear to us that the answer is NO.

The PCA option degrades the quality a lot (you're better off with a constrained source model), and the NORM option is not suitable for connectivity analysis (it does not preserve the spectral content of the signal).
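The issue with the NORM option can be demonstrated in a few lines (an illustrative NumPy sketch, not Brainstorm code): taking the norm of the three orientations rectifies the signal, so a source oscillating at f0 ends up with its spectral energy at 2*f0 (and a large DC offset), which corrupts spectral and connectivity estimates.

```python
import numpy as np

t = np.linspace(0, 1, 1000, endpoint=False)    # 1 s at 1000 Hz
f0 = 10.0                                      # 10 Hz source
xyz = np.vstack([np.sin(2 * np.pi * f0 * t),   # x orientation
                 np.zeros_like(t),             # y orientation
                 np.zeros_like(t)])            # z orientation

flat = np.linalg.norm(xyz, axis=0)             # = |sin(...)|, never negative

# Locate the strongest non-DC spectral component of the flattened signal
spectrum = np.abs(np.fft.rfft(flat))
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
peak = freqs[np.argmax(spectrum[1:]) + 1]      # skip the DC bin
print(peak)                                    # 20.0 -> energy moved to 2*f0
```

The flattened signal no longer contains any energy at the original 10 Hz, which is why the norm-flattened maps are fine for amplitude statistics but not for spectral or connectivity measures.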