Source estimation in MEG

Dear Francois and all,

I am sorry to bother you. When I used Brainstorm to estimate sources, I ran into the following questions. I'd be grateful for any advice.

  1. The data I use is resting-state data. When I want to estimate the sources with LCMV, how should I choose the Baseline and Data time windows? In the tutorial (https://neuroimage.usc.edu/brainstorm/Tutorials/NoiseCovariance#Data_covariance), I see that you recommend using the full time window, but is it used as Baseline or Data?
    Can I choose recordings away from the peak of any identified interictal spike as Baseline, and the full time window as Data?
  2. In the tutorial (https://neuroimage.usc.edu/brainstorm/Tutorials/SourceEstimation), I see your recommendation:
    Use non-normalized current density maps for: Averaging files across MEG runs.
    Use normalized maps (dSPM, sLORETA, Z-score) for: Normalizing the subject averages before a group analysis.
    Therefore, when I perform source estimation across different runs of the same subject, is it better not to choose sLORETA? However, if I only compute the current density maps, average them across runs, and then use a Z-score for normalization, how should I select the baseline? The same problem exists for the healthy control group: how should I choose the Baseline for normalizing the healthy subjects? Could you give me some advice on this? By the way, before source estimation, I took the data from 3 s before to 3 s after each spike as the analysis segment.
  3. In the tutorial (https://neuroimage.usc.edu/brainstorm/Tutorials/VisualGroupOrig?highlight=(Intra-subject)#Sources-2), you recommend that, when performing group analysis, we first extract the absolute values of the source maps and then project them to the default anatomy.
    However, for connectivity analysis I need to downsample the source estimation results to the AAL atlas first, and the downsampled results can no longer be projected to the default anatomy; the following error is reported:
    Cannot process atlas-based source files.
    Therefore, can I project the source estimation results to the default anatomy first, then downsample to the atlas, and then perform connectivity analysis? If I follow this process, should I avoid taking the absolute value of the source estimation results? I ask because in the tutorial (https://neuroimage.usc.edu/brainstorm/Tutorials/SourceEstimation) you suggest:
    we cannot simply discard the sign of these values because we need these for other types of analysis, typically time-frequency decompositions and connectivity analysis.

I'm sorry for so many questions. Any help with this would be greatly appreciated. Thank you!
Yours,
Liu Xinyan

I see that you recommend using the full time window, but is it used as Baseline or Data?
Can I choose recordings away from the peak of any identified interictal spike as Baseline, and the full time window as Data?

Yes, this is exactly what is recommended.
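For reference, here is a minimal conceptual sketch in NumPy (not Brainstorm code; the sensor count, duration, and spike location are made up) of the two covariance matrices the LCMV beamformer needs: the noise covariance estimated from the spike-free segments, and the data covariance estimated from the full time window.

import numpy as np

# Conceptual sketch only: a resting MEG recording as (n_channels x n_times).
rng = np.random.default_rng(0)
n_channels, n_times = 50, 60000              # e.g. 50 sensors, 60 s at 1000 Hz
meg = rng.standard_normal((n_channels, n_times))

# "Baseline" = samples away from any identified interictal spike
spike_free = np.ones(n_times, dtype=bool)
spike_free[20000:23000] = False              # hypothetical spike around 20-23 s
noise_cov = np.cov(meg[:, spike_free])       # noise covariance (n_channels x n_channels)

# "Data" = the full time window of the recording
data_cov = np.cov(meg)                       # data covariance (n_channels x n_channels)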

Therefore, when I perform source estimation across different runs of the same subject, is it better not to choose sLORETA?

If you want to observe the sLORETA maps, you can always use the sLORETA normalization at any stage. If you later average multiple sLORETA maps and compute a Z-score on them, the Z-score cancels the sLORETA normalization and the result is equivalent to having initially computed only the current density maps.
These recommendations are here to make you aware that, if you normalize later in the analysis (at the group level), the initial normalization is lost, so you should not report using sLORETA if you have applied a Z-score on top of it.
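A minimal numerical sketch of this point (plain NumPy, made-up numbers): sLORETA rescales each source time series by a positive per-source factor, and a Z-score computed with respect to a baseline window is invariant to such a rescaling, so the scaling is effectively cancelled.

import numpy as np

# Made-up current density maps (n_sources x n_times) and a per-source positive scaling.
rng = np.random.default_rng(1)
n_sources, n_times, n_baseline = 4, 500, 100
current_density = rng.standard_normal((n_sources, n_times))
per_source_scale = rng.uniform(0.5, 2.0, size=(n_sources, 1))
scaled_map = per_source_scale * current_density

def zscore_wrt_baseline(x, n_baseline):
    """Z-score each row using the mean/std of its first n_baseline samples."""
    mu = x[:, :n_baseline].mean(axis=1, keepdims=True)
    sd = x[:, :n_baseline].std(axis=1, keepdims=True)
    return (x - mu) / sd

z1 = zscore_wrt_baseline(current_density, n_baseline)
z2 = zscore_wrt_baseline(scaled_map, n_baseline)
print(np.allclose(z1, z2))   # True: the per-source scaling is cancelled by the Z-score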

However, if I only compute the current density maps, average them across runs, and then use a Z-score for normalization, how should I select the baseline? The same problem exists for the healthy control group: how should I choose the Baseline for normalizing the healthy subjects?

This is a problem. Using dSPM/sLORETA avoids this issue.
Otherwise, the same recommendations as for the noise covariance could apply.
If you want to compute a Z-score with respect to a baseline from a different file, use the Process2 tab: baseline in FilesA, files to normalize in FilesB.
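Conceptually, that Process2 operation amounts to estimating the mean and standard deviation on the baseline file and applying them to the other file; a sketch with hypothetical array sizes (plain NumPy, not the Brainstorm implementation):

import numpy as np

rng = np.random.default_rng(2)
n_sources = 1000
baseline_sources = rng.standard_normal((n_sources, 2000))    # FilesA: resting baseline
segment_sources = rng.standard_normal((n_sources, 600))      # FilesB: segment to normalize

mu = baseline_sources.mean(axis=1, keepdims=True)
sd = baseline_sources.std(axis=1, keepdims=True)
segment_z = (segment_sources - mu) / sd                       # Z-scored w.r.t. FilesA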

However, during connectivity analysis, I need to downsample the source estimation results to the AAL atlas first, and the downsampled results can no longer be projected to the default anatomy; the following error is reported: "Cannot process atlas-based source files."

If you want to work at the ROI level, this is much simpler: you don't need to project the full source maps onto a template. Compute your connectivity matrices using scouts at the subject level, then run your statistical analysis across subjects directly on those outputs.

In general, avoid using the process "Downsample to atlas". Prefer using the process "Extract scout time series".
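As a rough sketch of the ROI-level idea (plain NumPy; the scout definitions are placeholders, and the unweighted mean and Pearson correlation below are just one possible choice of scout function and connectivity metric):

import numpy as np

rng = np.random.default_rng(3)
n_sources, n_times = 5000, 3000
sources = rng.standard_normal((n_sources, n_times))           # subject-level source maps

# Hypothetical scouts: lists of vertex indices, e.g. from an anatomical atlas
scouts = [np.arange(0, 100), np.arange(100, 250), np.arange(250, 400)]

# Scout time series: here, the unweighted mean of the sources within each scout
scout_ts = np.vstack([sources[idx].mean(axis=0) for idx in scouts])

# Example connectivity matrix between scouts (Pearson correlation)
connectivity = np.corrcoef(scout_ts)                          # (n_scouts x n_scouts)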

Dear Francois,

I'm sincerely grateful for your help.

In fact, I still don't know how to apply sLORETA or dSPM normalization after averaging the estimated sources from MEG data collected in different runs.

For dSPM, I see that you mention in the tutorial (https://neuroimage.usc.edu/brainstorm/Tutorials/SourceEstimation):

This should be used only for visualization and interpretation, scaled dSPM should never be averaged or used for any other statistical analysis.

For sLORETA, can I first average the data segments in the time domain, and then use sLORETA for source estimation? I ask because I saw you say in the tutorial:

sLORETA(Average(trials)) = Average(sLORETA(trials))

However, I worry that if data segments containing spikes are averaged, spikes whose peaks are not perfectly aligned in time will partially cancel, reducing the amplitude of the averaged spike, as in the small sketch below. Is my worry unnecessary?
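Here is what I mean, with hypothetical Gaussian spikes and made-up jitter values (plain NumPy):

import numpy as np

t = np.arange(-1.0, 1.0, 0.001)              # 2 s around the marked spike peak
jitters = [-0.05, 0.0, 0.04, 0.08]           # hypothetical peak misalignments (s)
spikes = np.array([np.exp(-((t - j) ** 2) / (2 * 0.02 ** 2)) for j in jitters])

print(spikes.max(axis=1).mean())             # ~1.0 : mean of the individual peak amplitudes
print(spikes.mean(axis=0).max())             # <1.0 : peak amplitude of the averaged waveform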

Best wishes,

Liu Xinyan

sLORETA: just compute the sLORETA maps and average them.
dSPM: scale the values only after computing your final average (this is for display/reporting only).
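This works because the sLORETA maps are obtained by applying a fixed linear imaging kernel to the sensor data, so averaging and source estimation commute, which is the identity you quoted. A minimal sketch (random placeholder kernel and trials):

import numpy as np

rng = np.random.default_rng(4)
n_sources, n_channels, n_times, n_trials = 200, 50, 300, 20
kernel = rng.standard_normal((n_sources, n_channels))         # fixed linear inverse operator
trials = rng.standard_normal((n_trials, n_channels, n_times)) # sensor-level trials

avg_then_invert = kernel @ trials.mean(axis=0)
invert_then_avg = np.mean([kernel @ tr for tr in trials], axis=0)
print(np.allclose(avg_then_invert, invert_then_avg))          # True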

For sLORETA, can I first average the data segments in the time domain, and then use sLORETA for source estimation?

Average by folder in the sensor domain, estimate the sources, then average across folders in the source domain.
This is the solution that minimizes possible manipulation errors.

Read the group analysis and workflows tutorials in case of doubt.

Dear Francois,
Thank you for your timely reply, which is very helpful to me.
Best wishes,
Liu Xinyan