Hello BST Team,
I am a PhD student who started my research in neuroscience two months ago. I have begun learning about M/EEG source localisation, and BST has really helped me understand the forward and inverse modelling concepts through the practice datasets. I have gone through your tutorials up to Source Estimation. I have a few very basic questions; I hope you don't mind me asking about these basic details.
- I have read the sLORETA paper (Pascual-Marqui et al.) and understood how the standardization works: sLORETA computes the covariance matrix of the estimated sources and divides/standardizes each voxel by the corresponding diagonal element of that covariance matrix. What I cannot conceptualize is the role of the noise covariance matrix: it is only numCh x numCh, so how can every voxel be divided/standardized when #voxels > #diagonal elements of the noise covariance matrix? (I have written my current understanding out in the first sketch after this list.)
- The noise covariance matrix I intend to use is "No noise modelling (identity matrix)". The tutorial says that with this model the noise at the sensor level is assumed to be homoscedastic. How is the standardization done with this noise model in dSPM or in sLORETA? (This is also part of the first sketch below.)
- My assumption is that, when empty-room MEG recordings are given, they are source localised, the source time series are obtained, the source covariance matrix is computed from them, and that covariance is used to standardize the current density map just as I described in my first question. Am I correct? If not, can you explain how the empty-room noise recordings are used to standardize the current density map? (I have written this assumed pipeline out in the second sketch below.)
- I read the Sensor-Weighted Overlapping Spheres paper that you cite in your tutorials and understood how each sphere is fitted (the algorithm explains this clearly). I also learnt that the dipoles lie inside the spheres. My question: I am using 15,000 dipoles and, say, a 275-channel MEG system, so there will be 275 overlapping spheres. How are these 15,000 dipoles spread across the 275 spheres, and how are they projected back onto the subject's MRI? (My current guess of the bookkeeping is in the third sketch below.)
- Although I understood sLORETA, I am a bit confused by LORETA. MNE is poor at recovering deep sources, and LORETA is said to overcome this disadvantage by using a Laplacian, but I do not understand how the Laplacian achieves that. Can someone explain it to me? I tried looking at the mathematical equations but they were too overwhelming; the fourth sketch below is as far as I got.
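Sketch 1 (first two questions). This is how I currently picture the standardization, written as a small numpy sketch. The sizes, the random gain matrix and the regularization value are made up, and the formulas are only my reading of the paper and the tutorial, so please correct anything that is off:

```
import numpy as np

# Illustrative sizes only: 275 MEG channels, 15,000 fixed-orientation dipoles
n_ch, n_src = 275, 15000
rng = np.random.default_rng(0)

G = rng.standard_normal((n_ch, n_src))   # gain (forward) matrix
C_noise = np.eye(n_ch)                   # "No noise modelling (identity matrix)"
lam = 0.1                                # regularization parameter (arbitrary here)

# Minimum-norm inverse kernel K (n_src x n_ch):  K = G' (G G' + lam C)^-1
A = G @ G.T + lam * C_noise
K = np.linalg.solve(A, G).T              # A is symmetric, so this equals G' A^-1

b = rng.standard_normal(n_ch)            # one time sample of sensor data
j_mne = K @ b                            # current density map (MNE)

# My reading of sLORETA: each source is divided by the square root of the
# corresponding diagonal element of the (n_src x n_src) matrix K @ G, and not
# of the (n_ch x n_ch) noise covariance itself. Is that right?
var_slor = np.einsum('ij,ji->i', K, G)   # diag(K @ G) without forming K @ G
j_slor = j_mne / np.sqrt(var_slor)

# My reading of dSPM: the divisor is instead the noise variance projected to
# source space, diag(K @ C_noise @ K'), which with the identity noise
# covariance (homoscedastic sensors) is just the squared row norms of K.
var_dspm = np.sum((K @ C_noise) * K, axis=1)
j_dspm = j_mne / np.sqrt(var_dspm)
```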
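Sketch 2 (third question). This is the pipeline I imagined for the empty-room recordings, which is almost certainly naive and is exactly what I would like to have confirmed or corrected (all sizes and values are made up again):

```
import numpy as np

rng = np.random.default_rng(1)
n_ch, n_src, n_time = 275, 15000, 300            # illustrative sizes

G = rng.standard_normal((n_ch, n_src))           # gain matrix
lam = 0.1
K = np.linalg.solve(G @ G.T + lam * np.eye(n_ch), G).T   # MNE kernel (n_src x n_ch)

B_empty = rng.standard_normal((n_ch, n_time))    # empty-room MEG recordings
B_task  = rng.standard_normal((n_ch, n_time))    # subject recordings

# Step 1: source localise the empty-room data with the same kernel
J_empty = K @ B_empty                            # noise-only source time series
# Step 2: per-source variance, i.e. the diagonal of the source covariance
var_noise = J_empty.var(axis=1)
# Step 3: divide the subject's current density map by its square root
J_std = (K @ B_task) / np.sqrt(var_noise)[:, None]
```

Or is the empty-room data only used to build the numCh x numCh sensor-level noise covariance from my first question, which then enters through the kernel?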
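Sketch 3 (fourth question). My guess of the bookkeeping in the overlapping-spheres forward model, with the two geometry routines left as empty placeholders because they are exactly what the paper already describes. Please correct me if the overall structure is wrong:

```
import numpy as np

n_sensors, n_dipoles = 275, 15000

def fit_sphere_for_sensor(i_sensor):
    # placeholder for the per-sensor sphere fit (returns centre and radius)
    return np.zeros(3), 0.09

def dipole_field_at_sensor(i_dipole, i_sensor, center, radius):
    # placeholder for the analytic single-sphere field of one dipole at one sensor
    return 0.0

# My guess: one sphere is fitted per sensor, and every dipole contributes to
# every sensor through that sensor's own sphere, so the dipoles are never
# partitioned among the 275 spheres. The result is a (n_sensors x n_dipoles)
# gain matrix (x3 if orientations are free), with the dipole positions taken
# from the cortical surface extracted from the subject's MRI.
G = np.zeros((n_sensors, n_dipoles))
for i_sensor in range(n_sensors):
    center, radius = fit_sphere_for_sensor(i_sensor)
    G[i_sensor, :] = [dipole_field_at_sensor(j, i_sensor, center, radius)
                      for j in range(n_dipoles)]
```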
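Sketch 4 (last question). This is as far as I got with the equations: a tiny 1-D toy that only shows me where the Laplacian enters, as the penalty term replacing the plain ||j||^2 of MNE, but not why that should help with deep sources, which is the part I am missing. The forward matrix and the boundary handling of the Laplacian are made up:

```
import numpy as np

rng = np.random.default_rng(2)
n_ch, n_src = 10, 50                       # tiny toy sizes
G = rng.standard_normal((n_ch, n_src))     # made-up forward matrix
b = rng.standard_normal(n_ch)              # made-up data
lam = 1.0

# Plain MNE: minimize ||b - G j||^2 + lam * ||j||^2
j_mne = G.T @ np.linalg.solve(G @ G.T + lam * np.eye(n_ch), b)

# LORETA-like: minimize ||b - G j||^2 + lam * ||L j||^2, where L is a discrete
# (here 1-D) Laplacian, so the penalty is on the spatial roughness of j rather
# than on its overall amplitude.
L = -2 * np.eye(n_src) + np.eye(n_src, k=1) + np.eye(n_src, k=-1)
P = np.linalg.inv(L.T @ L)                 # plays the role of a smoothness prior
j_loreta = P @ G.T @ np.linalg.solve(G @ P @ G.T + lam * np.eye(n_ch), b)
```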
Once again, apologies for these basic questions; I just want to understand what I am doing at every step of the source localisation.
Many thanks in advance for taking the time to answer my questions.
Regards,
Pad.