I am working on some EEG data, and when computing the sources with sLORETA I obtain widespread high activity at the deepest sources, which I suspect is driven by noise. This effect is reduced when using the MNE approach, which is known to be biased toward superficial sources.
I noticed that this phenomenon can be attenuated by changing the regularization parameter. Comparing values in the range from 3 down to 0.5, smaller values reduce the (supposedly noisy) deep signal. This works down to a certain lower limit, below which the overall signal is canceled out.
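For concreteness, a sweep like the following is what I have in mind. This is a minimal sketch assuming MNE-Python, where the regularization is passed as `lambda2` (conventionally set to `1 / SNR**2`, so it is worth checking which convention the 3-to-0.5 values correspond to); the bundled sample dataset stands in for the actual recordings:

```python
# Minimal sketch, assuming MNE-Python; the bundled sample dataset
# is only a stand-in for the actual EEG recordings.
import mne
from mne.datasets import sample
from mne.minimum_norm import make_inverse_operator, apply_inverse

data_path = sample.data_path()
meg_dir = data_path / "MEG" / "sample"

evoked = mne.read_evokeds(meg_dir / "sample_audvis-ave.fif",
                          condition="Left Auditory")
evoked.pick("eeg")
# An average-reference projector is required for EEG inverse modeling
evoked.set_eeg_reference("average", projection=True)

forward = mne.read_forward_solution(
    meg_dir / "sample_audvis-meg-eeg-oct-6-fwd.fif")
noise_cov = mne.read_cov(meg_dir / "sample_audvis-cov.fif")

inv = make_inverse_operator(evoked.info, forward, noise_cov)

# Compare sLORETA against plain MNE across regularization strengths;
# the lambda2 values mirror the 3-to-0.5 range tried above.
for lam in (3.0, 1.0, 0.5):
    for method in ("sLORETA", "MNE"):
        stc = apply_inverse(evoked, inv, lambda2=lam, method=method)
        # Peak source amplitude as a crude summary of the solution
        print(f"{method:8s} lambda2={lam:.1f} peak={stc.data.max():.3g}")
```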
It also seems that reducing the regularization parameter makes the localization smoother, i.e. the activity becomes more spatially distributed.
Is this a sensible approach for selecting an appropriate value of the regularization parameter?