Hello Aurore, (and by way of an email question, Christian, Jean Michel, Victor, and Samuel),

The first step of dipole scanning is building the imaging kernel, as linked above by Francois. The key equation is found around line 1000 of bst_inverse_linear_2018.m:

```
for i = 1:NumDipoles(kk)
    % Rows of the kernel belonging to this dipole's components
    ndx = ((1-NumDipoleComponents(kk)):0) + i*NumDipoleComponents(kk);
    % Full dipole-moment estimate (regularized pseudoinverse), kept for reference:
    % Kernel(ndx,:) = A(i).Va*(Lambda*A(i).Sa)*inv(Lambda*A(i).Sa + I)*A(i).Ua';
    % Scanning kernel: project onto the dipole's signal subspace only
    Kernel(ndx,:) = A(i).Ua';
end
Kernel = Kernel * iW_noise; % final noise whitening
```

The commented line is shown for reference: it is the full estimate of the dipole moment that you would get by applying the regularized pseudoinverse of the single-dipole model to the data. What we calculate instead uses just the orthogonal matrix Ua, taken as an inner product with the whitened data. The result is effectively a z-score of the model against the data. We use this approach for both LCMV and dipole modeling (generalized least squares, GLS). Hence for LCMV we call this a pseudo-NAI index, since the true NAI is calculated a bit differently.

And since this is **NOT** strictly a "z-score", I used the more generic term "performance," but basically you can consider it a variation of z-scoring. It is calculated time slice by time slice, to be comparable with the other techniques.
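To make the subspace projection concrete, here is a minimal numpy sketch (Python rather than Brainstorm's MATLAB, with hypothetical variable names): `Ua` plays the role of `A(i).Ua` above, and projecting the whitened data onto it yields one "performance" value per time slice.

```python
import numpy as np

rng = np.random.default_rng(0)
n_chan, n_comp, n_time = 32, 3, 5

# Hypothetical whitened single-dipole gain matrix (channels x components)
Aw = rng.standard_normal((n_chan, n_comp))

# SVD of the whitened gain: Ua spans the dipole's signal subspace,
# analogous to A(i).Ua in the MATLAB snippet.
Ua, Sa, _ = np.linalg.svd(Aw, full_matrices=False)

# Whitened data (channels x time); here pure noise, for illustration only.
data_w = rng.standard_normal((n_chan, n_time))

# Scan kernel: inner product of Ua with the whitened data.
proj = Ua.T @ data_w                  # (components x time)

# Pseudo-z per time slice: norm of the projection. Under noise-only
# whitened data each component is ~N(0,1), so the squared norm follows
# a chi-square distribution with n_comp degrees of freedom.
z_like = np.linalg.norm(proj, axis=0)
print(z_like.shape)                   # one value per time slice
```

This is why values well above ~2 stand out: under the noise model alone, the projection stays near the noise floor.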

We can image this z-score just like any of the other images calculated similarly, such as dSPM and sLORETA.

In the second step, in the process box, we run "dipole scanning" to take the next step in modeling: finding the best point in this scan (we allow either LCMV or GLS). At this best point (the highest z-score), we finish fitting the dipole. The process is best shown in the phantom tutorial:

http://neuroimage.usc.edu/brainstorm/Tutorials/PhantomElekta#Dipole_source_estimation
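The two-step logic (scan, then fit at the winner) can be sketched in numpy as follows; again this is an illustrative Python sketch, not Brainstorm's code, and the grid and gain matrices are made up:

```python
import numpy as np

rng = np.random.default_rng(1)
n_chan, n_grid, n_comp = 32, 50, 3

# Hypothetical whitened gains for each grid point, plus one whitened
# data vector (a single time slice).
gains = rng.standard_normal((n_grid, n_chan, n_comp))
b_w = rng.standard_normal(n_chan)

def scan_value(Aw, b):
    """Subspace-projection scan metric ||Ua' b|| for one grid point."""
    Ua, _, _ = np.linalg.svd(Aw, full_matrices=False)
    return np.linalg.norm(Ua.T @ b)

# Step 1: scan every grid point and keep the best (highest pseudo-z).
scores = np.array([scan_value(A, b_w) for A in gains])
best = int(np.argmax(scores))

# Step 2: at the best point only, finish the fit, i.e. solve the
# least-squares problem for the full dipole moment.
moment, *_ = np.linalg.lstsq(gains[best], b_w, rcond=None)
print(best, moment.shape)
```

The design point is that the scan is cheap (one projection per grid point), and the full moment estimate is computed only once, at the winning location.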

If you click on an individual dipole, the "dipole info" panel comes up, showing the stats of the dipole, including goodness of fit, amplitude, orientation, intensity, and chi-square.

Note that we have not implemented "confidence volume," which is included only if you directly load results from the Neuromag software. To calculate the confidence volume, we need gradients of the source model, which is still on the list of good ideas to be implemented; with the range of head models offered, we don't yet have a simple path to readily calculating gradients for an arbitrary head model. Instead, we infer the confidence volume from the "blur" of the z-score image, as other methods presently do.

Note that while the stats for the phantom came out as expected, I have found human data sets for which the stats are puzzling, and I haven't been able to track down why. But the algorithm appears vetted by the phantom data, which were acquired under quite noisy conditions.

As to thresholding dipoles based on "goodness of fit": these "z-score" scans have already pre-whitened the data, so the scan measure is an analog of z-scoring, and therefore anything >> 2 is "significant." If you right-click on one of the source imaging kernels (including the dipole), you can activate "model evaluations" > "save whitened recordings" to see directly the implied whitened recordings used in the inverse. If the noise model is correct, then the signal-free regions of your data will be roughly +/- 2 (standard deviations).
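Here is a small numpy sketch of that sanity check (illustrative Python, with a made-up noise covariance): after whitening by the inverse matrix square root of the noise covariance, signal-free channels should have unit variance, so samples sit mostly within about +/- 2.

```python
import numpy as np

rng = np.random.default_rng(2)
n_chan, n_time = 16, 2000

# Hypothetical noise covariance and signal-free "recordings" drawn from it.
L = rng.standard_normal((n_chan, n_chan)) / np.sqrt(n_chan)
C = L @ L.T + 0.1 * np.eye(n_chan)      # symmetric positive-definite
noise = np.linalg.cholesky(C) @ rng.standard_normal((n_chan, n_time))

# Whitener iW = C^(-1/2), via eigendecomposition of the covariance.
evals, evecs = np.linalg.eigh(C)
iW = evecs @ np.diag(1.0 / np.sqrt(evals)) @ evecs.T

# Whitened recordings: unit variance, so mostly within +/- 2 std devs.
white = iW @ noise
print(np.std(white))                    # close to 1
print(np.mean(np.abs(white) < 2))       # close to 0.95
```

If the whitened baseline is much wider or narrower than +/- 2, the noise covariance estimate (not the inverse model) is the first thing to suspect.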

The chi-square and the reduced chi-square in the dipole info screen give you a quantifiable measure of the residual error of the model, calculated from these whitened residuals. In theory, the expected chi-square equals the number of degrees of freedom, so the reduced chi-square should be near 1; in practice, try to keep the reduced chi-square below about 2.
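The arithmetic behind those two numbers is simple; a hedged numpy sketch (Python, synthetic data, not Brainstorm's code):

```python
import numpy as np

rng = np.random.default_rng(3)
n_chan, n_comp = 32, 3

# Hypothetical whitened gain and one whitened data slice: a true dipole
# moment plus unit-variance whitened noise.
Aw = rng.standard_normal((n_chan, n_comp))
b_w = Aw @ np.array([1.0, -2.0, 0.5]) + rng.standard_normal(n_chan)

# Least-squares dipole fit and the whitened residual.
moment, *_ = np.linalg.lstsq(Aw, b_w, rcond=None)
resid = b_w - Aw @ moment

# Chi-square is the squared norm of the whitened residual; its expected
# value is the degrees of freedom (channels minus fitted parameters).
chi2 = float(resid @ resid)
dof = n_chan - n_comp
reduced_chi2 = chi2 / dof
print(reduced_chi2)     # hovers around 1 when the model and noise fit
```

A reduced chi-square well above ~2 means the residual is too large for the assumed noise level: either the single-dipole model or the noise covariance is off.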

- John
