Projecting unconstrained sources

This post attempts to summarize and begin explaining an issue recently observed by @hyerangjin during a group analysis in which she wanted to compare time-frequency-decomposed activity. She observed that when projecting sources to the template, then doing the TF decomposition and averaging within a scout, the activity looked quite different from the sensor-level averaged TF activity. She then tried a different pipeline in which the scouts were back-projected from the template to the subjects, and the TF computation and scout averaging were done in subject space. That second approach gave results much closer to the sensor level and to what was expected.

I've been inspecting the code and trying to simplify the steps to isolate the issue. I now believe the main issue is that the sources were unconstrained. When projecting to the template, the process only asks whether the source space is a cortex surface or an MRI volume. With unconstrained sources, it seems to compute the norm instead of keeping the 3 orientations or using the normal orientation. The TF computation of the norm is then, of course, not what was expected.

When doing TF with unconstrained sources without projecting to the template, it looks like it computes the TF for each orientation and then averages them, though maybe that is because this is what we selected as the scout function processing option. It's not obvious that this function would be applied to the 3 orientations as well as to the scout source locations.
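To make the order of operations concrete, here is a toy sketch in NumPy (synthetic data; plain FFT magnitudes standing in for the TF decomposition): taking the norm of the 3 orientations first rectifies the signal, moving its energy to DC and to twice the original frequency, whereas computing the spectrum per orientation and then averaging keeps the peak at the original frequency.

```python
import numpy as np

fs, f0 = 250, 10.0
t = np.arange(fs) / fs
# Hypothetical unconstrained source: one 10 Hz oscillation split over 3 orientations
x = np.vstack([a * np.sin(2 * np.pi * f0 * t) for a in (1.0, 0.5, 0.2)])

# Pipeline A: norm first (what the projection does), then spectrum
norm_first = np.abs(np.fft.rfft(np.linalg.norm(x, axis=0)))

# Pipeline B: spectrum per orientation, then average (what the TF process does)
avg_after = np.abs(np.fft.rfft(x, axis=1)).mean(axis=0)

freqs = np.fft.rfftfreq(fs, 1 / fs)
# The norm rectifies the signal: away from DC, its largest peak is at 2*f0,
# while the per-orientation average keeps its peak at f0.
print(freqs[np.argmax(norm_first[1:]) + 1])  # 20.0
print(freqs[np.argmax(avg_after)])           # 10.0
```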

This figure illustrates the description above. The first row shows a single-vertex scout projected to the template (code modified to do single-vertex nearest-neighbor interpolation), with the time series (norm of x, y, z) on the left and the TF on the right. The second row shows the TF for the same scout before projecting, i.e. in subject space. The third row shows the same scout's time series and TF in subject space, but after computing the norm. The fourth row is the same as the third without computing the norm, so there are 3 time series and 3 TF plots (the 2nd is shown; it is the largest and looks similar to the one in the second row).

A few other things I noted while investigating this:

  1. Projecting scouts "erodes" them (scouts that share an edge become separated by a gap):

  2. The inverse distance weighted interpolation (Shepard's) is not implemented correctly.
    The weights are (1/d_i - 1/d_max)^2, where d_i is the distance to the i-th neighbor and d_max the distance to the farthest one. Because of the squaring, subtracting 1/d_max does not give a formula equivalent to the usual inverse-distance weighting. I'm not convinced my tests were correct, but using this interpolation or strict nearest-neighbor seemed not to change anything at all.

  3. In general, using this interpolation method with 8 nearest neighbors means we're spatially smoothing a little when projecting sources. I would personally prefer the smoothing to be done separately (I believe there is a process for that) and the interpolation to be as faithful as possible. I think there are good interpolation methods for fields on triangulations; they may already be used in Bst in surface manipulation functions? On the other hand, it might be better still if, from the very start, we could project the template cortex points to the subject cortex and compute the sources directly at those points. Then no source interpolation would be needed at all. Of course, it may depend on what we're doing and whether these warped points remain relatively evenly spaced in subject space. Is this currently possible?
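Point 2 can be checked numerically. A small sketch (toy distances; the weight formula as described above) comparing those weights to pure inverse-distance-squared Shepard weights:

```python
import numpy as np

d = np.array([0.5, 1.0, 2.0, 4.0])  # toy distances to the 4 nearest neighbors

# The formula as described above: subtract 1/d_max *inside* the square
w_bst = (1.0 / d - 1.0 / d.max()) ** 2
w_bst /= w_bst.sum()

# Pure Shepard with power 2: weights proportional to 1/d^2
w_shepard = 1.0 / d ** 2
w_shepard /= w_shepard.sum()

print(np.round(w_bst, 3))      # [0.831 0.153 0.017 0.   ]  farthest neighbor gets exactly 0
print(np.round(w_shepard, 3))  # [0.753 0.188 0.047 0.012]  farthest keeps a small weight
```

The subtraction forces the weight of the farthest neighbor to zero (a smooth cutoff), but because it happens inside the square, the remaining weights are no longer a rescaled version of the 1/d^2 weights.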


Working with unconstrained sources is a pain, and a good share of our pipeline doesn't work well with them. I've been asking for a review of these methods for many years, but it will probably never happen.

What is done with unconstrained sources by the TF processes is documented in the introduction tutorials:
https://neuroimage.usc.edu/brainstorm/Tutorials/TimeFrequency#Unconstrained_sources

There are no questions in the first part of your message:
Is there anything that is not behaving the way it is described?
Or is there any missing documentation?

Projecting scouts "erodes" them (scouts that share an edge become separated by a gap):

Scouts are not really designed to overlap. In all the cortical parcellations we manipulate, one vertex belongs to only one ROI. When projecting multiple scouts simultaneously, the function bst_project_scouts.m assigns each vertex in the destination surface only the label with the highest probability.
If you want to reproduce your scout overlap (i.e. one vertex belonging to multiple scouts) in the destination surface, you can project the scouts one by one (multiple independent calls to bst_project_scouts.m).
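As a rough illustration of this winner-takes-all assignment (hypothetical weight matrix and labels, not the actual bst_project_scouts.m code): each destination vertex sums its interpolation weights per scout and keeps only the scout with the largest total, so shared boundary vertices end up in a single scout.

```python
import numpy as np

# Hypothetical interpolation weights W: rows = destination vertices, cols = source vertices
W = np.array([
    [0.7, 0.3, 0.0, 0.0],
    [0.1, 0.5, 0.4, 0.0],
    [0.0, 0.0, 0.2, 0.8],
])
# Scout label of each source vertex; two scouts, 1 and 2
src_label = np.array([1, 1, 2, 2])

# Sum the interpolation weights per scout for each destination vertex,
# then keep only the scout with the highest total
n_scouts = src_label.max()
votes = np.zeros((W.shape[0], n_scouts))
for s in range(1, n_scouts + 1):
    votes[:, s - 1] = W[:, src_label == s].sum(axis=1)
dest_label = np.argmax(votes, axis=1) + 1
print(dest_label)  # [1 1 2] -- each destination vertex belongs to exactly one scout
```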

The inverse distance weighted interpolation (Shepard's) is not implemented correctly.
The weights are (1/d_i - 1/d_max)^2, where d_i is the distance to the i-th neighbor and d_max the distance to the farthest one. Because of the squaring, subtracting 1/d_max does not give a formula equivalent to the usual inverse-distance weighting. I'm not convinced my tests were correct, but using this interpolation or strict nearest-neighbor seemed not to change anything at all.

Ooooops... It's not a pure Shepard's algorithm: I wanted to limit the number of nearest neighbors but still have all the weights sum to 1. I'm not sure how I ended up doing this; I can't really understand the code anymore... I suggest we leave this alone for now, as it is doing its job of "some form of exponential distance-based weighting" correctly. Modifying it would slightly alter a lot of results, which is not very good for the reproducibility of older results.
Dust is much better off under the rug :slight_smile:

In general, using this interpolation method with 8 nearest neighbors means we're spatially smoothing a little when projecting sources. I would personally prefer if the smoothing was done separately (I believe there is a process for that) and the interpolation be as faithful as possible.

When projecting between two surfaces, the nearest neighbor has a weight > 0.9 for most vertices, and the remaining weights are insignificant. Look at the W matrix in bst_shepards.m. In practice, there is very little smoothing from this step.
What do you mean by "the interpolation be as faithful as possible"?

I think there are good interpolation methods for fields on triangulations. They may already be used in Bst in surface manipulation functions?

What are you referring to?

On the other hand, it might be better yet if from the very start we could project the template cortex points to the subject cortex and compute the sources directly at those.

This is what is done with the volume sources:
https://neuroimage.usc.edu/brainstorm/Tutorials/CoregisterSubjects#Volume_source_models

For surface-based models, the idea was to always do the reconstruction from the actual surfaces of the subject. Projecting 3D positions from the template to the subject space is not possible with the FreeSurfer sphere-based method. It is possible to use the SPM canonical surfaces (MNI surfaces warped to the subject space using a linear MNI transformation), but this is a much cheaper solution...
But maybe I'm missing your point...

Wow. Thank you so much Marc for this thorough analysis and report!


Thanks François,

I think this was the source of the unexpected results. I'm not sure why the 3 orientations would be converted to a norm in the projection step. At the very least this should be made more obvious, perhaps with a warning.

Agreed. If we change it, it should probably become another interpolation option.

That wasn't very clear. I mainly meant an interpolation method that passes through the initial points. I know the code handles the initial points explicitly, but since the rest is slightly smoothed, it does so discontinuously.

What I mean is to resample the subject surface at the points of that surface that will correspond to the points of the template surface once normalized. So we don't exactly need 3D projection; it's similar to what happens when projecting scouts and when resampling a surface. I'm not familiar with how FreeSurfer does its normalization. I know it matches on spheres, but does it give a way to warp and unwarp arbitrary points on the surfaces? If not, the easiest way is to project each template sphere vertex onto the subject sphere triangle it belongs to, and then interpolate that point's position on the subject cortex tessellation. I dealt with this problem (how to add points to a tessellation that represents a curved surface) a few months ago, but thought you'd already have this in Bst since you have a bunch of functions for remeshing surfaces. That's what I was referring to by "interpolation methods on triangulations". If this is still not clear, maybe we can discuss it at the next meeting. I think this would be a good thing to have.
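The flat-triangle version of that last step could be sketched as follows (toy coordinates; a method respecting surface curvature would refine the final position): find the barycentric coordinates of the template sphere vertex within its subject sphere triangle, then apply the same weights to the matching subject cortex vertices.

```python
import numpy as np

def barycentric(p, a, b, c):
    """Barycentric coordinates of p in triangle (a, b, c), after projecting
    p onto the triangle's plane."""
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    return np.array([1 - v - w, v, w])

# Toy subject sphere triangle and the corresponding subject cortex vertices
tri_sphere = np.eye(3)                                         # 3 vertices on the unit sphere
tri_cortex = np.array([[10., 0, 0], [0, 12., 0], [0, 0, 9.]])  # matching cortex positions

# A template sphere vertex falling inside that triangle
p = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)

bary = barycentric(p, *tri_sphere)
cortex_point = bary @ tri_cortex  # interpolated position on the subject cortex
print(np.round(bary, 3), np.round(cortex_point, 3))
```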

Cheers,
Marc

Because of the interpolation. We have no way to project full 3D information with the FreeSurfer registration system; we can only project one value. We need to flatten the unconstrained maps before projection. This is documented in the tutorials:
https://neuroimage.usc.edu/brainstorm/Tutorials/Workflows#Unconstrained_cortical_sources

At the very least this should be made more obvious, perhaps with a warning.

Like this?
Project unconstrained sources: Generate warning before flattening · brainstorm-tools/brainstorm3@805d200 · GitHub

So we don't exactly need 3D projection,

This is not exactly a 3D projection, it is a resampling on the 2D parametric space of the FreeSurfer sphere.

I'm not familiar with how Freesurfer does its normalization

https://neuroimage.usc.edu/brainstorm/Tutorials/CoregisterSubjects

I know it matches on spheres, but does it give a way to warp and unwarp arbitrary points on the surfaces?

No it does not.

If not, the easiest way is to project each template sphere vertex onto the subject sphere triangle it belongs to, and then interpolate that point's position on the subject cortex tessellation. I dealt with that (how to add points to a tessellation that represents a curved surface) a few months ago, but thought you'd have this in Bst already since you have a bunch of functions for remeshing surfaces.

This is not in Brainstorm. Topological reconstruction of surfaces is a nightmare, and I don't see how you could preserve normals as the template and subject surfaces do not match most of the time (and we need the subject-space normals for constrained sources).
I'm not sure how replacing the almost insignificant smoothing done in the current solution with an invasive topological rework of the cortical surfaces would improve the overall process.
But if you're convinced of the usefulness of such developments, you're welcome to work on it and/or discuss it at the next meeting.

Thanks François! Yes that warning is good.

Not exactly. You could project the 3 components independently, but it's true there would be some arbitrariness to this. It would make sense to project the normal component onto the normal direction in the template, but for the 2 tangential components we'd have to come up with some additional rule. Still, for some operations, like computing the TF within subject as in this case, this would be OK. I'm not sure we should do this, but it's possible.
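A minimal sketch of the decomposition this would require (toy vectors; the rule for re-expressing the tangential part in the template frame is left open, as noted above):

```python
import numpy as np

def split_normal_tangential(src, n):
    """Split a 3D source vector into its component along the surface normal n
    and the residual tangential part."""
    n = n / np.linalg.norm(n)
    normal_part = (src @ n) * n
    return normal_part, src - normal_part

# Toy unconstrained source vector and subject-space surface normal
src = np.array([2.0, 1.0, 0.5])
n_subj = np.array([0.0, 0.0, 1.0])

normal_part, tang_part = split_normal_tangential(src, n_subj)
# The normal component could be carried over to the template normal direction;
# the tangential part still needs a convention for its 2D frame in the template.
print(normal_part, tang_part)  # [0. 0. 0.5] [2. 1. 0.]
```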

From the coregistration tutorial

which is then deformed to match the curvature map of the equivalent sphere in the FSAverage subject. The result of this registration is saved in the FreeSurfer output folder in surf/lh.sphere.reg and surf/rh.sphere.reg .

So in Bst we only have access to the deformed subject sphere vertices on the template sphere, and not the deformation itself. That doesn't fully specify how to go from one cortex to the other, since the vertices don't match. I'll have to read more about FreeSurfer; I'd be surprised if there weren't more functionality that would allow us to warp and unwarp points.

We don't need reconstruction; we already have the triangles on the template. I have code to add vertices to a surface. I'll have to look at it, but I think it interpolates curvature between vertices, or something like that. Once we have the surface, Bst already has code to compute the normals, as you know.

The reason I would prefer this is not related to the interpolation smoothing, but rather that we could avoid interpolating the source activity altogether. Yes, we would have to interpolate points on the surface instead, but I don't think that adds more error than what we already have from surface generation and manipulation. I agree I don't have any evidence that it would make a big difference, but I'll add it to my list.