Subject coordinate system after "refine with head points"

I was a little confused about the subject coordinate system after using "refine with head points". See the attached screenshot. It appears the SCS axes stay aligned with the anatomy (MRI) whereas I expected them to follow the head points, such that for example the X axis would still go through the nasion point from the digitization, and not the nasion picked on the MRI. I thought the advantage of digitizing the anatomical locations was that it would be more precise than manually picking them on the MRI. But it seems Brainstorm is designed with the opposite viewpoint, keeping the MRI fiducials and ignoring the digitized anatomical locations after refining the co-registration. I don't have a specific question, but I'm curious to hear your thoughts or comments about this choice.
[Screenshot: MEG 3D view, sub-0229]

The spatial reference on which everything is aligned is the SCS coordinate system, based on the MRI anatomical landmarks. These points are defined once at the level of the anatomy and should not change across multiple experiments.
When importing MEG/EEG/NIRS sensor positions, these are converted to the coordinate system defined by these anatomical landmarks: a rigid transformation is applied to the positions of all the points residing in sensor space (electrodes, MEG coils, optodes, head localization coils, other head points). The MRI anatomical landmarks always remain unchanged.
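The SCS convention used here (origin midway between LPA and RPA, X axis through NAS, Y toward LPA, Z up) can be sketched in plain Python as follows. This is only an illustration of the geometry; the function names are made up and this is not Brainstorm's actual MATLAB code:

```python
import math

def sub(a, b):
    return [a[i] - b[i] for i in range(3)]

def dot(a, b):
    return sum(a[i] * b[i] for i in range(3))

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def unit(v):
    n = math.sqrt(dot(v, v))
    return [x / n for x in v]

def scs_transform(nas, lpa, rpa):
    """Rotation R (as rows) and translation T such that p_scs = R*p + T."""
    origin = [(lpa[i] + rpa[i]) / 2 for i in range(3)]  # midway between LPA and RPA
    x = unit(sub(nas, origin))              # X: from origin through the nasion
    z = unit(cross(x, sub(lpa, rpa)))       # Z: up, normal to the NAS/LPA/RPA plane
    y = cross(z, x)                         # Y: toward LPA (left)
    R = [x, y, z]
    T = [-dot(r, origin) for r in R]
    return R, T

def apply_scs(R, T, p):
    """Express a point p (native coordinates) in SCS."""
    return [dot(R[i], p) + T[i] for i in range(3)]
```

The same construction is run twice, once on the landmarks clicked in the MRI viewer and once on the digitized landmarks, which is why two (slightly different) SCS frames exist in the first place.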

As suggested, I'm copying another related email thread here.

Hi Mark,

The function channel_detect_type has an isAlign option, but it uses the digitized anatomical points from the channel file to do the alignment, not those from the MRI. This seems to conflict with what is done elsewhere. (As I had observed a few years back.)

And my reply to this comment would be exactly the same as three years ago.

When warping the default anatomy to the head points, from what I can tell the coordinate system is not changed. bst_warp has a section for warping fiducials, but the 3 points that define the SCS are not warped (that code is commented out). So the NAS/LPA/RPA points no longer match the now-warped anatomy. I agree that for a blurry template that's not very important, and it saves us having to update everything. But it does make the error greater if some transformations are done with digitized points, as above, and others with MRI points.

I'm not sure I understand this point: After warping, you're not supposed to adjust the MRI/sensors registration anymore.
If you think there is anything wrong in the current warping workflow, could you please assemble a clear example I can reproduce on my end?
Please post your messages in the old thread you mentioned before, so that these conversations are archived in a structured way.

Finally, for head tracking, it uses both warping to get a scalp surface, and channel_detect_type to align. So I think I have to fix that, but I wanted to double check with you first if channel_detect_type should also be changed.

The head tracking code may have issues related to a wrong sequence of registration calls. You may need to debug this, indeed.
The function channel_detect_type is used intensively by all Brainstorm users. Changes to it would be complicated to test, and may alter hundreds of existing processing pipelines. I would prefer avoiding any modification to it unless there is something really wrong with it.

The head tracking is used at most by a handful of users, including you.
If anything needs to be modified for fixing the head tracking, I'd suggest it is done on the head tracking side only, even if it's a bit hacky.

Does it make sense?
Francois

Hi Fran├žois,

Please have a closer look at my email, especially the first point. I think there is something wrong with channel_detect_type using digitized anatomical points. Your previous answer to this forum thread basically just confirmed that Brainstorm uses MRI anatomical points to define the SCS coordinates. So the MRI ones should be used everywhere and the digitized ones not used at all since they are different and basically "arbitrary" (but following a convention). Here I'm pointing out an inconsistency where the sensors and anatomy would be using coordinate systems based on different points.

The point about warping is that since the MRI NAS etc are not warped, they will definitely not match the digitized anatomical points. So if the latter are used to align sensor coordinates (as in channel_detect_type), then the misalignment is greater. This would be fixed if everything uses the MRI points.

My point about head tracking is secondary, I just want to resolve this first and then I can adjust it accordingly.

Thanks!
Marc

Of course, they have to.

  1. In the MRI, we mark manually the points on the reference MRI volume using the MRI viewer (or using some automatic processing).
  2. In the sensor space, we get the points from a 3D digitizer or from the MEG (localization of the head tracking coils), with the specific sub-case where we do both and correct the MEG sensor positions based on the detected coils, so as to use the digitized anatomical landmarks.
  3. The two things are done independently, and the combined result of the two procedures is our sensors/MRI coregistration. Using the MRI landmarks in the sensor space (e.g. in channel_detect_type) is not possible, as we initially don't know how to transfer MRI coordinates into the same space as the sensors.
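The circularity in point 3 can be made concrete with homogeneous transforms. This is a sketch with made-up names, not Brainstorm's internal representation: expressing an MRI landmark in raw sensor space already requires the digitizer-side SCS transform, which is the coregistration itself.

```python
def matmul(A, B):
    """Multiply two 4x4 matrices stored as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def inv_rigid(M):
    """Invert a rigid 4x4 transform [R | t]: the inverse is [R^T | -R^T t]."""
    R = [[M[j][i] for j in range(3)] for i in range(3)]               # R transposed
    t = [-sum(R[i][j] * M[j][3] for j in range(3)) for i in range(3)]
    return [R[0] + [t[0]], R[1] + [t[1]], R[2] + [t[2]], [0.0, 0.0, 0.0, 1.0]]

def apply_h(M, p):
    """Apply a 4x4 rigid transform to a 3D point."""
    return [sum(M[i][j] * p[j] for j in range(3)) + M[i][3] for i in range(3)]

# Hypothetical transforms (names are illustrative only):
#   T_mri2scs : MRI world coordinates -> SCS, from the MRI-clicked landmarks
#   T_dig2scs : raw digitizer/sensor coordinates -> SCS, from the digitized landmarks
# The sensors -> MRI coregistration is the composition of the two:
#   T_sensor2mri = matmul(inv_rigid(T_mri2scs), T_dig2scs)
# and mapping an MRI landmark into raw sensor space,
#   p_sensor = apply_h(matmul(inv_rigid(T_dig2scs), T_mri2scs), p_mri)
# already requires T_dig2scs, i.e. exactly the coregistration we are trying to build.
```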

I'm really sorry if I'm still missing your point. It looks like there is something obviously wrong that you're telling me to look at, but I can't see it; I'm probably not looking in the right place or from the right angle. Maybe having the full workflow laid out would help me understand.

Can you please assemble a full example illustrating your point?

  • starting from the import of the MRI and the import of the first MEG run of the tutorial dataset,
  • following the steps of the introduction tutorial,
  • illustrating step-by-step what happens with the various coordinate systems (R and T matrices) and fiducials coordinates (in the MRI and in the channel file).
  • when the debugger hits channel_detect_type.m: explain why the code is wrong, where exactly, and what you would replace it with.

The point about warping is that since the MRI NAS etc are not warped, they will definitely not match the digitized anatomical points. So if the latter are used to align sensor coordinates (as in channel_detect_type), then the misalignment is greater.

Is your issue specific about importing new MEG data AFTER warping the anatomy?
This is indeed not the proposed workflow, which is to 1) import all the MEG data, 2) warp the anatomy, 3) never modify the registration of the sensors again.

Could you please write a full sequence of operations that illustrates this issue, based again on the example dataset (maybe importing the second MEG run after warping the anatomy with the first one), so that we can clearly identify at which stage the issue is? And what action we need to take?
I guess it could take two directions: when importing data in a subject with a warped anatomy, we could either call channel_detect_type.m in a different way (or completely skip it and do something else), or give a warning to the user saying that all the data should be imported BEFORE warping the anatomy.

This would be fixed if everything uses the MRI points.

How do you use the MRI points to align the sensors?
That would result in computing the SCS transformation (R+T) based on the results of the previously computed SCS transformation, without which we can't transfer the MRI fiducials into the space of the sensors...

Thanks for your patience.

Not at all, than you Fran├žois for helping me understand.

I have thought about it more in detail like you suggested and will try a few examples later. I think you're right that the usual workflows are ok.

  1. MEG sensors are aligned with head points through the 3 digitized head coils.
  2. These are all (temporarily) converted to a coordinate system based on the digitized anatomical landmarks.
  3. Anatomy files (MRI, surfaces, etc) are converted to a coordinate system based on anatomical landmarks placed on the MRI.

At this point, there is a mismatch, based on how well the 2 sets of landmarks (digitized vs MRI) match. (This is one thing that was wrongly concerning me, because I had in mind that the "refine" or warping process had already happened. In practice, we wouldn't have to go to the digitized anatomical coordinates if we're going to use the "refine" process.)

  4. Then, the head points are aligned with the MRI anatomy with the "refine" process. This converts sensors and head points to the MRI-based coordinate system and everything is ok.

or

  4. Warping a template. It was not as clear how this works, but from the tutorial, I think the logic is that we first do a manual registration where we match the 2 coordinate systems at step 3. Then warping displaces as well as deforms the MRI to match the head points, so that everything then really ends up in the digitized anatomical fiducials coordinate system. So while both sets of fiducials won't be exactly the same, they should both define the same coordinate system axes.

If this is all correct, I can now return to the head tracking code with a better understanding.

Thanks!
Marc

Then, the head points are aligned with the MRI anatomy with the "refine" process. This converts sensors and head points to the MRI-based coordinate system and everything is ok.

I'm not sure this is the right way to describe what is happening. This "refine registration" algorithm slightly adjusts the registration based on the two sets of NAS/LPA/RPA points (one from the MRI, one digitized or from the MEG), and it does so using the head surface computed from the MRI + the digitized head points.
But you can't really say that the sensors are in a "coordinate system based on the digitized anatomical landmarks" before and in "a MRI-based coordinate system" after. Both before and after, the alignment is the result of matching a set of points coming from the MRI and a set of points coming from the digitizer or the MEG.
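As a toy illustration of that surface-matching idea: the actual algorithm fits a full rigid transform (ICP-style) between the digitized head points and the scalp surface, whereas the sketch below is translation-only and all names are made up. It only shows how the registration can be nudged until the head points sit on the scalp, with the correction living entirely on the sensor side:

```python
def dist2(a, b):
    return sum((a[i] - b[i]) ** 2 for i in range(3))

def nearest(p, surface):
    """Closest scalp vertex to point p (brute force)."""
    return min(surface, key=lambda v: dist2(p, v))

def refine_translation(head_points, scalp, n_iter=10):
    """Translation-only stand-in for the ICP-style 'refine' step:
    iteratively shift the digitized head points toward their nearest
    scalp vertices. (The real algorithm fits a full rigid transform.)"""
    t = [0.0, 0.0, 0.0]
    for _ in range(n_iter):
        moved = [[p[i] + t[i] for i in range(3)] for p in head_points]
        offsets = [[nearest(p, scalp)[i] - p[i] for i in range(3)] for p in moved]
        mean = [sum(o[i] for o in offsets) / len(offsets) for i in range(3)]
        t = [t[i] + mean[i] for i in range(3)]
    return t  # correction applied to the sensors/head points, MRI untouched
```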

Then warping displaces as well as deforms the MRI to match the head points, so that everything then really ends up in the digitized anatomical fiducials coordinate system.

This sounds ok.

So while both sets of fiducials won't be exactly the same, they should both define the same coordinate system axes.

But maybe there is something wrong with importing more CTF MEG recordings with a .pos file and then warping?
Please let me know if you think there is anything that should be studied more in depth here.

Thanks Francois, I think we're in agreement conceptually, if not on how to describe it.

Before "refine", we have two independent sets of data (MRI and MEG/digitized) with coordinates each based on their own set of fiducial points. We're "matching" them simply by assuming that the digitized and MRI landmarks are the same. (This is different than if we did the same kind of point alignment with the 3 landmarks as with the head points for example.) So it is accurate at that point to say the sensor positions are in a coordinate system defined by the digitized landmarks. Their coordinate values don't depend at all on the MRI side.

Once we refine (ignoring how the alignment is done), they're converted to the "MRI-fiducial-based coordinate system". My point was that the "refine" process changes the coordinates of the MEG/digitized points, and not the MRI coordinates.

No, I think it's all good. Again I think if there's anything odd, it's in the real-time code, which I'll get back to shortly.