Forward projection matrix for deep cerebral sources?

I am trying to obtain the forward matrix for a dataset that uses deep cerebral sources (specifically the amygdala and hippocampus). I exported the head model and used the bst_gain_orient command in Matlab, as instructed in the general head model tutorial. However, the number of vertices that comes out of this step does not seem to match the number of vertices in my deep source model.

It doesn't match any combination of the cortex vertices, amygdala vertices, hippocampus vertices, and amygdala volume scouts. Is there some additional step that needs to be applied when creating the forward matrix for deep cerebral sources? Or is there some way to know what vertices it is trying to reference?

Thank you for your time and assistance.

The number of columns in the Gain matrix corresponds to the number of dipoles used in your head model.
It should be 3x the number of points in the GridLoc/GridOrient fields of the same file.
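
For reference, here is a quick way to check this in Matlab, assuming the exported head model structure is available in the workspace as a variable named HeadModel (the variable name is just an example):

```matlab
% Quick dimension check (sketch): the unconstrained gain matrix should have
% 3 columns per grid point, and one column per grid point after applying
% the orientation constraints.
nGrid = size(HeadModel.GridLoc, 1);                 % number of dipole locations
fprintf('Grid points:              %d\n', nGrid);
fprintf('Unconstrained Gain:       %d x %d (expect %d columns)\n', ...
    size(HeadModel.Gain, 1), size(HeadModel.Gain, 2), 3 * nGrid);

GainConstr = bst_gain_orient(HeadModel.Gain, HeadModel.GridOrient);
fprintf('Orientation-constrained:  %d x %d (expect %d columns)\n', ...
    size(GainConstr, 1), size(GainConstr, 2), nGrid);
```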

But it does not necessarily match any number of vertices in any anatomical file… It depends on the constraints applied to each region. If some regions are considered as volumes and others as surfaces, you won't find any correspondence with the vertices from the surface file you used to define your source space.

In the source files, the correspondence between the ImageGridAmp field and the Gain matrix can be reconstructed using the GridAtlas field, or with the function bst_convert_indices.m (check the code for help).
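
As an example, you can loop over the entries of GridAtlas to see how the grid rows are distributed across regions; the field names in this sketch (Scouts, Label, Region, GridRows) are the ones I would expect, but double-check them against your own file:

```matlab
% Sketch: list the regions defined in the GridAtlas of a mixed head model.
% (Assumes a Scouts array with Label, Region and GridRows fields; verify
% these names in your own file before relying on them.)
sAtlas = HeadModel.GridAtlas;
for i = 1:length(sAtlas.Scouts)
    fprintf('%-25s  region=%-4s  grid rows=%d\n', ...
        sAtlas.Scouts(i).Label, sAtlas.Scouts(i).Region, length(sAtlas.Scouts(i).GridRows));
end
```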

OK, that helps a lot. Looking through the GridAtlas and bst_convert_indices, it seems the disparity arises from the amygdala, which is a volume scout. From what I can tell, the vertices on the anatomy are reduced down to the volumetric scout grid, which makes sense since the amygdala is being treated as a volumetric source. So that addresses my main question.

I am trying to run simulations of networks which include deep brain structures. However, when I look at the Gain matrix using “Gain_constrained = bst_gain_orient(HeadModel.Gain, HeadModel.GridOrient);”, the values associated with those volumetric scouts are all 0, which leads me to believe that I want to use HeadModel.Gain (which, if I understand correctly, is the unconstrained gain matrix?) to do this. To convert a simulated signal from the surface-constrained orientation to the unconstrained orientation, do I simply need to multiply the signal at each vertex by its associated HeadModel.GridOrient weights? In other words, is the signal at each surface-constrained vertex multiplied by the three components of its orientation vector? Or is there a step in between I would need to apply?
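
To make it concrete, this is roughly what I am planning to do (Signal is just a placeholder name for my simulated surface-constrained time series, [nVertices x nTime]):

```matlab
% Sketch of the conversion I have in mind: spread each constrained time
% series onto the three x/y/z dipoles at the same grid point, weighted by
% the normal orientation, then multiply by the unconstrained gain matrix.
nVertices = size(HeadModel.GridOrient, 1);
nTime     = size(Signal, 2);                      % Signal: [nVertices x nTime], simulated
SignalXYZ = zeros(3 * nVertices, nTime);
for i = 1:nVertices
    SignalXYZ(3*i-2:3*i, :) = HeadModel.GridOrient(i, :)' * Signal(i, :);
end
Data = HeadModel.Gain * SignalXYZ;                % simulated sensor recordings
```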

I just want to verify I understand what is happening correctly.

Thank you for your assistance.

I'm not sure I follow what you are doing.
Yes, you could decompose the activity of one "constrained" dipole (normal to the cortex) onto the three "unconstrained" dipoles at the same point in space (x,y,z). But this is not equivalent to computing the minimum-norm solution with all the "unconstrained" dipoles.
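
For example, the orientation-constrained leadfield of one dipole is just its unconstrained leadfield projected onto the normal orientation, something like this sketch (assuming the head model is exported to the workspace as HeadModel):

```matlab
% Sketch: rebuild the constrained leadfield of dipole #i from the
% unconstrained gain matrix and the normal orientation at that point.
i = 100;                                           % any dipole index (example)
GainConstr = bst_gain_orient(HeadModel.Gain, HeadModel.GridOrient);
gRebuilt   = HeadModel.Gain(:, 3*i-2:3*i) * HeadModel.GridOrient(i, :)';
max(abs(GainConstr(:, i) - gRebuilt))              % should be (numerically) zero
```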

Maybe you can find some inspiration in the process designed to simulate surface recordings from scout time series:

Francois