How to import other brain atlases to use as scouts

After you import the atlas, rename it to “Yeo” for each hemisphere.
Only atlases with the same names will be merged together.


OK, that makes sense.
I discovered during this process that the “Import Freesurfer folder” menu does a little cleanup on the pial surfaces, resulting in a slightly reduced number of vertices. This was throwing me off when I initially tried to import the Yeo atlases, since they no longer matched. Thus I believe that if I want to have the two built-in FreeSurfer atlases as well as the local Yeo atlas for my subjects, I need to go back and manually import the MRIs and pial surfaces again. Does this sound right?

Thanks,
-Jeff

Yes, if you want to add custom atlases, you cannot use the menu “Import Freesurfer folder”; you have to import everything manually.
How do you get the Yeo atlas for the individual subjects?

A research assistant in our lab did it about 2 years ago using FreeSurfer tools. I think you know the process: align each individual to the generic Yeo atlas on the sphere (via the individual cortex inflated to a sphere), then map the labels back to the folded 3D cortex, then to the voxels if you need that as well. I hope to learn how to do this soon, as I will have new subjects to run it on.

Another question comes to mind. Now that I have manually imported the Yeo atlas as well as Destrieux, Desikan-Killiany, Brodmann, and two others we have here, I want to subparcellate it to use as scouts. But I would have to do that for each individual, and the subdivisions will not necessarily match across subjects. What I probably need to do is subparcellate in the generic Yeo (or other atlas) space. So I presume I have to do that first in FreeSurfer, using the FS parcellation tools, then create an individual subparcellation “labeling” before importing each individual into BST.

Indeed, the subdivisions of the atlas could be different for each subject if you do them in Brainstorm.
Your approach sounds correct.
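
If it helps, I believe FreeSurfer ships a tool, mris_divide_parcellation, that can subdivide the labels of an annotation file before you import it into Brainstorm. A minimal sketch, called from MATLAB; the subject ID, file names, and area threshold below are hypothetical, so check the argument order against the help of your FreeSurfer version:

```matlab
% Hypothetical sketch: subdivide an annotation into smaller patches with
% FreeSurfer's mris_divide_parcellation (arguments: subject, hemisphere,
% input annot, area threshold in cm^2 or a split file, output annot).
% Requires FreeSurfer to be configured in the shell MATLAB was started from.
subj = 'subj01';   % placeholder subject ID
cmd  = sprintf(['mris_divide_parcellation %s lh ' ...
    'lh.Yeo2011_7Networks_N1000.annot 30 lh.Yeo_subdivided.annot'], subj);
status = system(cmd);
```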

At one time you told me you were planning to incorporate the Yeo atlas in BST. What is the status now?
-Jeff

The Yeo atlas distributed by the FreeSurfer developers has been available on the FSAverage brain for a while:
http://neuroimage.usc.edu/brainstorm/Tutorials/LabelFreeSurfer#FSAverage_template

FreeSurfer 5.3 does not generate individual versions of this Yeo atlas, so it’s not available on any other brain.
You can ask them if they are planning to integrate this atlas in the default FreeSurfer segmentation at some point.

OK, I will contact the FreeSurfer people on this.

Dear Francois,

I have a follow-up question on that. When we run FreeSurfer for segmentation, we do not indicate how many vertices we should end up with; at least, I ran mine following how it was explained in the Brainstorm tutorial: https://neuroimage.usc.edu/brainstorm/Tutorials/LabelFreeSurfer#FSAverage_template

Now I am trying to apply the Schaefer or Yeo atlases onto my lh.pial as you describe here, and I get a message saying the numbers of vertices do not match! So my question is: we can define the number of vertices when we upload the FreeSurfer anatomy folder to Brainstorm. Does this affect the number of vertices of the already segmented surface files, or is it only related to source reconstruction, i.e., telling Brainstorm at how many of those vertices I want a dipole to be fitted? Sorry for asking this; I really do not know how these things work in general.
So do you think it would help if I set the number of vertices to, say, 10242 when I upload a FreeSurfer segmentation folder to Brainstorm, so that it changes the number of vertices of the segmented file and I can later use an fsaverage5 atlas? If not, is there any other way of downsampling our surface files to the number of vertices required by the atlas we would like to use? I would really appreciate your help with this.

PS: I would obviously have tried this myself, but somehow I can no longer upload FreeSurfer files to Brainstorm, although I did before; I get a very weird error whose cause I do not understand.
Thank you,

Isil

Now I am trying to apply the Schaefer or Yeo atlases onto my lh.pial as you describe here, and I get a message saying the numbers of vertices do not match!

How are you proceeding?

we can define the number of vertices when we upload the FreeSurfer anatomy folder to Brainstorm. Does this affect the number of vertices of the already segmented surface files, or is it only related to source reconstruction, i.e., telling Brainstorm at how many of those vertices I want a dipole to be fitted?

Brainstorm imports the high-resolution surfaces generated by FreeSurfer, loads the atlases on top of them, and then downsamples the surfaces with Matlab's reducepatch function to get a reasonable number of dipoles for source reconstruction. The import procedure is detailed here, and these are the instructions to follow to load any additional atlas:
https://neuroimage.usc.edu/brainstorm/Tutorials/LabelFreeSurfer#Manual_import_of_the_anatomy
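
To illustrate the idea (a rough sketch, not Brainstorm's actual code; Faces, Vertices, and labels are assumed to be already loaded in MATLAB): reducepatch essentially keeps a subset of the original vertices, which is why an atlas loaded on the full-resolution surface can follow the vertices that survive the downsampling.

```matlab
% Rough illustration of the downsampling step (not Brainstorm's actual code).
% Faces/Vertices hold the full-resolution surface; labels holds the
% per-vertex atlas labels loaded from the .annot file.
nTarget = 15000;                                   % example target vertex count
fv      = struct('faces', Faces, 'vertices', Vertices);
ratio   = nTarget * 2 / size(Faces, 1);            % approx. fraction of faces to keep
fvLow   = reducepatch(fv, ratio);                  % keeps a subset of the vertices
[~, iKept] = ismember(fvLow.vertices, Vertices, 'rows');  % map back to originals
labelsLow  = labels(iKept);                        % atlas labels follow the vertices
```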

If not, is there any other way of downsampling our surface files to the number of vertices required by the atlas we would like to use? I would really appreciate your help with this.

You won't be able to do anything like this. You have to start with the full-resolution surfaces, import all the atlases, and then downsample.

The Schaefer 2018 atlases are available through the CAT12 segmentation:
https://neuroimage.usc.edu/brainstorm/Tutorials/SegCAT12#Cortical_parcellations
Hopefully you'll manage to make this work; it would be a lot easier for you to get these atlases for your subjects.

Hi Francois,

Thank you very much for your reply. I load the atlas label file as you described above in one of your replies to Jeff, and as described here: https://neuroimage.usc.edu/brainstorm/Tutorials/LabelFreeSurfer#Manual_import_of_the_anatomy.

Apparently this works with the annot/label files produced by FreeSurfer during MR segmentation; however, I cannot make it work with any atlas file from outside the FreeSurfer segmentation folder. I attached the error message that I get when I try to upload that or any other Yeo atlas (the number of mismatched vertices changes depending on the atlas I am trying to upload, of course).

So my confusion is this: even though we didn't indicate a number of vertices during segmentation in FreeSurfer, my segmentation ended up with 134570 vertices, which does not match the vertex count of the atlas I am trying to upload. And if I use CAT12 to run the whole segmentation from the very beginning and choose 15000 vertices, how come that segmentation later works with a Yeo atlas, whose vertex count is different from 15000, while I cannot make it work manually? Does this mean that during segmentation a separate lh.pial file is produced for each of the default atlases so that it matches the corresponding annot file, and since my FreeSurfer segmentation lacks this, I cannot upload any Yeo or Schaefer atlas at all? I would really appreciate a bit more detail on this. I already have segmentation files ready for all the subjects obtained with FreeSurfer, and as you know it takes ages to complete for a whole set of participants, so it would be really helpful if I could find a way to load the Yeo or Schaefer atlas onto those segmentations.

Thank you very much in advance,

Kind regards

Isil

No, indeed.
You need to find a way to project your atlases on the individual brains first, and then import them into Brainstorm. I don't know how to do that, but there are probably ways with FreeSurfer.

I've just been told by Chris (Markiewicz) that FreeSurfer's mri_surf2surf function might do this, but I haven't had a chance to look yet. Once I get more information and practice I will report here, but for now here is a link about the function itself:

https://surfer.nmr.mgh.harvard.edu/fswiki/mri_surf2surf

And I guess Jeff was asking the same question on the FreeSurfer mailing list, so here is the answer he got regarding the mri_surf2surf function:
https://www.mail-archive.com/freesurfer@nmr.mgh.harvard.edu/msg34517.html
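
I haven't tested it yet, but from the documentation the call seems to look something like the sketch below (run from MATLAB via system; the subject ID and annotation file names are placeholders, so please check them against the mri_surf2surf help page linked above):

```matlab
% Hypothetical example: project the fsaverage Yeo annotation onto one
% subject with FreeSurfer's mri_surf2surf. SUBJECTS_DIR must point to
% the recon-all outputs; subject ID and file names are placeholders.
fsDir = getenv('SUBJECTS_DIR');
subj  = 'subj01';
annot = 'Yeo2011_7Networks_N1000.annot';
cmd = sprintf(['mri_surf2surf --srcsubject fsaverage --trgsubject %s ' ...
    '--hemi lh --sval-annot %s/fsaverage/label/lh.%s ' ...
    '--tval %s/%s/label/lh.%s'], ...
    subj, fsDir, annot, fsDir, subj, annot);
status = system(cmd);   % repeat with --hemi rh for the right hemisphere
```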

I will let you guys know if I make any progress, but it might take time.

Thank you

Isil Bilgin

Thanks for reporting all your findings here, this will definitely help other people.

Hopefully you'll manage to get CAT12 working; it would be an easier solution.

Hi Francois,

I feel I am missing a step.

I do the following:

  • new subject: test01
  • import MRI
  • import surface: /surf/lh.pial
  • display lh.pial
  • Scout panel > Load atlas: /label/lh_Yeo.annot
  • close and clear

This works great.
Now I have the Yeo atlas on the lh.pial surface.
And it looks good.

I then need to downsample the lh.pial to 7500 vertices.

Right click lh.pial => less vertices

Once I have downsampled lh.pial, the atlas no longer appears to fit.
It's there on the surface, but it looks like a terrible fit.

Am I missing something?
Do I need to also downsample the atlas?
How do I do this in Brainstorm?

Thank you.

-Tom.

Support for automatically importing the Yeo cortical parcellation was added to the FreeSurfer import function. If you have files named 'lh.yeo2011_7networks_n1000' or 'rh.yeo2011_7networks_n1000', they should be added automatically as an atlas labelled "Yeo 7 Networks".
Same for the 17 Networks.
https://github.com/brainstorm-tools/brainstorm3/blob/master/toolbox/io/import_label.m#L696-L699

Instructions to compute these files can be found here:
https://neuroimage.usc.edu/brainstorm/Tutorials/LabelFreeSurfer#Cortical_parcellations

However, I recommend you use the Schaefer atlas from CAT12 instead (the updated version):
https://neuroimage.usc.edu/brainstorm/Tutorials/SegCAT12#Cortical_parcellations

Thanks Francois.

Briefly, the problem we are having is this.

We are using Desikan with 68 regions but we only have 62 electrodes.

This means after source-localization some of the 68 are not independent time series.

In order to perform leakage-correction (orthogonalization, all at once, not pairwise), we need to have 68 independent time series.

Hence, we presumed we needed to use an atlas with fewer regions.

The Schaefer atlas seems to have too many regions for our purposes.

The Yeo 17 has 17 networks. This means (I think) that network 1, for example, will have multiple regions spread across the cortex but provide only a single time series for that network.

This is not quite optimal for our purposes.

For the above reasons, I have imported a version of the Yeo atlas with 29 regions per hemisphere (derived from the 7 networks).

You can see the 29 individual regions in the picture above.

I trust this comports with your own understanding?

I was just a bit concerned that the downsampled atlas has somewhat of a patchy appearance.

But maybe this is normal?

Apologies for the long explanation.

Thanks Francois.

We were basing our assumptions on this:

Paraphrasing:

The issue you are now facing relates to earlier warnings from @GilesColclough; the atlas has too many regions given the linear rank of the underlying data. The reason that the data has "low" rank in the first place (keeping in mind that 100+ dimensions is not actually that low!) is due to the ill-posedness of source-reconstruction in MEG. Basically: we are trying to estimate 3k+ signals (one for each voxel) from the correlated measurements of only ~250 sensors, and as the error message says, this gives us only 114 linearly independent signals to work with --- but the AAL parcellation assumes the existence of 115 independent regions.

There are essentially two ways forward:

  • If the AAL parcellation is not a must for your analysis, then consider using a parcellation with fewer regions (e.g. Desikan-Killiany);
  • Otherwise, I would first remove all subcortical regions from the parcellation (the SNR in these regions is extremely low compared to cortical regions), and perhaps merge small regions together, or even with larger neighbours. For example, find parcels with very small label counts after the resampling, and either merge neighbours with small counts together, or else merge small regions with their smallest neighbour.

To clarify: we get 68 regional time courses (Desikan atlas) and perform orthogonalization (Colclough et al. 2015). The algorithm fails with the message:

The ROI time-course matrix is not full rank.
This prevents you from using an all-to-all orthogonalisation method.
Your data have rank 60, and you are looking at 68 ROIs.
You could try reducing the number of ROIs, or using an alternative orthogonalisation method.

We search the 68 time series for those that are linearly independent.
We keep those and remove the rest.
We now have ~62 time series (depending on the subject).
We run the orthogonalization algorithm again.
This time it is successful.
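
In case it is useful, here is a sketch of that pruning step (not our exact code; ScoutSignals is assumed to hold the [Nroi x Ntime] matrix of scout time series):

```matlab
% Sketch of the pruning step: keep a maximal linearly independent subset
% of the ROI time series using a rank-revealing (pivoted) QR decomposition.
% ScoutSignals is assumed to be [Nroi x Ntime]; names are illustrative.
X = ScoutSignals';                    % [Ntime x Nroi]
[~, R, perm] = qr(X, 0);              % economy-size QR with column pivoting
tol   = max(size(X)) * eps(abs(R(1,1)));
nKeep = nnz(abs(diag(R)) > tol);      % numerical rank of X
iKeep = sort(perm(1:nKeep));          % indices of the ROIs to keep
Xind  = X(:, iKeep);                  % ~62 independent time series
```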

So I'm not sure I understand.

Not all the time series seem to be contaminated by each other, because we can remove some and have the algorithm work.

We therefore assume that reducing the number of regions of the atlas will ameliorate this issue.

Please help me understand why I am wrong.

PS: Maybe you mean we should use a different solution than minimum norm?

First, I'd like to apologize for my previous inaccurate reply: the rank of the scout time series matrix is indeed linked to the number of sensors.

If you have Neeg=62 electrodes, the rank of your recordings is at most 62, or 61 if the data is in average reference and the recording reference is included. Additionally, the rank decreases by one for each SSP or ICA component that you remove during signal cleaning.
Each source signal is a linear mixture of the Neeg EEG signals. If you consider a set of more than 62 sources, the matrix [Nsources x Ntime] may have a rank of at most Neeg. The scout time series are averages of source time series, and therefore share the same property: when computing the signals associated with more than 62 scouts, the resulting matrix [Nscout x Ntime] may have a rank of at most Neeg.
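
A toy numerical illustration of this bound (random numbers only, just to show the rank argument):

```matlab
% Toy illustration: any set of signals obtained as linear mixtures of
% Neeg sensor signals has rank at most Neeg, however many scouts you use.
Neeg = 62; Ntime = 5000; Nscout = 68;
EEG    = randn(Neeg, Ntime);     % toy sensor recordings, rank 62
Mix    = randn(Nscout, Neeg);    % inverse kernel + scout averaging, lumped together
Scouts = Mix * EEG;              % [Nscout x Ntime] scout time series
disp(rank(Scouts))               % prints 62, not 68
```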

Limiting the number of scouts to 58 would make it possible to obtain a scout matrix [Nscouts x Ntime] that has full rank. However, I'm not sure I understand this convoluted objective of computing scout signals and then orthogonalizing them:

  • The orthogonalized signals would not have any correspondence with the underlying anatomical ROIs.
  • Wouldn't you obtain similar signals as when processing the EEG signals directly? (minus the information that is lost during the inverse model computation - see regularization)

Working with volume sources would bring you closer to the pipeline you refer to, but then you would face the problem of having sources with unconstrained orientations (3 signals per ROI).
On the surface, the selection of the type of parcellation is not trivial, and I don't know how to guide you with this. One recommendation I can give is not to use a volume parcellation (e.g. AAL) for processing sources estimated on a surface.

As a software engineer, I am not competent to address these questions. I would recommend you seek advice from signal processing specialists.
@John_Mosher @Sylvain Can you please share your recommendations?

I was just a bit concerned that the downsampled atlas has somewhat of a patchy appearance.

Only vertices are classified as part of scouts, not faces. The faces between two ROIs are not attributed to one or the other, and are therefore not painted. Display the surface edges and zoom in to observe this.

Other threads discussing rank of source space signals:

François provides useful advice here.
In addition, if you truly want to work with as many regions as dimensions in your scalp signals, you can decide to merge some of the atlas regions together. This is a feature available in the Scout panel: select two or more scout regions from the list and select 'Merge' from the panel pulldown menu. Repeat until you reach the targeted number of regions.