fNIRS 3D reconstruction

Dear Nirstorm community,

We are currently working on an fNIRS dataset on natural speech perception and want to use the 3D reconstruction function to map our channel data onto the brain surface. We encountered several issues and would be interested in your thoughts/comments.

First of all, some technical details: We use a NIRx NIRSport 2 system with 16x16 optodes, as well as 8 short channels. See below:

Data was preprocessed in Homer, and we exported the cleaned and filtered data as a SNIRF file to Brainstorm - that worked fine :slight_smile:

We got the 3D reconstruction working in an example subject, but still have several questions on the overall procedure:

  1. We identified bad channels in each dataset using qtnirs in our preprocessing script. Should we just mark these channels (as well as the short-channels perhaps) as bad channels in brainstorm to exclude them from the 3D reconstruction? Or should we remove them from our channel list entirely?

  2. When computing the fluences model, we found that our wavelengths (760 nm and 850 nm) are not specified in the tissue_property.json file. For now, we just manipulated the entries to fit our wavelengths (as recommended here: Fluences for 762nm (Artinis Brite fNIRS)). However, it would be great to use the real values. Is there any way to compute those? Or can you perhaps update your json file to work with our NIRx system?

  3. The default setting for the fluences model is 10 million photons. Is this a good setting, or should we adjust it for real data analysis?

  4. For the 3D reconstruction, we provided Brainstorm with the Differential Optical Density (dOD) data from Homer3. The dOD data is then converted automatically to hemoglobin concentrations during the source reconstruction (we used wMNE for now). However, we cannot set the DPF method or subject age here, which we normally do. What method/settings are used for this conversion?
    Is there a way to apply the source reconstruction on HbO/HbR time courses directly (imported from Homer3)?

Here is an example output of our wMNE model:

Thanks a lot for your help!

Best wishes from Germany!
Sebastian

Ping @edelaire


Hello Sebastian, I recommend looking at the NIRSTORM plugin and its tutorials; they might be helpful. I am not sure how to help with 2, but I hope this can help:

  1. Marking your bad channels should be sufficient. Once you've performed short channel regression, you can also mark the short channels as bad channels.
  2. 10 million photons is enough to get a good estimate of where your activity is occurring, but for more detailed activity you may want to use 100 million instead.
  3. NIRSTORM's dOD to [HbO] and [HbR] conversion allows you to pick from two different DPF methods (Scholkmann 2013 or Duncan 1996), which can then be used for cortex reconstruction. [The tutorial for reconstruction can be found here.](https://neuroimage.usc.edu/brainstorm/Tutorials/NIRSTORM)
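For reference, the Scholkmann 2013 option mentioned above refers to a general wavelength- and age-dependent DPF equation, which can be evaluated directly. A minimal Python sketch (the coefficient values are copied from the Scholkmann & Wolf 2013 paper as I recall them; please double-check against the original, and against NIRSTORM's own implementation, before relying on them):

```python
def dpf_scholkmann_2013(wavelength_nm, age_years):
    """General DPF equation for the frontal human head (Scholkmann & Wolf, 2013)
    as a function of wavelength (nm) and subject age (years).
    Coefficients transcribed from the paper -- verify before use."""
    a, b, g = 223.3, 0.05624, 0.8493
    d, e, z = -5.723e-7, 0.001245, -0.9025
    return (a + b * age_years**g
            + d * wavelength_nm**3
            + e * wavelength_nm**2
            + z * wavelength_nm)

# Example: DPF at the two NIRx wavelengths for a 25-year-old subject
for wl in (760, 850):
    print(wl, "nm ->", round(dpf_scholkmann_2013(wl, 25), 2))
```

This yields DPF values in the usual adult range (roughly 5-6), with the expected increase with age and decrease toward longer wavelengths.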

Thanks a lot for the help and sorry for the late reply - I got caught up in something else and only returned to my Brainstorm issue today.

I fear that I still do not understand how I can inform the dOD to HbO conversion in the source model. My steps are:

  1. Compute the fluence model for my montage
  2. Compute head model from fluences
  3. Source reconstruction - wMNE

The whole process only works with dOD data. If I want to compute sources with HbO data, I receive the following error:

However, when I perform the source reconstruction with my dOD data (as intended), NIRSTORM automatically computes the HbO conversion, and there is no option to enter any subject info (DPF method, age, etc.).

I see that there are also sources for both wavelengths. However, as far as I understand, I cannot convert those to Hb values in Brainstorm (using NIRSTORM's DPF methods), because the conversion does not work with sources.

Is there anything else I can do?

Best wishes
Sebastian

Hello,

You don't need to use the DPF when computing the concentration along the cortex. The main reason we use the modified Beer-Lambert law at the channel level, including additional factors such as the DPF, is to account for the fact that we don't exactly know the path of the light from the source to the detector.

This is taken care of when you solve the inverse problem. Once you are on the cortex, there is no longer a need for this approximation (the DPF and age factors are contained in the generation of the head model).
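To make the channel-level side of this concrete: the modified Beer-Lambert law relates dOD at each wavelength to the chromophore concentration changes through the source-detector distance and the DPF, so two wavelengths give a small linear system to invert. A hedged Python sketch (the extinction coefficients and DPF values below are rough illustrative numbers from standard compilations, not NIRSTORM's internal table):

```python
import numpy as np

# Approximate molar extinction coefficients [1/(mM*cm)] for HbO / HbR
# at 760 and 850 nm (illustrative values only; NIRSTORM ships its own table).
E = np.array([[0.586, 1.548],   # 760 nm: [eps_HbO, eps_HbR]
              [1.058, 0.691]])  # 850 nm

d_cm = 3.0                       # source-detector separation in cm
dpf = np.array([6.2, 5.1])       # channel-level DPF per wavelength

# Synthetic dOD samples, shape (wavelengths x time points)
dod = np.array([[0.010, 0.012],
                [0.015, 0.014]])

# MBLL: dOD(lambda) = eps(lambda) . dC * d * DPF(lambda)
# -> scale each row of E by the effective pathlength, then solve for dC
A = E * (d_cm * dpf)[:, None]
dC = np.linalg.solve(A, dod)     # rows: delta[HbO], delta[HbR] in mM
print(dC)
```

On the cortex, the head model's fluences already encode the photon pathlengths through the tissues, which is why no separate DPF correction enters at that stage.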

Ah, I see. That makes sense. So, when using a brain template instead of the individual anatomy, the same DPF is applied to all subjects automatically, correct?

Another question: as pointed out in my original post, I had to replace the wavelengths in tissues_property.json, as there are no values for 760 and 850 nm. I now used the values for 690 and 830 nm. Is this still OK?

Hello,

Yes, that is correct.

It's not perfect, as we would like to use the tissue properties for the wavelengths of the system, but that is the best we can do.

When I have time, I will try to find a table with more wavelengths reported than what we currently have.

This sounds good! Thanks a lot for your help!

Hello, thank you for your helpful feedback.

I was able to find some literature that documents values for wavelengths 760 nm and 850 nm in human brain tissues. I hope these sources are helpful. Could you recommend the appropriate tissue properties we should use for these wavelengths?

Thank you again for all your help.
Distribution of tissue optical properties in the human head .pdf (187.1 KB)

-Mica