Batch process for DWI2DTI

Hello Brainstorm experts

I am a new user of Brainstorm, currently trying to create FEM meshes for source analysis. Since realistic conductivity tensors can be estimated from DWI, I wanted to do this for all my subjects. However, I found that the DWI .nii files were not recognized or imported when I imported my BIDS dataset (the data is already arranged and validated with BIDS) into Brainstorm. I had to do it manually for every subject, and I could not find relevant information in the tutorials or on your website. Is there a way to run a batch process over all subjects with the DWI2DTI process? Thank you very much, I am looking forward to your suggestions.

Best,
Ken

I found that the DWI .nii files were not recognized or imported when I imported my BIDS dataset
I had to do it manually for every subject, and I could not find relevant information in the tutorials or on your website

Indeed, these features are still under development and not all documented yet.
The DWI import is partly documented here:
https://neuroimage.usc.edu/brainstorm/FemTensors

Is there a way to run a batch process over all subjects with the DWI2DTI process?

You can select the processes Import > Import anatomy > Import MRI and Import > Import anatomy > Convert DWI to DTI, then use the menu Generate .m script.
Integrate these calls in a loop over multiple subjects.

https://neuroimage.usc.edu/brainstorm/Tutorials/Scripting
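
For example, the loop could look like this (a minimal sketch: the process option names below are illustrative and may differ between Brainstorm versions, so paste in the exact calls that the Generate .m script menu produces for your data):

```matlab
% Sketch of a batch loop over subjects -- replace the bst_process calls
% with the ones produced by your own "Generate .m script" output.
SubjectNames = {'sub-01', 'sub-02', 'sub-03'};
BidsDir = '/path/to/bids';   % root of your BIDS dataset

for iSubj = 1:numel(SubjectNames)
    SubjectName = SubjectNames{iSubj};
    T1File  = fullfile(BidsDir, SubjectName, 'anat', [SubjectName '_T1w.nii.gz']);
    DwiFile = fullfile(BidsDir, SubjectName, 'dwi',  [SubjectName '_dwi.nii.gz']);
    % Process: Import MRI (option names are illustrative)
    bst_process('CallProcess', 'process_import_mri', [], [], ...
        'subjectname', SubjectName, ...
        'mrifile',     {T1File, 'ALL'});
    % Process: Convert DWI to DTI (option names are illustrative)
    bst_process('CallProcess', 'process_dwi2dti', [], [], ...
        'subjectname', SubjectName, ...
        'dwifile',     {DwiFile, 'DWI-NII'});
end
```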

Thank you for your reply. I attempted the process you suggested and ran into a missing-argument error when the generated .m script calls the subfunction Compute in process_dwi2dti.m.

This issue can be reproduced by:
Step 1: Import > Import anatomy > Import MRI
Step 2: Import > Import anatomy > Convert DWI to DTI
Step 3: Use the menu Generate .m script.
Step 4: Run the generated .m script.

This issue does not occur when clicking DWI2DTI manually in the GUI, because that call goes through a different code path.

The generated script call:

Presumably this will be fixed in your next build. In the meantime, may I ask: if I just pass [] for the T1BstFile argument in the Compute call, just like the call on line 334, I presume it would behave the same, right? Because the T1 has already been imported.

In addition, I have a side question. Later on I will compute head models on the MRI volume, which takes a long time per subject. I was thinking of computing them in parallel in MATLAB: for example, a script that runs all subjects in a parfor loop, with each physical core running one subject. Would this be feasible? My main concern is that head model estimation in Brainstorm depends on open-source software like DUNEuro for FEM; do you think it would allow us to run it this way? Thanks, I am looking forward to hearing your expertise.

Presumably this will be fixed in your next build. May I ask: if I just pass [] for the T1BstFile argument in the Compute call, just like the call on line 334, I presume it would behave the same, right?

Indeed. Thank you for reporting this bug.
I pushed this fix here: https://github.com/brainstorm-tools/brainstorm3/commit/bbce380362dc31a9cc510ab31b9e453b721e752b
Update Brainstorm to get it.

I was thinking of computing them in parallel in MATLAB: for example, a script that runs all subjects in a parfor loop, with each physical core running one subject. Would this be feasible? My main concern is that head model estimation in Brainstorm depends on open-source software like DUNEuro for FEM; do you think it would allow us to run it this way?

Running DUNEuro in a parfor loop in Brainstorm would not work, because bst_duneuro.m saves files in the temporary folder, and calling it twice at the same time would overwrite the same files.
Even if this were possible (using different users from the same computer, for instance), you would quickly run into memory limitations, as both the FEM mesh generation and the FEM leadfield computation are very memory consuming. Open a resource monitor to track the execution of a single instance.

In general, for MATLAB code, running in parallel is not always faster than running single instances sequentially, as many basic MATLAB functions for matrix processing are already multi-threaded, and far more optimized and less memory-hungry than a plain parallel execution.
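
As a quick sanity check in this direction, you can verify how many computational threads MATLAB already uses and time a plain sequential loop (a sketch: maxNumCompThreads is a standard MATLAB function, and SubjectNames is a placeholder for your own subject list):

```matlab
% MATLAB already multi-threads many matrix operations:
fprintf('MATLAB is using up to %d computational threads\n', maxNumCompThreads);

% Safe alternative to parfor for DUNEuro: run subjects sequentially
% and time each one to estimate the total duration.
SubjectNames = {'sub-01', 'sub-02', 'sub-03'};
for iSubj = 1:numel(SubjectNames)
    tStart = tic;
    % ... head model computation for SubjectNames{iSubj} ...
    fprintf('%s done in %.1f min\n', SubjectNames{iSubj}, toc(tStart)/60);
end
```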