Exporting a protocol from BST and analysing it in MNE

Dear All,

If I have changed something in a protocol in BST and now want to use that protocol in MNE,
what should I do?
In my case, I modified the channel file.

Best,

There is no easy solution yet for exporting data to MNE-Python.

The functions bst_mne_init.m, out_mne_data.m and out_mne_channel.m let you create Python objects on which you can call MNE-Python methods from MATLAB. Read the code of these functions for guidance.
You need to set the path to the Python environment in the Brainstorm preferences first.

Example:

bst_mne_init('Initialize', 1);
mneObj = out_mne_data('/path/to/protocol/Subject01/folder/data_example.mat', 'Raw');
mneObj.plot_sensors();

Thank you Francois,

Do you know a way to generate simulated data and save it in fif format?

We don't have any easy way to save .fif files from Brainstorm.
You should be able to do it with the mne-matlab functions, but I'm not sure how.

Exporting to an MNE-Python object in MATLAB/Brainstorm, and then using MNE-Python's save functionality, could be an easier route to explore.
Please let us know how it goes, and feel free to contribute to the Brainstorm code base any function you think might be relevant for other users.
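To make that route concrete, here is a minimal, untested sketch: export the recording to an MNE-Python Raw object from MATLAB, then call its save() method directly. The file paths are placeholders, and it assumes the Python environment is already configured in the Brainstorm preferences (MNE-Python expects raw file names to end in raw.fif):

```matlab
% Sketch: write a .fif file by going through an MNE-Python Raw object.
% Paths are placeholders; assumes the Python bridge is set up in the
% Brainstorm preferences.
bst_mne_init('Initialize', 1);
mneObj = out_mne_data('/path/to/protocol/Subject01/folder/data_example.mat', 'Raw');
% Call the MNE-Python Raw.save() method from MATLAB
% (Python keyword arguments are passed with pyargs)
mneObj.save('/path/to/output/data_example_raw.fif', pyargs('overwrite', true));
```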

Hi Francois

OK, I will try to find a solution.
Just a question: how can I load the .fif files below from MNE into BST, so that I have the channel file and subject in BST?
fwd_fname = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'

subjects_dir = data_path + '/subjects'

Cheers
Abdallah

For this, you have the online tutorials that explain everything in detail.
Start by following the introduction tutorials (section "Get started"), at least up to and including #5, using the example dataset provided.
https://neuroimage.usc.edu/brainstorm/Tutorials

Then if you need additional information about Elekta recordings, you can follow one of the advanced Elekta-based tutorials in the section "Other analysis scenarios":
https://neuroimage.usc.edu/brainstorm/Tutorials/TutMindNeuromag
https://neuroimage.usc.edu/brainstorm/Tutorials/PhantomElekta

I imported the subject C:\Users\abd\mne_data\MNE-sample-data\subjects\sample

I clicked "Click here to compute MNI transformation". Is this enough?
(The number of sources in MNE was 7498*3 = 22,494.) I set the number of vertices to 7498, but it gives: cortex_7500V

Then I reviewed the raw file: sample_audvis_raw.fif

I computed the leadfield, but I am worried about the 7500. I need it to be 7498 as in MNE, because I will then compute the sensitivity map, using a function that exists in MNE, with the same subject.

I clicked "Click here to compute MNI transformation". Is this enough?

Enough for what?

I computed the leadfield, but I am worried about the 7500. I need it to be 7498 as in MNE, because I will then compute the sensitivity map, using a function that exists in MNE, with the same subject.

You won't be able to obtain the same surfaces with Brainstorm and MNE.
Brainstorm downsamples the FreeSurfer cortex surfaces using MATLAB's function reducepatch. MNE-Python does it in a different way. Even with the same number of vertices, they wouldn't match and wouldn't be in the same order.

Either you do your source analysis in Brainstorm, or you do it in MNE-Python. If you want to compare the results afterwards, you need to project the sources from one surface to the other:
https://neuroimage.usc.edu/brainstorm/Tutorials/CoregisterSubjects
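If you just want to verify the source count on the MNE side from MATLAB, here is a hedged sketch using the mne-matlab toolbox (assuming it is on your MATLAB path; the file name comes from the MNE sample dataset):

```matlab
% Sketch: read the MNE forward solution and inspect its source count.
% Requires the mne-matlab toolbox on the path.
fwd = mne_read_forward_solution('sample_audvis-meg-eeg-oct-6-fwd.fif');
disp(fwd.nsource)   % source count of the oct-6 forward model (7498 in this thread)
```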

Thank you! Is there a way to compute the sensitivity analysis, like the one done in MNE, but with Brainstorm?

Not yet, but we would like to develop this at some point.
Contributions are always welcome: If you would like to work on this, I could help for the final integration.

With BST I have no problem modifying the channel file, but with MNE it seems tricky.
I have read the raw .fif file using functions in FieldTrip, and it is difficult for me to understand how they create the coordinates and orientations using coil_def.dat, translations, etc.
I think BST uses a similar function when loading the raw file. This is the first time I discovered that the sensor positions and orientations are created when the raw .fif is loaded, and that they do not exist directly in the raw .fif. But maybe this is another story.

Yes, I am happy to contribute, but it will be my first contribution ever. I do not even have a GitHub account yet, but this will be a good experience for me to learn how to collaborate on GitHub.