HCP-MEG tmegpreproc

Hi Everyone,

I am going to start working on the HCP-MEG datasets and was wondering if it is okay to start with the tmegpreproc files instead of starting from the raw data. Would this cause any foreseeable issues with the timing of events or the ability to localize activity? I do not need to work in source space, sensor space is okay too, but I know there are issues with this dataset's sensor locations, so please also let me know if one space will be easier to work in. I am coming over from MNE because I cannot get the sensor locations corrected there without distorting the activity, which makes location-dependent functions like interpolation impossible.

Best,
Matt

There is a tutorial dedicated to importing unprocessed MEG data:
https://neuroimage.usc.edu/brainstorm/Tutorials/HCP-MEG

We did not document how to work with the preprocessed MEG data, but other users on the forum have already done it; you could ask them whether they ran into any issues:

Thanks Francois for those links. I was able to go through them and apply what I learned to the tmegpreproc files. I compared the output of the raw pipeline with the output of applying the source estimation step onward from the HCP-MEG tutorial to the tmegpreproc data, and I can confirm the brain maps match up as expected. Additionally, I can see that the fiducials, head model, and helmet are properly aligned for the tmegpreproc data as long as you start from the source estimation step. Thanks for the help on that.

Now I need to export the source-localized data but am having two problems. 1) I imported the data as a FIF file, since I had previously further cleaned the data in MNE-Python. This seems to have caused me to lose the markers that differentiate a 0-back epoch from a 2-back epoch in the working memory dataset. Is this info recoverable, or is there a different way I need to import the data from the start?
2) Exporting epochs seems to give me two options: either each epoch is saved as its own EDF file (or another file type of my choosing), or the whole structure is saved as a zip file in a way that is only readable by Brainstorm. Is there a different way to export the data that would have all the epochs in a single file that is readable by MNE or EEGLAB?

Thanks again for your help,
Matt

This is difficult to say without an example. If you don't see all the information you expected for each event, I guess it was lost during the import from the FIF file.
It might be possible to read the missing information again from the STIM channels:
https://neuroimage.usc.edu/brainstorm/Tutorials/VisualSingle#Read_stimulus_triggers
https://neuroimage.usc.edu/brainstorm/Tutorials/TutMindNeuromag#Access_the_recordings
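If you still have the original continuous recordings in MNE-Python, you could also try recovering the triggers on that side before re-exporting. A minimal sketch, assuming a continuous FIF file and a stim channel named "TRIGGER" (both the path and the channel name are hypothetical; check raw.ch_names for the actual trigger channel, and note this only works on continuous data, not on already-epoched files):

```python
import mne

# Load the continuous recording (hypothetical path)
raw = mne.io.read_raw_fif("subject_raw.fif", preload=False)

# Extract trigger pulses from the stim channel; "TRIGGER" is an
# assumption -- inspect raw.ch_names for the real channel name
events = mne.find_events(raw, stim_channel="TRIGGER", shortest_event=1)

# events is an (n_events, 3) array: [sample index, previous value, code]
print(events[:10])
```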

Alternatively, you can maybe find a way to export the markers to a text file from MNE-Python and then import the information in Brainstorm.
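For example, a minimal MNE-Python sketch that writes one label and onset time per epoch to a CSV file (the file paths are hypothetical, and the exact text format Brainstorm expects on import may differ, so adjust the columns accordingly):

```python
import csv
import mne

# Read the cleaned, epoched FIF file (hypothetical path)
epochs = mne.read_epochs("wm_epochs-epo.fif")

# Invert the event_id mapping so numeric codes become labels
code_to_label = {code: label for label, code in epochs.event_id.items()}

# Write one line per epoch: label, onset in seconds
sfreq = epochs.info["sfreq"]
with open("events.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for sample, _, code in epochs.events:
        writer.writerow([code_to_label[code], sample / sfreq])
```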

each epoch is saved as its own EDF file

Avoid exporting files processed in Brainstorm to EDF.
In Brainstorm, values are saved in double-precision floating point (64-bit), while in EDF they must be converted back to 16-bit integers. This loss of precision can cause problems.
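As a toy illustration of the size of that quantization error (this is not Brainstorm's actual export code, just the int16 round trip it implies):

```python
import numpy as np

# MEG-like float64 signal (amplitudes around 1e-13 T)
x = np.random.randn(10_000) * 1e-13

# Rescale onto the 16-bit integer range and back
scale = np.abs(x).max() / 32767
x_int16 = np.round(x / scale).astype(np.int16)
x_back = x_int16.astype(np.float64) * scale

# Worst-case relative error is about half a quantization step
err = np.max(np.abs(x - x_back)) / np.abs(x).max()
print(f"max relative round-trip error: {err:.1e}")  # ~1.5e-05
```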

Is there a different way to export the data that would have all the epochs in a single file that is readable by MNE or EEGLAB?

No solution is readily available in Brainstorm, but this is something you might be able to script yourself.
To read all the epochs into one matrix, see the last example process from this tutorial:
https://neuroimage.usc.edu/brainstorm/Tutorials/TutUserProcess#Examples-1
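If you prefer to stay on the Python side, one workaround is to export each epoch from Brainstorm to a MATLAB .mat file and reassemble them with MNE-Python. A sketch assuming each exported file contains Brainstorm's usual F (channels x time) matrix and Time vector (the folder, field names, and channel names here are assumptions; take the real channel names from your channel file):

```python
import glob
import numpy as np
from scipy.io import loadmat
import mne

# Per-epoch .mat files exported from Brainstorm (hypothetical folder)
files = sorted(glob.glob("exported_epochs/*.mat"))

# Stack the F matrices into one (n_epochs, n_channels, n_times) array
data = np.stack([loadmat(f)["F"] for f in files])

# Recover the sampling rate from the Time vector of the first file
t = loadmat(files[0])["Time"].ravel()
sfreq = 1.0 / (t[1] - t[0])

# Placeholder channel names/types -- replace with your real ones
info = mne.create_info(
    [f"MEG{i:03d}" for i in range(data.shape[1])], sfreq, ch_types="mag"
)
epochs = mne.EpochsArray(data, info, tmin=t[0])
epochs.save("all_epochs-epo.fif", overwrite=True)
```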

You might be able to save this in the FieldTrip data format; see the example in the function out_fieldtrip_data.m.
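If you would rather build that structure from Python, here is a rough sketch of a FieldTrip-style raw data struct written with scipy (the field layout follows my understanding of ft_datatype_raw; treat it as an assumption and compare with out_fieldtrip_data.m before relying on it):

```python
import numpy as np
from scipy.io import savemat

# Toy epochs: 5 trials, 3 channels, 100 samples at 500 Hz
n_epochs, n_ch, n_times, sfreq = 5, 3, 100, 500.0
data = np.random.randn(n_epochs, n_ch, n_times)
t = np.arange(n_times) / sfreq

# Object arrays are saved by savemat as MATLAB cell arrays
trial = np.empty((1, n_epochs), dtype=object)  # 1 x n cell row
time = np.empty((1, n_epochs), dtype=object)
for i in range(n_epochs):
    trial[0, i] = data[i]   # channels x time
    time[0, i] = t

ft_data = {
    "label": np.array([[f"CH{i}"] for i in range(n_ch)], dtype=object),
    "trial": trial,
    "time": time,
    "fsample": sfreq,
}
savemat("epochs_ft.mat", {"data": ft_data})
```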