Hyperscanning: Interbrain phase coherence

Has anyone developed a pipeline for computing inter-brain coherence (or other connectivity measures) on hyperscanning EEG/MEG data? I know Brainstorm has tools for intra-brain coherence. Perhaps someone has modified these for inter-brain analysis? If so, I'd love to hear about it and discuss. Thanks so much.

-Jared

Hi Jared,

In the EEG hyperscanning setup I have seen previously, the recordings of the two subjects were saved simultaneously through the same amplifier into the same file, so the coherence analysis at the sensor level was no more complicated than with a single subject.

If you have multiple files, this would require some more thinking, but I would not expect any major problem. The main question is how to organize the files in the database. After that, the connectivity measures you would use between sensors or ROIs would be the same across subjects as within a subject, wouldn't they?
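Francois's point can be illustrated outside Brainstorm: if both subjects' channels live in one data matrix, an inter-brain sensor pair is computed exactly like an intra-brain pair. A minimal Python/SciPy sketch on synthetic data (the 64-channel layout and 250 Hz sampling rate are assumptions for the example, not from the thread):

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(0)
fs = 250                     # hypothetical sampling rate in Hz
n_samples = fs * 60          # one minute of data

# Combined recording: rows 0-31 = subject A, rows 32-63 = subject B
data = rng.standard_normal((64, n_samples))

# Inject a shared 10 Hz component to create genuine inter-brain coherence
shared = np.sin(2 * np.pi * 10 * np.arange(n_samples) / fs)
data[0] += shared
data[32] += shared

# An intra-brain pair and an inter-brain pair: the computation is
# identical, only the row indices differ.
f, c_intra = coherence(data[0], data[1], fs=fs, nperseg=fs)
f, c_inter = coherence(data[0], data[32], fs=fs, nperseg=fs)

i10 = np.argmin(np.abs(f - 10))
print(c_intra[i10], c_inter[i10])  # inter-brain coherence is high at 10 Hz
```

The only "hyperscanning-specific" step is bookkeeping: knowing which rows belong to which subject.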

Francois

Hi Francois,

"...the recordings of the two subjects were saved...in the same file"

This is indeed the case. I had been duplicating the files and then isolating the individual subject data for analysis. However, your comment presents an obvious solution for sensor-level inter-brain calculations. In fact, it suggests to me that I could "trick" Brainstorm into calculating what I need by making a dummy file and then copy-pasting in MATLAB the actual values I want to use. I think I could even do this for source-level envelope extractions... hmmmm.

Could we expand this idea and conceivably run coherence calculations for more than two brains? I recently ran a five-person hyperscanning study. I wonder... I will think about this more and try some things. I welcome your further comments, and those of anyone else on the forum who might be attempting similar aims with Brainstorm.

Cheers,

Jared

I'm not sure I followed what you are doing, or how this could be improved with what I wrote...
If you have specific questions about data organization, please post some screen captures illustrating your issue, showing the files in the database explorer and their contents.

Hi Francois,

Just for the sake of an example, let's say that I have a two-subject hyperscanning EEG data file that contains 64 channels (32 from each subject). For source estimation, one would process this file in duplicate, blocking out the channels of the opposite subject in each copy. Epoching on an event of interest would be performed, and any bad trials would be marked accordingly.

Once source kernels are calculated for each subject, let's say I extract scout time series for the whole recording for all areas in the Mindboggle atlas. I believe that makes 62 areas? That would mean 124 areas for both subjects combined.

Then, one could 'import into the database' a third copy (the "dummy") of the original hyperscanning file and manually modify the channel file to have 124 channels (one has to be careful to modify all relevant portions of the channel file, but it is not that difficult). One could then manually copy the scout time series obtained from processing the first two files and paste them into the 124-channel modified dummy file.

From Brainstorm's perspective, we would now have a 124-channel file full of sensor/electrode-level time series. But in fact, it would be scout time series from two subjects. This data could then be epoched as appropriate, with bad trials marked based on our initial processing of the subject-separated data. Then, we could run Brainstorm's sensor-level coherence analysis, but the results returned would actually be intra- and inter-brain coherence for our extracted scouts.
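The dummy-file idea can be sketched conceptually (Python/NumPy/SciPy, not Brainstorm code; a toy 4-scout example on synthetic data stands in for the 62 Mindboggle scouts): stack both subjects' scout series into one matrix, run NxN coherence on it, and read the inter-brain values off the off-diagonal block.

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(1)
fs, n = 200, 200 * 30
n_scouts = 4                 # toy stand-in for the 62 Mindboggle scouts

scouts_a = rng.standard_normal((n_scouts, n))   # subject A scout series
scouts_b = rng.standard_normal((n_scouts, n))   # subject B scout series

# The "dummy file" trick: stack both subjects into one 2*n_scouts matrix,
# exactly as if it were a single multi-channel sensor file.
stacked = np.vstack([scouts_a, scouts_b])

n_total = 2 * n_scouts
nfreq = fs // 2 + 1
coh = np.zeros((n_total, n_total, nfreq))
for i in range(n_total):
    for j in range(n_total):
        _, coh[i, j] = coherence(stacked[i], stacked[j], fs=fs, nperseg=fs)

# Block structure of the NxN result:
#   coh[:n_scouts, :n_scouts]  -> intra-brain, subject A
#   coh[n_scouts:, n_scouts:]  -> intra-brain, subject B
#   coh[:n_scouts, n_scouts:]  -> inter-brain A-B (the block of interest)
inter = coh[:n_scouts, n_scouts:]
print(inter.shape)  # (4, 4, 101)
```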

Obviously, creation of the dummy file invites opportunities for human error and thus would require extreme caution. But I think it is very interesting and worth doing, albeit a little cumbersome. Of course, if there were a built-in way to achieve what I described above, that would be highly desirable. However, I don't think anyone has developed it on the Brainstorm platform yet, right?

I have not actually attempted the above yet. It is still very conceptual and experimental. I'm very keen to hear your and anyone else's thoughts on this topic.

Cheers,

Jared

Hi Jared,

I thought you were expecting to do only sensor-level analysis, and for that case you didn't need any special database manipulation. But indeed, for source analysis you need to reorganize the data to separate the two subjects.

I think it would be easier to separate the two subjects very early in the process; keeping the two sets of electrodes in the same files will make the source analysis very complicated.

  1. Create a first subject with the option "Default channel file: No, use one channel file per acquisition run".
  2. Import 1s of recordings and rename the folder (e.g. "Target")
  3. Edit the channel file:
    • right-click > File > Export to Matlab > "ChannelMat",
    • select the elements of ChannelMat.Channel corresponding to the first subject: ChannelMat.Channel = ChannelMat.Channel(iChanSubj1)
    • right-click > File > Import from Matlab
  4. Edit the data file, proceed in the same way (import/export data structure):
    • DataMat.F = DataMat.F(iChanSubj1, :)
    • DataMat.ChannelFlag = DataMat.ChannelFlag(iChanSubj1)
  5. Import the full recording session (with the menu "Import MEG/EEG", all options disabled): renaming the first imported folder in step 2 is important so that the full session gets imported into a different folder
  6. In Process1: select first the "target" 1s recordings, then the full recordings
  7. Run the process "Standardize > Uniform list of channels" with the option "Use the first channel file in the list". You get your continuous recordings for the first subject only in the output.
  8. To be able to process the result as a continuous file again, right-click on the full recordings > Review as raw (and you can even re-export this file to EDF+ if you need to process it with a different program)
  9. Repeat everything for the second subject, then run your analysis pipeline on both (pre-processing, source estimation)
  10. Generate the signals for your ROIs with the process "Extract > Scout time series", and use these "matrix" files to compute the inter- and intra-subject coherence (do not create extra channel files or put your scout signals into recordings files...)
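The structure edits in steps 3-4 amount to row-selecting the channel and data structures. A hypothetical Python sketch (dictionaries stand in for Brainstorm's ChannelMat/DataMat; the 64-channel layout, 32 per subject, is an assumed example):

```python
import numpy as np

# Hypothetical stand-ins for Brainstorm's ChannelMat / DataMat structures
n_chan, n_time = 64, 1000
data_mat = {
    "F": np.random.randn(n_chan, n_time),       # channels x time
    "ChannelFlag": np.ones(n_chan, dtype=int),  # 1 = good, -1 = bad
}
channel_mat = {"Channel": [f"EEG{i:03d}" for i in range(n_chan)]}

# Indices of subject 1's electrodes (first 32 rows in this example)
i_chan_subj1 = np.arange(32)

# Steps 3-4: keep only subject 1's entries in both structures
channel_mat["Channel"] = [channel_mat["Channel"][i] for i in i_chan_subj1]
data_mat["F"] = data_mat["F"][i_chan_subj1, :]
data_mat["ChannelFlag"] = data_mat["ChannelFlag"][i_chan_subj1]

print(data_mat["F"].shape, len(channel_mat["Channel"]))  # (32, 1000) 32
```

The key invariant is that the channel list, the data rows, and the channel flags are all subset with the same index vector, so they stay aligned.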

Does it make sense?
Francois

Hi Francois,

Thank you so much for your reply.
Steps 1-9 are great advice; I will keep them in mind for the future.
For step 10, I had actually reached a similar realization.
I had already computed source-level Hilbert transforms on a scout atlas.
I concatenated the time-frequency file "signals",
then displayed the time series,
then right-clicked > snapshots > export to database to get the "matrix" file.
It works just fine, even for five brains!
Thanks again, Francois.

Best,

Jared

Hi Francois,

Reviving this old thread. Previously, I learned that I can easily get interbrain coherence values by concatenating signals from two subjects into one matrix, and then running the NxN coherence script. However, I would prefer to do this step for each trial, and then get an overall coherence matrix for the experiment.

To achieve this for many subject pairs, it would be handy to speedily concatenate the individual trial data from each subject using Process2 and A/B batch processing. However, there is no option to concatenate in Process2.

Alternatively, I noticed that there is an AxB coherence option in Process2. However, it does not produce the desired NxN matrix.

I have opened the source code for "process_concat_rows" and "bst_cohn". However, before I go down the rabbit hole, I thought I would ask you first and see if you had an easy solution.

Ideally, I would like to batch-concatenate the identical trials from two subjects using A/B processing in the Process2 pipeline. Do you have any ideas for a quick modification to the existing "process_concat_rows" code that would allow me to do that?
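The per-trial idea can be sketched outside Brainstorm (Python/SciPy on synthetic data with toy dimensions): for each matching trial pair, concatenate the two subjects' scout matrices row-wise (the role process_concat_rows would play), compute the NxN coherence, and average over trials.

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(2)
fs, n_time, n_trials, n_scouts = 128, 128 * 2, 5, 3

# Hypothetical per-trial scout series for two subjects, identically epoched
trials_a = rng.standard_normal((n_trials, n_scouts, n_time))
trials_b = rng.standard_normal((n_trials, n_scouts, n_time))

n_total = 2 * n_scouts
nfreq = fs // 2 + 1
coh_sum = np.zeros((n_total, n_total, nfreq))

for t in range(n_trials):
    # Concatenate the matching trials row-wise (as process_concat_rows would)
    stacked = np.vstack([trials_a[t], trials_b[t]])
    for i in range(n_total):
        for j in range(n_total):
            _, c = coherence(stacked[i], stacked[j], fs=fs, nperseg=fs)
            coh_sum[i, j] += c

coh_avg = coh_sum / n_trials   # trial-averaged NxN coherence
print(coh_avg.shape)  # (6, 6, 65)
```

One methodological caveat: averaging coherence magnitudes across trials is not the same estimator as computing a single coherence using the trials as segments of one estimate; which is appropriate depends on the analysis.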

Thank you very much. And best wishes during these difficult times.

Jared

Could you explain better why it doesn't give you what you need?
A few screen captures showing the files in your database, the files selected in Process2 and the options of the AxB process you select could help me understand better what you are trying to do.

Ideally, I would like to concatenate in batch the identical trials from two subjects using A B processing in the Process 2 pipeline. Do you have any ideas for a quick modification to the existing "process_concat_rows" code to allow me to do that?

It won't be easy to edit the process functions to make this a process with two inputs.
You would move much faster by simply writing a script that selects the files from the database in the correct order and calls process_concat_rows in a loop.

Read the introduction tutorial about scripting; it explains how you could get started in this direction. Then get back to us if you're stuck somewhere.
https://neuroimage.usc.edu/brainstorm/Tutorials/Scripting

Hello Francois,

Thank you so much for your reply. Your response to my claim that the AxB coherence option in Process2 does not produce the desired NxN matrix has made me realize that I may have jumped to conclusions.

I have parcellated source activity into 62 scouts (based on Mindboggle). I want to assess inter-brain coherence, so I would like to know the coherence from A1 to B1, A1 to B2, A1 to B3... A2 to B1, A2 to B2... and so on.

Is that what is represented in the coherence matrix for the AxB coherence option in Process2? If yes, then I don't need to do any scripting after all, and I am silly.

Best,

Jared

It depends on how the data is organized in your files, but with the "Coherence NxN" or "Coherence AxB" processes you can probably get what you want.

Both the A and B files are extracted scout time series: 62 scouts on the Y axis, and time on the X axis. I confused myself by looking at the graphical output, which doesn't list all 124 scouts. The coherence matrix seems correct actually. So, really, I am silly, and you and the Brainstorm team are awesome.
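For anyone reading along, a quick sanity check (Python/SciPy sketch on synthetic data, not Brainstorm code) that an AxB-style computation between the two subjects' scout matrices matches the inter-brain block of the NxN coherence on the concatenated matrix:

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(3)
fs, n = 100, 100 * 20
n_scouts = 3                    # toy stand-in for the 62 Mindboggle scouts

a = rng.standard_normal((n_scouts, n))   # subject A scout series
b = rng.standard_normal((n_scouts, n))   # subject B scout series

# AxB: coherence of every A scout with every B scout
axb = np.array([[coherence(a[i], b[j], fs=fs, nperseg=fs)[1]
                 for j in range(n_scouts)] for i in range(n_scouts)])

# NxN on the concatenated (2*n_scouts)-row matrix, inter-brain block
stacked = np.vstack([a, b])
nxn = np.array([[coherence(stacked[i], stacked[j], fs=fs, nperseg=fs)[1]
                 for j in range(2 * n_scouts)] for i in range(2 * n_scouts)])
inter_block = nxn[:n_scouts, n_scouts:]

print(np.allclose(axb, inter_block))  # → True
```

So the AxB option computes exactly the inter-brain part of the matrix, which is why the graphical output lists only one subject's scouts per axis.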