Synchronized visualization and analysis of video and electrophysiology

Dear Brainstorm Community,

I've been looking for a tool for browsing and manipulating concurrently recorded electrophysiology and video data. It seems that Brainstorm can do just that, at least that's what I'm taking from an exciting paper by Nasiotis et al. (Nasiotis, K., Cousineau, M., Tadel, F. et al. Integrated open-source software for multiscale electrophysiology. Sci Data 6, 231 (2019). https://doi.org/10.1038/s41597-019-0242-z).

I have just started playing with Brainstorm and I could load my elphys data; however, I'd need advice on how to add the videos to the timeline. Could you please direct me to a tutorial or manual to get started?

Kind regards,
Peter

Welcome!

The video features were developed for specific SEEG+Video users in a clinical context:
https://neuroimage.usc.edu/brainstorm/Tutorials/Epileptogenicity#Video-EEG

It might not be adapted to what you need to do or to your synchronization mechanisms, but we could try working on this together.

Thank you!
Essentially, we would like to browse elphys and video data concurrently for behavioral analysis. In our case, we would be interested in tools facilitating the post-hoc synchronization of asynchronously recorded data streams, including videos. I think this is an issue many labs come across every now and then.
Example: we record EEG using dedicated sampling hardware plus behavior using one or more simple USB cameras. The two devices might have started at different time points, so there will be an offset between their time stamps. Also, the elphys rig and the cameras run on different clocks, so the time stamps of one have to be scaled to match the other. (Let's put the problem of time stamp jitter and missed samples aside.)
In order to keep track of what's going on, we record a common sync signal on both data streams, e.g. TTL pulses on an elphys channel and an LED driven by the same circuit that is visible on the video.
In order to synchronize the elphys and video data streams, a simple solution would be to view them next to each other and to allow the user to mark the time points of corresponding events on both. These could then be used to find the transformation between the time stamps of the two data streams. Finally, one would like to see the elphys data and the video aligned.
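The transformation described above (a per-device offset plus a clock-rate scaling) can be estimated with a simple least-squares fit over the marked event pairs. A minimal sketch in Python (not Brainstorm/Matlab code; the function name and example values are illustrative):

```python
import numpy as np

def fit_time_mapping(video_times, ephys_times):
    """Fit t_ephys ~ scale * t_video + offset from paired sync events.

    video_times, ephys_times: times (in seconds) of the same sync events
    (e.g. LED flashes / TTL pulses) as seen by each recording device.
    """
    video_times = np.asarray(video_times, dtype=float)
    ephys_times = np.asarray(ephys_times, dtype=float)
    # Degree-1 least-squares fit: the slope is the clock-rate ratio,
    # the intercept is the start-time offset between the two devices.
    scale, offset = np.polyfit(video_times, ephys_times, 1)
    return scale, offset

# Example: the video clock runs 0.1% fast and started 2.5 s later.
video_marks = [10.0, 70.0, 130.0, 190.0]
ephys_marks = [2.5 + t / 1.001 for t in video_marks]
scale, offset = fit_time_mapping(video_marks, ephys_marks)
# Any video timestamp can now be mapped into elphys time:
t_ephys = scale * 42.0 + offset
```

With more than two marked event pairs, the least-squares fit also averages out some of the marking imprecision.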
I wonder if you would recommend using Brainstorm for this purpose.
Peter

I think most of the building blocks you need are in Brainstorm, but it would probably require some Matlab scripting at some point.

The video streams have a low sampling rate, and probably never show any information that requires a precision higher than 0.5 s. The loose sync mechanism illustrated in the tutorial was satisfactory for the epileptologists. The problem with very long recordings is that the clock in the video files can be slightly off, causing an increasing offset between the video and the ephys recordings. Please test it and let us know your thoughts about it.
https://neuroimage.usc.edu/brainstorm/Tutorials/Epileptogenicity#Video-EEG

Regarding signals acquired by different systems at different sampling rates:
Brainstorm can't handle different sampling frequencies within the same file, and can't simultaneously display files with different sampling rates.

Two solutions for this problem:

  • Resampling: interpolating the signal sampled at the lower frequency to match the time definition of the higher frequency. This could work for adding a few channels of behavioral data to the ephys recordings.
    • If the signals are in the same file, this can be implemented directly in the file reader. Example of Matlab code from the Spike2 reader, which can handle multiple sampling rates:
      https://github.com/brainstorm-tools/brainstorm3/blob/master/toolbox/io/in_fread_smrx.m#L78
    • If the signals are coming from different files, this has to be done manually by manipulating directly the file structures (there are tutorials explaining most of it, let me know what info you need).
  • Process separately and only transfer the relevant information from one file to another. This is what we recommend for synchronizing eye trackers with EEG: we don't need all the eye-tracking information to process the brain signals, we only need a few measures derived from it (blinks, saccades, microsaccades). Therefore we detect these features in the eye-tracker recordings, then project the detected events to the EEG recordings. The sync mechanisms are documented in this tutorial:
    https://neuroimage.usc.edu/brainstorm/Tutorials/EyetrackSynchro

Hello Francois,
So I've just managed to create a video using Brainstorm for some data using scans. However, I'm trying to see if I can sync up the events with the video so that when an event happens, it shows up in the video. Is there a way I can do this using Brainstorm? I went to the video features link in the post I'm replying to, but when I added a synchronized video, nothing happened. How do I know if it worked and whether the video and the data are synced up?
Thanks!
Sincerely,
Arul

The "Synchronized video" features of Brainstorm refer to clinical video-EEG, or video-SEEG (recordings of a patient in his/her bed with a camera).

If you generate a video from a Brainstorm figure (2D topography or 3D cortical maps) and want to see when some event markers occur, the easiest solution is probably to simultaneously capture the time series figure (with events represented as vertical lines or dots) and the 2D/3D figure(s).