Hello,
I am interested in analyzing source estimation results obtained using Brainstorm during an overground locomotion task. The dataset includes sections of overground walking, during which we recorded simultaneous EEG, EMG, and motion capture (MoCap). I used the MoCap data to segment the EEG into individual gait cycles. However, because this is an overground task, the gait cycles are not all exactly the same length. My belief (based on prior work in our lab and other labs) is that any cortical involvement is phase-locked to gait events (i.e., phases of gait delimited by toe-off and heel-strike events). So, once I have estimated the source projections, I would like to average across gait cycles. This requires time-warping my results across gait cycles to the average latency between gait events. For example,
GaitCycle1: RHC----------------LTO-------------------LHC----------------RTO------------------RHC
GaitCycle2: RHC------------LTO-------------------------LHC----------------RTO------------------RHC
RHC = right heel contact; LHC = left heel contact; RTO = right toe off; LTO = left toe off
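For illustration, the warping described above amounts to piecewise-linear interpolation between matched events. A minimal NumPy sketch (the function and variable names are mine, not Brainstorm's API; Brainstorm itself is implemented in MATLAB):

```python
import numpy as np

def warp_cycle(signal, event_samples, avg_event_samples):
    """Piecewise-linearly warp one gait cycle onto the average event grid.

    signal            : (n_channels, n_times) array for one cycle
    event_samples     : sample indices of the gait events (e.g., RHC, LTO,
                        LHC, RTO, RHC) within this cycle
    avg_event_samples : sample indices of the same events on the common
                        time base (averaged across cycles)
    """
    # Output time grid: one sample per point of the average cycle.
    out_t = np.arange(avg_event_samples[0], avg_event_samples[-1] + 1)
    # Map each output sample back to a (fractional) input sample,
    # linearly between matched events.
    src_t = np.interp(out_t, avg_event_samples, event_samples)
    # Resample every channel at those fractional positions.
    in_t = np.arange(signal.shape[1])
    return np.vstack([np.interp(src_t, in_t, ch) for ch in signal])
```

After this step, every cycle has the same length and its events fall at the same samples, so cycles can be averaged sample by sample.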
Question 1: Is it acceptable to interpolate the source results for each gait cycle to the average latency between events and then proceed to average? Clearly this cannot be done with the raw data, or the frequency content of the signals would be distorted.
Question 2: If so, is it possible to somehow detach the source results from the original data? If I time-warp each segment, the source results will no longer have the same time length as the data used to estimate them.
Workflow:
- Clean data using custom scripts and some EEGLAB functions
- Segment EEG using MoCap data
- Create a protocol in Brainstorm
- Add subjects, import digitized electrode locations, and generate head model
- Import cleaned eyes-open rest data and compute data covariance. I am using the identity for noise covariance since I cannot accurately estimate sensor noise. I am happy to hear thoughts on this.
- Link cleaned task data to subject using bst_process('CallProcess', 'process_import_data_raw', …)
- Estimate sources - I am experimenting with constrained minimum norm (MN) and unconstrained LCMV, based on some recommendations in the tutorials. I am also happy to hear thoughts on this.
- Time warp to average gait events
- Average across trials within subjects, and then across subjects to obtain a grand average.
- Statistics
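Once every cycle is on the common warped time base, the two averaging steps in the workflow above reduce to plain means over arrays. A minimal sketch, assuming warped trials are stacked as NumPy arrays (illustrative only, not Brainstorm's API):

```python
import numpy as np

def grand_average(subject_trials):
    """Average warped trials within each subject, then across subjects.

    subject_trials : list with one entry per subject, each an array of
                     shape (n_trials, n_sources, n_times), all trials
                     already warped onto the common time base
    """
    # Within-subject averages (one source map per subject).
    subject_avgs = [trials.mean(axis=0) for trials in subject_trials]
    # Grand average: unweighted mean of the subject averages.
    grand_avg = np.mean(subject_avgs, axis=0)
    return grand_avg, subject_avgs
```

Averaging the subject averages (rather than pooling all trials) keeps subjects with different trial counts equally weighted in the grand average.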
I look forward to hearing thoughts on this and please forgive me if I missed a similar discussion on this in previous posts.
Thanks!
Justin
Hi Justin,
Question 1: Is it acceptable to interpolate the source results for each gait cycle to the average latency between events and then proceed to average? Clearly this cannot be done with the raw data, or the frequency content of the signals would be distorted.
If the goal is only to produce an ERP, with no time-frequency or connectivity analysis, reinterpolating the EEG recordings in time may help you observe effects with slow dynamics across trials and subjects. Another option is to reinterpolate time-frequency decompositions of the single trials; this could be better for observing faster components.
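Concretely, reinterpolating a time-frequency decomposition means warping only the time axis of each trial's power map, leaving the frequency axis untouched. A NumPy sketch under the same event convention as in the first post (illustrative names, not Brainstorm functions):

```python
import numpy as np

def warp_tf(power, event_samples, avg_event_samples):
    """Warp the time axis of one trial's (n_freqs, n_times) power map
    onto the average event grid; the frequency axis is untouched."""
    out_t = np.arange(avg_event_samples[0], avg_event_samples[-1] + 1)
    # Fractional source sample for each output sample, piecewise-linear
    # between matched gait events.
    src_t = np.interp(out_t, avg_event_samples, event_samples)
    in_t = np.arange(power.shape[1])
    # Resample each frequency row at the fractional time positions.
    return np.vstack([np.interp(src_t, in_t, row) for row in power])
```

Because power is non-negative and slowly varying relative to the carrier oscillation, warping it does not introduce the spectral distortion that warping the raw signal would.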
Question 2: If so, is it possible to somehow detach the source results from the original data? If I timewarp each segment, the source results will not be the same time length as the data used to estimate them.
I've just edited the process "Standardize > Uniform epoch time" to correctly support source results. It now detaches the interpolated source files from the recordings by clearing the field DataFile.
5. Import cleaned eyes-open rest data and compute data covariance. I am using the identity for noise covariance since I cannot accurately estimate sensor noise. I am happy to hear thoughts on this.
7. Estimate sources - I am playing with MN constrained and LCMV unconstrained based on some recommendations in the tutorials. I am also happy to hear thoughts on this.
@Sylvain? @John_Mosher?
Hi Francois,
Thank you for your reply and for updating the process. Just a quick follow-up: when you say to “reinterpolate time-frequency decompositions of the single trials, this could be better to observe faster components”, do you mean for each EEG channel, or for the source results?
Thank you for editing the process to detach the source results from the data. If I use the approach above and I interpolate the data and the source results (i.e., (1) import data -> (2) compute source results -> (3) interpolate source results and data from average length of trials), will the change to the data cause any additional updates to the source results since they are still attached?
Justin
Thank you for your reply and for updating the process. Just a quick follow-up, when you say to “reinterpolate time-frequency decompositions of the single trials, this could be better to observe faster components”, do you mean for each EEG channel or of the source results?
Either for the EEG recordings (as in this tutorial: https://neuroimage.usc.edu/brainstorm/Tutorials/TimeFrequency#MEG_recordings:_Single_trials), or for a few ROIs. I think that if there are interesting effects to be found in the time-frequency plane, you should be able to see them directly in sensor space, so it may not be necessary to move to source space for the time-frequency analysis.
If I use the approach above and I interpolate the data and the source results (i.e., (1) import data -> (2) compute source results -> (3) interpolate source results and data from average length of trials), will the change to the data cause any additional updates to the source results since they are still attached?
Ah... I didn't consider the case where you re-interpolate recordings for which you have already computed sources... in that case, the attached source file would become incorrect.
But if you want to interpolate the recordings, then do this first and then estimate the sources. This is a lot easier, much faster, and would give similar results.
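One reason the two orders give similar results: for a fixed linear imaging kernel, as in MN, applying the kernel commutes exactly with linear time interpolation. A quick numerical check with a toy kernel (all sizes and names here are illustrative, not Brainstorm's):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((20, 8))      # toy imaging kernel: 8 sensors -> 20 sources
data = rng.standard_normal((8, 100))  # toy sensor recordings, 100 samples

def warp(x, src_t, out_t):
    """Linear interpolation of each row of x onto a new time grid."""
    return np.vstack([np.interp(out_t, src_t, row) for row in x])

src_t = np.arange(100)
out_t = np.linspace(0.0, 99.0, 120)   # stretch the cycle to 120 samples

a = W @ warp(data, src_t, out_t)      # warp the recordings, then apply the kernel
b = warp(W @ data, src_t, out_t)      # apply the kernel, then warp the sources
print(np.allclose(a, b))              # prints True: the two orders agree
```

Note that this exact commutation holds only for a kernel that is fixed in advance; LCMV weights are computed from the data covariance, which changes when the recordings are warped, so there the results would be similar rather than identical.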
Hi Francois,
Thank you for your thorough response. I will give these suggestions a try.
Best wishes