Epoching based on extended event markers

Hello,

I have extended events that I would like to epoch per event, in such a way that each epoch starts and ends at the beginning and end of an extended event respectively, the bad segments are skipped, and then the next extended event is epoched. In other words, each extended event is epoched independently.

I selected the option of using the events to segment the epochs (image 2). However, I'm not sure whether that approach is correct.

Kindly assist.

Thank you.


If the segments you want to import (green, pink, blue) do not overlap with the bad segments, then selecting them in the way you did in the panel "Events selection" should work.

I have some questions regarding the options you selected:

  • The options seem to show that you are importing from a continuous file (with the tag "RAW" above the file icon in the database explorer), while the first image shows a 700s file fully imported in the database (which is not recommended). Do these really match? Why did you import the entire file like this?
  • Split in time blocks of 2s?
    This splits each imported block (e.g. one "3-back" segment) into epochs of 2s.
    Is this really what you are expecting to do? What is your goal with these 2s-blocks?
  • Create a separate folder for each event: Selecting this option does not always make the database easier to navigate if you have multiple recording sessions for one subject. This depends on your objectives and preferences.
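For reference, the splitting logic behind such a "split in time blocks" option can be sketched roughly as follows. This is an illustrative Python sketch, not Brainstorm code; in particular, dropping a trailing block shorter than the requested duration is an assumption made for the example.

```python
# Illustrative sketch (not Brainstorm code): how an extended event is cut
# into fixed-length sub-epochs by a "split in time blocks" option.
# Assumption: a trailing block shorter than the requested duration is dropped.

def split_extended_event(t_start, t_end, block_dur=2.0):
    """Return (start, end) pairs of consecutive blocks inside [t_start, t_end]."""
    blocks = []
    t = t_start
    while t + block_dur <= t_end + 1e-9:  # small tolerance for float rounding
        blocks.append((t, t + block_dur))
        t += block_dur
    return blocks

# Example: a 7 s "3-back" segment from 10 s to 17 s yields three 2 s epochs;
# the last incomplete second is discarded.
print(split_extended_event(10.0, 17.0))  # [(10.0, 12.0), (12.0, 14.0), (14.0, 16.0)]
```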

Hello,

Thank you for your response. Here are my responses to your questions (accordingly):

  1. I don't think I fully understand the first question. The initial recording had the events in the form of simple events (with stimuli and responses). So, I created equally-timed extended events that capture each trial (stimulus and response). I labelled all the recording prior to the start of the trials as bad segments so that it isn't included in my analysis. Is that the right approach?

  2. I wanted each trial type (0-back, 2-back and 3-back) to be epoched with a length of 2 seconds, so I used the "split in time blocks of 2s" option. Does that tally with what you said?

  3. As seen in the attached image, there were three recording sessions in all, and I wanted to make some comparisons between the events (e.g. 0-back versus 2-back). This prompted my decision to have each event in a separate folder and then combine all the data from the entire session into one file. Is this the best approach? I don't seem to be getting my desired result.

I don't think I fully understand the first question.

Figure 1 shows a file that was imported with the menu "Import MEG/EEG" (when right-clicking on a subject) or "Import in database" (when right-clicking on a "Link to raw file" already available in the database). This is a full copy of the recordings saved into the database, on which the features only available for continuous files (e.g. ICA/SSP cleaning) are no longer accessible.
We do not recommend working with fully imported files like this; it is very inefficient. To work on creating your events, you don't need the data to be imported, you only need a "Link to raw file" (what you get with the menu "Review raw file").
https://neuroimage.usc.edu/brainstorm/Tutorials/ChannelFile#Review_vs_Import

The initial recording had the events in the form of simple events (with stimuli and responses). So, I created equally-timed extended events that capture each trial (stimulus and response). I labelled all the recording prior to the start of the trials as bad segments so that it isn't included in my analysis. Is that the right approach?

I don't understand why you need to create bad segments.
From the continuous file (rather than the fully imported file), you create extended events, and then you import these extended events as epochs into the database. I don't understand what role the bad segments play here.
The only obvious reason I could see would be to compute a PSD from the continuous file using only the "non-bad" segments. Is this what you tried to do?
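To illustrate what "creating extended events" from the original simple events might look like, here is a rough Python sketch. It is illustrative only: the pairing rule (each stimulus matched with the first response that follows it) and the data structures are assumptions made for the example, not Brainstorm's actual format.

```python
# Illustrative sketch (not the Brainstorm API): turning simple events
# (stimulus onsets and the responses that follow them) into extended events
# covering each trial. The pairing rule is an assumption for the example.

def build_extended_events(stim_times, resp_times):
    """Pair each stimulus with the first response occurring after it and
    return (onset, offset) extended events, one per trial."""
    extended = []
    responses = iter(sorted(resp_times))
    resp = next(responses, None)
    for stim in sorted(stim_times):
        # advance to the first response occurring after this stimulus
        while resp is not None and resp <= stim:
            resp = next(responses, None)
        if resp is None:
            break  # stimulus without a following response: skip it
        extended.append((stim, resp))
    return extended

print(build_extended_events([1.0, 5.0, 9.0], [2.2, 6.1, 10.4]))
# [(1.0, 2.2), (5.0, 6.1), (9.0, 10.4)]
```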

I wanted each trial (0-back, 2-back and 3-back) to be epoched with a length of 2 seconds so I used the "split in time blocks of 2s" option. Does that tally with what you said?

If you expect to get 2s-blocks, your selection of options is correct, no problem.
My concern is more about the use of these 2s blocks. I've seen many Brainstorm users try to average short blocks like this, which in most cases makes no sense. What are you expecting to do with them?

This prompted my decision to have each event in separate folders and then combine all the data in the entire session into one file. Is this the best approach that should have been followed?

This is correct, but only if the channel file is strictly the same across all the recording sessions. Be aware that the channel file includes the SSP and ICA component selection. If you have removed some ICA components independently for each recording session, then the three channel files contain different information. See the field Projector in the channel file: https://neuroimage.usc.edu/brainstorm/Tutorials/ArtifactsSsp#On_the_hard_drive
The imported trials are all correctly cleaned with the correct information, but if you group the trials from the 3 recording sessions in the same folder, then the description of the removed components becomes inaccurate (more precisely, only one channel file - and one IC selection - is kept and considered to apply to all the trials).
This has no impact if you work only at the sensor level (computing averages, statistics, time-frequency, connectivity measures). However, if you are expecting to do some source estimation from these recordings, this might introduce some unwanted biases: the mixing matrix corresponding to the IC selection (saved in the Projector field of the channel file) is applied to the forward model before the computation of the inverse model. This might result in applying the wrong mixing matrix for two of the three recording sessions. The impact is possibly very minor, but formally, it is still better to avoid it.
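To make the projector issue concrete, here is a minimal numpy sketch (toy dimensions and random component topographies, not real Brainstorm data) showing that two sessions' projectors are not interchangeable:

```python
# Minimal numpy sketch of the issue described above, with made-up dimensions:
# an SSP/ICA cleaning step is stored as a projector matrix P (the "Projector"
# field of the channel file), and source estimation applies P to the forward
# model G before inverting. Keeping only one channel file for all sessions
# means applying one session's mixing matrix to the others' forward models.

import numpy as np

rng = np.random.default_rng(0)
n_channels, n_sources = 32, 100

# One removed component per session (different topographies u1, u2)
u1 = rng.standard_normal((n_channels, 1)); u1 /= np.linalg.norm(u1)
u2 = rng.standard_normal((n_channels, 1)); u2 /= np.linalg.norm(u2)
P1 = np.eye(n_channels) - u1 @ u1.T   # projector removing component 1
P2 = np.eye(n_channels) - u2 @ u2.T   # projector removing component 2

G = rng.standard_normal((n_channels, n_sources))  # toy forward model

# Projectors are idempotent (applying one twice changes nothing) ...
assert np.allclose(P1 @ P1, P1)
# ... but they are not interchangeable: P1 @ G and P2 @ G differ, so one
# shared channel file applies the wrong projector to 2 of the 3 sessions.
print(np.linalg.norm(P1 @ G - P2 @ G) > 0)  # True
```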

I apologize for the complexity (and possible lack of clarity) of the explanations above.
The take-home message is simple: if you used ICA to clean artifacts AND you want to estimate sources for these recordings, prefer NOT using the option "Create a separate folder for each event type". In all other cases, what you did is OK.

Additional comment about preprocessing:
I think I would have preferred a different order for the preprocessing:

  1. Band-pass filter (the higher the sampling frequency, the better the result of the filtering), then
  2. Import with the resampling option selected.

Thank you for your explicit and well-detailed response. The goal of preprocessing the raw data file is to compute functional connectivity from the data.

As regards your second comment, I named the other segments as bad segments to exclude them from the epoching stage of processing the data. Since epoching has to be done before computing functional connectivity for each event (0-back, 2-back and 3-back), I intended to use an epoch length of 2 seconds on each extended event. That prompted my decision to use time blocks of 2 seconds. Please correct me if that does not achieve the result.

Concerning your third comment, since I'm computing connectivity measures, I think it will not affect the result (just as you said). However, in the future, whenever I perform source estimation, I'll uncheck that option.
Just a quick question on that: in a situation where source estimation is used and the option to create a separate folder is unchecked, will the option of concatenating all the folders and creating one connectivity matrix still work (as seen in the attached figure)?

I named the other segments as bad segments to exclude them from the epoching stage of processing the data. Since epoching has to be done before computing functional connectivity for each event (0-back, 2-back and 3-back), I intended to use an epoch length of 2 seconds on each extended event. That prompted my decision to use time blocks of 2 seconds. Please correct me if that does not achieve the result.

Are these bad segments overlapping with the segments of interest?
If they don't, then I think they are useless... probably harmless too, except for the extra work they require.

Just a quick question on that: in a situation where source estimation is used and the option to create a separate folder is unchecked, will the option of concatenating all the folders and creating one connectivity matrix still work (as seen in the attached figure)?

Yes, this option should remain available.

No, they do not overlap. Was the decision to use the time blocks right?

Assuming I had not marked the segments as "bad", if I select ONLY the extended events of interest (0-back, 2-back, 3-back) while epoching, will other segments be added to my analysis? In short, will only the time of the selected events be used while epoching?

Thank you for your response.

Was the decision to use the time blocks right?

Yes, this looks correct.
But if all the blocks have the same duration, you could simply have used the simple events indicating the beginning of the blocks and the option "Epoch time" (from 0s to the duration of the blocks), still combined with the "split" option. I think this would have produced the same output.
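This equivalence can be sketched in a few lines of Python (block onsets and the 6 s duration are made up for the example; this is an illustration of the idea, not Brainstorm code):

```python
# Sketch of the suggested equivalence: splitting extended events into 2 s
# sub-epochs vs. using simple onset events with a fixed "Epoch time" window,
# then splitting. Onsets and durations are made up for the example.

def split(t0, t1, dur=2.0):
    """Consecutive (start, end) blocks of length `dur` inside [t0, t1]."""
    n = int(round((t1 - t0) / dur))
    return [(t0 + i * dur, t0 + (i + 1) * dur) for i in range(n)]

# Option A: extended events covering each whole 6 s block, then split
extended = [(10.0, 16.0), (30.0, 36.0)]
epochs_a = [e for (t0, t1) in extended for e in split(t0, t1)]

# Option B: simple events at block onsets + "Epoch time" [0 s, 6 s], then split
onsets = [10.0, 30.0]
epochs_b = [e for t in onsets for e in split(t + 0.0, t + 6.0)]

print(epochs_a == epochs_b)  # True
```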

if I select ONLY the extended events of interest (0-back, 2-back, 3-back) while epoching, will other segments be added to my analysis

Only the segments you select in the Import options are imported.

Thank you

Hello,

I just understood what you meant in this part. Thank you for spotting that! I used "Review raw file" to create a link to the raw file, then exported the dataset to MATLAB using the link (figure 1), made some changes to its initial events (figure 2), and imported the file back from MATLAB using the link to the raw file to continue further operations. Is this an efficient way of importing the dataset into BST, and will this import the raw file or just a link to it?
If it's not an efficient way, what approach would you suggest I use to import a link to the raw file after editing the file in MATLAB?

Also, I was able to perform ICA using this approach, but I'd love to know if there is a more appropriate way of doing it.


This looks good.
If you don't have hundreds of files to process in this way, and if you don't expect to produce a script that reproduces the entire analysis from the raw file, then it is probably OK. Alternatively, you could load/save the link .mat file directly:
https://neuroimage.usc.edu/brainstorm/Tutorials/Scripting#Example:_Editing_events

However, you never need to import the whole file as one block, as your initial screen capture showed.

Also, I was able to perform ICA using this approach, but I'd love to know if there is a more appropriate way of doing it.

What you did is OK only if you are not expecting to compute sources.
Otherwise, you'd rather import the trials of each session in separate folders, and compute a different inverse model for each session.

Thank you!

You've been helpful

Hello,

I want to perform functional connectivity (FC) analysis on some selected extended events in my recording (the coloured segments in figure 1) and, as seen, there are other segments that are not needed for my analysis. If I specify that BST should use events when epoching (fig 2), will the unneeded segments still be considered by BST in the further analyses I'll be doing (like when I select "All files" when computing FC)?


If I specify that BST should use events when epoching (fig 2), will the unneeded segments still be considered by BST in the further analyses I'll be doing (like when I select "All files" when computing FC)?

If you do the following:

  1. Link the original raw file to the database (right-click on an empty subject > Review raw file)
    https://neuroimage.usc.edu/brainstorm/Tutorials/ChannelFile#Link_the_raw_files_to_the_database
  2. Edit the .mat file representing the link to the continuous file to add your custom events (as discussed earlier)
    https://neuroimage.usc.edu/brainstorm/Tutorials/Scripting#Example:_Editing_events
  3. Import the green/yellow/blue extended events as new epochs in the database (right-click on the "Link to raw file" > Import in database)
    https://neuroimage.usc.edu/brainstorm/Tutorials/Epoching
  4. Select these imported epochs in the Process1 tab and run any process on them
    https://neuroimage.usc.edu/brainstorm/Tutorials/PipelineEditor

=> Then the process would run only on the files selected in the Process1 list; there is no reason for Brainstorm to go back to the original continuous files.

Maybe there is some additional problem I don't understand here. I apologize if there is something obvious I missed from your description.

Thank you
That was helpful!