I need to analyze an EEG dataset from someone in my research group, with a network analysis at the source level as the final goal. The dataset originally consisted of 10 minutes of resting-state EEG data (sampling rate = 512 Hz), but it has been split into epochs of 3 seconds for manual artifact rejection. The dataset I've been given consists of 200 3-second epochs per participant, and I know which epochs are "good" and which are "bad". Every epoch has been saved separately, so I have 200 separate .mat files for each participant. My first idea was to load each epoch into Brainstorm as a separate file and then compute the source-level activity for each epoch. Once all source-level results were obtained, I would calculate some functional connectivity measure and average the values obtained from the epochs. However, I was reading on this forum that resting-state data should be loaded as raw data, meaning a single continuous file instead of separate epochs (see: Importing Resting State Data). My first question is why this should be done.
My second question is whether there is a way to make a single continuous file from the epochs I currently have. I believe simply pasting them together would cause problems due to sudden voltage jumps between epochs wherever a "bad" epoch used to sit and has now been deleted. Maybe a Hanning window could be used to eliminate these voltage jumps, but I am not sure whether overlap between segments would be needed.
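To make the concern concrete, here is a minimal sketch in Python/NumPy of what concatenation with a short cross-fade at each boundary might look like. This is purely illustrative, not a Brainstorm feature: the `concatenate_epochs` helper and the `fade_samples` length are my own assumptions.

```python
import numpy as np

def concatenate_epochs(epochs, fade_samples=64):
    """Concatenate non-contiguous single-channel epochs, cross-fading
    over a short span to suppress the voltage jump at each boundary.
    Illustrative sketch only: the fade length is an arbitrary choice,
    and the result is shortened by `fade_samples` per boundary."""
    out = epochs[0].astype(float)
    ramp = np.linspace(0.0, 1.0, fade_samples)  # linear blend weights
    for ep in epochs[1:]:
        ep = ep.astype(float)
        # blend the tail of the running signal with the head of the next epoch
        out[-fade_samples:] = out[-fade_samples:] * (1.0 - ramp) + ep[:fade_samples] * ramp
        out = np.concatenate([out, ep[fade_samples:]])
    return out

# two 3 s epochs at 512 Hz with different DC levels: the blend removes the step
fs = 512
joined = concatenate_epochs([np.ones(3 * fs), np.full(3 * fs, 5.0)])
```

Subtracting each epoch's mean before concatenating would further reduce DC offsets at the seams; whether any of this is statistically acceptable for the downstream connectivity measure is exactly the open question.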
My first idea was to load each epoch as a separate file into brainstorm
This is what you should do if you have separate files as input.
then compute the source level activity for each epoch
Technically, you would compute only one inverse model for all the epochs, and access the sources for each epoch as links. If you are not familiar with these concepts, start by reading the introduction tutorials, at least through #23: https://neuroimage.usc.edu/brainstorm/Tutorials
I was reading on this forum however and read that resting-state data should be loaded in as raw data, meaning a single continuous file instead of separate epochs
The best procedure depends partly on which connectivity measure you want to use.
For coherence, for instance, it is best to compute it on long continuous recordings, where a few jumps are not too damaging. But it could also be acceptable to compute the coherence independently on each 3-second segment and average the results.
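As an illustration of the second option (per-segment coherence, then averaging), here is a minimal sketch in Python using SciPy rather than Brainstorm. The toy two-channel signals and the 1-second Welch window length are arbitrary assumptions for the example:

```python
import numpy as np
from scipy.signal import coherence

fs = 512                      # sampling rate, as in the dataset
n_epochs, n_samples = 5, 3 * fs  # 3 s epochs

# toy data: two channels sharing a common component, plus independent noise
rng = np.random.default_rng(0)
epochs = []
for _ in range(n_epochs):
    common = rng.standard_normal(n_samples)
    x = common + 0.5 * rng.standard_normal(n_samples)
    y = common + 0.5 * rng.standard_normal(n_samples)
    epochs.append((x, y))

# coherence computed independently on each 3 s epoch, then averaged
cohs = []
for x, y in epochs:
    f, cxy = coherence(x, y, fs=fs, nperseg=fs)  # 1 s Welch windows
    cohs.append(cxy)
mean_coh = np.mean(cohs, axis=0)
```

Note that with only 3 s per segment, each per-segment coherence estimate rests on very few Welch windows, which is one reason long continuous recordings are often preferred.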
Deciding which solution is best is not trivial, and it is actively discussed in other threads on this forum; see the example below. Try to reproduce exactly what was done in your reference publications, as differences in the processing options may give completely different results.