Checking to see if my script for extracting EEG source data is correct

Hello Francois, I was hoping you could check over my source script to see if I followed the directions right. Here is a little info about the EEG data: it was all pre-processed (cleaned) in EEGLAB and then imported into Brainstorm. It is a continuous file of three 1-minute recordings: a pre-stimulation baseline, the stimulation period, and a post-stimulation baseline. No individual MRIs were used; we are using Brainstorm's default head model.
Here is my source script:

  1. Create a new protocol
    a. Yes, use protocol's default anatomy
    b. No, use one channel file per acquisition run
  2. Create the subject (under the Anatomy tab)
    a. Yes, use protocol's default anatomy
    b. No, use one channel file per acquisition run
  3. Import MEG/EEG (click on the subject and Import MEG/EEG)
    a. EEG scaling factor set to 10 mm
    b. Import the EEG file. Since this is a continuous data file, I deselect "use events" but check "create a separate folder for events"

    c. (Sensors/anatomy registration) Refine registration now? YES
       It still looks odd after this step.
    d. To fix this, I right-click on the EEGLAB channel file --> MRI registration --> Edit; the head pops up, I click on "Project electrodes on surface", and it looks perfect.

  4. Create head model: I click back on the subject and click "Create head model". I check whether the fiducials are correct, and they always seem to be by default; I've never had to move them. Question: If you are using the default head model, would you ever need to change the fiducials?
    a. The OpenMEEG BEM dialog pops up; I select "Cortex surface" as the source space and "OpenMEEG BEM" as the forward modeling method.
    b. For the BEM layers and conductivities I leave everything at the default.

  5. Create sources
    a. I right-click on the dataset folder --> Noise covariance --> No noise modeling, since all of the EEG I imported is meaningful data.
    b. I right-click back on the dataset folder --> Compute sources [2018]. Since I am using continuous (non-focal) data, I use minimum norm imaging, and for the measure I use sLORETA. Question: Is it OK to use minimum norm imaging and sLORETA with no noise covariance?

  6. Extracting source data
    a. To extract my source data, I click on my raw file, move it to the "Files to process" box, click on the source button (the brain icon) on the left, and then Run.
    b. The Pipeline editor pops up and I select the Extract process.
    c. Lastly, here is a picture of what I select. Question: If I'm using one continuous file, I don't understand the difference between "Concatenate signals (dimension 1)" and "Concatenate time (dimension 2)". Both options give me the same values. I'm assuming this has something to do with ERPs.
    d. After I extract my values, I do further analysis in MATLAB and treat this source data exactly the same way as EEG data when applying my MATLAB scripts for different analyses (wavelet transformation, power analysis, partial directed coherence, etc.).

I'm hoping this is all correct. Thanks for your time! Sorry this is so long.

Most of it looks good.
A few comments below.

I don't understand the part about creating a separate folder for the events. If you do not have events in your three 1-minute continuous recordings, just import them without selecting any events. Then move the files around the way you want (create folders, rename...): organize the files in whatever way suits you. Since you selected "No, use one channel file per acquisition run", if you use multiple subfolders you will need to copy-paste all the information needed for source estimation (channel file with correct positions, forward model, inverse model) into each folder. If you don't really need multiple subfolders, it might be easier to keep all the blocks of recordings in the same folder.
But these are only convenience considerations; you probably figured out something that works for you, so no need to change anything.

(Sensors/anatomy registration) refine registration now? YES

If you are using an anatomy template, you should not try to refine the registration of the electrode positions (coming from an EEGLAB template?) with the head surface in the Brainstorm database. The two surfaces don't match, and using a rigid Iterative Closest Point algorithm to align them WILL fail.

You should answer "no" to this question. Then place the electrodes the way you want on the head (right-click on the channel file > MRI registration > Edit, then Rotate/Translate/Resize/Project), or use the template positions available for the ICBM152 anatomy (right-click on the channel file > Add EEG positions > ICBM152 > ...).

If you are using the default head model, would you ever need to change the fiducials?

No. This is irrelevant for EEG most of the time.
Instead, you manually adjust the positions of the electrodes on the surface of the head.

I right-click on the dataset folder --> Noise covariance --> No noise modeling, since all of the EEG I imported is meaningful data.

Don't you have a pre-stim baseline?
You would get much more interesting information by contrasting the "during stim" and "post stim" recordings vs. the "pre stim" recordings.

Is it OK to use minimum norm imaging and sLORETA with no noise covariance?

There is no clear contraindication, but you might get weird central localizations.
Note that sLORETA is popular in EEG largely because of another implementation of sLORETA from R. Pascual-Marqui. Among the Brainstorm and MNE-Python developers, we are more familiar with dSPM.
The two might look very different for a single file, but ideally, after doing all your group statistics (contrasts between groups or between experimental conditions), you would obtain similar output with both.
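
For context, this is just the generic minimum-norm formulation (not copied from the Brainstorm code): the imaging kernel is roughly W = R * G' * (G * R * G' + lambda^2 * C)^(-1), where G is the lead field, R the source prior covariance, and C the noise covariance. Selecting "No noise modeling" amounts to setting C to the identity matrix, i.e. assuming all channels are equally noisy and uncorrelated, which is one reason the maps can look odd when the channel noise levels are in fact very unequal.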

b. The Pipeline editor pops up and I select the Extract process.

If you are computing scout time series only, prefer using the process "Extract scouts time series" instead.
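
For reference, this is roughly what that process looks like when scripted (a sketch only: the atlas and scout names are placeholders, and the option names/values may differ in your Brainstorm version, so generate the real call with the Pipeline editor's "Generate .m script" menu):

    % Sketch: extract scout time series from source files already in the database.
    % sFiles = the source files dropped in the Process1 box (struct array).
    sFiles = bst_process('CallProcess', 'process_extract_scout', sFiles, [], ...
        'timewindow',     [], ...               % empty = use all time samples
        'scouts',         {'Desikan-Killiany', {'precentral L', 'precentral R'}}, ... % placeholder scouts
        'scoutfunc',      1, ...                % 1 = Mean across vertices
        'isflip',         1, ...                % flip signs of opposite source orientations
        'isnorm',         0, ...
        'concatenate',    1, ...                % only matters with multiple input files
        'save',           1, ...
        'addrowcomment',  1, ...
        'addfilecomment', 1);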

Question: If I'm using one continuous file, I don't understand the difference between "Concatenate signals (dimension 1)" and "Concatenate time (dimension 2)". Both options give me the same values.

This is relevant only if you have multiple files: it defines how the scout time series extracted from multiple files are concatenated together.
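
To illustrate with plain MATLAB (a toy example, not Brainstorm code): if each file gives you an [Nscouts x Ntime] matrix, the two options differ only in which dimension grows when several files are combined:

    % Two hypothetical files, 3 scouts, 1000 time samples each
    A = rand(3, 1000);          % scout time series from file 1
    B = rand(3, 1000);          % scout time series from file 2
    catSignals = cat(1, A, B);  % "Concatenate signals (dimension 1)" -> 6 x 1000 (more signals)
    catTime    = cat(2, A, B);  % "Concatenate time (dimension 2)"    -> 3 x 2000 (longer signals)
    % With a single input file there is nothing to concatenate, so both options give the same result.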

d. After I extract my values, I do further analysis in MATLAB and treat this source data exactly the same way as EEG data when applying my MATLAB scripts for different analyses (wavelet transformation, power analysis, partial directed coherence, etc.).

Sounds good.
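
If it helps for the MATLAB part: the matrix file created by the extraction can be loaded directly. The field names below (Value, Time, Description) are the standard ones for Brainstorm "matrix" structures, but the file name is only a placeholder, so check the actual file in your database folder:

    % Sketch: load an extracted scout/source matrix saved by Brainstorm
    sMat    = load('matrix_scout_XXXXXX.mat');  % placeholder file name
    signals = sMat.Value;         % [Nsignals x Ntime] scout/source time series
    t       = sMat.Time;          % [1 x Ntime] time vector, in seconds
    labels  = sMat.Description;   % one label per row of Value
    % From here you can run your own analyses (wavelets, power, PDC, ...) just like sensor data.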

Hi Francois,

Thank you very much for your helpful reply!

  1. Concerning the electrode positions: if I do it in this exact order --> I click on the EEGLAB channel file --> MRI registration --> Edit --> REFINE registration points --> THEN project electrodes on scalp, it gives me this image, which looks good to me. The electrodes appear to be in the correct position; what do you think?

If I do it in any other order, it does not give me the image above that I think is correct. Do you think this is how it is supposed to look? Unfortunately, I can't use the template file, since I am using a subset of electrodes.

  2. Concerning the no noise covariance: yes, we do have a baseline of 60 seconds. However, we will use this baseline data for event-related synchronization/desynchronization (ERS/ERD) of the during- and post-stim data. In addition, in a separate analysis, we are also comparing baseline day 1 data to baseline data at 3 months post. If we use the baseline day 1 data as the noise covariance, would this corrupt our event-related synchronization/desynchronization analysis? Or is the noise covariance sort of like an event-related synchronization/desynchronization analysis on its own? Perhaps a simple power analysis using the baseline data as the noise covariance, and the during/post-stim data as the data I'm extracting, would give me similar if not better results than the original plan for ERS/ERD?

I guess I'm just confused as to what the noise covariance does with the baseline data, and how that may impact or improve our analysis. Lastly, our baseline, during-stim, and post-stim data are different lengths (because of the cleaning methods we use), but they cover roughly similar time windows. I don't know if this would be a problem.

Thanks very much for your help!

Concerning the electrode positions: if I do it in this exact order --> I click on the EEGLAB channel file --> MRI registration --> Edit --> REFINE registration points

This is incorrect. Since this method tries to align two different surfaces with a rigid registration method, it may fail completely; the outcome is essentially random. There is no reason it should give a result you can trust.
As I already wrote, you should either manually register the electrodes on the scalp, or use the standard electrode positions available with the Brainstorm distribution.

which looks good to me. The electrodes appear to be in the correct position; what do you think?

We don't know what your cap looks like or how it was placed on the head of your subjects: you are the only person who can decide on the quality of the electrode positioning.

If we use the baseline day 1 data as the noise covariance, would this corrupt our event-related synchronization/desynchronization analysis? Or is the noise covariance sort of like an event-related synchronization/desynchronization analysis on its own?

@Sylvain, @pantazis @John_Mosher

I would recommend the least informative model for noise covariance, which is the "no noise" option. There is growing, strong evidence that baseline pre-stimulus activity conditions the event-related response on a single-trial basis.

Thanks
(I removed my previous comment to avoid any confusion)

Hi Francois,

Thank you for the help with the EEGLAB electrode positioning. We ended up using the default EEG cap --> ICBM152 --> EGI --> GSN HydroCel 256E1. Then, in the channel file editor, we had to manually change the TYPE of E257 (which is electrode Cz) to EEG instead of "No-loc", because it would not show up on the scalp. Cz then popped up, but it was inside the head. We manually changed the position of Cz so that it was in the correct spot according to our cap. But when we plot the electrodes on the scalp, Cz does not align with the Z axis; in fact, it looks like the electrode behind Cz (E81) aligns with Z. Is Cz supposed to align with Z? If yes, is there a way to make the XYZ cross bars move forward?

Here is a picture

Thanks very much

Cz/E257 is the reference in EGI caps; this is why it is not included in the template positions in Brainstorm. You do not have data recorded at this site, so I'm not sure that adding it is a good thing.

The Z axis depends on where the NAS/LPA/RPA points are placed. The SCS/CTF coordinate system used in Brainstorm does not define the Z axis as going through the vertex. You should not try to infer anything from the position of the intersection of the Z axis with the head surface.
https://neuroimage.usc.edu/brainstorm/CoordinateSystems

Hi Francois,
Thank you! We re-reference to the average reference in EEGLAB so that Cz can be included. I think our head model is complete now! Thank you so much for your help.

Source-Modeling Auditory Processes of EEG Data Using EEGLAB and Brainstorm

https://www.frontiersin.org/articles/10.3389/fnins.2018.00309/full