Help with scripting multiple sessions per subject

Hello all,

I am analyzing EEG data from 13 subjects; each subject was recorded on 5 separate occasions (sessions). Each session comprised 180 auditory stimulations * 4 conditions. I do not have the subjects’ anatomy, so I am using the default anatomy.

Despite the experimenter’s best efforts, I cannot assume the electrode cap was positioned identically in each session, so I must process each session independently, project to source space, z-score, spatially smooth, and then concatenate the source activations across sessions. Finally, I can average the data per condition for each subject and perform statistical tests on the per-subject averaged source data. Is that correct?

If so, I’m having trouble doing it in either the GUI or from a script (preferred).

How do I keep the sessions separate prior to concatenation?

I’ve made a little progress. My workflow, for one subject, is now as follows:

  • For each session:
      • Link to the raw file (EEGLAB format)
      • Detect blinks
      • ssp_eog
      • import_data_event, condition=num2str(sess_ix), createcond=0
      • avg_ref
      • baseline
      • detectbad
      • noisecov (full)
      • zscore <- wrong

Then I calculate the head model once.
Then, using the GUI, I copy-paste the headmodel to each condition (1-5).
Then I calculate sources for each condition. (Remember that a condition is a session).
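
For reference, here is roughly how I am scripting the per-session loop. All process names and options below are my assumptions based on Brainstorm’s bst_process('CallProcess', ...) API, and may differ across versions:

    % Per-session sketch (assumed process names/options; RawFiles is a cell
    % array of the 5 EEGLAB file paths for this subject).
    for sess_ix = 1:5
        % Link the raw EEGLAB file to the database
        sRaw = bst_process('CallProcess', 'process_import_data_raw', [], [], ...
            'subjectname', 'Subject01', ...
            'datafile',    {RawFiles{sess_ix}, 'EEG-EEGLAB'});
        % Detect blinks, then compute the EOG SSP projectors
        bst_process('CallProcess', 'process_evt_detect_eog', sRaw, [], ...
            'channelname', 'EOG', 'eventname', 'blink');
        bst_process('CallProcess', 'process_ssp_eog', sRaw, [], ...
            'eventname', 'blink');
        % Epoch around the stimulation events; one condition per session
        sEpochs = bst_process('CallProcess', 'process_import_data_event', sRaw, [], ...
            'condition',  num2str(sess_ix), ...
            'createcond', 0, ...
            'eventname',  'stim', ...           % hypothetical event name
            'epochtime',  [-0.100, 0.500]);     % assumed epoch window
        % Average reference, baseline removal, bad-trial detection, noise covariance
        bst_process('CallProcess', 'process_eegref',    sEpochs, [], 'eegref', 'AVERAGE');
        bst_process('CallProcess', 'process_baseline',  sEpochs, [], 'baseline', [-0.100, 0]);
        bst_process('CallProcess', 'process_detectbad', sEpochs, []);  % thresholds omitted here
        bst_process('CallProcess', 'process_noisecov',  sEpochs, [], 'baseline', [-0.100, 0]);
    end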

Questions:
  1. Is it OK to z-score the channel data as long as I calculated the noise covariances before z-scoring? No.
  2. Is there a way to link the head model without copy-pasting? I need condition(session)-specific noise covariances and source kernels.
  3. How do I get from 5 sessions * 4 stim_types per subject (see attached figure) to a single set of 4 stim_types per subject?

Regarding the last question, I think that I cannot work in channel space because the source kernels differ across sessions. So to get one output per subject, I either need to calculate the average source map for each session * stim_type and then average those averages across sessions, or I need to use scouts within-subject. What is the recommended way?

If the recommended way is to use scouts, then how do I decide where to put the scout? From a single session? I assume these scouts are the same scouts I will be using for the group stats, and it seems unlikely that a single subject’s single session will give me a good scout for that.

Edit: It turns out it wasn’t the copy-pasting of the head model that was the problem; it was the calculation of the z-scores that prevented me from getting the sources linked. I guess that answers my question as to whether it is OK to z-score the channel data after calculating the noise covariances (answer: No).

Using the GUI or the built-in processes, calculating z-scores on sources is prohibitively slow, and the storage requirements are too large for me. Is there another way to join data across sessions while preventing one particularly high-gain session from having too much influence? It would be great if there were a way to calculate the source variances once and then use 1/variance as an additional kernel folded into the source calculation process. Does that make sense?

In the meantime, I will try to figure out how to calculate the z-score of the sources manually (probably faster), and I will store only the average per session, then average across the averages.
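
Something like the following is what I have in mind; this is just an illustrative sketch with made-up variable names (K is the imaging kernel, F the sensor recordings, iBaseline the baseline sample indices):

    % Illustrative sketch: z-score sources computed on the fly from the imaging
    % kernel K (Nsources x Nelectrodes) and recordings F (Nelectrodes x Ntime),
    % without storing full source matrices on disk.
    S     = K * F;                        % full source matrix for one file
    mu    = mean(S(:, iBaseline), 2);     % baseline mean per source
    sigma = std(S(:, iBaseline), 0, 2);   % baseline std per source
    Z     = bsxfun(@rdivide, bsxfun(@minus, S, mu), sigma);  % z-scored sources
    % The 1/variance idea above would amount to folding sigma into the kernel:
    % Kz = bsxfun(@rdivide, K, sigma);    % then Kz * F is variance-scaled on the fly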

Hi,
Sorry for not replying earlier, I was away for a while.

This looks really good; you structured your analysis exactly the way it should be. You’re almost there, let me clarify a few details.

  1. If the electrodes were detached from the head between the sessions, it is correct to calculate one inverse kernel for each recording session, as the noise covariance matrix is most likely very different across sessions (it is not really the electrode position that matters, but the quality of the contact with the skin, which is different each time you place the electrodes).

  2. No, you can’t calculate a z-score on the recordings and then estimate the sources; you have to do it the other way around: first estimate the sources, then calculate the z-score.

  3. What do you mean by “concatenation” of the source activations across sessions? Isn’t it an average per subject and per condition that you want to do?

  4. The z-score of the sources takes a long time because it has to recalculate the full source matrix (Nsources x Ntime). Before this step, the sources are saved in an optimized form: an imaging kernel (Nsources x Nelectrodes) plus the recordings (Nelectrodes x Ntime). This multiplication is usually performed on the fly for visualization purposes, but some operations (including the z-score) require the explicit multiplication, hence the computation time and the explosion in storage.
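
  To give an order of magnitude, with illustrative numbers only (your counts will differ):

    % Order-of-magnitude example (made-up but typical numbers, double precision):
    % kernel:      15000 sources x   64 electrodes x 8 bytes ≈   7.7 MB per session
    % full matrix: 15000 sources x 1000 samples    x 8 bytes ≈ 120   MB per trial
    % With 720 trials per session, the explicit multiplication needs on the
    % order of 86 GB, versus a few MB for the kernel form.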

  5. Luckily, the z-score is an operation you can apply after the average (instead of trial by trial):

  • calculate the inversion kernel for each session
  • calculate the average of the sources for each condition (=groups of trials) AND subject: drag and drop all the database to the Process1 tab, select the sources files (button “Process sources” in tab Process1), and run the process Average > Average files > By trial group (subject average).
  • calculate the zscore of these averages (one per subject and condition saved in the “intra-subject” node)
  6. Head models: copy-pasting between sessions and subjects is fine if you are using the same anatomy and the same electrode positions.

Then, when you know how to do it with the interface, you should be able to replicate it easily in a script.
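
For example, something along these lines. This is only a sketch: the process names and option values are assumptions, so verify them against a script generated with the pipeline editor’s “Generate .m script” menu:

    % Sketch: average the sources by trial group, then z-score the averages.
    % 1) Select all the source files of one subject
    sFiles = bst_process('CallProcess', 'process_select_files_results', [], [], ...
        'subjectname', 'Subject01');
    % 2) Average by trial group (subject average): one file per condition
    sAvg = bst_process('CallProcess', 'process_average', sFiles, [], ...
        'avgtype',  5, ...          % assumed code for "By trial group (subject average)"
        'avg_func', 1);             % arithmetic average
    % 3) Z-score these averages with respect to the baseline
    sZ = bst_process('CallProcess', 'process_zscore', sAvg, [], ...
        'baseline', [-0.100, 0]);   % assumed baseline window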

I hope this is clear, don’t hesitate to ask if you need more help.
Francois

Hi Francois,

Thank you for your detailed response.

I guess I didn’t make something clear: each subject participated in 5 sessions, each on a different day, but the sessions themselves follow an identical protocol. Within a session the subject is stimulated 720 times: 180 times for each of 4 different stimulus types (A-D). This type of protocol, wherein a subject returns to do the exact same experiment again and again, is fairly common in brain-computer interfacing because it allows us to assess how robust our classification methods are and whether the online feedback induces any learning.

The analyses I’m interested in performing relevant to this discussion are those that assess the differences in ERPs between stimulus types. At one level, I am looking at the single-subject single-session ERPs (e.g., unpaired test ~180 type-A vs ~180 type-B), and the way to do that is straightforward. I would also like to look at the grand level (5 subjects; paired, type-A vs type-B ERPs). For this type of test, each subject should be represented exactly once. Thus, I need to get a single ERP for each stimulus type for each subject for each source. The way to do that is not as clear.
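
To be concrete, at the group level I have in mind something like this (an illustrative sketch outside of Brainstorm, assuming I already have one value per subject, e.g., the mean source amplitude in a time window):

    % Illustrative paired comparison across subjects (hypothetical variables):
    % erpA, erpB are [Nsubjects x 1], one value per subject per stimulus type.
    [h, p] = ttest(erpA, erpB);   % paired t-test (MATLAB Statistics Toolbox)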

For classification purposes, we might concatenate all the ERPs from all sessions into a matrix and use that to train the classifier. Recording quality varies across sessions so we might zscore each session’s 180*4 ERPs before concatenating. That works as long as we can reasonably z-score our new test data prior to classification.

However, I’m not classifying here; I’m visualizing average differences between ERPs of known class membership. Since the number of bad trials is roughly the same in each session, I am comfortable averaging the <180 ERPs within a session and then averaging the 5 session ERPs to get 1 ERP per subject (for each stimulus type). It’s not clear to me whether I should z-score the <180 ERPs first. Does the source transformation already account for the kinds of impedance/gain-related differences we might expect between sessions?
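
My reasoning about averaging the averages, as a toy example with simulated numbers:

    % Toy example: average-of-averages vs pooled average (simulated data).
    % The two agree only when the per-session trial counts are equal, which
    % is why roughly equal bad-trial counts make me comfortable with it.
    x1 = randn(172, 1) + 1;                  % kept trials, session 1
    x2 = randn(179, 1) + 1;                  % kept trials, session 2
    pooled     = mean([x1; x2]);             % weights sessions by trial count
    unweighted = (mean(x1) + mean(x2)) / 2;  % what averaging the averages computes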

-Chad

Note that in the screenshot above, 1-5 refer to the session numbers within a single subject.

I keep getting the following error when I try to compute a single per-subject average.


***************************************************************************
** Error: Line 107: Operands to the || and && operators must be convertible to logical scalar values.
** 
** Call stack:
** >db_combine_channel.m at 107
** >bst_process.m>GetOutputStudy at 993
** >bst_call.m at 26
** >macro_methodcall.m at 37
** >bst_process.m at 30
** >process_average.m>AverageFiles at 237
** >process_average.m>Run at 112
** >macro_methodcall.m at 30
** >process_average.m at 24
** >bst_process.m>Run at 153
** >bst_call.m at 26
** >macro_methodcall.m at 37
** >bst_process.m at 30
** >panel_process1.m>RunProcess at 119
** >bst_call.m at 28
** >macro_methodcall.m at 39
** >panel_process1.m at 27
** >gui_brainstorm.m>CreateWindow/ProcessRun_Callback at 651
** >bst_call.m at 28
** >gui_brainstorm.m>@(h,ev)bst_call(@ProcessRun_Callback) at 266
** 
***************************************************************************

Warning: Inputs must be character arrays or cell arrays of strings. 
> In process_average>AverageFiles at 239
  In process_average>Run at 112
  In macro_methodcall at 30
  In process_average at 24
  In bst_process>Run at 153
  In bst_call at 26
  In macro_methodcall at 37
  In bst_process at 30
  In panel_process1>RunProcess at 119
  In bst_call at 28
  In macro_methodcall at 39
  In panel_process1 at 27
  In gui_brainstorm>CreateWindow/ProcessRun_Callback at 651
  In bst_call at 28
  In gui_brainstorm>@(h,ev)bst_call(@ProcessRun_Callback) at 266 

***************************************************************************
** Error: [process_average]  Average > Average files
** Line 277: Attempt to reference field of non-structure array.
** 
** Call stack:
** >process_average.m>AverageFiles at 277
** >process_average.m>Run at 112
** >macro_methodcall.m at 30
** >process_average.m at 24
** >bst_process.m>Run at 153
** >bst_call.m at 26
** >macro_methodcall.m at 37
** >bst_process.m at 30
** >panel_process1.m>RunProcess at 119
** >bst_call.m at 28
** >macro_methodcall.m at 39
** >panel_process1.m at 27
** >gui_brainstorm.m>CreateWindow/ProcessRun_Callback at 651
** >bst_call.m at 28
** >gui_brainstorm.m>@(h,ev)bst_call(@ProcessRun_Callback) at 266
** 
** 
***************************************************************************

I get this whether I select all the files (180 trials * 4 ‘conditions’ * 5 sessions), first average within sessions and then try to average the averages (4 ‘conditions’ * 5 sessions), or manually choose which files I want and average everything.

Hi Chad,

I need to get a single ERP for each stimulus type for each subject for each source. The way to do that is not as clear.

As I was saying in #5: calculate the average of the sources for each condition (=groups of trials) AND subject: drag and drop all the database to the Process1 tab, select the sources files (button "Process sources" in tab Process1), and run the process Average > Average files > By trial group (subject average).

But you're getting an error when doing this, so I need to fix it. Someone else is having the exact same problem, but I can't find where it comes from without having access to the database.
Could you send me some data via Dropbox (or our FTP server if the files are too big)?
Can you reproduce the error with just one subject? Try to identify the minimal configuration in which you can reproduce it, then export the protocol (or just one subject) to a zip file and send it to me: menu File > Export protocol, or right-click on a subject > File > Export subject.
It will be much easier for me to understand what is going on.
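
If you prefer the command line, something like this should work (the function arguments are my assumption, please double-check against export_protocol.m in your installation):

    % Assumed usage of export_protocol.m: export the current protocol
    % (or a single subject) to a zip file. 'ChadProtocol.zip' is a placeholder.
    export_protocol(bst_get('iProtocol'), [], 'ChadProtocol.zip');      % whole protocol
    % export_protocol(bst_get('iProtocol'), iSubject, 'Subject01.zip'); % one subject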

Does the source transformation already account for the types of impedance/gain-related differences we might expect between sessions?

Yes, transforming the recordings to the source domain is a way to abstract away all the acquisition details: positions of the electrodes, impedances, noise levels...
This is why, for MEG and multi-session EEG, we encourage moving quickly to source space, so that you can average across sessions and subjects easily.
I would recommend calculating the z-score only on the averaged source maps.

I'm not sure about the concatenation/classifier thing: is it relevant here, or is it a different topic?

Francois

Francois,
Don’t worry about the classification thing.

It’s quite late here. I’ll send a small database tomorrow after I’ve minimized the data required to reproduce the problem.

Thank you for helping me with this,
Chad

Hi Chad,

I got your data and fixed the bugs. You can now compute those averages.
The update is online, just use the menu Help > Update Brainstorm.

I also noticed that the electrodes were not properly placed on the head.
I would recommend that you change this and re-calculate the forward and inverse models:

  • Right-click on a channel file > MRI registration > Edit…
  • Bring the EEG cap down (use the Z translation button, read the tooltip for help)
  • Project the electrodes on the head surface (another button in the toolbar)
  • Copy-paste the channel file to the other conditions
  • Re-calculate the head model and source estimates
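
If you later want to script the copy to the other conditions, here is a sketch (the helper functions exist in the Brainstorm code base, but verify the exact arguments in your version):

    % Sketch: load the corrected channel file and assign it to the other studies.
    % The arguments of db_set_channel are assumptions; check db_set_channel.m.
    ChannelMat = in_bst_channel(CorrectedChannelFile);
    db_set_channel(iOtherStudies, ChannelMat, 2, 0);  % 2 = always replace, 0 = no auto-align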

Cheers,
Francois