Application of projectors

Hi Francois,

Can you please check to make sure projectors are properly applied?

I preview one raw file, compute blink projectors (the computation is very fast), then import several trials into the database.
Then I preview a second raw file and compute a separate set of blink projectors specifically for this file. During this procedure, Brainstorm takes a long time and shows a popup window reporting 'applying projectors'. Does this mean that the projectors of the second raw file are applied to the trials of the first raw file? This should not happen: I only want the new projectors applied to the new trials to be produced by the second raw file.

Thank you,
Dimitrios

Hi Dimitrios,
I cannot test this unless I have a very precise idea of what your database looks like.
Can you send me a screen capture of your database structure that clearly shows where the channel files, the two continuous files, and the imported trials are?
And also a screen capture of the suspicious message window, or the exact text of the message? (I could not find an "applying projector" message anywhere in the software.)
Thanks
Francois

Hi Francois,

I have uploaded data on the neuroimage server (home/ftp_neuroimage/ftp): MIT017_session1_tsss.fif and MIT017_session2_trans_tsss.fif. I perform the following steps in the analysis:

  1. preview the session 1 raw file.
  2. detect blink events (using channel BIO001).
  3. compute SSP for blinks, explicitly setting 'MEG' in the sensor types.
  4. compute SSP for blinks, explicitly setting 'EEG' in the sensor types.
    Both projectors work fine. However, I cannot see the topography of the EEG projectors. For example, see:
    https://dl.dropboxusercontent.com/u/4202951/toFrancois/screenshot1%20-%20error_cannot_see_EEG_projectors.jpg
    The left image is the first component for MEG; the right image is the first component for EEG. The latter is not visible, and the figure title also incorrectly states 'MEG'.
    It would be nice to a) see it! and b) indicate in the projector name whether the projector applies to MEG or EEG data.
  5. import trials for trigger ‘1’
  6. preview the session 2 raw file.
  7. detect blink events (using channel BIO001).
  8. compute SSP for blinks, explicitly setting 'MEG' in the sensor types.
  9. compute SSP for blinks, explicitly setting 'EEG' in the sensor types.
    Again, the EEG projectors are not visible. See:
    https://dl.dropboxusercontent.com/u/4202951/toFrancois/screenshot2%20-%20viewing%20projectors%20for%20second%20raw%20file.jpg
  10. when I plot the session 1 raw file, it now has a total of 4 projectors. Basically, the session 2 projectors were copied onto session 1. This is fine in itself; someone may want the new projectors available for all files. But they should not be automatically turned on! The default should be 'on' for the session they were created from, and 'off' for all other sessions, unless the user turns them on.
  11. even worse, I checked my session 1 imported trials. They have been modified, because the session 2 projectors were automatically applied to the trials imported from session 1 (e.g. trial1.F(100,1) was 4.7498e-12 and is now 4.7380e-12). Francois, please fix this, it is important! We should get a message asking: do you want your new projectors to be applied to the already imported data? The default should be 'no'! This is particularly important for scripting: right now my projectors are applied incorrectly, and depending on their origin and processing order, some trials are subjected to multiple projectors from different raw files. Help!
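To make precise what "applying projectors" does to the numbers in step 11, here is a toy rank-1 SSP sketch in plain NumPy (not Brainstorm code; all names and sizes are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_samples = 8, 100

# Toy epoch matrix F (channels x time) and a unit-norm artifact topography u
F = rng.standard_normal((n_channels, n_samples))
u = rng.standard_normal((n_channels, 1))
u /= np.linalg.norm(u)

# Rank-1 SSP projector: removes the spatial subspace spanned by u
P = np.eye(n_channels) - u @ u.T

F_clean = P @ F  # every value of F changes, as observed with trial1.F(100,1)
assert not np.allclose(F, F_clean)

# P is idempotent: applying it a second time changes nothing further
assert np.allclose(P @ F_clean, F_clean)
```

The point is that application rewrites the stored F matrix itself, which is why accidental application across sessions is destructive.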

Thank you!
Dimitrios

Hi Dimitrios,

4a) SSP EEG+MEG: fixed
4b) Modality in the projector name: done

10-11) The projectors need to be defined at the level of the channel file because they are used in the forward model. This means that you cannot define projectors that apply independently to different sessions sharing the same channel file.
So you have to change the definition of your subject and use the option "No, use one channel file per condition", so that you have one channel file per run.
This should ALWAYS be your configuration when processing multiple runs, because the head position is not the same in the two sessions. What you did in this protocol is incorrect: the channel file from the first session was overwritten by the one coming from the last file you linked to your database.

Cheers,
Francois

Hi Francois,

Thank you for fixing the bugs. Regarding channel files vs. multiple runs: using Elekta's maxfilter we can align multiple runs to the first run, so in principle a single channel file would be enough (all the data have exactly the same head position). However, I now realize the problem with storing the projectors in the channel file, since they need to be used in forward modeling. I was about to suggest you attach the projectors to the raw data links, but I see why this is not practical. Oh well, the world is not perfect.

Thank you,
Dimitrios

Hi Francois,

One more question: the projectors are stored in the channel file, but what about the F data in the trial files? Is the projector irreversibly applied to the data, or applied on the fly during visualization?

If it is permanently applied to the data, what happens when we enable/disable/create projectors on a related raw file that shares the same channel file? For example, assume some trials have already been imported with a projector applied; then I disable the projector on the raw file and import more trials. Will all trials have the same projectors?

Thank you,
Dimitrios

Hi Dimitrios,

No, there is no full support for SSPs on imported data. You should not modify the projectors after epoching the recordings.
Once projectors are applied to imported epochs, the values of the F matrix are modified and the projection cannot be removed.

If you calculate new projectors for a subject where you have both raw and imported recordings, the projectors are applied to all the files (link + epochs).
But if you check or uncheck the selected projectors while reviewing a continuous file, it has no effect on the epochs already imported.

In the case you describe: the first batch of trials would have the projector applied, not the second.
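To illustrate why the applied values "cannot be removed", here is a minimal sketch in plain NumPy (not Brainstorm code; the topography is made up): an SSP projector is rank-deficient, so one spatial dimension of the data is discarded for good and no inverse exists.

```python
import numpy as np

n = 6
u = np.ones((n, 1)) / np.sqrt(n)   # toy artifact topography (assumed)
P = np.eye(n) - u @ u.T            # rank-1 SSP projector

# P loses one spatial dimension, so P @ F cannot be inverted back to F
assert np.linalg.matrix_rank(P) == n - 1
assert abs(np.linalg.det(P)) < 1e-12
```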

Cheers
Francois

Hi Francois,

Another question about the projectors: when I use the process window to create a link to a raw file (Process > Import recordings > Create link to raw file), I get a script that produces the required link, but it always deletes the previous channel file. Unfortunately, this means that I also lose the projectors that were computed from the previous raw file. Could you add an option to the script to keep the previous channel file?

% Process: Create link to raw file
sFiles = bst_process('CallProcess', 'process_import_data_raw', sFiles, [], ...
    'subjectname',  SubjectNames{1}, ...
    'datafile',     {RawFiles{1}, 'FIF'}, ...
    'channelalign', 0);

Thank you,
Dimitrios

Hi Dimitrios,

I’ve added an option ‘Replace existing channel file’ in the process “Import recordings > Create link to raw file”.

But I would recommend that you never uncheck this option: you should have one channel file per run in your database, and you should calculate your projectors individually for each FIF file.
Moreover, if you add a new file after calculating the projectors on another one, the behavior can be unpredictable: in some cases it will use the projectors from the "global" channel file, and in others it will ignore them. You shouldn't organize your data this way.

Let me know if you need more information on how to safely arrange your database.
Francois

Hi Francois,

I understand this is a very complicated issue.

Brainstorm stores the projectors inside the channel file because they are used for forward modeling. I would argue that this is not the proper design, because it ties subject registration to the projectors, which are two very different processes.
-Projectors are created using noise covariance (SSP), eye blinks (SVD on data templates), heartbeats, etc. In all the cases that come to mind (correct me if I am wrong), projectors are defined from the MEG data without any dependence on the subject registration.
-Subject registration changes the position of the channels, but does not affect the projectors.

By combining the two in the same file, problems arise. On multiple occasions (refining subject registration using Polhemus points, etc.), I want to revert to the previous registration (it would be nice if such an option existed), but the only way to do so is to re-import the channel file. And by importing a channel file (directly from the fif file), I lose all my projectors. Since misregistration is worse than suboptimal artifact modeling, I always opt to correct the registration.

To resolve this issue, I recommend saving the projectors in a separate file, always next to a channel file.

Regarding the suggestion 'you should have one channel file per run in your database, and you should calculate your projectors individually for each FIF file': it is not that simple. Nearly all our MEG scans result in several fif files that:
-share the same registration: (A) several short recordings for the same subject produce different fif files that are aligned using the maxfilter trans option; or (B) a single long run produces multiple fif files because the acquisition computer cannot handle large files;
-share the same projectors (eyeblinks, SSP, etc.).
Requiring separate channel files and projectors per fif file is not practical, because most of the time the files share this information.

Minor comment: Also, calculating projectors separately per fif file can produce unstable results (case A).

I guess all of the above problems would be solved if there were a way to combine several fif files into one. Then there would be a single fif file per run (rather than several with the same registration), and no changes in Brainstorm would be necessary. Do you know of a software package that combines different fif files into one?

Thank you,
Dimitrios

Hi Dimitrios,

projectors are defined from the MEG data without any dependence on the subject registration

The topographies that we identify as representative of an artifact are dependent on the subject's position in the MEG.
Change the subject's position a little and the cardiac artifact will look different on the sensors.
The SSP projectors are built from those spatial patterns of values; therefore, the projectors do depend on the subject's registration.
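This dependence can be sketched numerically (plain NumPy, with made-up dimensions and a made-up "movement" angle, not Brainstorm code): a projector fitted on the original topography removes it exactly, but leaves a residual proportional to sin(theta) once the pattern is tilted by a head movement.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 32
u = rng.standard_normal(n)
u /= np.linalg.norm(u)             # artifact topography at head position 1

# A direction orthogonal to u (Gram-Schmidt)
v = rng.standard_normal(n)
v -= (v @ u) * u
v /= np.linalg.norm(v)

# A small head movement tilts the sensor pattern by an angle theta
theta = 0.2
u_moved = np.cos(theta) * u + np.sin(theta) * v

P = np.eye(n) - np.outer(u, u)     # SSP projector fitted at position 1

res_same = np.linalg.norm(P @ u)         # fully removed at position 1
res_moved = np.linalg.norm(P @ u_moved)  # residual left after the movement
assert res_same < 1e-10
assert np.isclose(res_moved, np.sin(theta))
```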

On multiple occasions (refining subject registration using Polhemus points, etc.), I want to revert to the previous registration (it would be nice if such an option existed)

Added to the to-do list, but it will take a very long time before I get there.
To easily save the channel files before modifications, you can create a separate condition/subject and copy-paste your channel file into it (CTRL+C / CTRL+V).

To resolve this issue, I recommend saving the projectors in a separate file, always next to a channel file.

This requires deep changes in the database structure that I won't be making immediately (an incredible amount of work, and painful transitions for most users).
Have you noticed that you can import/export your projectors manually in the "Select active projectors" window? You can manage your projectors the way you want.

Nearly all our MEG scans result in several fif files that:
-share the same registration: (A) several short recordings for the same subject produce different fif files that are aligned using the maxfilter trans option; or (B) a single long run produces multiple fif files because the acquisition computer cannot handle large files;
-share the same projectors (eyeblinks, SSP, etc.).

If the runs use the exact same registration, then yes, there is no problem with using one shared channel file. In this case, let me know if any options are missing that would let you do this easily.
If they don't, you should calculate different projectors, or co-register the runs before doing any other operation.

Minor comment: Also, calculating projectors separately per fif file can produce unstable results (case A).

Co-register them before calculating the projectors.

Do you know a software that combines different fif files into one?

No... I know that there are things that could be improved in the database structure to manage multiple runs, but I don't think I will have time to work on that during the next 12 months...

Francois

Dear Dimitrios and all, in particular Neuromag users,

I am still terribly confused as to the best strategy for computing SSP projectors for blink/cardiac artifact removal in Neuromag data. Typical datasets from my lab contain 4 to 8 separate runs from each subject.

One source of complication is that, for Neuromag data, co-registration of head positions across several runs is usually computed in the first preprocessing step, together with Maxfilter, while blink/ECG SSPs are usually computed afterwards.
I wonder whether:

  • Since co-registration has already been computed, does it make sense to compute SSPs for each run separately, or is it reliable to compute them on all runs together? As Francois suggested in a previous thread, this could be done by extracting blink epochs from all runs and computing the SSP on those:
    http://neuroimage.usc.edu/forums/showthread.php?1113-EOG-ECG-SSP-over-several-runs&p=4810#post4810
    Also, this would allow (contrary to the first solution) using one channel set per subject, which is much more practical for the following steps.
  • Otherwise, should I first apply Maxfilter WITHOUT MaxMove, compute SSPs over each run, and only then co-register by applying MaxMove? And what would happen to the SSP projections in this case?

Thanks for your feedback,

Marco

Hi Marco,
The goal of the SSP computation/projection is to remove a topography that represents the artifact. In my opinion, the SSPs should be computed separately for each run; but if all runs have been co-registered, then the topography of the eye blinks and heartbeats should be the same across runs, since the eyes and the heart are in the same place relative to the sensors. I would suggest you try the approaches you mentioned and determine which one works best for your data. You can determine how much artifact is left by importing the blinks and/or beats and averaging them. Then compute the SSPs and project the artifact away. Then import this 'clean' data and average again to see if any artifact is left.
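The check described above (average the blink epochs, project, then compare the averages) can be sketched with simulated data in plain NumPy; the topography, waveform, and noise level below are made-up assumptions, not Brainstorm code:

```python
import numpy as np

rng = np.random.default_rng(2)
n_ch, n_t, n_ep = 16, 50, 40

u = rng.standard_normal(n_ch)
u /= np.linalg.norm(u)                                # blink topography
wave = np.exp(-((np.arange(n_t) - 25) ** 2) / 30.0)   # blink time course

# Simulated blink epochs: artifact + sensor noise
epochs = np.stack([np.outer(u, wave) * 5 + 0.05 * rng.standard_normal((n_ch, n_t))
                   for _ in range(n_ep)])

avg_before = epochs.mean(axis=0)                      # average shows the artifact

# SSP: first spatial component of the concatenated epochs (PCA via SVD)
concat = epochs.transpose(1, 0, 2).reshape(n_ch, -1)
U = np.linalg.svd(concat, full_matrices=False)[0]
P = np.eye(n_ch) - np.outer(U[:, 0], U[:, 0])

avg_after = P @ avg_before                            # residual after projection
assert np.abs(avg_after).max() < 0.1 * np.abs(avg_before).max()
```

Comparing the peak of `avg_after` against `avg_before` is exactly the "how much artifact is left" measure suggested above.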
Hope this helps.
Beth

Thanks Beth for your feedback.

I have compared the two approaches:

  1. Local: Compute SSPs for both blinks and cardiac artifacts separately on each run (here the first run of my dataset)
  2. Global: Compute SSPs for both blinks and cardiac artifacts on all runs together as you indicated.

My dataset consists of 8 runs of 6 minutes each, already co-registered with MaxMove (Neuromag software).

Here I attach:

  • The blink and cardiac artifact for MAG (top row) and GRAD (bottom row, topography is the norm of the grads), first run.
  • The SSP projectors for MAG and GRAD, for blinks (top row) and cardiac (bottom row), for approaches 1) (local) and 2) (global). They look very similar, though the blink topographies are more localized around the eyes with the 'local' approach, and the cardiac topographies also look slightly cleaner.
  • Comparison of residual blink for local (left side) vs global (right side) approach.
  • Comparison of residual cardiac for local (left side) vs global (right side) approach.

As you can see, although most of the artifact is removed with both approaches (note the change in amplitude scale), the local approach is more successful.
Of course, it is just an example, I cannot generalize.

Is this due to imperfect co-registration, or is it possible that the blink/cardiac artifacts change during the recordings?

I'd be happy to have feedback from other Neuromag users.

Thanks,

Marco

I add two more figures to my previous post concerning another approach that I happened to use:

  • Compute SSPs on one run (in this case run 8) and apply them on all other runs.

Figures show:

  • Comparison of residual blink for local (left side) vs 'other run' (right side) approach.
  • Comparison of residual cardiac for local (left side) vs 'other run' (right side) approach.

As with the global approach, the residual artifact looks substantially larger to me than that of the local approach.

Again, I cannot generalize, but I would like to know what you think is the 'best' method.

Thanks,

Marco

Hi Marco,
It is best not to generalize these processes, since each dataset may be different. The most effective artifact correction will come from projectors computed on the original, unaltered data. However, you will need to make the decision that works best for you, based on the tradeoffs of changing your pipeline and re-organizing your database. The choice of method can also be affected by the type of analysis you are doing after the cleaning. If you are doing evoked-response analysis, then some artifact will be averaged out just by averaging the epochs. If you are doing single-trial or resting-state analysis, then you will want to be more aggressive with artifact rejection.

Beth

Dear Brainstorm Masters,

I am eventually opting to compute the SSPs on all (already co-registered) runs together, but with the option of changing the frequency range for the SSP computation. To do this, I first extract the blink-related epochs from all my runs, and then compute the 'generic' SSP (which allows me to change the frequency range) on them. Concerning this point, I have two additional questions:

  1. How does the SSP computation work when choosing a low frequency range (say 1.5-6 Hz) with short blink-related epochs (-200 to 200 ms)? Isn't that far too short a time window for those frequencies?

  2. I have extracted longer blink-related epochs (-1 to 1 s), and I would like to see the results of the SSP computation on different time intervals within the epoch. I have tried to modify the time range ('eventtime') in process_ssp, but the results were always the same. Which parameter should I modify to change the time interval?

Thanks,

Marco

Hi Marco,
1). All 'blink' events are extracted around the specified time window, [-200,200] ms, and concatenated, giving a longer time series. Then the PCA is performed on this longer time series. There are specific checks in the process to ensure you have enough time samples, and you would receive a warning if the window were too short.

2). For this, you would need to run the SSP process on different time windows (use the custom SSP process). When specifying a time window, remember that the times are relative to the event marker (i.e. the blink). So to compute different time windows around your blink, you could use, say, [-1,-0.5] s, [-0.5,0], [0,0.5], [0.5,1], etc. You do not need to extract the event windows yourself; this is done for you by the process.
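The extract-concatenate-PCA mechanics of point 1) can be sketched with simulated data in plain NumPy (the sampling rate, blink shape, and event times below are made-up assumptions, not Brainstorm code):

```python
import numpy as np

rng = np.random.default_rng(3)
sfreq = 1000                                           # Hz (assumed)
n_ch = 16
raw = 0.05 * rng.standard_normal((n_ch, 60 * sfreq))   # 60 s of fake recordings

u = rng.standard_normal(n_ch)
u /= np.linalg.norm(u)                                 # blink topography
blink_samples = np.arange(2, 58, 3) * sfreq            # one "blink" every 3 s
wave = np.hanning(400)                                 # 400 ms blink shape
for t in blink_samples:
    raw[:, t - 200:t + 200] += np.outer(u, wave)

# Step 1: extract [-200, +200] ms around each blink event and concatenate
segments = [raw[:, t - 200:t + 200] for t in blink_samples]
concat = np.concatenate(segments, axis=1)              # channels x (n_blinks * 400)

# Step 2: PCA (via SVD) on the concatenated segments; first component -> projector
U = np.linalg.svd(concat, full_matrices=False)[0]
P = np.eye(n_ch) - np.outer(U[:, 0], U[:, 0])

# The first component recovers the blink topography (up to sign)
assert abs(U[:, 0] @ u) > 0.95
```

With enough concatenated events, the first principal component is dominated by the artifact topography, which is why the window length checks mentioned in 1) matter.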

Beth

Thanks Beth for the quick reply.

[QUOTE=ebock;6278]Hi Marco,
1). All 'blink' events are extracted around the specified time window, [-200,200] ms, and concatenated, giving a longer time series. Then the PCA is performed on this longer time series. There are specific checks in the process to ensure you have enough time samples, and you would receive a warning if the window were too short.
[/QUOTE]

Ok that’s clear.

[QUOTE=ebock;6278]
2). For this, you would need to run the SSP process on different time windows (use the custom SSP process). When specifying a time window, remember that the times are relative to the event marker (i.e. the blink). So to compute different time windows around your blink, you could use, say, [-1,-0.5] s, [-0.5,0], [0,0.5], [0.5,1], etc. You do not need to extract the event windows yourself; this is done for you by the process.
[/QUOTE]

OK, what I have done is put the blink-related epochs in the Process1 window and compute the generic SSP from there, but in that case there is no time window option. Is the whole epoch selected by default? Anyway, I now understand from your answer that I can instead put all the continuous runs in the Process1 window, in which case I can vary the time window.

Yes, that is true: when using the SSP process on imported, epoched data, the entire epoch is used. Therefore you will want to import only the window of interest for the SSP, or perform these operations on the continuous raw data, as you suggested.
Beth