Is there a way through the GUI to change the desired CTF noise compensation gradient order (F.prop.destCtfComp)?
Strangely, "undo 4D/CTF noise compensation" creates an active projector, but it neither takes currCtfComp into account (beyond issuing a warning) nor sets destCtfComp to 0.
Of course, it's easy to do in a script or by exporting/importing to Matlab. I just wanted to double check.
process_undo_megrefcoef transforms the data to look as if the compensation had not been applied in Brainstorm, but it does not change currCtfComp/destCtfComp, which are low-level parameters of CTF/4D files. These values are handled only in the low-level reading functions for the CTF/4D formats; they are not parameters you can modify from Brainstorm.
Can you please describe completely the use case in which you would need to modify the field sFile.prop.destCtfComp ?
It may not be a very common use case, but basically, I'm looking at empty room recordings to identify (and possibly clean with SSP) some environmental artefacts. These are often easier to see and characterize when looking at the raw data. I've also used other gradient orders than 3 in the past when looking at weak or deep sources. In some cases, 2nd gradient gave better SNR.
My current empty-room datasets are actually saved as raw, but in Brainstorm destCtfComp is 3. Even though currCtfComp is 0, destCtfComp seems to get applied when running the PSD process (and probably other processes as well) without informing the user. On the other hand, we do have the option to apply it or not when importing data.
I tried process_undo_megrefcoef again and it works for getting the raw PSD, but this is confusing. It creates a projector to undo destCtfComp, yet the latter may or may not get applied depending on what we do. And when currCtfComp is 0 it issues a warning telling us we may not need to apply it because the data is raw.
I think it would make more sense to have this work in a similar way (for the user) to "convert to continuous". I don't see a reason for creating a projector since Brainstorm already deals with this in the background (with curr/destCtfComp). We just need a way to change dest to what the user wants.
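For context, here is a minimal numpy sketch of what the curr/dest bookkeeping amounts to at read time. The function and variable names are illustrative, not Brainstorm's actual internals, and only the 0↔n transitions are shown:

```python
import numpy as np

def apply_ctf_comp(meg, refs, coef, curr, dest):
    """Sketch of read-time CTF gradient compensation.

    meg  : (n_meg, n_times) head channels, currently at order `curr`
    refs : (n_ref, n_times) reference channels (assumed untouched)
    coef : (n_meg, n_ref) compensation weights for the target order
    """
    if dest == curr:
        return meg                  # nothing to do
    if curr == 0:
        return meg - coef @ refs    # raw -> compensated
    if dest == 0:
        return meg + coef @ refs    # compensated -> back to raw
    raise NotImplementedError("between two non-zero orders")

# round trip: compensating then undoing recovers the raw data exactly
rng = np.random.default_rng(0)
raw = rng.standard_normal((4, 10))
refs = rng.standard_normal((3, 10))
coef = rng.standard_normal((4, 3))
comp = apply_ctf_comp(raw, refs, coef, curr=0, dest=3)
back = apply_ctf_comp(comp, refs, coef, curr=3, dest=0)
assert np.allclose(back, raw)
```

This is why changing destCtfComp alone is enough in principle: the reader decides at load time which state to deliver, with no modification of the file on disk.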
> I think it would make more sense to have this work in a similar way (for the user) to "convert to continuous".
You can create a new process to do this if you want, but please do not add it to the popup menu (when right-clicking on the file), as you might be the only person to ever use this.
> I don't see a reason for creating a projector since Brainstorm already deals with this in the background (with curr/destCtfComp). We just need a way to change dest to what the user wants.
The undo process was designed specifically to revert projectors that were already applied, without modifying the original file.
For any modification you make, please make sure it keeps on working with 4D data as well.
Just so I understand: do you mean something different than, say, data that is already saved in 3rd-order gradient (currCtfComp=3) that you want to revert to raw? Wouldn't simply setting destCtfComp to 0 do that? Or do you mean it is meant for imported data where the 3rd-order compensation was applied at import time (though in that case I think the compensation coefficients might have been removed from the channel file)?
This process was designed specifically to handle some problems we found with the HCP-MEG 4D recordings, back in 2017. The packaging of this data was a mess, and it was not very clear whether the 3rd-order gradient correction had been done correctly. For testing, I added a way to undo it and access the raw recordings directly... I copy below all the email discussions I could find about this dataset, for future reference.
If you need something different, that you think can be useful for the general user of a CTF system, then you might need to design something new, or modify this process to work in a better way.
Since process_undo_megrefcoef.m was written for debugging purposes, I guess it is currently not used anywhere, feel free to alter the way it works.
I added a new process "Artifacts > Undo 4D/CTF noise compensation" to undo the effect of the noise cancellation. It adds the inverse of the compensation as a new spatial projector.
However, I don't think it can be what the HCP people have done. Without this compensation, the recordings are really noisy...
I guess what JM tried to explain was that they removed it from the forward model computation, but kept it in the recordings. But it is not what we would understand by looking at the FieldTrip structures... There were some non-documented manual steps done here...
John M: Do you know why this is the case? Is this really what we want to do here?
Esther F: Note that you can compute the forward models of FieldTrip from Brainstorm (process Sources > FieldTrip: ft_prepare_leadfield).
John M, 2017/09/01:
I'm Cc'ing Matti on this, and I trimmed the email discussion to just the 4D reference problem.
From my conversation with Matti the other day, we were trying to recall our issues with the 4D reference channels. Matti can chime in here, but in the end, we essentially could not extract the actual locations and orientations of the reference channels from the data file itself, and Matti hard-coded in some auxiliary information we had (from a 4D manual, presumably) as to where the sensors were in the VA Minneapolis 4D system.
4D was particularly possessive of their reference weights and scheme, but from what I could tell, it was a fairly basic set of weights, nothing magical. The issue for forward modeling, however, is that strong neural signals can get into these references, and therefore to build a good forward model, you really do need to know the sensor characteristics and model them correctly. Sylvain and I worked this out many years ago for the CTF system, when we concluded that we must correctly account for reference sensors in the forward model, if we are to include the reference channels. There otherwise wasn't a simple shortcut.
When Ed Flynn and I did the installation checkout of the 4D system at the VA in Minneapolis (circa 2000), the 4D instructions for the phantom testing clearly said to disable the reference sensors before using their source modeling and confirming the phantom dipole location. I inferred that the 4D system was also not correctly accounting for the impact of phantom signals routed through the reference sensors.
So what to do today?
The simple answer is to not use the reference channels in post-processing, relying instead on noise covariance matrices generated only from the head channels. Yes, the raw data will now look substantially noisier, but if the noise covariance is calculated well, then you can use "Model Evaluation > Save Whitened Recordings" (found in the Source submenus) to confirm that the pre-whitened data indeed look quite reasonable. (My suggestion would be to make this Whitened Recording viewing even simpler in Brainstorm, without having to first run a source model).
Basically, the reference weighting of the 4D and the CTF, and the SSP projection of the Vectorview, are all designed to make real-time viewing of data that appears less noisy. In post-processing, we have the luxury of more properly calculating baseline statistics and fully pre-whitening the data in preparation for source modeling. Without the details of the 4D reference channels, it's a minor shame we can't include them in the models.
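The pre-whitening workflow John describes can be illustrated in a few lines of numpy; this is a generic sketch of whitening with an empirically estimated noise covariance, not Brainstorm's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

# simulate correlated "empty room" noise on a few head channels
n_chan, n_times = 6, 5000
mixing = rng.standard_normal((n_chan, n_chan))
noise = mixing @ rng.standard_normal((n_chan, n_times))

# noise covariance estimated from the head channels only (no references)
C = np.cov(noise)

# whitener W = C^(-1/2) via eigendecomposition of the covariance
evals, evecs = np.linalg.eigh(C)
W = evecs @ np.diag(1.0 / np.sqrt(evals)) @ evecs.T

# whitened data has identity covariance by construction
whitened = W @ noise
assert np.allclose(np.cov(whitened), np.eye(n_chan), atol=1e-8)
```

In other words, as long as the baseline statistics are estimated well, the apparent extra noisiness of uncompensated head channels is absorbed by the whitener before source modeling.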
And I haven't addressed the particulars of what the Human Connectome folks did to their 4D data, but they must have run into similar problems.
Francois T, 2017/09/09:
What is done now by default in Brainstorm (and what I understood they did in the HCP-MEG release):
- the MEG recordings ARE cleaned using the digital weights (the noise compensation is typically applied directly to the recordings by the acquisition system, and it is not straightforward to revert);
- the corresponding mixing matrix is IGNORED in the computation of the forward model.
Is this what you want to do?
To undo the noise compensation and get back to "raw" recordings, we need to compute the inverse of the matrix (field "MegRefCoef" in Brainstorm) and apply it to the recordings explicitly.
I added a process to do this (Artifacts > Undo 4D/CTF noise compensation): it adds this inverse matrix to the Projector field, which gets applied dynamically to the MEG recordings, like the ICA/SSP projectors.
The recordings are very difficult to review afterwards. And this is possible only if the reference channels were recorded (not always the case).
For the Jülich data, I remember that the online noise compensation was simply ignored for source modeling. There was also a problem reading the weights (the numbers were inaccurate / "broken"), so the compensation matrices were not even available. For that reason I never finished the I/O code to read them at the time, as it was impossible to tell whether I had got it right.
For what I did in MNE-HCP, I just kept that habit and pretended that no online compensation was applied. I added some functionality though to regress out the compensation channels which seems to reduce noise.
I am wondering if you had similar experiences and some insights into which gaps may exist here.
Francois T, 2018/03/05:
I went through this last summer and came to the exact same conclusions.
Now in Brainstorm, the compensation matrices are read from the 4D files but not used when estimating the sources.
I met with JM Badier last week; he told me they finally understood one of the reasons why it was such a mess: there is a bug in the 4D calibration software that prevents the positions of the reference sensors from being updated properly in the 4D config file. This makes the forward model of the reference channels noticeably wrong, so using them leads to weird source results.
This was experienced by several people (including the HCP people), and because it was not working well but without any clear reasons, the issue was not reported properly and simply "ignored" (either by reverting to uncompensated data or by not using the references in the source estimation, or maybe other alternative solutions). In the MEG-HCP distribution, it looks like this issue was not "ignored" in the same way for all the subjects, making any automated import of the pre-processed HCP data very difficult.
Denis E, 2018/03/06:
> I went through this last summer and came to the exact same conclusions.
> Now in Brainstorm, the compensation matrices are read from the 4D files but not used when estimating the sources.
Ok. This is good to know. We may want to compare some outputs between Brainstorm and MNE at some point.
> I met with JM Badier last week, he told me that they finally understood one of the reasons for which it was such a mess: there is a bug in the 4D calibration software, leading the positions of the reference sensors not to be updated properly in the 4D config file. This leads the forward model of the references to be noticeably wrong, and therefore using them leads to weird source results.
"Great" to hear that. This sounds pretty much like the story I knew from Jülich. I am somewhat relieved to learn that this seems to be a far more common issue. On the other hand, just ignoring the ref channels for source modeling seems to yield ok results. At least this has always been my impression.
> This was experienced by several people (including the HCP people), and because it was not working well but without any clear reasons, the issue was not reported properly and simply "ignored" (either by reverting to uncompensated data or by not using the references in the source estimation, or maybe other alternative solutions). In the MEG-HCP distribution, it looks like this issue was not "ignored" in the same way for all the subjects, making any automated import of the pre-processed HCP data very difficult.
Yes. You see that on my example the evoked responses diverge quite extremely between the 2 different official HCP outputs. I find this a bit worrisome.
I have the impression that some extra noise suppression goes in the evoked responses that is not clearly documented. Do you have any idea what they did to clean the data apart from filtering, rejecting bad segments, correcting by ref channels & doing ICA?
I'll probably start by adding a simpler process for CTF, and perhaps change the few places where the 3rd-order gradient is hard-coded to use destCtfComp instead, which will remain 3rd order by default.