I think I’ve run into a bug in the Bivariate Granger Causality (NxN) routine and hope you can help. I’m extracting scout time series from a 2-second data segment, and I notice that sometimes one or more scouts return a series of all zeros. As soon as the GC code reaches such a channel, it stops and fills the entire causality matrix with zeros. This happens in other examples in my analysis as well; the attached figure shows just one case.
In my view, the calculation should skip or otherwise handle the zero series and keep going; only the rows and columns for the affected scout (frontalpole R in this example) should be zero.
Just following up on this issue I raised regarding the Bivariate Granger Causality (NxN) routine and how it handles zero time series. I understand you may be busy, but I’d appreciate any thoughts or updates when you have a chance.
Let me know if I can provide anything else to help.
Thanks for sharing the data; it was useful for identifying the bug you reported.
This was not really related to the GC computation itself, but to the fact that signals are standardized (mean 0, std 1) before computing GC, so flat channels were causing trouble: their standard deviation is zero, and dividing by it breaks the standardization.
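To illustrate the failure mode, here is a minimal Python sketch of channel standardization with a guard for flat channels. The function name and the `eps` threshold are my own choices for illustration; this is not the Brainstorm code itself, which is written in MATLAB.

```python
import numpy as np

def standardize_channels(data, eps=1e-12):
    """Z-score each channel (row) of a (n_channels, n_samples) array.

    Flat channels (std ~ 0) would produce NaN/Inf under a naive
    (data - mean) / std; here they are detected and left as zeros.
    Illustrative sketch only, not the actual Brainstorm implementation.
    """
    mean = data.mean(axis=1, keepdims=True)
    std = data.std(axis=1, keepdims=True)
    flat = (std < eps).ravel()            # channels with ~zero variance
    std_safe = np.where(std < eps, 1.0, std)  # avoid division by zero
    out = (data - mean) / std_safe
    out[flat, :] = 0.0                    # keep flat channels as all-zero
    return out, flat

# Example: the second channel is flat (all zeros), as for a zero scout series
x = np.vstack([np.random.randn(200), np.zeros(200)])
z, flat = standardize_channels(x)
```

With a guard like this, the flat channel can simply be flagged, and the GC routine can fill only its rows and columns with zeros instead of aborting the whole matrix.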
Another issue that came up in the data is that sometimes the ratio restricted_variance / unrestricted_variance was smaller than 1, which suggests that the model considering both x and y performed worse than the model considering only x. In theory this should not happen; it appears to be a numerical issue, since in the cases where it occurs, (restricted_variance / unrestricted_variance) - 1 is on the order of 10^-15.
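Since the GC value is the log of that variance ratio, a ratio just below 1 yields a tiny negative GC value. A common remedy, sketched below in Python under my own assumptions (the function name is hypothetical; the actual fix may differ), is to clamp the result at zero:

```python
import numpy as np

def granger_value(restricted_var, unrestricted_var):
    """GC = log(restricted_var / unrestricted_var), clamped at zero.

    In theory the ratio is >= 1, so the log is >= 0. Floating-point
    error can push the ratio below 1 by ~1e-15, making the log
    slightly negative; clamping to 0 handles this numerical noise.
    Illustrative sketch only.
    """
    ratio = restricted_var / unrestricted_var
    return max(0.0, np.log(ratio))

# A ratio below 1 by ~1e-15 (pure numerical noise) is mapped to 0
gc = granger_value(1.0 - 1e-15, 1.0)
```

This keeps legitimate positive GC values untouched while removing the spurious negative ones caused by round-off.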
The fixes are located at these links; these lines of code are not in the main Brainstorm code yet, as we need to do some more testing.
Thank you so much for the huge effort and commitment in solving this issue. I’ll start working on this fix right away. Really appreciate the great work!