Future plans for changes in source output options: tensor decomposition?

While perusing a different question related to pTE on sources – and seeing the point made about PCA dulling down high-variance activity in the time-frequency domain – I wanted to ask whether there are any plans, or any consideration, to include a tensor decomposition approach as an output option for EEG dipole sources. Such an approach could be applied simultaneously across dipoles (within a source), time, frequency, and even trials. Examples I have in mind include the canonical polyadic decomposition (CPD) and the Tucker decomposition. These would have distinct advantages over PCA: one is decomposing the trajectories while accounting for hierarchical covariance; another is the ability to include non-negativity constraints. There are also several other, simpler multivariate approaches that might circumvent some issues with PCA, including generalized eigenvalue decomposition.
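To give a sense of how simple the core computation is, here is a minimal sketch of a rank-R CPD fitted by alternating least squares on a toy "dipoles × time × trials" array (Python/NumPy; the axis sizes, rank, and variable names are all made up for illustration, not anything from an existing toolbox):

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker product of two factor matrices."""
    r = A.shape[1]
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, r)

def cp_als(X, rank, n_iter=200, seed=0):
    """Rank-`rank` CP decomposition of a 3-way array via alternating least squares."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    X1 = X.reshape(I, -1)                      # mode-1 unfolding
    X2 = np.moveaxis(X, 1, 0).reshape(J, -1)   # mode-2 unfolding
    X3 = np.moveaxis(X, 2, 0).reshape(K, -1)   # mode-3 unfolding
    for _ in range(n_iter):
        A = X1 @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = X2 @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = X3 @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# toy array (6 dipoles x 50 time points x 20 trials) with known rank-2 structure
rng = np.random.default_rng(42)
true = [rng.standard_normal((n, 2)) for n in (6, 50, 20)]
X = np.einsum('ir,jr,kr->ijk', *true)
A, B, C = cp_als(X, rank=2)
Xhat = np.einsum('ir,jr,kr->ijk', A, B, C)
rel_err = np.linalg.norm(X - Xhat) / np.linalg.norm(X)
```

Each factor matrix then gives one interpretable mode per component (a spatial profile over dipoles, a temporal profile, a loading per trial); non-negativity or other constraints would replace the unconstrained least-squares updates.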

Sounds good indeed.
Would you like to contribute some development/testing of the idea? That’d be great!
You could start with a few pointers in the literature or existing code we could test together.

What do you think?


I would be happy to contribute; I can’t guarantee how fast I will be.

Here is some relevant (I think) literature for starters:
(1) Cong, F., Lin, Q. H., Kuang, L. D., Gong, X. F., Astikainen, P., & Ristaniemi, T. (2015). Tensor decomposition of EEG signals: a brief review. Journal of neuroscience methods, 248, 59-69.

Single-neuron/population based:
(2): Seely JS, Kaufman MT, Ryu SI, Shenoy KV, Cunningham JP, et al. (2016). Tensor Analysis Reveals Distinct Population Structure that Parallels the Different Computational Roles of Areas M1 and V1. PLOS Computational Biology, 12(11).

(3): Williams, A. H., Kim, T. H., Wang, F., Vyas, S., Ryu, S. I., Shenoy, K. V., … & Ganguli, S. (2018). Unsupervised Discovery of Demixed, Low-Dimensional Neural Dynamics across Multiple Timescales through Tensor Component Analysis. Neuron.

Other methods worth considering include:
(1) Gaussian-process latent variable models (these can be either flat or hierarchical). Although GPLVM is related to (probabilistic) PCA, it is not restricted to linear mappings as PCA is, so it can extract nonlinear mappings between the estimated modes (or latent states) and the observed data.
(2) Generalized eigenvalue decomposition (GED). This is the most straightforward approach in Matlab, and it doesn’t ‘force’ orthogonality the way PCA does. Probably the easiest to implement out of the box.
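To make the GED idea concrete, here is a minimal sketch (Python with NumPy/SciPy rather than Matlab; the channel count, sampling rate, and 18 Hz "source" are invented for illustration). A spatial filter is found that maximizes variance under a "signal" covariance S relative to a reference covariance R:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
n_ch, n_t, fs = 8, 2000, 500                             # hypothetical montage and sampling rate
noise = rng.standard_normal((n_ch, n_t))
pattern = rng.standard_normal(n_ch)                      # spatial pattern of one "source"
narrow = np.sin(2 * np.pi * 18 * np.arange(n_t) / fs)    # 18 Hz narrowband activity
data = noise + 3.0 * np.outer(pattern, narrow)

S = np.cov(data)    # "signal" covariance (e.g., from band-filtered data)
R = np.cov(noise)   # reference covariance (e.g., broadband or baseline data)

# generalized eigendecomposition S w = lambda R w; eigh returns ascending eigenvalues
evals, evecs = eigh(S, R)
w = evecs[:, -1]          # filter maximizing w'Sw / w'Rw
component = w @ data      # time course of the extracted component
```

Unlike PCA, successive filters in `evecs` are orthogonal with respect to R rather than orthogonal in channel space, which is exactly the 'no forced orthogonality' property mentioned above.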

After some discussion, we could consider which methods would be best to trial first, and on what types of data.

Justin Fine

My email:

Thank you, Justin: I will review the tensor decomposition approach and get back to you; this will be some time in September, though.
Do you have a preferred reference concerning Gaussian-process latent variable models in electrophysiology?

Thanks again!


Definitely no rush on my end, but here are my passing thoughts on the Gaussian-process approach.
W.r.t. Gaussian-process latent variable models (GPLVM), a close, but not exact, reference can be found in:
Gaussian-process factor analysis (GPFA) for low-dimensional single-trial analysis of neural population activity.
With full disclosure, though, GPLVM and GPFA differ slightly in (1) how they deal with noise variances (for example, of a single dipole, or a source site’s estimated latent variance), and (2) the latter embodies the framework of latent variable modeling while the former (GPLVM) embodies (non-)linear dimensionality reduction. Tensor decomposition is closer to a factor-analysis framework, with the added benefit of dimensionality reduction (see the Neuron (2018) paper I posted previously). Thus the tools are complementary: tensor decomposition focuses on dimensionality reduction and has already been applied to hierarchical formulations (e.g., subjects --> conditions --> trials --> sources --> dipoles within a source --> time(-frequency), etc.). I have yet to find a reference, but there is no reason GPFA couldn’t be applied hierarchically as well. Therefore, both tools can be used as experimentally and data-constrained methods to extract meaningful structure while regularizing/smoothing the estimates, borrowing from all subjects simultaneously. I think such an approach has the added benefit of removing ambiguity in analysis; for example, when is it really meaningful to extract measures of time-frequency power (peak vs. onset)? An issue like this can now be directly constrained without the problem of across-subject averaging in time.
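As a tiny illustration of the GP-prior idea behind GPFA (a hypothetical 1-D sketch with a made-up lengthscale and noise level, not GPFA itself): each latent time course gets a squared-exponential GP prior, and the posterior mean smooths a noisy single-trial trajectory instead of averaging across subjects or trials in time.

```python
import numpy as np

def rbf_kernel(t, tau=0.05):
    """Squared-exponential covariance, the usual GP prior on each latent."""
    d = t[:, None] - t[None, :]
    return np.exp(-0.5 * (d / tau) ** 2)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
latent = np.sin(2 * np.pi * 3 * t)                 # "true" latent trajectory
y = latent + 0.3 * rng.standard_normal(t.size)     # noisy single-trial estimate

K = rbf_kernel(t)
sigma2 = 0.3 ** 2                                  # assumed observation-noise variance
post_mean = K @ np.linalg.solve(K + sigma2 * np.eye(t.size), y)
```

In full GPFA the same prior sits on each latent dimension while a factor-loading matrix maps latents to the observed channels, and the smoothing falls out of the E-step rather than being applied post hoc.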

Last note: I am currently working on applying hierarchical tensor decomposition to address a question about post-movement beta rebound (15-20 Hz) in sensorimotor learning, with a focus on the pre- and post-central gyri. The aim is to distinguish the beta that seems specific to each area from that which is shared (a cheap measure of connectivity, too). I will share these results when available.

Justin Fine.


Thanks Justin: we’ll investigate and see whether this can help with our present plans to improve signal extraction in large multivariate source maps.

This will be very advantageous for various types of data, including PAC analysis. Looking forward to this!
