Resources
Authors & Affiliations
Adam Gosztolai, Robert Peach, Alexis Arnaudon, Mauricio Barahona, Pierre Vandergheynst
Abstract
Computations in the brain and in artificial neural networks can be understood as dynamical processes arising from the activity of large neural populations. Yet revealing the structure of the underlying latent dynamical processes from data, and interpreting their relevance to computational tasks, remains a fundamental challenge. Recent evidence suggests that task-relevant neural activity takes place on low-dimensional subspaces of the state space called neural manifolds. However, theoretical frameworks are lacking for unsupervised representations of neural dynamics that are interpretable in terms of behavioural variables, comparable across systems, and decodable into behaviour with high accuracy. We introduce MARBLE, a fully unsupervised representation-learning framework for non-linear dynamical systems. Our approach uses geometric deep learning to decompose neural trajectories during a set of trials into distributions of local flow fields (LFFs). This viewpoint permits both unsupervised learning and the comparison of neural computations across networks and animals. We show that MARBLE offers a well-defined similarity metric between different neural systems to compare computations and detect fine-grained changes in dynamics due to task variables, e.g., decision thresholds and gain modulation. Being unsupervised, MARBLE is particularly suited to biological discovery. We show that it discovers more interpretable neural representations in several motor, navigation and cognitive tasks than generative models such as LFADS or (semi-)supervised models such as CEBRA. Intriguingly, this interpretability implies significantly higher decoding performance than state-of-the-art methods. Our results suggest that exploiting manifold structure yields a new class of algorithms with higher performance and the ability to assimilate data across experiments.
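To make the notion of local flow fields concrete, the following is a minimal sketch, not the authors' implementation: velocities are estimated from trajectories by finite differences and anchored at the corresponding states, and each local flow field is taken to be the set of velocity vectors within a k-nearest-neighbour patch around an anchor. The function name `local_flow_fields`, the array shapes and the parameter `k` are illustrative assumptions.

```python
# Illustrative sketch (not the MARBLE implementation): decompose trajectories
# into local flow fields (LFFs) by pairing each state with its finite-difference
# velocity and collecting the velocities of its k nearest neighbouring states.
import numpy as np
from scipy.spatial import cKDTree

def local_flow_fields(trajectories, k=10):
    """trajectories: list of (T_i, d) arrays of neural states over time.
    Returns anchor points (N, d) and an (N, k, d) array of local flow fields,
    each holding the k velocity vectors sampled nearest to an anchor."""
    points, velocities = [], []
    for X in trajectories:
        # forward finite differences approximate the flow at each state
        points.append(X[:-1])
        velocities.append(np.diff(X, axis=0))
    points = np.concatenate(points)          # (N, d) anchor states
    velocities = np.concatenate(velocities)  # (N, d) flow samples

    # each anchor's local flow field = velocities of its k nearest neighbours
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    return points, velocities[idx]           # (N, d), (N, k, d)

# Toy usage: noisy trajectories of a 2-D rotational flow
rng = np.random.default_rng(0)
trajs = []
for _ in range(5):
    x, X = rng.standard_normal(2), []
    for _ in range(200):
        X.append(x)
        x = x + 0.05 * np.array([-x[1], x[0]]) + 0.01 * rng.standard_normal(2)
    trajs.append(np.array(X))

anchors, lffs = local_flow_fields(trajs, k=10)
print(anchors.shape, lffs.shape)  # (995, 2) (995, 10, 2)
```

In MARBLE these local flow fields are further embedded with geometric deep learning and summarised as distributions, which is what enables the similarity comparisons across networks and animals described above; the sketch only illustrates the decomposition step.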