Authors & Affiliations
Arthur Pellegrino, Heike Stein, N Alex Cayco Gajic
Abstract
A fundamental question in systems neuroscience is how populations of neurons represent sensory, motor, and cognitive variables. Yet how these neural representations evolve over slow timescales is not well understood. Recent work has proposed using tensor decomposition methods to identify low-dimensional latent dynamics without requiring trial-averaging (tensor component analysis; TCA). This approach assumes that the data tensor can be described as a sum of components with fixed neural weights and temporal dynamics that vary over trials only in amplitude. However, recent evidence suggests that the slow timescale evolution of latent variables over trials may instead be characterized by a reorganization of neural encoding weights (“representational drift”), or by shifts in temporal dynamics as in classic reinforcement learning paradigms. To address this, we propose a new dimensionality reduction method (sliceTCA) that extends TCA to identify a broader class of latent low-dimensional dynamics by allowing multilinear dependencies between neural, temporal, or trial factors. We first illustrate the flexibility of this method in a simple linear feedforward model receiving bottom-up sensory and top-down modulatory input in a Go/No-go task. We show that sliceTCA captures the structure of the simulated data tensor with only two components representing the two sources of input. We next apply sliceTCA to a multi-region calcium imaging dataset to determine how the latency of task-related latent dynamics changes over the course of a session. These examples illustrate the ability of sliceTCA to recover interpretable low-dimensional structure that evolves over trials from high-dimensional neural data.
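To make the contrast between the two decompositions concrete, the sketch below (an illustration only, not the authors' implementation) builds one TCA-style component, in which fixed neural weights and temporal dynamics vary over trials only in amplitude, and one slice-style component, in which a trial loading vector is paired with a full neuron-by-time matrix so the spatiotemporal pattern itself need not be rank one. All dimensions and variable names are made up for this example.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_time, n_trials = 50, 100, 30

# TCA (CP) component: rank-1 outer product of a neuron vector, a time
# vector, and a per-trial amplitude -- the neuron x time pattern is fixed
# across trials and only its overall amplitude changes.
neuron_w = rng.random(n_neurons)
time_w = rng.random(n_time)
trial_amp = rng.random(n_trials)
tca_comp = np.einsum('i,j,k->ijk', neuron_w, time_w, trial_amp)

# Slice-style component (trial-slicing flavor): a trial loading vector
# paired with a full neuron x time "slice", allowing multilinear structure
# richer than rank 1 within each trial.
trial_w = rng.random(n_trials)
slice_nt = rng.random((n_neurons, n_time))  # neuron x time slice
slice_comp = np.einsum('ij,k->ijk', slice_nt, trial_w)

# Each trial of the TCA component is a rank-1 matrix; each trial of the
# slice component inherits the (generically full) rank of the slice.
print(np.linalg.matrix_rank(tca_comp[:, :, 0]))    # 1
print(np.linalg.matrix_rank(slice_comp[:, :, 0]))  # >> 1 for a random slice
```

A full sliceTCA model would sum several such components, possibly sliced along different modes (neuron-, time-, or trial-slicing), and fit the loadings and slices to data; this snippet only shows why a slice component can express trial-varying temporal dynamics that a rank-1 TCA component cannot.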