ePoster
Learning input-driven dynamics from neural recordings
Marine Schimel and 3 co-authors
COSYNE 2022
Lisbon, Portugal
Abstract
Large-scale neural recordings are typically found to embed lower-dimensional structure that reflects behaviour. Empirically, the best models of such low-dimensional structure, by several statistical measures, tend to be those that describe neural recordings via a latent dynamical system. Importantly, as recordings are typically made in only one brain area or a small subset of areas, the dynamics that best capture the data cannot in general be expected to be fully autonomous, but may instead be driven by unobserved inputs. Learning the parameters of a probabilistic dynamical system whilst simultaneously inferring any unobserved inputs is a difficult and somewhat ill-posed problem.

Here, we propose a new method that tackles this problem by harnessing recent developments in differentiable control, and that faithfully recovers ground-truth dynamics in a range of synthetic input-driven systems. Similarly to Pandarinath et al. (2018), we formulate our model as a variational auto-encoder (VAE) whose generator is an input-driven recurrent neural network (RNN).
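To make the generative side of such a model concrete, the sketch below rolls out an input-driven RNN whose latent state is perturbed by inputs u_t and read out into Poisson spike counts. The abstract does not specify the parametrization; the tanh nonlinearity, exponential link, and all dimensions here are illustrative assumptions, not details taken from the poster.

```python
# Minimal sketch of an input-driven RNN generator with Poisson
# observations (all parametrization choices here are assumptions).
import numpy as np

rng = np.random.default_rng(0)
n_latent, n_inputs, n_neurons, n_steps = 8, 2, 50, 100

# Randomly initialised generator parameters (learned in the real model).
A = 0.95 * np.linalg.qr(rng.standard_normal((n_latent, n_latent)))[0]  # recurrence
B = 0.5 * rng.standard_normal((n_latent, n_inputs))                    # input map
C = 0.3 * rng.standard_normal((n_neurons, n_latent))                   # readout
b = np.full(n_neurons, -3.0)                                           # baseline log-rate

def generate(u):
    """Roll out latents x_t driven by inputs u_t; emit Poisson counts y_t."""
    x = np.zeros(n_latent)
    latents, counts = [], []
    for t in range(n_steps):
        x = np.tanh(A @ x + B @ u[t])      # input-driven latent dynamics
        rates = np.exp(C @ x + b)          # per-bin firing rates
        counts.append(rng.poisson(rates))  # observed spike counts
        latents.append(x)
    return np.array(latents), np.array(counts)

u = 0.1 * rng.standard_normal((n_steps, n_inputs))  # unobserved inputs
latents, spikes = generate(u)
```

Learning then amounts to fitting the generator parameters, together with a prior over the inputs, so that such rollouts assign high likelihood to the recorded spikes.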
For inference, however, instead of using yet more RNNs to parametrize the recognition model, we perform amortized inference using iLQR (the iterative Linear Quadratic Regulator), a powerful nonlinear controller that finds the set of inputs most likely to have given rise to the data. This greatly reduces the number of parameters and hyperparameters in our model, thus facilitating learning. Moreover, iLQR enables flexible inference on trials of varying duration and population size without further modification, which was difficult with previous RNN-based recognition networks.

We demonstrate the utility of our method on several synthetic and real datasets. We first show that it successfully learns the dynamics of a variety of low-dimensional systems. Next, we validate it on two sets of neural recordings from monkey primary motor cortex (M1) during reaching tasks, and dissect the dynamics and inputs inferred in both cases.
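To illustrate the inference step, the sketch below infers maximum a posteriori inputs in a deliberately simplified setting: linear latent dynamics and Gaussian observations, where a single LQR backward/forward sweep is exact. For a nonlinear generator like the one sketched above, iLQR repeats this sweep around successive linearisations of the dynamics and quadratic approximations of the log-likelihood. All matrices and the input penalty `lam` are illustrative assumptions, not the authors' settings.

```python
# Input inference by one LQR sweep, assuming linear dynamics
# x_{t+1} = A x_t + B u_t and unit-variance Gaussian observations
# Y[t] = C x_{t+1} + noise; the cost is the negative log-posterior
# over inputs with a Gaussian prior of precision lam.
import numpy as np

rng = np.random.default_rng(1)
n_x, n_u, n_y, T = 4, 2, 20, 50
lam = 1.0  # precision of the Gaussian prior over inputs

A = 0.9 * np.linalg.qr(rng.standard_normal((n_x, n_x)))[0]  # latent dynamics
B = rng.standard_normal((n_x, n_u))                          # input map
C = rng.standard_normal((n_y, n_x))                          # observation map

# Simulate ground-truth inputs and the observations they give rise to.
u_true = 0.5 * rng.standard_normal((T, n_u))
x = np.zeros(n_x)
Y = np.zeros((T, n_y))
for t in range(T):
    x = A @ x + B @ u_true[t]
    Y[t] = C @ x + 0.1 * rng.standard_normal(n_y)

def infer_inputs(Y):
    """MAP inputs u_0..u_{T-1} from one LQR backward/forward sweep."""
    P = np.zeros((n_x, n_x))  # value function V_t(x) = 0.5 x'Px + p'x
    p = np.zeros(n_x)
    Ks, ks = [], []
    for t in range(T - 1, -1, -1):           # backward Riccati recursion
        M = C.T @ C + P                      # cost-to-go through x_{t+1}
        m = -C.T @ Y[t] + p                  # linear term from -log p(Y[t] | x_{t+1})
        Quu = lam * np.eye(n_u) + B.T @ M @ B
        Qux = B.T @ M @ A
        qu = B.T @ m
        K = -np.linalg.solve(Quu, Qux)       # feedback gain
        k = -np.linalg.solve(Quu, qu)        # feedforward input
        Ks.append(K); ks.append(k)
        P = A.T @ M @ A + Qux.T @ K
        p = A.T @ m + Qux.T @ k
    Ks.reverse(); ks.reverse()
    x = np.zeros(n_x)                        # forward rollout under the policy
    U = np.zeros((T, n_u))
    for t in range(T):
        U[t] = Ks[t] @ x + ks[t]
        x = A @ x + B @ U[t]
    return U

u_hat = infer_inputs(Y)
print("input recovery corr:", np.corrcoef(u_hat.ravel(), u_true.ravel())[0, 1])
```

Because each sweep costs time linear in trial length, and the data enter only through the per-timestep likelihood terms, the same routine accommodates trials of different durations and population sizes, consistent with the flexibility the abstract highlights.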