Authors
Kabir Dabholkar, Omri Barak
Abstract
Population recordings offer a window into the underlying dynamics of brain regions. Latent Variable Models (LVMs) infer these dynamics by treating the observations as projections of latent variables. Evaluating the quality of such models remains challenging, as the true underlying dynamics are unobserved. A common approach, "co-smoothing", jointly estimates latent variables from neural data and predicts the firing rates of held-out neurons. However, we find that optimizing this single objective often yields models that include extraneous dynamics not relevant to the underlying neural activity. To address this limitation, we propose a complementary evaluation, "few-shot co-smoothing", which tests a model's ability to generalize from a few trials. Using both numerical and analytical tools on Hidden Markov Models (HMMs), we show that this metric successfully identifies models with minimal extraneous dynamics. We validate our approach on real neural recordings, comparing two state-of-the-art models: LFADS and STNDT. To this end, we introduce a novel cross-decoding analysis that directly evaluates model \textit{extraneousness} in the absence of ground truth by exploiting variability across LVMs. Together, these findings present a new paradigm that favors LVM simplicity, with latents less distorted by arbitrary extraneous variables, an essential attribute for obtaining mechanistic explanations of neural processes from population recordings.
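The co-smoothing evaluation can be illustrated on synthetic data: latents are inferred from a held-in subset of neurons, and the model is scored on its Poisson predictions of the held-out neurons. The sketch below is a minimal illustration under assumed simplifications, not the paper's actual models or code: a linear-Gaussian latent model stands in for an LVM, and `co_smoothing_score`, `held_in`, and `held_out` are hypothetical names.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ground truth: linear readout of latents with Poisson spike counts.
T, D, N = 500, 3, 20                      # timesteps, latent dim, neurons
Z = rng.standard_normal((T, D))           # ground-truth latents
W = rng.standard_normal((D, N)) * 0.5     # loading matrix
spikes = rng.poisson(np.exp(Z @ W + 1.5)) # observed spike counts

held_in = np.arange(N // 2)               # neurons visible at test time
held_out = np.arange(N // 2, N)           # neurons to predict

def co_smoothing_score(spikes, W, held_in, held_out, bias=1.5):
    """Infer latents from held-in spikes via least squares on log counts,
    predict held-out rates, and return the mean Poisson log-likelihood
    (dropping the constant log y! term)."""
    log_y = np.log(spikes[:, held_in] + 0.5) - bias      # crude log-rate estimate
    z_hat, *_ = np.linalg.lstsq(W[:, held_in].T, log_y.T, rcond=None)
    pred = np.exp(z_hat.T @ W[:, held_out] + bias)       # predicted rates
    y = spikes[:, held_out]
    return float(np.mean(y * np.log(pred) - pred))

# A model with the true loadings should out-score one with mismatched loadings.
score_true = co_smoothing_score(spikes, W, held_in, held_out)
W_wrong = rng.standard_normal((D, N)) * 0.5              # mismatched loadings
score_wrong = co_smoothing_score(spikes, W_wrong, held_in, held_out)
```

Note that this score only probes held-out-neuron prediction; the paper's point is that a high score here does not by itself rule out extraneous dynamics in the inferred latents.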