Latent Variables
Extracting computational mechanisms from neural data using low-rank RNNs
An influential theory in systems neuroscience suggests that brain function can be understood through low-dimensional dynamics [Vyas et al. 2020]. A challenge in this framework, however, is that a single computational task may be implemented by a range of dynamical processes. To understand which processes are at play in the brain, it is important to use neural activity data to constrain models. In this study, we present a method for extracting low-dimensional dynamics from data using low-rank recurrent neural networks (lrRNNs), a highly expressive yet interpretable class of models [Mastrogiuseppe & Ostojic 2018; Dubreuil, Valente et al. 2022]. We first test our approach on synthetic data generated from full-rank RNNs trained on a range of neuroscience tasks. We find that lrRNNs fitted to neural activity allow us to identify the collective computational processes and to make new predictions for inactivations in the original RNNs. We then apply our method to data recorded from the prefrontal cortex of primates performing a context-dependent decision-making task. Our approach enables us to assign computational roles to the different latent variables and provides a mechanistic model of the recorded dynamics, which can be used to perform in silico experiments such as inactivations and to generate testable predictions.
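As a rough illustration of the model class involved (not the authors' fitting procedure), the sketch below simulates a rank-R RNN with connectivity J = m nᵀ / N and reads out its latent variables as projections of the population activity onto the connectivity vectors; the function name `simulate_lr_rnn` and all parameter values are illustrative.

```python
import numpy as np

def simulate_lr_rnn(m, n, inputs, dt=0.1, tau=1.0):
    """Simulate a rank-R RNN: tau dx/dt = -x + (1/N) m n^T phi(x) + input(t).

    m, n : (N, R) connectivity vectors, so J = m @ n.T / N has rank R.
    inputs : (T, N) external input currents.
    Returns kappa (T, R): the latent variables, i.e. projections of phi(x) onto n.
    """
    N, R = m.shape
    T = inputs.shape[0]
    x = np.zeros(N)
    kappa = np.zeros((T, R))
    for t in range(T):
        r = np.tanh(x)                 # firing rates phi(x)
        kappa[t] = n.T @ r / N         # latent variables of the low-rank dynamics
        dx = (-x + m @ kappa[t] + inputs[t]) / tau
        x = x + dt * dx
    return kappa

# Example: a rank-2 network with random Gaussian connectivity vectors.
rng = np.random.default_rng(0)
N, R, T = 200, 2, 500
m = rng.standard_normal((N, R))
n = rng.standard_normal((N, R))
inputs = 0.1 * rng.standard_normal((T, N))
kappa = simulate_lr_rnn(m, n, inputs)
print(kappa.shape)  # (500, 2): recurrent dynamics confined to a 2D latent space
```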
Universal function approximation in balanced spiking networks through convex-concave boundary composition
The spike-threshold nonlinearity is a fundamental, yet enigmatic, component of biological computation: despite its role in many theories, it has evaded definitive characterisation. Indeed, much classic work has sought to sidestep spiking altogether, either by smoothing over the spike threshold or by approximating spiking dynamics with firing-rate dynamics. Here, we take a novel perspective that captures the full potential of spike-based computation. Building on previous studies of the geometry of efficient spike-coding networks, we consider a population of neurons with low-rank connectivity, allowing us to cast each neuron's threshold as a boundary in a space of population modes, or latent variables. Each neuron divides this latent space into subthreshold and suprathreshold regions. We then demonstrate how a network of inhibitory (I) neurons forms a convex, attracting boundary in the latent coding space, whereas a network of excitatory (E) neurons forms a concave, repellent boundary. Finally, we show how the combination of the two yields stable dynamics at the crossing of the E and I boundaries, and can be mapped onto a constrained optimization problem. The resulting EI networks are balanced, inhibition-stabilized, and exhibit asynchronous irregular activity, thereby closely resembling cortical networks of the brain. Moreover, we demonstrate how such networks can be tuned to either suppress or amplify noise, and how the composition of inhibitory convex and excitatory concave boundaries can result in universal function approximation. Our work puts forth a new theory of biologically plausible computation in balanced spiking networks and could serve as a novel framework for scalable and interpretable computation with spikes.
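For readers unfamiliar with the spike-coding geometry referenced above, the following is a minimal sketch, under standard efficient spike-coding assumptions, of how low-rank decoding weights turn each neuron's threshold into a boundary in a low-dimensional latent space: a single (effectively inhibitory) population greedily spikes whenever the readout error crosses a neuron's boundary. The E/I boundary composition described in the abstract is not implemented here; `simulate_scn` and its parameters are illustrative.

```python
import numpy as np

def simulate_scn(D, x_target, dt=1e-3, lam=10.0):
    """Greedy spike-coding network: each neuron's threshold is a boundary in latent space.

    D : (K, N) decoding weights; neuron i spikes when the error projection
        D[:, i] @ (x - x_hat) exceeds its threshold ||D[:, i]||^2 / 2.
    x_target : (T, K) latent signal to be tracked by the readout x_hat.
    Returns spikes (T, N) and readout (T, K).
    """
    K, N = D.shape
    T = x_target.shape[0]
    x_hat = np.zeros(K)
    thresholds = 0.5 * np.sum(D**2, axis=0)
    spikes = np.zeros((T, N))
    readout = np.zeros((T, K))
    for t in range(T):
        V = D.T @ (x_target[t] - x_hat)      # signed distance to each boundary
        i = np.argmax(V - thresholds)
        if V[i] > thresholds[i]:             # boundary crossed: neuron i spikes
            spikes[t, i] = 1.0
            x_hat = x_hat + D[:, i]          # spike kicks the readout back inside
        x_hat = x_hat + dt * (-lam * x_hat)  # readout leak between spikes
        readout[t] = x_hat
    return spikes, readout

# Track a slow 2D sinusoidal latent trajectory with 50 neurons.
rng = np.random.default_rng(1)
D = 0.1 * rng.standard_normal((2, 50))
t = np.arange(0, 2, 1e-3)
x_target = np.stack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)], axis=1)
spikes, readout = simulate_scn(D, x_target)
print(spikes.sum(), np.abs(readout - x_target).mean())
```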
Sex Differences in Learning from Exploration
Sex-based modulation of cognitive processes could set the stage for individual differences in vulnerability to neuropsychiatric disorders. While value-based decision-making processes in particular have been proposed to be influenced by sex differences, overall correct performance in decision-making tasks often shows variable or minimal differences across sexes. Computational tools allow us to uncover latent variables that define different decision-making approaches, even in animals with similar correct performance. Here, we quantify sex differences in mice in the latent variables underlying behavior in a classic value-based decision-making task: a restless two-armed bandit. While male and female mice had similar accuracy, they achieved this performance via different patterns of exploration. Male mice tended to make more exploratory choices overall, largely because they appeared to get ‘stuck’ in exploration once they had started. Female mice tended to explore less but learned more quickly during exploration. Together, these results suggest that sex exerts stronger influences on decision making during periods of learning and exploration than during stable choices. Exploration during decision making is altered in people diagnosed with addictions, depression, and neurodevelopmental disabilities, pinpointing the neural mechanisms of exploration as a highly translational avenue for understanding sex-modulated vulnerability to neuropsychiatric diagnoses.
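The latent variables here come from computational models fit to choice behavior. As a hedged illustration (not the authors' analysis pipeline), the sketch below simulates a delta-rule learner with softmax choice on a restless two-armed bandit, showing how agents with similar reward rates can differ in learning rate and exploration; the function `restless_bandit_session` and its parameter values are hypothetical.

```python
import numpy as np

def restless_bandit_session(alpha, beta, n_trials=300, drift=0.05, seed=0):
    """Simulate a delta-rule learner with softmax choice on a restless two-armed bandit.

    alpha : learning rate (how quickly values update from reward feedback).
    beta  : inverse temperature (higher = more exploitative, lower = more exploratory).
    Reward probabilities of both arms drift as bounded random walks.
    Returns choices, rewards, and the latent value estimates Q over trials.
    """
    rng = np.random.default_rng(seed)
    p_reward = np.array([0.5, 0.5])
    Q = np.zeros(2)
    choices = np.zeros(n_trials, dtype=int)
    rewards = np.zeros(n_trials)
    values = np.zeros((n_trials, 2))
    for t in range(n_trials):
        p_choice = np.exp(beta * Q) / np.exp(beta * Q).sum()  # softmax policy
        c = rng.choice(2, p=p_choice)
        r = float(rng.random() < p_reward[c])
        Q[c] += alpha * (r - Q[c])                            # delta-rule value update
        choices[t], rewards[t], values[t] = c, r, Q
        p_reward = np.clip(p_reward + drift * rng.standard_normal(2), 0.05, 0.95)
    return choices, rewards, values

# Two agents with similar average accuracy can differ in latent parameters,
# e.g. faster learning (higher alpha) vs. more exploration (lower beta).
for alpha, beta in [(0.7, 3.0), (0.3, 1.5)]:
    _, rewards, _ = restless_bandit_session(alpha, beta)
    print(alpha, beta, rewards.mean())
```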
Exploring feedforward and feedback communication between visual cortical areas with DLAG
Technological advances have increased the availability of recordings from large populations of neurons across multiple brain areas. By coupling these recordings with dimensionality reduction techniques, recent work has produced new proposals for how populations of neurons can send and receive signals selectively and flexibly. Advancement of these proposals depends, however, on untangling the bidirectional, parallel communication between neuronal populations. Because current data analytic tools struggle with this task, we have recently presented and validated a novel dimensionality reduction framework: DLAG, or Delayed Latents Across Groups. DLAG decomposes the time-varying activity in each area into within- and across-area latent variables. Across-area variables can be decomposed further into feedforward and feedback components using automatically estimated time delays. In this talk, I will review the DLAG framework. Then I will discuss new insights into the moment-by-moment nature of feedforward and feedback communication between visual cortical areas V1 and V2 of macaque monkeys. Overall, this work lays the foundation for dissecting the dynamic flow of signals across populations of neurons, and how it might change across brain areas and behavioral contexts.
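DLAG itself is a probabilistic dimensionality reduction model fit to neural recordings; the sketch below does not implement it, but illustrates the assumed generative structure: each area's activity combines within-area latents with across-area latents that reach the second area after a time delay. The function `dlag_style_sample` and all dimensions are illustrative.

```python
import numpy as np

def dlag_style_sample(T=200, n1=40, n2=30, d_across=2, d_within=3,
                      delay_samples=5, noise_sd=0.1, seed=0):
    """Sample population activity with a DLAG-style structure (illustrative only).

    A shared across-area latent drives area 1 immediately and area 2 after a fixed
    delay (a feedforward interaction); within-area latents are private to each area.
    Returns (T, n1) and (T, n2) activity matrices.
    """
    rng = np.random.default_rng(seed)

    def smooth_latents(d):
        # Smooth latent time courses: random walks filtered with a boxcar kernel.
        z = np.cumsum(rng.standard_normal((T + 50, d)), axis=0)
        kernel = np.ones(20) / 20
        return np.apply_along_axis(lambda v: np.convolve(v, kernel, mode="same"), 0, z)

    z_across = smooth_latents(d_across)
    z_within1, z_within2 = smooth_latents(d_within)[:T], smooth_latents(d_within)[:T]
    # Area 2 observes the across-area latent with a time delay (feedforward direction).
    z_a1 = z_across[50:50 + T]
    z_a2 = z_across[50 - delay_samples:50 - delay_samples + T]
    C1a, C1w = rng.standard_normal((n1, d_across)), rng.standard_normal((n1, d_within))
    C2a, C2w = rng.standard_normal((n2, d_across)), rng.standard_normal((n2, d_within))
    y1 = z_a1 @ C1a.T + z_within1 @ C1w.T + noise_sd * rng.standard_normal((T, n1))
    y2 = z_a2 @ C2a.T + z_within2 @ C2w.T + noise_sd * rng.standard_normal((T, n2))
    return y1, y2

y1, y2 = dlag_style_sample()
print(y1.shape, y2.shape)  # (200, 40) (200, 30)
```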
Dimensions of variability in circuit models of cortex
Cortical circuits receive multiple inputs from upstream populations with non-overlapping stimulus tuning preferences. Both the feedforward and recurrent architectures of the receiving cortical layer will reflect this diverse input tuning. We study how population-wide neuronal variability propagates through a hierarchical cortical network receiving multiple, independent, tuned inputs. We present a new analysis of in vivo neural data from the primate visual system showing that the number of latent variables (dimension) needed to describe population shared variability is smaller in V4 populations than in its downstream area PFC. We successfully reproduce this dimensionality expansion from V4 to PFC using a multi-layer spiking network with structured feedforward projections and recurrent assemblies of multiple, tuned neuron populations. We show that tuning-structured connectivity generates attractor dynamics within the recurrent PFC circuit, where attractor competition is reflected in the high-dimensional shared variability across the population. Indeed, restricting the dimensionality analysis to activity from a single attractor state recovers the low-dimensional structure inherited from each of the tuned inputs. Our model thus introduces a framework in which high-dimensional cortical variability is understood as "time-sharing" between distinct low-dimensional, tuning-specific circuit dynamics.
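As a minimal sketch of the kind of dimensionality measurement described above (one common convention, not necessarily the authors' exact method), the code below estimates the dimension of shared variability with factor analysis plus a participation ratio, and shows on toy data how pooling activity across two "attractor" states inflates dimensionality relative to restricting to one state; all names and parameters are illustrative.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

def shared_dimensionality(spike_counts, n_factors=10):
    """Estimate the dimensionality of shared variability in a (trials, neurons) matrix.

    Factor analysis separates shared covariance (loadings) from private noise;
    the shared covariance eigenspectrum is summarized by its participation ratio.
    """
    fa = FactorAnalysis(n_components=n_factors).fit(spike_counts)
    shared_cov = fa.components_.T @ fa.components_   # shared part of the covariance
    eig = np.clip(np.linalg.eigvalsh(shared_cov), 0, None)
    return eig.sum() ** 2 / (eig ** 2).sum()         # participation ratio

# Toy example: variability generated by switching ("time-sharing") between two
# low-dimensional states looks higher-dimensional when pooled across states.
rng = np.random.default_rng(0)
trials, neurons = 400, 60
loadings_a = rng.standard_normal((neurons, 2))
loadings_b = rng.standard_normal((neurons, 2))
state = rng.integers(0, 2, trials)
latents = rng.standard_normal((trials, 2))
shared = np.where(state[:, None] == 0, latents @ loadings_a.T, latents @ loadings_b.T)
counts = shared + 0.5 * rng.standard_normal((trials, neurons))
print(shared_dimensionality(counts))              # pooled across both states
print(shared_dimensionality(counts[state == 0]))  # restricted to one "attractor" state
```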