Population Coding
Are place cells just memory cells? Probably yes
Neurons in the rodent hippocampus appear to encode the position of the animal in physical space during movement. Individual "place cells" fire in restricted sub-regions of an environment, a feature often taken as evidence that the hippocampus encodes a map of space that subserves navigation. But these same neurons exhibit complex responses to many other variables that defy explanation by position alone, and the hippocampus is known to be more broadly critical for memory formation. Here we elaborate and test a theory of hippocampal coding which produces place cells as a general consequence of efficient memory coding. We constructed neural networks that actively exploit the correlations between memories in order to learn compressed representations of experience. Place cells readily emerged in the trained model, due to the correlations in sensory input between experiences at nearby locations. Notably, these properties were highly sensitive to the compressibility of the sensory environment, with place field size and population coding level in dynamic opposition to optimally encode the correlations between experiences. The effects of learning were also strongly biphasic: nearby locations are represented more similarly following training, while locations with intermediate similarity become increasingly decorrelated, both distance-dependent effects that scaled with the compressibility of the input features. Using virtual reality and 2-photon functional calcium imaging in head-fixed mice, we recorded the simultaneous activity of thousands of hippocampal neurons during virtual exploration to test these predictions. Varying the compressibility of sensory information in the environment produced systematic changes in place cell properties that reflected the changing input statistics, consistent with the theory. We similarly identified representational plasticity during learning, which produced a distance-dependent exchange between compression and pattern separation. These results motivate a more domain-general interpretation of hippocampal computation, one that is naturally compatible with earlier theories on the circuit's importance for episodic memory formation. Work done in collaboration with James Priestley, Lorenzo Posani, Marcus Benna, Attila Losonczy.
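The abstract does not specify the network architecture, but the core intuition can be illustrated with a toy sketch: an undercomplete autoencoder trained to reconstruct sensory inputs that vary smoothly with position (and are therefore correlated across nearby locations) can develop hidden units with spatially restricted tuning. The smoothness parameter, network size, and sklearn-based training below are illustrative assumptions, not the authors' model.

```python
# Toy sketch (not the authors' model): compress position-correlated sensory
# input through a bottleneck and inspect the spatial tuning of hidden units.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_pos, n_features, sigma = 200, 100, 0.05   # positions on a 1D track, sensory channels, feature smoothness

# Each sensory feature varies smoothly with position, so inputs at nearby
# positions are correlated (i.e. the experience is compressible).
positions = np.linspace(0, 1, n_pos)
centers = rng.uniform(0, 1, n_features)
X = np.exp(-(positions[:, None] - centers[None, :]) ** 2 / (2 * sigma ** 2))
X += 0.05 * rng.standard_normal(X.shape)

# Undercomplete autoencoder: reconstruct the input through a small bottleneck.
ae = MLPRegressor(hidden_layer_sizes=(20,), activation='relu',
                  max_iter=5000, random_state=0)
ae.fit(X, X)

# Tuning curves of the bottleneck units as a function of position.
hidden = np.maximum(0, X @ ae.coefs_[0] + ae.intercepts_[0])
active = hidden > 0.5 * hidden.max(axis=0)          # above half of each unit's peak
print("median 'field' width (fraction of track):", np.median(active.mean(axis=0)))
```

In this toy setting, making the features smoother or sharper (sigma) changes the compressibility of the input and with it the width of the resulting fields, the kind of dependence the abstract describes; whether localized tuning actually emerges depends on the architecture and input statistics.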
Trading Off Performance and Energy in Spiking Networks
Many engineered and biological systems must trade off performance and energy use, and the brain is no exception. While there are theories on how activity levels are controlled in biological networks through feedback control (homeostasis), it is not clear how this affects population coding, and therefore how performance and energy can be traded off. In this talk we consider this trade-off in auto-encoding networks, in which there is a clear definition of performance (the coding loss). We first show that spiking neural networks (SNNs) follow a characteristic trade-off curve between activity level and coding loss, but that standard networks need to be retrained to reach different points on that curve. We then formalize the trade-off with a joint loss function incorporating coding loss (performance) and activity loss (energy use). From this loss we derive a class of spiking networks that coordinate their spiking to minimize both terms, and as a result can dynamically adjust their coding precision and energy use. These networks rely on several known activity-control mechanisms, threshold adaptation and feedback inhibition, which elucidates the potential function of these mechanisms within neural circuits. Using geometric intuition, we demonstrate how they regulate coding precision, and thereby performance. Lastly, we consider how these insights could be transferred to trained SNNs. Overall, this work addresses a key energy-coding trade-off that is often overlooked in network studies, expands our understanding of homeostasis in biological SNNs, and provides a clear framework for considering performance and energy use in artificial SNNs.
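The abstract leaves the exact form of the joint loss unspecified; the sketch below is one common way to write such an objective (an illustrative assumption, not necessarily the formulation used in the talk):

\[
\mathcal{L} \;=\; \underbrace{\lVert x(t) - D\,r(t)\rVert_2^2}_{\text{coding loss (performance)}} \;+\; \underbrace{\mu\,\lVert r(t)\rVert_1}_{\text{activity loss (energy)}},
\]

where \(x(t)\) is the signal to be encoded, \(r(t)\) the filtered spike trains, \(D\) a linear decoder, and \(\mu\) the weight that selects a point on the performance-energy trade-off curve. The derivation of spiking networks that minimize both terms presumably starts from an objective of this kind.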
Spatial uncertainty provides a unifying account of navigation behavior and grid field deformations
To localize ourselves in an environment for spatial navigation, we rely on vision and self-motion inputs, which only provide noisy and partial information. It is unknown how the resulting uncertainty affects navigation behavior and neural representations. Here we show that spatial uncertainty underlies key effects of environmental geometry on navigation behavior and grid field deformations. We develop an ideal observer model, which continually updates probabilistic beliefs about its allocentric location by optimally combining noisy egocentric visual and self-motion inputs via Bayesian filtering. This model directly yields predictions for navigation behavior and also predicts neural responses under population coding of location uncertainty. We simulate this model numerically under manipulations of a major source of uncertainty, environmental geometry, and support our simulations by analytic derivations for its most salient qualitative features. We show that our model correctly predicts a wide range of experimentally observed effects of the environmental geometry and its change on homing response distribution and grid field deformation. Thus, our model provides a unifying, normative account for the dependence of homing behavior and grid fields on environmental geometry, and identifies the unavoidable uncertainty in navigation as a key factor underlying these diverse phenomena.
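As a minimal illustration of the filtering step (a 1D toy, not the full allocentric model with environmental geometry described above), a Kalman filter fusing noisy self-motion with noisy visual observations already captures how location uncertainty grows during path integration and shrinks when visual input arrives; the noise values and the landmark observation model below are arbitrary placeholders.

```python
# Minimal 1D sketch: Bayesian filtering that fuses noisy self-motion
# (prediction) with noisy visual observations (correction), maintaining both
# a position estimate and its uncertainty.
import numpy as np

rng = np.random.default_rng(1)
q, r = 0.02, 0.10          # self-motion (process) and visual (observation) noise variances
mu, var = 0.0, 1.0         # initial belief over position: mean and variance
true_pos = 0.0

for t in range(100):
    v = 0.1                                     # intended displacement this step
    true_pos += v + rng.normal(0, np.sqrt(q))   # actual (noisy) movement
    z = true_pos + rng.normal(0, np.sqrt(r))    # noisy visual landmark reading

    # Predict: propagate the belief through self-motion; uncertainty grows.
    mu, var = mu + v, var + q
    # Update: combine with vision; the Kalman gain weights the two sources.
    k = var / (var + r)
    mu, var = mu + k * (z - mu), (1 - k) * var

print(f"final estimate {mu:.2f} (true {true_pos:.2f}), posterior sd {np.sqrt(var):.3f}")
```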
Population coding in the cerebellum: a machine learning perspective
The cerebellum resembles a feedforward, three-layer network of neurons in which the “hidden layer” consists of Purkinje cells (P-cells) and the output layer consists of deep cerebellar nucleus (DCN) neurons. In this analogy, the output of each DCN neuron is a prediction that is compared with the actual observation, resulting in an error signal that originates in the inferior olive. Efficient learning requires that the error signal reach the DCN neurons, as well as the P-cells that project onto them. However, this basic rule of learning is violated in the cerebellum: the olivary projections to the DCN are weak, particularly in adulthood. Instead, an extraordinarily strong signal is sent from the olive to the P-cells, producing complex spikes. Curiously, P-cells are grouped into small populations that converge onto single DCN neurons. Why are the P-cells organized in this way, and what is the membership criterion of each population? Here, I apply elementary mathematics from machine learning and consider the fact that the P-cells forming a population exhibit a special property: they can synchronize their complex spikes, which in turn suppresses the activity of the DCN neuron they project to. Thus complex spikes can act not only as a teaching signal for individual P-cells; through complex-spike synchrony, a P-cell population may also act as a surrogate teacher for the DCN neuron that produced the erroneous output. It appears that grouping P-cells into small populations that share a preference for error satisfies a critical requirement of efficient learning: providing error information to the output-layer neuron (DCN) that was responsible for the error, as well as to the hidden-layer neurons (P-cells) that contributed to it. This population coding may account for several remarkable features of behavior during learning, including multiple timescales, protection from erasure, and spontaneous recovery of memory.
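One illustrative reading of the "elementary mathematics" invoked above (a sketch under standard gradient-descent assumptions, not necessarily the speaker's exact derivation): for hidden P-cell activity \(h = f(W_{\mathrm{in}} x)\), DCN output \(y = W_{\mathrm{out}} h\), and loss \(L = \tfrac{1}{2}\lVert y^{*} - y\rVert^{2}\), the weight gradients are

\[
\frac{\partial L}{\partial W_{\mathrm{out}}} = -\,e\,h^{\top},
\qquad
\frac{\partial L}{\partial W_{\mathrm{in}}} = -\big(W_{\mathrm{out}}^{\top} e \odot f'(W_{\mathrm{in}} x)\big)\,x^{\top},
\qquad
e = y^{*} - y,
\]

so efficient learning requires the same error signal \(e\) at both the output-layer (DCN) and hidden-layer (P-cell) weights, which is exactly the requirement that the weak olivo-DCN projection seems to violate and that complex-spike synchrony within a P-cell population could restore.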
From single cell to population coding during defensive behaviors in prefrontal circuits
Coping with threatening situations requires both identifying stimuli predicting danger and selecting adaptive behavioral responses in order to survive. The dorso medial prefrontal cortex (dmPFC) is a critical structure involved in the regulation of threat-related behaviour, yet it is still largely unclear how threat-predicting stimuli and defensive behaviours are associated within prefrontal networks in order to successfully drive adaptive responses. Over the past years, we have used a combination of extracellular recordings, neuronal decoding approaches, and state-of-the-art optogenetic manipulations to identify key neuronal elements and mechanisms controlling defensive fear responses. I will present an overview of our recent work, ranging from analyses of dedicated neuronal types and oscillatory and synchronization mechanisms to artificial intelligence approaches used to decode the activity of large populations of neurons. Ultimately, these analyses allowed the identification of high-dimensional representations of defensive behavior unfolding within prefrontal networks.
Does human perception rely on probabilistic message passing?
The idea that perception in humans relies on some form of probabilistic computation has become very popular over the last few decades. It has, however, been extremely difficult to characterize the extent and the nature of the probabilistic representations and operations that are manipulated by neural populations in the human cortex. Several theoretical works suggest that probabilistic representations are present from low-level sensory areas to high-level areas. According to this view, the neural dynamics implement some form of probabilistic message passing (e.g., neural sampling or probabilistic population coding) that solves the problem of perceptual inference. Here I will present recent experimental evidence that human and non-human primate perception implements some form of message passing. I will first review findings showing probabilistic integration of sensory evidence across space and time in primate visual cortex. Second, I will show that confidence reports in a hierarchical task reveal that uncertainty is represented at both lower and higher levels, in a way that is consistent with probabilistic message passing both from lower to higher and from higher to lower representations. Finally, I will present behavioral and neural evidence that human perception takes into account pairwise correlations in sequences of sensory samples, in agreement with the message-passing hypothesis and against standard accounts such as accumulation of sensory evidence or predictive coding.
Dynamical population coding during defensive behaviours in prefrontal circuits
Coping with threatening situations requires both identifying stimuli predicting danger and selecting adaptive behavioral responses in order to survive. The dorso medial prefrontal cortex (dmPFC) is a critical structure involved in the regulation of threat-related behaviour, yet it is still largely unclear how threat-predicting stimuli and defensive behaviours are associated within prefrontal networks in order to successfully drive adaptive responses. To address these questions, we used a combination of extracellular recordings, neuronal decoding approaches, and optogenetic manipulations to show that threat representations and the initiation of avoidance behaviour are dynamically encoded in the overall population activity of dmPFC neurons. These data indicate that although dmPFC population activity at stimulus onset encodes sustained threat representations and discriminates threat- from non-threat cues, it does not predict action outcome. In contrast, transient dmPFC population activity prior to action initiation reliably predicts avoided from non-avoided trials. Accordingly, optogenetic inhibition of prefrontal activity critically constrained the selection of adaptive defensive responses in a time-dependent manner. These results reveal that the adaptive selection of active fear responses relies on a dynamic process of information linking threats with defensive actions unfolding within prefrontal networks.
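As an illustration of the generic decoding logic (synthetic placeholder data and standard sklearn tools, not the authors' recordings or pipeline), a time-resolved, cross-validated linear decoder shows how population activity can be uninformative about the upcoming action early in the trial and become predictive only in a transient window before action initiation.

```python
# Generic time-resolved population decoding sketch (placeholder data): at each
# time bin, a cross-validated linear decoder predicts the trial label from the
# population activity vector.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_neurons, n_bins = 80, 50, 20
labels = rng.integers(0, 2, n_trials)                    # e.g. avoided vs non-avoided
activity = rng.standard_normal((n_trials, n_neurons, n_bins))
# Placeholder structure: the label modulates a subset of neurons only in late
# time bins, mimicking transient coding just before action initiation.
activity[labels == 1, :10, 12:] += 0.8

accuracy = [cross_val_score(LogisticRegression(max_iter=1000),
                            activity[:, :, t], labels, cv=5).mean()
            for t in range(n_bins)]
print(np.round(accuracy, 2))    # rises above chance only in the late bins
```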
The properties of large receptive fields as an explanation of ensemble statistical representation: A population coding model
High precision coding in visual cortex
Individual neurons in visual cortex provide the brain with unreliable estimates of visual features. It is not known if the single-neuron variability is correlated across large neural populations, thus impairing the global encoding of stimuli. We recorded simultaneously from up to 50,000 neurons in mouse primary visual cortex (V1) and in higher-order visual areas and measured stimulus discrimination thresholds of 0.35 degrees and 0.37 degrees respectively in an orientation decoding task. These neural thresholds were almost 100 times smaller than the behavioral discrimination thresholds reported in mice. This discrepancy could not be explained by stimulus properties or arousal states. Furthermore, the behavioral variability during a sensory discrimination task could not be explained by neural variability in primary visual cortex. Instead behavior-related neural activity arose dynamically across a network of non-sensory brain areas. These results imply that sensory perception in mice is limited by downstream decoders, not by neural noise in sensory representations.
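A sketch of how such a neural discrimination threshold can be estimated (simulated tuned neurons and arbitrary noise levels, not the recorded data): decode which of two nearby orientations was shown and find the smallest angular difference that is decoded above a chosen criterion.

```python
# Illustrative sketch: orientation discrimination threshold from population
# decoding of simulated, noisy, orientation-tuned responses.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_neurons, n_trials = 500, 200
pref = rng.uniform(0, 180, n_neurons)                 # preferred orientations (deg)

def population_response(theta, n_trials):
    rad = np.deg2rad(2 * (theta - pref))
    tuning = np.exp(2.0 * (np.cos(rad) - 1.0))        # von Mises-like tuning curves
    return tuning + 0.3 * rng.standard_normal((n_trials, n_neurons))

def accuracy(dtheta):
    # Cross-validated decoding of which of two nearby orientations was shown.
    X = np.vstack([population_response(45.0, n_trials),
                   population_response(45.0 + dtheta, n_trials)])
    y = np.r_[np.zeros(n_trials), np.ones(n_trials)]
    return cross_val_score(LogisticRegression(max_iter=2000), X, y, cv=5).mean()

for dtheta in [0.25, 0.5, 1.0, 2.0, 4.0]:
    print(f"delta = {dtheta:4.2f} deg, decoding accuracy = {accuracy(dtheta):.2f}")
# The neural threshold is read off as the smallest delta decoded above a
# chosen criterion (e.g. ~70-80% correct).
```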
Dimensions of variability in circuit models of cortex
Cortical circuits receive multiple inputs from upstream populations with non-overlapping stimulus tuning preferences. Both the feedforward and recurrent architectures of the receiving cortical layer will reflect this diverse input tuning. We study how population-wide neuronal variability propagates through a hierarchical cortical network receiving multiple, independent, tuned inputs. We present new analysis of in vivo neural data from the primate visual system showing that the number of latent variables (dimension) needed to describe population shared variability is smaller in V4 populations compared to those of its downstream area PFC. We successfully reproduce this dimensionality expansion from our V4 to PFC neural data using a multi-layer spiking network with structured, feedforward projections and recurrent assemblies of multiple, tuned neuron populations. We show that tuning-structured connectivity generates attractor dynamics within the recurrent PFC circuit, where attractor competition is reflected in the high-dimensional shared variability across the population. Indeed, restricting the dimensionality analysis to activity from one attractor state recovers the low-dimensional structure inherited from each of our tuned inputs. Our model thus introduces a framework where high-dimensional cortical variability is understood as "time-sharing" between distinct low-dimensional, tuning-specific circuit dynamics.
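A generic sketch of the dimensionality measurement (synthetic data; the study's actual estimator is not specified in the abstract): fit factor analysis models with increasing numbers of latent variables and take the dimension that maximizes the cross-validated likelihood of the shared variability.

```python
# Generic sketch: estimate the dimensionality of shared variability with
# factor analysis, choosing the number of latent factors by cross-validation.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_neurons, true_dim = 300, 40, 3
latents = rng.standard_normal((n_trials, true_dim))        # shared latent variables
loading = rng.standard_normal((true_dim, n_neurons))
counts = latents @ loading + rng.standard_normal((n_trials, n_neurons))  # private noise

scores = []
for d in range(1, 9):
    fa = FactorAnalysis(n_components=d)
    scores.append(cross_val_score(fa, counts, cv=5).mean())  # mean held-out log-likelihood
best_dim = int(np.argmax(scores)) + 1
print("estimated dimensionality of shared variability:", best_dim)
```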
Cortical population coding of consumption decisions
The moment that a tasty substance enters an animal’s mouth, the clock starts ticking. Taste information transduced on the tongue signals whether a potential food will nourish or poison, and the animal must therefore use this information quickly if it is to decide whether the food should be swallowed or expelled. The system tasked with computing this important decision is rife with cross-talk and feedback—circuitry that all but ensures dynamics and between-neuron coupling in neural responses to tastes. In fact, cortical taste responses, rather than simply reporting individual taste identities, do contain characterizable dynamics: taste-driven firing first reflects the substance’s presence on the tongue, and then broadly codes taste quality, and then shifts again to correlate with the taste’s current palatability—the basis of consumption decisions—all across the 1-1.5 seconds after taste administration. Ensemble analyses reveal the onset of palatability-related firing to be a sudden, nonlinear transition happening in many neurons simultaneously, such that it can be reliably detected in single trials. This transition faithfully predicts both the nature and timing of consumption behaviours, despite the huge trial-to-trial variability in both; furthermore, perturbations of this transition interfere with production of the behaviours. These results demonstrate the specific importance of ensemble dynamics in the generation of behaviour, and reveal the taste system to be akin to a range of other integrated sensorimotor systems.
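A simplified sketch of single-trial transition detection (simulated spike counts; the actual ensemble analyses are richer than this): assume one coordinated rate change shared across the recorded neurons and pick the time bin that maximizes the ensemble Poisson likelihood.

```python
# Simplified single-trial change-point sketch: find the bin at which the summed
# Poisson log-likelihood across neurons, assuming one coordinated rate change,
# is maximized.
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(0)
n_neurons, n_bins, true_cp = 12, 60, 35
rates = np.where(np.arange(n_bins) < true_cp, 2.0, 6.0)   # shared sudden rate step
counts = rng.poisson(rates, size=(n_neurons, n_bins))     # simulated single trial

def loglik(counts, t):
    # Fit separate mean rates before/after candidate change point t, per neuron.
    ll = 0.0
    for seg in (counts[:, :t], counts[:, t:]):
        lam = seg.mean(axis=1, keepdims=True).clip(min=1e-9)
        ll += poisson.logpmf(seg, lam).sum()
    return ll

candidates = range(5, n_bins - 5)
cp = max(candidates, key=lambda t: loglik(counts, t))
print("estimated transition bin:", cp, "(true:", true_cp, ")")
```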