Efficient Coding
Prof. Alan A. Stocker
The position is part of the ongoing NSF-funded project ‘Choice-induced biases in human decision-making’ in collaboration with the laboratory of Tobias Donner at the University Medical Center Hamburg, Germany. The goal of the project is to understand how decisions influence both the memory of past evidence (consistency bias) and the evaluation of future evidence (confirmation bias) in human decision-making. The project employs a highly interdisciplinary approach that combines psychophysical and functional neuroimaging (MEG) experiments with theory and computational modeling.
Dr. Fleur Zeldenrust
For the Vidi project ‘Top-down neuromodulation and bottom-up network computation,’ we seek a postdoc to study neuromodulators in efficient spike-coding networks. Using our lab’s data on dopamine, acetylcholine, and serotonin from the mouse barrel cortex, you’ll derive models connecting single cells, networks, and behavior. The aim of this project is to explain the effects of neuromodulation on task performance in biologically realistic spiking recurrent neural networks (SRNNs). You will use the efficient spike coding framework, in which a network is not trained by a learning paradigm but derived from mathematically rigorous rules that enforce efficient coding (i.e., maximally informative spikes). You will study how the network’s structural properties, such as neural heterogeneity, influence decoding performance and efficiency. You will incorporate realistic network properties of the (barrel) cortex based on our lab’s measurements, together with the cellular effects of dopamine, acetylcholine, and serotonin that we have measured over the past years, to investigate their effects on representations, network activity measures such as dimensionality, and decoding performance. You will build on the single-cell data, network models, and analysis methods available in our group, and your results will feed into our group’s further research to develop and validate efficient coding models of (somatosensory) perception. We are therefore looking for a team player who is willing to learn from the other group members and to share their knowledge with them.
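For concreteness, a minimal sketch of the efficient spike coding idea, under simplifying assumptions (a greedy rule with at most one spike per time step, random decoding weights, and arbitrary toy constants; an illustration of the framework, not the lab's actual model):

```python
import numpy as np

rng = np.random.default_rng(0)

N, K = 20, 2                    # neurons, signal dimensions (toy sizes)
dt, T, lam = 1e-3, 2.0, 10.0    # time step (s), duration (s), readout leak (1/s)
steps = int(T / dt)

D = rng.normal(0.0, 0.1, size=(K, N))   # random decoding weights (heterogeneity)
thresh = np.sum(D**2, axis=0) / 2       # spike iff it reduces the decoding error

t = np.arange(steps) * dt
x = np.stack([np.sin(2 * np.pi * 1.0 * t), np.cos(2 * np.pi * 0.5 * t)])

x_hat = np.zeros(K)
x_hat_tr = np.zeros((K, steps))
spikes = np.zeros((N, steps), dtype=bool)
for k in range(steps):
    # each neuron's "voltage" is the decoding error projected onto its weights
    V = D.T @ (x[:, k] - x_hat)
    i = int(np.argmax(V - thresh))
    if V[i] > thresh[i]:                # greedy rule: at most one spike per step
        spikes[i, k] = True
        x_hat = x_hat + D[:, i]         # a spike instantly updates the readout
    x_hat = x_hat * (1.0 - lam * dt)    # leaky readout between spikes
    x_hat_tr[:, k] = x_hat

rmse = np.sqrt(np.mean((x - x_hat_tr) ** 2))
rate = spikes.sum() / (N * T)
print(f"decoding RMSE: {rmse:.3f}, mean rate: {rate:.1f} spikes/neuron/s")
```

Each threshold equals half the squared norm of the neuron's decoding weight, so a spike fires exactly when it reduces the decoding error; heterogeneity enters through the columns of D.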
Reimagining the neuron as a controller: A novel model for Neuroscience and AI
We build upon and expand the efficient coding and predictive information models of neurons, presenting a novel perspective that neurons not only predict but also actively influence their future inputs through their outputs. We introduce the concept of neurons as feedback controllers of their environments, a role traditionally considered computationally demanding, particularly when the dynamical system characterizing the environment is unknown. By harnessing a novel data-driven control framework, we illustrate the feasibility of biological neurons functioning as effective feedback controllers. This innovative approach enables us to coherently explain various experimental findings that previously seemed unrelated. Our research has profound implications, potentially revolutionizing the modeling of neuronal circuits and paving the way for the creation of alternative, biologically inspired artificial neural networks.
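As an illustration of what data-driven control of an unknown environment can mean, here is a minimal sketch in the style of Willems' fundamental lemma (the toy plant, horizons, and the plain least-squares solve are assumptions for illustration; the paper's actual framework may differ, and practical variants add regularization and slack terms):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy unknown plant (the "environment"); the controller never uses A, B, C.
A = np.array([[0.9, 0.2], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

def simulate(u, x0):
    """Roll the plant forward under input sequence u; return outputs, final state."""
    x, ys = x0.copy(), []
    for uk in u:
        ys.append((C @ x).item())
        x = A @ x + B.ravel() * uk
    return np.array(ys), x

# 1) Record one persistently exciting input-output trajectory.
Td = 200
u_d = rng.normal(size=Td)
y_d, _ = simulate(u_d, np.zeros(2))

# 2) Hankel matrices of depth L = T_ini + N_f, split into past and future.
T_ini, N_f = 4, 10
L = T_ini + N_f
def hankel(w):
    return np.stack([w[i:i + L] for i in range(len(w) - L + 1)], axis=1)
U, Y = hankel(u_d), hankel(y_d)
U_p, U_f, Y_p, Y_f = U[:T_ini], U[T_ini:], Y[:T_ini], Y[T_ini:]

# 3) Recent history implicitly pins down the unknown internal state.
x0 = np.array([1.0, -0.5])
u_ini = np.zeros(T_ini)
y_ini, x_now = simulate(u_ini, x0)
y_ref = np.ones(N_f)                  # drive the output toward a setpoint of 1

# 4) Fundamental lemma: every plant trajectory is a linear combination of
#    Hankel columns; find the combination matching history and setpoint.
M = np.vstack([U_p, Y_p, Y_f])
b = np.concatenate([u_ini, y_ini, y_ref])
g, *_ = np.linalg.lstsq(M, b, rcond=None)
u_plan = U_f @ g                      # control inputs, with no model identified

y_out, _ = simulate(u_plan, x_now)
print("last outputs:", np.round(y_out[-3:], 3))   # should move toward 1
```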
Signatures of criticality in efficient coding networks
The critical brain hypothesis states that the brain can benefit from operating close to a second-order phase transition. While it has been shown that several computational aspects of sensory information processing (e.g., sensitivity to input) are optimal in this regime, it is still unclear whether these computational benefits of criticality can be leveraged by neural systems performing behaviorally relevant computations. To address this question, we investigate signatures of criticality in networks optimized to perform efficient encoding. We consider a network of leaky integrate-and-fire neurons with synaptic transmission delays and input noise. Previously, it was shown that the performance of such networks varies non-monotonically with the noise amplitude. Interestingly, we find that in the vicinity of the optimal noise level for efficient coding, the network dynamics exhibit signatures of criticality: the distribution of avalanche sizes follows a power law. When the noise amplitude is too low or too high for efficient coding, the network appears super-critical or sub-critical, respectively. This result suggests that two influential, and previously disparate, theories of neural processing optimization—efficient coding and criticality—may be intimately related.
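For reference, a sketch of the standard avalanche analysis alluded to here: extract avalanches from binned spike counts and fit a discrete power-law exponent by maximum likelihood (following the Clauset et al. 2009 approximation; the Poisson surrogate below is a negative control that should not look critical, and the bin size and s_min are assumptions):

```python
import numpy as np

def avalanche_sizes(spike_counts):
    """Avalanches = maximal runs of nonempty time bins; size = total spikes."""
    sizes, current = [], 0
    for c in spike_counts:
        if c > 0:
            current += c
        elif current > 0:
            sizes.append(current)
            current = 0
    if current > 0:
        sizes.append(current)
    return np.array(sizes)

def powerlaw_exponent(sizes, s_min=1):
    """Discrete power-law MLE exponent (Clauset et al. approximation)."""
    s = sizes[sizes >= s_min].astype(float)
    return 1.0 + len(s) / np.sum(np.log(s / (s_min - 0.5)))

# Usage on surrogate data (real analyses bin network spikes at, e.g., the mean ISI):
rng = np.random.default_rng(2)
counts = rng.poisson(0.5, size=100_000)    # stand-in for binned spike counts
sizes = avalanche_sizes(counts)
print(f"{len(sizes)} avalanches, fitted exponent ~ {powerlaw_exponent(sizes):.2f}")
```

Detecting criticality then amounts to comparing the power-law fit against alternatives (e.g., lognormal) and checking how the avalanche statistics change with the noise amplitude.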
A model of colour appearance based on efficient coding of natural images
An object’s colour, brightness and pattern are all influenced by its surroundings, and a number of visual phenomena and “illusions” have been discovered that highlight these often dramatic effects. Explanations for these phenomena range from low-level neural mechanisms to high-level processes that incorporate contextual information or prior knowledge. Importantly, few of these phenomena can currently be accounted for by models of an object’s perceived colour. Here we ask to what extent colour appearance is predicted by a model based on the principle of coding efficiency. The model assumes that the image is encoded by noisy spatio-chromatic filters at one-octave separations, which are either circularly symmetrical or oriented. Each spatial band’s lower threshold is set by the contrast sensitivity function, and the dynamic range of the band is a fixed multiple of this threshold, above which the response saturates. Filter outputs are then reweighted to give equal power in each channel for natural images. We demonstrate that the model fits human behavioural performance in psychophysics experiments, as well as primate retinal ganglion cell responses. Next we systematically test the model’s ability to qualitatively predict over 35 brightness and colour phenomena, with almost complete success. This implies that, contrary to high-level processing explanations, much of colour appearance is potentially attributable to simple mechanisms evolved for efficient coding of natural images, and it provides a basis for modelling the vision of humans and other animals.
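A minimal sketch of one such front end, under stated assumptions (difference-of-Gaussians bands at octave spacing, a placeholder threshold standing in for the real contrast sensitivity function, and per-image rather than natural-image-ensemble reweighting):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def band_responses(img, n_bands=5, dyn_mult=30.0):
    """Octave-spaced band-pass channels with a threshold/saturation
    nonlinearity, reweighted toward equal power (all constants illustrative)."""
    responses, prev = [], gaussian_filter(img, 0.5)
    for b in range(n_bands):
        cur = gaussian_filter(img, 0.5 * 2 ** (b + 1))
        band = prev - cur                          # one-octave band-pass
        t = 0.01 * 2 ** b                          # stand-in for a CSF threshold
        # response is zero below threshold t and saturates at dyn_mult * t
        r = np.sign(band) * np.clip((np.abs(band) - t) / (dyn_mult * t), 0.0, 1.0)
        r = r / np.sqrt(np.mean(r ** 2) + 1e-12)   # equalize channel power
        responses.append(r)
        prev = cur
    return responses

# Usage: a random "image" stands in for a natural scene here
img = np.random.default_rng(3).normal(size=(128, 128))
bands = band_responses(img)
print([f"{np.mean(b**2):.2f}" for b in bands])     # roughly equal power
```

In the full model the reweighting is set by the statistics of natural images, and chromatic as well as oriented channels are included; the sketch shows only the threshold-saturate-equalize logic for one achromatic pathway.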
Efficient Random Codes in a Shallow Neural Network
Efficient coding has served as a guiding principle in understanding the neural code. To date, however, it has been explored mainly in the context of peripheral sensory cells with simple tuning curves. By contrast, ‘deeper’ neurons such as grid cells come with more complex tuning properties, which imply a different, yet highly efficient, strategy for representing information. I will show that a highly efficient code is not specific to a population of neurons with finely tuned response properties: it emerges robustly in a shallow network with random synapses. Here, the geometry of population responses implies that optimality results from a tradeoff between two qualitatively different types of error: ‘local’ errors (common to classical neural population codes) and ‘global’ (or ‘catastrophic’) errors. This tradeoff leads to efficient compression of information from a high-dimensional representation to a low-dimensional one. After describing the theoretical framework, I will use it to re-interpret recordings of motor cortex in the behaving monkey. Our framework addresses the encoding of (sensory) information; if time allows, I will comment on ongoing work that focuses on decoding from the perspective of efficient coding.
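A toy illustration of the two error types, with assumed parameters (narrow random Gaussian tuning and a template-matching decoder; not the talk's actual network):

```python
import numpy as np

rng = np.random.default_rng(4)
N, width, noise = 12, 0.03, 0.3        # few neurons, narrow tuning, strong noise

centers = rng.uniform(0, 1, N)          # random preferred stimuli ("random synapses")
grid = np.linspace(0, 1, 1000)
templates = np.exp(-(grid[None, :] - centers[:, None])**2 / (2 * width**2))

def encode_decode(x):
    f = np.exp(-(x - centers)**2 / (2 * width**2))
    r = f + noise * rng.normal(size=N)                  # noisy population response
    return grid[np.argmin(np.sum((templates - r[:, None])**2, axis=0))]

errs = np.array([encode_decode(x) - x for x in rng.uniform(0, 1, 5000)])
local = np.abs(errs) < 5 * width
print(f"local RMSE: {errs[local].std():.4f}, global-error rate: {(~local).mean():.2f}")
```

Narrowing the tuning shrinks the local errors but leaves gaps in stimulus coverage, so noise occasionally matches the wrong template and produces a large, catastrophic error; the optimal code balances the two.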
The balance of excitation and inhibition and a canonical cortical computation
Excitatory and inhibitory (E & I) inputs to cortical neurons remain balanced across different conditions. The balanced network model provides a self-consistent account of this observation: population rates dynamically adjust to yield a state in which all neurons are active at biological levels, with their E & I inputs tightly balanced. But global tight E/I balance predicts population responses with linear stimulus-dependence and does not account for systematic cortical response nonlinearities such as divisive normalization, a canonical brain computation. However, when the connectivity conditions necessary for global balance fail, states arise in which only a localized subset of neurons are active and have balanced inputs. We show analytically that in networks of neurons with different stimulus selectivities, the emergence of such localized balance states robustly leads to normalization, including sublinear integration and winner-take-all behavior. An alternative model that exhibits normalization is the Stabilized Supralinear Network (SSN), which predicts a regime of loose, rather than tight, E/I balance. However, an understanding of the causal relationship between E/I balance and normalization in the SSN, and of the conditions under which the SSN yields significant sublinear integration, has been lacking. For weak inputs, the SSN integrates inputs supralinearly, while for very strong inputs it approaches a regime of tight balance. We show that when this latter regime is globally balanced, the SSN cannot exhibit strong normalization for any input strength; thus, in the SSN too, significant normalization requires localized balance. In summary, we causally and quantitatively connect a fundamental feature of cortical dynamics with a canonical brain computation. Time allowing, I will also cover our work extending a normative theoretical account of normalization, which explains it as an example of efficient coding of natural stimuli. We show that when biological noise is accounted for, this theory makes the same prediction as the SSN: a transition to supralinear integration for weak stimuli.
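A minimal two-population SSN exhibiting the weak-to-strong-input transition described above, with illustrative parameters (not values from the talk):

```python
import numpy as np

k, n = 0.04, 2.0                  # supralinear power-law gain: k * [v]_+^n
W = np.array([[1.0, -0.6],        # E<-E, E<-I
              [1.5, -0.8]])       # I<-E, I<-I
tau = np.array([0.020, 0.010])    # time constants (s)
dt = 1e-4

r = np.zeros(2)
prev = None
print(" input   E rate   ratio")
for c in [1, 2, 4, 8, 16, 32, 64, 128]:
    h = np.array([c, c], dtype=float)
    for _ in range(20000):        # relax to steady state (warm-started)
        v = W @ r + h
        r = r + dt / tau * (-r + k * np.maximum(v, 0.0) ** n)
    ratio = "" if prev is None else f"{r[0] / prev:6.2f}"
    print(f"{c:6d} {r[0]:8.3f}   {ratio}")
    prev = r[0]
# Successive ratios start near 4 (supralinear, ~input^2) and fall toward 2
# (linear growth, the tightly balanced regime).
```

With n greater than 1, the effective gain grows with activity, so integration is supralinear at weak input; recurrent inhibition then stabilizes the network, and at strong input the rates approach the balanced solution, which grows only linearly.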
A Panoramic View on Vision
Statistics of natural scenes are not uniform - their structure varies dramatically from ground to sky. It remains unknown whether these non-uniformities are reflected in the large-scale organization of the early visual system and what benefits such adaptations would confer. By deploying an efficient coding argument, we predict that changes in the structure of receptive fields across visual space increase the efficiency of sensory coding. To test this experimentally, we developed a simple, novel imaging system that is indispensable for studies at this scale. In agreement with our predictions, we show that receptive fields of retinal ganglion cells change their shape along the dorsoventral axis, with a marked surround asymmetry at the visual horizon. Our work demonstrates that, in accordance with principles of efficient coding, the panoramic structure of natural scenes is exploited by the retina across space and cell types.
Design principles of adaptable neural codes
Behavior relies on the ability of sensory systems to infer changing properties of the environment from incoming sensory stimuli. However, the demands that detecting and adjusting to changes in the environment place on a sensory system often differ from the demands associated with performing a specific behavioral task. This necessitates neural coding strategies that can dynamically balance these conflicting needs. I will discuss our ongoing theoretical work to understand how this balance can best be achieved. We connect ideas from efficient coding and Bayesian inference to ask how sensory systems should dynamically allocate limited resources when the goal is to optimally infer changing latent states of the environment, rather than reconstruct incoming stimuli. We use these ideas to explore dynamic tradeoffs between the efficiency and speed of sensory adaptation schemes, and the downstream computations that these schemes might support. Finally, we derive families of codes that balance these competing objectives, and we demonstrate their close match to experimentally observed neural dynamics during sensory adaptation. These results provide a unifying perspective on adaptive neural dynamics across a range of sensory systems, environments, and sensory tasks.
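A toy illustration of why adaptation helps inference, with all details assumed (a random-walk latent state, an encoder with a hard dynamic-range limit, and simple re-centering as the adaptation scheme; none of this is the talk's actual model):

```python
import numpy as np

rng = np.random.default_rng(5)

T, drift, obs_noise, half_range = 1000, 0.05, 0.05, 0.25
x = np.cumsum(drift * rng.normal(size=T))   # slowly changing latent state

def tracking_mse(adapt, gain=0.3):
    mu, errs = 0.0, []
    for t in range(T):
        center = mu if adapt else 0.0        # adaptive vs. fixed operating point
        # limited dynamic range: encoder only reports within +/- half_range
        y = np.clip(x[t] - center, -half_range, half_range)
        y += obs_noise * rng.normal()
        mu += gain * (y + center - mu)       # simple recursive estimator
        errs.append((mu - x[t]) ** 2)
    return float(np.mean(errs))

print(f"fixed encoder MSE:    {tracking_mse(adapt=False):.3f}")
print(f"adaptive encoder MSE: {tracking_mse(adapt=True):.3f}")
```

The fixed encoder saturates once the state wanders outside its range and loses track of it; re-centering the limited dynamic range on the current estimate keeps the informative part of the signal inside the encoder's budget, which is the essence of the resource-allocation argument.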
Efficient coding and receptive field coordination in the retina
My laboratory studies how the retina processes visual scenes and transmits this information to the brain. We use multi-electrode arrays to record the activity of hundreds of retina neurons simultaneously in conjunction with transgenic mouse lines and chemogenetics to manipulate neural circuit function. We are interested in three major areas. First, we work to understand how neurons in the retina are functionally connected. Second we are studying how light-adaptation and circadian rhythms alter visual processing in the retina. Finally, we are working to understand the mechanisms of retinal degenerative conditions and we are investigating potential treatments in animal models.
Cones with character: An in vivo circuit implementation of efficient coding
In this talk I will summarize some of our recent unpublished work on spectral coding in the larval zebrafish retina. Combining two-photon imaging, hyperspectral stimulation, computational modeling and connectomics, we take a renewed look at the spectral tuning of cone photoreceptors in the live eye. We find that, already at the level of the cones, natural colour space is optimally rotated in a PCA-like fashion to disambiguate greyscale from "colour" information. We then follow this signal through the retinal layers and ultimately into the brain to explore the major spectral computations performed by the visual system at its consecutive stages. We find that, by and large, zebrafish colour vision can be broken into three major spectral zones: long-wavelength greyscale-like vision, short-wavelength prey-capture circuits, and spectrally diverse mid-wavelength circuits which possibly support the bulk of "true colour vision" in this tetrachromatic vertebrate.
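The PCA intuition can be sketched on synthetic data (the smooth random "spectra" below merely stand in for natural hyperspectral measurements; the sign-pattern readout is the point, not the numbers):

```python
import numpy as np

rng = np.random.default_rng(6)

# Smooth synthetic "spectra" as a stand-in for natural hyperspectral data
n_spectra, n_wl = 2000, 40
raw = rng.normal(size=(n_spectra, n_wl))
spectra = np.cumsum(np.cumsum(raw, axis=1), axis=1)   # integrate twice -> smooth
spectra = spectra - spectra.mean(axis=0)

# PCA: for smooth, positively correlated spectra, PC1 is typically a
# single-signed ("greyscale") axis, while PC2 changes sign once, i.e. a
# colour-opponent axis -- the PCA-like rotation referred to above.
_, S, Vt = np.linalg.svd(spectra, full_matrices=False)
print("variance explained:", np.round(S**2 / np.sum(S**2), 3)[:4])
print("PC1 zero crossings:", int(np.sum(np.diff(np.sign(Vt[0])) != 0)))
print("PC2 zero crossings:", int(np.sum(np.diff(np.sign(Vt[1])) != 0)))
```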
Spanning the arc between optimality theories and data
Ideas about optimization are at the core of how we approach biological complexity. Quantitative predictions about biological systems have been successfully derived from first principles in the context of efficient coding, metabolic and transport networks, evolution, reinforcement learning, and decision making, by postulating that a system has evolved to optimize some utility function under biophysical constraints. Yet as normative theories become increasingly high-dimensional and optimal solutions stop being unique, it becomes progressively harder to judge whether theoretical predictions are consistent with, or "close to", data. I will illustrate these issues using efficient coding applied to simple neuronal models as well as to a complex and realistic biochemical reaction network. As a solution, we developed a statistical framework that smoothly interpolates between ab initio optimality predictions and Bayesian parameter inference from data, while also permitting statistically rigorous tests of optimality hypotheses.
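One concrete way such an interpolation can work (an assumption for illustration, not necessarily the talk's exact construction) is to treat optimality as a prior, p(theta) proportional to exp(beta * U(theta)), so that beta = 0 recovers pure Bayesian inference and large beta recovers the ab initio optimum:

```python
import numpy as np

rng = np.random.default_rng(7)

# theta: a single model parameter, e.g. a tuning-curve width (toy setup)
theta = np.linspace(0.05, 5.0, 500)

def utility(th):
    """Toy optimality criterion: efficiency peaks at theta = 1."""
    return -np.log(th) ** 2

def log_likelihood(th, data):
    """Toy measurement model: data are noisy observations of theta."""
    return -0.5 * np.sum((data[:, None] - th[None, :]) ** 2 / 0.5 ** 2, axis=0)

data = rng.normal(loc=1.8, scale=0.5, size=5)       # data alone favor theta ~ 1.8
for beta in [0.0, 1.0, 10.0, 100.0]:
    log_post = log_likelihood(theta, data) + beta * utility(theta)
    print(f"beta = {beta:6.1f} -> MAP theta = {theta[np.argmax(log_post)]:.2f}")
```

Sweeping beta traces a path from the data-driven estimate to the normative prediction, and comparing the fit along that path gives a statistically explicit test of how "close to optimal" the system is.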
Coarse-to-fine processing drives the efficient coding of natural scenes in mouse visual cortex
COSYNE 2022
Efficient Coding of Natural Movies Predicts the Optimal Number of Receptive Field Mosaics
COSYNE 2022
On the Context-Dependent Efficient Coding of Olfactory Spaces
COSYNE 2023
Efficient coding explains neural response homeostasis and stimulus-specific adaptation
COSYNE 2023
Efficient coding of chromatic natural images reveals unique hues
COSYNE 2025