Neural Codes
Neural codes for natural behaviors in the hippocampus of flying bats
Convex neural codes in recurrent networks and sensory systems
Neural activity in many sensory systems is organized on low-dimensional manifolds by means of convex receptive fields. Neural codes in these areas are constrained by this organization, as not every neural code is compatible with convex receptive fields. The same codes are also constrained by the structure of the underlying neural network. In my talk I will attempt to answer the following natural questions: (i) How do recurrent circuits generate codes that are compatible with the convexity of receptive fields? (ii) How can we use the constraints imposed by convex receptive fields to understand the underlying stimulus space? To answer question (i), we describe the combinatorics of the steady states and fixed points of recurrent networks that satisfy Dale's law. It turns out that the combinatorics of the fixed points is completely determined by two distinct conditions: (a) the connectivity graph of the network and (b) a spectral condition on the synaptic matrix. We give a characterization of exactly which features of connectivity determine the combinatorics of the fixed points. We also find that a generic recurrent network that satisfies Dale's law outputs convex combinatorial codes. To address question (ii), I will describe methods based on ideas from topology and geometry that take advantage of the convex receptive field properties to infer the dimension of (non-linear) neural representations. I will illustrate the first of these methods by inferring basic features of the neural representations in the mouse olfactory bulb.
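A minimal sketch of the kind of object discussed here, assuming a generic threshold-linear rate model: it enumerates the combinatorial code of a small recurrent network obeying Dale's law by collecting the supports (sets of active neurons) of the steady states reached from many initial conditions. The dynamics, weight statistics, and thresholds are illustrative choices, not the networks analyzed in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

# Synaptic matrix obeying Dale's law: each presynaptic neuron is either
# excitatory (non-negative outgoing weights) or inhibitory (non-positive).
signs = rng.choice([1.0, -1.0], size=n)
W = np.abs(rng.normal(0.0, 0.4, size=(n, n))) * signs[None, :]
np.fill_diagonal(W, 0.0)
b = rng.uniform(0.1, 1.0, size=n)          # constant external drive

def steady_state(x0, dt=0.01, steps=5000):
    """Euler-integrate dx/dt = -x + [W x + b]_+ from x0."""
    x = x0.copy()
    for _ in range(steps):
        x += dt * (-x + np.maximum(W @ x + b, 0.0))
    return x

# Sample initial conditions and record which neurons are active at each
# steady state; the set of these supports is the network's combinatorial code.
code = set()
for _ in range(100):
    x = steady_state(rng.uniform(0.0, 2.0, size=n))
    if np.all(np.isfinite(x)) and x.max() < 1e3:   # keep only settled runs
        code.add(frozenset(np.flatnonzero(x > 1e-3)))

for support in sorted(code, key=len):
    print(sorted(support))
```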
Neural Codes for Natural Behaviors in Flying Bats
This talk will focus on the importance of using natural behaviors in neuroscience research – the “Natural Neuroscience” approach. I will illustrate this point by describing studies of neural codes for spatial behaviors and social behaviors, in flying bats – using wireless neurophysiology methods that we developed – and will highlight new neuronal representations that we discovered in animals navigating through 3D spaces, or in very large-scale environments, or engaged in social interactions. In particular, I will discuss: (1) A multi-scale neural code for very large environments, which we discovered in bats flying in a 200-meter-long tunnel. This new type of neural code is fundamentally different from spatial codes reported in small environments – and we show theoretically that it is superior for representing very large spaces. (2) Rapid modulation of position × distance coding in the hippocampus during collision-avoidance behavior between two flying bats. This result provides a dramatic illustration of the extreme dynamism of the neural code. (3) Local-but-not-global order in 3D grid cells – a surprising experimental finding that can be explained by a simple physics-inspired model, which successfully describes both 3D and 2D grids. These results strongly argue against many of the classical, geometrically-based models of grid cells. (4) I will also briefly describe new results on the social representation of other individuals in the hippocampus, in a highly social multi-animal setting. The lecture will propose that neuroscience experiments – in bats, rodents, monkeys or humans – should be conducted under ever more naturalistic conditions.
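A minimal sketch of the intuition behind point (1), assuming Gaussian place-field tuning, Poisson spiking, and a maximum-likelihood decoder over a 200 m track: a fixed-size population whose field widths span many spatial scales can be decoded more accurately than one with a single small field scale. All tuning parameters are illustrative assumptions, and the quantitative outcome depends on them; this is not the theory presented in the talk.

```python
import numpy as np

rng = np.random.default_rng(1)
track_len = 200.0                        # meters, as in the 200 m tunnel
xs = np.linspace(0.0, track_len, 2001)   # candidate positions for decoding
n_cells, peak_rate, dt = 40, 20.0, 0.2   # number of cells, Hz, window (s)

def tuning_curves(field_widths):
    """Gaussian place fields with random centers and the given widths (m)."""
    centers = rng.uniform(0.0, track_len, size=n_cells)
    return peak_rate * np.exp(-0.5 * ((xs[None, :] - centers[:, None])
                                      / field_widths[:, None]) ** 2)

def decode_rmse(rates, n_trials=1000):
    """RMSE of maximum-likelihood position decoding from Poisson spike counts."""
    expected = rates * dt
    log_expected = np.log(expected + 1e-9)
    total = expected.sum(axis=0)
    errs = []
    for _ in range(n_trials):
        true_idx = rng.integers(len(xs))
        counts = rng.poisson(expected[:, true_idx])
        loglik = counts @ log_expected - total      # Poisson log-likelihood
        errs.append(xs[np.argmax(loglik)] - xs[true_idx])
    return float(np.sqrt(np.mean(np.square(errs))))

single_scale = tuning_curves(np.full(n_cells, 2.0))                # 2 m fields only
multi_scale = tuning_curves(rng.uniform(1.0, 30.0, size=n_cells))  # 1-30 m fields

print(f"single-scale code, decoding RMSE: {decode_rmse(single_scale):.1f} m")
print(f"multi-scale code,  decoding RMSE: {decode_rmse(multi_scale):.1f} m")
```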
Design principles of adaptable neural codes
Behavior relies on the ability of sensory systems to infer changing properties of the environment from incoming sensory stimuli. However, the demands that detecting and adjusting to changes in the environment place on a sensory system often differ from the demands associated with performing a specific behavioral task. This necessitates neural coding strategies that can dynamically balance these conflicting needs. I will discuss our ongoing theoretical work to understand how this balance can best be achieved. We connect ideas from efficient coding and Bayesian inference to ask how sensory systems should dynamically allocate limited resources when the goal is to optimally infer changing latent states of the environment, rather than reconstruct incoming stimuli. We use these ideas to explore dynamic tradeoffs between the efficiency and speed of sensory adaptation schemes, and the downstream computations that these schemes might support. Finally, we derive families of codes that balance these competing objectives, and we demonstrate their close match to experimentally observed neural dynamics during sensory adaptation. These results provide a unifying perspective on adaptive neural dynamics across a range of sensory systems, environments, and sensory tasks.
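A minimal sketch of the kind of tradeoff at stake, assuming a scalar latent state that occasionally jumps and an encoder with a limited, saturating response range that re-centers itself on its running estimate of the state. Varying the adaptation rate lets one explore how quickly the code recovers after environmental changes versus how much estimation noise it admits. The encoder, adaptation rule, and parameters are illustrative assumptions, not the normative codes derived in the talk.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 5000
latent = np.zeros(T)
state = 0.0
for t in range(T):                        # latent state with occasional jumps
    if rng.random() < 0.002:
        state = rng.normal(0.0, 5.0)
    latent[t] = state

def tracking_rmse(adapt_rate, gain=2.0, noise=0.1):
    """Encode the state through a saturating response centered on a running
    estimate, decode it back, and report the tracking error."""
    center, sq_errs = 0.0, []
    for x in latent:
        # limited-range encoder: a steep sigmoid around the current center
        r = np.tanh(gain * (x - center)) + noise * rng.normal()
        # decode assuming the same encoder, restricted to its invertible range
        x_hat = center + np.arctanh(np.clip(r, -0.999, 0.999)) / gain
        center += adapt_rate * (x_hat - center)   # re-center the coding range
        sq_errs.append((x_hat - x) ** 2)
    return float(np.sqrt(np.mean(sq_errs)))

for rate in (0.001, 0.01, 0.1, 0.5):
    print(f"adaptation rate {rate:>5}: tracking RMSE = {tracking_rmse(rate):.3f}")
```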
When and (maybe) why do high-dimensional neural networks produce low-dimensional dynamics?
There is an avalanche of new data on activity in neural networks and the biological brain, revealing the collective dynamics of vast numbers of neurons. In principle, these collective dynamics can be of almost arbitrarily high dimension, with many independent degrees of freedom – and this may reflect powerful capacities for general-purpose computation and information processing. In practice, neural datasets reveal a range of outcomes, including collective dynamics of much lower dimension – and this may reflect other desiderata for neural codes. For which networks does each case occur? We begin by exploring “bottom-up” mechanistic ideas that link tractable statistical properties of network connectivity with the dimension of the activity they produce. We then cover “top-down” ideas that describe how features of connectivity and dynamics that impact dimension arise as networks learn to perform fundamental computational tasks.
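A minimal sketch of one standard way to connect a statistical property of connectivity to the dimension of activity, assuming noise-driven linear rate dynamics and measuring dimension with the participation ratio of the covariance spectrum. Stronger random coupling concentrates variance in a few slow modes and lowers the measured dimension. The dynamics and parameters are illustrative, not the specific analyses covered in the talk.

```python
import numpy as np

rng = np.random.default_rng(3)
n, T, dt = 200, 10000, 0.05

def participation_ratio(X):
    """PR = (sum of covariance eigenvalues)^2 / (sum of squared eigenvalues)."""
    lam = np.linalg.eigvalsh(np.cov(X))
    return lam.sum() ** 2 / np.sum(lam ** 2)

def simulate(W, noise=1.0):
    """Noise-driven linear rate dynamics dx = (-x + W x) dt + noise dW."""
    X = np.empty((n, T))
    x = np.zeros(n)
    for t in range(T):
        x += dt * (-x + W @ x) + np.sqrt(dt) * noise * rng.normal(size=n)
        X[:, t] = x
    return X

# Random connectivity with coupling strength g, scaled below the edge of
# stability; larger g concentrates variance in slower modes.
for g in (0.2, 0.6, 0.9):
    W = g * rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, n))
    pr = participation_ratio(simulate(W))
    print(f"coupling g = {g}: participation-ratio dimension ≈ {pr:.1f}")
```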
Encoding and perceiving the texture of sounds: auditory midbrain codes for recognizing and categorizing auditory texture and for listening in noise
Natural soundscapes, such as those of a forest, a busy restaurant, or a busy intersection, are generally composed of a cacophony of sounds that the brain needs to interpret either independently or collectively. In certain instances, sounds – such as those from moving cars, sirens, and people talking – are perceived in unison and recognized collectively as a single sound (e.g., city noise). In other instances, such as in the cocktail party problem, multiple sounds compete for attention, so that the surrounding background noise (e.g., speech babble) interferes with the perception of a single sound source (e.g., a single talker). I will describe results from my lab on the perception and neural representation of auditory textures. Textures, such as a babbling brook, restaurant noise, or speech babble, are stationary sounds consisting of multiple independent sound sources that can be quantitatively defined by the summary statistics of an auditory model (McDermott & Simoncelli 2011). However, how and where in the auditory system summary statistics are represented, and which neural codes potentially contribute to their perception, remain largely unknown. Using high-density multi-channel recordings from the auditory midbrain of unanesthetized rabbits and complementary perceptual studies in human listeners, I will first describe neural and perceptual strategies for encoding and perceiving auditory textures. I will demonstrate how distinct sound statistics, including the sound spectrum and high-order statistics related to the temporal and spectral correlation structure of sounds, contribute to texture perception and are reflected in neural activity. Using decoding methods, I will then demonstrate how various low- and high-order neural response statistics can differentially contribute to a variety of auditory tasks, including texture recognition, discrimination, and categorization. Finally, I will show examples from our recent studies of how high-order sound statistics and the accompanying neural activity underlie difficulties in recognizing speech in background noise.
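A minimal sketch of texture summary statistics in the spirit of McDermott & Simoncelli (2011): decompose a sound into subbands, extract amplitude envelopes, and compute per-band envelope moments and cross-band envelope correlations. The synthetic "texture", the ideal filterbank, and the chosen statistics are simplified illustrative assumptions, not the full auditory model or the stimuli used in the experiments.

```python
import numpy as np

rng = np.random.default_rng(4)
fs, dur = 16000, 2.0
t = np.arange(int(fs * dur)) / fs
# stand-in "texture": sparse random impulses summed over many sources (crackle-like)
sound = sum(rng.normal(size=len(t)) * (rng.random(len(t)) < 0.01)
            for _ in range(20))

def bandpass(x, lo, hi):
    """Ideal band-pass filter implemented in the frequency domain."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1.0 / fs)
    X[(f < lo) | (f > hi)] = 0.0
    return np.fft.irfft(X, n=len(x))

def envelope(x):
    """Amplitude envelope via the analytic signal (FFT-based Hilbert transform)."""
    X = np.fft.fft(x)
    h = np.zeros(len(x))
    h[0], h[1:len(x) // 2], h[len(x) // 2] = 1.0, 2.0, 1.0
    return np.abs(np.fft.ifft(X * h))

edges = np.geomspace(100.0, 8000.0, 9)            # 8 log-spaced subbands
envs = np.array([envelope(bandpass(sound, lo, hi))
                 for lo, hi in zip(edges[:-1], edges[1:])])

mean = envs.mean(axis=1)
cv = envs.std(axis=1) / mean                      # envelope variability per band
corr = np.corrcoef(envs)                          # cross-band envelope correlations

print("per-band envelope mean:", np.round(mean, 3))
print("per-band envelope CV:  ", np.round(cv, 3))
print("mean cross-band envelope correlation:",
      round(corr[np.triu_indices(len(corr), k=1)].mean(), 3))
```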
Behavioral and neurobiological mechanisms of social cooperation
Human society operates on large-scale cooperation and shared norms of fairness. However, individual differences in cooperation and incentives to free-ride on others’ cooperation make large-scale cooperation fragile and can lead to reduced social welfare. Deciphering the neural codes representing potential rewards/costs for self and others is crucial for understanding social decision-making and cooperation. I will first talk about how we integrate computational modeling with functional magnetic resonance imaging to investigate the neural representation of social value and its modulation by oxytocin, a nine-amino-acid neuropeptide, in participants evaluating monetary allocations to self and other (self-other allocations). I will then introduce our recent studies examining the neurobiological mechanisms underlying intergroup decision-making using hyper-scanning, and share how we alter intergroup decisions using psychological manipulations and pharmacological challenge. Finally, I will share our ongoing project, which reveals how individual cooperation spreads through human social networks. Our results help to better understand the neurocomputational mechanisms underlying interpersonal and intergroup decision-making.
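A minimal sketch of one common way such self-other allocation data are modeled in this literature: a Fehr-Schmidt-style inequity-aversion utility combined with a softmax choice rule. This is an illustrative stand-in with made-up parameters; the actual computational model fitted in the study may differ.

```python
import numpy as np

def social_value(self_payoff, other_payoff, alpha=0.8, beta=0.3):
    """Utility = own payoff minus disutility from disadvantageous (alpha)
    and advantageous (beta) inequity."""
    return (self_payoff
            - alpha * max(other_payoff - self_payoff, 0.0)   # envy term
            - beta * max(self_payoff - other_payoff, 0.0))   # guilt term

def choice_prob(option_a, option_b, temperature=1.0):
    """Softmax probability of choosing allocation A over B."""
    va, vb = social_value(*option_a), social_value(*option_b)
    return 1.0 / (1.0 + np.exp(-(va - vb) / temperature))

# e.g. keeping more for oneself (self=10, other=2) vs. an equal split (8, 8)
print(round(choice_prob((10.0, 2.0), (8.0, 8.0)), 3))
```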
Neural codes in early sensory areas maximize fitness
It has generally been presumed that sensory information encoded by a nervous system should be as accurate as its biological limitations allow. However, perhaps counterintuitively, accurate representations of sensory signals do not necessarily maximize the organism’s chances of survival. We show that neural codes that maximize reward expectation – and not accurate sensory representations – account for retinal responses in insects and for retinotopically specific adaptive codes in humans. Thus, our results provide evidence that fitness-maximizing rules imposed by the environment are applied at the earliest stages of sensory processing.
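A minimal sketch of the core distinction, assuming a one-bit encoder (a single threshold on a scalar stimulus drawn from a skewed prior): the threshold that minimizes reconstruction error differs from the threshold that maximizes expected reward for a downstream go/no-go policy. The prior, reward structure, and encoder are illustrative assumptions, not the retinal or human data described in the talk.

```python
import numpy as np

rng = np.random.default_rng(5)
stim = rng.exponential(scale=1.0, size=100_000)   # skewed stimulus prior
s_crit = 2.5                                      # stimuli above this warrant action
reward_hit, cost_false_alarm = 1.0, 0.2

def reconstruction_mse(theta):
    """Decode each one-bit response as the conditional mean of its bin."""
    high = stim > theta
    est = np.where(high, stim[high].mean(), stim[~high].mean())
    return np.mean((est - stim) ** 2)

def expected_reward(theta):
    """Downstream policy: act whenever the encoder signals 'high'."""
    act = stim > theta
    relevant = stim > s_crit
    return np.mean(reward_hit * (act & relevant)
                   - cost_false_alarm * (act & ~relevant))

thetas = np.linspace(0.05, 6.0, 300)
mse_opt = thetas[np.argmin([reconstruction_mse(th) for th in thetas])]
rew_opt = thetas[np.argmax([expected_reward(th) for th in thetas])]
print(f"threshold minimizing reconstruction error: {mse_opt:.2f}")
print(f"threshold maximizing expected reward:      {rew_opt:.2f}")
```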
Locally-ordered representation of 3D space in the entorhinal cortex
When animals navigate on a two-dimensional (2D) surface, many neurons in the medial entorhinal cortex (MEC) are activated as the animal passes through multiple locations (‘firing fields’) arranged in a hexagonal lattice that tiles the locomotion surface; these neurons are known as grid cells. However, although our world is three-dimensional (3D), the 3D volumetric representation in MEC remains unknown. Here we recorded MEC cells in freely flying bats and found several classes of spatial neurons, including 3D border cells, 3D head-direction cells, and neurons with multiple 3D firing-fields. Many of these multifield neurons were 3D grid cells, whose neighboring fields were separated by a characteristic distance – forming a local order – but these cells lacked any global lattice arrangement of their fields. Thus, while 2D grid cells form a global lattice – characterized by both local and global order – 3D grid cells exhibited only local order, thus creating a locally ordered metric for space. We modeled grid cells as emerging from pairwise interactions between fields, which yielded a hexagonal lattice in 2D and local order in 3D – thus describing both 2D and 3D grid cells using one unifying model. Together, these data and model illuminate the fundamental differences and similarities between neural codes for 3D and 2D space in the mammalian brain.
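A minimal sketch of the pairwise-interaction picture, assuming fields behave like particles relaxed under a potential with a preferred pairwise distance: in both 2D and 3D the relaxed fields show a characteristic nearest-neighbor spacing (low variability) compared with random field placement. The potential, parameters, and summary measure are illustrative assumptions and only probe local order, not the specific physics-inspired model presented in the talk.

```python
import numpy as np

rng = np.random.default_rng(6)

def nn_stats(x):
    """Mean and coefficient of variation of nearest-neighbor distances."""
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    nearest = d.min(axis=1)
    return nearest.mean(), nearest.std() / nearest.mean()

def relax_fields(dim, n_fields=60, box=10.0, d0=1.5, steps=3000, lr=0.01):
    """Gradient descent on a pairwise potential with its minimum at distance d0."""
    x = rng.uniform(0.0, box, size=(n_fields, dim))
    for _ in range(steps):
        diff = x[:, None, :] - x[None, :, :]
        dist = np.maximum(np.linalg.norm(diff, axis=-1), 0.5 * d0)  # soften core
        # dU/dr for U(r) = (d0/r)^4 - 2 (d0/r)^2 (repulsive core, attractive tail)
        dU = -4.0 * d0**4 / dist**5 + 4.0 * d0**2 / dist**3
        force = -(dU / dist)[:, :, None] * diff       # force on i from each j
        x = np.clip(x + lr * force.sum(axis=1), 0.0, box)
    return x

for dim in (2, 3):
    relaxed = relax_fields(dim)
    random_pts = rng.uniform(0.0, 10.0, size=(60, dim))
    for name, pts in (("pairwise-interaction model", relaxed),
                      ("random field placement    ", random_pts)):
        mean_nn, cv = nn_stats(pts)
        print(f"{dim}D {name}: mean NN distance {mean_nn:.2f}, CV {cv:.2f}")
```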
Inhibitory neural circuit mechanisms underlying neural coding of sensory information in the neocortex
Neural codes, such as temporal codes (precisely timed spikes) and rate codes (instantaneous spike firing rates), are believed to be used in encoding sensory information into the spike trains of cortical neurons. Temporal and rate codes co-exist in the spike train, and such multiplexed, neural code-carrying spike trains have been shown to be spatially synchronized across multiple neurons in different cortical layers during sensory information processing. Inhibition is suggested to promote such synchronization, but it is unclear whether distinct subtypes of interneurons make different contributions to the synchronization of multiplexed neural codes. To test this, in vivo single-unit recordings from the barrel cortex were combined with optogenetic manipulations to determine the contributions of parvalbumin (PV)- and somatostatin (SST)-positive interneurons to the synchronization of precisely timed spike sequences. We found that PV interneurons preferentially promote the synchronization of spike times when instantaneous firing rates are low (<12 Hz), whereas SST interneurons preferentially promote the synchronization of spike times when instantaneous firing rates are high (>12 Hz). Furthermore, using a computational model, we demonstrate that these effects can be explained by PV and SST interneurons contributing preferentially to feedforward and feedback inhibition, respectively. Overall, these results show that PV and SST interneurons have distinct frequency-selective (rate code) roles in dynamically gating the synchronization of spike times (temporal code) by preferentially recruiting feedforward and feedback inhibitory circuit motifs. The inhibitory neural circuit mechanisms we uncovered here may have critical roles in regulating neural code-based somatosensory information processing in the neocortex.
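A minimal toy sketch of the two inhibitory motifs contrasted here: feedforward inhibition (interneuron driven by the afferent input, PV-like) versus feedback inhibition (interneuron driven by the excitatory population, SST-like). Two excitatory rate units share a modulated input, and the script compares how each motif shapes their correlation, a coarse proxy for synchronization. All dynamics and parameters are illustrative assumptions and are not meant to reproduce the rate-dependent results of the study.

```python
import numpy as np

rng = np.random.default_rng(7)
T, dt, tau = 20000, 0.001, 0.01          # steps, step size (s), time constant (s)

def ee_correlation(motif, input_rate, w_inh=1.5, noise=0.2):
    """Correlation between two excitatory units sharing an input, with either a
    feedforward (input-driven) or feedback (E-driven) inhibitory interneuron."""
    e1 = e2 = inh = 0.0
    r1, r2 = np.zeros(T), np.zeros(T)
    for t in range(T):
        shared = input_rate * (1.0 + 0.5 * np.sin(2.0 * np.pi * 5.0 * t * dt))
        drive_i = shared if motif == "feedforward" else (e1 + e2)
        inh += dt / tau * (-inh + max(drive_i, 0.0))
        e1 += dt / tau * (-e1 + max(shared - w_inh * inh
                                    + noise * rng.normal() / np.sqrt(dt), 0.0))
        e2 += dt / tau * (-e2 + max(shared - w_inh * inh
                                    + noise * rng.normal() / np.sqrt(dt), 0.0))
        r1[t], r2[t] = e1, e2
    return np.corrcoef(r1, r2)[0, 1]

for rate in (5.0, 20.0):                 # a "low" and a "high" input level (a.u.)
    for motif in ("feedforward", "feedback"):
        print(f"input {rate:>4}, {motif:>11} inhibition: "
              f"E-E correlation = {ee_correlation(motif, rate):.2f}")
```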
Rational thoughts in neural codes
First, we describe a new method for inferring the mental model of an animal performing a natural task. We use probabilistic methods to compute the most likely mental model based on an animal’s sensory observations and actions. This also reveals dynamic beliefs that would be optimal according to the animal’s internal model, and thus provides a practical notion of “rational thoughts.” Second, we construct a neural coding framework by which these rational thoughts, their computational dynamics, and actions can be identified within the manifold of neural activity. We illustrate the value of this approach by training an artificial neural network to perform a generalization of a widely used foraging task. We analyze the network’s behavior to find rational thoughts, and successfully recover the neural properties that implemented those thoughts, providing a way of interpreting the complex neural dynamics of the artificial brain. Joint work with Zhengwei Wu, Minhae Kwon, Saurabh Daptardar, and Paul Schrater.
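A minimal sketch of the first idea in miniature, assuming a much simpler foraging task than the one used in the work: an agent checks a reward port based on its *subjective* hazard rate via a softmax policy on its belief that reward is available. From the agent's actions alone, the subjective hazard is recovered by maximum likelihood, and the belief trajectory under the fitted internal model, a stand-in for "rational thoughts", is reconstructed. The task, policy, and fitting procedure are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)
T, h_subjective, threshold, temp = 5000, 0.08, 0.6, 0.05

def belief(wait, h):
    """Belief that reward is available after `wait` steps under hazard h."""
    return 1.0 - (1.0 - h) ** wait

def act_prob(wait, h):
    """Softmax policy: probability of checking the port."""
    return 1.0 / (1.0 + np.exp(-(belief(wait, h) - threshold) / temp))

# --- generate behavior from the agent's internal model ---
waits, actions = [], []
wait = 0
for _ in range(T):
    wait += 1
    a = rng.random() < act_prob(wait, h_subjective)
    waits.append(wait)
    actions.append(a)
    if a:
        wait = 0
waits, actions = np.array(waits), np.array(actions)

# --- infer the subjective hazard from actions alone ---
def log_likelihood(h):
    p = act_prob(waits, h)
    return np.sum(np.where(actions, np.log(p + 1e-12), np.log(1 - p + 1e-12)))

grid = np.linspace(0.01, 0.3, 300)
h_fit = grid[np.argmax([log_likelihood(h) for h in grid])]
print(f"true subjective hazard: {h_subjective}, recovered: {h_fit:.3f}")

# beliefs under the fitted internal model -- the inferred "rational thoughts"
inferred_beliefs = belief(waits, h_fit)
print("first few inferred beliefs:", np.round(inferred_beliefs[:5], 2))
```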
Inferring neural codes from natural behavior in fruit fly social communication
COSYNE 2023
A Large Dataset of Macaque V1 Responses to Natural Images Revealed Complexity in V1 Neural Codes
COSYNE 2023
Learning accurate models of very large neural codes with shallow adaptive random projection networks
COSYNE 2025