Sensory Systems
Convex neural codes in recurrent networks and sensory systems
Neural activity in many sensory systems is organized on low-dimensional manifolds by means of convex receptive fields. Neural codes in these areas are constrained by this organization, as not every neural code is compatible with convex receptive fields. The same codes are also constrained by the structure of the underlying neural network. In my talk I will attempt to answer the following natural questions: (i) How do recurrent circuits generate codes that are compatible with the convexity of receptive fields? (ii) How can we use the constraints imposed by convex receptive fields to understand the underlying stimulus space? To answer question (i), we describe the combinatorics of the steady states and fixed points of recurrent networks that satisfy Dale's law. It turns out that the combinatorics of the fixed points are completely determined by two ingredients: (a) the connectivity graph of the network and (b) a spectral condition on the synaptic matrix. We give a characterization of exactly which features of connectivity determine the combinatorics of the fixed points. We also find that a generic recurrent network satisfying Dale's law outputs convex combinatorial codes. To address question (ii), I will describe methods based on ideas from topology and geometry that take advantage of convex receptive field properties to infer the dimension of (non-linear) neural representations. I will illustrate the first of these methods by inferring basic features of the neural representations in the mouse olfactory bulb.
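The abstract's notion of reading a combinatorial code off the steady states of a recurrent network can be made concrete in a few lines. The sketch below is purely illustrative and not the speaker's construction: a threshold-linear network whose synaptic matrix respects Dale's law (fixed column signs), with a weight scale chosen small enough to stand in for the spectral condition; each steady state's support is recorded as a codeword. All sizes and parameters are hypothetical.

```python
# Illustrative sketch only: combinatorial codewords from steady states of a
# threshold-linear recurrent network obeying Dale's law. Parameters invented.
import numpy as np

rng = np.random.default_rng(0)
n = 8                                          # number of neurons (hypothetical)
sign = np.where(np.arange(n) < 6, 1.0, -1.0)   # 6 excitatory, 2 inhibitory cells
W = rng.uniform(0.0, 0.2, size=(n, n)) * sign  # Dale's law: each column has one sign
np.fill_diagonal(W, 0.0)                       # weights kept small: a stand-in
                                               # for the spectral condition

def steady_state_support(b, steps=5000, dt=0.01):
    """Integrate dx/dt = -x + [Wx + b]_+ and return the set of active neurons."""
    x = np.zeros(n)
    for _ in range(steps):
        x += dt * (-x + np.maximum(W @ x + b, 0.0))
    return frozenset(np.flatnonzero(x > 1e-6))

# Sweep random external drives and collect the resulting combinatorial code.
drives = [rng.uniform(0, 1, n) * (rng.random(n) < 0.7) for _ in range(50)]
code = {steady_state_support(b) for b in drives}
print(f"{len(code)} distinct codewords (steady-state supports)")
```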
Probabilistic simplicity in the study of sensory systems
Multisensory influences on vision: Sounds enhance and alter visual-perceptual processing
Visual perception is traditionally studied in isolation from other sensory systems, and while this approach has been exceptionally successful, in the real world, visual objects are often accompanied by sounds, smells, tactile information, or taste. How is visual processing influenced by these other sensory inputs? In this talk, I will review studies from our lab showing that a sound can influence the perception of a visual object in multiple ways. In the first part, I will focus on spatial interactions between sound and sight, demonstrating that co-localized sounds enhance visual perception. Then, I will show that these cross-modal interactions also occur at a higher contextual and semantic level, where naturalistic sounds facilitate the processing of real-world objects that match these sounds. Throughout my talk I will explore to what extent sounds not only improve visual processing but also alter perceptual representations of the objects we see. Most broadly, I will argue for the importance of considering multisensory influences on visual perception for a more complete understanding of our visual experience.
Intrinsic Geometry of a Combinatorial Sensory Neural Code for Birdsong
Understanding the nature of neural representation is a central challenge of neuroscience. One common approach to this challenge is to compute receptive fields by correlating neural activity with external variables drawn from sensory signals. But these receptive fields are only meaningful to the experimenter, not the organism, because only the experimenter has access to both the neural activity and knowledge of the external variables. To understand neural representation more directly, recent methodological advances have sought to capture the intrinsic geometry of sensory-driven neural responses without external reference. To date, this approach has largely been restricted to low-dimensional stimuli, as in spatial navigation. In this talk, I will discuss recent work from my lab examining the intrinsic geometry of sensory representations in a model vocal communication system, songbirds. From the assumption that sensory systems capture invariant relationships among stimulus features, we conceptualize the space of natural birdsongs as lying on the surface of an n-dimensional hypersphere. We compute composite receptive field models for large populations of simultaneously recorded single neurons in the auditory forebrain and show that solutions to these models define convex regions of response probability in the spherical stimulus space. We then define a combinatorial code over the set of receptive fields, realized in the moment-to-moment spiking and non-spiking patterns across the population, and show that this code can be used to reconstruct high-fidelity spectrographic representations of natural songs from evoked neural responses. Notably, we find that topological relationships among combinatorial codewords directly mirror acoustic relationships among songs in the spherical stimulus space. That is, the time-varying pattern of co-activity across the neural population expresses an intrinsic representational geometry that mirrors the natural, extrinsic stimulus space. Combinatorial patterns across this intrinsic space directly represent complex vocal communication signals, do not require computation of receptive fields, and take a form, spike-time coincidences, amenable to biophysical mechanisms of neural information propagation.
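To make the hypersphere framing concrete, here is a toy sketch with invented numbers throughout (not the lab's fitted models): stimuli are unit vectors on an n-dimensional sphere, each neuron's receptive field is a convex spherical cap, and the population codeword for a stimulus is the binary pattern of caps containing it. Codeword overlap then tracks angular distance on the sphere, mirroring the intrinsic/extrinsic correspondence described above.

```python
# Toy sketch: convex spherical-cap receptive fields and combinatorial codewords
# on a hypersphere. All dimensions and thresholds are hypothetical.
import numpy as np

rng = np.random.default_rng(1)

def unit(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

dim, n_neurons = 16, 40                            # hypothetical sizes
songs = unit(rng.normal(size=(200, dim)))          # stimuli on the hypersphere
centers = unit(rng.normal(size=(n_neurons, dim)))  # cap centers (RF peaks)
threshold = 0.3                                    # cos(angle) cutoff per cap

# Codeword: which caps (receptive fields) each stimulus falls inside.
codewords = (songs @ centers.T > threshold).astype(int)

# Nearby stimuli on the sphere share more active neurons (codeword overlap).
i, j = 0, 1
angle = np.degrees(np.arccos(np.clip(songs[i] @ songs[j], -1, 1)))
overlap = np.sum(codewords[i] & codewords[j])
print(f"angular distance {angle:.1f} deg, codeword overlap {overlap} neurons")
```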
On the contributions of retinal direction selectivity to cortical motion processing in mice
Cells preferentially responding to visual motion in a particular direction are said to be direction-selective, and these were first identified in the primary visual cortex. Since then, direction-selective responses have been observed in the retina of several species, including mice, indicating motion analysis begins at the earliest stage of the visual hierarchy. Yet little is known about how retinal direction selectivity contributes to motion processing in the visual cortex. In this talk, I will present our experimental efforts to narrow this gap in our knowledge. To this end, we used genetic approaches to disrupt direction selectivity in the retina and mapped neuronal responses to visual motion in the visual cortex of mice using intrinsic signal optical imaging and two-photon calcium imaging. In essence, our work demonstrates that direction selectivity computed at the level of the retina causally serves to establish specialized motion responses in distinct areas of the mouse visual cortex. This finding thus compels us to revisit our notions of how the brain builds complex visual representations and underscores the importance of the processing performed in the periphery of sensory systems.
Context-dependent motion processing in the retina
A critical function of sensory systems is to reliably extract ethologically relevant features from the complex natural environment. A classic model to study feature detection is the direction-selective circuit of the mammalian retina. In this talk, I will discuss our recent work on how visual contexts dynamically influence the neural processing of motion signals in the direction-selective circuit in the mouse retina.
Why do some animals have more than two eyes?
The evolution of vision revolutionised animal biology, and eyes have evolved in a stunning array of diverse forms over the past half a billion years. Among these are curious duplicated visual systems, where eyes can be spread across the body and specialised for different tasks. Although it sounds radical, duplicated vision is found in most major groups across the animal kingdom, but remains poorly understood. We will explore how and why animals collect information about their environment in this unusual way, looking at examples from tropical forests to the sea floor, and from ancient arthropods to living jellyfish. Have we been short-changed with just two eyes?

Dr Lauren Sumner-Rooney is a Research Fellow at the OUMNH studying the function and evolution of animal visual systems. Lauren completed her undergraduate degree at Oxford in 2012, and her PhD at Queen’s University Belfast in 2015. She worked as a research technician and science communicator at the Royal Veterinary College (2015-2016) and held a postdoctoral research fellowship at the Museum für Naturkunde, Berlin (2016-2017) before arriving at the Museum in 2017.
Transcriptional adaptation couples past experience and future sensory responses
Animals traversing different environments encounter both stable background stimuli and novel cues, which are generally thought to be detected by primary sensory neurons and then distinguished by downstream brain circuits. Sensory adaptation, a fundamental feature of sensory systems, is a neural mechanism that filters out background by minimizing responses to stable sensory stimuli. Adaptation over relatively fast timescales (milliseconds to minutes) has been reported in many sensory systems. However, adaptation to persistent environmental stimuli over longer timescales (hours to days) has remained largely unexplored, even though these timescales are ethologically important: animals typically stay in one environment for hours. I showed that each of the ~1,000 olfactory sensory neuron (OSN) subtypes in the mouse harbors a distinct transcriptome whose content is precisely determined by interactions between its odorant receptor and the environment. This transcriptional variation is systematically organized to support sensory adaptation: expression levels of many genes relevant to transforming odors into spikes continuously vary across OSN subtypes, dynamically adjust to new environments over hours, and accurately predict acute OSN-specific odor responses. The sensory periphery therefore separates salient signals from predictable background via a transcriptional mechanism whose moment-to-moment state reflects the past and constrains the future; these findings suggest a general model in which structured transcriptional variation within a cell type reflects individual experience.
Retinal responses to natural inputs
The research in my lab focuses on sensory signal processing, particularly in cases where sensory systems perform at or near the limits imposed by physics. Photon counting in the visual system is a beautiful example. At its peak sensitivity, the performance of the visual system is limited largely by the division of light into discrete photons. This observation has several implications for phototransduction and signal processing in the retina: rod photoreceptors must transduce single photon absorptions with high fidelity, single photon signals in photoreceptors, which are only 0.03 – 0.1 mV, must be reliably transmitted to second-order cells in the retina, and absorption of a single photon by a single rod must produce a noticeable change in the pattern of action potentials sent from the eye to the brain. My approach is to combine quantitative physiological experiments and theory to understand photon counting in terms of basic biophysical mechanisms. Fortunately there is more to visual perception than counting photons. The visual system is very adept at operating over a wide range of light intensities (about 12 orders of magnitude). Over most of this range, vision is mediated by cone photoreceptors. Thus adaptation is paramount to cone vision. Again one would like to understand quantitatively how the biophysical mechanisms involved in phototransduction, synaptic transmission, and neural coding contribute to adaptation.
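A back-of-the-envelope simulation illustrates the photon-counting regime described here; the amplitudes and rates below are hypothetical placeholders, chosen only to show the structure of the problem: discrete Poisson photon absorptions must be discriminated from continuous noise, for example by a thresholding nonlinearity such as the one at the rod-to-bipolar synapse.

```python
# Back-of-the-envelope sketch of photon counting; all numbers are illustrative.
import numpy as np

rng = np.random.default_rng(2)
amp = 0.5            # single-photon response amplitude (pA), hypothetical
noise_sd = 0.2       # continuous dark noise SD (pA), hypothetical
mean_photons = 0.05  # mean absorbed photons per rod per integration time

n_trials = 100_000
photons = rng.poisson(mean_photons, n_trials)   # discrete photon absorptions
response = photons * amp + rng.normal(0, noise_sd, n_trials)

# A downstream threshold (as at the rod-to-bipolar synapse) separates
# single-photon events from noise.
threshold = amp / 2
hits = np.mean(response[photons >= 1] > threshold)
false_alarms = np.mean(response[photons == 0] > threshold)
print(f"hit rate {hits:.2f}, false-alarm rate {false_alarms:.3f}")
```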
Invariant neural subspaces maintained by feedback modulation
Sensory systems reliably process incoming stimuli in spite of changes in context. Most recent models attribute this context invariance to an extraction of increasingly complex sensory features in hierarchical feedforward networks. Here, we study how context-invariant representations can instead be established by feedback rather than feedforward processing. We show that feedforward neural networks modulated by feedback can dynamically generate invariant sensory representations. The required feedback can be implemented as a slow and spatially diffuse gain modulation. The invariance is not present on the level of individual neurons, but emerges only on the population level. Mechanistically, the feedback modulation dynamically reorients the manifold of neural activity and thereby maintains an invariant neural subspace in spite of contextual variations. Our results highlight the importance of population-level analyses for understanding the role of feedback in flexible sensory processing.
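As a minimal sketch of this idea (not the authors' model), consider a feedforward network whose drive is scaled by an unknown contextual gain, with a slow, spatially diffuse multiplicative feedback that pulls overall population activity toward a set point. Individual rates depend on context, but the direction of the population activity vector, i.e. the occupied subspace, is invariant. All parameters are invented.

```python
# Minimal sketch: slow diffuse gain feedback cancels a contextual scaling,
# leaving the population activity pattern invariant. Parameters invented.
import numpy as np

rng = np.random.default_rng(3)
n_in, n_out = 10, 50
W = rng.normal(size=(n_out, n_in)) / np.sqrt(n_in)   # feedforward weights

def respond(stimulus, context_gain, gain=1.0, steps=200, lr=0.05):
    """Feedforward response with a slow multiplicative feedback gain that
    tracks (and cancels) the contextual scaling of overall activity."""
    target = 1.0                                     # desired mean |activity|
    for _ in range(steps):                           # slow feedback loop
        r = gain * (W @ (context_gain * stimulus))
        gain += lr * (target - np.mean(np.abs(r)))   # diffuse gain update
    return r

s = rng.normal(size=n_in)
r1 = respond(s, context_gain=1.0)
r2 = respond(s, context_gain=5.0)   # same stimulus, different context
# Individual rates differ transiently, but the final population patterns align:
cos = r1 @ r2 / (np.linalg.norm(r1) * np.linalg.norm(r2))
print(f"cosine similarity across contexts: {cos:.3f}")
```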
Hearing in an acoustically varied world
In order for animals to thrive in their complex environments, their sensory systems must form representations of objects that are invariant to changes in some dimensions of their physical cues. For example, we can recognize a friend's speech in a forest, a small office, and a cathedral, even though the sound reaching our ears is very different in these three environments. I will discuss our recent experiments on how neurons in auditory cortex can form stable representations of sounds in this acoustically varied world. We began by using a normative computational model of hearing to examine how the brain may recognize a sound source across rooms with different levels of reverberation. The model predicted that reverberation can be removed from the incoming sound by delaying the inhibitory component of spectrotemporal receptive fields, with longer delays in stronger reverberation. Our electrophysiological recordings then confirmed that neurons in ferret auditory cortex apply this algorithm to adapt to different room sizes. Our results demonstrate that this neural process is dynamic and adaptive. These studies provide new insights into how we can recognize auditory objects even in highly reverberant environments, and point to further research questions about how reverb adaptation is implemented in the cortical circuit.
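The predicted algorithm can be sketched with toy temporal kernels (schematic, not the fitted STRFs from the study): an excitatory lobe paired with a delayed inhibitory lobe subtracts a scaled copy of the recent past, and setting the inhibitory gain to match the reverberant decay cancels the tail; stronger reverberation calls for a longer inhibitory delay.

```python
# Schematic sketch of dereverberation by delayed inhibition; toy kernels only.
import numpy as np

fs = 1000                                # sample rate (Hz), hypothetical
t = np.arange(0, 0.3, 1 / fs)            # 300 ms of kernel support

def strf_kernel(inh_delay, inh_gain):
    """Temporal RF: excitatory lobe minus a delayed inhibitory lobe; the model
    predicts a longer inh_delay in more reverberant rooms."""
    exc = np.exp(-0.5 * ((t - 0.010) / 0.005) ** 2)
    inh = inh_gain * np.exp(-0.5 * ((t - inh_delay) / 0.005) ** 2)
    return exc - inh

# A brief sound event smeared by a reverberant tail (tau = 70 ms).
tau = 0.07
dry = np.zeros_like(t); dry[20] = 1.0
reverb = np.convolve(dry, np.exp(-t / tau))[: t.size]

# Delayed inhibition subtracts a scaled copy of the recent past; choosing
# gain = exp(-delay/tau) cancels the exponential tail almost exactly.
delay = 0.070
kernel = strf_kernel(inh_delay=0.010 + delay, inh_gain=np.exp(-delay / tau))
cleaned = np.convolve(reverb, kernel)[: t.size]
print(f"tail beyond 150 ms: reverberant {reverb[150:].sum():.2f}, "
      f"after filtering {cleaned[150:].sum():.2f}")
```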
Design principles of adaptable neural codes
Behavior relies on the ability of sensory systems to infer changing properties of the environment from incoming sensory stimuli. However, the demands that detecting and adjusting to changes in the environment place on a sensory system often differ from the demands associated with performing a specific behavioral task. This necessitates neural coding strategies that can dynamically balance these conflicting needs. I will discuss our ongoing theoretical work to understand how this balance can best be achieved. We connect ideas from efficient coding and Bayesian inference to ask how sensory systems should dynamically allocate limited resources when the goal is to optimally infer changing latent states of the environment, rather than reconstruct incoming stimuli. We use these ideas to explore dynamic tradeoffs between the efficiency and speed of sensory adaptation schemes, and the downstream computations that these schemes might support. Finally, we derive families of codes that balance these competing objectives, and we demonstrate their close match to experimentally observed neural dynamics during sensory adaptation. These results provide a unifying perspective on adaptive neural dynamics across a range of sensory systems, environments, and sensory tasks.
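One way to picture this tradeoff is the following invented mini-model (not the codes derived in the talk): a latent state drifts and occasionally jumps, a resource-limited encoder has only a few output levels, and efficient coding centers the quantizer on the current Bayesian posterior. The same narrowing of dynamic range that makes coding efficient also slows the detection of large changes, which is the tension described above.

```python
# Invented mini-model: adaptive quantizer centered on a Bayesian posterior,
# illustrating the efficiency-vs-speed tradeoff. All dynamics hypothetical.
import numpy as np

rng = np.random.default_rng(4)
T, jump_t = 400, 200
latent = np.where(np.arange(T) < jump_t, 0.0, 3.0) + 0.1 * rng.normal(size=T)

mu, var = 0.0, 1.0               # Gaussian posterior over the latent state
obs_var, drift_var = 0.5, 0.01   # assumed observation noise and state drift
levels = 8                       # limited resource: encoder output levels

for x in latent:
    # Efficient coding: center and scale the quantizer on the posterior.
    lo, hi = mu - 3 * np.sqrt(var), mu + 3 * np.sqrt(var)
    y = np.clip(np.round((x - lo) / (hi - lo) * (levels - 1)), 0, levels - 1)
    decoded = lo + (y + 0.5) / levels * (hi - lo)
    # Bayesian inference: Kalman-style update from the decoded observation.
    var += drift_var
    k = var / (var + obs_var)
    mu, var = mu + k * (decoded - mu), (1 - k) * var
print(f"posterior mean after the jump: {mu:.2f} (true latent 3.0)")
```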
Target detection in the natural world
Animal sensory systems are optimally adapted to the features typically encountered in natural surroundings, allowing neurons with limited bandwidth to encode enormous input ranges. Importantly, natural scenes are not random, and peripheral visual systems have therefore evolved to reduce this predictable redundancy. The vertebrate visual cortex is also optimally tuned to the spatial statistics of natural scenes, but much less is known about how the insect brain responds to these statistics. We are redressing this deficiency using several techniques. Olga Dyakova uses exquisite image manipulation to give natural images unnatural image statistics, or vice versa. Marissa Holden then uses these images as stimuli in electrophysiological recordings from neurons in the fly optic lobes, to see how the brain codes for the statistics typically encountered in natural scenes, and Olga Dyakova measures the behavioral optomotor response on our trackball set-up.
The Evolution of Looking and Seeing: New Insights from Colorful Jumping Spiders
During communication, alignment between signals and sensors can be critical. Signals are often best perceived from specific angles, and sensory systems can also exhibit strong directional biases. However, we know little about how animals establish and maintain such signaling alignment during communication. To investigate this, we characterized the spatial dynamics of visual courtship signaling in the jumping spider Habronattus pyrrithrix. The male performs forward-facing displays involving complex color and movement patterns, with distinct long- and short-range phases. The female views displays with two distinct eye types and can only perceive colors and fine patterns of male displays when they are presented in her frontal field of view. Whether and how courtship interactions produce such alignment between male display and female field of view is unknown. We recorded relative positions and orientations of both actors throughout courtship and established the role of each sex in maintaining signaling alignment. Males always oriented their displays toward the female. However, when females were free to move, male displays were consistently aligned with female principal eyes only during short-range courtship. When female position was fixed, signaling alignment consistently occurred during both phases, suggesting that female movement reduces communication efficacy. When female models were experimentally rotated to face away during courtship, males rarely repositioned themselves to re-align their display. However, males were more likely to present certain display elements after females turned to face them. Thus, although signaling alignment is a function of both sexes, males appear to rely on female behavior for effective communication.
Long-term effects of diet-induced obesity on gut-brain communication
Rapid communication between the gut and the brain about recently consumed nutrients is critical for regulating food intake and maintaining energy homeostasis. We have shown that the infusion of nutrients directly into the gastrointestinal tract rapidly inhibits hunger-promoting AgRP neurons in the arcuate nucleus of the hypothalamus and suppresses subsequent feeding. The mechanism of this inhibition appears to depend upon macronutrient content, and can be recapitulated by several hormones secreted in the gut in response to nutrient ingestion. In high-fat diet-induced obese mice, the responses of AgRP neurons to nutrient-related stimuli are broadly attenuated. This attenuation is largely irreversible following weight loss and may represent a mechanism underlying the difficulty of losing weight and the propensity for weight regain in obesity.
An evolutionarily conserved hindwing circuit mediates Drosophila flight control
My research at the interface of neurobiology, biomechanics, and behavior seeks to understand how the timing precision of sensory input structures locomotor output. My lab studies the flight behavior of the fruit fly, Drosophila melanogaster, combining powerful genetic tools available for labeling and manipulating neural circuits with cutting-edge imaging in awake, behaving animals. This work has the potential to fundamentally reshape understanding of the evolution of insect flight, as well as highlight the tremendous importance of timing in the context of locomotion. Timing is crucial to the nervous system. The ability to rapidly detect and process subtle disturbances in the environment determines whether an animal can attain its next meal or successfully navigate complex, unpredictable terrain. While previous work on various animals has made tremendous strides uncovering the specialized neural circuits used to resolve timing differences with sub-microsecond resolution, it has focused on the detection of timing differences in sensory systems. Our understanding of how the timing of motor output is structured by precise sensory input remains poor. My research focuses on an organ unique to flies, called the haltere, that serves as a bridge for detecting and acting on subtle timing differences, helping flies execute rapid maneuvers. Understanding how this relatively simple insect can perform such impressive aerial feats demands an integrative approach that combines physics, muscle mechanics, neuroscience, and behavior. This unique, powerful approach will reveal the general principles that govern sensorimotor processing.
Dynamic computation in the retina by retuning of neurons and synapses
How does a circuit of neurons process sensory information? And how are transformations of neural signals altered by changes in synaptic strength? We investigate these questions in the context of the visual system and the lateral line of fish. A distinguishing feature of our approach is the imaging of activity across populations of synapses, the fundamental elements of signal transfer within all brain circuits. A guiding hypothesis is that the plasticity of neurotransmission plays a major part in controlling the input-output relation of sensory circuits, regulating the tuning and sensitivity of neurons to allow adaptation or sensitization to particular features of the input. Sensory systems continuously adjust their input-output relation according to the recent history of the stimulus. A common alteration is a decrease in the gain of the response to a constant feature of the input, termed adaptation. For instance, in the retina, many of the ganglion cells (RGCs) providing the output produce their strongest responses just after the temporal contrast of the stimulus increases, but the response declines if this input is maintained. The advantage of adaptation is that it prevents saturation of the response to strong stimuli and allows for continued signaling of future increases in stimulus strength. But adaptation comes at a cost: a reduced sensitivity to a future decrease in stimulus strength. The retina compensates for this loss of information through an intriguing strategy: while some RGCs adapt following a strong stimulus, a second population gradually becomes sensitized. We found that the underlying circuit mechanisms involve two opposing forms of synaptic plasticity in bipolar cells: synaptic depression causes adaptation, and facilitation causes sensitization. Facilitation is in turn caused by depression in inhibitory synapses providing negative feedback. These opposing forms of plasticity can cause simultaneous increases and decreases in the contrast sensitivity of different RGCs, which suggests a general framework for understanding the function of sensory circuits: plasticity of both excitatory and inhibitory synapses controls dynamic changes in tuning and gain.
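The proposed circuit logic can be caricatured with simple resource dynamics (Tsodyks-Markram-style, with invented constants; a sketch, not the lab's fitted model): a depressing excitatory synapse makes one RGC adapt during a high-contrast episode, while depression of an inhibitory feedback synapse disinhibits a second RGC, which therefore emerges from the episode sensitized.

```python
# Caricature of adaptation vs. sensitization from opposing synaptic plasticity;
# resource dynamics and all constants are invented for illustration.
import numpy as np

T, high = 600, slice(200, 400)           # timeline with a high-contrast episode
contrast = np.full(T, 0.2); contrast[high] = 1.0

r_exc, r_inh = 1.0, 1.0                  # synaptic resources (1 = recovered)
adapt, sensitize = [], []
for c in contrast:
    # Depressing excitatory synapse: strong drive consumes resources.
    r_exc += 0.01 * (1 - r_exc) - 0.05 * c * r_exc
    # Depressing *inhibitory* feedback synapse: its rundown disinhibits the
    # second RGC, which behaves as if facilitated.
    r_inh += 0.01 * (1 - r_inh) - 0.05 * c * r_inh
    adapt.append(c * r_exc)              # adapting RGC drive
    sensitize.append(c * (1.2 - r_inh))  # second RGC: fixed drive minus inhibition

print(f"adapting RGC:   episode onset {adapt[200]:.2f} -> late {adapt[399]:.2f}")
print(f"sensitized RGC: before {sensitize[199]:.2f} -> after {sensitize[450]:.2f}")
```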
The active modulation of sound and vibration perception
The dominant view of perception right now is that information travels from the environment to the sensory system, then to the nervous system, which processes it to generate a percept and behaviour. Ongoing behaviour is thought to occur largely through simple iterations of this process. However, this linear view, in which information flows only in one direction and the properties of the environment and the sensory system remain static and unaffected by behaviour, is slowly fading. Many of us are beginning to appreciate that perception is largely active, i.e. that information flows back and forth between the three systems, modulating their respective properties. In other words, in the real world, the loop between environment and sensorimotor systems is pretty much always closed. I study this loop; in particular, I study how the reverse arm of the loop affects sound and vibration perception. I will present two examples of motor modulation of perception at two very different temporal and spatial scales. First, in crickets, I will present data on how high-speed molecular motor activity enhances hearing via the well-studied phenomenon of active amplification. Second, in spiders, I will present data on how body posture, a slow macroscopic feature that can barely be called ‘active’, can nonetheless modulate vibration perception. I hope these results will motivate a conversation about whether ‘active’ perception is an optional feature observed in some sensory systems, or something that is ultimately necessitated by both evolution and physics.
Efficient nonlinear receptive field estimation across processing stages of sensory systems
Bernstein Conference 2024