Sensory Input
sensorimotor control, movement, touch, EEG
Traditionally, touch is associated with exteroception and is rarely considered a relevant sensory cue for controlling movements in space, unlike vision. We developed a technique to isolate and measure tactile involvement in controlling sliding finger movements over a surface. Young adults traced a 2D shape with their index finger under direct or mirror-reversed visual feedback to create a conflict between visual and somatosensory inputs. In this context, increased reliance on somatosensory input compromises movement accuracy. Based on the hypothesis that tactile cues contribute to guiding hand movements when in contact with a surface, we predicted poorer performance when the participants traced with their bare finger compared to when their tactile sensation was dampened by a smooth, rigid finger splint. The results supported this prediction. EEG source analyses revealed smaller current in the source-localized somatosensory cortex during sensory conflict when the finger directly touched the surface. This finding supports the hypothesis that, in response to mirror-reversed visual feedback, the central nervous system selectively gated task-irrelevant somatosensory inputs, thereby mitigating, though not entirely resolving, the visuo-somatosensory conflict. Together, our results emphasize touch’s involvement in movement control over a surface, challenging the notion that vision predominantly governs goal-directed hand or finger movements.
Computational Mechanisms of Predictive Processing in Brains and Machines
Predictive processing offers a unifying view of neural computation, proposing that brains continuously anticipate sensory input and update internal models based on prediction errors. In this talk, I will present converging evidence for the computational mechanisms underlying this framework across human neuroscience and deep neural networks. I will begin with recent work showing that large-scale distributed prediction-error encoding in the human brain directly predicts how sensory representations reorganize through predictive learning. I will then turn to PredNet, a popular predictive-coding-inspired deep network that has been widely used to model real-world biological vision systems. Using dynamic stimuli generated with our Spatiotemporal Style Transfer algorithm, we demonstrate that PredNet relies primarily on low-level spatiotemporal structure and remains insensitive to high-level content, revealing limits in its generalization capacity. Finally, I will discuss new recurrent vision models that integrate top-down feedback connections with intrinsic neural variability, uncovering a dual mechanism for robust sensory coding in which neural variability decorrelates unit responses, while top-down feedback stabilizes network dynamics. Together, these results outline how prediction-error signaling and top-down feedback pathways shape adaptive sensory processing in biological and artificial systems.
Prof Iain Couzin
The application of Virtual Reality (VR) environments allows us to experimentally dissociate social input and responses, opening powerful avenues of inquiry into the dynamics of social influence and the physiological and neural mechanisms of collective behaviour. A key task for the nervous system is to make sense of complex streams of potentially informative sensory input, allowing appropriate, relatively low-dimensional, motor actions to be taken, sometimes under conditions of considerable time constraint. The student will employ fully immersive ‘holographic’ VR to investigate the behavioural mechanisms by which freely swimming zebrafish obtain both social and non-social sensory information from their surroundings, and how they use this to inform movement decisions. Immersive VR allows extremely precise control over appearance, body posture, and motion, enabling photorealistic virtual individuals to interact dynamically with unrestrained real animals. Similar to a method that has transformed neuroscience (the dynamic patch clamp paradigm, in which inputs to neurons can be based on fast closed-loop measurements of their present behaviour), VR creates the possibility for a ‘dynamic social patch clamp’ paradigm in which we can develop, and interrogate, decision-making models by integrating virtual organisms in the same environment as real individuals. This tool will help us to infer the sensory basis of social influence and the causality of influence in (small) social networks, to provide highly repeatable stimuli (allowing us to evaluate inter-individual and within-individual variation), and to interrogate the feedback loops inherent in social dynamics. For more information see: https://www.smartnets-etn.eu/using-immersive-virtual-reality-vr-to-determine-causal-relationships-in-animal-social-networks/
From Spiking Predictive Coding to Learning Abstract Object Representation
In the first part of the talk, I will present Predictive Coding Light (PCL), a novel unsupervised learning architecture for spiking neural networks. In contrast to conventional predictive coding approaches, which only transmit prediction errors to higher processing stages, PCL learns inhibitory lateral and top-down connectivity to suppress the most predictable spikes and passes a compressed representation of the input to higher processing stages. We show that PCL reproduces a range of biological findings and exhibits a favorable tradeoff between energy consumption and downstream classification performance on challenging benchmarks. The second part of the talk will feature our lab’s efforts to explain how infants and toddlers might learn abstract object representations without supervision. I will present deep learning models that exploit the temporal and multimodal structure of their sensory inputs to learn representations of individual objects, object categories, or abstract super-categories such as “kitchen object” in a fully unsupervised fashion. These models offer a parsimonious account of how abstract semantic knowledge may be rooted in children's embodied first-person experiences.
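As a concrete, hypothetical illustration of the core idea (this is not the authors' PCL code, and all names and parameters are invented for the sketch): a classic anti-Hebbian rule can learn lateral weights that subtract the predictable, correlated component of an input, passing on a decorrelated residual to the next stage.

```python
import numpy as np

rng = np.random.default_rng(0)
n_units, n_latent = 5, 2
mix = rng.normal(0.0, 1.0, (n_units, n_latent))  # shared causes make inputs mutually predictable

def sample(batch):
    z = rng.normal(0.0, 1.0, (n_latent, batch))
    return mix @ z + 0.3 * rng.normal(0.0, 1.0, (n_units, batch))

L = np.zeros((n_units, n_units))   # learned lateral weights (positive entries act as inhibition)
lr = 0.002
for _ in range(30000):
    x = sample(1)[:, 0]
    y = x - L @ x                  # lateral connections remove the predictable part
    L += lr * np.outer(y, y)       # anti-Hebbian update: grow inhibition between co-active outputs
    np.fill_diagonal(L, 0.0)

def mean_offdiag_corr(M):
    C = np.corrcoef(M)
    return np.abs(C[~np.eye(len(C), dtype=bool)]).mean()

X = sample(5000)
Y = (np.eye(n_units) - L) @ X
print(mean_offdiag_corr(X), mean_offdiag_corr(Y))  # output correlations shrink after learning
```

The update converges when the off-diagonal output correlations vanish, so the stage transmits only the unpredicted residual, loosely analogous to suppressing predictable spikes.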
Neural mechanisms governing the learning and execution of avoidance behavior
The nervous system orchestrates adaptive behaviors by intricately coordinating responses to internal cues and environmental stimuli. This involves integrating sensory input, managing competing motivational states, and drawing on past experiences to anticipate future outcomes. While traditional models attribute this complexity to interactions between the mesocorticolimbic system and hypothalamic centers, the specific nodes of integration have remained elusive. Recent research, including our own, sheds light on the midline thalamus's overlooked role in this process. We propose that the midline thalamus integrates internal states with memory and emotional signals to guide adaptive behaviors. Our investigations into midline thalamic neuronal circuits have provided crucial insights into the neural mechanisms behind flexibility and adaptability. Understanding these processes is essential for deciphering human behavior and conditions marked by impaired motivation and emotional processing. Our research aims to contribute to this understanding, paving the way for targeted interventions and therapies to address such impairments.
Roles of inhibition in stabilizing and shaping the response of cortical networks
Inhibition has long been thought to stabilize the activity of cortical networks at low rates, and to significantly shape their response to sensory inputs. In this talk, I will describe three recent collaborative projects that shed light on these issues. (1) I will show how optogenetic excitation of inhibitory neurons is consistent with cortex being inhibition stabilized even in the absence of sensory inputs, and how these data can constrain the coupling strengths of E-I cortical network models. (2) Recent analysis of the effects of optogenetic excitation of pyramidal cells in V1 of mice and monkeys shows that in some cases this optogenetic input reshuffles the firing rates of neurons of the network, leaving the distribution of rates unaffected. I will show how this surprising effect can be reproduced in sufficiently strongly coupled E-I networks. (3) Another puzzle has been to understand the respective roles of different inhibitory subtypes in network stabilization. Recent data reveal a novel, state-dependent, paradoxical effect of weakening AMPAR-mediated synaptic currents onto SST cells. Mathematical analysis of a network model with multiple inhibitory cell types shows that this effect tells us under which conditions SST cells are required for network stabilization.
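The inhibition-stabilized regime in point (1) can be illustrated with a minimal two-population rate model (a generic textbook sketch with invented parameters, not the speaker's fitted models): when the excitatory subnetwork is unstable on its own (wEE > 1), extra excitatory drive to the inhibitory population paradoxically lowers its steady-state rate.

```python
import numpy as np

# Two-population rate model: tau * dr/dt = -r + [W r + I_ext]_+
W = np.array([[2.0, -2.5],    # E<-E, E<-I
              [2.0, -1.0]])   # I<-E, I<-I
tau = np.array([10.0, 2.0])   # time constants (ms)
# wEE = 2 > 1: the E subnetwork alone is unstable, so inhibition must stabilize it

def steady_state(I_ext, T=2000.0, dt=0.1):
    r = np.zeros(2)
    for _ in range(int(T / dt)):
        r += dt / tau * (-r + np.maximum(W @ r + I_ext, 0.0))
    return r

rE1, rI1 = steady_state(np.array([2.0, 1.0]))
rE2, rI2 = steady_state(np.array([2.0, 1.2]))  # extra excitatory drive to the I population
print(rI1, rI2)  # paradoxically, the inhibitory rate drops: rI2 < rI1
```

Solving the fixed-point equations by hand gives rI = 1.0 before and rI = 14/15 after the extra inhibitory drive, the signature paradoxical effect used to diagnose inhibition stabilization.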
Visual mechanisms for flexible behavior
Perhaps the most impressive aspect of the way the brain enables us to act on the sensory world is its flexibility. We can make a general inference about many sensory features (rating the ripeness of mangoes or avocados) and map a single stimulus onto many choices (slicing or blending mangoes). These can be thought of as flexible many-to-one (many features to one inference) and one-to-many (one feature to many choices) mappings from sensory inputs to actions. Both theoretical and experimental investigations of this sort of flexible sensorimotor mapping tend to treat sensory areas as relatively static. Models typically instantiate flexibility through changing interactions (or weights) between units that encode sensory features and those that plan actions. Experimental investigations often focus on association areas involved in decision-making that show pronounced modulations by cognitive processes. I will present evidence that the flexible formatting of visual information in visual cortex can support both generalized inference and choice mapping. Our results suggest that visual cortex mediates many forms of cognitive flexibility that have traditionally been ascribed to other areas or mechanisms. Further, we find that a primary difference between visual and putative decision areas is not what information they encode, but how that information is formatted in the responses of neural populations, which is related to differences in the impact of causally manipulating different areas on behavior. This scenario allows for flexibility in the mapping between stimuli and behavior while maintaining stability in the information encoded in each area and in the mappings between groups of neurons.
Feedback control in the nervous system: from cells and circuits to behaviour
The nervous system is fundamentally a closed-loop control device: the output of actions continually influences the internal state and subsequent actions. This is true at the single-cell and even the molecular level, where “actions” take the form of signals that are fed back to achieve a variety of functions, including homeostasis, excitability and various kinds of multistability that allow switching and storage of memory. It is also true at the behavioural level, where an animal’s motor actions directly influence sensory input on short timescales, and higher-level information about goals and intended actions is continually updated on the basis of current and past actions. Studying the brain in a closed-loop setting requires a multidisciplinary approach, leveraging engineering and theory as well as advances in measuring and manipulating the nervous system. I will describe our recent attempts to achieve this fusion of approaches at multiple levels in the nervous system, from synaptic signalling to closed-loop brain-machine interfaces.
The explorative role of the sense of agency in our perception and action
The sense of agency refers to the subjective feeling of controlling one's own actions and, through them, external events. Why is this subjective feeling important for humans? Is it just a by-product of our actions? Previous studies have shown that the sense of agency can affect the perceived intensity of sensory input because we predict the input from our motor intention. However, my research has found that the sense of agency plays more roles than prediction alone. It enhances perceptual processing of sensory input and potentially helps us harvest more information about the link between the external world and the self. Furthermore, our recent research found both indirect and direct evidence that the sense of agency is important for people's exploratory behaviors, and this may be linked to proximal exploitation of one's control over the environment. In this talk, I will also introduce the paradigms we use to study the sense of agency as a result of perceptual processes, and our findings on individual differences in this sense and their implications.
Are place cells just memory cells? Probably yes
Neurons in the rodent hippocampus appear to encode the position of the animal in physical space during movement. Individual “place cells” fire in restricted sub-regions of an environment, a feature often taken as evidence that the hippocampus encodes a map of space that subserves navigation. But these same neurons exhibit complex responses to many other variables that defy explanation by position alone, and the hippocampus is known to be more broadly critical for memory formation. Here we elaborate and test a theory of hippocampal coding which produces place cells as a general consequence of efficient memory coding. We constructed neural networks that actively exploit the correlations between memories in order to learn compressed representations of experience. Place cells readily emerged in the trained model, due to the correlations in sensory input between experiences at nearby locations. Notably, these properties were highly sensitive to the compressibility of the sensory environment, with place field size and population coding level in dynamic opposition to optimally encode the correlations between experiences. The effects of learning were also strongly biphasic: nearby locations became represented more similarly following training, while locations with intermediate similarity became increasingly decorrelated, both distance-dependent effects that scaled with the compressibility of the input features. Using virtual reality and 2-photon functional calcium imaging in head-fixed mice, we recorded the simultaneous activity of thousands of hippocampal neurons during virtual exploration to test these predictions. Varying the compressibility of sensory information in the environment produced systematic changes in place cell properties that reflected the changing input statistics, consistent with the theory. We similarly identified representational plasticity during learning, which produced a distance-dependent exchange between compression and pattern separation.
These results motivate a more domain-general interpretation of hippocampal computation, one that is naturally compatible with earlier theories on the circuit's importance for episodic memory formation. Work done in collaboration with James Priestley, Lorenzo Posani, Marcus Benna, Attila Losonczy.
Private oxytocin supply and its receptors in the hypothalamus for social avoidance learning
Many animals live in complex social groups. To survive, it is essential to know whom to avoid and whom to interact with. Although naïve mice are naturally attracted to any adult conspecifics, a single defeat experience can elicit social avoidance towards the aggressor for days. The neural mechanisms underlying the behavior switch from social approach to social avoidance remain incompletely understood. Here, we identify oxytocin neurons in the retrochiasmatic supraoptic nucleus (SOROXT) and oxytocin receptor (OXTR) expressing cells in the anterior subdivision of the ventromedial hypothalamus, ventrolateral part (aVMHvlOXTR) as a key circuit motif for defeat-induced social avoidance learning. After defeat, aVMHvlOXTR cells drastically increase their responses to aggressor cues. This response change is functionally important, as optogenetic activation of aVMHvlOXTR cells elicits time-locked social avoidance towards a benign social target, whereas inactivating the cells suppresses defeat-induced social avoidance. Furthermore, OXTR in the aVMHvl is itself essential for the behavior change. Knocking out OXTR in the aVMHvl or antagonizing the receptor during defeat, but not during post-defeat social interaction, impairs defeat-induced social avoidance. aVMHvlOXTR receives its private supply of oxytocin from SOROXT cells. SOROXT is highly activated by the noxious somatosensory inputs associated with defeat. Oxytocin released from SOROXT depolarizes aVMHvlOXTR cells and facilitates their synaptic potentiation, and hence increases aVMHvlOXTR cell responses to aggressor cues. Ablating SOROXT cells impairs defeat-induced social avoidance learning, whereas activating the cells promotes social avoidance after a subthreshold defeat experience. Altogether, our study reveals an essential role of the SOROXT-aVMHvlOXTR circuit in defeat-induced social learning and highlights the importance of the hypothalamic oxytocin system in social ranking and its plasticity.
Multisensory influences on vision: Sounds enhance and alter visual-perceptual processing
Visual perception is traditionally studied in isolation from other sensory systems, and while this approach has been exceptionally successful, in the real world, visual objects are often accompanied by sounds, smells, tactile information, or taste. How is visual processing influenced by these other sensory inputs? In this talk, I will review studies from our lab showing that a sound can influence the perception of a visual object in multiple ways. In the first part, I will focus on spatial interactions between sound and sight, demonstrating that co-localized sounds enhance visual perception. Then, I will show that these cross-modal interactions also occur at a higher contextual and semantic level, where naturalistic sounds facilitate the processing of real-world objects that match these sounds. Throughout my talk I will explore to what extent sounds not only improve visual processing but also alter perceptual representations of the objects we see. Most broadly, I will argue for the importance of considering multisensory influences on visual perception for a more complete understanding of our visual experience.
Restructuring cortical feedback circuits
We hardly notice when there is a speck on our glasses; the obstructed visual information seems to be magically filled in. The mechanistic basis for this fundamental perceptual phenomenon has, however, remained obscure. What enables neurons in the visual system to respond to context when the stimulus is not available? While feedforward information drives the activity in cortex, feedback information is thought to provide contextual signals that are merely modulatory. We have made the discovery that mouse primary visual cortical neurons are strongly driven by feedback projections from higher visual areas when their feedforward sensory input from the retina is missing. This drive is so strong that it makes visual cortical neurons fire as much as if they were receiving a direct sensory input. These signals are likely used to predict input from the feedforward pathway. Preliminary results show that these feedback projections are strongly influenced by experience and learning.
Feedback controls what we see
We hardly notice when there is a speck on our glasses; the obstructed visual information seems to be magically filled in. The visual system uses visual context to predict the content of the stimulus. What enables neurons in the visual system to respond to context when the stimulus is not available? In cortex, sensory processing is based on a combination of feedforward information arriving from sensory organs, and feedback information that originates in higher-order areas. Whereas feedforward information drives the activity in cortex, feedback information is thought to provide contextual signals that are merely modulatory. We have made the exciting discovery that mouse primary visual cortical neurons are strongly driven by feedback projections from higher visual areas, in particular when their feedforward sensory input from the retina is missing. This drive is so strong that it makes visual cortical neurons fire as much as if they were receiving a direct sensory input.
Hebbian Plasticity Supports Predictive Self-Supervised Learning of Disentangled Representations
Discriminating distinct objects and concepts from sensory stimuli is essential for survival. Our brains accomplish this feat by forming meaningful internal representations in deep sensory networks with plastic synaptic connections. Experience-dependent plasticity presumably exploits temporal contingencies between sensory inputs to build these internal representations. However, the precise mechanisms underlying plasticity remain elusive. We derive a local synaptic plasticity model inspired by self-supervised machine learning techniques that shares a deep conceptual connection to Bienenstock-Cooper-Munro (BCM) theory and is consistent with experimentally observed plasticity rules. We show that our plasticity model yields disentangled object representations in deep neural networks without the need for supervision or implausible negative examples. In response to altered visual experience, our model qualitatively captures neuronal selectivity changes observed in the monkey inferotemporal cortex in vivo. Our work suggests a plausible learning rule to drive learning in sensory networks while making concrete, testable predictions.
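For reference, the classic BCM rule that the proposed model connects to can be sketched in a few lines (a generic textbook sketch with invented parameters, not the authors' model): with a sliding threshold tracking the running average of y², a neuron presented with two alternating orthogonal patterns becomes selective for one of them.

```python
import numpy as np

patterns = np.array([[1.0, 0.0], [0.0, 1.0]])  # two orthogonal stimuli
w = np.array([0.6, 0.5])                        # slight initial bias toward pattern 0
theta = 0.3                                     # sliding modification threshold
lr, theta_rate = 0.02, 0.05

for step in range(20000):
    x = patterns[step % 2]
    y = w @ x
    w += lr * x * y * (y - theta)         # BCM: LTD below theta, LTP above
    w = np.maximum(w, 0.0)                # keep weights non-negative
    theta += theta_rate * (y**2 - theta)  # threshold tracks E[y^2]

print(w)  # selectivity emerges: one weight grows, the other decays to zero
```

The homogeneous (non-selective) solution is unstable under BCM, so the small initial bias is amplified until the neuron responds to only one of the two patterns, the basic mechanism behind BCM-style selectivity development.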
Optimization at the Single Neuron Level: Prediction of Spike Sequences and Emergence of Synaptic Plasticity Mechanisms
Intelligent behavior depends on the brain’s ability to anticipate future events. However, the learning rules that enable neurons to predict and fire ahead of sensory inputs remain largely unknown. We propose a plasticity rule based on predictive processing, where the neuron learns a low-rank model of the synaptic input dynamics in its membrane potential. Neurons thereby amplify those synapses that maximally predict other synaptic inputs based on their temporal relations, providing a solution to an optimization problem that can be implemented at the single-neuron level using only local information. Consequently, neurons learn sequences over long timescales and shift their spikes towards the first inputs in a sequence. We show that this mechanism can explain the development of anticipatory motion signaling and recall in the visual system. Furthermore, we demonstrate that the learning rule gives rise to several experimentally observed STDP (spike-timing-dependent plasticity) mechanisms. These findings suggest prediction as a guiding principle to orchestrate learning and synaptic plasticity in single neurons.
Probabilistic computation in natural vision
A central goal of vision science is to understand the principles underlying the perception and neural coding of the complex visual environment of our everyday experience. In the visual cortex, foundational work with artificial stimuli, and more recent work combining natural images and deep convolutional neural networks, have revealed much about the tuning of cortical neurons to specific image features. However, a major limitation of this existing work is its focus on single-neuron response strength to isolated images. First, during natural vision, the inputs to cortical neurons are not isolated but rather embedded in a rich spatial and temporal context. Second, the full structure of population activity—including the substantial trial-to-trial variability that is shared among neurons—determines encoded information and, ultimately, perception. In the first part of this talk, I will argue for a normative approach to study encoding of natural images in primary visual cortex (V1), which combines a detailed understanding of the sensory inputs with a theory of how those inputs should be represented. Specifically, we hypothesize that V1 response structure serves to approximate a probabilistic representation optimized to the statistics of natural visual inputs, and that contextual modulation is an integral aspect of achieving this goal. I will present a concrete computational framework that instantiates this hypothesis, and data recorded using multielectrode arrays in macaque V1 to test its predictions. In the second part, I will discuss how we are leveraging this framework to develop deep probabilistic algorithms for natural image and video segmentation.
Taming chaos in neural circuits
Neural circuits exhibit complex activity patterns, both spontaneously and in response to external stimuli. Information encoding and learning in neural circuits depend on the ability of time-varying stimuli to control spontaneous network activity. In particular, variability arising from the sensitivity to initial conditions of recurrent cortical circuits can limit the information conveyed about the sensory input. Spiking and firing-rate network models can exhibit such sensitivity to initial conditions, a sensitivity that is reflected in their dynamic entropy rate and attractor dimensionality, computed from their full Lyapunov spectrum. I will show how chaos in both spiking and rate networks depends on biophysical properties of neurons and the statistics of time-varying stimuli. In spiking networks, increasing the input rate or coupling strength aids in controlling the driven target circuit, which is reflected in both a reduced trial-to-trial variability and a decreased dynamic entropy rate. With sufficiently strong input, a transition towards complete network state control occurs. Surprisingly, this transition does not coincide with the transition from chaos to stability but occurs at even larger values of external input strength. Controllability of spiking activity is facilitated when neurons in the target circuit have a sharp spike onset, that is, a high speed at which neurons launch into the action potential. I will also discuss chaos and controllability in firing-rate networks in the balanced state. For these, external control of recurrent dynamics strongly depends on correlations in the input. This phenomenon was studied with a non-stationary dynamic mean-field theory that determines how the activity statistics and the largest Lyapunov exponent depend on frequency and amplitude of the input, recurrent coupling strength, and network size. This shows that uncorrelated inputs facilitate learning in balanced networks.
The results highlight the potential of Lyapunov spectrum analysis as a diagnostic for machine learning applications of recurrent networks. They are also relevant in light of recent advances in optogenetics that allow for time-dependent stimulation of a select population of neurons.
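The largest Lyapunov exponent invoked above can be estimated with the standard two-trajectory (Benettin) method; here is a minimal sketch for the classic random tanh rate network (illustrative parameters, not the speaker's full Lyapunov-spectrum machinery). The coupling gain g plays the role of coupling strength, with a transition to chaos near g = 1.

```python
import numpy as np

def largest_lyapunov(g, N=200, dt=0.1, steps=4000, discard=500, seed=1):
    """Benettin two-trajectory estimate for dx/dt = -x + J tanh(x)."""
    rng = np.random.default_rng(seed)
    J = rng.normal(0.0, g / np.sqrt(N), (N, N))  # random coupling with gain g
    x = rng.normal(0.0, 1.0, N)
    d0 = 1e-8
    pert = rng.normal(0.0, 1.0, N)
    y = x + d0 * pert / np.linalg.norm(pert)     # nearby shadow trajectory
    acc = 0.0
    for t in range(steps):
        x = x + dt * (-x + J @ np.tanh(x))
        y = y + dt * (-y + J @ np.tanh(y))
        d = np.linalg.norm(y - x)
        if t >= discard:                          # skip the initial transient
            acc += np.log(d / d0)
        y = x + (y - x) * (d0 / d)               # renormalize the separation
    return acc / ((steps - discard) * dt)

lam_low, lam_high = largest_lyapunov(0.5), largest_lyapunov(2.5)
print(lam_low, lam_high)  # negative below the transition, positive (chaotic) above
```

Below the transition the exponent approaches g - 1 (here about -0.5), while above it the positive exponent quantifies the trial-to-trial divergence that external input must overcome to control the circuit.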
Separable pupillary signatures of perception and action during perceptual multistability
The pupil provides a rich, non-invasive measure of the neural bases of perception and cognition, and has been of particular value in uncovering the role of arousal-linked neuromodulation, which alters cortical processing as well as pupil size. But pupil size is subject to a multitude of influences, which complicates unique interpretation. We measured pupils of observers experiencing perceptual multistability: an ever-changing subjective percept in the face of unchanging but inconclusive sensory input. In separate conditions the endogenously generated perceptual changes were either task-relevant or not, allowing a separation between perception-related and task-related pupil signals. Perceptual changes were marked by a complex pupil response that could be decomposed into two components: a dilation tied to task execution and plausibly indicative of an arousal-linked noradrenaline surge, and an overlapping constriction tied to the perceptual transient and plausibly a marker of altered visual cortical representation. Constriction amplitude, but not dilation amplitude, systematically depended on the time interval between perceptual changes, possibly providing an overt index of neural adaptation. These results show that the pupil provides a simultaneous reading on interacting but dissociable neural processes during perceptual multistability, and suggest that arousal-linked neuromodulation shapes action but not perception in these circumstances. This presentation covers work published in eLife.
What happens to our ability to perceive multisensory information as we age?
Our ability to perceive the world around us can be affected by a number of factors including the nature of the external information, prior experience of the environment, and the integrity of the underlying perceptual system. A particular challenge for the brain is to maintain a coherent perception from information encoded by the peripheral sensory organs whose function is affected by typical, developmental changes across the lifespan. Yet, how the brain adapts to the maturation of the senses, as well as experiential changes in the multisensory environment, is poorly understood. Over the past few years, we have used a range of multisensory tasks to investigate the influence of ageing on the brain’s ability to merge sensory inputs. In particular, we have embedded an audio-visual task based on the sound-induced flash illusion (SIFI) into a large-scale, longitudinal study of ageing. Our findings support the idea that the temporal binding window (TBW) is modulated by age and reveal important individual differences in this TBW that may have clinical implications. However, our investigations also suggest the TBW is experience-dependent, with evidence for both long- and short-term behavioural plasticity. An overview of these findings, including recent evidence on how multisensory integration may be associated with higher-order functions, will be discussed.
Inhibitory connectivity and computations in olfaction
We use the olfactory system and forebrain of (adult) zebrafish as a model to analyze how relevant information is extracted from sensory inputs, how information is stored in memory circuits, and how sensory inputs inform behavior. A series of recent findings provides evidence that inhibition has not only homeostatic functions in neuronal circuits but makes highly specific, instructive contributions to behaviorally relevant computations in different brain regions. These observations imply that the connectivity among excitatory and inhibitory neurons exhibits essential higher-order structure that cannot be determined without dense network reconstructions. To analyze such connectivity we developed an approach referred to as “dynamical connectomics” that combines 2-photon calcium imaging of neuronal population activity with EM-based dense neuronal circuit reconstruction. In the olfactory bulb, this approach identified specific connectivity among co-tuned cohorts of excitatory and inhibitory neurons that can account for the decorrelation and normalization (“whitening”) of odor representations in this brain region. These results provide a mechanistic explanation for a fundamental neural computation that strictly requires specific network connectivity.
NMC4 Short Talk: Neurocomputational mechanisms of causal inference during multisensory processing in the macaque brain
Natural perception relies inherently on inferring causal structure in the environment. However, the neural mechanisms and functional circuits that are essential for representing and updating the hidden causal structure during multisensory processing are unknown. To address this, monkeys were trained to infer the probability of a potential common source from visual and proprioceptive signals on the basis of their spatial disparity in a virtual reality system. The proprioceptive drift reported by monkeys demonstrated that they combined historical information and current multisensory signals to estimate the hidden common source and subsequently updated both the causal structure and sensory representation. Single-unit recordings in premotor and parietal cortices revealed that neural activity in premotor cortex represents the core computation of causal inference, characterizing the estimation and update of the likelihood of integrating multiple sensory inputs at a trial-by-trial level. In response to signals from premotor cortex, neural activity in parietal cortex also represents the causal structure and further dynamically updates the sensory representation to maintain consistency with the causal inference structure. Thus, our results indicate how premotor cortex integrates historical information and sensory inputs to infer hidden variables and selectively updates sensory representations in parietal cortex to support behavior. This dynamic loop of frontal-parietal interactions in the causal inference framework may provide the neural mechanism to answer long-standing questions regarding how neural circuits represent hidden structures for body-awareness and agency.
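The computation at the heart of this task follows the standard Bayesian causal-inference model: compare the likelihood that one common source generated both cues against the likelihood of two independent sources. A minimal numerical sketch (illustrative noise and prior parameters, not the authors' fitted model):

```python
import numpy as np

def p_common(x_vis, x_prop, sig_vis=1.0, sig_prop=2.0, sig_prior=10.0, prior_common=0.5):
    """Posterior probability that visual and proprioceptive cues share one source."""
    s = np.linspace(-60, 60, 4001)  # grid of candidate source positions
    ds = s[1] - s[0]
    def norm(x, mu, sig):
        return np.exp(-0.5 * ((x - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))
    prior_s = norm(s, 0.0, sig_prior)
    # C = 1: a single source s generates both cues
    like_c1 = np.sum(norm(x_vis, s, sig_vis) * norm(x_prop, s, sig_prop) * prior_s) * ds
    # C = 2: each cue comes from its own independent source
    like_c2 = (np.sum(norm(x_vis, s, sig_vis) * prior_s) * ds
               * np.sum(norm(x_prop, s, sig_prop) * prior_s) * ds)
    return like_c1 * prior_common / (like_c1 * prior_common + like_c2 * (1 - prior_common))

ps = [p_common(0.0, d) for d in (0.0, 4.0, 8.0, 16.0)]
print(ps)  # the probability of a common source falls as spatial disparity grows
```

Small disparities favor integration of the two cues, while large disparities favor segregation, which is the qualitative pattern the monkeys' proprioceptive drift is taken to reflect.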
NMC4 Short Talk: Predictive coding is a consequence of energy efficiency in recurrent neural networks
Predictive coding represents a promising framework for understanding brain function, postulating that the brain continuously inhibits predictable sensory input, ensuring a preferential processing of surprising elements. A central aspect of this view on cortical computation is its hierarchical connectivity, involving recurrent message passing between excitatory bottom-up signals and inhibitory top-down feedback. Here we use computational modelling to demonstrate that such architectural hard-wiring is not necessary. Rather, predictive coding is shown to emerge as a consequence of energy efficiency, a fundamental requirement of neural processing. When training recurrent neural networks to minimise their energy consumption while operating in predictive environments, the networks self-organise into prediction and error units with appropriate inhibitory and excitatory interconnections and learn to inhibit predictable sensory input. We demonstrate that prediction units can reliably be identified through biases in their median preactivation, pointing towards a fundamental property of prediction units in the predictive coding framework. Moving beyond the view of purely top-down driven predictions, we demonstrate via virtual lesioning experiments that networks perform predictions on two timescales: fast lateral predictions among sensory units and slower prediction cycles that integrate evidence over time. Our results, which replicate across two separate data sets, suggest that predictive coding can be interpreted as a natural consequence of energy efficiency. More generally, they raise the question of which other computational principles of brain function can be understood as a result of physical constraints posed by the brain, opening up a new area of bio-inspired, machine learning-powered neuroscience research.
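The energy argument can be illustrated with a toy calculation (a hypothetical numpy sketch, not the trained networks from the study): when predictable input is inhibited by even a naive copy-last prediction, the summed activity that error units must carry drops sharply.

```python
import numpy as np

# Toy sketch (not the trained RNNs from the study): compare the activity
# "energy" carried by error units with and without predictive inhibition.
rng = np.random.default_rng(0)
t = np.arange(200)
x = np.sin(2 * np.pi * t / 20) + 0.05 * rng.standard_normal(t.size)

# Naive copy-last prediction of the next input; a crude stand-in for the
# slower prediction cycles that integrate evidence over time.
pred = np.roll(x, 1)
pred[0] = 0.0

err_inhibited = x - pred        # error-unit activity after predictive inhibition
err_raw = x                     # error-unit activity without any prediction

energy_inhibited = np.abs(err_inhibited).sum()
energy_raw = np.abs(err_raw).sum()
```

For this predictable signal the inhibited pathway carries only a fraction of the raw activity; under an energy penalty, this is the pressure that would drive a network to self-organise into prediction and error units.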
NMC4 Keynote:
The brain represents the external world through the bottleneck of sensory organs. The network of hierarchically organized neurons is thought to recover the causes of sensory inputs to reconstruct the reality in the brain in idiosyncratic ways depending on individuals and their internal states. How can we understand the world model represented in an individual’s brain, or the neuroverse? My lab has been working on brain decoding of visual perception and subjective experiences such as imagery and dreaming using machine learning and deep neural network representations. In this talk, I will outline the progress of brain decoding methods and present how subjective experiences are externalized as images and how they could be shared across individuals via neural code conversion. The prospects of these approaches in basic science and neurotechnology will be discussed.
Reflex Regulation of Innate Immunity
Reflex circuits in the nervous system integrate changes in the environment with physiology. Compact clusters of brain neuron cell bodies, termed nuclei, are essential for receiving sensory input and for transmitting motor outputs to the body. These nuclei are critical relay stations that process incoming information and convert these signals into outgoing action potentials that regulate immune system functions. Thus, reflex neural circuits maintain parameters of immunological physiology within a narrow range optimal for health. Advances in neuroscience and immunology using optogenetics, pharmacogenetics, and functional mapping offer a new understanding of the importance of neural circuitry underlying immunity, and offer direct paths to new therapies.
Representation transfer and signal denoising through topographic modularity
To prevail in a dynamic and noisy environment, the brain must create reliable and meaningful representations from sensory inputs that are often ambiguous or corrupt. Since only information that permeates the cortical hierarchy can influence sensory perception and decision-making, it is critical that noisy external stimuli are encoded and propagated through different processing stages with minimal signal degradation. Here we hypothesize that stimulus-specific pathways akin to cortical topographic maps may provide the structural scaffold for such signal routing. We investigate whether the feature-specific pathways within such maps, characterized by the preservation of the relative organization of cells between distinct populations, can guide and route stimulus information throughout the system while retaining representational fidelity. We demonstrate that, in a large modular circuit of spiking neurons comprising multiple sub-networks, topographic projections are not only necessary for accurate propagation of stimulus representations, but can also help the system reduce sensory and intrinsic noise. Moreover, by regulating the effective connectivity and local E/I balance, modular topographic precision enables the system to gradually improve its internal representations and increase signal-to-noise ratio as the input signal passes through the network. Such a denoising function arises beyond a critical transition point in the sharpness of the feed-forward projections, and is characterized by the emergence of inhibition-dominated regimes where population responses along stimulated maps are amplified and others are weakened. Our results indicate that this is a generalizable and robust structural effect, largely independent of the underlying model specificities. 
Using mean-field approximations, we gain deeper insight into the mechanisms responsible for the qualitative changes in the system’s behavior and show that these depend only on the modular topographic connectivity and stimulus intensity. The general dynamical principle revealed by the theoretical predictions suggests that such a denoising property may be a universal, system-agnostic feature of topographic maps, and may lead to a wide range of behaviorally relevant regimes observed under various experimental conditions: maintaining stable representations of multiple stimuli across cortical circuits; amplifying certain features while suppressing others (winner-take-all circuits); and endowing circuits with metastable dynamics (winnerless competition), assumed to be fundamental in a variety of tasks.
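The denoising mechanism can be caricatured in a few lines of numpy (module count, kernel widths, and inhibition strength below are illustrative choices, not parameters of the model): sharp topographic projections combined with global inhibition amplify the stimulated module across stages, while diffuse projections lose the signal.

```python
import numpy as np

# Schematic sketch of topographic denoising across processing stages.
rng = np.random.default_rng(1)
M, stages, target = 10, 5, 3

def kernel(sigma):
    # Feed-forward weights between successive stages on a ring of M modules.
    d = np.abs(np.subtract.outer(np.arange(M), np.arange(M)))
    d = np.minimum(d, M - d)
    W = np.exp(-d**2 / (2 * sigma**2))
    return W / W.sum(axis=1, keepdims=True)

def propagate(W, r, inh=1.2):
    for _ in range(stages):
        r = np.maximum(W @ r - inh * r.mean(), 0.0)  # global inhibition + ReLU
        r = r / (r.sum() + 1e-12)                    # divisive normalization
    return r

stim = np.zeros(M)
stim[target] = 1.0
noisy = stim + 0.3 * rng.standard_normal(M)          # corrupted stimulus

sharp = propagate(kernel(0.5), noisy)   # precise topographic projections
broad = propagate(kernel(5.0), noisy)   # diffuse, unstructured projections

def snr(r):
    return r[target] / (np.delete(r, target).mean() + 1e-12)
```

With sharp projections the stimulated module is amplified and the rest suppressed, reproducing the inhibition-dominated regime in miniature; with diffuse projections the representation degrades.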
A universal probabilistic spike count model reveals ongoing modulation of neural variability in head direction cell activity in mice
Neural responses are variable: even under identical experimental conditions, single neuron and population responses typically differ from trial to trial and across time. Recent work has demonstrated that this variability has predictable structure, can be modulated by sensory input and behaviour, and bears critical signatures of the underlying network dynamics and computations. However, current methods for characterising neural variability are primarily geared towards sensory coding in the laboratory: they require trials with repeatable experimental stimuli and behavioural covariates. In addition, they make strong assumptions about the parametric form of variability, rely on assumption-free but data-inefficient histogram-based approaches, or are altogether ill-suited for capturing variability modulation by covariates. Here we present a universal probabilistic spike count model that eliminates these shortcomings. Our method uses scalable Bayesian machine learning techniques to model arbitrary spike count distributions (SCDs) with flexible dependence on observed as well as latent covariates. Without requiring repeatable trials, it can flexibly capture covariate-dependent joint SCDs, and provide interpretable latent causes underlying the statistical dependencies between neurons. We apply the model to recordings from a canonical non-sensory neural population: head direction cells in the mouse. We find that variability in these cells defies a simple parametric relationship with mean spike count as assumed in standard models, its modulation by external covariates can be comparably strong to that of the mean firing rate, and slow low-dimensional latent factors explain away neural correlations. Our approach paves the way to understanding the mechanisms and computations underlying neural variability under naturalistic conditions, beyond the realm of sensory coding with repeatable stimuli.
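The cost of assuming a fixed parametric form can be shown with a minimal scipy illustration (not the scalable Bayesian model from the talk): for overdispersed counts, a Poisson fit, which forces variance to equal the mean, is decisively outperformed by a distribution with a free dispersion parameter.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Simulated overdispersed spike counts (Fano factor > 1), as often seen in vivo.
counts = stats.nbinom.rvs(n=2, p=0.3, size=1000, random_state=rng)

m, v = counts.mean(), counts.var()

# Poisson MLE: a single rate parameter forces variance == mean.
ll_pois = stats.poisson.logpmf(counts, mu=m).sum()

# Negative binomial by moment matching: an extra dispersion parameter.
p_hat = m / v
n_hat = m * p_hat / (1 - p_hat)
ll_nb = stats.nbinom.logpmf(counts, n=n_hat, p=p_hat).sum()
```

The log-likelihood gap grows with the overdispersion; the model in the talk goes further by letting the full count distribution depend flexibly on observed and latent covariates.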
The diachronic account of attentional selectivity
Many models of attention assume that attentional selection takes place at a specific moment in time which demarcates the critical transition from pre-attentive to attentive processing of sensory input. We argue that this intuitively appealing account is not only inaccurate, but has led to substantial conceptual confusion (to the point where some attention researchers propose abandoning the term ‘attention’ altogether). As an alternative, we offer a “diachronic” framework that describes attentional selectivity as a process that unfolds over time. Key to this view is the concept of attentional episodes, brief periods of intense attentional amplification of sensory representations that regulate access to working memory and response-related processes. We describe how attentional episodes are linked to earlier attentional mechanisms and to recurrent processing at the neural level. We present data showing that multiple sequential events can be involuntarily encoded in working memory when they appear during the same attentional episode, whether they are relevant or not. We also discuss the costs associated with processing multiple events within a single episode. Finally, we argue that breaking down the dichotomy between pre-attentive and attentive (as well as early vs. late selection) offers new solutions to old problems in attention research that have never been resolved. It can provide a unified and conceptually coherent account of the network of cognitive and neural processes that produce the goal-directed selectivity in perceptual processing that is commonly referred to as “attention”.
- CANCELLED -
A recent formulation of predictive coding theory proposes that a subset of neurons in each cortical area encodes sensory prediction errors, the difference between predictions relayed from higher cortex and the sensory input. Here, we test for evidence of prediction error responses in spiking responses and local field potentials (LFP) recorded in primary visual cortex and area V4 of macaque monkeys, and in complementary electroencephalographic (EEG) scalp recordings in human participants. We presented a fixed sequence of visual stimuli on most trials, and violated the expected ordering on a small subset of trials. Under predictive coding theory, pattern-violating stimuli should trigger robust prediction errors, but we found that spiking, LFP and EEG responses to expected and pattern-violating stimuli were nearly identical. Our results challenge the assertion that a fundamental computational motif in sensory cortex is to signal prediction errors, at least those based on predictions derived from temporal patterns of visual stimulation.
Do you hear what I see: Auditory motion processing in blind individuals
Perception of object motion is fundamentally multisensory, yet little is known about similarities and differences in the computations that give rise to our experience across senses. Insight can be provided by examining auditory motion processing in early blind individuals. In those who become blind early in life, the ‘visual’ motion area hMT+ responds to auditory motion. Meanwhile, the planum temporale, associated with auditory motion in sighted individuals, shows reduced selectivity for auditory motion, suggesting competition between cortical areas for a functional role. According to the metamodal hypothesis of cross-modal plasticity developed by Pascual-Leone, the recruitment of hMT+ is driven by it being a metamodal structure containing “operators that execute a given function or computation regardless of sensory input modality”. Thus, the metamodal hypothesis predicts that the computations underlying auditory motion processing in early blind individuals should be analogous to visual motion processing in sighted individuals - relying on non-separable spatiotemporal filters. Inconsistent with the metamodal hypothesis, evidence suggests that the computational algorithms underlying auditory motion processing in early blind individuals fail to undergo a qualitative shift as a result of cross-modal plasticity. Auditory motion filters, in both blind and sighted subjects, are separable in space and time, suggesting that the recruitment of hMT+ to extract motion information from auditory input includes a significant modification of its normal computational operations.
Neural dynamics of probabilistic information processing in humans and recurrent neural networks
In nature, sensory inputs are often highly structured, and statistical regularities of these signals can be extracted to form expectations about future sensorimotor associations, thereby optimizing behavior. One of the fundamental questions in neuroscience concerns the neural computations that underlie this probabilistic sensorimotor processing. Combining a recurrent neural network (RNN) model with human psychophysics and electroencephalography (EEG), the present study investigates circuit mechanisms for processing probabilistic structures of sensory signals to guide behavior. We first constructed and trained a biophysically constrained RNN model to perform a series of probabilistic decision-making tasks similar to paradigms designed for humans. Specifically, the training environment was probabilistic such that one stimulus was more probable than the others. We show that both humans and the RNN model successfully extract information about stimulus probability and integrate this knowledge into their decisions and task strategy in a new environment. Specifically, performance of both humans and the RNN model varied with the degree to which the stimulus probability of the new environment matched the formed expectation. In both cases, this expectation effect was more prominent when the strength of sensory evidence was low, suggesting that like humans, our RNNs placed more emphasis on prior expectation (top-down signals) when the available sensory information (bottom-up signals) was limited, thereby optimizing task performance. Finally, by dissecting the trained RNN model, we demonstrate how competitive inhibition and recurrent excitation form the basis for neural circuitry optimized to perform probabilistic information processing.
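The reported interaction between expectation and evidence strength is exactly what Bayesian cue combination predicts, as a small worked example shows (the numbers below are illustrative, not fitted to the human or RNN data):

```python
import numpy as np

# Posterior over two stimuli given a learned prior and sensory likelihoods.
prior = np.array([0.7, 0.3])                # stimulus 0 was more probable in training

def posterior(lik):
    p = prior * lik
    return p / p.sum()

weak = np.array([0.45, 0.55])               # low sensory evidence for stimulus 1
strong = np.array([0.05, 0.95])             # high sensory evidence for stimulus 1

# How far the prior pulls the decision away from the evidence alone:
shift_weak = weak[1] - posterior(weak)[1]
shift_strong = strong[1] - posterior(strong)[1]
```

The prior-induced shift is several times larger under weak evidence, matching the observation that both humans and the RNN leaned on expectation most when bottom-up signals were limited.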
Themes and Variations: Circuit mechanisms of behavioral evolution
Animals exhibit extraordinary variation in their behavior, yet little is known about the neural mechanisms that generate this diversity. My lab has been taking advantage of the rapid diversification of male courtship behaviors in Drosophila to glean insight into how evolution shapes the nervous system to generate species-specific behaviors. By translating neurogenetic tools from D. melanogaster to closely related Drosophila species, we have begun to directly compare the homologous neural circuits and pinpoint sites of adaptive change. Across species, P1 neurons serve as a conserved node in regulating male courtship: these neurons are selectively activated by the sensory cues indicative of an appropriate mate and their activation triggers enduring courtship displays. We have been examining how different sensory pathways converge onto P1 neurons to regulate a male’s state of arousal, honing his pursuit of a prospective partner. Moreover, by performing cross-species comparison of these circuits, we have begun to gain insight into how reweighting of sensory inputs to P1 neurons underlies species-specific mate recognition. Our results suggest how variation at flexible nodes within the nervous system can serve as a substrate for behavioral evolution, shedding light on the types of changes that are possible and preferable within brain circuits.
The Challenge and Opportunities of Mapping Cortical Layer Activity and Connectivity with fMRI
In this talk I outline the technical challenges and current solutions to layer fMRI. Specifically, I describe our acquisition strategies for maximizing resolution, spatial coverage, time efficiency as well as, perhaps most importantly, vascular specificity. Novel applications from our group, including mapping feedforward and feedback connections to M1 during task and sensory input modulation, and to S1 during a sensory prediction task, are shown. Layer-specific activity in dorsal lateral prefrontal cortex during a working memory task is also demonstrated. Additionally, I’ll show preliminary work on mapping whole brain layer-specific resting state connectivity and hierarchy.
Molecular, receptor, and neural bases for chemosensory-mediated sexual and social behavior in mice
For many animals, the sense of olfaction plays a major role in controlling sexual behaviors. Olfaction helps animals to detect mates, discriminate their status, and ultimately, decide on their behavioral output such as courtship behavior or aggression. Specific pheromone cues and receptors have provided a useful model to study how sensory inputs are converted into certain behavioral outputs. With the aid of recent advances in tools to record and manipulate genetically defined neurons, our understanding of the neural basis of sexual and social behavior has expanded substantially. I will discuss the current understanding of the neural processing of sex pheromones and the neural circuitry which controls sexual and social behaviors and ultimately reproduction, by focusing on rodent studies, mainly in mice, and the vomeronasal sensory system.
Estimation of current and future physiological states in insular cortex
Interoception, the sense of internal bodily signals, is essential for physiological homeostasis, cognition, and emotions. While human insular cortex (InsCtx) is implicated in interoception, the cellular and circuit mechanisms remain unclear. I will describe our recent work imaging mouse InsCtx neurons during two physiological deficiency states – hunger and thirst. InsCtx ongoing activity patterns reliably tracked the gradual return to homeostasis, but not changes in behavior. Accordingly, while artificial induction of hunger/thirst in sated mice via activation of specific hypothalamic neurons (AgRP/SFOGLUT) restored cue-evoked food/water-seeking, InsCtx ongoing activity continued to reflect physiological satiety. During natural hunger/thirst, food/water cues rapidly and transiently shifted InsCtx population activity to the future satiety-related pattern. During artificial hunger/thirst, food/water cues further shifted activity beyond the current satiety-related pattern. Together with circuit-mapping experiments, these findings suggest that InsCtx integrates visceral-sensory inputs regarding current physiological state with hypothalamus-gated amygdala inputs signaling upcoming ingestion of food/water, to compute a prediction of future physiological state.
Prof. Humphries reads from "The Spike" 📖
We see the last cookie in the box and think, can I take that? We reach a hand out. In the 2.1 seconds that this impulse travels through our brain, billions of neurons communicate with one another, sending blips of voltage through our sensory and motor regions. Neuroscientists call these blips “spikes.” Spikes enable us to do everything: talk, eat, run, see, plan, and decide. In The Spike, Mark Humphries takes readers on the epic journey of a spike through a single, brief reaction. In vivid language, Humphries tells the story of what happens in our brain, what we know about spikes, and what we still have left to understand about them. Drawing on decades of research in neuroscience, Humphries explores how spikes are born, how they are transmitted, and how they lead us to action. He dives into previously unanswered mysteries: Why are most neurons silent? What causes neurons to fire spikes spontaneously, without input from other neurons or the outside world? Why do most spikes fail to reach any destination? Humphries presents a new vision of the brain, one where fundamental computations are carried out by spontaneous spikes that predict what will happen in the world, helping us to perceive, decide, and react quickly enough for our survival. Traversing neuroscience’s expansive terrain, The Spike follows a single electrical response to illuminate how our extraordinary brains work.
A theory for Hebbian learning in recurrent E-I networks
The Stabilized Supralinear Network is a model of recurrently connected excitatory (E) and inhibitory (I) neurons with a supralinear input-output relation. It can explain cortical computations such as response normalization and inhibitory stabilization. However, the network's connectivity is designed by hand, based on experimental measurements. How the recurrent synaptic weights can be learned from the sensory input statistics in a biologically plausible way is unknown. Earlier theoretical work on plasticity focused on single neurons and the balance of excitation and inhibition but did not consider the simultaneous plasticity of recurrent synapses and the formation of receptive fields. Here we present a recurrent E-I network model where all synaptic connections are simultaneously plastic, and E neurons self-stabilize by recruiting co-tuned inhibition. Motivated by experimental results, we employ a local Hebbian plasticity rule with multiplicative normalization for E and I synapses. We develop a theoretical framework that explains how plasticity enables inhibition-balanced excitatory receptive fields that match experimental results. We show analytically that sufficiently strong inhibition allows neurons' receptive fields to decorrelate and distribute themselves across the stimulus space. For strong recurrent excitation, the network becomes stabilized by inhibition, which prevents unconstrained self-excitation. In this regime, external inputs integrate sublinearly. As in the Stabilized Supralinear Network, this results in response normalization and winner-takes-all dynamics: when two competing stimuli are presented, the network response is dominated by the stronger stimulus while the weaker stimulus is suppressed. In summary, we present a biologically plausible theoretical framework to model plasticity in fully plastic recurrent E-I networks. While the connectivity is derived from the sensory input statistics, the circuit performs meaningful computations.
Our work provides a mathematical framework of plasticity in recurrent networks, which has previously only been studied numerically and can serve as the basis for a new generation of brain-inspired unsupervised machine learning algorithms.
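The core ingredient, Hebbian plasticity with multiplicative normalization driving E and I synapses toward co-tuning, can be sketched for a single postsynaptic neuron (a minimal illustration with arbitrary rates and norms, not the full recurrent network analysed above):

```python
import numpy as np

# Minimal sketch: Hebbian learning with multiplicative normalization on
# excitatory (w_e) and inhibitory (w_i) synapses onto one rectified unit.
rng = np.random.default_rng(3)
N, K, trials, eta = 20, 5, 500, 0.05

# K Gaussian-bump input patterns over N input channels (ring topology).
centers = np.linspace(0, N, K, endpoint=False)
d = np.abs(np.arange(N)[None, :] - centers[:, None])
d = np.minimum(d, N - d)
patterns = np.exp(-d**2 / 8.0)

w_e = rng.random(N); w_e /= np.linalg.norm(w_e)
w_i = rng.random(N); w_i = 0.8 * w_i / np.linalg.norm(w_i)
cos_init = w_e @ w_i / (np.linalg.norm(w_e) * np.linalg.norm(w_i))

for _ in range(trials):
    x = patterns[rng.integers(K)] + 0.1 * rng.standard_normal(N)
    y = max(w_e @ x - w_i @ x, 0.0)           # rectified E minus I drive
    if y > 0:
        w_e += eta * y * x                     # Hebbian potentiation (E)
        w_i += eta * y * x                     # Hebbian potentiation (I)
        w_e /= np.linalg.norm(w_e)             # multiplicative normalization
        w_i = 0.8 * w_i / np.linalg.norm(w_i)  # weaker total inhibition

cos_final = w_e @ w_i / (np.linalg.norm(w_e) * np.linalg.norm(w_i))
```

Because both synapse types are potentiated by the same postsynaptic activity and only their total strengths are constrained, the inhibitory weights align with the excitatory receptive field, yielding the co-tuned, self-stabilizing configuration the theory describes.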
Co-tuned, balanced excitation and inhibition in olfactory memory networks
Odor memories are exceptionally robust and essential for the survival of many species. In rodents, the olfactory cortex shows features of an autoassociative memory network and plays a key role in the retrieval of olfactory memories (Meissner-Bernard et al., 2019). Interestingly, the telencephalic area Dp, the zebrafish homolog of olfactory cortex, transiently enters a state of precise balance during the presentation of an odor (Rupprecht and Friedrich, 2018). This state is characterized by large synaptic conductances (relative to the resting conductance) and by co-tuning of excitation and inhibition in odor space and in time at the level of individual neurons. Our aim is to understand how this precise synaptic balance affects memory function. For this purpose, we build a simplified, yet biologically plausible spiking neural network model of Dp using experimental observations as constraints: besides precise balance, key features of Dp dynamics include low firing rates, odor-specific population activity and a dominance of recurrent inputs from Dp neurons relative to afferent inputs from neurons in the olfactory bulb. To achieve co-tuning of excitation and inhibition, we introduce structured connectivity by increasing connection probabilities and/or strength among ensembles of excitatory and inhibitory neurons. These ensembles are therefore structural memories of activity patterns representing specific odors. They form functional inhibitory-stabilized subnetworks, as identified by the “paradoxical effect” signature (Tsodyks et al., 1997): inhibition of inhibitory “memory” neurons leads to an increase of their activity. We investigate the benefits of co-tuning for olfactory and memory processing, by comparing inhibitory-stabilized networks with and without co-tuning. We find that co-tuned excitation and inhibition improves robustness to noise, pattern completion and pattern separation. 
In other words, retrieval of stored information from partial or degraded sensory inputs is enhanced, which is relevant in light of the instability of the olfactory environment. Furthermore, in co-tuned networks, odor-evoked activation of stored patterns does not persist after removal of the stimulus and may therefore subserve fast pattern classification. These findings provide valuable insights into the computations performed by the olfactory cortex, and into general effects of balanced state dynamics in associative memory networks.
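Pattern completion from a degraded cue can be reduced to a toy autoassociative sketch (binary units and disjoint assemblies with illustrative sizes, not the spiking Dp model): within-ensemble excitation plus a firing threshold recovers the full stored pattern from half of it.

```python
import numpy as np

# Toy autoassociative retrieval: N binary units, five disjoint assemblies.
N, ens_size, theta = 100, 20, 5
members = np.arange(ens_size)               # assembly storing "odor 0"

# Hebbian-style weights: excitation only within each stored assembly.
W = np.zeros((N, N))
for start in range(0, N, ens_size):
    blk = slice(start, start + ens_size)
    W[blk, blk] = 1.0
np.fill_diagonal(W, 0.0)

# Degraded cue: only half of the assembly is activated.
r = np.zeros(N)
r[members[: ens_size // 2]] = 1.0

for _ in range(3):                          # recurrent retrieval dynamics
    r = (W @ r >= theta).astype(float)      # threshold stands in for inhibition

completed = r[members].mean()               # fraction of assembly recovered
spurious = r[ens_size:].mean()              # activity outside the assembly
```

In this caricature the completed pattern persists indefinitely; the co-tuned inhibition described above is what lets activation decay once the stimulus is removed, favouring fast classification over persistent attractor states.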
A Changing View of Vision: From Molecules to Behavior in Zebrafish
All sensory perception and every coordinated movement, as well as feelings, memories and motivation, arise from the bustling activity of many millions of interconnected cells in the brain. The ultimate function of this elaborate network is to generate behavior. We use zebrafish as our experimental model, employing a diverse array of molecular, genetic, optical, connectomic, behavioral and computational approaches. The goal of our research is to understand how neuronal circuits integrate sensory inputs and internal state and convert this information into behavioral responses.
Circuit mechanisms for synaptic plasticity in the rodent somatosensory cortex
Sensory experience and perceptual learning change receptive field properties of cortical pyramidal neurons, possibly mediated by long-term potentiation (LTP) of synapses. We have previously shown in the mouse somatosensory cortex (S1) that sensory-driven LTP in layer (L) 2/3 pyramidal neurons is dependent on higher order thalamic feedback from the posteromedial nucleus (POm), which is thought to convey contextual information from various cortical regions integrated with sensory input. We have followed up on this work by dissecting the cortical microcircuitry that underlies this form of LTP. We found that repeated pairing of POm thalamocortical and intracortical pathway activity in brain slices induces NMDAr-dependent LTP of the L2/3 synapses that are driven by the intracortical pathway. Repeated pairing also recruits activity of vasoactive intestinal peptide (VIP) interneurons, whereas it reduces the activity of somatostatin (SST) interneurons. VIP interneuron-mediated inhibition of SST interneurons has been established as a motif for the disinhibition of pyramidal neurons. By chemogenetic interrogation we found that activation of this disinhibitory microcircuit motif by higher-order thalamic feedback is indispensable for eliciting LTP. Preliminary results in vivo suggest that VIP neuron activity also increases during sensory-evoked LTP. Together, this suggests that higher-order thalamocortical feedback may help modify the strength of synaptic circuits that process first-order sensory information in S1. To start characterizing the relationship between higher-order feedback and cortical plasticity during learning in vivo, we adapted a perceptual learning paradigm in which head-fixed mice have to discriminate two types of textures in order to obtain a reward.
POm axons or L2/3 pyramidal neurons labeled with the genetically encoded calcium indicator GCaMP6s were imaged during the acquisition of this task as well as the subsequent learning of a new discrimination rule. We found that a subpopulation of the POm axons and L2/3 neurons dynamically represent textures. Moreover, upon a change in reward contingencies, a fraction of the L2/3 neurons re-tune their selectivity to the texture that is newly associated with the reward. Altogether, our data indicates that higher-order thalamic feedback can facilitate synaptic plasticity and may be implicated in dynamic sensory stimulus representations in S1, which depends on higher-order features that are associated with the stimuli.
Mechanisms of cortical communication during decision-making
Regulation of information flow in the brain is critical for many forms of behavior. In the process of sensory based decision-making, decisions about future actions are held in memory until enacted, making them potentially vulnerable to distracting sensory input. Therefore, gating of information flow from sensory to motor areas could protect memory from interference during decision-making, but the underlying network mechanisms are not understood. I will present our recent experimental and modeling work describing how information flow from the sensory cortex can be gated by state-dependent frontal cortex dynamics during decision-making in mice. Our results show that communication between brain regions can be regulated via attractor dynamics, which control the degree of commitment to an action, and reveal a novel mechanism of gating of neural information.
Sensory and metasensory responses during sequence learning in the mouse somatosensory cortex
Sequential temporal ordering and patterning are key features of natural signals, used by the brain to decode stimuli and perceive them as sensory objects. Touch is one sensory modality where temporal patterning carries key information, and the rodent whisker system is a prominent model for understanding neuronal coding and plasticity underlying touch sensation. Neurons in this system are precise encoders of fluctuations in whisker dynamics down to a timescale of milliseconds, but it is not clear whether they can refine their encoding abilities as a result of learning patterned stimuli. For example, can they enhance temporal integration to become better at distinguishing sequences? To explore how cortical coding plasticity underpins sequence discrimination, we developed a task in which mice distinguished between tactile ‘word’ sequences constructed from distinct vibrations delivered to the whiskers, assembled in different orders. Animals licked to report the presence of the target sequence. Optogenetic inactivation showed that the somatosensory cortex was necessary for sequence discrimination. Two-photon imaging in layer 2/3 of the primary somatosensory “barrel” cortex (S1bf) revealed that, in well-trained animals, neurons had heterogeneous selectivity to multiple task variables including not just sensory input but also the animal’s action decision and the trial outcome (presence or absence of the predicted reward). Many neurons were activated preceding goal-directed licking, thus reflecting the animal’s learnt action in response to the target sequence; these neurons were found as soon as mice learned to associate the rewarded sequence with licking. In contrast, learning evoked smaller changes in sensory response tuning: neurons responding to stimulus features were already found in naïve mice, and training did not generate neurons with enhanced temporal integration or categorical responses. 
Therefore, in S1bf sequence learning results in neurons whose activity reflects the learnt association between target sequence and licking, rather than a refined representation of sensory features. Taken together with results from other laboratories, our findings suggest that neurons in sensory cortex are involved in task-specific processing and that an animal does not sense the world independently of what it needs to feel in order to guide behaviour.
Nature, nurture and synaptic adhesion in between
Exposure to an appropriate environment during early development is essential for brain maturation. Impaired sensory input or abnormal experiences can have long-term negative consequences for brain health. We seek to define the precise synaptic aberrations caused by abnormal visual experience early in life, and how these can be remedied through viral, genetic and environmental approaches. The resulting knowledge will contribute to the development of new approaches to mitigate nervous system damage caused by abnormal early-life experience.
Predictive processing in the macaque frontal cortex during time estimation
According to the theory of predictive processing, expectations modulate neural activity so as to optimize the processing of sensory inputs expected in the current environment. While there is accumulating evidence that the brain indeed operates under this principle, most of the attention has been placed on mechanisms that rely on static coding properties of neurons. The potential contribution of dynamical features, such as those reflected in the evolution of neural population dynamics, has thus far been overlooked. In this talk, I will present evidence for a novel mechanism for predictive processing in the temporal domain which relies on neural population dynamics. I will use recordings from the frontal cortex of macaques trained on a time interval reproduction task and show how neural dynamics can be directly related to animals’ temporal expectations, both in a stationary environment and during learning.
Cortical estimation of current and future bodily states
Interoception, the sense of internal bodily signals, is essential for physiological homeostasis, cognition, and emotions. Human neuroimaging studies suggest that the insular cortex plays a central role in interoception, yet the cellular and circuit mechanisms of its involvement remain unclear. We developed a microprism-based cellular imaging approach to monitor insular cortex activity in behaving mice across different physiological need states. We combine this imaging approach with manipulations of peripheral physiology, circuit mapping, and cell type- and circuit-specific manipulations to investigate the underlying circuit mechanisms. I will present our recent data investigating insular cortex activity during two physiological need states – hunger and thirst. These were induced naturally by caloric/fluid deficiency, or artificially by activation of specific hypothalamic “hunger neurons” and “thirst neurons”. We found that ongoing insular cortex activity faithfully represents current physiological state, independently of behavior or arousal levels. In contrast, transient responses to learned food- or water-predicting cues reflect a population-level “simulation” of future predicted satiety. Together with additional circuit-mapping and manipulation experiments, our findings suggest that the insular cortex integrates visceral-sensory inputs regarding current physiological state with hypothalamus-gated amygdala inputs signaling the availability of food or water. In this way, the insular cortex computes a prediction of future physiological state that can be used to guide behavioral choice.
An evolutionarily conserved hindwing circuit mediates Drosophila flight control
My research at the interface of neurobiology, biomechanics, and behavior seeks to understand how the timing precision of sensory input structures locomotor output. My lab studies the flight behavior of the fruit fly, Drosophila melanogaster, combining the powerful genetic tools available for labeling and manipulating neural circuits with cutting-edge imaging in awake, behaving animals. This work has the potential to fundamentally reshape our understanding of the evolution of insect flight, as well as to highlight the tremendous importance of timing in the context of locomotion. Timing is crucial to the nervous system: the ability to rapidly detect and process subtle disturbances in the environment determines whether an animal can attain its next meal or successfully navigate complex, unpredictable terrain. While previous work on various animals has made tremendous strides in uncovering the specialized neural circuits used to resolve timing differences with sub-microsecond resolution, it has focused on the detection of timing differences in sensory systems; how the timing of motor output is structured by precise sensory input remains poorly understood. My research focuses on an organ unique to flies, called the haltere, that serves as a bridge for detecting and acting on subtle timing differences, helping flies execute rapid maneuvers. Understanding how this relatively simple insect can perform such impressive aerial feats demands an integrative approach that combines physics, muscle mechanics, neuroscience, and behavior. This unique, powerful approach will reveal the general principles that govern sensorimotor processing.
A Rare Visuospatial Disorder
Cases with visuospatial abnormalities provide opportunities for understanding the underlying cognitive mechanisms. Three cases of visual mirror-reversal have been reported: AH (McCloskey, 2009), TM (McCloskey, Valtonen, & Sherman, 2006) and PR (Pflugshaupt et al., 2007). This research reports a fourth case, BS -- with focal occipital cortical dysgenesis -- who displays highly unusual visuospatial abnormalities. BS initially produced mirror-reversal errors similar to those of AH, who -- like BS -- showed a selective developmental deficit. Extensive examination of BS revealed phenomena such as: mirror-reversal errors (sometimes affecting only parts of the visual fields) in both horizontal and vertical planes; subjective representation of visual objects and words in distinct left and right visual fields; subjective duplication of objects of visual attention (not due to diplopia); uncertainty regarding the canonical upright orientation of everyday objects; mirror reversals during saccadic eye movements on oculomotor tasks; and failure to integrate visual with other sensory inputs (e.g., feeling themself moving backwards when visual information shows they are moving forward). Fewer errors are produced under certain visual conditions. These and other findings have led the researchers to conclude that BS draws upon a subjective representation of visual space that is structured phenomenally much as it is anatomically in early visual cortex (i.e., rotated through 180 degrees, split into left and right fields, etc.). Despite this, BS functions remarkably well in everyday life, apparently owing to extensive compensatory mechanisms deployed at higher (executive) processing levels beyond the visual modality.
Cholinergic regulation of learning in the olfactory system
In the olfactory system, cholinergic modulation has been associated with contrast modulation and changes in receptive fields in the olfactory bulb, as well as with the learning of odor associations in the olfactory cortex. Computational modeling and behavioral studies suggest that cholinergic modulation could improve sensory processing and learning while preventing proactive interference when task demands are high. However, how sensory inputs and/or learning regulate incoming modulation has not yet been elucidated. Here we use a computational model of the olfactory bulb, piriform cortex (PC) and horizontal limb of the diagonal band of Broca (HDB) to explore how olfactory learning could regulate cholinergic inputs to the system in a closed feedback loop. In our model, the novelty of an odor is reflected in the firing rates and sparseness of cortical neurons responding to that odor, and these firing rates in turn regulate learning by modifying cholinergic inputs to the system.
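The closed-loop idea described above can be caricatured in a few lines. The sketch below is not the authors' model: the Treves-Rolls sparseness measure, the rectified-linear cortical response, and the ACh-gated Hebbian rule are all illustrative assumptions, chosen only to show how dense (novel) cortical responses could drive cholinergic input, which in turn gates plasticity.

```python
import numpy as np

rng = np.random.default_rng(0)

n_odors, n_input, n_cortex = 3, 50, 100
odors = rng.random((n_odors, n_input))      # fixed odor input patterns ("bulb" output)
W = rng.random((n_cortex, n_input)) * 0.1   # bulb-to-cortex weights

def density(r):
    """Treves-Rolls measure: near 1 for dense, unspecific firing; lower for sparse firing."""
    return (r.mean() ** 2) / (r ** 2).mean()

for trial in range(60):
    odor = odors[trial % n_odors]
    r = np.maximum(W @ odor - 0.5, 0.0)     # rectified cortical response
    novelty = density(r + 1e-9)             # novel odor -> dense response -> high "novelty"
    ach = novelty                           # cholinergic (HDB-like) drive tracks novelty
    # ACh-gated Hebbian learning: plasticity is strong only while the odor is novel
    active = r > 0
    W[active] += 0.05 * ach * np.outer(r[active], odor)
    W = np.clip(W, 0.0, 1.0)                # keep weights bounded
```

The key design point is that cholinergic drive is not an external parameter but a function of the cortical response itself, closing the feedback loop between learning and modulation that the abstract describes.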
Theme and variations: circuit mechanisms of behavioural evolution
Animals exhibit extraordinary variation in their behaviour, yet little is known about the neural mechanisms that generate this diversity. My lab has been taking advantage of the rapid diversification of male courtship behaviours in Drosophila to gain insight into how evolution shapes the nervous system to generate species-specific behaviours. By translating neurogenetic tools from D. melanogaster to closely related Drosophila species, we have begun to directly compare homologous neural circuits and pinpoint sites of adaptive change. Across species, P1 interneurons serve as a conserved and key node in regulating male courtship: these neurons are selectively activated by the sensory cues carried by an appropriate mate, and their activation triggers enduring courtship displays. We have been examining how different sensory pathways converge onto P1 neurons to regulate a male’s state of arousal, honing his pursuit of a prospective partner. Moreover, by performing cross-species comparisons of these circuits, we have begun to gain insight into how reweighting of sensory inputs to P1 neurons underlies species-specific mate recognition. Our results suggest how variation at flexible nodes within the nervous system can serve as a substrate for behavioural evolution, shedding light on the types of changes that are possible and permissible within brain circuits.
Adaptive plasticity in adult brain circuitry during naturally occurring regeneration of sensory inputs
FENS Forum 2024
Astrocytes act as detectors of sensory input and calcium-dependent regulators of experience-dependent plasticity in cortical networks
Experience-dependent modulation of sensory inputs in the postpartum hypothalamus for infant-directed motor actions
A neural substrate for encoding the probability of sensory inputs
Visual thalamus adaptive response to imbalanced sensory input: Decrypting molecular mechanisms in amblyopia