Incoming Stimuli
Invariant neural subspaces maintained by feedback modulation
Sensory systems reliably process incoming stimuli in spite of changes in context. Most recent models attribute this context invariance to the extraction of increasingly complex sensory features in hierarchical feedforward networks. Here, we study how context-invariant representations can be established by feedback rather than feedforward processing. We show that feedforward neural networks modulated by feedback can dynamically generate invariant sensory representations. The required feedback can be implemented as a slow and spatially diffuse gain modulation. The invariance is not present at the level of individual neurons, but emerges only at the population level. Mechanistically, the feedback modulation dynamically reorients the manifold of neural activity and thereby maintains an invariant neural subspace in spite of contextual variations. Our results highlight the importance of population-level analyses for understanding the role of feedback in flexible sensory processing.
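The core idea can be caricatured in a few lines. The sketch below is a hypothetical simplification, not the study's actual network: context scales the feedforward drive by a single unknown gain, and a slow, spatially diffuse feedback signal applies one compensating scalar gain, so that a fixed linear readout of the population stays invariant. The weights `W`, the decoder `readout`, and the gain values are all illustrative assumptions.

```python
import numpy as np

# Toy sketch (illustrative assumptions, not the authors' model):
# a feedforward population whose drive is scaled by an unknown context
# gain, plus a slow, spatially diffuse (here: scalar) feedback gain that
# compensates, so a fixed downstream readout stays invariant.

rng = np.random.default_rng(0)
W = rng.normal(size=(50, 2))         # feedforward weights: 2-D stimulus -> 50 neurons
readout = rng.normal(size=(2, 50))   # fixed linear decoder downstream

def response(stim, context_gain, feedback_gain):
    """Population rates under multiplicative context and feedback gains."""
    return feedback_gain * (context_gain * (W @ stim))

stim = np.array([1.0, -0.5])
decoded = []
for context_gain in (0.5, 1.0, 2.0):
    feedback_gain = 1.0 / context_gain   # feedback estimated to cancel the context
    decoded.append(readout @ response(stim, context_gain, feedback_gain))

# the decoded stimulus is identical across all three contexts
assert np.allclose(decoded[0], decoded[1]) and np.allclose(decoded[1], decoded[2])
```

In this scalar-gain toy the compensation is trivially exact for every neuron; the abstract's richer claim is that with realistic contextual transformations, invariance holds only in a population-level subspace, not in single-neuron responses.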
Heartbeat-based auditory regularities induce prediction in human wakefulness and sleep
Exposure to sensory regularities in the environment induces the human brain to form expectations about incoming stimuli, an ability that remains partially preserved in the absence of consciousness (e.g. coma and sleep). While regularity often refers to stimuli presented at a fixed pace, we recently explored whether auditory prediction extends to pseudo-regular sequences, in which sensory prediction is induced by locking sound onsets to heartbeat signals, and whether it can occur across vigilance states. In a series of experiments in healthy volunteers, we found neural and cardiac evidence of auditory prediction during heartbeat-based auditory regularities in wakefulness and N2 sleep. This process could represent an important mechanism for detecting unexpected stimuli in the environment, even in states of limited consciousness and attentional resources.
Design principles of adaptable neural codes
Behavior relies on the ability of sensory systems to infer changing properties of the environment from incoming sensory stimuli. However, the demands that detecting and adjusting to changes in the environment place on a sensory system often differ from the demands associated with performing a specific behavioral task. This necessitates neural coding strategies that can dynamically balance these conflicting needs. I will discuss our ongoing theoretical work to understand how this balance can best be achieved. We connect ideas from efficient coding and Bayesian inference to ask how sensory systems should dynamically allocate limited resources when the goal is to optimally infer changing latent states of the environment, rather than reconstruct incoming stimuli. We use these ideas to explore dynamic tradeoffs between the efficiency and speed of sensory adaptation schemes, and the downstream computations that these schemes might support. Finally, we derive families of codes that balance these competing objectives, and we demonstrate their close match to experimentally observed neural dynamics during sensory adaptation. These results provide a unifying perspective on adaptive neural dynamics across a range of sensory systems, environments, and sensory tasks.
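A minimal instance of the efficiency-versus-speed tradeoff described above can be sketched with divisive gain control that tracks stimulus variance. This is a generic, textbook-style illustration under my own assumptions (the function name, the learning rate `alpha`, and the variance-switching stimulus are invented), not the speaker's derived family of codes: a small `alpha` gives a stable, efficient code but slow re-adaptation after an environmental change, while a large `alpha` adapts quickly at the cost of a noisier gain estimate.

```python
import numpy as np

# Hypothetical sketch of one classic adaptable code: divisive gain control
# that normalizes each sample by a running estimate of stimulus variance.
# `alpha` sets the adaptation speed; names/values are illustrative.

def adaptive_encode(x, alpha=0.05, sigma2_init=1.0):
    """Encode samples with a gain tracking the stimulus variance online."""
    sigma2 = sigma2_init
    out = np.empty_like(x)
    for t, xt in enumerate(x):
        out[t] = xt / np.sqrt(sigma2)          # encode with the current gain
        sigma2 += alpha * (xt**2 - sigma2)     # slowly update the variance estimate
    return out

rng = np.random.default_rng(1)
# environment switches from low- to high-variance statistics halfway through
x = np.concatenate([rng.normal(0, 1.0, 2000), rng.normal(0, 4.0, 2000)])
y = adaptive_encode(x, alpha=0.05)

# once adapted, output variance returns near 1 despite the 16x change in
# input variance; larger alpha shortens the transient but adds gain noise
print(np.var(y[1500:2000]), np.var(y[3500:4000]))
```

The speed/efficiency tension is visible directly: the transient right after the switch (where the gain is still miscalibrated) shrinks as `alpha` grows, while the steady-state output variance drifts further from 1 due to a noisier variance estimate.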
Learning from unexpected events in the neocortical microcircuit
Predictive learning hypotheses posit that the neocortex learns a hierarchical model of the structure of features in the environment. Under these hypotheses, expected or predictable features are differentiated from unexpected ones by comparing bottom-up and top-down streams of data, with unexpected features then driving changes in the representation of incoming stimuli. This is supported by numerous studies in early sensory cortices showing that pyramidal neurons respond particularly strongly to unexpected stimulus events. However, it remains unknown how their responses govern subsequent changes in stimulus representations, and thus, govern learning. Here, I present results from our study of layer 2/3 and layer 5 pyramidal neurons imaged in primary visual cortex of awake, behaving mice using two-photon calcium imaging at both the somatic and distal apical planes. Our data reveal that individual neurons and distal apical dendrites show distinct, but predictable changes in unexpected event responses when tracked over several days. Considering existing evidence that bottom-up information is primarily targeted to somata, with distal apical dendrites receiving the bulk of top-down inputs, our findings corroborate hypothesized complementary roles for these two neuronal compartments in hierarchical computing. Altogether, our work provides novel evidence that the neocortex indeed instantiates a predictive hierarchical model in which unexpected events drive learning.
Expectation of self-generated sounds drives predictive processing in mouse auditory cortex
Sensory stimuli are often predictable consequences of one’s actions, and behavior exerts a correspondingly strong influence over sensory responses in the brain. Closed-loop experiments with the ability to control the sensory outcomes of specific animal behaviors have revealed that neural responses to self-generated sounds are suppressed in the auditory cortex, suggesting a role for prediction in local sensory processing. However, it is unclear whether this phenomenon derives from a precise movement-based prediction or how it affects the neural representation of incoming stimuli. We address these questions by designing a behavioral paradigm where mice learn to expect the predictable acoustic consequences of a simple forelimb movement. Neuronal recordings from auditory cortex revealed suppression of neural responses that was strongest for the expected tone and specific to the time of the sound-associated movement. Predictive suppression in the auditory cortex was layer-specific, preceded by the arrival of movement information, and unaffected by behavioral relevance or reward association. These findings illustrate that expectation, learned through motor-sensory experience, drives layer-specific predictive processing in the mouse auditory cortex.