Sensory Coding
Computational Mechanisms of Predictive Processing in Brains and Machines
Predictive processing offers a unifying view of neural computation, proposing that brains continuously anticipate sensory input and update internal models based on prediction errors. In this talk, I will present converging evidence for the computational mechanisms underlying this framework across human neuroscience and deep neural networks. I will begin with recent work showing that large-scale distributed prediction-error encoding in the human brain directly predicts how sensory representations reorganize through predictive learning. I will then turn to PredNet, a popular predictive-coding-inspired deep network that has been widely used to model biological vision systems. Using dynamic stimuli generated with our Spatiotemporal Style Transfer algorithm, we demonstrate that PredNet relies primarily on low-level spatiotemporal structure and remains insensitive to high-level content, revealing limits in its capacity to generalize. Finally, I will discuss new recurrent vision models that integrate top-down feedback connections with intrinsic neural variability, uncovering a dual mechanism for robust sensory coding in which neural variability decorrelates unit responses while top-down feedback stabilizes network dynamics. Together, these results outline how prediction-error signaling and top-down feedback pathways shape adaptive sensory processing in biological and artificial systems.
Ján Antolík
The postdoctoral position is within the Computational Systems Neuroscience Group (CSNG) at Charles University, Prague, focusing on computational neuroscience and neuro-prosthetic system design. The project goals include developing a large-scale model of electrical stimulation in the primary visual cortex for neuro-prosthetic vision restoration, creating and refining models of the primary visual cortex and its electrical stimulation, simulating the impact of external stimulation on cortical activity, developing novel machine learning methods to link simulated cortical activity to expected visual perceptions, and developing stimulation protocols for neuro-prosthetic systems. This project is undertaken as a part of a larger consortium of Czech experimental and theoretical neuroscience teams.
Ján Antolík
A postdoctoral position within the Computational Systems Neuroscience Group (CSNG) at Charles University, Prague, focusing on computational neuroscience and neuro-prosthetic system design. The group explores the intricacies of the visual system, sensory coding, and neuro-prosthetic solutions using computational approaches such as large-scale biologically detailed spiking network models, firing-rate models of development, and modern machine learning techniques. The team is dedicated to understanding visual perception and its restoration via neuro-prosthetic devices. Multiple project topics are available and can be adjusted to the interests and background of the applicant, including modeling electrical stimulation in a spiking model of the primary visual cortex, deep neural networks in visual neuroscience, the study of cortical dynamics in the visual cortex, and biologically detailed large-scale spiking models of the early visual pathway from retina to V4.
Self-perception: mechanosensation and beyond
Brain-organ communication plays a crucial role in maintaining the body's physiological and psychological homeostasis and is controlled by complex neural and hormonal systems, including the internal mechanosensory organs. However, progress has been slow due to technical hurdles: the sensory neurons are buried deep inside the body and are not readily accessible for direct observation; the projection patterns from different organs or body parts are complex rather than converging onto dedicated brain regions; and the coding principles cannot be directly adapted from those learned in conventional sensory pathways. Our lab applies the pipeline of "biophysics of receptors - cell biology of neurons - functionality of neural circuits - animal behaviors" to explore the molecular and neural mechanisms of self-perception. We focus mainly on three questions: (1) the molecular and cellular basis of proprioception and interoception; (2) the circuit mechanisms of sensory coding and the integration of internal and external information; and (3) the function of interoception in regulating behavioral homeostasis.
A Panoramic View on Vision
The statistics of natural scenes are not uniform: their structure varies dramatically from ground to sky. It remains unknown whether these non-uniformities are reflected in the large-scale organization of the early visual system and what benefits such adaptations would confer. Deploying an efficient coding argument, we predict that changes in the structure of receptive fields across visual space increase the efficiency of sensory coding. To test this experimentally, we developed a simple, novel imaging system that is indispensable for studies at this scale. In agreement with our predictions, we show that the receptive fields of retinal ganglion cells change shape along the dorsoventral axis, with a marked surround asymmetry at the visual horizon. Our work demonstrates that, in accordance with the principles of efficient coding, the retina exploits the panoramic structure of natural scenes across visual space and cell types.
Why Some Intelligent Agents are Conscious
In this talk I will present an account of how an agent designed or evolved to be intelligent may come to enjoy subjective experiences. First, the agent is stipulated to be capable of (meta)representing subjective ‘qualitative’ sensory information, in the sense that it can easily assess exactly how similar a sensory signal is to all other possible sensory signals. This information is subjective in the sense that it concerns how the different stimuli can be distinguished by the agent itself, rather than how physically similar they are. For this to happen, sensory coding needs to satisfy sparsity and smoothness constraints, which are known to facilitate metacognition and generalization. Second, this qualitative information can under some specific circumstances acquire an ‘assertoric force’. This happens when a certain self-monitoring mechanism decides that the qualitative information reliably tracks the current state of the world, and informs a general symbolic reasoning system of this fact. I will argue that having subjective conscious experiences amounts to nothing more than qualitative sensory information acquiring an assertoric status within one’s belief system. When this happens, the perceptual content presents itself as reflecting the state of the world right now, in ways that seem undeniably rational to the agent. At the same time, without effort, the agent also knows what the perceptual content is like, in terms of how subjectively similar it is to all other possible percepts. I will discuss the computational benefits of this architecture, for which consciousness might have arisen as a byproduct.
A universal probabilistic spike count model reveals ongoing modulation of neural variability in head direction cell activity in mice
Neural responses are variable: even under identical experimental conditions, single neuron and population responses typically differ from trial to trial and across time. Recent work has demonstrated that this variability has predictable structure, can be modulated by sensory input and behaviour, and bears critical signatures of the underlying network dynamics and computations. However, current methods for characterising neural variability are primarily geared towards sensory coding in the laboratory: they require trials with repeatable experimental stimuli and behavioural covariates. In addition, they either make strong assumptions about the parametric form of variability, rely on assumption-free but data-inefficient histogram-based approaches, or are altogether ill-suited for capturing variability modulation by covariates. Here we present a universal probabilistic spike count model that eliminates these shortcomings. Our method uses scalable Bayesian machine learning techniques to model arbitrary spike count distributions (SCDs) with flexible dependence on observed as well as latent covariates. Without requiring repeatable trials, it can flexibly capture covariate-dependent joint SCDs, and provide interpretable latent causes underlying the statistical dependencies between neurons. We apply the model to recordings from a canonical non-sensory neural population: head direction cells in the mouse. We find that variability in these cells defies a simple parametric relationship with mean spike count as assumed in standard models, that its modulation by external covariates can be comparably strong to that of the mean firing rate, and that slow low-dimensional latent factors explain away neural correlations. Our approach paves the way to understanding the mechanisms and computations underlying neural variability under naturalistic conditions, beyond the realm of sensory coding with repeatable stimuli.
Sensory and metasensory responses during sequence learning in the mouse somatosensory cortex
Sequential temporal ordering and patterning are key features of natural signals, used by the brain to decode stimuli and perceive them as sensory objects. Touch is one sensory modality where temporal patterning carries key information, and the rodent whisker system is a prominent model for understanding neuronal coding and plasticity underlying touch sensation. Neurons in this system are precise encoders of fluctuations in whisker dynamics down to a timescale of milliseconds, but it is not clear whether they can refine their encoding abilities as a result of learning patterned stimuli. For example, can they enhance temporal integration to become better at distinguishing sequences? To explore how cortical coding plasticity underpins sequence discrimination, we developed a task in which mice distinguished between tactile ‘word’ sequences constructed from distinct vibrations delivered to the whiskers, assembled in different orders. Animals licked to report the presence of the target sequence. Optogenetic inactivation showed that the somatosensory cortex was necessary for sequence discrimination. Two-photon imaging in layer 2/3 of the primary somatosensory “barrel” cortex (S1bf) revealed that, in well-trained animals, neurons had heterogeneous selectivity to multiple task variables including not just sensory input but also the animal’s action decision and the trial outcome (presence or absence of the predicted reward). Many neurons were activated preceding goal-directed licking, thus reflecting the animal’s learnt action in response to the target sequence; these neurons were found as soon as mice learned to associate the rewarded sequence with licking. In contrast, learning evoked smaller changes in sensory response tuning: neurons responding to stimulus features were already found in naïve mice, and training did not generate neurons with enhanced temporal integration or categorical responses. 
Therefore, in S1bf, sequence learning results in neurons whose activity reflects the learnt association between the target sequence and licking, rather than a refined representation of sensory features. Taken together with results from other laboratories, our findings suggest that neurons in sensory cortex are involved in task-specific processing and that an animal does not sense the world independently of what it needs to feel in order to guide behaviour.
Microneurography And Microstimulation Of Single Tactile Afferents In The Human Hand
Microneurography is a method, invented by Åke Vallbo and Karl-Erik Hagbarth in the late 1960s, with which we can record the activity of single, identified nerve fibres in awake human participants. In this talk, I will discuss the method, its advantages and limitations, and some of the key discoveries regarding the coding of tactile events in the signalling from receptors in the human skin. An extension of the method is to stimulate single afferents and record the resulting tactile sensations reported by the participants, so-called microstimulation. The first experiments were done in the 1980s, but the method has recently seen a revival and is currently being combined with high-resolution brain imaging to study the relationship between tactile nerve signals, sensations, and the processing of tactile information in the brain.
A robust neural code for human odor in the Aedes aegypti mosquito brain
A globally invasive form of the mosquito Aedes aegypti has evolved to specialize in biting humans, making it an efficient vector of dengue, yellow fever, Zika, and chikungunya. Host-seeking females identify humans primarily by smell, strongly preferring human odor over the odor of non-human animals. Exactly how they discriminate, however, is unclear. Human and animal odors are complex blends that share most of the same chemical components, presenting an interesting challenge in sensory coding. I will talk about recent work from the lab showing that (1) human and animal blends can be distinguished by the relative concentrations of a diverse array of compounds and that (2) these complex chemical differences translate into a neural code for human odor that involves as few as two to three olfactory glomeruli in the mosquito brain. Our work demonstrates how organisms may evolve to discriminate complex odor stimuli of special biological relevance with a surprisingly simple combinatorial code, and reveals novel targets for the design of next-generation mosquito control strategies.
Cortical feedback shapes high-order structure of population activity to improve sensory coding
Bernstein Conference 2024
Homeostatic information transmission as a principle for sensory coding during movement
Bernstein Conference 2024
Information-preserving modulation as a principle of sensory coding during locomotion across species
COSYNE 2025
Conservation of sensory coding in the auditory cortex of mice between wakefulness and sleep
FENS Forum 2024