
Neural Variability

Topic spotlight · World Wide

Discover seminars, jobs, and research tagged with neural variability across World Wide.
18 curated items · 10 ePosters · 8 Seminars
Updated in 3 days
Seminar · Neuroscience

Computational Mechanisms of Predictive Processing in Brains and Machines

Dr. Antonino Greco
Hertie Institute for Clinical Brain Research, Germany
Dec 9, 2025

Predictive processing offers a unifying view of neural computation, proposing that brains continuously anticipate sensory input and update internal models based on prediction errors. In this talk, I will present converging evidence for the computational mechanisms underlying this framework across human neuroscience and deep neural networks. I will begin with recent work showing that large-scale distributed prediction-error encoding in the human brain directly predicts how sensory representations reorganize through predictive learning. I will then turn to PredNet, a popular predictive-coding-inspired deep network that has been widely used to model biological vision. Using dynamic stimuli generated with our Spatiotemporal Style Transfer algorithm, we demonstrate that PredNet relies primarily on low-level spatiotemporal structure and remains insensitive to high-level content, revealing limits in its generalization capacity. Finally, I will discuss new recurrent vision models that integrate top-down feedback connections with intrinsic neural variability, uncovering a dual mechanism for robust sensory coding in which neural variability decorrelates unit responses while top-down feedback stabilizes network dynamics. Together, these results outline how prediction-error signaling and top-down feedback pathways shape adaptive sensory processing in biological and artificial systems.
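
The dual mechanism in the final result invites a toy demonstration. The sketch below is not the speaker's model; it is a minimal recurrent rate network, with every parameter invented for illustration, in which private noise reduces pairwise response correlations while a top-down corrective term pulls the dynamics toward a target state.

```python
# Minimal sketch in NumPy, not the speaker's model: a toy recurrent rate
# network illustrating the dual mechanism above. The network, the target
# "prediction", and every parameter value are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N, T, dt = 50, 2000, 0.01
W = rng.normal(0, 1.5 / np.sqrt(N), (N, N))  # random recurrent weights
target = rng.normal(0, 1, N)                 # stand-in top-down "prediction"

def simulate(noise_sd, fb_gain):
    """Leaky rate dynamics with private noise and top-down corrective feedback."""
    x = rng.normal(0, 0.5, N)                # random initial state
    xs = np.empty((T, N))
    for t in range(T):
        feedback = fb_gain * (target - x)    # top-down feedback toward target
        x = x + dt * (-x + W @ np.tanh(x) + feedback)
        x = x + noise_sd * np.sqrt(dt) * rng.normal(size=N)  # intrinsic variability
        xs[t] = x
    return xs

for noise_sd, fb_gain in [(0.0, 0.0), (0.5, 0.0), (0.5, 1.0)]:
    xs = simulate(noise_sd, fb_gain)
    C = np.corrcoef(xs.T)                              # pairwise unit correlations
    mean_corr = np.abs(C[np.triu_indices(N, 1)]).mean()
    dist = np.linalg.norm(xs[-1] - target)
    print(f"noise={noise_sd}, feedback={fb_gain}: "
          f"mean |corr|={mean_corr:.2f}, distance to target={dist:.2f}")
```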

Seminar · Neuroscience

Neural mechanisms of optimal performance

Luca Mazzucato
University of Oregon
May 22, 2025

When we attend to a demanding task, our performance is poor at low arousal (when drowsy) or high arousal (when anxious), but optimal at intermediate arousal; colloquially, we are "in the zone." This celebrated inverted-U relationship between arousal and performance is the Yerkes-Dodson law. In this talk, I will elucidate the behavioral and neural mechanisms linking arousal and performance under the Yerkes-Dodson law in a mouse model. During decision-making tasks, mice express an array of discrete strategies, whereby the optimal strategy occurs at intermediate arousal, as measured by pupil size, consistent with the inverted-U law. Population recordings from the auditory cortex (A1) further revealed that sound encoding is optimal at intermediate arousal. To explain the computational principle underlying this inverted-U law, we modeled the A1 circuit as a spiking network with excitatory/inhibitory clusters, based on the observed functional clusters in A1. Arousal induced a transition from a multi-attractor phase (low arousal) to a single-attractor phase (high arousal), and performance was optimized at the transition point. The model also predicts stimulus- and arousal-induced modulations of neural variability, which we confirmed in the data. Our theory suggests that a single unifying dynamical principle, phase transitions in metastable dynamics, underlies both the inverted-U law of optimal performance and state-dependent modulations of neural variability.
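
The multi-attractor-to-single-attractor transition can be caricatured without a full spiking network. In the sketch below (an illustrative assumption, not the authors' clustered model), a one-dimensional double-well system loses its two metastable states as a bifurcation parameter standing in for arousal is varied, and its variance and switching statistics change accordingly.

```python
# Cartoon in NumPy, not the authors' clustered spiking network: Langevin
# dynamics in a double-well potential V(x) = x**4/4 - a*x**2/2. The
# parameter a stands in for (inverse) arousal: a > 0 gives two metastable
# attractors, a < 0 a single attractor, with the transition in between.
import numpy as np

rng = np.random.default_rng(1)
dt, T, noise = 0.01, 50_000, 0.4

def simulate(a):
    x, xs = 0.1, np.empty(T)
    for t in range(T):
        x += dt * (a * x - x**3) + noise * np.sqrt(dt) * rng.normal()
        xs[t] = x
    return xs

for label, a in [("low arousal", 1.0), ("intermediate", 0.1), ("high arousal", -1.0)]:
    xs = simulate(a)
    switches = int(np.sum(np.diff(np.sign(xs)) != 0))  # zero crossings ~ state switches
    print(f"{label:>12} (a={a:+.1f}): variance={xs.var():.3f}, crossings={switches}")
```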

Seminar · Neuroscience

Signal in the Noise: models of inter-trial and inter-subject neural variability

Alex Williams
NYU/Flatiron
Nov 3, 2022

The ability to record large neural populations—hundreds to thousands of cells simultaneously—is a defining feature of modern systems neuroscience. Aside from improved experimental efficiency, what do these technologies fundamentally buy us? I'll argue that they provide an exciting opportunity to move beyond studying the "average" neural response. That is, by providing dense neural circuit measurements in individual subjects and moments in time, these recordings enable us to track changes across repeated behavioral trials and across experimental subjects. These two forms of variability are still poorly understood, despite their obvious importance to understanding the fidelity and flexibility of neural computations. Scientific progress on these points has been impeded by the fact that individual neurons are very noisy and unreliable. My group is investigating a number of customized statistical models to overcome this challenge. I will mention several of these models but focus particularly on a new framework for quantifying across-subject similarity in stochastic trial-by-trial neural responses. By applying this method to noisy representations in deep artificial networks and in mouse visual cortex, we reveal that the geometry of neural noise correlations is a meaningful feature of variation, which is neglected by current methods (e.g. representational similarity analysis).
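
To make the last point concrete, here is a small synthetic example (not the speaker's framework, whose details the abstract does not give): two simulated "subjects" share identical trial-averaged tuning but differ in noise-correlation structure, so an RSA-style comparison of trial-averaged dissimilarity matrices rates them as nearly identical, while a comparison of their noise covariances does not.

```python
# Synthetic illustration in NumPy, not the speaker's framework: two
# "subjects" with identical trial-averaged tuning but different noise
# correlations. All sizes and the equicorrelation structure are invented.
import numpy as np

rng = np.random.default_rng(2)
n_stim, n_trials, n_units = 8, 100, 20
means = rng.normal(0, 1, (n_stim, n_units))          # tuning shared by both subjects

def make_subject(corr):
    """Responses: shared means plus noise with a given pairwise correlation."""
    cov = np.full((n_units, n_units), corr) + (1 - corr) * np.eye(n_units)
    L = np.linalg.cholesky(cov)
    noise = rng.normal(size=(n_stim, n_trials, n_units)) @ L.T
    return means[:, None, :] + noise                 # shape (stim, trial, unit)

A, B = make_subject(0.0), make_subject(0.6)

def rdm(resp):
    """RSA-style dissimilarity matrix of trial-averaged responses."""
    avg = resp.mean(axis=1)
    return np.linalg.norm(avg[:, None] - avg[None, :], axis=-1)

def noise_cov(resp):
    """Trial-to-trial covariance, pooled over stimuli."""
    centered = resp - resp.mean(axis=1, keepdims=True)
    return np.einsum("sti,stj->ij", centered, centered) / (n_stim * (n_trials - 1))

iu = np.triu_indices(n_units, 1)
print("RDM similarity (trial averages):",
      round(np.corrcoef(rdm(A).ravel(), rdm(B).ravel())[0, 1], 2))      # near 1
print("noise-correlation similarity:  ",
      round(np.corrcoef(noise_cov(A)[iu], noise_cov(B)[iu])[0, 1], 2))  # near 0
```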

Seminar · Neuroscience

A universal probabilistic spike count model reveals ongoing modulation of neural variability in head direction cell activity in mice

David Liu
University of Cambridge
Oct 26, 2021

Neural responses are variable: even under identical experimental conditions, single neuron and population responses typically differ from trial to trial and across time. Recent work has demonstrated that this variability has predictable structure, can be modulated by sensory input and behaviour, and bears critical signatures of the underlying network dynamics and computations. However, current methods for characterising neural variability are primarily geared towards sensory coding in the laboratory: they require trials with repeatable experimental stimuli and behavioural covariates. In addition, they make strong assumptions about the parametric form of variability, rely on assumption-free but data-inefficient histogram-based approaches, or are altogether ill-suited for capturing variability modulation by covariates. Here we present a universal probabilistic spike count model that eliminates these shortcomings. Our method uses scalable Bayesian machine learning techniques to model arbitrary spike count distributions (SCDs) with flexible dependence on observed as well as latent covariates. Without requiring repeatable trials, it can flexibly capture covariate-dependent joint SCDs, and provide interpretable latent causes underlying the statistical dependencies between neurons. We apply the model to recordings from a canonical non-sensory neural population: head direction cells in the mouse. We find that variability in these cells defies a simple parametric relationship with mean spike count as assumed in standard models, its modulation by external covariates can be comparably strong to that of the mean firing rate, and slow low-dimensional latent factors explain away neural correlations. Our approach paves the way to understanding the mechanisms and computations underlying neural variability under naturalistic conditions, beyond the realm of sensory coding with repeatable stimuli.
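
The claim that variability defies a simple parametric relationship with mean spike count is easy to illustrate in simulation. The sketch below is not the paper's Bayesian model; it merely generates head-direction-tuned counts whose dispersion is itself covariate-dependent, so the Fano factor varies with direction independently of the mean. All numbers are illustrative.

```python
# Illustrative simulation in NumPy, not the paper's model: head-direction-
# tuned counts generated as a gamma-Poisson mixture whose dispersion is
# itself modulated by head direction. The Fano factor then varies with the
# covariate independently of the mean, which no fixed mean-variance
# relationship (e.g. Poisson, Fano = 1) can capture.
import numpy as np

rng = np.random.default_rng(3)
theta = rng.uniform(0, 2 * np.pi, 20_000)              # head direction per time bin
rate = np.exp(1.0 + 1.5 * np.cos(theta))               # mean count: von-Mises-like tuning
shape = 2.0 + 8.0 * np.cos(theta - np.pi) ** 2         # covariate-dependent dispersion
counts = rng.poisson(rng.gamma(shape, rate / shape))   # Fano = 1 + rate/shape > 1

edges = np.linspace(0, 2 * np.pi, 9)
for lo, hi in zip(edges[:-1], edges[1:]):
    c = counts[(theta >= lo) & (theta < hi)]
    print(f"HD {np.degrees(lo):3.0f}-{np.degrees(hi):3.0f} deg: "
          f"mean={c.mean():5.2f}  Fano={c.var() / c.mean():4.2f}")
```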

Seminar · Neuroscience · Recording

Rastermap: Extracting structure from high dimensional neural data

Carsen Stringer
HHMI, Janelia Research Campus
Oct 26, 2021

Large-scale neural recordings contain high-dimensional structure that cannot be easily captured by existing data visualization methods. We therefore developed an embedding algorithm called Rastermap, which captures highly nonlinear relationships between neurons, and provides useful visualizations by assigning each neuron to a location in the embedding space. Compared to standard algorithms such as t-SNE and UMAP, Rastermap finds finer and higher dimensional patterns of neural variability, as measured by quantitative benchmarks. We applied Rastermap to a variety of datasets, including spontaneous neural activity, neural activity during a virtual reality task, widefield neural imaging data during a 2AFC task, artificial neural activity from an agent playing Atari games, and neural responses to visual textures. We found within these datasets unique subpopulations of neurons encoding abstract properties of the environment.
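
For a feel of what embedding-based raster sorting does, here is a crude linear stand-in, not the Rastermap algorithm itself (the released package lives in the MouseLand GitHub organization): neurons are sorted along a one-dimensional spectral embedding of their correlation matrix, so nearby rows of the reordered raster covary. Data and parameters are synthetic.

```python
# Crude linear stand-in in NumPy, not the Rastermap algorithm: sort neurons
# by the leading eigenvector of their correlation matrix so that nearby rows
# of the reordered raster covary. Data are synthetic (smooth latent factors
# plus noise); Rastermap itself captures far richer nonlinear structure.
import numpy as np

rng = np.random.default_rng(4)
n_neurons, n_time = 200, 1000
latents = np.cumsum(rng.normal(size=(3, n_time)), axis=1)   # slow shared factors
loadings = rng.normal(size=(n_neurons, 3))
spks = loadings @ latents + rng.normal(0, 2, (n_neurons, n_time))

C = np.corrcoef(spks)                      # neuron-by-neuron similarity
eigvals, eigvecs = np.linalg.eigh(C)
order = np.argsort(eigvecs[:, -1])         # 1-D spectral ordering of neurons
raster = spks[order]                       # rows now vary smoothly down the raster

adjacent = np.mean([np.corrcoef(raster[i], raster[i + 1])[0, 1]
                    for i in range(n_neurons - 1)])
print(f"mean correlation of adjacent sorted neurons: {adjacent:.2f}")
```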

ePoster

A brain-computer interface in prefrontal cortex that suppresses neural variability

COSYNE 2022

ePoster

Diverse covariates modulate neural variability: a widespread (sub)cortical phenomenon

COSYNE 2022

ePoster

Sensory specific modulation of neural variability facilitates perceptual inference

COSYNE 2022

ePoster

Single cell measures of tuning to imagined position during replay show preserved spatial tuning but quenched neural variability in place cells.

COSYNE 2022

ePoster

Conductance Based Integrate and Fire Model with Correlated Inputs Captures Neural Variability

Logan Becker, Thibaud Taillefumier, Nicholas Priebe, Eyal Seidemann, Baowang Li

COSYNE 2023

ePoster

Balanced two-photon holographic bidirectional optogenetics defines the mechanism for stimulus quenching of neural variability

Kevin Sit, Brent Doiron, Chengcheng Huang, Hillel Adesnik

COSYNE 2025

ePoster

The Role of Neural Variability in Supporting Few-shot Generalization in Cortex

Praveen Venkatesh, Jiaqi Shang, Corbett Bennett, Sam Gale, Greggory Heller, Tamina Ramirez, Severine Durand, Eric Shea-Brown, Shawn Olsen, Stefan Mihalas

COSYNE 2025

ePoster

Age-related changes in neural variability in a decision-making task

Fenying Zang, Anup Khanal, Sonja Förster, International Brain Laboratory, Anne K Churchland, Anne E Urai

FENS Forum 2024