Topic spotlight

noise

Discover seminars, jobs, and research tagged with noise across World Wide.
69 curated items · 43 Seminars · 26 ePosters
Updated 3 days ago
Seminar · Neuroscience

Developmental emergence of personality

Bassem Hassan
Paris Brain Institute, ICM, France
Dec 9, 2025

The nature-versus-nurture debate has generally been considered through the lens of a genome-versus-experience dichotomy, which has dominated our thinking about behavioral individuality and personality traits. In contrast, the role of nonheritable noise during brain development in behavioral variation is understudied. Using the Drosophila melanogaster visual system, I will discuss our efforts to dissect how individuality in circuit wiring emerges during development, and how that helps generate individual behavioral variation.

Seminar · Neuroscience

Non-invasive human neuroimaging studies of motor plasticity have predominantly focused on the cerebral cortex, due to the low signal-to-noise ratio of blood oxygen level-dependent (BOLD) signals in subcortical structures and the small effect sizes typically observed in plasticity paradigms. Precision functional mapping can help overcome these challenges and has revealed significant and reversible functional alterations in the cortico-subcortical motor circuit during arm immobilization.

Dr. Roselyne Chauvin
Washington University, St. Louis, USA
Jul 8, 2025
Seminar · Neuroscience

Neurobiological constraints on learning: bug or feature?

Cian O’Donnell
Ulster University
Jun 10, 2025

Understanding how brains learn requires bridging evidence across scales—from behaviour and neural circuits to cells, synapses, and molecules. In our work, we use computational modelling and data analysis to explore how the physical properties of neurons and neural circuits constrain learning. These include limits imposed by brain wiring, energy availability, molecular noise, and the 3D structure of dendritic spines. In this talk I will describe one such project testing if wiring motifs from fly brain connectomes can improve performance of reservoir computers, a type of recurrent neural network. The hope is that these insights into brain learning will lead to improved learning algorithms for artificial systems.
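
Since the project described here hinges on reservoir computing, a minimal sketch may help: a reservoir computer is a recurrent network whose internal weights stay fixed (random below, where a connectome-derived matrix would be substituted) while only a linear readout is trained. All parameters are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    N, T = 200, 1000

    # Fixed reservoir wiring; swapping in a fly-connectome motif matrix here
    # is the manipulation the talk describes.
    W = rng.normal(0, 1 / np.sqrt(N), (N, N))
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # keep spectral radius < 1
    w_in = rng.normal(0, 1, N)

    u = np.sin(np.linspace(0, 20 * np.pi, T))        # toy input signal
    x, states = np.zeros(N), np.empty((T, N))
    for t in range(T):
        x = np.tanh(W @ x + w_in * u[t])             # recurrent dynamics, untrained
        states[t] = x

    # Train only the linear readout (ridge regression), e.g. to predict u(t+1).
    target = np.roll(u, -1)
    w_out = np.linalg.solve(states.T @ states + 1e-4 * np.eye(N), states.T @ target)
    print("readout MSE:", np.mean((states @ w_out - target) ** 2))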

Seminar · Neuroscience

Exploring the cerebral mechanisms of acoustically-challenging speech comprehension - successes, failures and hope

Alexis Hervais-Adelman
University of Geneva
May 20, 2024

Comprehending speech under acoustically challenging conditions is an everyday task that we can often execute with ease. However, accomplishing this requires the engagement of cognitive resources, such as auditory attention and working memory. The mechanisms that contribute to the robustness of speech comprehension are of substantial interest in the context of mild to moderate hearing impairment, in which affected individuals typically report specific difficulties in understanding speech in background noise. Although hearing aids can help to mitigate this, they do not represent a universal solution; finding alternative interventions is therefore necessary. Given that age-related hearing loss (“presbycusis”) is inevitable, developing new approaches is all the more important in the context of aging populations. Moreover, untreated hearing loss in middle age has been identified as the most significant potentially modifiable predictor of dementia in later life. I will present research that has used a multi-methodological approach (fMRI, EEG, MEG and non-invasive brain stimulation) to try to elucidate the mechanisms that comprise the cognitive “last mile” of acoustically challenging speech comprehension, and to find ways to enhance them.

Seminar · Neuroscience · Recording

Signatures of criticality in efficient coding networks

Shervin Safavi
Dayan lab, MPI for Biological Cybernetics
May 2, 2023

The critical brain hypothesis states that the brain can benefit from operating close to a second-order phase transition. While it has been shown that several computational aspects of sensory information processing (e.g., sensitivity to input) are optimal in this regime, it is still unclear whether these computational benefits of criticality can be leveraged by neural systems performing behaviorally relevant computations. To address this question, we investigate signatures of criticality in networks optimized to perform efficient encoding. We consider a network of leaky integrate-and-fire neurons with synaptic transmission delays and input noise. Previously, it was shown that the performance of such networks varies non-monotonically with the noise amplitude. Interestingly, we find that in the vicinity of the optimal noise level for efficient coding, the network dynamics exhibit signatures of criticality, namely, the distribution of avalanche sizes follows a power law. When the noise amplitude is too low or too high for efficient coding, the network appears either super-critical or sub-critical, respectively. This result suggests that two influential and previously disparate theories of neural processing optimization—efficient coding and criticality—may be intimately related.
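
The avalanche analysis mentioned here can be sketched directly: bin population spikes, define an avalanche as a contiguous run of nonzero bins, and estimate the power-law exponent of the size distribution. A toy version with Poisson counts standing in for network activity (the estimator follows Clauset et al., 2009):

    import numpy as np

    def avalanche_sizes(counts):
        """Avalanche = contiguous run of nonzero bins; size = spikes in the run."""
        sizes, run = [], 0
        for c in counts:
            if c > 0:
                run += c
            elif run > 0:
                sizes.append(run)
                run = 0
        if run > 0:
            sizes.append(run)
        return np.asarray(sizes)

    def powerlaw_exponent(sizes, s_min=1.0):
        """Continuous maximum-likelihood exponent estimate (Clauset et al., 2009)."""
        s = sizes[sizes >= s_min]
        return 1.0 + len(s) / np.sum(np.log(s / s_min))

    counts = np.random.poisson(0.9, 100_000)  # placeholder for binned spike counts
    print("estimated exponent:", powerlaw_exponent(avalanche_sizes(counts)))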

Seminar · Cognition

Cognition in the Wild

Julia Fischer
German Primate Center
Mar 15, 2023

What do nonhuman primates know about each other and their social environment, how do they allocate their attention, and what are the functional consequences of social decisions in natural settings? Addressing these questions is crucial to home in on the co-evolution of cognition, social behaviour and communication, and ultimately the evolution of intelligence in the primate order. I will present results from field experimental and observational studies on free-ranging baboons, which tap into the cognitive abilities of these animals. Baboons are particularly valuable in this context as different species reveal substantial variation in social organization and degree of despotism. Field experiments revealed considerable variation in the allocation of social attention: while the competitive chacma baboons were highly sensitive to deviations from the social order, the highly tolerant Guinea baboons revealed a confirmation bias. This bias may be a result of the high gregariousness of the species, which puts a premium on ignoring social noise. Variation in despotism clearly impacted the use of signals to regulate social interactions. For instance, male-male interactions in chacma baboons mostly comprised dominance displays, while Guinea baboon males evolved elaborate greeting rituals that serve to confirm group membership and test social bonds. Strikingly, the structure of signal repertoires does not differ substantially between different baboon species. In conclusion, the motivational disposition to engage in affiliation or aggressiveness appears to be more malleable during evolution than structural elements of the behavioral repertoire; this insight is crucial for understanding the dynamics of social evolution.

Seminar · Neuroscience

Ambient noise reveals rapid flexibility in marmoset vocal behavior

Julia Löschner
Mar 9, 2023
Seminar · Neuroscience · Recording

Does subjective time interact with the heart rate?

Saeedeh Sadegh
Cornell University, New York
Jan 24, 2023

Decades of research have investigated the relationship between the perception of time and heart rate, with often mixed results. In search of such a relationship, I will present my journey across two projects: from time perception in a realistic VR experience of crowded subway trips on the order of minutes (project 1), to the perceived duration of sub-second white noise tones (project 2). Heart rate had multiple concurrent relationships with subjective temporal distortions for the sub-second tones, while the effects were weak or absent for the supra-minute subway trips. What does the heart have to do with sub-second time perception? We addressed this question with a cardiac drift-diffusion model, demonstrating the sensory accumulation of temporal evidence as a function of heart rate.
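
As a rough illustration of the cardiac drift-diffusion idea, one can let the drift rate of a standard first-passage model vary with heart rate; the coupling and all parameter values below are purely hypothetical, not those fitted in the talk.

    import numpy as np

    rng = np.random.default_rng(0)

    def ddm_trial(drift, threshold=1.0, noise=1.0, dt=1e-3):
        """Accumulate noisy evidence to a bound; return (choice, decision time)."""
        x, t = 0.0, 0.0
        while abs(x) < threshold:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        return np.sign(x), t

    # Hypothetical coupling: temporal evidence accumulates faster at higher heart rate.
    for heart_rate in (60.0, 90.0):
        times = [ddm_trial(drift=0.02 * heart_rate)[1] for _ in range(500)]
        print(f"{heart_rate:.0f} bpm -> mean decision time {np.mean(times):.2f} s")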

Seminar · Psychology

The Effects of Negative Emotions on Mental Representation of Faces

Fabiana Lombardi
University of Winchester
Nov 22, 2022

Face detection is an initial step of many social interactions, involving a comparison between a visual input and a mental representation of faces built from previous experience. Whilst emotional state has been found to affect the way humans attend to faces, little research has explored the effects of emotions on the mental representation of faces. Here, we examined how state anxiety and state depression modulate the geometric properties of the mental representations of faces used in face detection, and compared their emotional expression. To this end, we used an adaptation of the reverse correlation technique inspired by Gosselin and Schyns’ (2003) ‘Superstitious Approach’ to construct visual representations of observers’ mental representations of faces and to relate these to their mental states. In two sessions, on separate days, participants were presented with ‘colourful’ noise stimuli and asked to detect faces, which they were told were present. Based on the noise fragments that were identified as faces, we reconstructed the pictorial mental representation utilised by each participant in each session. We found a significant correlation between the size of the mental representation of faces and participants’ level of depression. Our findings provide preliminary insight into the way emotions affect appearance expectations of faces. To further understand whether the facial expressions of participants’ mental representations reflect their emotional state, we are conducting a validation study with a group of naïve observers who are asked to classify the reconstructed face images by emotion. Thus, we assess whether the faces communicate participants’ emotional states to others.
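
The reverse-correlation logic reduces to conditional averaging: the classification image is the mean of the noise fields the observer labelled 'face' minus the mean of the rest. A toy sketch with simulated responses standing in for participants' detections:

    import numpy as np

    rng = np.random.default_rng(1)
    n_trials, h, w = 5000, 64, 64
    noise = rng.standard_normal((n_trials, h, w))   # the noise stimuli shown
    said_face = rng.random(n_trials) < 0.3          # placeholder observer responses

    # Classification image: the pixel structure that drives 'face' reports.
    ci = noise[said_face].mean(axis=0) - noise[~said_face].mean(axis=0)
    print(ci.shape)  # geometric properties (e.g. face-region size) are measured on ci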

Seminar · Neuroscience · Recording

Universal function approximation in balanced spiking networks through convex-concave boundary composition

W. F. Podlaski
Champalimaud
Nov 9, 2022

The spike-threshold nonlinearity is a fundamental, yet enigmatic, component of biological computation — despite its role in many theories, it has evaded definitive characterisation. Indeed, much classic work has attempted to limit the focus on spiking by smoothing over the spike threshold or by approximating spiking dynamics with firing-rate dynamics. Here, we take a novel perspective that captures the full potential of spike-based computation. Based on previous studies of the geometry of efficient spike-coding networks, we consider a population of neurons with low-rank connectivity, allowing us to cast each neuron’s threshold as a boundary in a space of population modes, or latent variables. Each neuron divides this latent space into subthreshold and suprathreshold areas. We then demonstrate how a network of inhibitory (I) neurons forms a convex, attracting boundary in the latent coding space, and a network of excitatory (E) neurons forms a concave, repellant boundary. Finally, we show how the combination of the two yields stable dynamics at the crossing of the E and I boundaries, and can be mapped onto a constrained optimization problem. The resultant EI networks are balanced, inhibition-stabilized, and exhibit asynchronous irregular activity, thereby closely resembling cortical networks of the brain. Moreover, we demonstrate how such networks can be tuned to either suppress or amplify noise, and how the composition of inhibitory convex and excitatory concave boundaries can result in universal function approximation. Our work puts forth a new theory of biologically-plausible computation in balanced spiking networks, and could serve as a novel framework for scalable and interpretable computation with spikes.

Seminar · Neuroscience

Signal in the Noise: models of inter-trial and inter-subject neural variability

Alex Williams
NYU/Flatiron
Nov 3, 2022

The ability to record large neural populations—hundreds to thousands of cells simultaneously—is a defining feature of modern systems neuroscience. Aside from improved experimental efficiency, what do these technologies fundamentally buy us? I'll argue that they provide an exciting opportunity to move beyond studying the "average" neural response. That is, by providing dense neural circuit measurements in individual subjects and moments in time, these recordings enable us to track changes across repeated behavioral trials and across experimental subjects. These two forms of variability are still poorly understood, despite their obvious importance to understanding the fidelity and flexibility of neural computations. Scientific progress on these points has been impeded by the fact that individual neurons are very noisy and unreliable. My group is investigating a number of customized statistical models to overcome this challenge. I will mention several of these models but focus particularly on a new framework for quantifying across-subject similarity in stochastic trial-by-trial neural responses. By applying this method to noisy representations in deep artificial networks and in mouse visual cortex, we reveal that the geometry of neural noise correlations is a meaningful feature of variation, which is neglected by current methods (e.g. representational similarity analysis).
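
The noise-correlation geometry referred to at the end starts from residuals around the trial-averaged response. A minimal sketch for repeats of a single stimulus:

    import numpy as np

    def noise_correlations(responses):
        """responses: (n_trials, n_neurons) for repeats of one stimulus.
        Remove the trial-averaged (signal) response, correlate the residuals."""
        residuals = responses - responses.mean(axis=0, keepdims=True)
        return np.corrcoef(residuals, rowvar=False)

    responses = np.random.randn(100, 50) + np.sin(np.arange(50))  # placeholder data
    C = noise_correlations(responses)
    print(C.shape)  # the geometry of this matrix can be compared across subjects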

Seminar · Neuroscience

Adaptive neural network classifier for decoding finger movements

Alexey Zabolotniy
HSE University
Jun 1, 2022

While non-invasive brain-computer interfaces can accurately classify the lateralization of hand movements, distinguishing the activation of fingers of the same hand is limited by their local and overlapping representation in the motor cortex. In particular, the low signal-to-noise ratio restricts the opportunity to identify meaningful patterns in a supervised fashion. Here we combined magnetoencephalography (MEG) recordings with an advanced decoding strategy to classify finger movements at the single-trial level. We recorded eight subjects performing a serial reaction time task, where they pressed four buttons with the left and right index and middle fingers. We evaluated the classification performance for hand and finger movements with increasingly complex approaches: supervised common spatial patterns with logistic regression (CSP + LR) and an unsupervised linear finite convolutional neural network (LF-CNN). Right versus left finger classification was above 90% accurate for all methods. Classification of individual fingers, however, yielded the following accuracies: CSP + LR, 68 ± 7%; LF-CNN, 71 ± 10%. CNN methods allowed the inspection of spatial and spectral patterns, which reflected activity in the motor cortex in the theta and alpha ranges. Thus, we have shown that the use of CNNs for decoding MEG single trials with low signal-to-noise ratio is a promising approach that, in turn, could be extended to a manifold of problems in clinical and cognitive neuroscience.
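
The CSP + LR baseline maps directly onto standard tooling; a sketch using MNE-Python and scikit-learn, with random arrays standing in for band-pass-filtered MEG epochs (so accuracy sits at chance here):

    import numpy as np
    from mne.decoding import CSP
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import Pipeline

    X = np.random.randn(200, 64, 100)        # (trials, channels, times) placeholder
    y = np.random.randint(0, 2, 200)         # e.g. index vs. middle finger

    clf = Pipeline([("csp", CSP(n_components=4, log=True)),
                    ("lr", LogisticRegression())])
    print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())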

Seminar · Neuroscience · Recording

The balance of excitation and inhibition and a canonical cortical computation

Yashar Ahmadian
Cambridge, UK
Apr 26, 2022

Excitatory and inhibitory (E & I) inputs to cortical neurons remain balanced across different conditions. The balanced network model provides a self-consistent account of this observation: population rates dynamically adjust to yield a state in which all neurons are active at biological levels, with their E & I inputs tightly balanced. But global tight E/I balance predicts population responses with linear stimulus-dependence and does not account for systematic cortical response nonlinearities such as divisive normalization, a canonical brain computation. However, when necessary connectivity conditions for global balance fail, states arise in which only a localized subset of neurons are active and have balanced inputs. We analytically show that in networks of neurons with different stimulus selectivities, the emergence of such localized balance states robustly leads to normalization, including sublinear integration and winner-take-all behavior. An alternative model that exhibits normalization is the Stabilized Supralinear Network (SSN), which predicts a regime of loose, rather than tight, E/I balance. However, an understanding of the causal relationship between E/I balance and normalization in SSN and conditions under which SSN yields significant sublinear integration are lacking. For weak inputs, SSN integrates inputs supralinearly, while for very strong inputs it approaches a regime of tight balance. We show that when this latter regime is globally balanced, SSN cannot exhibit strong normalization for any input strength; thus, in SSN too, significant normalization requires localized balance. In summary, we causally and quantitatively connect a fundamental feature of cortical dynamics with a canonical brain computation. Time allowing, I will also cover our work extending a normative theoretical account of normalization which explains it as an example of efficient coding of natural stimuli. We show that when biological noise is accounted for, this theory makes the same prediction as the SSN: a transition to supralinear integration for weak stimuli.
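
The SSN behavior described here (supralinear integration for weak inputs, an approach to balance for strong ones) can be reproduced with the standard two-population rate equation tau dr/dt = -r + k [W r + h]_+^n; the parameters below are illustrative, not those of the talk.

    import numpy as np

    def ssn_steady_state(h, W, k=0.04, n=2.0, tau=0.02, dt=1e-3, steps=5000):
        """Stabilized Supralinear Network: tau dr/dt = -r + k * [W r + h]_+^n."""
        r = np.zeros(2)
        for _ in range(steps):
            drive = np.maximum(W @ r + h, 0.0)
            r += (dt / tau) * (-r + k * drive ** n)
        return r

    W = np.array([[1.0, -1.2],   # E<-E, E<-I
                  [1.5, -1.0]])  # I<-E, I<-I
    for c in (1.0, 5.0, 25.0, 100.0):           # increasing stimulus strength
        rE, rI = ssn_steady_state(np.array([c, c]), W)
        print(f"h = {c:6.1f}  ->  rE = {rE:7.2f}, rI = {rI:7.2f}")

Tracking rE as a function of c should show the transition from supralinear growth at weak inputs toward sublinear, inhibition-dominated behavior at strong inputs.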

Seminar · Neuroscience · Recording

Turning spikes to space: The storage capacity of tempotrons with plastic synaptic dynamics

Robert Guetig
Charité – Universitätsmedizin Berlin & BIH
Mar 8, 2022

Neurons in the brain communicate through action potentials (spikes) that are transmitted through chemical synapses. Throughout the last decades, the question of how networks of spiking neurons represent and process information has remained an important challenge. Some progress has resulted from a recent family of supervised learning rules (tempotrons) for models of spiking neurons. However, these studies have viewed synaptic transmission as static and characterized synaptic efficacies as scalar quantities that change only on slow time scales of learning across trials but remain fixed on the fast time scales of information processing within a trial. By contrast, signal transduction at chemical synapses in the brain results from complex molecular interactions between multiple biochemical processes whose dynamics result in substantial short-term plasticity of most connections. Here we study the computational capabilities of spiking neurons whose synapses are dynamic and plastic, such that each individual synapse can learn its own dynamics. We derive tempotron learning rules for current-based leaky-integrate-and-fire neurons with different types of dynamic synapses. Introducing ordinal synapses whose efficacies depend only on the order of input spikes, we establish an upper capacity bound for spiking neurons with dynamic synapses. We compare this bound to independent synapses, static synapses and to the well-established phenomenological Tsodyks-Markram model. We show that synaptic dynamics in principle allow the storage capacity of spiking neurons to scale with the number of input spikes and that this increase in capacity can be traded for greater robustness to input noise, such as spike time jitter. Our work highlights the feasibility of a novel computational paradigm for spiking neural circuits with plastic synaptic dynamics: Rather than being determined by the fixed number of afferents, the dimensionality of a neuron's decision space can be scaled flexibly through the number of input spikes emitted by its input layer.

Seminar · Neuroscience · Recording

Structure, Function, and Learning in Distributed Neuronal Networks

SueYeon Chung
Flatiron Institute/NYU
Jan 25, 2022

A central goal in neuroscience is to understand how orchestrated computations in the brain arise from the properties of single neurons and networks of such neurons. Answering this question requires theoretical advances that shine light into the ‘black box’ of neuronal networks. In this talk, I will demonstrate theoretical approaches that help describe how cognitive and behavioral task implementations emerge from structure in neural populations and from biologically plausible learning rules. First, I will introduce an analytic theory that connects geometric structures that arise from neural responses (i.e., neural manifolds) to the neural population’s efficiency in implementing a task. In particular, this theory describes how easy or hard it is to discriminate between object categories based on the underlying neural manifolds’ structural properties. Next, I will describe how such methods can, in fact, open the ‘black box’ of neuronal networks, by showing how we can understand a) the role of network motifs in task implementation in neural networks and b) the role of neural noise in adversarial robustness in vision and audition. Finally, I will discuss my recent efforts to develop biologically plausible learning rules for neuronal networks, inspired by recent experimental findings in synaptic plasticity. By extending our mathematical toolkit for analyzing representations and learning rules underlying complex neuronal networks, I hope to contribute toward the long-term challenge of understanding the neuronal basis of behaviors.

Seminar · Neuroscience

A nonlinear shot noise model for calcium-based synaptic plasticity

Bin Wang
Aljadeff lab, University of California San Diego, USA
Dec 8, 2021

Activity-dependent synaptic plasticity is considered to be a primary mechanism underlying learning and memory. Yet it is unclear whether plasticity rules such as STDP measured in vitro apply in vivo. Network models with STDP predict that activity patterns (e.g., place-cell spatial selectivity) should change much faster than observed experimentally. We address this gap by investigating a nonlinear calcium-based plasticity rule fit to experiments done in physiological conditions. In this model, LTP and LTD result from intracellular calcium transients arising almost exclusively from synchronous coactivation of pre- and postsynaptic neurons. We analytically approximate the full distribution of nonlinear calcium transients as a function of pre- and postsynaptic firing rates, and temporal correlations. This analysis directly relates activity statistics that can be measured in vivo to the changes in synaptic efficacy they cause. Our results highlight that both high firing rates and temporal correlations can lead to significant changes to synaptic efficacy. Using a mean-field theory, we show that the nonlinear plasticity rule, without any fine-tuning, gives a stable, unimodal synaptic weight distribution characterized by many strong synapses which remain stable over long periods of time, consistent with electrophysiological and behavioral studies. Moreover, our theory explains how memories encoded by strong synapses can be preferentially stabilized by the plasticity rule. We confirmed our analytical results in a spiking recurrent network. Interestingly, although most synapses are weak and undergo rapid turnover, the fraction of strong synapses is sufficient for supporting realistic spiking dynamics and serves to maintain the network’s cluster structure. Our results provide a mechanistic understanding of how stable memories may emerge on the behavioral level from an STDP rule measured in physiological conditions. Furthermore, the plasticity rule we investigate is mathematically equivalent to other learning rules which rely on the statistics of coincidences, so we expect that our formalism will be useful to study other learning processes beyond the calcium-based plasticity rule.
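
A heavily simplified, calcium-threshold sketch of this class of rule (in the spirit of Graupner & Brunel-style models; all constants are illustrative, not the fitted values from the talk): calcium decays, jumps on pre and post spikes, jumps extra on coincidences, and the weight moves when calcium exceeds depression or potentiation thresholds.

    import numpy as np

    rng = np.random.default_rng(0)
    dt, tau_ca, steps = 1e-3, 0.02, 100_000
    theta_d, theta_p = 1.0, 1.3        # LTD / LTP calcium thresholds
    gamma_d, gamma_p = 0.1, 0.2        # LTD / LTP rates
    c, w = 0.0, 0.5
    pre = rng.random(steps) < 5 * dt   # 5 Hz Poisson pre spikes
    post = rng.random(steps) < 5 * dt  # 5 Hz Poisson post spikes
    for t in range(steps):
        c *= np.exp(-dt / tau_ca)
        # Single spikes alone stay below theta_d; the extra jump on coincidences
        # makes synchronous pre-post coactivation the dominant source of plasticity.
        c += 0.3 * pre[t] + 0.4 * post[t] + 1.0 * (pre[t] & post[t])
        if c > theta_p:
            w += gamma_p * dt
        elif c > theta_d:
            w -= gamma_d * dt
    print("final weight:", w)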

Seminar · Neuroscience

Finding needles in the neural haystack: unsupervised analyses of noisy data

Marine Schimel & Kris Jensen
University of Cambridge, Department of Engineering
Nov 30, 2021

In modern neuroscience, we often want to extract information from recordings of many neurons in the brain. Unfortunately, the activity of individual neurons is very noisy, making it difficult to relate to cognition and behavior. Thankfully, we can use the correlations across time and neurons to denoise the data we record. In particular, using recent advances in machine learning, we can build models which harness this structure in the data to extract more interpretable signals. In this talk, we present two such methods as well as examples of how they can help us gain further insights into the neural underpinnings of behavior.
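
As a generic stand-in for this class of models (not the specific methods of the talk), a latent-variable model such as factor analysis already illustrates the idea: shared, low-dimensional structure across neurons is kept, and private per-neuron noise is discarded.

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    Y = np.random.poisson(2.0, size=(1000, 80)).astype(float)  # (time bins, neurons)
    fa = FactorAnalysis(n_components=10).fit(Y)
    Z = fa.transform(Y)                         # low-dimensional latent trajectories
    Y_denoised = Z @ fa.components_ + fa.mean_  # reconstruction from shared factors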

Seminar · Neuroscience · Recording

Noise-induced properties of active dendrites

Farzad Farkhooi
Humboldt University Berlin
Nov 16, 2021

Neuronal dendritic trees display a wide range of nonlinear input integration due to their voltage-dependent active calcium channels. We reveal that in vivo-like fluctuating input enhances nonlinearity substantially in a single dendritic compartment and shifts the input-output relation to exhibiting nonmonotonous or bistable dynamics. In particular, with the slow activation of calcium dynamics, we analyze noise-induced bistability and its timescales. We show that bistability induces long-timescale fluctuations that can account for dendritic plateau potentials observed under in vivo conditions. In a multicompartmental model neuron with realistic synaptic input, we show that noise-induced bistability persists in a wide range of parameters. Using Fredholm's theory to calculate the spiking rate of multivariable neurons, we discuss how dendritic bistability shifts the spiking dynamics of single neurons and its implications for network phenomena in the processing of in vivo-like fluctuating input.

Seminar · Neuroscience

Representation transfer and signal denoising through topographic modularity

Barna Zajzon
Morrison lab, Forschungszentrum Jülich, Germany
Nov 3, 2021

To prevail in a dynamic and noisy environment, the brain must create reliable and meaningful representations from sensory inputs that are often ambiguous or corrupt. Since only information that permeates the cortical hierarchy can influence sensory perception and decision-making, it is critical that noisy external stimuli are encoded and propagated through different processing stages with minimal signal degradation. Here we hypothesize that stimulus-specific pathways akin to cortical topographic maps may provide the structural scaffold for such signal routing. We investigate whether the feature-specific pathways within such maps, characterized by the preservation of the relative organization of cells between distinct populations, can guide and route stimulus information throughout the system while retaining representational fidelity. We demonstrate that, in a large modular circuit of spiking neurons comprising multiple sub-networks, topographic projections are not only necessary for accurate propagation of stimulus representations, but can also help the system reduce sensory and intrinsic noise. Moreover, by regulating the effective connectivity and local E/I balance, modular topographic precision enables the system to gradually improve its internal representations and increase signal-to-noise ratio as the input signal passes through the network. Such a denoising function arises beyond a critical transition point in the sharpness of the feed-forward projections, and is characterized by the emergence of inhibition-dominated regimes where population responses along stimulated maps are amplified and others are weakened. Our results indicate that this is a generalizable and robust structural effect, largely independent of the underlying model specificities. Using mean-field approximations, we gain deeper insight into the mechanisms responsible for the qualitative changes in the system’s behavior and show that these depend only on the modular topographic connectivity and stimulus intensity. The general dynamical principle revealed by the theoretical predictions suggests that such a denoising property may be a universal, system-agnostic feature of topographic maps, and may lead to a wide range of behaviorally relevant regimes observed under various experimental conditions: maintaining stable representations of multiple stimuli across cortical circuits; amplifying certain features while suppressing others (winner-take-all circuits); and endowing circuits with metastable dynamics (winnerless competition), assumed to be fundamental in a variety of tasks.

Seminar · Neuroscience · Recording

Self-organized formation of discrete grid cell modules from smooth gradients

Sarthak Chandra
Fiete lab, MIT
Nov 2, 2021

Modular structures in myriad forms — genetic, structural, functional — are ubiquitous in the brain. While modularization may be shaped by genetic instruction or extensive learning, the mechanisms of module emergence are poorly understood. Here, we explore complementary mechanisms in the form of bottom-up dynamics that push systems spontaneously toward modularization. As a paradigmatic example of modularity in the brain, we focus on the grid cell system. Grid cells of the mammalian medial entorhinal cortex (mEC) exhibit periodic lattice-like tuning curves in their encoding of space as animals navigate the world. Nearby grid cells have identical lattice periods, but at larger separations along the long axis of mEC the period jumps in discrete steps so that the full set of periods clusters into 5-7 discrete modules. These modules endow the grid code with many striking properties such as an exponential capacity to represent space and unprecedented robustness to noise. However, the formation of discrete modules is puzzling given that biophysical properties of mEC stellate cells (including inhibitory inputs from PV interneurons, time constants of EPSPs, intrinsic resonance frequency and differences in gene expression) vary smoothly in continuous topographic gradients along the mEC. How does discreteness in grid modules arise from continuous gradients? We propose a novel mechanism involving two simple types of lateral interaction that leads a continuous network to robustly decompose into discrete functional modules. We show analytically that this mechanism is a generic multi-scale linear instability that converts smooth gradients into discrete modules via a topological “peak selection” process. Further, this model generates detailed predictions about the sequence of adjacent period ratios, and explains existing grid cell data better than existing models. Thus, we contribute a robust new principle for bottom-up module formation in biology, and show that it might be leveraged by grid cells in the brain.

Seminar · Neuroscience · Recording

Adaptation-driven sensory detection and sequence memory

André Longtin
University of Ottawa
Oct 5, 2021

Spike-driven adaptation involves intracellular mechanisms that are initiated by spiking and lead to the subsequent reduction of spiking rate. One of its consequences is the temporal patterning of spike trains, as it imparts serial correlations between interspike intervals in baseline activity. Surprisingly, the hidden adaptation states that lead to these correlations themselves exhibit quasi-independence. This talk will first discuss recent findings about the role of such adaptation in suppressing noise and extending sensory detection to weak stimuli that leave the firing rate unchanged. Further, a matching of the post-synaptic responses to the pre-synaptic adaptation time scale enables a recovery of the quasi-independence property, and can explain observations of correlations between post-synaptic EPSPs and behavioural detection thresholds. We then consider the involvement of spike-driven adaptation in the representation of intervals between sensory events. We discuss the possible link of this time-stamping mechanism to the conversion of egocentric to allocentric coordinates. The heterogeneity of the population parameters enables the representation and Bayesian decoding of time sequences of events, which may be put to good use in path integration and in hilus neuron function in the hippocampus.
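
The serial correlations mentioned at the start can be seen in a few lines with an adapting leaky integrate-and-fire neuron: each spike increments an adaptation variable, which induces negative lag-1 correlations between successive interspike intervals (all parameters illustrative).

    import numpy as np

    rng = np.random.default_rng(2)
    dt, tau_m, tau_a, steps = 1e-4, 0.01, 0.1, 500_000
    noise = 0.3 * np.sqrt(dt / tau_m) * rng.standard_normal(steps)
    v, a, spikes = 0.0, 0.0, []
    for t in range(steps):
        v += (dt / tau_m) * (1.5 - v - a) + noise[t]   # drive minus adaptation
        a -= (dt / tau_a) * a
        if v > 1.0:                                    # threshold crossing
            spikes.append(t * dt)
            v = 0.0
            a += 0.3                                   # spike-driven adaptation jump
    isi = np.diff(spikes)
    print("lag-1 ISI correlation:", np.corrcoef(isi[:-1], isi[1:])[0, 1])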

Seminar · Neuroscience · Recording

Encoding and perceiving the texture of sounds: auditory midbrain codes for recognizing and categorizing auditory texture and for listening in noise

Monty Escabi
University of Connecticut
Sep 30, 2021

Natural soundscapes such as from a forest, a busy restaurant, or a busy intersection are generally composed of a cacophony of sounds that the brain needs to interpret either independently or collectively. In certain instances sounds - such as from moving cars, sirens, and people talking - are perceived in unison and are recognized collectively as a single sound (e.g., city noise). In other instances, such as for the cocktail party problem, multiple sounds compete for attention so that the surrounding background noise (e.g., speech babble) interferes with the perception of a single sound source (e.g., a single talker). I will describe results from my lab on the perception and neural representation of auditory textures. Textures, such as from a babbling brook, restaurant noise, or speech babble, are stationary sounds consisting of multiple independent sound sources that can be quantitatively defined by summary statistics of an auditory model (McDermott & Simoncelli 2011). How and where in the auditory system summary statistics are represented, and which neural codes potentially contribute to their perception, however, remain largely unknown. Using high-density multi-channel recordings from the auditory midbrain of unanesthetized rabbits and complementary perceptual studies on human listeners, I will first describe neural and perceptual strategies for encoding and perceiving auditory textures. I will demonstrate how distinct statistics of sounds, including the sound spectrum and high-order statistics related to the temporal and spectral correlation structure of sounds, contribute to texture perception and are reflected in neural activity. Using decoding methods I will then demonstrate how various low- and high-order neural response statistics can differentially contribute towards a variety of auditory tasks including texture recognition, discrimination, and categorization. Finally, I will show examples from our recent studies on how high-order sound statistics and accompanying neural activity underlie difficulties for recognizing speech in background noise.
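
Summary statistics in the sense of McDermott & Simoncelli (2011) can be caricatured in a few lines: band-pass the sound, extract envelopes, and collect envelope moments plus cross-band envelope correlations. A toy three-band version, not the full auditory model:

    import numpy as np
    from scipy.signal import butter, hilbert, sosfiltfilt

    def texture_stats(sound, fs, bands=((100, 300), (300, 900), (900, 2700))):
        """Per-band envelope mean/variance/skewness + cross-band envelope correlations."""
        envs = []
        for lo, hi in bands:
            sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
            envs.append(np.abs(hilbert(sosfiltfilt(sos, sound))))
        envs = np.asarray(envs)
        moments = [(e.mean(), e.var(), ((e - e.mean()) ** 3).mean() / e.std() ** 3)
                   for e in envs]
        return moments, np.corrcoef(envs)

    fs = 16000
    moments, corr = texture_stats(np.random.randn(2 * fs), fs)  # white noise stand-in
    print(corr)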

Seminar · Neuroscience

Computation noise in human learning and decision-making: origin, impact, function

Valentin Wyart
Ecole Normale Supérieure de Paris, France
May 30, 2021
Seminar · Neuroscience

Co-tuned, balanced excitation and inhibition in olfactory memory networks

Claire Meissner-Bernard
Friedrich lab, Friedrich Miescher Institute, Basel, Switzerland
May 19, 2021

Odor memories are exceptionally robust and essential for the survival of many species. In rodents, the olfactory cortex shows features of an autoassociative memory network and plays a key role in the retrieval of olfactory memories (Meissner-Bernard et al., 2019). Interestingly, the telencephalic area Dp, the zebrafish homolog of olfactory cortex, transiently enters a state of precise balance during the presentation of an odor (Rupprecht and Friedrich, 2018). This state is characterized by large synaptic conductances (relative to the resting conductance) and by co-tuning of excitation and inhibition in odor space and in time at the level of individual neurons. Our aim is to understand how this precise synaptic balance affects memory function. For this purpose, we build a simplified, yet biologically plausible spiking neural network model of Dp using experimental observations as constraints: besides precise balance, key features of Dp dynamics include low firing rates, odor-specific population activity and a dominance of recurrent inputs from Dp neurons relative to afferent inputs from neurons in the olfactory bulb. To achieve co-tuning of excitation and inhibition, we introduce structured connectivity by increasing connection probabilities and/or strength among ensembles of excitatory and inhibitory neurons. These ensembles are therefore structural memories of activity patterns representing specific odors. They form functional inhibitory-stabilized subnetworks, as identified by the “paradoxical effect” signature (Tsodyks et al., 1997): inhibition of inhibitory “memory” neurons leads to an increase of their activity. We investigate the benefits of co-tuning for olfactory and memory processing, by comparing inhibitory-stabilized networks with and without co-tuning. We find that co-tuned excitation and inhibition improves robustness to noise, pattern completion and pattern separation. In other words, retrieval of stored information from partial or degraded sensory inputs is enhanced, which is relevant in light of the instability of the olfactory environment. Furthermore, in co-tuned networks, odor-evoked activation of stored patterns does not persist after removal of the stimulus and may therefore subserve fast pattern classification. These findings provide valuable insights into the computations performed by the olfactory cortex, and into general effects of balanced state dynamics in associative memory networks.

Seminar · Neuroscience · Recording

Error correction and reliability timescale in converging cortical networks

Eran Stark
Tel Aviv University
Apr 28, 2021

Rapidly changing inputs such as visual scenes and auditory landscapes are transmitted over several synaptic interfaces and perceived with little loss of detail, but individual neurons are typically “noisy” and cortico-cortical connections are typically “weak”. To understand how information embodied in spike trains is transmitted in a lossless manner, we focus on a single synaptic interface: between pyramidal cells and putative interneurons. Using arbitrary white noise patterns injected intra-cortically as photocurrents to freely-moving mice, we find that directly-activated cells exhibit precision of several milliseconds, but post-synaptic, indirectly-activated cells exhibit higher precision. Considering multiple identical messages, the reliability of directly-activated cells peaks at a timescale of dozens of milliseconds, whereas indirectly-activated cells exhibit an order-of-magnitude faster timescale. Using data-driven modelling, we find that error correction is consistent with non-linear amplification of coincident spikes.
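
Reliability across repeats of the same injected noise pattern can be quantified in a standard way (Schreiber et al., 2003): smooth each spike train with a Gaussian and average the pairwise cosine similarities; sweeping the smoothing width maps out the reliability timescale. A sketch:

    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    def reliability(spike_trains, sigma_bins=5):
        """spike_trains: (n_repeats, n_bins) binary array, identical stimulus."""
        sm = gaussian_filter1d(spike_trains.astype(float), sigma_bins, axis=1)
        n = len(sm)
        sims = [sm[i] @ sm[j] / (np.linalg.norm(sm[i]) * np.linalg.norm(sm[j]) + 1e-12)
                for i in range(n) for j in range(i + 1, n)]
        return np.mean(sims)

    trains = np.random.rand(20, 2000) < 0.01   # placeholder spike rasters
    for sigma in (1, 5, 25):                   # timescale sweep, in bins
        print(sigma, reliability(trains, sigma))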

Seminar · Neuroscience · Recording

Decoding the neural processing of speech

Tobias Reichenbach
Friedrich-Alexander-University
Mar 22, 2021

Understanding speech in noisy backgrounds requires selective attention to a particular speaker. Humans excel at this challenging task, while current speech recognition technology still struggles when background noise is loud. The neural mechanisms by which we process speech remain, however, poorly understood, not least due to the complexity of natural speech. Here we describe recent progress obtained through applying machine learning to neuroimaging data of humans listening to speech in different types of background noise. In particular, we develop statistical models to relate characteristic features of speech such as pitch, amplitude fluctuations and linguistic surprisal to neural measurements. We find neural correlates of speech processing both at the subcortical level, related to the pitch, as well as at the cortical level, related to amplitude fluctuations and linguistic structures. We also show that some of these measures allow us to diagnose disorders of consciousness. Our findings may be applied in smart hearing aids that automatically adjust speech processing to assist a user, as well as in the diagnosis of brain disorders.

Seminar · Physics of Life

Anomalous run-to-tumble switching noise controls E. coli residence time at surfaces

Eric Clément
Jan 28, 2021
Seminar · Neuroscience · Recording

Motor Cortex in Theory and Practice

Mark Churchland
Columbia University, New York
Nov 29, 2020

A central question in motor physiology has been whether motor cortex activity resembles muscle activity, and if not, why not? Over fifty years, extensive observations have failed to provide a concise answer, and the topic remains much debated. To provide a different perspective, we employed a novel behavioral paradigm that affords extensive comparison between time-evolving neural and muscle activity. Single motor-cortex neurons displayed many muscle-like properties, but the structure of population activity was not muscle-like. Unlike muscle activity, neural activity was structured to avoid ‘trajectory tangling’: moments where similar activity patterns led to dissimilar future patterns. Avoidance of trajectory tangling was present across tasks and species. Network models revealed a potential reason for this consistent feature: low tangling confers noise robustness. Remarkably, we were able to predict motor cortex activity from muscle activity alone, by leveraging the hypothesis that muscle-like commands are embedded in additional structure that yields low tangling. Our results argue that motor cortex embeds descending commands in additional structure that ensures low tangling, and thus noise-robustness. The dominant structure in motor cortex may thus serve not a representational function (encoding specific variables) but a computational function: ensuring that outgoing commands can be generated reliably. Our results establish the utility of an emerging approach: understanding the structure of neural activity based on properties of population geometry that flow from normative principles such as noise robustness.
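
The tangling index has a compact definition (Russo et al., 2018): Q(t) = max over t' of ||x_dot(t) - x_dot(t')||^2 / (||x(t) - x(t')||^2 + eps). A direct numpy transcription:

    import numpy as np

    def tangling(X, dt=0.01, eps=1e-6):
        """X: (time, dims) trajectory, e.g. top PCs of neural or muscle activity.
        Low Q means similar states never lead to dissimilar futures."""
        Xd = np.gradient(X, dt, axis=0)            # finite-difference derivative
        Q = np.empty(len(X))
        for t in range(len(X)):
            num = np.sum((Xd - Xd[t]) ** 2, axis=1)
            den = np.sum((X - X[t]) ** 2, axis=1) + eps
            Q[t] = np.max(num / den)
        return Q

On this measure, the abstract's claim is that motor-cortex population trajectories yield systematically lower Q than muscle activity.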

Seminar · Neuroscience · Recording

Using noise to probe recurrent neural network structure and prune synapses

Rishidev Chaudhuri
University of California, Davis
Sep 24, 2020

Many networks in the brain are sparsely connected, and the brain eliminates synapses during development and learning. How could the brain decide which synapses to prune? In a recurrent network, determining the importance of a synapse between two neurons is a difficult computational problem, depending on the role that both neurons play and on all possible pathways of information flow between them. Noise is ubiquitous in neural systems, and often considered an irritant to be overcome. In the first part of this talk, I will suggest that noise could play a functional role in synaptic pruning, allowing the brain to probe network structure and determine which synapses are redundant. I will introduce a simple, local, unsupervised plasticity rule that either strengthens or prunes synapses using only synaptic weight and the noise-driven covariance of the neighboring neurons. For a subset of linear and rectified-linear networks, this rule provably preserves the spectrum of the original matrix and hence preserves network dynamics even when the fraction of pruned synapses asymptotically approaches 1. The plasticity rule is biologically-plausible and may suggest a new role for noise in neural computation. Time permitting, I will then turn to the problem of extracting structure from neural population data sets using dimensionality reduction methods. I will argue that nonlinear structures naturally arise in neural data and show how these nonlinearities cause linear methods of dimensionality reduction, such as Principal Components Analysis, to fail dramatically in identifying low-dimensional structure.
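
A hypothetical reading of such a rule (a sketch in the spirit of the abstract, not the talk's exact plasticity rule): score each existing synapse by its weight magnitude times the noise-driven covariance of the two neurons it connects, then prune the lowest-scoring fraction.

    import numpy as np

    def prune_step(W, activity, keep_frac=0.8):
        """W: (n, n) weights; activity: (n_samples, n) noise-driven activity."""
        C = np.cov(activity, rowvar=False)          # pairwise covariance from noise
        score = np.abs(W) * np.abs(C)
        cutoff = np.quantile(score[W != 0], 1 - keep_frac)
        return np.where(score >= cutoff, W, 0.0)    # drop low-importance synapses

    rng = np.random.default_rng(3)
    W = rng.normal(0, 1, (50, 50)) * (rng.random((50, 50)) < 0.3)
    activity = rng.standard_normal((5000, 50))      # placeholder noise responses
    W_sparse = prune_step(W, activity)
    print("synapses kept:", np.count_nonzero(W_sparse), "of", np.count_nonzero(W))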

Seminar · Neuroscience

Rapid State Changes Account for Apparent Brain and Behavior Variability

David McCormick
University of Oregon
Sep 16, 2020

Neural and behavioral responses to sensory stimuli are notoriously variable from trial to trial. Does this mean the brain is inherently noisy or that we don’t completely understand the nature of the brain and behavior? Here we monitor the state of activity of the animal through videography of the face, including pupil and whisker movements, as well as walking, while also monitoring the ability of the animal to perform a difficult auditory or visual task. We find that the state of the animal is continuously changing and is never stable. The animal is constantly becoming more or less activated (aroused) on a second and subsecond scale. These changes in state are reflected in all of the neural systems we have measured, including cortical, thalamic, and neuromodulatory activity. Rapid changes in cortical activity are highly correlated with changes in neural responses to sensory stimuli and the ability of the animal to perform auditory or visual detection tasks. On the intracellular level, these changes in forebrain activity are associated with large changes in neuronal membrane potential and the nature of network activity (e.g. from slow rhythm generation to sustained activation and depolarization). Monitoring cholinergic and noradrenergic axonal activity reveals widespread correlations across the cortex. However, we suggest that a significant component of these rapid state changes arise from glutamatergic pathways (e.g. corticocortical or thalamocortical), owing to their rapidity. Understanding the neural mechanisms of state-dependent variations in brain and behavior promises to significantly “denoise” our understanding of the brain.

Seminar · Neuroscience · Recording

On the purpose and origin of spontaneous neural activity

Tim Vogels
IST Austria
Sep 3, 2020

Spontaneous firing, observed in many neurons, is often attributed to ion channel or network level noise. Cortical cells during slow wave sleep exhibit transitions between so-called Up and Down states. In this sleep state, with limited sensory stimuli, neurons fire in the Up state. Spontaneous firing is also observed in slices of cholinergic interneurons, cerebellar Purkinje cells and even brainstem inspiratory neurons. In such in vitro preparations, where the functional relevance is long lost, neurons continue to display a rich repertoire of firing properties. It is perplexing that these neurons, instead of saving their energy during information downtime and functional irrelevance, are eager to fire. We propose that spontaneous firing is not a chance event but instead a vital activity for the well-being of a neuron. We postulate that neurons, in anticipation of synaptic inputs, keep their ATP levels at maximum. As recovery from inputs requires most of the energy resources, neurons have an ATP surplus and scarce ADP during synaptic quiescence. With ADP as the rate-limiting step, ATP production stalls in the mitochondria when ADP is low. This leads to toxic Reactive Oxygen Species (ROS) formation, which are known to disrupt many cellular processes. We hypothesize that spontaneous firing occurs under these conditions - as a release valve to spend energy and to restore ATP production, shielding the neuron against ROS. By linking a mitochondrial metabolism model to a conductance-based neuron model, we show that spontaneous firing depends on baseline ATP usage and on ATP-cost-per-spike. From our model emerges a mitochondrially mediated homeostatic mechanism that provides a recipe for different firing patterns. Our findings, though mostly affecting intracellular dynamics, may have large knock-on effects on the nature of neural coding. Hitherto it has been thought that the neural code is optimised for energy minimisation, but this may be true only when neurons do not experience synaptic quiescence.

Seminar · Neuroscience · Recording

Distributed replay in the human brain, and how to find it

Nicolas Schuck
MPI Berlin
Jul 28, 2020

I will present work on a novel fMRI analysis method that allows us to investigate sequential reactivation in the hippocampus. Our method focuses on analysing the time courses of probabilistic multivariate classifiers and allows us to infer the presence and frequency of fast sequential reactivation events. Using a paradigm in which we controlled the speed of sequential visually elicited activations, we validated the method in visual cortex for event sequences with only 32 ms between items. We show that detectability remains possible at low signal-to-noise ratios and when sequence events occur at unknown times. In a preliminary analysis, we show that mere exposure to our visual paradigm elicits reactivations in visual cortex at rest following the task. I then present work in which we tested how representations influence replay by asking whether transitions between task-state representations are reactivated at rest during hippocampal replay events. Participants learned to make decisions about ambiguous stimuli that depended on past events and attentionally filtered stimulus processing. fMRI signals during rest periods following this task indicated sequential reactivation of task states. These results indicate that adaptive task state representations are computed and replayed at different cortical sites. In combination with other methods, fMRI may allow us to unravel the coordinated nature of replay.
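
One common way to turn classifier time courses into evidence for fast sequences (a TDLM-flavoured sketch, not necessarily the exact method of the talk): for each time lag, regress the probability of state k at t+lag onto the probability of its predecessor at t, and average the slopes along the learned order.

    import numpy as np

    def sequenceness(probs, order, max_lag=10):
        """probs: (time, states) classifier probabilities; order: learned sequence."""
        out = []
        for lag in range(1, max_lag + 1):
            betas = []
            for a, b in zip(order[:-1], order[1:]):
                x = probs[:-lag, a] - probs[:-lag, a].mean()
                y = probs[lag:, b] - probs[lag:, b].mean()
                betas.append((x @ y) / (x @ x))     # simple regression slope
            out.append(np.mean(betas))
        return np.array(out)                        # a peak marks the dominant lag

    probs = np.random.dirichlet(np.ones(4), size=1000)  # placeholder classifier output
    print(sequenceness(probs, order=[0, 1, 2, 3]))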

Seminar · Neuroscience · Recording

Multi-resolution Multi-task Gaussian Processes: London air pollution

Ollie Hamelijnck
The Alan Turing Institute, London
Jul 8, 2020

Poor air quality in cities is a significant threat to health and life expectancy, with over 80% of people living in urban areas exposed to air quality levels that exceed World Health Organisation limits. In this session, I present a multi-resolution multi-task framework that handles evidence integration under varying spatio-temporal sampling resolution and noise levels. We have developed both shallow Gaussian Process (GP) mixture models and deep GP constructions that naturally handle this evidence integration, as well as biases in the mean. These models underpin our work at the Alan Turing Institute towards providing spatio-temporal forecasts of air pollution across London. We demonstrate the effectiveness of our framework on both synthetic examples and applications on London air quality. For further information go to: https://www.turing.ac.uk/research/research-projects/london-air-quality. Collaborators: Oliver Hamelijnck, Theodoros Damoulas, Kangrui Wang and Mark Girolami.
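
As a much-simplified, single-task, single-resolution stand-in for this framework (the talk's models are GP mixtures and deep GPs; this shows only the core ingredient), a GP with an explicit white-noise kernel term absorbs sensor noise while giving calibrated uncertainty per location:

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    X = np.random.uniform(0, 10, size=(100, 1))       # e.g. sensor locations / times
    y = np.sin(X[:, 0]) + 0.3 * np.random.randn(100)  # noisy readings (toy data)

    gp = GaussianProcessRegressor(RBF(1.0) + WhiteKernel(0.1), normalize_y=True).fit(X, y)
    mean, std = gp.predict(X, return_std=True)        # predictive mean and uncertainty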

Seminar · Neuroscience · Recording

Cortical-like dynamics in recurrent circuits optimized for sampling-based probabilistic inference

Máté Lengyel
University of Cambridge
Jun 7, 2020

Sensory cortices display a suite of ubiquitous dynamical features, such as ongoing noise variability, transient overshoots, and oscillations, that have so far escaped a common, principled theoretical account. We developed a unifying model for these phenomena by training a recurrent excitatory-inhibitory neural circuit model of a visual cortical hypercolumn to perform sampling-based probabilistic inference. The optimized network displayed several key biological properties, including divisive normalization, as well as stimulus-modulated noise variability, inhibition-dominated transients at stimulus onset, and strong gamma oscillations. These dynamical features had distinct functional roles in speeding up inferences and made predictions that we confirmed in novel analyses of awake monkey recordings. Our results suggest that the basic motifs of cortical dynamics emerge as a consequence of the efficient implementation of the same computational function — fast sampling-based inference — and predict further properties of these motifs that can be tested in future experiments.

Seminar · Neuroscience

High precision coding in visual cortex

Carsen Stringer
HHMI Janelia Research Campus
Jun 3, 2020

Single neurons in visual cortex provide unreliable measurements of visual features due to their high trial-to-trial variability. It is not known if this “noise” extends its effects over large neural populations to impair the global encoding of stimuli. We recorded simultaneously from ∼20,000 neurons in mouse primary visual cortex (V1) and found that the neural populations had discrimination thresholds of ∼0.34° in an orientation decoding task. These thresholds were nearly 100 times smaller than those reported behaviourally in mice. The discrepancy between neural and behavioural discrimination could not be explained by the types of stimuli we used, by behavioural states or by the sequential nature of perceptual learning tasks. Furthermore, higher-order visual areas lateral to V1 could be decoded equally well. These results imply that the limits of sensory perception in mice are not set by neural noise in sensory cortex, but by the limitations of downstream decoders.
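
The logic of a population discrimination threshold can be sketched by decoding two nearby orientations from a simulated tuned population and shrinking their separation until accuracy falls toward chance (all parameters illustrative):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n_neurons, n_trials = 500, 400
    pref = rng.uniform(0, 180, n_neurons)            # preferred orientations

    def population_response(theta):
        tuning = 10 * np.exp((np.cos(np.deg2rad(2 * (theta - pref))) - 1) / 0.3)
        return tuning + np.sqrt(tuning) * rng.standard_normal((n_trials, n_neurons))

    for dtheta in (2.0, 0.5, 0.1):                   # shrinking orientation difference
        X = np.vstack([population_response(45.0), population_response(45.0 + dtheta)])
        y = np.repeat([0, 1], n_trials)
        acc = LogisticRegression(max_iter=2000).fit(X[::2], y[::2]).score(X[1::2], y[1::2])
        print(f"delta = {dtheta} deg -> decoder accuracy {acc:.2f}")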

ePoster

Quantifying the signal and noise of decision processes during dual tasks with an efficient two-dimensional drift-diffusion model

Kyungmi Noh, Yul Kang

Bernstein Conference 2024

ePoster

Dissecting emergent network noise compensation mechanisms in working memory tasks

COSYNE 2022

ePoster

Evaluating Noise Tolerance in Drosophila Vision

COSYNE 2022

ePoster

Inferring implicit sensorimotor costs by inverse optimal control with signal dependent noise

COSYNE 2022

ePoster

Multiple bumps can enhance robustness to noise in continuous attractor networks

COSYNE 2022

ePoster

Sharing weights with noise-canceling anti-Hebbian plasticity

COSYNE 2022

ePoster

Computational mechanisms underlying thalamic regulation of prefrontal signal-to-noise ratio in decision making

Zhe Chen, Xiaohan Zhang, Michael Halassa

COSYNE 2023

ePoster

Dissecting cortical and subcortical contributions to perception with white noise optogenetic inhibition

Jackson Cone, Autumn Mitchell, Rachel Parker, John Maunsell

COSYNE 2023

ePoster

A generalized Weber’s law reveals behaviorally limiting slow noise in evidence accumulation

Victoria Shavina, Alex Pouget, Valerio Mante

COSYNE 2023

ePoster

maskNMF: a denoise-sparsen-detect pipeline for demixing dense imaging data faster than real time

Amol Pasarkar, Liam Paninski, Pengcheng Zhou, Melissa Wu, Ian Kinsella, Daisong Pan, Jiang Lan Fan, Zhen Wang, Lamiae Abdeladim, Darcy Peterka, Hillel Adesnik, Na Ji

COSYNE 2023

ePoster

Predictive dynamics improve noise robustness in a deep network model of the human auditory system

Ching Fang, Erica Shook, Justin Buck, Guillermo Horga

COSYNE 2023

ePoster

Representational drift leads to sparse activity solutions that are robust to noise and learning

Maanasa Natrajan & James Fitzgerald

COSYNE 2023

ePoster

Correlated Excitatory & Inhibitory Noise Mitigates Hebbian Synaptic Drift

Michelle Miller, Christoph Miehl, Brent Doiron

COSYNE 2025

ePoster

Uncertainty Calibration through Pretraining with Random Noise

Jeonghwan Cheon, Se-Bum Paik

COSYNE 2025

ePoster

Visual coding improves over development by refinement of noise amplitude rather than noise shape

Robert Wong, Naoki Hiratani, Geoffrey Goodhill

COSYNE 2025

ePoster

Effects of high-intensity white noise on short-term and long-term memory in male and female rats

Nino Pochkhidze, Mzia Zhvania, Nadezhda Japaridze, Giorgi Lobzanidze

FENS Forum 2024

ePoster

Estimation of neuronal biophysical parameters in the presence of experimental noise using computer simulations and probabilistic inference methods

Dániel Terbe, Balázs Szabó, Szabolcs Káli

FENS Forum 2024

ePoster

Impact of background noise on visual search performance

Kyriakos Nikolaidis, Hubert H. Kerschbaum

FENS Forum 2024

ePoster

Implications of synaptic noise on rate coding and temporal coding in the lateral superior olive: A dynamic-clamp study

Jonas Fisch, Eckhard Friauf

FENS Forum 2024

ePoster

Can inferior colliculus neurons predict behavioral auditory discrimination in noise?

Alexandra Martin, Chloé Huetz, Jean-Marc Edeline

FENS Forum 2024

ePoster

Metabolic neural constraints provide resilience to noise in feed-forward networks

Ivan Bulygin, Chaitanya Chintaluri, Tim P. Vogels

FENS Forum 2024

ePoster

The neural processing of natural audiovisual speech in noise in autism: A TRF approach

Theo Vanneau, Michael Crosse, John Foxe, Sophie Molholm

FENS Forum 2024

ePoster

What happens to the neural code of consonants when presented in background noise

Amarins Heeringa, Christine Köppl

FENS Forum 2024