Topic: variability

49 Seminars · 34 ePosters

Latest

Seminar · Neuroscience

Decoding stress vulnerability

Stamatina Tzanoulinou
University of Lausanne, Faculty of Biology and Medicine, Department of Biomedical Sciences
Feb 20, 2026

Although stress can be considered an ongoing process that helps an organism cope with present and future challenges, when it is too intense or uncontrollable it can lead to adverse consequences for physical and mental health. Social stress in particular is a highly prevalent traumatic experience, present in multiple contexts such as war, bullying and interpersonal violence, and it has been linked with increased risk for major depression and anxiety disorders. Nevertheless, not all individuals exposed to strong stressful events develop psychopathology, and the mechanisms of resilience and vulnerability are still under investigation. During this talk, I will identify key gaps in our knowledge about stress vulnerability and present recent data from our contextual fear learning protocol based on social defeat stress in mice.

Seminar · Neuroscience

Computational Mechanisms of Predictive Processing in Brains and Machines

Dr. Antonino Greco
Hertie Institute for Clinical Brain Research, Germany
Dec 10, 2025

Predictive processing offers a unifying view of neural computation, proposing that brains continuously anticipate sensory input and update internal models based on prediction errors. In this talk, I will present converging evidence for the computational mechanisms underlying this framework across human neuroscience and deep neural networks. I will begin with recent work showing that large-scale distributed prediction-error encoding in the human brain directly predicts how sensory representations reorganize through predictive learning. I will then turn to PredNet, a popular predictive-coding-inspired deep network that has been widely used to model real-world biological vision systems. Using dynamic stimuli generated with our Spatiotemporal Style Transfer algorithm, we demonstrate that PredNet relies primarily on low-level spatiotemporal structure and remains insensitive to high-level content, revealing limits in its generalization capacity. Finally, I will discuss new recurrent vision models that integrate top-down feedback connections with intrinsic neural variability, uncovering a dual mechanism for robust sensory coding in which neural variability decorrelates unit responses, while top-down feedback stabilizes network dynamics. Together, these results outline how prediction error signaling and top-down feedback pathways shape adaptive sensory processing in biological and artificial systems.

Seminar · Neuroscience

Neural mechanisms of optimal performance

Luca Mazzucato
University of Oregon
May 23, 2025

When we attend to a demanding task, our performance is poor at low arousal (when drowsy) or high arousal (when anxious), but optimal at intermediate arousal. This celebrated inverted-U relationship between performance and arousal, the Yerkes-Dodson law, is colloquially referred to as being "in the zone." In this talk, I will elucidate the behavioral and neural mechanisms linking arousal and performance under the Yerkes-Dodson law in a mouse model. During decision-making tasks, mice express an array of discrete strategies, whereby the optimal strategy occurs at intermediate arousal, measured by pupil size, consistent with the inverted-U law. Population recordings from the auditory cortex (A1) further revealed that sound encoding is optimal at intermediate arousal. To explain the computational principle underlying this inverted-U law, we modeled the A1 circuit as a spiking network with excitatory/inhibitory clusters, based on the observed functional clusters in A1. Arousal induced a transition from a multi-attractor phase (low arousal) to a single-attractor phase (high arousal), and performance was optimized at the transition point. The model also predicts stimulus- and arousal-induced modulations of neural variability, which we confirmed in the data. Our theory suggests that a single unifying dynamical principle, phase transitions in metastable dynamics, underlies both the inverted-U law of optimal performance and state-dependent modulations of neural variability.
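The attractor-phase picture can be caricatured in a two-population rate model (a toy sketch with made-up parameters, not the A1 spiking network from the talk): mutual inhibition yields multiple winner-take-all attractors at low drive, while strong drive collapses the dynamics onto a single attractor.

```python
import numpy as np

# Toy illustration (illustrative parameters, not the authors' model): two
# mutually inhibitory rate populations. Low drive leaves the network
# multistable (winner-take-all attractors); strong drive ("high arousal")
# collapses it to a single attractor.
def attractors(I, b=6.0, dt=0.1, steps=2000):
    f = lambda u: 1.0 / (1.0 + np.exp(-u))          # sigmoidal transfer
    finals = []
    for r0 in [(0.9, 0.1), (0.1, 0.9), (0.52, 0.48), (0.48, 0.52)]:
        r = np.array(r0)
        for _ in range(steps):
            drive = I - b * r[::-1]                 # each population inhibits the other
            r = r + dt * (-r + f(drive))            # Euler step of the rate dynamics
        finals.append(tuple(np.round(r, 2)))
    return set(finals)                              # distinct fixed points reached

print(len(attractors(I=3.0)))    # low drive: two winner-take-all attractors
print(len(attractors(I=10.0)))   # high drive: a single attractor
```

Sweeping the drive I between these two values locates the transition point where the winner-take-all states disappear, which is the regime the talk associates with optimal performance.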

Seminar · Neuroscience

Dimensionality reduction beyond neural subspaces

Alex Cayco Gajic
École Normale Supérieure
Jan 29, 2025

Over the past decade, neural representations have been studied through the lens of low-dimensional subspaces defined by the co-activation of neurons. However, this view has overlooked other forms of covarying structure in neural activity, including i) condition-specific high-dimensional neural sequences, and ii) representations that change over time due to learning or drift. In this talk, I will present a new framework that extends the classic view towards additional types of covariability that are not constrained to a fixed, low-dimensional subspace. In addition, I will present sliceTCA, a new tensor decomposition that captures and demixes these different types of covariability to reveal task-relevant structure in neural activity. Finally, I will close with some thoughts regarding the circuit mechanisms that could generate mixed covariability. Together this work points to a need to consider new possibilities for how neural populations encode sensory, cognitive, and behavioral variables beyond neural subspaces.
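The idea of a slice-type tensor component can be illustrated in a toy setting. The sketch below is not the sliceTCA implementation (shapes, names, and the single-component case are made up for illustration): it builds a trials × neurons × time tensor from one neuron-loading vector and one shared trial × time slice, then recovers both via a rank-1 SVD of the tensor unfolded along the neuron mode.

```python
import numpy as np

# Hypothetical illustration of a single "neuron-slicing" component: the
# trials x neurons x time tensor X is an outer product of a neuron loading
# vector with a (trials x time) slice shared by all neurons.
rng = np.random.default_rng(0)
K, N, T = 20, 50, 100                        # trials, neurons, time bins
loading = rng.random(N)                      # ground-truth neuron weights
slice_tt = rng.standard_normal((K, T))       # shared trial x time pattern
X = np.einsum('n,kt->knt', loading, slice_tt)  # rank-1 slice tensor, (K, N, T)

# Unfold along the neuron mode: rows = neurons, columns = (trial, time) pairs
X_unf = X.transpose(1, 0, 2).reshape(N, K * T)
U, S, Vt = np.linalg.svd(X_unf, full_matrices=False)
neuron_weights = U[:, 0] * S[0]              # recovered neuron loadings
recovered_slice = Vt[0].reshape(K, T)        # recovered trial x time slice

X_hat = np.einsum('n,kt->knt', neuron_weights, recovered_slice)
print(np.allclose(X_hat, X))                 # True: rank-1 structure recovered
```

A low-dimensional "neural subspace" component would instead be an outer product of a time vector with a trials × neurons slice; demixing several such components of different slicing types is the part that requires the full sliceTCA machinery.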

Seminar · Neuroscience · Recording

Currents of Hope: how noninvasive brain stimulation is reshaping modern psychiatric care; Adapting to diversity: Integrating variability in brain structure and function into personalized / closed-loop non-invasive brain stimulation for substance use disorders

Colleen Hanlon, PhD & Ghazaleh Soleimani, PhD
Brainsway / University of Minnesota
Mar 28, 2024

In March we will focus on TMS and host Ghazaleh Soleimani and Colleen Hanlon. The talks will take place on Thursday, March 28th, at noon ET – please be aware that this means 5 PM CET, since Boston has already switched to summer time! Ghazaleh Soleimani, PhD, is a postdoctoral fellow in Dr. Hamed Ekhtiari’s lab at the University of Minnesota. She is also the executive director of the International Network of tES/TMS for Addiction Medicine (INTAM). She will discuss “Adapting to diversity: Integrating variability in brain structure and function into personalized / closed-loop non-invasive brain stimulation for substance use disorders”. Colleen Hanlon, PhD, currently serves as Vice President of Medical Affairs for BrainsWay, a company specializing in medical devices for mental health, including TMS. Colleen previously worked at the Medical University of South Carolina and Wake Forest School of Medicine. She received the International Brain Stimulation Early Career Award in 2023. She will discuss “Currents of Hope: how noninvasive brain stimulation is reshaping modern psychiatric care”. As always, we will also get a glimpse of the “Person behind the science”. Please register via talks.stimulatingbrains.org to receive the (free) Zoom link, subscribe to our newsletter, or follow us on Twitter/X for further updates!

Seminar · Neuroscience · Recording

Imaging the subcortex; Microstructural and connectivity correlates of outcome variability in functional neurosurgery for movement disorders

Birte Forstmann, PhD & Francisca Ferreira, PhD
University of Amsterdam, Netherlands / University College London, UK
Dec 14, 2023

We are very much looking forward to hosting Francisca Ferreira and Birte Forstmann on December 14th, 2023, at noon ET / 6 PM CET. Francisca Ferreira is a PhD student and neurosurgery trainee at the University College London Queen Square Institute of Neurology and a Royal College of Surgeons “Emerging Leaders” program laureate. Her presentation is titled “Microstructural and connectivity correlates of outcome variability in functional neurosurgery for movement disorders”. Birte Forstmann, PhD, is the Director of the Amsterdam Brain and Cognition Center, a Professor of Cognitive Neuroscience at the University of Amsterdam, and a Professor by Special Appointment of Neuroscientific Testing of Psychological Models at the University of Leiden. Besides her scientific presentation (“Imaging the human subcortex”), she will give us a glimpse of the “Person behind the science”. You can register via talks.stimulatingbrains.org to receive the (free) Zoom link!

Seminar · Neuroscience · Recording

Tracking subjects' strategies in behavioural choice experiments at trial resolution

Mark Humphries
University of Nottingham
Dec 7, 2023

Psychology and neuroscience are increasingly looking to fine-grained analyses of decision-making behaviour, seeking to characterise not just the variation between subjects but also a subject's variability across time. When analysing the behaviour of each subject in a choice task, we ideally want to know not only when the subject has learnt the correct choice rule but also what the subject tried while learning. I introduce a simple but effective Bayesian approach to inferring the probability of different choice strategies at trial resolution. This can be used both for inferring when subjects learn, by tracking the probability of the strategy matching the target rule, and for inferring subjects' use of exploratory strategies during learning. Applied to data from rodent and human decision tasks, we find learning occurs earlier and more often than estimated using classical approaches. Around both learning and changes in the rewarded rules, the exploratory strategies of win-stay and lose-shift, often considered complementary, are consistently used independently. Indeed, we find the use of lose-shift is strong evidence that animals have latently learnt the salient features of a new rewarded rule. Our approach can be extended to any discrete choice strategy, and its low computational cost is ideally suited for real-time analysis and closed-loop control.
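To convey the flavour of trial-resolution strategy inference (a generic sketch, not the speaker's algorithm; the decay rate and priors below are illustrative choices), one can maintain a Beta posterior over P(strategy) whose pseudo-counts decay geometrically, so the estimate tracks strategy changes trial by trial instead of averaging over the whole session.

```python
import numpy as np

def strategy_probability(matches, gamma=0.9, a0=1.0, b0=1.0):
    """Trial-resolution probability that choices match a given strategy.

    matches: sequence of 1/0 flags, one per trial, marking whether that
    trial's choice is consistent with the strategy.
    gamma: evidence-decay rate (illustrative value); older trials count less.
    Returns the Beta-posterior mean of P(strategy) after each trial.
    """
    a, b = a0, b0
    means = []
    for m in matches:
        a = gamma * a + m          # decayed count of strategy-consistent trials
        b = gamma * b + (1 - m)    # decayed count of inconsistent trials
        means.append(a / (a + b))  # posterior mean of the Beta distribution
    return np.array(means)

# A subject that switches to the target rule halfway through the session:
choices = [0, 1, 0, 0, 1, 0] + [1] * 10
p = strategy_probability(choices)
print(p[-1] > 0.8)  # True: P(target rule) climbs quickly after the switch
```

The decay is what gives trial resolution: without it, early exploratory trials would permanently drag down the estimate, masking the moment of learning.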

Seminar · Neuroscience

Sleep deprivation and the human brain: from brain physiology to cognition

Ali Salehinejad
Leibniz Research Centre for Working Environment & Human Factors, Dortmund, Germany
Aug 29, 2023

Sleep strongly affects synaptic strength, making it critical for cognition, especially learning and memory formation. Whether and how sleep deprivation modulates human brain physiology and cognition is poorly understood. Here we examined how overnight sleep deprivation vs overnight sufficient sleep affects (a) cortical excitability, measured by transcranial magnetic stimulation, (b) inducibility of long-term potentiation (LTP)- and long-term depression (LTD)-like plasticity via transcranial direct current stimulation (tDCS), and (c) learning, memory, and attention. We found that sleep deprivation increases cortical excitability due to enhanced glutamate-related cortical facilitation and decreases and/or reverses GABAergic cortical inhibition. Furthermore, under sleep deprivation, tDCS-induced LTP-like plasticity (anodal) is abolished, while the inhibitory LTD-like plasticity (cathodal) converts to excitatory LTP-like plasticity. This is associated with increased EEG theta oscillations due to sleep pressure. Motor learning, a behavioral counterpart of plasticity, and working memory and attention, which rely on cortical excitability, are also impaired during sleep deprivation. Our study indicates that upscaled brain excitability and altered plasticity, due to sleep deprivation, are associated with impaired cognitive performance. Besides showing how brain physiology and cognition undergo changes (from neurophysiology to higher-order cognition) under sleep pressure, the findings have implications for variability and optimal application of noninvasive brain stimulation.

Seminar · Neuroscience

A recurrent network model of planning explains hippocampal replay and human behavior

Guillaume Hennequin
University of Cambridge, UK
May 31, 2023

When interacting with complex environments, humans can rapidly adapt their behavior to changes in task or context. To facilitate this adaptation, we often spend substantial periods of time contemplating possible futures before acting. For such planning to be rational, the benefits of planning to future behavior must at least compensate for the time spent thinking. Here we capture these features of human behavior by developing a neural network model where not only actions, but also planning, are controlled by prefrontal cortex. This model consists of a meta-reinforcement learning agent augmented with the ability to plan by sampling imagined action sequences drawn from its own policy, which we refer to as 'rollouts'. Our results demonstrate that this agent learns to plan when planning is beneficial, explaining the empirical variability in human thinking times. Additionally, the patterns of policy rollouts employed by the artificial agent closely resemble patterns of rodent hippocampal replays recently recorded in a spatial navigation task, in terms of both their spatial statistics and their relationship to subsequent behavior. Our work provides a new theory of how the brain could implement planning through prefrontal-hippocampal interactions, where hippocampal replays are triggered by - and in turn adaptively affect - prefrontal dynamics.

Seminar · Neuroscience · Recording

Dynamics of cortical circuits: underlying mechanisms and computational implications

Alessandro Sanzeni
Bocconi University, Milano
Jan 25, 2023

A signature feature of cortical circuits is the irregularity of neuronal firing, which manifests itself in the high temporal variability of spiking and the broad distribution of rates. Theoretical works have shown that this feature emerges dynamically in network models if coupling between cells is strong, i.e. if the mean number of synapses per neuron K is large and synaptic efficacy is of order 1/√K. However, the degree to which these models capture the mechanisms underlying neuronal firing in cortical circuits is not fully understood. Results have been derived using neuron models with current-based synapses, i.e. neglecting the dependence of synaptic current on the membrane potential, and an understanding of how irregular firing emerges in models with conductance-based synapses is still lacking. Moreover, at odds with the nonlinear responses to multiple stimuli observed in cortex, network models with strongly coupled cells respond linearly to inputs. In this talk, I will discuss the emergence of irregular firing and nonlinear response in networks of leaky integrate-and-fire neurons. First, I will show that, when synapses are conductance-based, irregular firing emerges if synaptic efficacy is of order 1/log(K) and, unlike in current-based models, persists even under the large heterogeneity of connections which has been reported experimentally. I will then describe an analysis of neural responses as a function of coupling strength and show that, while a linear input-output relation is ubiquitous at strong coupling, nonlinear responses are prominent at moderate coupling. I will conclude by discussing experimental evidence of moderate coupling and loose balance in the mouse cortex.
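The classic balance argument behind the 1/√K scaling can be checked numerically (a generic illustration, not the speaker's model; rates and parameters are made up): with K excitatory inputs of strength 1/√K, the mean drive grows like √K while its fluctuations stay O(1), so only tight cancellation between excitation and inhibition keeps the mean input finite, and the residual O(1) fluctuations then drive irregular firing.

```python
import numpy as np

# Numeric sketch of the balance argument: sum K Bernoulli spike trains with
# synaptic strength 1/sqrt(K). The mean drive scales as sqrt(K); the
# fluctuations (std) stay of order 1, independent of K.
rng = np.random.default_rng(0)
for K in [100, 10_000]:
    J = 1.0 / np.sqrt(K)                             # synaptic efficacy
    spikes = rng.binomial(1, 0.1, size=(K, 5000))    # K presynaptic trains, 5000 bins
    drive = J * spikes.sum(axis=0)                   # summed input per time bin
    print(K, round(drive.mean(), 1), round(drive.std(), 2))
```

The printed mean grows tenfold when K grows a hundredfold, while the standard deviation barely moves; this is the divergence that E/I balance must cancel in the current-based models discussed above.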

Seminar · Neuroscience

Signal in the Noise: models of inter-trial and inter-subject neural variability

Alex Williams
NYU/Flatiron
Nov 4, 2022

The ability to record large neural populations—hundreds to thousands of cells simultaneously—is a defining feature of modern systems neuroscience. Aside from improved experimental efficiency, what do these technologies fundamentally buy us? I'll argue that they provide an exciting opportunity to move beyond studying the "average" neural response. That is, by providing dense neural circuit measurements in individual subjects and moments in time, these recordings enable us to track changes across repeated behavioral trials and across experimental subjects. These two forms of variability are still poorly understood, despite their obvious importance to understanding the fidelity and flexibility of neural computations. Scientific progress on these points has been impeded by the fact that individual neurons are very noisy and unreliable. My group is investigating a number of customized statistical models to overcome this challenge. I will mention several of these models but focus particularly on a new framework for quantifying across-subject similarity in stochastic trial-by-trial neural responses. By applying this method to noisy representations in deep artificial networks and in mouse visual cortex, we reveal that the geometry of neural noise correlations is a meaningful feature of variation, which is neglected by current methods (e.g. representational similarity analysis).

Seminar · Neuroscience · Recording

Online Training of Spiking Recurrent Neural Networks​ With Memristive Synapses

Yigit Demirag
Institute of Neuroinformatics
Jul 6, 2022

Spiking recurrent neural networks (RNNs) are a promising tool for solving a wide variety of complex cognitive and motor tasks, due to their rich temporal dynamics and sparse processing. However, training spiking RNNs on dedicated neuromorphic hardware is still an open challenge. This is due mainly to the lack of local, hardware-friendly learning mechanisms that can solve the temporal credit assignment problem and ensure stable network dynamics, even when the weight resolution is limited. These challenges are further accentuated if one resorts to using memristive devices for in-memory computing to resolve the von Neumann bottleneck problem, at the expense of a substantial increase in variability in both the computation and the working memory of the spiking RNNs. In this talk, I will present our recent work, in which we introduced a PyTorch simulation framework of memristive crossbar arrays that enables accurate investigation of such challenges. I will show that the recently proposed e-prop learning rule can be used to train spiking RNNs whose weights are emulated in the presented simulation framework. Although e-prop locally approximates the ideal synaptic updates, it is difficult to implement the updates on the memristive substrate due to substantial device non-idealities. I will mention several widely adopted weight update schemes that primarily aim to cope with these device non-idealities, and demonstrate that accumulating gradients can enable online and efficient training of spiking RNNs on memristive substrates.
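The gradient-accumulation idea can be sketched in a few lines (an illustrative toy, not the authors' framework; the write granularity and gradient statistics are assumed for the example): updates smaller than one programmable conductance step would be lost on the device, but accumulating them digitally until they reach the step size preserves the ideal weight trajectory on average.

```python
import numpy as np

# Illustrative sketch: gradient accumulation for a memristive synapse with a
# coarse write granularity. Small e-prop updates are buffered digitally and
# flushed to the device only in whole conductance steps.
step = 0.05                 # smallest programmable conductance change (assumed)
w = 0.0                     # device conductance (the effective weight)
acc = 0.0                   # hidden accumulator kept in digital memory

rng = np.random.default_rng(1)
grads = rng.normal(0.01, 0.005, size=1000)   # stream of small weight updates

for g in grads:
    acc += g
    n_steps = np.trunc(acc / step)  # whole device steps now representable
    if n_steps != 0:
        w += n_steps * step         # quantized write to the device
        acc -= n_steps * step       # keep the unwritten remainder
print(abs(w - grads.sum()) < step)  # True: writes track the ideal update sum
```

Without the accumulator, every update here (mean 0.01, below the 0.05 step) would round to zero and the weight would never move, which is the failure mode the accumulation scheme avoids.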

Seminar · Neuroscience

Extrinsic control and autonomous computation in the hippocampal CA1 circuit

Ipshita Zutshi
NYU
Apr 27, 2022

In understanding circuit operations, a key issue is the extent to which neuronal spiking reflects local computation or responses to upstream inputs. Because pyramidal cells in CA1 do not have local recurrent projections, it is currently assumed that firing in CA1 is inherited from its inputs – thus, entorhinal inputs provide communication with the rest of the neocortex and the outside world, whereas CA3 inputs provide internal and past memory representations. Several studies have tested this hypothesis by lesioning or silencing either area CA3 or the entorhinal cortex and examining the effect on the firing of CA1 pyramidal cells. Despite the intense and careful work in this research area, the magnitudes and types of the reported physiological impairments vary widely across experiments. At least part of the existing variability and conflict is due to the different behavioral paradigms, designs and evaluation methods used by different investigators. Simultaneous manipulations in the same animal, or even separate manipulations of the different inputs to the hippocampal circuits in the same experiment, are rare. To address these issues, I used optogenetic silencing of the medial entorhinal cortex (mEC), unilaterally and bilaterally, and of the local CA1 region, and performed bilateral pharmacogenetic silencing of the entire CA3 region. I combined this with high-spatial-resolution recording of local field potentials (LFP) in the CA1-dentate axis and simultaneously collected firing pattern data from thousands of single neurons. Each experimental animal had up to two of these manipulations performed simultaneously. Silencing the mEC largely abolished extracellular theta and gamma currents in CA1, without affecting firing rates. In contrast, CA3 and local CA1 silencing strongly decreased firing of CA1 neurons without affecting theta currents. Each perturbation reconfigured the CA1 spatial map.
Yet, the ability of the CA1 circuit to support place field activity persisted, maintaining the same fraction of spatially tuned place fields, and reliable assembly expression as in the intact mouse. Thus, the CA1 network can maintain autonomous computation to support coordinated place cell assemblies without reliance on its inputs, yet these inputs can effectively reconfigure and assist in maintaining stability of the CA1 map.

Seminar · Neuroscience

Inter-individual variability in reward seeking and decision making: role of social life and consequence for vulnerability to nicotine

Philippe Faure
Neurophysiology and Behavior, Sorbonne University, Paris
Apr 7, 2022

Inter-individual variability refers to differences in the expression of behaviors between members of a population. For instance, some individuals take greater risks, are more attracted to immediate gains or are more susceptible to drugs of abuse than others. To probe the neural bases of inter-individual variability, we study reward seeking and decision-making in mice, and dissect the specific role of dopamine in the modulation of these behaviors. Using a spatial version of the multi-armed bandit task, in which mice are faced with consecutive binary choices, we could link modifications of midbrain dopamine cell dynamics with modulation of exploratory behaviors, a major component of individual characteristics in mice. By analyzing mouse behaviors in semi-naturalistic environments, we then explored the role of social relationships in the shaping of dopamine activity and associated behaviors. I will present recent data from the laboratory suggesting that changes in the activity of dopaminergic networks link social influences with variations in the expression of non-social behaviors: by acting on the dopamine system, the social context may indeed affect the capacity of individuals to make decisions, as well as their vulnerability to drugs of abuse, in particular nicotine.

Seminar · Neuroscience · Recording

Probabilistic computation in natural vision

Ruben Coen-Cagli
Albert Einstein College of Medicine
Mar 30, 2022

A central goal of vision science is to understand the principles underlying the perception and neural coding of the complex visual environment of our everyday experience. In the visual cortex, foundational work with artificial stimuli, and more recent work combining natural images and deep convolutional neural networks, have revealed much about the tuning of cortical neurons to specific image features. However, a major limitation of this existing work is its focus on single-neuron response strength to isolated images. First, during natural vision, the inputs to cortical neurons are not isolated but rather embedded in a rich spatial and temporal context. Second, the full structure of population activity—including the substantial trial-to-trial variability that is shared among neurons—determines encoded information and, ultimately, perception. In the first part of this talk, I will argue for a normative approach to study encoding of natural images in primary visual cortex (V1), which combines a detailed understanding of the sensory inputs with a theory of how those inputs should be represented. Specifically, we hypothesize that V1 response structure serves to approximate a probabilistic representation optimized to the statistics of natural visual inputs, and that contextual modulation is an integral aspect of achieving this goal. I will present a concrete computational framework that instantiates this hypothesis, and data recorded using multielectrode arrays in macaque V1 to test its predictions. In the second part, I will discuss how we are leveraging this framework to develop deep probabilistic algorithms for natural image and video segmentation.

Seminar · Neuroscience · Recording

Taming chaos in neural circuits

Rainer Engelken
Columbia University
Feb 23, 2022

Neural circuits exhibit complex activity patterns, both spontaneously and in response to external stimuli. Information encoding and learning in neural circuits depend on the ability of time-varying stimuli to control spontaneous network activity. In particular, variability arising from the sensitivity to initial conditions of recurrent cortical circuits can limit the information conveyed about the sensory input. Spiking and firing rate network models can exhibit such sensitivity to initial conditions, which is reflected in their dynamic entropy rate and attractor dimensionality computed from their full Lyapunov spectrum. I will show how chaos in both spiking and rate networks depends on biophysical properties of neurons and the statistics of time-varying stimuli. In spiking networks, increasing the input rate or coupling strength aids in controlling the driven target circuit, which is reflected in both a reduced trial-to-trial variability and a decreased dynamic entropy rate. With sufficiently strong input, a transition towards complete network state control occurs. Surprisingly, this transition does not coincide with the transition from chaos to stability but occurs at even larger values of external input strength. Controllability of spiking activity is facilitated when neurons in the target circuit have a sharp spike onset, thus a high speed by which neurons launch into the action potential. I will also discuss chaos and controllability in firing-rate networks in the balanced state. For these, external control of recurrent dynamics strongly depends on correlations in the input. This phenomenon was studied with a non-stationary dynamic mean-field theory that determines how the activity statistics and the largest Lyapunov exponent depend on frequency and amplitude of the input, recurrent coupling strength, and network size. This shows that uncorrelated inputs facilitate learning in balanced networks.
The results highlight the potential of Lyapunov spectrum analysis as a diagnostic for machine learning applications of recurrent networks. They are also relevant in light of recent advances in optogenetics that allow for time-dependent stimulation of a select population of neurons.
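The standard way to obtain a full Lyapunov spectrum of a rate network is the QR method: evolve an orthonormal basis of tangent vectors with the Jacobian of the dynamics and average the logs of the diagonal of R across re-orthonormalizations. A minimal sketch (illustrative parameters, not the speaker's networks), for the classic rate model x' = -x + J·tanh(x) in the chaotic regime:

```python
import numpy as np

# Minimal QR-method sketch for the Lyapunov spectrum of x' = -x + J tanh(x).
# Coupling gain g > 1 puts the random network in the chaotic regime.
rng = np.random.default_rng(0)
N, g, dt, steps = 50, 3.0, 0.05, 3000
J = rng.standard_normal((N, N)) * g / np.sqrt(N)   # random coupling matrix

x = rng.standard_normal(N)
Q = np.linalg.qr(rng.standard_normal((N, N)))[0]   # orthonormal tangent basis
log_r = np.zeros(N)
for _ in range(steps):
    phi = np.tanh(x)
    D = 1.0 - phi ** 2
    # Jacobian of the Euler map at the current state:
    # I + dt * (-I + J diag(1 - tanh(x)^2)); J * D scales column j by D[j]
    A = np.eye(N) + dt * (-np.eye(N) + J * D)
    x = x + dt * (-x + J @ phi)                    # advance the trajectory
    Q, R = np.linalg.qr(A @ Q)                     # evolve and re-orthonormalize
    log_r += np.log(np.abs(np.diag(R)))
lyap = np.sort(log_r / (steps * dt))[::-1]         # exponents, descending
print(lyap[0] > 0 > lyap[-1])  # True: positive leading exponent signals chaos
```

From the spectrum one can read off the quantities mentioned in the abstract: the dynamic (Kolmogorov-Sinai) entropy rate as the sum of the positive exponents, and the attractor dimensionality via the Kaplan-Yorke formula.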

Seminar · Neuroscience · Recording

Dynamic dopaminergic signaling probabilistically controls the timing of self-timed movements

Allison Hamilos
Assad Lab, Harvard University
Feb 23, 2022

Human movement disorders and pharmacological studies have long suggested molecular dopamine modulates the pace of the internal clock. But how does the endogenous dopaminergic system influence the timing of our movements? We examined the relationship between dopaminergic signaling and the timing of reward-related, self-timed movements in mice. Animals were trained to initiate licking after a self-timed interval following a start cue; reward was delivered if the animal’s first lick fell within a rewarded window (3.3-7 s). The first-lick timing distributions exhibited the scalar property, and we leveraged the considerable variability in these distributions to determine how the activity of the dopaminergic system related to the animals’ timing. Surprisingly, dopaminergic signals ramped up over seconds between the start-timing cue and the self-timed movement, with variable dynamics that predicted the movement/reward time, even on single trials. Steeply rising signals preceded early initiation, whereas slowly rising signals preceded later initiation. Higher baseline signals also predicted earlier self-timed movement. Optogenetic activation of dopamine neurons during self-timing did not trigger immediate movements, but rather caused systematic early-shifting of the timing distribution, whereas inhibition caused late-shifting, as if dopaminergic manipulation modulated the moment-to-moment probability of unleashing the planned movement. Consistent with this view, the dynamics of the endogenous dopaminergic signals quantitatively predicted the moment-by-moment probability of movement initiation. We conclude that ramping dopaminergic signals, potentially encoding dynamic reward expectation, probabilistically modulate the moment-by-moment decision of when to move. (Based on work from Hamilos et al., eLife, 2021).

Seminar · Neuroscience · Recording

NMC4 Short Talk: A theory for the population rate of adapting neurons disambiguates mean vs. variance-driven dynamics and explains log-normal response statistics

Laureline Logiaco (she/her)
Columbia University
Dec 2, 2021

Recently, the field of computational neuroscience has seen an explosion of the use of trained recurrent network models (RNNs) to model patterns of neural activity. These RNN models are typically characterized by tuned recurrent interactions between rate 'units' whose dynamics are governed by smooth, continuous differential equations. However, the response of biological single neurons is better described by all-or-none events - spikes - that are triggered in response to the processing of their synaptic input by the complex dynamics of their membrane. One line of research has attempted to resolve this discrepancy by linking the average firing probability of a population of simplified spiking neuron models to rate dynamics similar to those used for RNN units. However, challenges remain to account for complex temporal dependencies in the biological single neuron response and for the heterogeneity of synaptic input across the population. Here, we make progress by showing how to derive dynamic rate equations for a population of spiking neurons with multi-timescale adaptation properties - as this was shown to accurately model the response of biological neurons - while they receive independent time-varying inputs, leading to plausible asynchronous activity in the network. The resulting rate equations yield an insightful segregation of the population's response into dynamics that are driven by the mean signal received by the neural population, and dynamics driven by the variance of the input across neurons, with respective timescales that are in agreement with slice experiments. Further, these equations explain how input variability can shape log-normal instantaneous rate distributions across neurons, as observed in vivo. 
Our results help interpret properties of the neural population response and open the way to investigating whether the more biologically plausible and dynamically complex rate model we derive could provide useful inductive biases if used in an RNN to solve specific tasks.

Seminar · Neuroscience · Recording

NMC4 Short Talk: An optogenetic theory of stimulation near criticality

Brandon Benson
Stanford University
Dec 1, 2021

Recent advances in optogenetics allow for stimulation of neurons with sub-millisecond spike jitter and single neuron selectivity. Already this precision has revealed new levels of cortical sensitivity: stimulating tens of neurons can yield changes in the mean firing rate of thousands of similarly tuned neurons. This extreme sensitivity suggests that cortical dynamics are near criticality. Criticality is often studied in neural systems as a non-equilibrium thermodynamic process in which scale-free patterns of activity, called avalanches, emerge between distinct states of spontaneous activity. While criticality is well studied, it is still unclear what these distinct states of spontaneous activity are and what responses we expect from stimulation of this activity. By answering these questions, optogenetic stimulation will become a new avenue for approaching criticality and understanding cortical dynamics. Here, for the first time, we study the effects of optogenetic-like stimulation on a model near criticality. We study a model of Inhibitory/Excitatory (I/E) Leaky Integrate and Fire (LIF) spiking neurons which displays a region of high sensitivity as seen in experiments. We find that this region of sensitivity is, indeed, near criticality. We derive the Dynamic Mean Field Theory of this model and find that the distinct states of activity are asynchrony and synchrony. We use our theory to characterize response to various types and strengths of optogenetic stimulation. Our model and theory predict that asynchronous, near-critical dynamics can have two qualitatively different responses to stimulation: one characterized by high sensitivity, discrete event responses, and high trial-to-trial variability, and another characterized by low sensitivity, continuous responses with characteristic frequencies, and low trial-to-trial variability. While both response types may be considered near-critical in model space, networks which are closest to criticality show a hybrid of these response effects.
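The kind of perturbation experiment described here can be sketched in a few lines. The following is a toy illustration with made-up parameters, not the speaker's model: a small E/I network of leaky integrate-and-fire neurons, comparing population firing with and without extra "optogenetic-like" drive to a handful of excitatory cells.

```python
import numpy as np

def simulate(n=200, t_steps=2000, dt=1e-3, stim_ids=(), stim_amp=0.0, seed=0):
    """Mean population firing rate (Hz) of a toy E/I LIF network."""
    rng = np.random.default_rng(seed)
    n_e = int(0.8 * n)                       # 80% excitatory, 20% inhibitory
    tau, v_th, v_reset = 20e-3, 1.0, 0.0
    # random sparse weights; inhibitory columns are negative and stronger
    w = rng.binomial(1, 0.1, (n, n)) * 0.05
    w[:, n_e:] *= -4.0
    v = rng.uniform(0, v_th, n)
    spikes = np.zeros(n)
    ext = 1.1                                # suprathreshold constant drive
    for _ in range(t_steps):
        inp = np.full(n, ext)
        inp[list(stim_ids)] += stim_amp      # extra current to targeted cells
        s = (v >= v_th).astype(float)
        spikes += s
        v = np.where(s > 0, v_reset, v + dt / tau * (-v + inp + w @ s))
    return spikes.sum() / (n * t_steps * dt)

base = simulate()
stim = simulate(stim_ids=range(10), stim_amp=0.5)   # stimulate 10 E cells
print(f"baseline {base:.1f} Hz, stimulated {stim:.1f} Hz")
```

Stimulating a small subset raises the population rate; probing how strongly that response scales near the sensitive regime is the kind of question the talk's mean-field theory addresses.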

SeminarNeuroscienceRecording

Timing errors and decision making

Fuat Balci
University of Manitoba
Nov 30, 2021

Error monitoring refers to the ability to monitor one's own task performance without explicit feedback. This ability is studied typically in two-alternative forced-choice (2AFC) paradigms. Recent research showed that humans can also keep track of the magnitude and direction of errors in different magnitude domains (e.g., numerosity, duration, length). Based on the evidence that suggests a shared mechanism for magnitude representations, we aimed to investigate whether metric error monitoring ability is commonly governed across different magnitude domains. Participants reproduced/estimated temporal, numerical, and spatial magnitudes, after which they rated their confidence regarding first-order task performance and judged the direction of their reproduction/estimation errors. Participants were also tested in a 2AFC perceptual decision task and provided confidence ratings regarding their decisions. Results showed that variability in reproductions/estimations and metric error monitoring ability, as measured by combining confidence and error direction judgements, were positively related across temporal, spatial, and numerical domains. Metacognitive sensitivity in these metric domains was also positively associated with each other but not with metacognitive sensitivity in the 2AFC perceptual decision task. In conclusion, the current findings point to a general metric error monitoring ability that is shared across different metric domains with limited generalizability to perceptual decision-making.

SeminarNeuroscience

A universal probabilistic spike count model reveals ongoing modulation of neural variability in head direction cell activity in mice

David Liu
University of Cambridge
Oct 27, 2021

Neural responses are variable: even under identical experimental conditions, single neuron and population responses typically differ from trial to trial and across time. Recent work has demonstrated that this variability has predictable structure, can be modulated by sensory input and behaviour, and bears critical signatures of the underlying network dynamics and computations. However, current methods for characterising neural variability are primarily geared towards sensory coding in the laboratory: they require trials with repeatable experimental stimuli and behavioural covariates. In addition, they make strong assumptions about the parametric form of variability, rely on assumption-free but data-inefficient histogram-based approaches, or are altogether ill-suited for capturing variability modulation by covariates. Here we present a universal probabilistic spike count model that eliminates these shortcomings. Our method uses scalable Bayesian machine learning techniques to model arbitrary spike count distributions (SCDs) with flexible dependence on observed as well as latent covariates. Without requiring repeatable trials, it can flexibly capture covariate-dependent joint SCDs, and provide interpretable latent causes underlying the statistical dependencies between neurons. We apply the model to recordings from a canonical non-sensory neural population: head direction cells in the mouse. We find that variability in these cells defies a simple parametric relationship with mean spike count as assumed in standard models, its modulation by external covariates can be comparably strong to that of the mean firing rate, and slow low-dimensional latent factors explain away neural correlations. Our approach paves the way to understanding the mechanisms and computations underlying neural variability under naturalistic conditions, beyond the realm of sensory coding with repeatable stimuli.
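The point that variability can defy a simple parametric mean-variance relationship is easy to illustrate. The sketch below is not the paper's model; it simply compares the Fano factor (variance/mean) of Poisson spike counts with overdispersed counts drawn from a negative binomial at the same mean, showing why a fixed Poisson assumption can fail.

```python
import numpy as np

rng = np.random.default_rng(1)
mean_count = 5.0

# Poisson counts: variance equals the mean, so Fano factor ~ 1
poisson = rng.poisson(mean_count, 100_000)

# Negative binomial with dispersion r and success probability p chosen so
# the mean r(1-p)/p matches; small r gives strong overdispersion
r = 2.0
p = r / (r + mean_count)
negbin = rng.negative_binomial(r, p, 100_000)

fano_poisson = poisson.var() / poisson.mean()
fano_negbin = negbin.var() / negbin.mean()
print(f"Fano: Poisson ~ {fano_poisson:.2f}, neg. binomial ~ {fano_negbin:.2f}")
```

Two populations with identical mean rates can thus carry very different variability, which is what a flexible spike count distribution model is designed to capture.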

SeminarNeuroscienceRecording

Rastermap: Extracting structure from high dimensional neural data

Carsen Stringer
HHMI, Janelia Research Campus
Oct 27, 2021

Large-scale neural recordings contain high-dimensional structure that cannot be easily captured by existing data visualization methods. We therefore developed an embedding algorithm called Rastermap, which captures highly nonlinear relationships between neurons, and provides useful visualizations by assigning each neuron to a location in the embedding space. Compared to standard algorithms such as t-SNE and UMAP, Rastermap finds finer and higher dimensional patterns of neural variability, as measured by quantitative benchmarks. We applied Rastermap to a variety of datasets, including spontaneous neural activity, neural activity during a virtual reality task, widefield neural imaging data during a 2AFC task, artificial neural activity from an agent playing Atari games, and neural responses to visual textures. We found within these datasets unique subpopulations of neurons encoding abstract properties of the environment.

SeminarNeuroscience

Inclusive Basic Research

Dr Simone Badal and Dr Natasha Karp
University of the West Indies, AstraZeneca
Jun 9, 2021

Research into the basic phenomena of life can be conducted in vitro or in vivo, under tightly-controlled experimental conditions designed to limit variability. However stringent the protocol, these experiments do not occur in a cultural vacuum, and they are often subject to the same societal biases as other research disciplines. Many researchers uphold the status quo of biased basic research by not questioning the characteristics of their experimental animals, or the people from whom their tissue samples were collected. This means that our fundamental understanding of life has been built on biased models. This session will explore the ways in which basic life sciences research can be biased and the implications of this. We will discuss practical ways to assess your research design and how to make sure it is representative.

SeminarNeuroscience

Neural correlates of cognitive control across the adult lifespan

Cheryl Grady
May 27, 2021

Cognitive control involves the flexible allocation of mental resources during goal-directed behaviour and comprises three correlated but distinct domains—inhibition, task shifting, and working memory. Healthy ageing is characterised by reduced cognitive control. Professor Cheryl Grady and her team have been studying the influence of age differences in large-scale brain networks on the three control processes in a sample of adults from 20 to 86 years of age. In this webinar, Professor Cheryl Grady will describe three aspects of this work: 1) age-related dedifferentiation and reconfiguration of brain networks across the sub-domains; 2) individual differences in the relation of task-related activity to age, structural integrity, and task performance for each sub-domain; and 3) modulation of brain signal variability as a function of cognitive load and age during working memory. This research highlights the reduction in dynamic range of network activity that occurs with ageing and how this contributes to age differences in cognitive control. Cheryl Grady is a senior scientist at the Rotman Research Institute at Baycrest, and Professor in the departments of Psychiatry and Psychology at the University of Toronto. She held the Canada Research Chair in Neurocognitive Aging from 2005 to 2018 and was elected as a Fellow of the Royal Society of Canada in 2019. Her research uses MRI to determine the role of brain network connectivity in cognitive ageing.

SeminarNeuroscience

Psychological mechanisms and functions of 5-HT and SSRIs in potential therapeutic change: Lessons from the serotonergic modulation of action selection, learning, affect, and social cognition

Clark Roberts
University of Cambridge, Department of Psychology
May 26, 2021

Uncertainty regarding which psychological mechanisms are fundamental in mediating SSRI treatment outcomes, together with the wide-ranging variability in their efficacy, has raised more questions than it has answered. Since subjective mood states are an abstract scientific construct, available only through self-report in humans, and likely involve input from multiple top-down and bottom-up signals, it has been difficult to model at what level SSRIs interact with this process. Converging translational evidence indicates a role for serotonin in modulating context-dependent parameters of action selection, affect, and social cognition; and concurrently supporting learning mechanisms, which promote adaptability and behavioural flexibility. We examine the theoretical basis, ecological validity, and interaction of these constructs and how they may or may not exert a clinical benefit. Specifically, we bridge crucial gaps between disparate lines of research, particularly findings from animal models and human clinical trials, which often seem to present irreconcilable differences. In determining how SSRIs exert their effects, our approach examines the endogenous functions of 5-HT neurons, how 5-HT manipulations affect behaviour in different contexts, and how their therapeutic effects may be exerted in humans – which may illuminate issues of translational models, hierarchical mechanisms, idiographic variables, and social cognition.

SeminarNeuroscienceRecording

Neuronal variability and spatiotemporal dynamics in cortical network models

Chengcheng Huang
University of Pittsburgh
May 19, 2021

Neuronal variability is a reflection of recurrent circuitry and cellular physiology. The modulation of neuronal variability is a reliable signature of cognitive and processing state. A pervasive yet puzzling feature of cortical circuits is that despite their complex wiring, population-wide shared spiking variability is low dimensional with all neurons fluctuating en masse. We show that the spatiotemporal dynamics in a spatially structured network produce large population-wide shared variability. When the spatial and temporal scales of inhibitory coupling match known physiology, model spiking neurons naturally generate low dimensional shared variability that captures in vivo population recordings along the visual pathway. Further, we show that firing rate models with spatial coupling can also generate chaotic and low-dimensional rate dynamics. The chaotic parameter region expands when the network is driven by correlated noisy inputs, while being insensitive to the intensity of independent noise.

SeminarNeuroscienceRecording

Variability, maintenance and learning in birdsong

Adrienne Fairhall
University of Washington
Mar 31, 2021

The songbird zebra finch is an exemplary model system in which to study trial-and-error learning, as the bird learns its single song gradually through the production of many noisy renditions. It is also a good system in which to study the maintenance of motor skills, as the adult bird actively maintains its song and retains some residual plasticity. Motor learning occurs through the association of timing within the song, represented by sparse firing in nucleus HVC, with motor output, driven by nucleus RA. Here we show through modeling that the small level of observed variability in HVC can result in a network that adapts more readily to change, and is more robust to cell damage or death, than an unperturbed network. In collaboration with Carlos Lois’ lab, we also consider the effect of directly perturbing HVC through viral injection of toxins that affect the firing of projection neurons. Following these perturbations, the song is profoundly affected but is able to almost perfectly recover. We characterize the changes in song acoustics and syntax, and propose models for HVC architecture and plasticity that can account for some of the observed effects. Finally, we suggest a potential role for inputs from nucleus Uva in helping to control timing precision in HVC.

SeminarNeuroscience

Neural circuit parameter variability, robustness, and homeostasis

Astrid Prinz
Emory University
Mar 12, 2021

Neurons and neural circuits can produce stereotyped and reliable output activity on the basis of highly variable cellular, synaptic, and circuit properties. This is crucial for proper nervous system function throughout an animal’s life in the face of growth, perturbations, and molecular turnover. But how can reliable output arise from neurons and synapses whose parameters vary between individuals in a population, and within an individual over time? I will review how a combination of experimental and computational methods can be used to examine how neuron and network function depends on the underlying parameters, such as neuronal membrane conductances and synaptic strengths. Within the high-dimensional parameter space of a neural system, the subset of parameter combinations that produce biologically functional neuron or circuit activity is captured by the notion of a ‘solution space’. I will describe solution space structures determined from electrophysiology data, ion channel expression levels across populations of neurons and animals, and computational parameter space explorations. A key finding centers on experimental and computational evidence for parameter correlations that give structure to solution spaces. Computational modeling suggests that such parameter correlations can be beneficial for constraining neuron and circuit properties to functional regimes, while experimental results indicate that neural circuits may have evolved to implement some of these beneficial parameter correlations at the cellular level. Finally, I will review modeling work and experiments that seek to illuminate how neural systems can homeostatically navigate their parameter spaces to stably remain within their solution space and reliably produce functional output, or to return to their solution space after perturbations that temporarily disrupt proper neuron or network function.
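The "solution space" idea can be made concrete with a textbook example (this is an illustrative sketch, not any specific model from the talk): scan two parameters of a leaky integrate-and-fire neuron, leak conductance g and input current I, and keep the combinations whose steady firing rate lands in an arbitrary "functional" band. The surviving set forms a correlated band in parameter space rather than independent ranges for each parameter.

```python
import numpy as np

def lif_rate(g, i_ext, c=0.1, v_th=1.0):
    """Analytic firing rate of a leak-only LIF neuron (no refractory period)."""
    if i_ext / g <= v_th:
        return 0.0                       # subthreshold: never fires
    tau = c / g
    return 1.0 / (tau * np.log(i_ext / (i_ext - g * v_th)))

# grid-scan (g, I) and keep "functional" combos: rate between 5 and 15
solutions = [(g, i)
             for g in np.linspace(0.5, 3.0, 60)
             for i in np.linspace(0.5, 5.0, 60)
             if 5.0 <= lif_rate(g, i) <= 15.0]

gs, is_ = zip(*solutions)
corr = np.corrcoef(gs, is_)[0, 1]        # correlation within the solution set
print(f"{len(solutions)} solutions, g-I correlation {corr:.2f}")
```

The strong positive correlation inside the solution set is the toy analogue of the parameter correlations the talk describes: individually variable parameters that must co-vary to keep output functional.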

SeminarNeuroscience

How to combine brain stimulation with neuroimaging: concurrent tES-fMRI

Charlotte Stagg, Lucia Li, Axel Thielscher, Zeinab Esmaeilpour, Danny Wang, Michael Nitsche, Til Ole Bergmann, ...
University of Oxford, Imperial College London, ...
Feb 4, 2021

Transcranial electrical stimulation (tES) techniques, including transcranial alternating and direct current stimulation (tACS and tDCS), are non-invasive brain stimulation technologies increasingly used to modulate targeted neural and cognitive processes. Integrating tES with human functional magnetic resonance imaging (fMRI) provides a novel avenue in human brain mapping for investigating the neural mechanisms underlying tES. Advances in the field of tES-fMRI can be hampered by methodological variability between studies, which confounds comparability and replicability. To address these technical and methodological details and to propose a new framework for future research, the scientific international network of tES-fMRI (INTF) was founded with two main aims: to foster scientific exchange between researchers by sharing ideas, exchanging experiences, and publishing consensus articles; and to implement joint studies through a continuing dialogue between institutes across the globe. The network has organized three international scientific webinars, in which the considerable heterogeneity of technical and methodological aspects across studies combining tES with fMRI was discussed along with strategies to help bridge the respective knowledge gaps, and it distributes regular newsletters to network members via its Twitter and LinkedIn accounts.

SeminarNeuroscienceRecording

Cellular mechanisms behind stimulus evoked quenching of variability

Brent Doiron
University of Chicago
Jan 27, 2021

A wealth of experimental studies show that the trial-to-trial variability of neuronal activity is quenched during stimulus evoked responses. This fact has helped ground a popular view that the variability of spiking activity can be decomposed into two components. The first is due to irregular spike timing conditioned on the firing rate of a neuron (i.e. a Poisson process), and the second is the trial-to-trial variability of the firing rate itself. Quenching of the variability of the overall response is assumed to be a reflection of a suppression of firing rate variability. Network models have explained this phenomenon through a variety of circuit mechanisms. However, in all cases, from the vantage of a neuron embedded within the network, quenching of its response variability is inherited from its synaptic input. We analyze in vivo whole cell recordings from principal cells in layer (L) 2/3 of mouse visual cortex. While the variability of the membrane potential is quenched upon stimulation, the variability of the excitatory and inhibitory currents afferent to the neuron is amplified. This discord complicates the simple inheritance assumption that underpins network models of neuronal variability. We propose and validate an alternative (yet not mutually exclusive) mechanism for the quenching of neuronal variability. We show how an increase in synaptic conductance in the evoked state shunts the transfer of current to the membrane potential, formally decoupling changes in their trial-to-trial variability. The ubiquity of conductance-based neuronal transfer, combined with the simplicity of our model, provides an appealing framework. In particular, it shows how the dependence of cellular properties upon neuronal state is a critical, yet often ignored, factor. Further, our mechanism does not require a decomposition of variability into spiking and firing rate components, thereby challenging a long-held view of neuronal activity.
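The shunting idea itself is simple enough to demonstrate in a single-compartment sketch (illustrative parameters, not the talk's model): the same noisy input current produces a smaller membrane-potential variance when total conductance is higher, because the effective input resistance and membrane time constant both shrink.

```python
import numpy as np

def membrane_variance(g_total, t_steps=100_000, dt=1e-3, seed=0):
    """Stationary variance of V for a passive membrane driven by noise."""
    rng = np.random.default_rng(seed)
    c_m = 1.0                                          # membrane capacitance
    noise = rng.normal(0.0, 1.0 / np.sqrt(dt), t_steps)  # white-noise current
    v = np.empty(t_steps)
    v_now = 0.0
    for t in range(t_steps):
        # passive membrane: C dV/dt = -g_total * V + I_noise(t)
        v_now += dt / c_m * (-g_total * v_now + noise[t])
        v[t] = v_now
    return v[t_steps // 2:].var()                      # discard transient

var_rest = membrane_variance(g_total=1.0)    # low-conductance spontaneous state
var_evoked = membrane_variance(g_total=4.0)  # high-conductance evoked state
print(var_rest, var_evoked)
```

With identical input noise, the high-conductance "evoked" state shows markedly lower voltage variance: quenching at the membrane without any reduction in input variability.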

SeminarNeuroscienceRecording

The emergence and modulation of time in neural circuits and behavior

Luca Mazzucato
University of Oregon
Jan 22, 2021

Spontaneous behavior in animals and humans shows a striking amount of variability both in the spatial domain (which actions to choose) and temporal domain (when to act). Concatenating actions into sequences and behavioral plans reveals the existence of a hierarchy of timescales ranging from hundreds of milliseconds to minutes. How do multiple timescales emerge from neural circuit dynamics? How do circuits modulate temporal responses to flexibly adapt to changing demands? In this talk, we will present recent results from experiments and theory suggesting a new computational mechanism generating the temporal variability underlying naturalistic behavior and cortical activity. We will show how neural activity from premotor areas unfolds through temporal sequences of attractors, which predict the intention to act. These sequences naturally emerge from recurrent cortical networks, where correlated neural variability plays a crucial role in explaining the observed variability in action timing. We will then discuss how reaction times can be accelerated or slowed down via gain modulation, flexibly induced by neuromodulation or perturbations; and how gain modulation may control response timing in the visual cortex. Finally, we will present a new biologically plausible way to generate a reservoir of multiple timescales in cortical circuits.
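The gain-modulation effect on response timing can be illustrated with a one-dimensional rate model (our toy parameters, not the speaker's circuit model): increasing the gain of a sigmoidal transfer function speeds the rise of activity toward a fixed decision threshold, shortening the model "reaction time".

```python
import numpy as np

def reaction_time(gain, threshold=0.5, dt=1e-3, tau=0.05, drive=0.3):
    """Time for a 1-D rate model to ramp from rest to threshold."""
    r = 0.0
    for step in range(10_000):
        # tau * dr/dt = -r + tanh(gain * (r + drive))
        r += dt / tau * (-r + np.tanh(gain * (r + drive)))
        if r >= threshold:
            return step * dt            # seconds to reach threshold
    return None                         # threshold never reached

rt_low = reaction_time(gain=1.0)
rt_high = reaction_time(gain=2.0)
print(rt_low, rt_high)
```

Raising the gain, as neuromodulation might, accelerates the ramp and shortens the latency, which is the essence of the reaction-time modulation discussed in the talk.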

SeminarNeuroscienceRecording

High precision coding in visual cortex

Carsen Stringer
Janelia
Jan 8, 2021

Individual neurons in visual cortex provide the brain with unreliable estimates of visual features. It is not known if the single-neuron variability is correlated across large neural populations, thus impairing the global encoding of stimuli. We recorded simultaneously from up to 50,000 neurons in mouse primary visual cortex (V1) and in higher-order visual areas and measured stimulus discrimination thresholds of 0.35 degrees and 0.37 degrees respectively in an orientation decoding task. These neural thresholds were almost 100 times smaller than the behavioral discrimination thresholds reported in mice. This discrepancy could not be explained by stimulus properties or arousal states. Furthermore, the behavioral variability during a sensory discrimination task could not be explained by neural variability in primary visual cortex. Instead behavior-related neural activity arose dynamically across a network of non-sensory brain areas. These results imply that sensory perception in mice is limited by downstream decoders, not by neural noise in sensory representations.

SeminarNeuroscienceRecording

The emergence and modulation of time in neural circuits and behavior

Luca Mazzucato
University of Oregon
Nov 25, 2020

Spontaneous behavior in animals and humans shows a striking amount of variability both in the spatial domain (which actions to choose) and temporal domain (when to act). Concatenating actions into sequences and behavioral plans reveals the existence of a hierarchy of timescales ranging from hundreds of milliseconds to minutes. How do multiple timescales emerge from neural circuit dynamics? How do circuits modulate temporal responses to flexibly adapt to changing demands? In this talk, we will present recent results from experiments and theory suggesting a new computational mechanism generating the temporal variability underlying naturalistic behavior. We will show how neural activity from premotor areas unfolds through temporal sequences of attractors, which predict the intention to act. These sequences naturally emerge from recurrent cortical networks, where correlated neural variability plays a crucial role in explaining the observed variability in action timing. We will then discuss how reaction times in these recurrent circuits can be accelerated or slowed down via gain modulation, induced by neuromodulation or perturbations. Finally, we will present a general mechanism producing a reservoir of multiple timescales in recurrent networks.

SeminarNeuroscienceRecording

Dimensions of variability in circuit models of cortex

Brent Doiron
The University of Chicago
Nov 16, 2020

Cortical circuits receive multiple inputs from upstream populations with non-overlapping stimulus tuning preferences. Both the feedforward and recurrent architectures of the receiving cortical layer will reflect this diverse input tuning. We study how population-wide neuronal variability propagates through a hierarchical cortical network receiving multiple, independent, tuned inputs. We present new analysis of in vivo neural data from the primate visual system showing that the number of latent variables (dimension) needed to describe population shared variability is smaller in V4 populations compared to those of its downstream visual area PFC. We successfully reproduce this dimensionality expansion from our V4 to PFC neural data using a multi-layer spiking network with structured, feedforward projections and recurrent assemblies of multiple, tuned neuron populations. We show that tuning-structured connectivity generates attractor dynamics within the recurrent PFC circuit, where attractor competition is reflected in the high dimensional shared variability across the population. Indeed, restricting the dimensionality analysis to activity from one attractor state recovers the low-dimensional structure inherited from each of our tuned inputs. Our model thus introduces a framework where high-dimensional cortical variability is understood as "time-sharing" between distinct low-dimensional, tuning-specific circuit dynamics.

SeminarNeuroscience

Rapid State Changes Account for Apparent Brain and Behavior Variability

David McCormick
University of Oregon
Sep 17, 2020

Neural and behavioral responses to sensory stimuli are notoriously variable from trial to trial. Does this mean the brain is inherently noisy or that we don’t completely understand the nature of the brain and behavior? Here we monitor the state of activity of the animal through videography of the face, including pupil and whisker movements, as well as walking, while also monitoring the ability of the animal to perform a difficult auditory or visual task. We find that the state of the animal is continuously changing and is never stable. The animal is constantly becoming more or less activated (aroused) on a second and subsecond scale. These changes in state are reflected in all of the neural systems we have measured, including cortical, thalamic, and neuromodulatory activity. Rapid changes in cortical activity are highly correlated with changes in neural responses to sensory stimuli and the ability of the animal to perform auditory or visual detection tasks. On the intracellular level, these changes in forebrain activity are associated with large changes in neuronal membrane potential and the nature of network activity (e.g. from slow rhythm generation to sustained activation and depolarization). Monitoring cholinergic and noradrenergic axonal activity reveals widespread correlations across the cortex. However, we suggest that a significant component of these rapid state changes arise from glutamatergic pathways (e.g. corticocortical or thalamocortical), owing to their rapidity. Understanding the neural mechanisms of state-dependent variations in brain and behavior promises to significantly “denoise” our understanding of the brain.

SeminarNeuroscience

Cortical population coding of consumption decisions

Donald B. Katz
Brandeis University
Jun 30, 2020

The moment that a tasty substance enters an animal’s mouth, the clock starts ticking. Taste information transduced on the tongue signals whether a potential food will nourish or poison, and the animal must therefore use this information quickly if it is to decide whether the food should be swallowed or expelled. The system tasked with computing this important decision is rife with cross-talk and feedback—circuitry that all but ensures dynamics and between-neuron coupling in neural responses to tastes. In fact, cortical taste responses, rather than simply reporting individual taste identities, do contain characterizable dynamics: taste-driven firing first reflects the substance’s presence on the tongue, and then broadly codes taste quality, and then shifts again to correlate with the taste’s current palatability—the basis of consumption decisions—all across the 1-1.5 seconds after taste administration. Ensemble analyses reveal the onset of palatability-related firing to be a sudden, nonlinear transition happening in many neurons simultaneously, such that it can be reliably detected in single trials. This transition faithfully predicts both the nature and timing of consumption behaviours, despite the huge trial-to-trial variability in both; furthermore, perturbations of this transition interfere with production of the behaviours. These results demonstrate the specific importance of ensemble dynamics in the generation of behaviour, and reveal the taste system to be akin to a range of other integrated sensorimotor systems.

SeminarNeuroscienceRecording

Cortical-like dynamics in recurrent circuits optimized for sampling-based probabilistic inference

Máté Lengyel
University of Cambridge
Jun 8, 2020

Sensory cortices display a suite of ubiquitous dynamical features, such as ongoing noise variability, transient overshoots, and oscillations, that have so far escaped a common, principled theoretical account. We developed a unifying model for these phenomena by training a recurrent excitatory-inhibitory neural circuit model of a visual cortical hypercolumn to perform sampling-based probabilistic inference. The optimized network displayed several key biological properties, including divisive normalization, as well as stimulus-modulated noise variability, inhibition-dominated transients at stimulus onset, and strong gamma oscillations. These dynamical features had distinct functional roles in speeding up inferences and made predictions that we confirmed in novel analyses of awake monkey recordings. Our results suggest that the basic motifs of cortical dynamics emerge as a consequence of the efficient implementation of the same computational function — fast sampling-based inference — and predict further properties of these motifs that can be tested in future experiments.

SeminarNeuroscience

High precision coding in visual cortex

Carsen Stringer
HHMI Janelia Research Campus
Jun 4, 2020

Single neurons in visual cortex provide unreliable measurements of visual features due to their high trial-to-trial variability. It is not known if this “noise” extends its effects over large neural populations to impair the global encoding of stimuli. We recorded simultaneously from ∼20,000 neurons in mouse primary visual cortex (V1) and found that the neural populations had discrimination thresholds of ∼0.34° in an orientation decoding task. These thresholds were nearly 100 times smaller than those reported behaviourally in mice. The discrepancy between neural and behavioural discrimination could not be explained by the types of stimuli we used, by behavioural states or by the sequential nature of perceptual learning tasks. Furthermore, higher-order visual areas lateral to V1 could be decoded equally well. These results imply that the limits of sensory perception in mice are not set by neural noise in sensory cortex, but by the limitations of downstream decoders.

ePosterNeuroscience

Response variability can accelerate learning in feedforward-recurrent networks

Sigrid Trägenap, Matthias Kaschube

Bernstein Conference 2024

ePosterNeuroscience

Variability in Self-Organizing Networks of Neurons: Between Chance and Design

Samora Okujeni, Ulrich Egert

Bernstein Conference 2024

ePosterNeuroscience

Affine models explain tuning-dependent correlated variability within and between V1 and V2

Ji Xia, Ken Miller

COSYNE 2022

ePosterNeuroscience

A brain-computer interface in prefrontal cortex that suppresses neural variability

Ryan Williamson, Akash Umakantha, Chris Ki, Byron Yu, Matthew Smith

COSYNE 2022

ePosterNeuroscience

Diverse covariates modulate neural variability: a widespread (sub)cortical phenomenon

David Liu, Theoklitos Amvrosiadis, Nathalie Rochefort, Máté Lengyel

COSYNE 2022

ePosterNeuroscience

Experience early in auditory conditioning impacts across-animal variability in neural tuning

Kathleen Martin, Colin Bredenberg, Cristina Savin, Jordan Lei, Eero Simoncelli, Robert Froemke

COSYNE 2022

ePosterNeuroscience

Natural scene expectation shapes the structure of trial to trial variability in mid-level visual cortex

Patricia Stan, Matthew Smith

COSYNE 2022

ePosterNeuroscience

Relating Divisive Normalization to Modulation of Correlated Variability in Primary Visual Cortex

Oren Weiss, Hayley Bounds, Hillel Adesnik, Ruben Coen-Cagli

COSYNE 2022

ePosterNeuroscience

Sensory specific modulation of neural variability facilitates perceptual inference

Hyeyoung Shin, Hillel Adesnik

COSYNE 2022

ePosterNeuroscience

Single cell measures of tuning to imagined position during replay show preserved spatial tuning but quenched neural variability in place cells.

John Widloski, Matt Kleinman, David Foster

COSYNE 2022

ePosterNeuroscience

Accounting for visual cortex variability with distributed neural activity states

Anna Li, Ziyu Lu, J. Nathan Kutz, Eric Shea-Brown, Nicholas Steinmetz

COSYNE 2023

ePosterNeuroscience

Conductance Based Integrate and Fire Model with Correlated Inputs Captures Neural Variability

Logan Becker, Thibaud Taillefumier, Nicholas Priebe, Eyal Seidemann, Baowang Li

COSYNE 2023

ePosterNeuroscience

Decoding momentary gain variability from neuronal populations

Corey M Ziemba, Zoe Boundy-Singer, Robbe Goris

COSYNE 2023

ePosterNeuroscience

Inter-animal variability in learning depends on transfer of pre-task experience via the hippocampus

Cristofer Holobetz, Zhuonan Yang, Greer Williams, Shrabasti Jana, David Kastner

COSYNE 2023

ePosterNeuroscience

Optimal control under uncertainty predicts variability in human navigation behavior

Fabian Kessler, Julia Frankenstein, Constantin Rothkopf

COSYNE 2023

ePosterNeuroscience

Balanced two-photon holographic bidirectional optogenetics defines the mechanism for stimulus quenching of neural variability

Kevin Sit, Brent Doiron, Chengcheng Huang, Hillel Adesnik

COSYNE 2025

ePosterNeuroscience

Inter-individual Variability in Primate Inferior Temporal Cortex Representations: Insights from Macaque Neural Responses and Artificial Neural Networks

Kohitij Kar, James DiCarlo

COSYNE 2025

ePosterNeuroscience

Neural sampling in a balanced spiking network with internally generated variability

Xinruo Yang, Wenhao Zhang, Brent Doiron

COSYNE 2025

ePosterNeuroscience

The Role of Neural Variability in Supporting Few-shot Generalization in Cortex

Praveen Venkatesh, Jiaqi Shang, Corbett Bennett, Sam Gale, Greggory Heller, Tamina Ramirez, Severine Durand, Eric Shea-Brown, Shawn Olsen, Stefan Mihalas

COSYNE 2025

ePosterNeuroscience

Age-related changes in neural variability in a decision-making task

Fenying Zang, Anup Khanal, Sonja Förster, International Brain Laboratory, Anne K Churchland, Anne E Urai

FENS Forum 2024

ePosterNeuroscience

Basal ganglia pathways for regulating motor skill variability

Sophie Elvig, Oluwatomiwa Oladunni, Steffen Wolff

FENS Forum 2024

ePosterNeuroscience

Controlling morpho-electrophysiological variability of neurons with detailed biophysical models

Alexis Arnaudon, Maria Reva, Mickael Zbili, Henry Markram, Werner Van Geit, Lida Kanari

FENS Forum 2024

ePosterNeuroscience

Exploring the variability and functional implications of axon initial segment morphology in hippocampal neurons

Christian Thome, Nikolas Stevens, Juri Monath, Andreas Draguhn, Maren Engelhardt*, Martin Both*

FENS Forum 2024

ePosterNeuroscience

The influence of pulse shape and current direction of TMS on test-retest reliability and variability of single pulse TMS protocols

Desmond Agboada, Roman Rethwilm, Wolfgang Seiberl, Wolfgang Mack

FENS Forum 2024

ePosterNeuroscience

Interindividual variability of neuronal connectivity and function in zebrafish olfactory bulb

Ruth Eneida Montano Crespo, Alexandra Graff Meyer, Tomáš Gancarčik, Nila R. Mönig, Michal Januszewski, Bo Hu, Nesibe Z. Temiz, Rainer W. Friedrich

FENS Forum 2024

ePosterNeuroscience

Mapping individual variability in the pituitary gland: A new volumetric atlas

Fabien Schneider, Manel Merabet, Jérôme Redouté, Nicolas Costes, Claire Boutet, Germain Natacha, Bogdan Galusca

FENS Forum 2024

ePosterNeuroscience

No two mice alike: Leveraging inter-individual variability in threat conditioning of inbred mice to model trait anxiety

Irina Kovlyagina, Anna Wierczeiko, Hristo Todorov, Eric Jacobi, Margarita Tevosian, Jakob von Engelhardt, Susanne Gerber, Beat Lutz

FENS Forum 2024

ePosterNeuroscience

Reaction time variability in a delayed memory saccade task replicated by a recurrent neural network model

Roger Herikstad, Camilo Libedinsky

FENS Forum 2024

ePosterNeuroscience

Spatial and topological variability of dendritic morphology in the motion detection pathway of Drosophila melanogaster

Nikolas Drummond, Alexander Borst

FENS Forum 2024

ePosterNeuroscience

The variability of spectro-laminar beta rhythm patterns in macaque motor cortex reflects task and behavioral parameters

Laura López-Galdo, Simon Nougaret, Demian Battaglia, Bjørg Elisabeth Kilavik

FENS Forum 2024
