
Variability

Topic spotlight: variability (World Wide)

Discover seminars, jobs, and research tagged with variability across World Wide.
92 curated items · 57 Seminars · 34 ePosters · 1 Position
Updated in 2 months
Seminar · Neuroscience

Decoding stress vulnerability

Stamatina Tzanoulinou
University of Lausanne, Faculty of Biology and Medicine, Department of Biomedical Sciences
Feb 19, 2026

Although stress can be considered an ongoing process that helps an organism cope with present and future challenges, when it is too intense or uncontrollable it can have adverse consequences for physical and mental health. Social stress in particular is a highly prevalent traumatic experience, present in multiple contexts such as war, bullying and interpersonal violence, and it has been linked with increased risk for major depression and anxiety disorders. Nevertheless, not all individuals exposed to strong stressful events develop psychopathology, and the mechanisms of resilience and vulnerability are still under investigation. During this talk, I will identify key gaps in our knowledge about stress vulnerability and present recent data from our contextual fear learning protocol based on social defeat stress in mice.

Seminar · Neuroscience

Computational Mechanisms of Predictive Processing in Brains and Machines

Dr. Antonino Greco
Hertie Institute for Clinical Brain Research, Germany
Dec 9, 2025

Predictive processing offers a unifying view of neural computation, proposing that brains continuously anticipate sensory input and update internal models based on prediction errors. In this talk, I will present converging evidence for the computational mechanisms underlying this framework across human neuroscience and deep neural networks. I will begin with recent work showing that large-scale distributed prediction-error encoding in the human brain directly predicts how sensory representations reorganize through predictive learning. I will then turn to PredNet, a popular predictive-coding-inspired deep network that has been widely used to model real-world biological vision systems. Using dynamic stimuli generated with our Spatiotemporal Style Transfer algorithm, we demonstrate that PredNet relies primarily on low-level spatiotemporal structure and remains insensitive to high-level content, revealing limits in its generalization capacity. Finally, I will discuss new recurrent vision models that integrate top-down feedback connections with intrinsic neural variability, uncovering a dual mechanism for robust sensory coding in which neural variability decorrelates unit responses, while top-down feedback stabilizes network dynamics. Together, these results outline how prediction error signaling and top-down feedback pathways shape adaptive sensory processing in biological and artificial systems.

Position

Alex Pitti

ETIS Lab (CNRS, CY Cergy-Paris University, ENSEA)
Dec 5, 2025

This PhD is funded by the French ANR under a 4-year project on sensorimotor integration of variability during birdsong learning. The applicant will develop a developmental, brain-inspired artificial neural model that learns sound structure in real time and without explicit supervision. Until now, AI models for developmental learning of vocalizations have been validated solely by comparison against human-annotated corpora, not yet via direct sensorimotor interactions with living animals. We expect to do so with an interactive robot, under the framework of active inference and predictive coding.

Seminar · Neuroscience

Neural mechanisms of optimal performance

Luca Mazzucato
University of Oregon
May 22, 2025

When we attend to a demanding task, our performance is poor at low arousal (when drowsy) or high arousal (when anxious), but optimal at intermediate arousal. This celebrated Yerkes-Dodson inverted-U law relating performance and arousal is colloquially referred to as being "in the zone." In this talk, I will elucidate the behavioral and neural mechanisms linking arousal and performance under the Yerkes-Dodson law in a mouse model. During decision-making tasks, mice express an array of discrete strategies, whereby the optimal strategy occurs at intermediate arousal, measured by pupil size, consistent with the inverted-U law. Population recordings from the auditory cortex (A1) further revealed that sound encoding is optimal at intermediate arousal. To explain the computational principle underlying this inverted-U law, we modeled the A1 circuit as a spiking network with excitatory/inhibitory clusters, based on the observed functional clusters in A1. Arousal induced a transition from a multi-attractor phase (low arousal) to a single-attractor phase (high arousal), with performance optimized at the transition point. The model also predicts stimulus- and arousal-induced modulations of neural variability, which we confirmed in the data. Our theory suggests that a single unifying dynamical principle, phase transitions in metastable dynamics, underlies both the inverted-U law of optimal performance and state-dependent modulations of neural variability.

Seminar · Neuroscience

Dimensionality reduction beyond neural subspaces

Alex Cayco Gajic
École Normale Supérieure
Jan 28, 2025

Over the past decade, neural representations have been studied through the lens of low-dimensional subspaces defined by the co-activation of neurons. However, this view has overlooked other forms of covarying structure in neural activity, including i) condition-specific high-dimensional neural sequences, and ii) representations that change over time due to learning or drift. In this talk, I will present a new framework that extends the classic view towards additional types of covariability that are not constrained to a fixed, low-dimensional subspace. In addition, I will present sliceTCA, a new tensor decomposition that captures and demixes these different types of covariability to reveal task-relevant structure in neural activity. Finally, I will close with some thoughts regarding the circuit mechanisms that could generate mixed covariability. Together, this work points to a need to consider new possibilities for how neural populations encode sensory, cognitive, and behavioral variables beyond neural subspaces.

Seminar · Neuroscience · Recording

Currents of Hope: how noninvasive brain stimulation is reshaping modern psychiatric care; Adapting to diversity: Integrating variability in brain structure and function into personalized / closed-loop non-invasive brain stimulation for substance use disorders

Colleen Hanlon, PhD & Ghazaleh Soleimani, PhD
Brainsway / University of Minnesota
Mar 27, 2024

In March we will focus on TMS and host Ghazaleh Soleimani and Colleen Hanlon. The talks will take place on Thursday, March 28th at noon ET – please be aware that this means 5 PM CET, since Boston has already switched to summer time! Ghazaleh Soleimani, PhD, is a postdoctoral fellow in Dr. Hamed Ekhtiari's lab at the University of Minnesota. She is also the executive director of the International Network of tES/TMS for Addiction Medicine (INTAM). She will discuss "Adapting to diversity: Integrating variability in brain structure and function into personalized / closed-loop non-invasive brain stimulation for substance use disorders". Colleen Hanlon, PhD, currently serves as Vice President of Medical Affairs for BrainsWay, a company specializing in medical devices for mental health, including TMS. Colleen previously worked at the Medical University of South Carolina and Wake Forest School of Medicine. She received the International Brain Stimulation Early Career Award in 2023. She will discuss "Currents of Hope: how noninvasive brain stimulation is reshaping modern psychiatric care". As always, we will also get a glimpse of the "Person behind the science". Please register via talks.stimulatingbrains.org to receive the (free) Zoom link, subscribe to our newsletter, or follow us on Twitter/X for further updates!

Seminar · Neuroscience · Recording

Imaging the subcortex; Microstructural and connectivity correlates of outcome variability in functional neurosurgery for movement disorders

Birte Forstmann, PhD & Francisca Ferreira, PhD
University of Amsterdam, Netherlands / University College London, UK
Dec 13, 2023

We are very much looking forward to hosting Francisca Ferreira and Birte Forstmann on December 14th, 2023, at noon ET / 6 PM CET. Francisca Ferreira is a PhD student and neurosurgery trainee at the University College London Queen Square Institute of Neurology and a Royal College of Surgeons "Emerging Leaders" programme laureate. Her presentation title will be: "Microstructural and connectivity correlates of outcome variability in functional neurosurgery for movement disorders". Birte Forstmann, PhD, is the Director of the Amsterdam Brain and Cognition Center, a Professor of Cognitive Neuroscience at the University of Amsterdam, and a Professor by Special Appointment of Neuroscientific Testing of Psychological Models at the University of Leiden. Besides her scientific presentation ("Imaging the human subcortex"), she will give us a glimpse of the "Person behind the science". You can register via talks.stimulatingbrains.org to receive the (free) Zoom link!

Seminar · Neuroscience · Recording

Tracking subjects' strategies in behavioural choice experiments at trial resolution

Mark Humphries
University of Nottingham
Dec 6, 2023

Psychology and neuroscience are increasingly looking to fine-grained analyses of decision-making behaviour, seeking to characterise not just the variation between subjects but also a subject's variability across time. When analysing the behaviour of each subject in a choice task, we ideally want to know not only when the subject has learnt the correct choice rule but also what the subject tried while learning. I introduce a simple but effective Bayesian approach to inferring the probability of different choice strategies at trial resolution. This can be used both for inferring when subjects learn, by tracking the probability of the strategy matching the target rule, and for inferring subjects' use of exploratory strategies during learning. Applied to data from rodent and human decision tasks, we find learning occurs earlier and more often than estimated using classical approaches. Around both learning and changes in the rewarded rules, the exploratory strategies of win-stay and lose-shift, often considered complementary, are consistently used independently. Indeed, we find the use of lose-shift is strong evidence that animals have latently learnt the salient features of a new rewarded rule. Our approach can be extended to any discrete choice strategy, and its low computational cost is ideally suited for real-time analysis and closed-loop control.
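The trial-resolution idea can be illustrated with a minimal sketch (not the authors' implementation): a Beta-Bernoulli model whose evidence decays at rate gamma, so the estimated probability that behaviour matches a candidate strategy can track changes across trials. The decay parameter and update rule here are illustrative assumptions.

```python
def track_strategy(choices, predictions, gamma=0.9):
    """Track P(strategy) across trials with a decaying Beta-Bernoulli model.

    choices: the subject's observed choice on each trial.
    predictions: the choice the candidate strategy would have made.
    gamma < 1 down-weights old evidence so the estimate can follow
    changes in the subject's behaviour (illustrative value).
    """
    alpha, beta = 1.0, 1.0  # uniform Beta prior
    probs = []
    for c, p in zip(choices, predictions):
        match = 1.0 if c == p else 0.0
        # decay the accumulated evidence, then add the new trial
        alpha = gamma * alpha + match
        beta = gamma * beta + (1.0 - match)
        probs.append(alpha / (alpha + beta))  # posterior mean
    return probs
```

Running one such tracker per candidate strategy (target rule, win-stay, lose-shift, ...) gives trial-by-trial strategy probabilities that can be compared directly.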

Seminar · Neuroscience

Sleep deprivation and the human brain: from brain physiology to cognition

Ali Salehinejad
Leibniz Research Centre for Working Environment & Human Factors, Dortmund, Germany
Aug 28, 2023

Sleep strongly affects synaptic strength, making it critical for cognition, especially learning and memory formation. Whether and how sleep deprivation modulates human brain physiology and cognition is poorly understood. Here we examined how overnight sleep deprivation vs overnight sufficient sleep affects (a) cortical excitability, measured by transcranial magnetic stimulation, (b) inducibility of long-term potentiation (LTP)- and long-term depression (LTD)-like plasticity via transcranial direct current stimulation (tDCS), and (c) learning, memory, and attention. We found that sleep deprivation increases cortical excitability due to enhanced glutamate-related cortical facilitation and decreases and/or reverses GABAergic cortical inhibition. Furthermore, under sleep deprivation, tDCS-induced LTP-like plasticity (anodal) is abolished, while the inhibitory LTD-like plasticity (cathodal) converts to excitatory LTP-like plasticity. This is associated with increased EEG theta oscillations due to sleep pressure. Motor learning, behavioral counterparts of plasticity, and working memory and attention, which rely on cortical excitability, are also impaired during sleep deprivation. Our study indicates that upscaled brain excitability and altered plasticity, due to sleep deprivation, are associated with impaired cognitive performance. Besides showing how brain physiology and cognition undergo changes (from neurophysiology to higher-order cognition) under sleep pressure, the findings have implications for the variability and optimal application of noninvasive brain stimulation.

Seminar · Neuroscience

A recurrent network model of planning explains hippocampal replay and human behavior

Guillaume Hennequin
University of Cambridge, UK
May 30, 2023

When interacting with complex environments, humans can rapidly adapt their behavior to changes in task or context. To facilitate this adaptation, we often spend substantial periods of time contemplating possible futures before acting. For such planning to be rational, the benefits of planning to future behavior must at least compensate for the time spent thinking. Here we capture these features of human behavior by developing a neural network model where not only actions, but also planning, are controlled by prefrontal cortex. This model consists of a meta-reinforcement learning agent augmented with the ability to plan by sampling imagined action sequences drawn from its own policy, which we refer to as 'rollouts'. Our results demonstrate that this agent learns to plan when planning is beneficial, explaining the empirical variability in human thinking times. Additionally, the patterns of policy rollouts employed by the artificial agent closely resemble patterns of rodent hippocampal replays recently recorded in a spatial navigation task, in terms of both their spatial statistics and their relationship to subsequent behavior. Our work provides a new theory of how the brain could implement planning through prefrontal-hippocampal interactions, where hippocampal replays are triggered by - and in turn adaptively affect - prefrontal dynamics.

Seminar · Cognition

Prosody in the voice, face, and hands changes which words you hear

Hans Rutger Bosker
Donders Institute of Radboud University
May 22, 2023

Speech may be characterized as conveying both segmental information (i.e., about vowels and consonants) as well as suprasegmental information - cued through pitch, intensity, and duration - also known as the prosody of speech. In this contribution, I will argue that prosody shapes low-level speech perception, changing which speech sounds we hear. Perhaps the most notable example of how prosody guides word recognition is the phenomenon of lexical stress, whereby suprasegmental F0, intensity, and duration cues can distinguish otherwise segmentally identical words, such as "PLAto" vs. "plaTEAU" in Dutch. Work from our group showcases the vast variability in how different talkers produce stressed vs. unstressed syllables, while also unveiling the remarkable flexibility with which listeners can learn to handle this between-talker variability. It also emphasizes that lexical stress is a multimodal linguistic phenomenon, with the voice, lips, and even hands conveying stress in concert. In turn, human listeners actively weigh these multisensory cues to stress depending on the listening conditions at hand. Finally, lexical stress is presented as having a robust and lasting impact on low-level speech perception, even down to changing vowel perception. Thus, prosody - in all its multisensory forms - is a potent factor in speech perception, determining what speech sounds we hear.

Seminar · Artificial Intelligence · Recording

Computational models and experimental methods for the human cornea

Anna Pandolfi
Politecnico di Milano
May 1, 2023

The eye is a multi-component biological system in which mechanics, optics, transport phenomena and chemical reactions are strictly interlaced, characterized by the typical bio-variability in sizes and material properties. The eye's response to external action is patient-specific and can be predicted only by a customized approach that accounts for the multiple physics and for the intrinsic microstructure of the tissues, developed with the aid of forefront means of computational biomechanics. Our activity in recent years has been devoted to the development of a comprehensive model of the cornea that aims at being entirely patient-specific. While the geometrical aspects are fully under control, given the sophisticated diagnostic machinery able to provide fully three-dimensional images of the eye, the major difficulties are related to the characterization of the tissues, which requires the setup of in-vivo tests to complement the well-documented results of in-vitro tests. The interpretation of in-vivo tests is very complex, since the entire structure of the eye is involved and the characterization of a single tissue is not trivial. The availability of micromechanical models constructed from detailed images of the eye represents an important support for the characterization of the corneal tissues, especially in the case of pathologic conditions. In this presentation I will provide an overview of the research developed in our group in terms of computational models and experimental approaches for the human cornea.

Seminar · Neuroscience · Recording

Dynamics of cortical circuits: underlying mechanisms and computational implications

Alessandro Sanzeni
Bocconi University, Milano
Jan 24, 2023

A signature feature of cortical circuits is the irregularity of neuronal firing, which manifests itself in the high temporal variability of spiking and the broad distribution of rates. Theoretical work has shown that this feature emerges dynamically in network models if coupling between cells is strong, i.e. if the mean number of synapses per neuron K is large and synaptic efficacy is of order 1/\sqrt{K}. However, the degree to which these models capture the mechanisms underlying neuronal firing in cortical circuits is not fully understood. Results have been derived using neuron models with current-based synapses, i.e. neglecting the dependence of synaptic current on the membrane potential, and an understanding of how irregular firing emerges in models with conductance-based synapses is still lacking. Moreover, at odds with the nonlinear responses to multiple stimuli observed in cortex, network models with strongly coupled cells respond linearly to inputs. In this talk, I will discuss the emergence of irregular firing and nonlinear response in networks of leaky integrate-and-fire neurons. First, I will show that, when synapses are conductance-based, irregular firing emerges if synaptic efficacy is of order 1/\log(K) and, unlike in current-based models, persists even under the large heterogeneity of connections which has been reported experimentally. I will then describe an analysis of neural responses as a function of coupling strength and show that, while a linear input-output relation is ubiquitous at strong coupling, nonlinear responses are prominent at moderate coupling. I will conclude by discussing experimental evidence of moderate coupling and loose balance in the mouse cortex.
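The 1/\sqrt{K} scaling can be checked numerically: with K excitatory and K inhibitory inputs, each of weight 1/\sqrt{K}, the O(\sqrt{K}) mean drives cancel and the net input retains O(1) fluctuations, the classic signature of the balanced state. A toy demonstration with illustrative parameters (not from the talk):

```python
import numpy as np

def input_stats(K, trials=10000, seed=0):
    """Net input to a cell receiving K excitatory and K inhibitory
    Bernoulli(0.5) inputs with synaptic weight 1/sqrt(K).

    The excitatory and inhibitory means (each ~K/2, i.e. O(sqrt(K))
    after weighting) cancel, leaving O(1) fluctuations across trials."""
    rng = np.random.default_rng(seed)
    w = 1.0 / np.sqrt(K)
    exc = rng.binomial(1, 0.5, size=(trials, K)).sum(axis=1)
    inh = rng.binomial(1, 0.5, size=(trials, K)).sum(axis=1)
    net = w * (exc - inh)
    # mean ~ 0, std ~ sqrt(1/2) regardless of K
    return net.mean(), net.std()
```

Repeating this for increasing K shows the fluctuation amplitude staying fixed near \sqrt{1/2} while the unweighted drives grow, which is exactly what makes firing irregular at large K.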

Seminar · Psychology

Biological and experience-based trajectories in adolescent brain and cognitive development

Ilona Kovács
Pázmány Péter Catholic University & Eötvös Loránd University
Nov 7, 2022

Adolescent development is not only shaped by the mere passing of time and accumulating experience, but it also depends on pubertal timing and the cascade of maturational processes orchestrated by gonadal hormones. Although individual variability in puberty onset confounds adolescent studies, it has not been efficiently controlled for. Here we introduce ultrasonic bone age assessment to estimate biological maturity and disentangle the independent effects of chronological and biological age on adolescent cognitive abilities, emotional development, and brain maturation. Comparing cognitive performance of participants with different skeletal maturity we uncover the impact of biological age on both IQ and specific abilities. With respect to emotional development, we find narrow windows of highest vulnerability determined by biological age. In terms of neural development, we focus on the relevance of neural states unrelated to sensory stimulation, such as cortical activity during sleep and resting states, and we uncover a novel anterior-to-posterior pattern of human brain maturation. Based on our findings, bone age is a promising biomarker of adolescent maturity.

Seminar · Neuroscience

Signal in the Noise: models of inter-trial and inter-subject neural variability

Alex Williams
NYU/Flatiron
Nov 3, 2022

The ability to record large neural populations—hundreds to thousands of cells simultaneously—is a defining feature of modern systems neuroscience. Aside from improved experimental efficiency, what do these technologies fundamentally buy us? I'll argue that they provide an exciting opportunity to move beyond studying the "average" neural response. That is, by providing dense neural circuit measurements in individual subjects and moments in time, these recordings enable us to track changes across repeated behavioral trials and across experimental subjects. These two forms of variability are still poorly understood, despite their obvious importance to understanding the fidelity and flexibility of neural computations. Scientific progress on these points has been impeded by the fact that individual neurons are very noisy and unreliable. My group is investigating a number of customized statistical models to overcome this challenge. I will mention several of these models but focus particularly on a new framework for quantifying across-subject similarity in stochastic trial-by-trial neural responses. By applying this method to noisy representations in deep artificial networks and in mouse visual cortex, we reveal that the geometry of neural noise correlations is a meaningful feature of variation, which is neglected by current methods (e.g. representational similarity analysis).

Seminar · Neuroscience · Recording

Online Training of Spiking Recurrent Neural Networks​ With Memristive Synapses

Yigit Demirag
Institute of Neuroinformatics
Jul 5, 2022

Spiking recurrent neural networks (RNNs) are a promising tool for solving a wide variety of complex cognitive and motor tasks, due to their rich temporal dynamics and sparse processing. However, training spiking RNNs on dedicated neuromorphic hardware is still an open challenge, mainly due to the lack of local, hardware-friendly learning mechanisms that can solve the temporal credit assignment problem and ensure stable network dynamics, even when the weight resolution is limited. These challenges are further accentuated if one resorts to memristive devices for in-memory computing to resolve the von Neumann bottleneck, at the expense of a substantial increase in variability in both the computation and the working memory of the spiking RNNs. In this talk, I will present our recent work introducing a PyTorch simulation framework for memristive crossbar arrays that enables accurate investigation of such challenges. I will show that the recently proposed e-prop learning rule can be used to train spiking RNNs whose weights are emulated in the presented simulation framework. Although e-prop locally approximates the ideal synaptic updates, it is difficult to implement the updates on the memristive substrate due to substantial device non-idealities. I will mention several widely adopted weight-update schemes that primarily aim to cope with these device non-idealities and demonstrate that accumulating gradients can enable online and efficient training of spiking RNNs on memristive substrates.
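The gradient-accumulation idea mentioned at the end can be sketched as follows: keep updates in a high-precision buffer and write to the device only in multiples of its smallest programmable conductance step. The step size and learning rate below are hypothetical, and this sketch ignores real device noise and write asymmetry:

```python
import numpy as np

def accumulated_update(w, grads, delta_w=0.25, lr=0.0625):
    """Commit gradient updates to a low-resolution weight array.

    Small updates are accumulated in a high-precision buffer and only
    written to the device in whole multiples of delta_w, the smallest
    programmable weight step (hypothetical value)."""
    acc = np.zeros_like(w)
    for g in grads:
        acc += -lr * g
        # number of whole device steps the buffer now covers
        steps = np.trunc(acc / delta_w)
        w += steps * delta_w      # coarse, device-feasible write
        acc -= steps * delta_w    # keep the remainder for later
    return w
```

The buffer preserves the information in sub-step gradients that a naive rounded write would discard, which is the point of accumulation schemes on limited-resolution substrates.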

Seminar · Neuroscience

Extrinsic control and autonomous computation in the hippocampal CA1 circuit

Ipshita Zutshi
NYU
Apr 26, 2022

In understanding circuit operations, a key issue is the extent to which neuronal spiking reflects local computation or responses to upstream inputs. Because pyramidal cells in CA1 do not have local recurrent projections, it is currently assumed that firing in CA1 is inherited from its inputs – entorhinal inputs provide communication with the rest of the neocortex and the outside world, whereas CA3 inputs provide internal and past memory representations. Several studies have attempted to test this hypothesis by lesioning or silencing either area CA3 or the entorhinal cortex and examining the effect on firing of CA1 pyramidal cells. Despite the intense and careful work in this research area, the magnitudes and types of the reported physiological impairments vary widely across experiments. At least part of the existing variability and conflict is due to the different behavioral paradigms, designs and evaluation methods used by different investigators. Simultaneous manipulations in the same animal, or even separate manipulations of the different inputs to the hippocampal circuits in the same experiment, are rare. To address these issues, I used optogenetic silencing of the medial entorhinal cortex (mEC; unilaterally and bilaterally) and of the local CA1 region, and performed bilateral pharmacogenetic silencing of the entire CA3 region. I combined this with high-spatial-resolution recording of local field potentials (LFP) in the CA1-dentate axis and simultaneously collected firing-pattern data from thousands of single neurons. Each experimental animal had up to two of these manipulations performed simultaneously. Silencing the mEC largely abolished extracellular theta and gamma currents in CA1, without affecting firing rates. In contrast, CA3 and local CA1 silencing strongly decreased firing of CA1 neurons without affecting theta currents. Each perturbation reconfigured the CA1 spatial map. Yet the ability of the CA1 circuit to support place field activity persisted, maintaining the same fraction of spatially tuned place fields and reliable assembly expression as in the intact mouse. Thus, the CA1 network can maintain autonomous computation to support coordinated place cell assemblies without reliance on its inputs, yet these inputs can effectively reconfigure and help maintain the stability of the CA1 map.

Seminar · Neuroscience

Inter-individual variability in reward seeking and decision making: role of social life and consequence for vulnerability to nicotine

Philippe Faure
Neurophysiology and Behavior, Sorbonne University, Paris
Apr 6, 2022

Inter-individual variability refers to differences in the expression of behaviors between members of a population. For instance, some individuals take greater risks, are more attracted to immediate gains, or are more susceptible to drugs of abuse than others. To probe the neural bases of inter-individual variability, we study reward seeking and decision-making in mice, and dissect the specific role of dopamine in the modulation of these behaviors. Using a spatial version of the multi-armed bandit task, in which mice are faced with consecutive binary choices, we could link modifications of midbrain dopamine cell dynamics with modulation of exploratory behaviors, a major component of individual characteristics in mice. By analyzing mouse behaviors in semi-naturalistic environments, we then explored the role of social relationships in the shaping of dopamine activity and associated behaviors. I will present recent data from the laboratory suggesting that changes in the activity of dopaminergic networks link social influences with variations in the expression of non-social behaviors: by acting on the dopamine system, the social context may indeed affect the capacity of individuals to make decisions, as well as their vulnerability to drugs of abuse, in particular nicotine.
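The kind of individual difference described here can be caricatured in a two-armed bandit simulation, where the exploration rate epsilon plays the role of an individual trait: "explorers" (high epsilon) sample both options, "exploiters" (low epsilon) settle on the richer one. All parameters are illustrative, not taken from the talk:

```python
import random

def run_bandit(n_trials, epsilon, seed=0):
    """Epsilon-greedy agent on a two-armed bandit with fixed reward
    probabilities (0.8 vs 0.2, illustrative values).

    epsilon sets the exploration rate, one simple axis along which
    simulated 'individuals' can differ. Returns the fraction of
    trials on which the better arm was chosen."""
    rng = random.Random(seed)
    p_reward = [0.8, 0.2]
    q = [0.0, 0.0]       # learned value estimates
    alpha = 0.1          # learning rate
    n_best = 0
    for _ in range(n_trials):
        if rng.random() < epsilon:
            arm = rng.randrange(2)            # explore
        else:
            arm = 0 if q[0] >= q[1] else 1    # exploit
        r = 1.0 if rng.random() < p_reward[arm] else 0.0
        q[arm] += alpha * (r - q[arm])        # delta-rule update
        n_best += (arm == 0)
    return n_best / n_trials
```

Comparing agents that differ only in epsilon reproduces, in miniature, a stable behavioural phenotype arising from a single underlying parameter.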

Seminar · Neuroscience · Recording

Probabilistic computation in natural vision

Ruben Coen-Cagli
Albert Einstein College of Medicine
Mar 29, 2022

A central goal of vision science is to understand the principles underlying the perception and neural coding of the complex visual environment of our everyday experience. In the visual cortex, foundational work with artificial stimuli, and more recent work combining natural images and deep convolutional neural networks, have revealed much about the tuning of cortical neurons to specific image features. However, a major limitation of this existing work is its focus on single-neuron response strength to isolated images. First, during natural vision, the inputs to cortical neurons are not isolated but rather embedded in a rich spatial and temporal context. Second, the full structure of population activity—including the substantial trial-to-trial variability that is shared among neurons—determines encoded information and, ultimately, perception. In the first part of this talk, I will argue for a normative approach to study encoding of natural images in primary visual cortex (V1), which combines a detailed understanding of the sensory inputs with a theory of how those inputs should be represented. Specifically, we hypothesize that V1 response structure serves to approximate a probabilistic representation optimized to the statistics of natural visual inputs, and that contextual modulation is an integral aspect of achieving this goal. I will present a concrete computational framework that instantiates this hypothesis, and data recorded using multielectrode arrays in macaque V1 to test its predictions. In the second part, I will discuss how we are leveraging this framework to develop deep probabilistic algorithms for natural image and video segmentation.

Seminar · Physics of Life

Retinal neurogenesis and lamination: What to become, where to become it and how to move from there!

Caren Norden
Instituto Gulbenkian de Ciência
Mar 24, 2022

The vertebrate retina is an important outpost of the central nervous system, responsible for the perception and transmission of visual information. It consists of five different types of neurons that reproducibly laminate into three layers, a process of crucial importance for the organ's function. Unsurprisingly, impaired fate decisions as well as impaired neuronal migration and lamination lead to impaired retinal function. However, how these processes are coordinated at the cellular and tissue level, and how variable or robust retinal formation is, remains underexplored. In my lab, we aim to shed light on these questions from different angles, studying on the one hand differentiation phenomena and their variability, and on the other hand the downstream migration and lamination phenomena. We use zebrafish as our main model system due to its excellent possibilities for live imaging and quantitative developmental biology. More recently we have also started to use human retinal organoids as a comparative system. We further employ cross-disciplinary approaches to address these issues, combining cell and developmental biology, biomechanics, theory and computer science. Together, this allows us to integrate cell with tissue-wide phenomena and generate an appreciation of the reproducibility and variability of events.

Seminar · Neuroscience · Recording

Taming chaos in neural circuits

Rainer Engelken
Columbia University
Feb 22, 2022

Neural circuits exhibit complex activity patterns, both spontaneously and in response to external stimuli. Information encoding and learning in neural circuits depend on the ability of time-varying stimuli to control spontaneous network activity. In particular, variability arising from the sensitivity of recurrent cortical circuits to initial conditions can limit the information conveyed about the sensory input. Spiking and firing-rate network models can exhibit such sensitivity to initial conditions, which is reflected in their dynamic entropy rate and attractor dimensionality computed from their full Lyapunov spectrum. I will show how chaos in both spiking and rate networks depends on biophysical properties of neurons and the statistics of time-varying stimuli. In spiking networks, increasing the input rate or coupling strength aids in controlling the driven target circuit, which is reflected in both a reduced trial-to-trial variability and a decreased dynamic entropy rate. With sufficiently strong input, a transition towards complete network state control occurs. Surprisingly, this transition does not coincide with the transition from chaos to stability but occurs at even larger values of external input strength. Controllability of spiking activity is facilitated when neurons in the target circuit have a sharp spike onset, that is, a high speed at which neurons launch into the action potential. I will also discuss chaos and controllability in firing-rate networks in the balanced state. For these, external control of recurrent dynamics strongly depends on correlations in the input. This phenomenon was studied with a non-stationary dynamic mean-field theory that determines how the activity statistics and the largest Lyapunov exponent depend on the frequency and amplitude of the input, recurrent coupling strength, and network size. This shows that uncorrelated inputs facilitate learning in balanced networks. The results highlight the potential of Lyapunov spectrum analysis as a diagnostic for machine learning applications of recurrent networks. They are also relevant in light of recent advances in optogenetics that allow for time-dependent stimulation of a select population of neurons.

SeminarNeuroscienceRecording

Dynamic dopaminergic signaling probabilistically controls the timing of self-timed movements

Allison Hamilos
Assad Lab, Harvard University
Feb 22, 2022

Human movement disorders and pharmacological studies have long suggested that dopamine modulates the pace of the internal clock. But how does the endogenous dopaminergic system influence the timing of our movements? We examined the relationship between dopaminergic signaling and the timing of reward-related, self-timed movements in mice. Animals were trained to initiate licking after a self-timed interval following a start cue; reward was delivered if the animal’s first lick fell within a rewarded window (3.3-7 s). The first-lick timing distributions exhibited the scalar property, and we leveraged the considerable variability in these distributions to determine how the activity of the dopaminergic system related to the animals’ timing. Surprisingly, dopaminergic signals ramped up over seconds between the start-timing cue and the self-timed movement, with variable dynamics that predicted the movement/reward time, even on single trials. Steeply rising signals preceded early initiation, whereas slowly rising signals preceded later initiation. Higher baseline signals also predicted earlier self-timed movement. Optogenetic activation of dopamine neurons during self-timing did not trigger immediate movements, but rather caused systematic early-shifting of the timing distribution, whereas inhibition caused late-shifting, as if dopaminergic manipulation modulated the moment-to-moment probability of unleashing the planned movement. Consistent with this view, the dynamics of the endogenous dopaminergic signals quantitatively predicted the moment-by-moment probability of movement initiation. We conclude that ramping dopaminergic signals, potentially encoding dynamic reward expectation, probabilistically modulate the moment-by-moment decision of when to move. (Based on work from Hamilos et al., eLife, 2021).
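The scalar property mentioned in the abstract means that timing variability grows in proportion to the timed interval, so the coefficient of variation stays constant and rescaled distributions superimpose. A minimal generative sketch, with a hypothetical Weber fraction rather than a value fit to the mouse data:

```python
import numpy as np

rng = np.random.default_rng(0)
w = 0.2                                    # hypothetical Weber fraction
targets = [4.0, 8.0, 16.0]                 # timed intervals in seconds

# Scalar timing: response SD grows proportionally with the target interval,
# so the coefficient of variation (CV) is the same at every interval.
responses = {t: rng.normal(t, w * t, 20_000) for t in targets}
cvs = {t: responses[t].std() / responses[t].mean() for t in targets}
```

Rescaling each distribution by its mean collapses all three onto a single curve, which is the diagnostic used to establish the scalar property.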

SeminarNeuroscienceRecording

NMC4 Short Talk: A theory for the population rate of adapting neurons disambiguates mean vs. variance-driven dynamics and explains log-normal response statistics

Laureline Logiaco (she/her)
Columbia University
Dec 1, 2021

Recently, the field of computational neuroscience has seen an explosion of the use of trained recurrent network models (RNNs) to model patterns of neural activity. These RNN models are typically characterized by tuned recurrent interactions between rate 'units' whose dynamics are governed by smooth, continuous differential equations. However, the response of biological single neurons is better described by all-or-none events - spikes - that are triggered in response to the processing of their synaptic input by the complex dynamics of their membrane. One line of research has attempted to resolve this discrepancy by linking the average firing probability of a population of simplified spiking neuron models to rate dynamics similar to those used for RNN units. However, challenges remain to account for complex temporal dependencies in the biological single neuron response and for the heterogeneity of synaptic input across the population. Here, we make progress by showing how to derive dynamic rate equations for a population of spiking neurons with multi-timescale adaptation properties - as this was shown to accurately model the response of biological neurons - while they receive independent time-varying inputs, leading to plausible asynchronous activity in the network. The resulting rate equations yield an insightful segregation of the population's response into dynamics that are driven by the mean signal received by the neural population, and dynamics driven by the variance of the input across neurons, with respective timescales that are in agreement with slice experiments. Further, these equations explain how input variability can shape log-normal instantaneous rate distributions across neurons, as observed in vivo. 
Our results help interpret properties of the neural population response and open the way to investigating whether the more biologically plausible and dynamically complex rate model we derive could provide useful inductive biases if used in an RNN to solve specific tasks.
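The claim that input variability can shape log-normal instantaneous rate distributions has a simple generic intuition: Gaussian heterogeneity across neurons passed through an exponential-like transfer function yields rates whose logarithm is Gaussian. The sketch below is an illustration under assumed parameters, not the derivation from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.5, 1.0                        # assumed input mean and across-neuron SD
inputs = rng.normal(mu, sigma, 100_000)     # heterogeneous input across the population
rates = np.exp(inputs)                      # exponential-like transfer => log-normal rates

def skew(x):
    """Sample skewness: third standardized moment."""
    z = (x - x.mean()) / x.std()
    return (z**3).mean()

rate_skew, log_skew = skew(rates), skew(np.log(rates))
```

The rate distribution is heavily right-skewed (a few neurons fire much faster than the rest), while the log rates are symmetric and Gaussian, matching the in vivo observation cited in the abstract.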

SeminarNeuroscienceRecording

NMC4 Short Talk: An optogenetic theory of stimulation near criticality

Brandon Benson
Stanford University
Dec 1, 2021

Recent advances in optogenetics allow for stimulation of neurons with sub-millisecond spike jitter and single neuron selectivity. Already this precision has revealed new levels of cortical sensitivity: stimulating tens of neurons can yield changes in the mean firing rate of thousands of similarly tuned neurons. This extreme sensitivity suggests that cortical dynamics are near criticality. Criticality is often studied in neural systems as a non-equilibrium thermodynamic process in which scale-free patterns of activity, called avalanches, emerge between distinct states of spontaneous activity. While criticality is well studied, it is still unclear what these distinct states of spontaneous activity are and what responses we expect from stimulation of this activity. By answering these questions, optogenetic stimulation will become a new avenue for approaching criticality and understanding cortical dynamics. Here, for the first time, we study the effects of optogenetic-like stimulation on a model near criticality. We study a model of Inhibitory/Excitatory (I/E) Leaky Integrate and Fire (LIF) spiking neurons which display a region of high sensitivity as seen in experiments. We find that this region of sensitivity is, indeed, near criticality. We derive the Dynamic Mean Field Theory of this model and find that the distinct states of activity are asynchrony and synchrony. We use our theory to characterize response to various types and strengths of optogenetic stimulation. Our model and theory predict that asynchronous, near-critical dynamics can have two qualitatively different responses to stimulation: one characterized by high sensitivity, discrete event responses, and high trial-to-trial variability, and another characterized by low sensitivity, continuous responses with characteristic frequencies, and low trial-to-trial variability. 
While both response types may be considered near-critical in model space, networks which are closest to criticality show a hybrid of these response effects.
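The extreme sensitivity near criticality described above can be sketched with a minimal branching process, a standard toy model of avalanche dynamics (not the I/E LIF network from the talk): each active unit activates a Poisson number of others with mean σ, and the mean avalanche size diverges like 1/(1 − σ) as σ approaches the critical value 1.

```python
import numpy as np

rng = np.random.default_rng(0)

def avalanche_sizes(sigma, n_avalanches=2000, max_steps=100_000):
    """Total activity triggered by a single seed unit in a branching process."""
    sizes = np.empty(n_avalanches)
    for i in range(n_avalanches):
        active, total = 1, 1
        for _ in range(max_steps):
            active = rng.poisson(sigma * active)   # each active unit spawns Poisson(sigma) successors
            if active == 0:
                break
            total += active
        sizes[i] = total
    return sizes

subcritical = avalanche_sizes(0.70)     # mean size ~ 1 / (1 - 0.70) ~ 3.3
near_critical = avalanche_sizes(0.99)   # mean size ~ 100: small parameter change, huge response
```

The same small perturbation (one seeded unit) produces responses an order of magnitude larger near criticality, echoing why stimulating tens of neurons can move thousands.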

SeminarNeuroscienceRecording

Timing errors and decision making

Fuat Balci
University of Manitoba
Nov 29, 2021

Error monitoring refers to the ability to monitor one's own task performance without explicit feedback. This ability is studied typically in two-alternative forced-choice (2AFC) paradigms. Recent research showed that humans can also keep track of the magnitude and direction of errors in different magnitude domains (e.g., numerosity, duration, length). Based on the evidence that suggests a shared mechanism for magnitude representations, we aimed to investigate whether metric error monitoring ability is commonly governed across different magnitude domains. Participants reproduced/estimated temporal, numerical, and spatial magnitudes, after which they rated their confidence regarding first-order task performance and judged the direction of their reproduction/estimation errors. Participants were also tested in a 2AFC perceptual decision task and provided confidence ratings regarding their decisions. Results showed that variability in reproductions/estimations and metric error monitoring ability, as measured by combining confidence and error direction judgements, were positively related across temporal, spatial, and numerical domains. Metacognitive sensitivity in these metric domains was also positively associated with each other but not with metacognitive sensitivity in the 2AFC perceptual decision task. In conclusion, the current findings point to a general metric error monitoring ability that is shared across different metric domains with limited generalizability to perceptual decision-making.

SeminarPsychology

Consistency of Face Identity Processing: Basic & Translational Research

Jeffrey Nador
University of Fribourg
Nov 17, 2021

Previous work looking at individual differences in face identity processing (FIP) has found that most commonly used lab-based performance assessments are unfortunately not sufficiently sensitive on their own for measuring performance in both the upper and lower tails of the general population simultaneously. More recently, therefore, researchers have begun incorporating multiple testing procedures into their assessments. Still, the growing consensus seems to be that at the individual level there is quite a bit of variability between test scores. The overall consequence is that extreme scores will still occur simply by chance in large enough samples. To mitigate this issue, our recent work has developed measures of intra-individual FIP consistency to refine the selection of those with superior abilities (i.e. from the upper tail). First, we assessed consistency of face matching and recognition in neurotypical controls and compared them to a sample of super-recognizers (SRs). In terms of face matching, we demonstrated psychophysically that SRs show significantly greater consistency than controls in exploiting spatial frequency information. Meanwhile, we showed that SRs’ recognition of faces is highly related to the memorability of identities, yet effectively unrelated among controls. So overall, at the high end of the FIP spectrum, consistency can be a useful tool for revealing both qualitative and quantitative individual differences. Finally, in conjunction with collaborators from the Rheinland-Pfalz Police, we developed a pair of bespoke work samples to obtain bias-free measures of intra-individual consistency in current law enforcement personnel. Officers with higher composite scores on a set of 3 challenging FIP tests tended to show higher consistency, and vice versa.
Overall, this suggests that not only is consistency a reasonably good marker of superior FIP abilities, but could present important practical benefits for personnel selection in many other domains of expertise.

SeminarNeuroscience

A universal probabilistic spike count model reveals ongoing modulation of neural variability in head direction cell activity in mice

David Liu
University of Cambridge
Oct 26, 2021

Neural responses are variable: even under identical experimental conditions, single neuron and population responses typically differ from trial to trial and across time. Recent work has demonstrated that this variability has predictable structure, can be modulated by sensory input and behaviour, and bears critical signatures of the underlying network dynamics and computations. However, current methods for characterising neural variability are primarily geared towards sensory coding in the laboratory: they require trials with repeatable experimental stimuli and behavioural covariates. In addition, they make strong assumptions about the parametric form of variability, rely on assumption-free but data-inefficient histogram-based approaches, or are altogether ill-suited for capturing variability modulation by covariates. Here we present a universal probabilistic spike count model that eliminates these shortcomings. Our method uses scalable Bayesian machine learning techniques to model arbitrary spike count distributions (SCDs) with flexible dependence on observed as well as latent covariates. Without requiring repeatable trials, it can flexibly capture covariate-dependent joint SCDs, and provide interpretable latent causes underlying the statistical dependencies between neurons. We apply the model to recordings from a canonical non-sensory neural population: head direction cells in the mouse. We find that variability in these cells defies a simple parametric relationship with mean spike count as assumed in standard models, its modulation by external covariates can be comparably strong to that of the mean firing rate, and slow low-dimensional latent factors explain away neural correlations. Our approach paves the way to understanding the mechanisms and computations underlying neural variability under naturalistic conditions, beyond the realm of sensory coding with repeatable stimuli.
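The abstract's point that variability can defy a simple parametric (e.g. Poisson) mean-variance relationship can be illustrated with a doubly stochastic toy model; all parameters below are assumptions for illustration, not fits to the head-direction data:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, mean_count = 50_000, 5.0

# Pure Poisson counts: the Fano factor (variance / mean) is 1 by construction.
poisson_counts = rng.poisson(mean_count, n_trials)

# Doubly stochastic counts: the underlying rate itself fluctuates from trial
# to trial (gamma-distributed here), producing super-Poisson variability at
# the same mean count.
rates = rng.gamma(shape=2.0, scale=mean_count / 2.0, size=n_trials)
modulated_counts = rng.poisson(rates)

def fano(c):
    return c.var() / c.mean()
```

Two populations with identical mean counts can thus have very different spike count distributions, which is exactly the structure a fixed parametric mean-variance law cannot capture.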

SeminarNeuroscienceRecording

Rastermap: Extracting structure from high dimensional neural data

Carsen Stringer
HHMI, Janelia Research Campus
Oct 26, 2021

Large-scale neural recordings contain high-dimensional structure that cannot be easily captured by existing data visualization methods. We therefore developed an embedding algorithm called Rastermap, which captures highly nonlinear relationships between neurons, and provides useful visualizations by assigning each neuron to a location in the embedding space. Compared to standard algorithms such as t-SNE and UMAP, Rastermap finds finer and higher dimensional patterns of neural variability, as measured by quantitative benchmarks. We applied Rastermap to a variety of datasets, including spontaneous neural activity, neural activity during a virtual reality task, widefield neural imaging data during a 2AFC task, artificial neural activity from an agent playing Atari games, and neural responses to visual textures. We found within these datasets unique subpopulations of neurons encoding abstract properties of the environment.
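To convey the basic idea of assigning each neuron a position so that correlated neurons sit together, here is a deliberately crude one-dimensional sort on synthetic data. This is not the actual Rastermap algorithm (which is far more sophisticated); the data, parameters, and greedy chaining below are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy population: neurons tuned to a shared 1-D latent variable, stored in
# shuffled order (sizes and tuning widths are arbitrary choices).
n_neurons, n_time = 60, 2000
latent = np.sin(np.linspace(0, 20 * np.pi, n_time))
prefs = rng.permutation(np.linspace(-1, 1, n_neurons))
activity = np.exp(-(latent[None, :] - prefs[:, None])**2 / 0.1)
activity += 0.1 * rng.standard_normal((n_neurons, n_time))

# Correlation matrix of z-scored activity.
z = (activity - activity.mean(1, keepdims=True)) / activity.std(1, keepdims=True)
corr = z @ z.T / n_time

# Crude 1-D embedding: greedily chain each neuron to its most-correlated
# unvisited neighbor, so correlated neurons end up adjacent in the raster.
order = [0]
remaining = set(range(1, n_neurons))
while remaining:
    last = order[-1]
    nxt = max(remaining, key=lambda j: corr[last, j])
    order.append(nxt)
    remaining.remove(nxt)
order = np.array(order)

def mean_neighbor_corr(ordering):
    return np.mean([corr[a, b] for a, b in zip(ordering[:-1], ordering[1:])])

sorted_nc = mean_neighbor_corr(order)
shuffled_nc = mean_neighbor_corr(np.arange(n_neurons))
```

Plotting the raster in the new order makes the latent sequence visible as a diagonal band, which is the kind of structure embedding methods are designed to surface.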

SeminarPsychologyRecording

Differential working memory functioning

Anja Leue
University of Kiel, Germany
Jul 20, 2021

The integrated conflict monitoring theory of Botvinick introduced cognitive demand into conflict monitoring research. We investigated the effects of individual differences in cognitive demand, and in another determinant of conflict monitoring, reinforcement sensitivity, on conflict monitoring. We showed evidence of differential variability of conflict monitoring intensity using electroencephalography (EEG), functional magnetic resonance imaging (fMRI), and behavioral data. Our data suggest that individual differences in anxiety and reasoning ability are differentially related to the recruitment of proactive and reactive cognitive control (cf. Braver). Based on these previous findings, the team of the Leue-Lab investigated new psychometric data on conflict monitoring and proactive-reactive cognitive control. Moreover, data from the Leue-Lab suggest the relevance of individual differences in conflict monitoring in the context of deception. In this respect, we plan new studies highlighting individual differences in the functioning of the Anterior Cingulate Cortex (ACC). Disentangling the role of individual differences in working memory-related cognitive demand, mental effort, and reinforcement-related processes opens new insights for cognitive-motivational approaches to information processing (Passcode to rewatch: 0R8v&m59).

SeminarNeuroscience

Inclusive Basic Research

Dr Simone Badal and Dr Natasha Karp
University of the West Indies, Astra Zeneca
Jun 8, 2021

Research into the basic phenomena of life can be conducted in vitro or in vivo, under tightly controlled experimental conditions designed to limit variability. However stringent the protocol, these experiments do not occur in a cultural vacuum, and they are often subject to the same societal biases as other research disciplines. Many researchers uphold the status quo of biased basic research by not questioning the characteristics of their experimental animals, or the people from whom their tissue samples were collected. This means that our fundamental understanding of life has been built on biased models. This session will explore the ways in which basic life sciences research can be biased and the implications of this. We will discuss practical ways to assess your research design and how to make sure it is representative.

SeminarPsychology

The Jena Voice Learning and Memory Test (JVLMT)

Romi Zäske
University of Jena
May 26, 2021

The ability to recognize someone’s voice spans a broad spectrum, with phonagnosia on the low end and super recognition at the high end. Yet there is no standardized test to measure the individual ability to learn and recognize newly learnt voices with samples of speech-like phonetic variability. We have developed the Jena Voice Learning and Memory Test (JVLMT), a 20-minute test based on item response theory and applicable across different languages. The JVLMT consists of three phases in which participants are familiarized with eight speakers in two stages and then perform a three-alternative forced-choice recognition task, using pseudo sentences devoid of semantic content. Acoustic (dis)similarity analyses were used to create items with different levels of difficulty. Test scores are based on 22 Rasch-conform items. Items were selected and validated in online studies based on 232 and 454 participants, respectively. Mean accuracy is 0.51 with an SD of 0.18. The JVLMT showed high and moderate correlations with convergent validation tests (Bangor Voice Matching Test; Glasgow Voice Memory Test) and a weak correlation with a discriminant validation test (Digit Span). Empirical (marginal) reliability is 0.66. Four participants with super recognition (at least 2 SDs above the mean) and 7 participants with phonagnosia (at least 2 SDs below the mean) were identified. The JVLMT is a promising screening tool for voice recognition abilities in scientific and neuropsychological contexts.

SeminarNeuroscience

Neural correlates of cognitive control across the adult lifespan

Cheryl Grady
May 26, 2021

Cognitive control involves the flexible allocation of mental resources during goal-directed behaviour and comprises three correlated but distinct domains—inhibition, task shifting, and working memory. Healthy ageing is characterised by reduced cognitive control. Professor Cheryl Grady and her team have been studying the influence of age differences in large-scale brain networks on the three control processes in a sample of adults from 20 to 86 years of age. In this webinar, Professor Cheryl Grady will describe three aspects of this work: 1) age-related dedifferentiation and reconfiguration of brain networks across the sub-domains 2) individual differences in the relation of task-related activity to age, structural integrity and task performance for each sub-domain 3) modulation of brain signal variability as a function of cognitive load and age during working memory. This research highlights the reduction in dynamic range of network activity that occurs with ageing and how this contributes to age differences in cognitive control. Cheryl Grady is a senior scientist at the Rotman Research Institute at Baycrest, and Professor in the departments of Psychiatry and Psychology at the University of Toronto. She held the Canada Research Chair in Neurocognitive Aging from 2005-2018 and was elected as a Fellow of the Royal Society of Canada in 2019. Her research uses MRI to determine the role of brain network connectivity in cognitive ageing.

SeminarNeuroscience

Psychological mechanisms and functions of 5-HT and SSRIs in potential therapeutic change: Lessons from the serotonergic modulation of action selection, learning, affect, and social cognition

Clark Roberts
University of Cambridge, Department of Psychology
May 25, 2021

Uncertainty regarding which psychological mechanisms are fundamental in mediating SSRI treatment outcomes, together with the wide-ranging variability in their efficacy, has raised more questions than it has solved. Since subjective mood states are an abstract scientific construct, only available through self-report in humans, and likely involving input from multiple top-down and bottom-up signals, it has been difficult to model at what level SSRIs interact with this process. Converging translational evidence indicates a role for serotonin in modulating context-dependent parameters of action selection, affect, and social cognition; and concurrently supporting learning mechanisms, which promote adaptability and behavioural flexibility. We examine the theoretical basis, ecological validity, and interaction of these constructs and how they may or may not exert a clinical benefit. Specifically, we bridge crucial gaps between disparate lines of research, particularly findings from animal models and human clinical trials, which often seem to present irreconcilable differences. In determining how SSRIs exert their effects, our approach examines the endogenous functions of 5-HT neurons, how 5-HT manipulations affect behaviour in different contexts, and how their therapeutic effects may be exerted in humans – which may illuminate issues of translational models, hierarchical mechanisms, idiographic variables, and social cognition.

SeminarNeuroscienceRecording

Neuronal variability and spatiotemporal dynamics in cortical network models

Chengcheng Huang
University of Pittsburgh
May 18, 2021

Neuronal variability is a reflection of recurrent circuitry and cellular physiology. The modulation of neuronal variability is a reliable signature of cognitive and processing state. A pervasive yet puzzling feature of cortical circuits is that despite their complex wiring, population-wide shared spiking variability is low dimensional with all neurons fluctuating en masse. We show that the spatiotemporal dynamics in a spatially structured network produce large population-wide shared variability. When the spatial and temporal scales of inhibitory coupling match known physiology, model spiking neurons naturally generate low dimensional shared variability that captures in vivo population recordings along the visual pathway. Further, we show that firing rate models with spatial coupling can also generate chaotic and low-dimensional rate dynamics. The chaotic parameter region expands when the network is driven by correlated noisy inputs, while being insensitive to the intensity of independent noise.

SeminarNeuroscienceRecording

Variability, maintenance and learning in birdsong

Adrienne Fairhall
University of Washington
Mar 30, 2021

The songbird zebra finch is an exemplary model system in which to study trial-and-error learning, as the bird learns its single song gradually through the production of many noisy renditions. It is also a good system in which to study the maintenance of motor skills, as the adult bird actively maintains its song and retains some residual plasticity. Motor learning occurs through the association of timing within the song, represented by sparse firing in nucleus HVC, with motor output, driven by nucleus RA. Here we show through modeling that the small level of observed variability in HVC can result in a network that adapts to change more easily, and is more robust to cell damage or death, than an unperturbed network. In collaboration with Carlos Lois’ lab, we also consider the effect of directly perturbing HVC through viral injection of toxins that affect the firing of projection neurons. Following these perturbations, the song is profoundly affected but is able to almost perfectly recover. We characterize the changes in song acoustics and syntax, and propose models for HVC architecture and plasticity that can account for some of the observed effects. Finally, we suggest a potential role for inputs from nucleus Uva in helping to control timing precision in HVC.

SeminarNeuroscience

Neural circuit parameter variability, robustness, and homeostasis

Astrid Prinz
Emory University
Mar 11, 2021

Neurons and neural circuits can produce stereotyped and reliable output activity on the basis of highly variable cellular, synaptic, and circuit properties. This is crucial for proper nervous system function throughout an animal’s life in the face of growth, perturbations, and molecular turnover. But how can reliable output arise from neurons and synapses whose parameters vary between individuals in a population, and within an individual over time? I will review how a combination of experimental and computational methods can be used to examine how neuron and network function depends on the underlying parameters, such as neuronal membrane conductances and synaptic strengths. Within the high-dimensional parameter space of a neural system, the subset of parameter combinations that produce biologically functional neuron or circuit activity is captured by the notion of a ‘solution space’. I will describe solution space structures determined from electrophysiology data, ion channel expression levels across populations of neurons and animals, and computational parameter space explorations. A key finding centers on experimental and computational evidence for parameter correlations that give structure to solution spaces. Computational modeling suggests that such parameter correlations can be beneficial for constraining neuron and circuit properties to functional regimes, while experimental results indicate that neural circuits may have evolved to implement some of these beneficial parameter correlations at the cellular level. Finally, I will review modeling work and experiments that seek to illuminate how neural systems can homeostatically navigate their parameter spaces to stably remain within their solution space and reliably produce functional output, or to return to their solution space after perturbations that temporarily disrupt proper neuron or network function.

SeminarNeuroscience

How to combine brain stimulation with neuroimaging: concurrent tES-fMRI

Charlotte Stagg, Lucia Li, Axel Thielscher, Zeinab Esmaeilpour, Danny Wang, Michael Nitsche, Til Ole Bergmann, ...
University of Oxford, Imperial College London, ...
Feb 3, 2021

Transcranial electrical stimulation (tES) techniques, including transcranial alternating and direct current stimulation (tACS and tDCS), are non-invasive brain stimulation technologies increasingly used for the modulation of targeted neural and cognitive processes. Integration of tES with human functional magnetic resonance imaging (fMRI) provides a novel avenue in human brain mapping for investigating the neural mechanisms underlying tES. Advances in the field of tES-fMRI can be hampered by methodological variability between studies, which confounds comparability and replicability. To address these technical/methodological details and to propose a new framework for future research, the scientific international network of tES-fMRI (INTF) was founded with two main aims: to foster scientific exchange between researchers for sharing ideas, exchanging experiences, and publishing consensus articles; and to implement joint studies through a continuing dialogue with institutes across the globe. The network has organized three international scientific webinars, in which the considerable heterogeneity of technical/methodological aspects in studies combining tES with fMRI was discussed along with strategies to help bridge the respective knowledge gaps, and it distributes regular newsletters to network members via its Twitter and LinkedIn accounts.

SeminarNeuroscienceRecording

Cellular mechanisms behind stimulus evoked quenching of variability

Brent Doiron
University of Chicago
Jan 26, 2021

A wealth of experimental studies show that the trial-to-trial variability of neuronal activity is quenched during stimulus evoked responses. This fact has helped ground a popular view that the variability of spiking activity can be decomposed into two components. The first is due to irregular spike timing conditioned on the firing rate of a neuron (i.e. a Poisson process), and the second is the trial-to-trial variability of the firing rate itself. Quenching of the variability of the overall response is assumed to be a reflection of a suppression of firing rate variability. Network models have explained this phenomenon through a variety of circuit mechanisms. However, in all cases, from the vantage of a neuron embedded within the network, quenching of its response variability is inherited from its synaptic input. We analyze in vivo whole-cell recordings from principal cells in layer (L) 2/3 of mouse visual cortex. While the variability of the membrane potential is quenched upon stimulation, the variability of excitatory and inhibitory currents afferent to the neuron are amplified. This discord complicates the simple inheritance assumption that underpins network models of neuronal variability. We propose and validate an alternative (yet not mutually exclusive) mechanism for the quenching of neuronal variability. We show how an increase in synaptic conductance in the evoked state shunts the transfer of current to the membrane potential, formally decoupling changes in their trial-to-trial variability. The ubiquity of conductance-based neuronal transfer, combined with the simplicity of our model, provides an appealing framework. In particular, it shows how the dependence of cellular properties upon neuronal state is a critical, yet often ignored, factor. Further, our mechanism does not require a decomposition of variability into spiking and firing rate components, thereby challenging a long held view of neuronal activity.

SeminarNeuroscienceRecording

The emergence and modulation of time in neural circuits and behavior

Luca Mazzucato
University of Oregon
Jan 21, 2021

Spontaneous behavior in animals and humans shows a striking amount of variability both in the spatial domain (which actions to choose) and temporal domain (when to act). Concatenating actions into sequences and behavioral plans reveals the existence of a hierarchy of timescales ranging from hundreds of milliseconds to minutes. How do multiple timescales emerge from neural circuit dynamics? How do circuits modulate temporal responses to flexibly adapt to changing demands? In this talk, we will present recent results from experiments and theory suggesting a new computational mechanism generating the temporal variability underlying naturalistic behavior and cortical activity. We will show how neural activity from premotor areas unfolds through temporal sequences of attractors, which predict the intention to act. These sequences naturally emerge from recurrent cortical networks, where correlated neural variability plays a crucial role in explaining the observed variability in action timing. We will then discuss how reaction times can be accelerated or slowed down via gain modulation, flexibly induced by neuromodulation or perturbations; and how gain modulation may control response timing in the visual cortex. Finally, we will present a new biologically plausible way to generate a reservoir of multiple timescales in cortical circuits.

SeminarNeuroscienceRecording

High precision coding in visual cortex

Carsen Stringer
Janelia
Jan 7, 2021

Individual neurons in visual cortex provide the brain with unreliable estimates of visual features. It is not known if the single-neuron variability is correlated across large neural populations, thus impairing the global encoding of stimuli. We recorded simultaneously from up to 50,000 neurons in mouse primary visual cortex (V1) and in higher-order visual areas and measured stimulus discrimination thresholds of 0.35 degrees and 0.37 degrees respectively in an orientation decoding task. These neural thresholds were almost 100 times smaller than the behavioral discrimination thresholds reported in mice. This discrepancy could not be explained by stimulus properties or arousal states. Furthermore, the behavioral variability during a sensory discrimination task could not be explained by neural variability in primary visual cortex. Instead behavior-related neural activity arose dynamically across a network of non-sensory brain areas. These results imply that sensory perception in mice is limited by downstream decoders, not by neural noise in sensory representations.
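The gap between neural and behavioral thresholds reported above has a simple back-of-envelope counterpart: with independent noise and smooth tuning, the linear Fisher information grows with population size, so the discrimination threshold (taken here at d' = 1) shrinks as 1/sqrt(N). All tuning parameters below are illustrative assumptions, not estimates from the recordings:

```python
import numpy as np

def discrimination_threshold_deg(n_neurons, amp=10.0, kappa=2.0, sigma=1.0, theta_deg=45.0):
    """Threshold (degrees) at d' = 1 for a population of orientation-tuned
    neurons with independent Gaussian noise of SD sigma."""
    prefs = np.linspace(0.0, np.pi, n_neurons, endpoint=False)   # preferred orientations
    theta = np.deg2rad(theta_deg)
    delta = theta - prefs
    f = amp * np.exp(kappa * (np.cos(2 * delta) - 1))            # von Mises-style tuning at theta
    fprime = f * (-2 * kappa * np.sin(2 * delta))                # derivative of tuning w.r.t. theta
    fisher = np.sum(fprime**2) / sigma**2                        # linear Fisher information
    return np.rad2deg(1.0 / np.sqrt(fisher))                     # d' = delta * sqrt(J) => delta = 1/sqrt(J)

t100 = discrimination_threshold_deg(100)
t10k = discrimination_threshold_deg(10_000)
```

Scaling the population from 100 to 10,000 neurons tightens the threshold tenfold under these assumptions, which is why correlated noise and downstream readout, rather than single-neuron reliability, become the interesting limits at this scale.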

SeminarNeuroscienceRecording

The emergence and modulation of time in neural circuits and behavior

Luca Mazzucato
University of Oregon
Nov 24, 2020

Spontaneous behavior in animals and humans shows a striking amount of variability both in the spatial domain (which actions to choose) and temporal domain (when to act). Concatenating actions into sequences and behavioral plans reveals the existence of a hierarchy of timescales ranging from hundreds of milliseconds to minutes. How do multiple timescales emerge from neural circuit dynamics? How do circuits modulate temporal responses to flexibly adapt to changing demands? In this talk, we will present recent results from experiments and theory suggesting a new computational mechanism generating the temporal variability underlying naturalistic behavior. We will show how neural activity from premotor areas unfolds through temporal sequences of attractors, which predict the intention to act. These sequences naturally emerge from recurrent cortical networks, where correlated neural variability plays a crucial role in explaining the observed variability in action timing. We will then discuss how reaction times in these recurrent circuits can be accelerated or slowed down via gain modulation, induced by neuromodulation or perturbations. Finally, we will present a general mechanism producing a reservoir of multiple timescales in recurrent networks.
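The effect of gain modulation on reaction times can be sketched with a deterministic ramp-to-threshold toy model (our own illustration; the recurrent-circuit mechanism described in the talk is richer):

```python
def time_to_threshold(gain, drift=1.0, threshold=10.0, dt=0.01):
    """Deterministic ramp-to-threshold: the response is triggered when
    the integrated, gain-scaled drift crosses the threshold."""
    x, t = 0.0, 0.0
    while x < threshold:
        x += dt * gain * drift
        t += dt
    return t

rt_low = time_to_threshold(gain=0.5)   # low gain: slowed response
rt_high = time_to_threshold(gain=2.0)  # high gain: accelerated response
```

Scaling the gain up or down rescales the time to threshold inversely, so a single modulatory knob can accelerate or slow responses without changing the underlying dynamics.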

SeminarNeuroscienceRecording

Dimensions of variability in circuit models of cortex

Brent Doiron
The University of Chicago
Nov 15, 2020

Cortical circuits receive multiple inputs from upstream populations with non-overlapping stimulus tuning preferences. Both the feedforward and recurrent architectures of the receiving cortical layer will reflect this diverse input tuning. We study how population-wide neuronal variability propagates through a hierarchical cortical network receiving multiple, independent, tuned inputs. We present new analysis of in vivo neural data from the primate visual system showing that the number of latent variables (dimension) needed to describe population shared variability is smaller in V4 populations than in its downstream area PFC. We successfully reproduce this dimensionality expansion from our V4 to PFC neural data using a multi-layer spiking network with structured, feedforward projections and recurrent assemblies of multiple, tuned neuron populations. We show that tuning-structured connectivity generates attractor dynamics within the recurrent PFC circuit, where attractor competition is reflected in the high-dimensional shared variability across the population. Indeed, restricting the dimensionality analysis to activity from one attractor state recovers the low-dimensional structure inherited from each of our tuned inputs. Our model thus introduces a framework where high-dimensional cortical variability is understood as "time-sharing" between distinct low-dimensional, tuning-specific circuit dynamics.
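A common way to quantify the dimension of shared variability is the participation ratio of the covariance eigenvalues; the sketch below applies it to a toy latent-variable population. The measure and parameters are our own choices for illustration and may differ from the authors' latent-variable analysis:

```python
import numpy as np

rng = np.random.default_rng(1)

def participation_ratio(X):
    """Effective dimension of shared variability:
    PR = (sum lambda_i)^2 / sum(lambda_i^2) over covariance eigenvalues."""
    lam = np.linalg.eigvalsh(np.cov(X, rowvar=False))
    lam = np.clip(lam, 0.0, None)          # guard against tiny negatives
    return lam.sum() ** 2 / (lam ** 2).sum()

def latent_population(n_latents, n_neurons=100, n_trials=2000):
    """Neurons driven by a few shared latents plus weak private noise."""
    L = rng.standard_normal((n_latents, n_neurons))   # loading matrix
    z = rng.standard_normal((n_trials, n_latents))    # shared latents
    return z @ L + 0.1 * rng.standard_normal((n_trials, n_neurons))

low = participation_ratio(latent_population(2))    # few latents (V4-like)
high = participation_ratio(latent_population(10))  # more latents (PFC-like)
```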

SeminarNeuroscience

Rapid State Changes Account for Apparent Brain and Behavior Variability

David McCormick
University of Oregon
Sep 16, 2020

Neural and behavioral responses to sensory stimuli are notoriously variable from trial to trial. Does this mean the brain is inherently noisy or that we don’t completely understand the nature of the brain and behavior? Here we monitor the state of activity of the animal through videography of the face, including pupil and whisker movements, as well as walking, while also monitoring the ability of the animal to perform a difficult auditory or visual task. We find that the state of the animal is continuously changing and is never stable. The animal is constantly becoming more or less activated (aroused) on a second and subsecond scale. These changes in state are reflected in all of the neural systems we have measured, including cortical, thalamic, and neuromodulatory activity. Rapid changes in cortical activity are highly correlated with changes in neural responses to sensory stimuli and the ability of the animal to perform auditory or visual detection tasks. On the intracellular level, these changes in forebrain activity are associated with large changes in neuronal membrane potential and the nature of network activity (e.g. from slow rhythm generation to sustained activation and depolarization). Monitoring cholinergic and noradrenergic axonal activity reveals widespread correlations across the cortex. However, we suggest that a significant component of these rapid state changes arise from glutamatergic pathways (e.g. corticocortical or thalamocortical), owing to their rapidity. Understanding the neural mechanisms of state-dependent variations in brain and behavior promises to significantly “denoise” our understanding of the brain.

SeminarPhysics of Life

Robotic mapping and generative modelling of cytokine response

Paul François
McGill University – Montréal QC – Canada
Jul 28, 2020

We have developed a robotic platform that allows us to monitor the cytokine dynamics (including IL-2, IFN-γ, TNF, and IL-6) of immune cells in vitro with unprecedented resolution. To understand the complex emerging dynamics, we use interpretable machine learning techniques to build a generative model of the cytokine response. We discover that, surprisingly, immune activity is encoded by one global parameter reflecting ligand antigenic properties and, to a lesser extent, ligand quantity. Based on this, we build a simple interpretable model that can fully explain the broad variability of cytokine dynamics. We validate our approach using different cell lines and different ligands. Two processes are identified, connected to the timing and intensity of the cytokine response, which we successfully modulate using drugs or by changing conditions such as initial T cell numbers. Our work reveals a simple "cytokine code", which can be used to better understand immune responses in different contexts, including immunotherapy. More generally, it shows how robotic platforms and machine learning can be leveraged to build and validate systems biology models.

SeminarNeuroscience

Cortical population coding of consumption decisions

Donald B. Katz
Brandeis University
Jun 29, 2020

The moment that a tasty substance enters an animal’s mouth, the clock starts ticking. Taste information transduced on the tongue signals whether a potential food will nourish or poison, and the animal must therefore use this information quickly if it is to decide whether the food should be swallowed or expelled. The system tasked with computing this important decision is rife with cross-talk and feedback—circuitry that all but ensures dynamics and between-neuron coupling in neural responses to tastes. In fact, cortical taste responses, rather than simply reporting individual taste identities, do contain characterizable dynamics: taste-driven firing first reflects the substance’s presence on the tongue, and then broadly codes taste quality, and then shifts again to correlate with the taste’s current palatability—the basis of consumption decisions—all across the 1-1.5 seconds after taste administration. Ensemble analyses reveal the onset of palatability-related firing to be a sudden, nonlinear transition happening in many neurons simultaneously, such that it can be reliably detected in single trials. This transition faithfully predicts both the nature and timing of consumption behaviours, despite the huge trial-to-trial variability in both; furthermore, perturbations of this transition interfere with production of the behaviours. These results demonstrate the specific importance of ensemble dynamics in the generation of behaviour, and reveal the taste system to be akin to a range of other integrated sensorimotor systems.
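The single-trial detection of a sudden ensemble transition can be illustrated with a minimal change-point estimator on simulated spike counts. The rates and bin counts below are hypothetical, and this is not the authors' ensemble analysis:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated single-trial spike counts: the firing rate steps from 5 to 15
# spikes per bin at bin 100 (hypothetical numbers).
rate = np.r_[5.0 * np.ones(100), 15.0 * np.ones(50)]
counts = rng.poisson(rate)

def best_changepoint(y):
    """Return the split index minimizing total within-segment variance."""
    costs = []
    for k in range(1, len(y)):
        a, b = y[:k], y[k:]
        costs.append(((a - a.mean()) ** 2).sum() + ((b - b.mean()) ** 2).sum())
    return int(np.argmin(costs)) + 1

cp = best_changepoint(counts)   # should land near bin 100
```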

SeminarNeuroscienceRecording

Cortical-like dynamics in recurrent circuits optimized for sampling-based probabilistic inference

Máté Lengyel
University of Cambridge
Jun 7, 2020

Sensory cortices display a suite of ubiquitous dynamical features, such as ongoing noise variability, transient overshoots, and oscillations, that have so far escaped a common, principled theoretical account. We developed a unifying model for these phenomena by training a recurrent excitatory-inhibitory neural circuit model of a visual cortical hypercolumn to perform sampling-based probabilistic inference. The optimized network displayed several key biological properties, including divisive normalization, as well as stimulus-modulated noise variability, inhibition-dominated transients at stimulus onset, and strong gamma oscillations. These dynamical features had distinct functional roles in speeding up inferences and made predictions that we confirmed in novel analyses of awake monkey recordings. Our results suggest that the basic motifs of cortical dynamics emerge as a consequence of the efficient implementation of the same computational function — fast sampling-based inference — and predict further properties of these motifs that can be tested in future experiments.
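Sampling-based probabilistic inference of this kind can be sketched with unadjusted Langevin dynamics drawing samples from a known 2-D Gaussian posterior. This is a minimal illustration with made-up parameters; the trained excitatory-inhibitory circuit from the talk is far richer:

```python
import numpy as np

rng = np.random.default_rng(2)

# Target posterior: a 2-D Gaussian with hypothetical mean and covariance.
mu = np.array([1.0, -0.5])
Sigma = np.array([[1.0, 0.6], [0.6, 1.0]])
Sigma_inv = np.linalg.inv(Sigma)

def langevin_samples(n_steps=50_000, dt=0.05):
    """Unadjusted Langevin: x <- x + dt * grad log p(x) + sqrt(2 dt) * noise."""
    x = np.zeros(2)
    out = np.empty((n_steps, 2))
    for t in range(n_steps):
        grad = -Sigma_inv @ (x - mu)   # gradient of the Gaussian log-density
        x = x + dt * grad + np.sqrt(2.0 * dt) * rng.standard_normal(2)
        out[t] = x
    return out

samples = langevin_samples()[5_000:]   # discard burn-in
emp_mean = samples.mean(axis=0)        # should approach mu
emp_cov = np.cov(samples, rowvar=False)
```

The time-averaged samples recover the posterior mean and covariance, which is the sense in which ongoing "noise" variability can implement inference.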

SeminarNeuroscience

High precision coding in visual cortex

Carsen Stringer
HHMI Janelia Research Campus
Jun 3, 2020

Single neurons in visual cortex provide unreliable measurements of visual features due to their high trial-to-trial variability. It is not known if this “noise” extends its effects over large neural populations to impair the global encoding of stimuli. We recorded simultaneously from ∼20,000 neurons in mouse primary visual cortex (V1) and found that the neural populations had discrimination thresholds of ∼0.34° in an orientation decoding task. These thresholds were nearly 100 times smaller than those reported behaviourally in mice. The discrepancy between neural and behavioural discrimination could not be explained by the types of stimuli we used, by behavioural states or by the sequential nature of perceptual learning tasks. Furthermore, higher-order visual areas lateral to V1 could be decoded equally well. These results imply that the limits of sensory perception in mice are not set by neural noise in sensory cortex, but by the limitations of downstream decoders.

ePoster

Response variability can accelerate learning in feedforward-recurrent networks

Sigrid Trägenap, Matthias Kaschube

Bernstein Conference 2024

ePoster

Variability in Self-Organizing Networks of Neurons: Between Chance and Design

Samora Okujeni, Ulrich Egert

Bernstein Conference 2024

ePoster

Affine models explain tuning-dependent correlated variability within and between V1 and V2

COSYNE 2022

ePoster

A brain-computer interface in prefrontal cortex that suppresses neural variability

COSYNE 2022

ePoster

Diverse covariates modulate neural variability: a widespread (sub)cortical phenomenon

COSYNE 2022

ePoster

Experience early in auditory conditioning impacts across-animal variability in neural tuning

COSYNE 2022

ePoster

Natural scene expectation shapes the structure of trial to trial variability in mid-level visual cortex

COSYNE 2022

ePoster

Relating Divisive Normalization to Modulation of Correlated Variability in Primary Visual Cortex

COSYNE 2022

ePoster

Sensory specific modulation of neural variability facilitates perceptual inference

COSYNE 2022

ePoster

Single cell measures of tuning to imagined position during replay show preserved spatial tuning but quenched neural variability in place cells.

COSYNE 2022

ePoster

Accounting for visual cortex variability with distributed neural activity states

Anna Li, Ziyu Lu, J. Nathan Kutz, Eric Shea-Brown, Nicholas Steinmetz

COSYNE 2023

ePoster

Conductance Based Integrate and Fire Model with Correlated Inputs Captures Neural Variability

Logan Becker, Thibaud Taillefumier, Nicholas Priebe, Eyal Seidemann, Baowang Li

COSYNE 2023

ePoster

Decoding momentary gain variability from neuronal populations

Corey M Ziemba, Zoe Boundy-Singer, Robbe Goris

COSYNE 2023

ePoster

Inter-animal variability in learning depends on transfer of pre-task experience via the hippocampus

Cristofer Holobetz, Zhuonan Yang, Greer Williams, Shrabasti Jana, David Kastner

COSYNE 2023

ePoster

Optimal control under uncertainty predicts variability in human navigation behavior

Fabian Kessler, Julia Frankenstein, Constantin Rothkopf

COSYNE 2023

ePoster

Balanced two-photon holographic bidirectional optogenetics defines the mechanism for stimulus quenching of neural variability

Kevin Sit, Brent Doiron, Chengcheng Huang, Hillel Adesnik

COSYNE 2025

ePoster

Inter-individual Variability in Primate Inferior Temporal Cortex Representations: Insights from Macaque Neural Responses and Artificial Neural Networks

Kohitij Kar, James DiCarlo

COSYNE 2025

ePoster

Neural sampling in a balanced spiking network with internally generated variability

Xinruo Yang, Wenhao Zhang, Brent Doiron

COSYNE 2025

ePoster

The Role of Neural Variability in Supporting Few-shot Generalization in Cortex

Praveen Venkatesh, Jiaqi Shang, Corbett Bennett, Sam Gale, Greggory Heller, Tamina Ramirez, Severine Durand, Eric Shea-Brown, Shawn Olsen, Stefan Mihalas

COSYNE 2025

ePoster

Age-related changes in neural variability in a decision-making task

Fenying Zang, Anup Khanal, Sonja Förster, International Brain Laboratory, Anne K Churchland, Anne E Urai

FENS Forum 2024

ePoster

Basal ganglia pathways for regulating motor skill variability

Sophie Elvig, Oluwatomiwa Oladunni, Steffen Wolff

FENS Forum 2024

ePoster

Controlling morpho-electrophysiological variability of neurons with detailed biophysical models

Alexis Arnaudon, Maria Reva, Mickael Zbili, Henry Markram, Werner Van Geit, Lida Kanari

FENS Forum 2024

ePoster

Exploring the variability and functional implications of axon initial segment morphology in hippocampal neurons

Christian Thome, Nikolas Stevens, Juri Monath, Andreas Draguhn, Maren Engelhardt*, Martin Both*

FENS Forum 2024

ePoster

The influence of pulse shape and current direction of TMS on test-retest reliability and variability of single pulse TMS protocols

Desmond Agboada, Roman Rethwilm, Wolfgang Seiberl, Wolfgang Mack

FENS Forum 2024

ePoster

Interindividual variability of neuronal connectivity and function in zebrafish olfactory bulb

Ruth Eneida Montano Crespo, Alexandra Graff Meyer, Tomáš Gancarčik, Nila R. Mönig, Michal Januszewski, Bo Hu, Nesibe Z. Temiz, Rainer W. Friedrich

FENS Forum 2024

ePoster

Mapping individual variability in the pituitary gland: A new volumetric atlas

Fabien Schneider, Manel Merabet, Jérôme Redouté, Nicolas Costes, Claire Boutet, Germain Natacha, Bogdan Galusca

FENS Forum 2024

ePoster

No two mice alike: Leveraging inter-individual variability in threat conditioning of inbred mice to model trait anxiety

Irina Kovlyagina, Anna Wierczeiko, Hristo Todorov, Eric Jacobi, Margarita Tevosian, Jakob von Engelhardt, Susanne Gerber, Beat Lutz

FENS Forum 2024

ePoster

Reaction time variability in a delayed memory saccade task replicated by a recurrent neural network model

Roger Herikstad, Camilo Libedinsky

FENS Forum 2024

ePoster

Spatial and topological variability of dendritic morphology in the motion detection pathway of Drosophila melanogaster

Nikolas Drummond, Alexander Borst

FENS Forum 2024

ePoster

The variability of spectro-laminar beta rhythm patterns in macaque motor cortex reflects task and behavioral parameters

Laura López-Galdo, Simon Nougaret, Demian Battaglia, Bjørg Elisabeth Kilavik

FENS Forum 2024