Latest

SeminarNeuroscience

Computational Mechanisms of Predictive Processing in Brains and Machines

Dr. Antonino Greco
Hertie Institute for Clinical Brain Research, Germany
Dec 10, 2025

Predictive processing offers a unifying view of neural computation, proposing that brains continuously anticipate sensory input and update internal models based on prediction errors. In this talk, I will present converging evidence for the computational mechanisms underlying this framework across human neuroscience and deep neural networks. I will begin with recent work showing that large-scale distributed prediction-error encoding in the human brain directly predicts how sensory representations reorganize through predictive learning. I will then turn to PredNet, a popular predictive-coding-inspired deep network that has been widely used to model biological vision systems. Using dynamic stimuli generated with our Spatiotemporal Style Transfer algorithm, we demonstrate that PredNet relies primarily on low-level spatiotemporal structure and remains insensitive to high-level content, revealing limits in its generalization capacity. Finally, I will discuss new recurrent vision models that integrate top-down feedback connections with intrinsic neural variability, uncovering a dual mechanism for robust sensory coding in which neural variability decorrelates unit responses while top-down feedback stabilizes network dynamics. Together, these results outline how prediction-error signaling and top-down feedback pathways shape adaptive sensory processing in biological and artificial systems.
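
A minimal sketch of the prediction-error computation the abstract centers on, assuming a one-layer linear predictive-coding model; the sizes, learning rates, and variable names are illustrative, not the speaker's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-layer predictive coding: latent z predicts input x via weights W.
# Fast loop: infer z by descending the squared prediction error.
# Slow step: update W with a Hebbian-like rule on (error x latent) products.
n_x, n_z = 16, 4
W = rng.normal(scale=0.1, size=(n_x, n_z))
x = rng.normal(size=n_x)              # one sensory input

z = np.zeros(n_z)
for _ in range(200):                  # inference (fast timescale)
    x_hat = W @ z                     # top-down prediction
    err = x - x_hat                   # prediction error
    z += 0.1 * (W.T @ err - z)        # error-driven latent update + decay

W += 0.01 * np.outer(err, z)          # learning (slow timescale)
print("residual prediction error:", round(float(np.linalg.norm(err)), 3))
```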

SeminarNeuroscience

Generation and use of internal models of the world to guide flexible behavior

Antonio Fernandez-Ruiz
Cornell University, USA
Oct 27, 2025
SeminarNeuroscience

AutoMIND: Deep inverse models for revealing neural circuit invariances

Richard Gao
Goethe University
Oct 2, 2025
SeminarNeuroscience

Understanding reward-guided learning using large-scale datasets

Kim Stachenfeld
DeepMind, Columbia U
Jul 9, 2025

Understanding the neural mechanisms of reward-guided learning is a long-standing goal of computational neuroscience. Recent methodological innovations enable us to collect ever larger neural and behavioral datasets. This presents opportunities to understand learning in the brain at scale, as well as methodological challenges. In the first part of the talk, I will discuss our recent insights into the mechanisms by which zebra finch songbirds learn to sing. Dopamine has long been thought to guide reward-based trial-and-error learning by encoding reward prediction errors. However, it is unknown whether the learning of natural behaviours, such as developmental vocal learning, occurs through dopamine-based reinforcement. Longitudinal recordings of dopamine and birdsong reveal that dopamine activity is indeed consistent with encoding a reward prediction error during naturalistic learning. In the second part of the talk, I will discuss recent work at DeepMind to develop tools for automatically discovering interpretable models of behavior directly from animal choice data. Our method, dubbed CogFunSearch, uses LLMs within an evolutionary search process to "discover" novel models, in the form of Python programs, that excel at accurately predicting animal behavior during reward-guided learning. The discovered programs reveal novel patterns of learning and choice behavior that update our understanding of how the brain solves reinforcement learning problems.
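
The reward-prediction-error hypothesis referenced here has a compact standard form: the temporal-difference error delta = r + gamma * V(s') - V(s). A self-contained toy illustration, with states, rewards, and rates invented for the example rather than taken from the songbird data:

```python
import numpy as np

# Temporal-difference learning: phasic dopamine is hypothesized to encode delta.
# Toy example: values of a chain of "states" learned from a scalar reward.
n_states, alpha, gamma = 5, 0.1, 0.9
V = np.zeros(n_states)

def td_update(s, r, s_next):
    """One reward-prediction-error (RPE) update; delta plays the role
    ascribed to phasic dopamine in the abstract."""
    delta = r + gamma * V[s_next] - V[s]
    V[s] += alpha * delta
    return delta

rng = np.random.default_rng(1)
for _ in range(1000):
    s = int(rng.integers(n_states - 1))
    r = 1.0 if s == 3 else 0.0        # reward delivered after one state
    td_update(s, r, s + 1)
print(np.round(V, 2))                  # value backs up along the chain
```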

SeminarNeuroscience

“Brain theory, what is it or what should it be?”

Prof. Guenther Palm
University of Ulm
Jun 27, 2025

In the neurosciences the need for some 'overarching' theory is sometimes expressed, but it is not always obvious what is meant by this. One can perhaps agree that in modern science observation and experimentation are normally complemented by 'theory', i.e. the development of theoretical concepts that help guide and evaluate experiments and measurements. A deeper discussion of 'brain theory' will require the clarification of some further distinctions, in particular: theory vs. model, and brain research (and its theory) vs. neuroscience. Other questions are: Does a theory require mathematics? Or even differential equations? Today it is often taken for granted that the whole universe, including everything in it, for example humans, animals, and plants, can be adequately treated by physics, and that theoretical physics is therefore the overarching theory. Even if this is the case, it has turned out that in some particular parts of physics (the historical example is thermodynamics) it may be useful to simplify the theory by introducing additional theoretical concepts that can in principle be 'reduced' to more complex descriptions at the 'microscopic' level of basic physical particles and forces. In this sense, brain theory may be regarded as part of theoretical neuroscience, which sits inside biophysics and therefore inside physics, or theoretical physics. Still, in neuroscience and brain research, additional concepts are typically used to describe results and help guide experimentation that are 'outside' physics, beginning with neurons and synapses, names of brain parts and areas, up to concepts like 'learning', 'motivation', and 'attention'. Certainly, we do not yet have one theory that includes all these concepts, so 'brain theory' is still in a 'pre-Newtonian' state. However, it may still be useful to understand in general the relations between a larger theory and its 'parts', between microscopic and macroscopic theories, or between theories at different 'levels' of description. This is what I plan to do.

SeminarNeuroscience

“Development and application of gaze control models for active perception”

Prof. Bert Shi
Professor of Electronic and Computer Engineering at the Hong Kong University of Science and Technology (HKUST)
Jun 12, 2025

Gaze shifts in humans serve to direct the high-resolution vision provided by the fovea towards areas of interest in the environment. Gaze can thus be considered a proxy for attention, or an indicator of the relative importance of different parts of the environment. In this talk, we discuss the development of generative models of human gaze in response to visual input. We discuss how such models can be learned, both using supervised learning and using implicit feedback as an agent interacts with the environment, the latter being more plausible in biological agents. We also discuss two ways such models can be used. First, they can improve the performance of artificial autonomous systems, in applications such as autonomous navigation. Second, because these models are contingent on the human's task, goals, and/or state in the context of the environment, observations of gaze can be used to infer information about user intent. This information can be used to improve human-machine and human-robot interaction by making interfaces more anticipative. We discuss example applications in gaze typing, robotic teleoperation, and human-robot interaction.
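
One way to make the intent-inference claim concrete is a simple Bayesian reading: the posterior over goals is proportional to a prior times the likelihood of the observed fixations under each goal. The targets, Gaussian likelihood, and numbers below are illustrative assumptions, not the speaker's models:

```python
import numpy as np

# Bayesian intent inference from gaze: P(goal | fixations) is proportional to
# prior(goal) * prod_t P(fixation_t | goal), with an isotropic Gaussian
# likelihood around each candidate target (all values invented).
targets = np.array([[0.2, 0.8], [0.5, 0.5], [0.9, 0.1]])   # candidate goals
prior = np.full(len(targets), 1.0 / len(targets))
sigma = 0.1

def posterior(fixations):
    log_post = np.log(prior).copy()
    for f in fixations:
        d2 = np.sum((targets - f) ** 2, axis=1)
        log_post += -d2 / (2 * sigma**2)       # Gaussian log-likelihood
    p = np.exp(log_post - log_post.max())
    return p / p.sum()

fixations = np.array([[0.45, 0.55], [0.5, 0.48], [0.52, 0.5]])
print(np.round(posterior(fixations), 3))       # mass concentrates on target 1
```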

SeminarNeuroscience

Developmental and evolutionary perspectives on thalamic function

Dr. Bruno Averbeck
National Institute of Mental Health, Maryland, USA
Jun 11, 2025

Brain organization and function are complex topics. We are good at establishing correlates of perception and behavior across forebrain circuits, as well as manipulating activity in these circuits to affect behavior. However, we still lack good models for the large-scale organization and function of the forebrain. What are the contributions of the cortex, basal ganglia, and thalamus to behavior? In addressing these questions, we often ascribe function to each area as if it were an independent processing unit. However, we know from the anatomy that the cortex, basal ganglia, and thalamus are massively interconnected in a large network. One way to generate insight into these questions is to consider the evolution and development of forebrain systems. In this talk, I will discuss developmental and evolutionary (comparative anatomy) data on the thalamus and how it fits within forebrain networks. I will address questions including when the thalamus appeared in evolution, how the thalamus is organized across the vertebrate lineage, and how changes in the organization of forebrain networks can affect behavioral repertoires.

SeminarNeuroscience

Understanding reward-guided learning using large-scale datasets

Kim Stachenfeld
DeepMind, Columbia U
May 14, 2025

SeminarNeuroscience

Simulating Thought Disorder: Fine-Tuning Llama-2 for Synthetic Speech in Schizophrenia

Alban Elias Voppel
McGill University
May 1, 2025
SeminarNeuroscience

Unlocking the Secrets of Microglia in Neurodegenerative diseases: Mechanisms of resilience to AD pathologies

Ghazaleh Eskandari-Sedighi
UC Irvine
May 1, 2025
SeminarNeuroscience

Impact of High Fat Diet on Central Cardiac Circuits: When The Wanderer is Lost

Carie Boychuk
University of Missouri
Mar 20, 2025

Cardiac vagal motor drive originates in the brainstem's cardiac vagal motor neurons (CVNs). Despite their well-established cardioinhibitory functions in health, our understanding of CVNs in disease is limited. There is a clear connection between cardiovascular regulation and metabolic and energy-expenditure systems. Using high-fat diet as a model, this talk will explore how metabolic dysfunction impacts the regulation of cardiac tissue through robust inhibition of CVNs. Specifically, it will present an often overlooked modality of inhibition, tonic gamma-aminobutyric acid (GABA) A-type neurotransmission, using an array of techniques from single-cell patch-clamp electrophysiology to transgenic in vivo whole-animal physiology. It will also highlight a unique interaction with the delta isoform of protein kinase C that facilitates GABA A-type receptor expression.
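
Tonic (as opposed to phasic) GABA-A inhibition enters a membrane equation as a persistent conductance, I_tonic = g_tonic (V - E_GABA). A minimal sketch with invented parameter values, not fitted to CVN recordings, showing how increasing the tonic conductance hyperpolarizes the steady-state voltage:

```python
import numpy as np

# Leaky membrane with a tonic GABA-A conductance:
# C dV/dt = -g_L (V - E_L) - g_tonic (V - E_GABA) + I_inj
C, g_L, E_L = 100.0, 10.0, -65.0       # pF, nS, mV (illustrative)
E_GABA, I_inj = -70.0, 150.0           # mV, pA
dt, T = 0.1, 200.0                      # ms

def steady_voltage(g_tonic):
    V = E_L
    for _ in range(int(T / dt)):       # forward-Euler integration
        dV = (-g_L * (V - E_L) - g_tonic * (V - E_GABA) + I_inj) / C
        V += dt * dV
    return V

for g in (0.0, 5.0, 10.0):             # increasing tonic inhibition
    print(f"g_tonic = {g:>4} nS -> steady V = {steady_voltage(g):.1f} mV")
```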

SeminarNeuroscienceRecording

Rethinking Attention: Dynamic Prioritization

Sarah Shomstein
George Washington University
Jan 7, 2025

Decades of research on the mechanisms of attentional selection have focused on identifying the units (representations) on which attention operates in order to guide prioritized sensory processing. These attentional units fit neatly into our understanding of how attention is allocated in a top-down, bottom-up, or history-driven fashion. In this talk, I will focus on attentional phenomena that are not easily accommodated within current theories of attentional selection – the "attentional platypuses," a term alluding to the observation that, within biological taxonomies, the platypus fits into neither the mammal nor the bird category. Similarly, attentional phenomena that do not fit neatly within current attentional models suggest that those models need to be revised. I list a few instances of the "attentional platypuses" and then offer a new approach, Dynamically Weighted Prioritization, which stipulates that multiple factors impinge on the attentional priority map, each with a corresponding weight. The interaction between factors and their corresponding weights determines the current state of the priority map, which subsequently constrains/guides attention allocation. I propose that this new approach be considered as a supplement to existing models of attention, especially those that emphasize categorical organizations.
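
Read literally, Dynamically Weighted Prioritization amounts to a weighted sum of factor maps followed by a peak pick. A tiny sketch with invented factor maps and weights:

```python
import numpy as np

# Priority map = weighted sum of factor maps (bottom-up salience, current
# goals, selection history, ...); attention goes to its peak.
rng = np.random.default_rng(2)
shape = (32, 32)
factors = {
    "salience": rng.random(shape),
    "goal":     rng.random(shape),
    "history":  rng.random(shape),
}
weights = {"salience": 0.5, "goal": 0.3, "history": 0.2}  # dynamic in theory

priority = sum(w * factors[name] for name, w in weights.items())
attended = np.unravel_index(np.argmax(priority), shape)
print("attended location:", attended)
```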

SeminarNeuroscience

Sensory cognition

SueYeon Chung, Srini Turaga
New York University; Janelia Research Campus
Nov 29, 2024

This webinar features presentations from SueYeon Chung (New York University) and Srinivas Turaga (HHMI Janelia Research Campus) on theoretical and computational approaches to sensory cognition. Chung introduced a “neural manifold” framework to capture how high-dimensional neural activity is structured into meaningful manifolds reflecting object representations. She demonstrated that manifold geometry—shaped by radius, dimensionality, and correlations—directly governs a population’s capacity for classifying or separating stimuli under nuisance variations. Applying these ideas as a data analysis tool, she showed how measuring object-manifold geometry can explain transformations along the ventral visual stream and suggested that manifold principles also yield better self-supervised neural network models resembling mammalian visual cortex. Turaga described simulating the entire fruit fly visual pathway using its connectome, modeling 64 key cell types in the optic lobe. His team’s systematic approach—combining sparse connectivity from electron microscopy with simple dynamical parameters—recapitulated known motion-selective responses and produced novel testable predictions. Together, these studies underscore the power of combining connectomic detail, task objectives, and geometric theories to unravel neural computations bridging from stimuli to cognitive functions.
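
One geometric quantity in the spirit of the manifold framework is effective dimensionality via the participation ratio, PR = (sum_i lambda_i)^2 / sum_i lambda_i^2 over PCA eigenvalues; note this is a standard measure, not Chung's capacity formula. A synthetic-data sketch:

```python
import numpy as np

# Effective dimensionality of an "object manifold": responses to one object
# under nuisance variation, embedded in a neural population (all synthetic).
rng = np.random.default_rng(3)
n_trials, n_neurons = 200, 100
latents = rng.normal(size=(n_trials, 3))           # planted 3-dim manifold
mixing = rng.normal(size=(3, n_neurons))
X = latents @ mixing + 0.1 * rng.normal(size=(n_trials, n_neurons))

Xc = X - X.mean(axis=0)
eigvals = np.linalg.eigvalsh(np.cov(Xc.T))
pr = eigvals.sum() ** 2 / np.sum(eigvals ** 2)     # participation ratio
print(f"participation ratio ~ {pr:.1f} (planted dimensionality: 3)")
```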

SeminarNeuroscience

Decision and Behavior

Sam Gershman, Jonathan Pillow, Kenji Doya
Harvard University; Princeton University; Okinawa Institute of Science and Technology
Nov 29, 2024

This webinar addressed computational perspectives on how animals and humans make decisions, spanning normative, descriptive, and mechanistic models. Sam Gershman (Harvard) presented a capacity-limited reinforcement learning framework in which policies are compressed under an information bottleneck constraint. This approach predicts pervasive perseveration, stimulus‐independent “default” actions, and trade-offs between complexity and reward. Such policy compression reconciles observed action stochasticity and response time patterns with an optimal balance between learning capacity and performance. Jonathan Pillow (Princeton) discussed flexible descriptive models for tracking time-varying policies in animals. He introduced dynamic Generalized Linear Models (Sidetrack) and hidden Markov models (GLM-HMMs) that capture day-to-day and trial-to-trial fluctuations in choice behavior, including abrupt switches between “engaged” and “disengaged” states. These models provide new insights into how animals’ strategies evolve under learning. Finally, Kenji Doya (OIST) highlighted the importance of unifying reinforcement learning with Bayesian inference, exploring how cortical-basal ganglia networks might implement model-based and model-free strategies. He also described Japan’s Brain/MINDS 2.0 and Digital Brain initiatives, aiming to integrate multimodal data and computational principles into cohesive “digital brains.”
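
Gershman's policy-compression objective, maximize E[reward] - (1/beta) I(S;A), has optima of the form pi(a|s) proportional to p(a) exp(beta R(s,a)), which can be found by a Blahut-Arimoto-style fixed-point iteration. A toy sketch with an invented reward table:

```python
import numpy as np

# Policy compression: the optimal policy is pi(a|s) ∝ p(a) * exp(beta * R(s,a)),
# with p(a) the marginal policy, iterated to a fixed point.
R = np.array([[1.0, 0.0],               # reward table R[s, a] (illustrative)
              [0.0, 1.0],
              [1.0, 0.2]])
p_s = np.full(R.shape[0], 1 / R.shape[0])
beta = 2.0                               # smaller beta -> heavier compression

p_a = np.full(R.shape[1], 1 / R.shape[1])
for _ in range(100):
    pi = p_a * np.exp(beta * R)          # unnormalized pi(a|s)
    pi /= pi.sum(axis=1, keepdims=True)
    p_a = p_s @ pi                       # update the action marginal

print(np.round(pi, 2))  # compression pulls rows toward p(a): perseveration
```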

SeminarNeuroscience

LLMs and Human Language Processing

Mariya Toneva, Ariel Goldstein, Jean-Remi King

Max Planck Institute for Software Systems; Hebrew University; École Normale Supérieure
Nov 29, 2024

This webinar convened researchers at the intersection of Artificial Intelligence and Neuroscience to investigate how large language models (LLMs) can serve as valuable “model organisms” for understanding human language processing. Presenters showcased evidence that brain recordings (fMRI, MEG, ECoG) acquired while participants read or listened to unconstrained speech can be predicted by representations extracted from state-of-the-art text- and speech-based LLMs. In particular, text-based LLMs tend to align better with higher-level language regions, capturing more semantic aspects, while speech-based LLMs excel at explaining early auditory cortical responses. However, purely low-level features can drive part of these alignments, complicating interpretations. New methods, including perturbation analyses, highlight which linguistic variables matter for each cortical area and time scale. Further, “brain tuning” of LLMs—fine-tuning on measured neural signals—can improve semantic representations and downstream language tasks. Despite open questions about interpretability and exact neural mechanisms, these results demonstrate that LLMs provide a promising framework for probing the computations underlying human language comprehension and production at multiple spatiotemporal scales.
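
The alignment analyses described here typically reduce to an encoding model: ridge regression from LLM-layer features to brain responses, scored by held-out correlation. A sketch with synthetic stand-ins for the embeddings and recordings:

```python
import numpy as np

# Encoding model: predict brain responses (time x voxels) from LLM features
# (time x features) with ridge regression; evaluate on held-out time points.
rng = np.random.default_rng(4)
n_time, n_feat, n_vox = 500, 64, 20
X = rng.normal(size=(n_time, n_feat))             # stand-in LLM features
W_true = rng.normal(size=(n_feat, n_vox))
Y = X @ W_true + 2.0 * rng.normal(size=(n_time, n_vox))

train, test = slice(0, 400), slice(400, 500)
lam = 10.0
A = X[train].T @ X[train] + lam * np.eye(n_feat)
W = np.linalg.solve(A, X[train].T @ Y[train])     # ridge solution

pred = X[test] @ W
r = [np.corrcoef(pred[:, v], Y[test][:, v])[0, 1] for v in range(n_vox)]
print(f"mean held-out correlation: {np.mean(r):.2f}")
```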

SeminarNeuroscience

Brain-Wide Compositionality and Learning Dynamics in Biological Agents

Kanaka Rajan
Harvard Medical School
Nov 13, 2024

Biological agents continually reconcile the internal states of their brain circuits with incoming sensory and environmental evidence to evaluate when and how to act. The brains of biological agents, including animals and humans, exploit many evolutionary innovations, chiefly modularity—observable at the level of anatomically-defined brain regions, cortical layers, and cell types among others—that can be repurposed in a compositional manner to endow the animal with a highly flexible behavioral repertoire. Accordingly, their behaviors show their own modularity, yet such behavioral modules seldom correspond directly to traditional notions of modularity in brains. It remains unclear how to link neural and behavioral modularity in a compositional manner. We propose a comprehensive framework—compositional modes—to identify overarching compositionality spanning specialized submodules, such as brain regions. Our framework directly links the behavioral repertoire with distributed patterns of population activity, brain-wide, at multiple concurrent spatial and temporal scales. Using whole-brain recordings of zebrafish brains, we introduce an unsupervised pipeline based on neural network models, constrained by biological data, to reveal highly conserved compositional modes across individuals despite the naturalistic (spontaneous or task-independent) nature of their behaviors. These modes provided a scaffolding for other modes that account for the idiosyncratic behavior of each fish. We then demonstrate experimentally that compositional modes can be manipulated in a consistent manner by behavioral and pharmacological perturbations. Our results demonstrate that even natural behavior in different individuals can be decomposed and understood using a relatively small number of neurobehavioral modules—the compositional modes—and elucidate a compositional neural basis of behavior. This approach aligns with recent progress in understanding how reasoning capabilities and internal representational structures develop over the course of learning or training, offering insights into the modularity and flexibility in artificial and biological agents.
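
As a heavily simplified stand-in for the compositional-modes pipeline (which uses biologically constrained neural network models), one can ask whether low-rank activity patterns are shared across individuals by factorizing concatenated recordings; everything below is synthetic:

```python
import numpy as np

# Shared modes across individuals: concatenate (time x neurons) recordings
# from several animals and look for dominant common components via SVD.
rng = np.random.default_rng(5)
n_time, n_neurons, n_fish = 300, 50, 3
shared = rng.normal(size=(n_time, 2))            # modes common to all fish

recordings = []
for _ in range(n_fish):
    loading = rng.normal(size=(2, n_neurons))    # fish-specific embedding
    private = 0.5 * rng.normal(size=(n_time, n_neurons))
    recordings.append(shared @ loading + private)

X = np.hstack(recordings)                        # time x (neurons * fish)
X -= X.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
print("variance explained by top modes:",
      np.round(S[:4] ** 2 / np.sum(S ** 2), 2))  # two dominant shared modes
```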

SeminarNeuroscience

Contribution of computational models of reinforcement learning to neuroscience

Keywords: computational modeling, reward, learning, decision-making, conditioning, navigation, dopamine, basal ganglia, prefrontal cortex, hippocampus

Mehdi Khamassi
Centre National de la Recherche Scientifique / Sorbonne University
Nov 8, 2024
SeminarNeuroscienceRecording

On finding what you’re (not) looking for: prospects and challenges for AI-driven discovery

André Curtis-Trudel
University of Cincinnati
Oct 10, 2024

Recent high-profile scientific achievements by machine learning (ML) and especially deep learning (DL) systems have reinvigorated interest in ML for automated scientific discovery (e.g., Wang et al. 2023). Much of this work is motivated by the thought that DL methods might facilitate the discovery of phenomena, hypotheses, or even models or theories more efficiently than traditional, theory-driven approaches. This talk considers some of the more specific obstacles to automated, DL-driven discovery in frontier science, focusing on gravitational-wave astrophysics (GWA) as a representative case study. In the first part of the talk, we argue that despite these efforts, prospects for DL-driven discovery in GWA remain uncertain. In the second part, we advocate a shift in focus towards the ways DL can be used to augment or enhance existing discovery methods, and the epistemic virtues and vices associated with these uses. We argue that the primary epistemic virtue of many such uses is to decrease the opportunity costs associated with investigating puzzling or anomalous signals, and that the right framework for evaluating these uses comes from philosophical work on pursuitworthiness.

SeminarNeuroscience

Top-down models of learning and decision-making in BG

Rafal Bogacz & Michael Frank
University of Oxford; Brown University
Sep 27, 2024
SeminarNeuroscience

Transcranial magnetic stimulation in animal models: Using small coils in small brains to investigate biological and therapeutic mechanisms

Jennifer Rodger
University of Western Australia, Perth
Jun 20, 2024
SeminarNeuroscience

Neural mechanisms governing the learning and execution of avoidance behavior

Mario Penzo
National Institute of Mental Health, Bethesda, USA
Jun 19, 2024

The nervous system orchestrates adaptive behaviors by intricately coordinating responses to internal cues and environmental stimuli. This involves integrating sensory input, managing competing motivational states, and drawing on past experiences to anticipate future outcomes. While traditional models attribute this complexity to interactions between the mesocorticolimbic system and hypothalamic centers, the specific nodes of integration have remained elusive. Recent research, including our own, sheds light on the midline thalamus's overlooked role in this process. We propose that the midline thalamus integrates internal states with memory and emotional signals to guide adaptive behaviors. Our investigations into midline thalamic neuronal circuits have provided crucial insights into the neural mechanisms behind flexibility and adaptability. Understanding these processes is essential for deciphering human behavior and conditions marked by impaired motivation and emotional processing. Our research aims to contribute to this understanding, paving the way for targeted interventions and therapies to address such impairments.

SeminarNeuroscience

Trends in NeuroAI - Brain-like topography in transformers (Topoformer)

Nicholas Blauch
Jun 7, 2024

Dr. Nicholas Blauch will present his work "Topoformer: Brain-like topographic organization in transformer language models through spatial querying and reweighting". Dr. Blauch is a postdoctoral fellow in the Harvard Vision Lab, advised by Talia Konkle and George Alvarez. Paper link: https://openreview.net/pdf?id=3pLMzgoZSA Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri | https://groups.google.com/g/medarc-fmri).
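
For intuition only: a locality-biased attention step in which each unit's attention logits are penalized by grid distance, so responses become topographically correlated. This is a paraphrase of the "spatial querying" idea for illustration, not the Topoformer paper's exact mechanism:

```python
import numpy as np

# Units laid out on a 2D grid; attention logits get a distance penalty,
# inducing local (topographic) pooling. All sizes and scales are invented.
rng = np.random.default_rng(6)
side, d = 8, 16                          # 8x8 grid of units, feature dim
n = side * side
coords = np.stack(np.meshgrid(np.arange(side), np.arange(side)),
                  -1).reshape(n, 2)
dist = np.linalg.norm(coords[:, None] - coords[None], axis=-1)

X = rng.normal(size=(n, d))              # unit features
Q = X @ rng.normal(size=(d, d))          # queries
K = X @ rng.normal(size=(d, d))          # keys
logits = Q @ K.T / np.sqrt(d) - 0.5 * dist   # locality penalty
attn = np.exp(logits - logits.max(axis=1, keepdims=True))
attn /= attn.sum(axis=1, keepdims=True)
print("mean attention distance:",
      round(float((attn * dist).sum(axis=1).mean()), 2))  # small => local
```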

SeminarNeuroscience

Updating our models of the basal ganglia using advances in neuroanatomy and computational modeling

Mac Shine
University of Sydney
May 29, 2024
SeminarNeuroscience

Investigating dynamiCa²⁺l mechanisms underlying cortical development and disease

Georgia Panagiotakos
Icahn School of Medicine at Mount Sinai
May 8, 2024
SeminarNeuroscience

Roles of inhibition in stabilizing and shaping the response of cortical networks

Nicolas Brunel
Duke University
Apr 5, 2024

Inhibition has long been thought to stabilize the activity of cortical networks at low rates and to significantly shape their responses to sensory inputs. In this talk, I will describe three recent collaborative projects that shed light on these issues. (1) I will show how optogenetic excitation of inhibitory neurons is consistent with cortex being inhibition-stabilized even in the absence of sensory inputs, and how these data can constrain the coupling strengths of E-I cortical network models. (2) Recent analysis of the effects of optogenetic excitation of pyramidal cells in V1 of mice and monkeys shows that in some cases this optogenetic input reshuffles the firing rates of neurons in the network, leaving the distribution of rates unaffected. I will show how this surprising effect can be reproduced in sufficiently strongly coupled E-I networks. (3) Another puzzle has been to understand the respective roles of different inhibitory subtypes in network stabilization. Recent data reveal a novel, state-dependent, paradoxical effect of weakening AMPAR-mediated synaptic currents onto SST cells. Mathematical analysis of a network model with multiple inhibitory cell types shows that this effect tells us in which conditions SST cells are required for network stabilization.
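
Point (1) refers to the classic inhibition-stabilized-network signature: in a strongly coupled E-I rate model, adding drive to the inhibitory population paradoxically lowers its steady-state rate. A minimal sketch with illustrative weights, not the talk's data-constrained values:

```python
import numpy as np

# Threshold-linear E-I rate model: dr/dt = -r + [W r + I]_+ .
# w_EE > 1 makes the E subnetwork unstable alone => inhibition-stabilized.
W = np.array([[ 2.0, -2.5],     # [[w_EE, w_EI],
              [ 2.5, -2.0]])    #  [w_IE, w_II]]

def steady_state(I_ext, n_steps=4000, dt=0.01):
    r = np.zeros(2)
    for _ in range(n_steps):    # forward-Euler to the fixed point
        r += dt * (-r + np.maximum(W @ r + I_ext, 0.0))
    return r

base = steady_state(np.array([2.0, 1.0]))
stim = steady_state(np.array([2.0, 1.5]))   # extra drive to I cells
print(f"I-cell rate: {base[1]:.2f} -> {stim[1]:.2f} (paradoxical decrease)")
```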

SeminarNeuroscience

Investigating activity-dependent processes during cortical neuronal assembly in development and disease

Simona Lodato
Humanitas University
Mar 20, 2024
SeminarNeuroscience

Learning produces a hippocampal cognitive map in the form of an orthogonalized state machine

Nelson Spruston
Janelia, Ashburn, USA
Mar 6, 2024

Cognitive maps confer animals with flexible intelligence by representing spatial, temporal, and abstract relationships that can be used to shape thought, planning, and behavior. Cognitive maps have been observed in the hippocampus, but their algorithmic form and the processes by which they are learned remain obscure. Here, we employed large-scale, longitudinal two-photon calcium imaging to record activity from thousands of neurons in the CA1 region of the hippocampus while mice learned to efficiently collect rewards from two subtly different versions of linear tracks in virtual reality. The results provide a detailed view of the formation of a cognitive map in the hippocampus. Throughout learning, both the animal behavior and hippocampal neural activity progressed through multiple intermediate stages, gradually revealing improved task representation that mirrored improved behavioral efficiency. The learning process led to progressive decorrelations in initially similar hippocampal neural activity within and across tracks, ultimately resulting in orthogonalized representations resembling a state machine capturing the inherent structure of the task. We show that a Hidden Markov Model (HMM) and a biologically plausible recurrent neural network trained using Hebbian learning can both capture core aspects of the learning dynamics and the orthogonalized representational structure in neural activity. In contrast, we show that gradient-based learning of sequence models such as Long Short-Term Memory networks (LSTMs) and Transformers does not naturally produce such orthogonalized representations. We further demonstrate that mice exhibited adaptive behavior in novel task settings, with neural activity reflecting flexible deployment of the state machine. These findings shed light on the mathematical form of cognitive maps, the learning rules that sculpt them, and the algorithms that promote adaptive behavior in animals. The work thus charts a course toward a deeper understanding of biological intelligence and offers insights toward developing more robust learning algorithms in artificial intelligence.
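
The orthogonalization result can be summarized by one number: the cosine similarity between population vectors for corresponding positions on the two tracks, early versus late in learning. A sketch of the expected signature on invented data, not the recorded CA1 activity:

```python
import numpy as np

# Orthogonalization as decorrelation of population vectors across tracks:
# early in learning the two tracks evoke similar activity; late in learning
# the representations remap and become near-orthogonal (synthetic data).
rng = np.random.default_rng(7)
n_cells = 200
base = rng.normal(size=n_cells)

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

early_a = base + 0.3 * rng.normal(size=n_cells)   # shared + small noise
early_b = base + 0.3 * rng.normal(size=n_cells)
late_a = rng.normal(size=n_cells)                 # independent (remapped)
late_b = rng.normal(size=n_cells)
print(f"early cross-track similarity: {cosine(early_a, early_b):.2f}")
print(f"late cross-track similarity:  {cosine(late_a, late_b):.2f}")
```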

SeminarNeuroscience

Trends in NeuroAI - Unified Scalable Neural Decoding (POYO)

Mehdi Azabou
Feb 22, 2024

Lead author Mehdi Azabou will present his work "POYO-1: A Unified, Scalable Framework for Neural Population Decoding" (https://poyo-brain.github.io/). Mehdi is an ML PhD student at Georgia Tech, advised by Dr. Eva Dyer. Paper link: https://arxiv.org/abs/2310.16046 Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri | https://groups.google.com/g/medarc-fmri).

SeminarNeuroscience

Unifying the mechanisms of hippocampal episodic memory and prefrontal working memory

James Whittington
Stanford University / University of Oxford
Feb 14, 2024

Remembering events in the past is crucial to intelligent behaviour. Flexible memory retrieval, beyond simple recall, requires a model of how events relate to one another. Two key brain systems are implicated in this process: the hippocampal episodic memory (EM) system and the prefrontal working memory (WM) system. While an understanding of the hippocampal system, from computation to algorithm and representation, is emerging, less is understood about how the prefrontal WM system can give rise to flexible computations beyond simple memory retrieval, and even less is understood about how the two systems relate to each other. Here we develop a mathematical theory relating the algorithms and representations of EM and WM by showing a duality between storing memories in synapses versus neural activity. In doing so, we develop a formal theory of the algorithm and representation of prefrontal WM as structured, and controllable, neural subspaces (termed activity slots). By building models using this formalism, we elucidate the differences, similarities, and trade-offs between the hippocampal and prefrontal algorithms. Lastly, we show that several prefrontal representations in tasks ranging from list learning to cue-dependent recall are unified as controllable activity slots. Our results unify frontal and temporal representations of memory and offer a new basis for understanding the prefrontal representation of WM.
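
The synapse-versus-activity duality can be caricatured in a few lines: the same item is stored either in weights via a Hebbian outer product and recalled by a cue, or written into a dedicated "activity slot" of a persistent state vector. Names and sizes below are illustrative:

```python
import numpy as np

# Duality sketch: memory in synapses vs. memory in activity.
rng = np.random.default_rng(8)
d = 64
key, value = rng.normal(size=d), rng.normal(size=d)

# 1) Memory in synapses: store with an outer product, recall with the key.
Wmem = np.outer(value, key) / d
recalled = Wmem @ key * d / (key @ key)          # rescaled cued recall
print("weight-based recall corr:",
      round(float(np.corrcoef(recalled, value)[0, 1]), 2))

# 2) Memory in activity: write the value into one slot of a persistent
#    state; "control" amounts to addressing the right subspace.
n_slots = 4
state = np.zeros((n_slots, d))
state[2] = value                                  # controllable activity slot
print("slot-based recall exact:", bool(np.allclose(state[2], value)))
```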

SeminarNeuroscienceRecording

Reimagining the neuron as a controller: A novel model for Neuroscience and AI

Dmitri 'Mitya' Chklovskii
Flatiron Institute, Center for Computational Neuroscience
Feb 5, 2024

We build upon and expand the efficient coding and predictive information models of neurons, presenting a novel perspective that neurons not only predict but also actively influence their future inputs through their outputs. We introduce the concept of neurons as feedback controllers of their environments, a role traditionally considered computationally demanding, particularly when the dynamical system characterizing the environment is unknown. By harnessing a novel data-driven control framework, we illustrate the feasibility of biological neurons functioning as effective feedback controllers. This innovative approach enables us to coherently explain various experimental findings that previously seemed unrelated. Our research has profound implications, potentially revolutionizing the modeling of neuronal circuits and paving the way for the creation of alternative, biologically inspired artificial neural networks.
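
A toy version of the neuron-as-controller idea, assuming a scalar unstable environment with unknown dynamics: the "neuron" first identifies the dynamics from its own input-output history by least squares, then closes the loop with a stabilizing feedback gain. All parameters are invented for illustration, not the speaker's framework:

```python
import numpy as np

# Environment: x_{t+1} = a x_t + b u_t + noise, with (a, b) unknown to the
# "neuron", whose output u_t acts back on its own future input x_t.
rng = np.random.default_rng(9)
a, b = 1.1, 0.5                          # true plant (unstable: |a| > 1)

# System identification from a short exploratory episode.
xs, us = [0.1], []
for _ in range(200):
    u = float(rng.normal())              # exploratory output
    us.append(u)
    xs.append(a * xs[-1] + b * u + 0.01 * float(rng.normal()))
X = np.column_stack([xs[:-1], us])
a_hat, b_hat = np.linalg.lstsq(X, np.array(xs[1:]), rcond=None)[0]

# Feedback control with the identified model: place the closed loop at 0.5.
k = (a_hat - 0.5) / b_hat
x = 5.0
for _ in range(20):
    x = a * x + b * (-k * x)             # u_t = -k x_t stabilizes the input
print(f"a_hat={a_hat:.2f}, b_hat={b_hat:.2f}, final x={x:.5f}")
```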

ePosterNeuroscience

Building internal models during periods of rest and sleep

Helen Barron

Bernstein Conference 2024

ePosterNeuroscience

Computing mutual-information rates by maximum-entropy-inspired models

Tobias Kühn, Gabriel Mahuas, Ulisse Ferrari

Bernstein Conference 2024

ePosterNeuroscience

Conditions for sequence replay in recurrent network models of CA3

Gaspar Cano, Richard Kempter

Bernstein Conference 2024

ePosterNeuroscience

Diffusion Tempering Improves Parameter Estimation with Probabilistic Integrators for Hodgkin Huxley Models

Jonas Beck, Nathanael Bosch, Michael Deistler, Kyra Khadim, Jakob Macke, Philipp Hennig, Philipp Berens

Bernstein Conference 2024

ePosterNeuroscience

Effects of global inhibition on models of neural dynamics

Antonio de Candia, Silvia Scarpetta, Ludovico Minati

Bernstein Conference 2024

ePosterNeuroscience

Experiment-based Models to Study Local Learning Rules for Spiking Neural Networks

Giulia Amos, Maria Saramago, Alexandre Suter, Tim Schmid, Jens Duru, Sean Weaver, Benedikt Maurer, Stephan Ihle, Janos Vörös, Katarina Vulić

Bernstein Conference 2024

ePosterNeuroscience

Evolutionary algorithms support recurrent plasticity in spiking neural network models of neocortical task learning

Ivyer Qu, Huaze Liu, Jiayue Li, Yuqing Zhu

Bernstein Conference 2024

ePosterNeuroscience

Excitatory and inhibitory neurons exhibit distinct roles for task learning, temporal scaling, and working memory in recurrent spiking neural network models of neocortex.

Ulaş Ayyılmaz, Antara Krishnan, Yuqing Zhu

Bernstein Conference 2024

ePosterNeuroscience

Integrating activity measurements into connectome-constrained and task-optimized models

Linda Ulmer, Janne Lappalainen, Srinivas Turaga, Jakob Macke

Bernstein Conference 2024

ePosterNeuroscience

Inhibition-controlled Hebbian learning unifies phenomenological and normative models of plasticity

Julian Rossbroich, Friedemann Zenke

Bernstein Conference 2024

ePosterNeuroscience

Investigating the role of recurrent connectivity in connectome-constrained and task-optimized models of the fruit fly’s motion pathway

Zinovia Stefanidi, Janne Lappalainen, Srinivas Turaga, Jakob Macke

Bernstein Conference 2024

ePosterNeuroscience

Navigating through the Latent Spaces in Generative Models

Antoniya Boyanova, Tahmineh Koosha, Marco Rothermel, Hamidreza Jamalabadi

Bernstein Conference 2024

ePosterNeuroscience

Optimal control of oscillations and synchrony in nonlinear models of neural population dynamics

Lena Salfenmoser, Klaus Obermayer

Bernstein Conference 2024

ePosterNeuroscience

Reverse engineering recurrent network models reveals mechanisms for location memory

Ian Hawes, Matt Nolan

Bernstein Conference 2024

ePosterNeuroscience

Sequence learning under biophysical constraints: a re-evaluation of prominent models

Barna Zajzon, Younes Bouhadjar, Tom Tetzlaff, Renato Duarte, Abigail Morrison

Bernstein Conference 2024

ePosterNeuroscience

Threshold-Linear Networks as a Ground-Zero Theory for Spiking Models

Stefano Masserini, Richard Kempter

Bernstein Conference 2024

ePosterNeuroscience

Unified C. elegans Neural Activity and Connectivity Datasets for Building Foundation Models of a Small Nervous System

Quilee Simeon, Anshul Kashyap, Konrad Kording, Ed Boyden

Bernstein Conference 2024

ePosterNeuroscience

Affine models explain tuning-dependent correlated variability within and between V1 and V2

Ji Xia, Ken Miller

COSYNE 2022

ePosterNeuroscience

Bayesian active learning for latent variable models of decision-making

Aditi Jha, Zoe C. Ashwood, Jonathan Pillow

COSYNE 2022

ePosterNeuroscience

Do better object recognition models improve the generalization gap in neural predictivity?

Yifei Ren, Pouya Bashivan

COSYNE 2022

ePosterNeuroscience

Disentangling neural dynamics with fluctuating hidden Markov models

Sacha Sokoloski, Ruben Coen-Cagli

COSYNE 2022

ePosterNeuroscience

Fitting recurrent spiking network models to study the interaction between cortical areas

Christos Sourmpis, Anastasiia Oryshchuk, Sylvain Crochet, Wulfram Gerstner, Carl Petersen, Guillaume Bellec

COSYNE 2022

ePosterNeuroscience

Hypothesis-neutral models of higher-order visual cortex reveal strong semantic selectivity

Meenakshi Khosla, Leila Wehbe

COSYNE 2022

ePosterNeuroscience

Identifying and adaptively perturbing compact deep neural network models of visual cortex

Benjamin Cowley, Patricia Stan, Matthew Smith, Jonathan Pillow

COSYNE 2022

ePosterNeuroscience

Learning accurate path integration in ring attractor models of the head direction system

Pantelis Vafidis, David Owald, Tiziano D'Albis, Richard Kempter

COSYNE 2022

ePosterNeuroscience

Many, but not all, deep neural network audio models predict auditory cortex responses and exhibit hierarchical layer-region correspondence

Greta Tuckute, Jenelle Feather, Dana Boebinger, Josh McDermott

COSYNE 2022

ePosterNeuroscience

Normative models of spatio-spectral decorrelation in natural scenes predict experimentally observed ratio of PR types

Ishani Ganguly, Matthias Christenson, Rudy Behnia

COSYNE 2022

ePosterNeuroscience

Reduced stochastic models reveal the mechanisms underlying drifting cell assemblies

Sven Goedeke, Christian Klos, Felipe Yaroslav Kalle Kossio, Raoul Martin Memmesheimer

COSYNE 2022

ePosterNeuroscience

Time-warped state space models for distinguishing movement type and vigor

Julia Costacurta, Alex Williams, Blue Sheffer, Caleb Weinreb, Winthrop Gillis, Jeffrey Markowitz, Sandeep Robert Datta, Scott Linderman

COSYNE 2022

ePosterNeuroscience

Towards using small topologically constrained networks in-vitro in combination with in-silico models

Stephan Ihle, Sean Weaver, Katarina Vulić, János Vörös, Sophie Girardin, Thomas Felder, Julian Hengsteler, Jens Duru, Csaba Forró, Tobias Ruff, Benedikt Maurer

COSYNE 2022

ePosterNeuroscience

Building mechanistic models of neural computations with simulation-based machine learning

Jakob Macke

Bernstein Conference 2024
