
Credit Assignment

Discover seminars, jobs, and research tagged with credit assignment across World Wide.
18 curated items: 10 ePosters · 7 Seminars · 1 Position
Updated 1 day ago

Position

Dr Jonathan Tang

Seattle Children’s Research Institute
Seattle, USA
Dec 5, 2025

This position will focus on the neural mechanisms underlying action learning in mice. Scientifically, the project aims to understand the neural circuits, activity, and behavioral dynamics behind how animals learn which actions to take for reward. Dopaminergic systems and their associated circuitry will be the focus of investigation. The lab integrates wireless inertial sensors, closed-loop algorithms, optogenetics, and neural recording to pursue this goal.

Seminar · Neuroscience · Recording

Assigning credit through the “other” connectome

Eric Shea-Brown
University of Washington, Seattle
Apr 18, 2023

Learning in neural networks requires assigning the right values to anywhere from thousands to trillions of individual connections so that the network as a whole produces the desired behavior. Neuroscientists have gained insight into this “credit assignment” problem through decades of experimental, modeling, and theoretical studies, which have suggested key roles for synaptic eligibility traces and top-down feedback signals, among other factors. Here we study the potential contribution of another type of signaling, one being revealed with ever-greater fidelity by ongoing molecular and genomics studies: the set of modulatory pathways local to a given circuit, which form an intriguing second connectome overlaid on top of synaptic connectivity. We will share ongoing modeling and theoretical work that explores the possible roles of this local modulatory connectome in network learning.
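
The talk's central object, a modulatory connectome overlaid on the synaptic one, lends itself to a compact caricature: a second connectivity matrix that does not drive activity but gates plasticity. The sketch below is a minimal illustration of that idea, not the speaker's model; the network size, gating function, and toy error signal are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 50
W = rng.normal(0.0, 1.0 / np.sqrt(n), (n, n))    # synaptic connectome (plastic)
M = (rng.random((n, n)) < 0.1).astype(float)     # assumed sparse modulatory "other" connectome (fixed)

x = rng.normal(0.0, 0.1, n)
target = rng.normal(0.0, 0.5, n)                 # toy target activity pattern
eta = 0.05

for t in range(200):
    x = np.tanh(W @ x)                           # rate dynamics
    err = target - x                             # toy global error signal
    mod = M @ np.maximum(x, 0.0)                 # modulatory drive each neuron receives
    gate = 1.0 / (1.0 + np.exp(-mod))            # per-neuron plasticity gate in (0, 1)
    W += eta * gate[:, None] * np.outer(err, x)  # otherwise-local update, gated by modulation
```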

Seminar · Neuroscience · Recording

Behavioral Timescale Synaptic Plasticity (BTSP) for biologically plausible credit assignment across multiple layers via top-down gating of dendritic plasticity

A. Galloni
Rutgers
Nov 8, 2022

A central problem in biological learning is how information about the outcome of a decision or behavior can be used to reliably guide learning across distributed neural circuits while obeying biological constraints. This “credit assignment” problem is commonly solved in artificial neural networks through supervised gradient descent and the backpropagation algorithm. In contrast, biological learning is typically modelled using unsupervised Hebbian learning rules. While these rules use only local information to update synaptic weights, and are sometimes combined with weight constraints to reflect the diversity of excitatory (only positive weights) and inhibitory (only negative weights) cell types, they do not prescribe a clear mechanism for coordinating learning across multiple layers or for propagating error information accurately across the network. In recent years, several groups have drawn inspiration from the known dendritic nonlinearities of pyramidal neurons to propose new learning rules and network architectures that enable biologically plausible multi-layer learning by processing error information in segregated dendrites. Meanwhile, recent experimental results from the hippocampus have revealed a new form of plasticity, Behavioral Timescale Synaptic Plasticity (BTSP), in which large dendritic depolarizations rapidly reshape synaptic weights and stimulus selectivity with as little as a single stimulus presentation (“one-shot learning”). Here we explore the implications of this new learning rule through a biologically plausible implementation in a rate neuron network. We demonstrate that regulation of dendritic spiking and BTSP by top-down feedback signals can effectively coordinate plasticity across multiple network layers in a simple pattern recognition task. By analyzing hidden feature representations and weight trajectories during learning, we show how networks trained with standard backpropagation, Hebbian learning rules, and BTSP differ.
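
As a rough illustration of the mechanism in this abstract (not the authors' implementation), the sketch below gates a BTSP-style one-shot update with a top-down feedback signal that depolarizes a separate dendritic compartment; the layer sizes, plateau threshold, and scalar teaching signal are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

n_in, n_hid = 20, 10
W_in = rng.normal(0.0, 0.1, (n_hid, n_in))   # feedforward weights, plastic via BTSP
w_fb = rng.random(n_hid)                     # top-down feedback weights onto dendrites (fixed)
theta = 0.6                                  # dendritic plateau threshold (assumed)
eta = 0.9                                    # large step size: the "one-shot" regime

def present(x, teach):
    """One stimulus presentation with a scalar top-down teaching signal."""
    h = np.tanh(W_in @ x)                    # somatic (feedforward) activation
    dend = w_fb * teach                      # dendritic depolarization from feedback
    plateau = dend > theta                   # units whose dendrites fire a plateau
    # BTSP caricature: a plateau rapidly pulls the unit's weights toward the
    # current presynaptic pattern, regardless of somatic spiking.
    W_in[plateau] += eta * (x - W_in[plateau])
    return h

x = rng.random(n_in)
present(x, teach=0.0)                        # no top-down feedback: no plasticity
present(x, teach=1.0)                        # feedback triggers one-shot learning
```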

Seminar · Neuroscience · Recording

Online Training of Spiking Recurrent Neural Networks​ With Memristive Synapses

Yigit Demirag
Institute of Neuroinformatics
Jul 5, 2022

Spiking recurrent neural networks (RNNs) are a promising tool for solving a wide variety of complex cognitive and motor tasks, thanks to their rich temporal dynamics and sparse processing. However, training spiking RNNs on dedicated neuromorphic hardware remains an open challenge, mainly due to the lack of local, hardware-friendly learning mechanisms that can solve the temporal credit assignment problem and ensure stable network dynamics even when the weight resolution is limited. These challenges are further accentuated if one resorts to memristive devices for in-memory computing to resolve the von Neumann bottleneck, at the expense of a substantial increase in variability in both the computation and the working memory of the spiking RNN. In this talk, I will present our recent work introducing a PyTorch simulation framework for memristive crossbar arrays that enables accurate investigation of these challenges. I will show that the recently proposed e-prop learning rule can be used to train spiking RNNs whose weights are emulated in this framework. Although e-prop locally approximates the ideal synaptic updates, the updates are difficult to implement on the memristive substrate because of substantial device non-idealities. I will discuss several widely adopted weight-update schemes that aim to cope with these non-idealities and demonstrate that accumulating gradients enables online, efficient training of spiking RNNs on memristive substrates.
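
The closing point about accumulating gradients can be made concrete: keep a full-precision accumulator beside the device array and program the devices only once enough update has accrued to justify whole conductance steps, so small per-step updates are not lost to quantization. The sketch below shows only this accumulation scheme, with an assumed step size and write-noise model; it does not implement e-prop itself.

```python
import numpy as np

rng = np.random.default_rng(2)

shape = (64, 64)
delta_g = 0.02                   # smallest programmable conductance step (assumed)
W_dev = np.zeros(shape)          # weights as realized on the memristive devices
acc = np.zeros(shape)            # full-precision gradient accumulator (host side)

def apply_update(grad, lr=0.1):
    """Buffer the update; write the devices only in whole conductance steps."""
    global W_dev
    acc[:] += -lr * grad                              # full-precision bookkeeping
    n_steps = np.trunc(acc / delta_g)                 # whole steps now warranted
    pulse = n_steps * delta_g
    noise = rng.normal(0.0, 0.1 * delta_g, shape) * (n_steps != 0)  # toy write noise
    W_dev += pulse + noise                            # quantized, noisy device write
    acc[:] -= pulse                                   # carry the unapplied remainder

# Many small gradients that naive per-step quantization would round away:
for _ in range(100):
    apply_update(rng.normal(0.0, 1e-3, shape))
```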

Seminar · Neuroscience · Recording

Credit Assignment in Neural Networks through Deep Feedback Control

Alexander Meulemans
Institute of Neuroinformatics, University of Zürich and ETH Zürich
Sep 29, 2021

The success of deep learning has sparked interest in whether the brain learns by using similar techniques for assigning credit to each synaptic weight for its contribution to the network output. However, most current attempts at biologically plausible learning methods are either non-local in time, require highly specific connectivity motifs, or lack a clear link to any known mathematical optimization method. Here, we introduce Deep Feedback Control (DFC), a new learning method that uses a feedback controller to drive a deep neural network to match a desired output target and whose control signal can be used for credit assignment. The resulting learning rule is fully local in space and time and approximates Gauss-Newton optimization for a wide range of feedback connectivity patterns. To further underline its biological plausibility, we relate DFC to a multi-compartment model of cortical pyramidal neurons with a local voltage-dependent synaptic plasticity rule, consistent with recent theories of dendritic processing. By combining dynamical systems theory with mathematical optimization theory, we provide a strong theoretical foundation for DFC, which we corroborate with detailed results on toy experiments and standard computer-vision benchmarks.
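
A loose caricature of the controller-based idea may help fix intuitions, with the caveat that this is not the paper's DFC algorithm: the integral controller, the feedback weights Q1, and the update rule below are simplified assumptions. A controller nudges the network until its output matches the target, and each layer then locally moves its weights toward the controlled activity, shrinking the control signal needed on the next presentation.

```python
import numpy as np

rng = np.random.default_rng(3)

n_in, n_hid, n_out = 4, 8, 2
W1 = rng.normal(0.0, 0.5, (n_hid, n_in))   # feedforward weights (plastic)
W2 = rng.normal(0.0, 0.5, (n_out, n_hid))
Q1 = rng.normal(0.0, 0.5, (n_hid, n_out))  # feedback weights carrying the control signal
eta, k, dt = 0.01, 0.5, 0.1

def dfc_step(x, y_target, T=300):
    """Drive the network onto the target with an integral controller, then learn."""
    u = np.zeros(n_out)                     # controller state
    for _ in range(T):
        h = np.tanh(W1 @ x + Q1 @ u)        # hidden layer nudged by control
        y = W2 @ h + u                      # output nudged by control
        u += dt * k * (y_target - y)        # integrate the remaining error
    # Local updates: reproduce the controlled activity feedforward, so less
    # control is needed next time.
    h_ff = np.tanh(W1 @ x)
    W1 += eta * np.outer(h - h_ff, x)
    W2 += eta * np.outer(y - W2 @ h_ff, h_ff)
    return y

x = rng.random(n_in)
dfc_step(x, y_target=np.array([0.5, -0.5]))
```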

Seminar · Neuroscience · Recording

Back-propagation in spiking neural networks

Timothee Masquelier
Centre national de la recherche scientifique (CNRS), Toulouse
Aug 31, 2020

Back-propagation is a powerful supervised learning algorithm for artificial neural networks because it solves the credit assignment problem (essentially: what should the hidden layers do?). This algorithm led to the deep learning revolution. Unfortunately, back-propagation cannot be used directly in spiking neural networks (SNNs): it requires differentiable activation functions, whereas spikes are all-or-none events that cause discontinuities. Here we present two strategies to overcome this problem. The first is to use a so-called “surrogate gradient”, that is, to approximate the derivative of the threshold function with the derivative of a sigmoid. We will present applications of this method to time-series processing (audio, internet traffic, EEG). The second concerns a specific class of SNNs that process static inputs using latency coding with at most one spike per neuron. Using approximations, we derived a latency-based back-propagation rule for this type of network, called S4NN, and applied it to image classification.
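
The surrogate-gradient strategy is compact enough to state in code: keep the hard threshold in the forward pass, but pretend it was a steep sigmoid when back-propagating. A minimal PyTorch sketch follows; the steepness parameter beta is an illustrative assumption.

```python
import torch

class SpikeSurrogate(torch.autograd.Function):
    """Heaviside spike forward; derivative of a steep sigmoid backward."""

    @staticmethod
    def forward(ctx, v, beta=5.0):
        ctx.save_for_backward(v)
        ctx.beta = beta
        return (v > 0).float()                   # all-or-none spike

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        sig = torch.sigmoid(ctx.beta * v)
        surrogate = ctx.beta * sig * (1 - sig)   # sigmoid derivative at v
        return grad_output * surrogate, None     # no gradient for beta

spike = SpikeSurrogate.apply

# Toy check: gradients flow through the discontinuous spike function.
v = torch.randn(10, requires_grad=True)
spike(v).sum().backward()
print(v.grad)
```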

ePoster

Using Dynamical Systems Theory to Improve Temporal Credit Assignment in Spiking Neural Networks

Rainer Engelken, L.F. Abbott

Bernstein Conference 2024

ePoster

Principled credit assignment with strong feedback through Deep Feedback Control

COSYNE 2022

ePoster

Reorganizing cortical learning: a cholinergic adaptive credit assignment model

COSYNE 2022

ePoster

Excitatory-inhibitory cortical feedback enables efficient hierarchical credit assignment

Will Greedy, Heng Wei Zhu, Joseph Pemberton, Jack Mellor, Rui Ponte Costa

COSYNE 2023

ePoster

Biologically plausible credit assignment via neuronal frequency multiplexing

Li Ji-An, Marcus Benna

COSYNE 2025

ePoster

Can BTSP mediate credit assignment in the hippocampus?

Ian Cone, Claudia Clopath, Rui Ponte Costa

COSYNE 2025

ePoster

Dendritic target propagation: a biology-constrained algorithm for credit assignment in multilayer recurrent E/I networks

Alessandro Galloni, Aaron Milstein

COSYNE 2025

ePoster

The neuronal trace of temporal credit assignment in premotor cortex

Brice de la Crompe, Megan Schneck, Hao Zhu, Julian Ammer, Hamed Shabani, Joschka Boedecker, Christian Leibold, Ilka Diester

FENS Forum 2024