ePoster

Dendritic target propagation: a biology-constrained algorithm for credit assignment in multilayer recurrent E/I networks

Alessandro Galloni, Aaron Milstein
COSYNE 2025
Montreal, Canada

Abstract

Correlative Hebbian learning rules are broadly supported by experimental evidence, but they do not prescribe a mechanism to coordinate plasticity across multiple circuit layers during target-directed learning. Recent experimental and theoretical work suggests that top-down input to the apical dendrites of pyramidal cells may provide a solution to this deep credit assignment problem. Experiments have shown that dendritic-spike-dependent plasticity is promoted by top-down excitatory inputs carrying information about learning targets and suppressed by lateral feedback inhibition. However, existing biology-inspired approaches to deep learning do not include separate excitatory and inhibitory cell types with sign-constrained synapses (Dale's Law), challenging their biological plausibility. Another open question is how biologically realistic learning mechanisms at top-down synapses can facilitate error gradient propagation without imposing unrealistically exact symmetry with bottom-up synapses. We propose a method to propagate approximate gradients in multilayer recurrent E/I networks that contain distinct soma- and dendrite-targeting inhibitory interneurons. In this model, neurons respond continuously to a combination of sensory- and error-related signals. Soma-targeting interneurons perform gain normalization, enforce sparsity, and mediate stimulus feature subtraction, while dendrite-targeting interneurons enable local error computation by learning to predict and cancel expected top-down signals. We show that local Hebbian learning rules applied to both dendritic interneurons and top-down feedback connections enable apical dendrites to encode a close approximation of the error gradient. This network achieves performance comparable to the backpropagation algorithm on two supervised nonlinear classification tasks (2D spirals and MNIST handwritten digits), with weight updates aligning closely with the true gradient direction. Furthermore, we outline the conditions under which the Hebbian update enables top-down weights to approach the transpose of the forward weights. This dendritic microcircuit architecture supports gradient-based learning while adhering to strict biological constraints and realistic cell types, enabling direct experimental predictions regarding interneuron representations in the neocortex and hippocampus.
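
The abstract describes the mechanism but includes no implementation, so the following is a minimal NumPy sketch of the core idea as stated: the apical compartment receives target-carrying top-down feedback minus a dendrite-targeting interneuron's learned prediction of that feedback, leaving an approximate error signal that gates local Hebbian plasticity. The sketch uses one hidden layer and a feedforward pass, omits the Dale's Law sign constraints, recurrence, and soma-targeting interneurons of the actual model, and every name (W1, W2, B, V) and specific update rule is an illustrative assumption in the spirit of dendritic-microcircuit models, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_spirals(n_per_class=100, noise=0.1):
    # Toy two-class 2D spiral task, similar in spirit to the benchmark
    # named in the abstract.
    t = np.linspace(0.5, 3.0, n_per_class) * np.pi
    arm = np.stack([t * np.cos(t), t * np.sin(t)], axis=1)
    X = np.concatenate([arm, -arm]) + noise * rng.normal(size=(2 * n_per_class, 2))
    labels = np.repeat([0, 1], n_per_class)
    return X / np.abs(X).max(), np.eye(2)[labels]

X, Y = make_spirals()

n_in, n_hid, n_out = 2, 64, 2
W1 = rng.normal(0.0, 0.3, (n_hid, n_in))   # bottom-up weights, input -> hidden
W2 = rng.normal(0.0, 0.3, (n_out, n_hid))  # bottom-up weights, hidden -> output
B = rng.normal(0.0, 0.3, (n_hid, n_out))   # top-down feedback weights (learned,
                                           # not clamped to W2.T)
V = np.zeros((n_hid, n_out))               # dendrite-targeting interneuron weights

eta = 0.01
for epoch in range(200):
    for i in rng.permutation(len(X)):
        x, target = X[i], Y[i]
        h = np.maximum(W1 @ x, 0.0)        # hidden somatic rate (ReLU)
        y = W2 @ h                         # linear readout, for simplicity

        # Apical compartment: top-down drive carrying the target, minus the
        # interneurons' prediction of that drive. Once V tracks B, this
        # approximates B @ (target - y), a feedback-projected error.
        apical = B @ target - V @ y

        # Local, Hebbian-style updates gated by the dendritic error signal.
        W1 += eta * np.outer(apical * (h > 0), x)
        W2 += eta * np.outer(target - y, h)

        # Interneurons learn to predict and cancel the expected top-down
        # input (this update shrinks the apical mismatch).
        V += eta * np.outer(apical, y)

        # One simple weight-mirror-like Hebbian choice that pulls B toward
        # W2.T via pre/post correlations; the poster's exact rule may differ.
        B += eta * (np.outer(h, y) - 0.1 * B)

H = np.maximum(X @ W1.T, 0.0)              # evaluate on the training set
print("train accuracy:", ((H @ W2.T).argmax(1) == Y.argmax(1)).mean())
```

In this sketch, the fixed point of the B update is proportional to E[h yᵀ] = E[h hᵀ] W2ᵀ, so B approaches the transpose of W2 only up to the hidden-layer correlation structure; this is one concrete illustration of why a Hebbian rule can approximate, but not exactly enforce, forward/backward weight symmetry.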

Unique ID: cosyne-25/dendritic-target-propagation-biology-constrained-723eaa24