ePoster

Neurons learn by predicting their synaptic inputs

Thiago Burghi, Timothy O'Leary, Rodolphe Sepulchre
Bernstein Conference 2024
Goethe University, Frankfurt, Germany

Abstract

Contrastive Hebbian Learning (CHL) has recently emerged as a biologically plausible paradigm for explaining the mechanisms of plasticity in the brain. In classical CHL, developed for artificial networks, neurons learn in a two-step procedure: their free response to a given input is contrasted with their clamped response to the same input under an imposed target output. This two-step procedure, which is unlikely to be implementable in biological systems, has recently been made more plausible by exploiting the idea that neurons can learn by predicting their own future activity [1]. In particular, neural predictions of free states have been shown empirically to allow a single phase of CHL in which network outputs are clamped to their targets midway through the phase. Despite these advances, the plausibility of CHL remains in question. Key problems remain: current formulations require a free-phase predictor to be trained separately; network dynamics must reach steady state before learning occurs; and existing CHL formulations are not usually suited to spiking networks. This work proposes a new biophysical formulation of CHL using mechanistic neural models, taking first steps towards establishing the plausibility of CHL in spiking neural networks. In our formulation of biophysical CHL, instead of predicting future activity (membrane potential), neurons learn by predicting their present synaptic input currents. To this end, we introduce a learning rule for spiking models based on real-time (i.e., continual) synaptic input prediction, achieved by contrasting predicted and true synaptic currents in a manner reminiscent of adaptive observers from control theory [2]. The resulting learning rule is biophysically plausible: a neuron uses only information about its own synaptic inputs, which are mathematically distinct from the activity (voltage) of presynaptic neurons. We demonstrate that this rule eliminates the need for a previously trained predictor, and that it endows a biophysical model with the capability to learn the intrinsic parameters (conductances) of a presynaptic neuron. As a consequence, the learning rule teaches a postsynaptic neuron to fire in synchrony with its presynaptic partner based solely on the synaptic connection between them.
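To make the core idea concrete, the sketch below illustrates one way a learning rule based on continual synaptic input prediction could look for a single synapse. This is our own minimal illustration, not the authors' algorithm: the abstract gives no equations, so the conductance-based synapse model, the Poisson presynaptic spike train, the fixed holding potential, and the gradient-style update on the instantaneous prediction error are all assumptions. The postsynaptic side continually predicts its incoming synaptic current and nudges a conductance estimate toward the value that explains the current it actually receives, in the spirit of an adaptive-observer parameter update.

```python
import numpy as np

# Minimal sketch (our illustration, not the authors' algorithm) of learning by
# real-time synaptic input prediction for a single conductance-based synapse.
# All model choices below (exponential synapse, Poisson presynaptic spikes,
# fixed holding potential, gradient-style update) are assumptions.

rng = np.random.default_rng(0)
dt = 0.1        # ms, integration step
n_steps = 5000
E_syn = 0.0     # mV, excitatory reversal potential (assumed)
tau_s = 5.0     # ms, synaptic gating time constant (assumed)
v = -65.0       # mV, postsynaptic potential, held fixed for simplicity
g_true = 1.5    # nS, unknown "true" maximal conductance to be recovered
g_hat = 0.1     # nS, the learner's initial estimate
eta = 5e-5      # learning rate (assumed)

s = 0.0  # synaptic gating variable, incremented by each presynaptic spike
for _ in range(n_steps):
    spike = rng.random() < 0.005              # ~50 Hz Poisson presynaptic train
    s += dt * (-s / tau_s) + (1.0 if spike else 0.0)

    phi = s * (E_syn - v)   # regressor shared by the true synapse and the model
    I_true = g_true * phi   # synaptic current the neuron actually receives
    I_hat = g_hat * phi     # the neuron's continual prediction of that current

    # Contrast prediction and measurement: a gradient step on the instantaneous
    # squared prediction error drives g_hat toward g_true online, with no
    # separate free phase and no wait for steady state.
    g_hat += eta * (I_true - I_hat) * phi

print(f"true conductance: {g_true:.3f} nS, learned estimate: {g_hat:.3f} nS")
```

Under these assumptions the estimate converges to the true conductance whenever the presynaptic input keeps the regressor active often enough, echoing the persistency-of-excitation conditions familiar from adaptive observers in control theory.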

Unique ID: bernstein-24/neurons-learn-predicting-their-synaptic-3c1a3bad