ePoster

Biologically plausible learning with a two-compartment neuron model in recurrent neural networks

Timo Oess, Daniel Schmid, Heiko Neumann

Conference

Bernstein Conference 2024

Goethe University, Frankfurt, Germany

Abstract

Artificial recurrent neural networks (RNNs) are difficult to train due to their tendency towards instability, and common training algorithms that tame such networks, e.g., back-propagation through time (BPTT), are not biologically plausible. Node perturbation learning combined with local Hebbian weight updates has been shown to approximate BPTT [1] and could thus achieve similar performance while remaining biologically plausible. However, where such perturbations might originate and how they are utilized within neurons remains unclear. For years, the neuroscientific community has known about the segregation of inputs in pyramidal neurons [2]: they integrate feedforward input at the basal region and contextual feedback information at the distal apical part of the neuron. This separation allows the neuron to process separate streams of information in parallel. Our proposed recurrent neural network model takes advantage of this segregation by modulating the basal contribution to the neuron's membrane potential with signals received at the apical dendrites. At the beginning of the learning phase, these signals are random perturbations (i.e., pure node perturbation learning). Over the course of learning, they are replaced by feedback signals from neurons in deeper layers as those neurons become more certain about their activation, i.e., as their overall energy content increases. The basic units of the model are leaky-integrator neurons with two compartments of integration. The basal compartment receives recurrent inputs from other neurons in the network in addition to external inputs, e.g., signals indicating targets. The apical compartment integrates the inputs received from random perturbations and from deeper layers and uses them to modulate the basal membrane potential. A neo-Hebbian update rule [3] for the recurrent weights includes a correlation term of pre- and postsynaptic potentials as well as a trace of the neuron's recent activity. This trace, together with a third factor based on a reward prediction error, enables recurrent weight updates in the approximate direction of the steepest gradient. We demonstrate the learning ability of the network on various tasks and highlight the merit of modulatory apical signals for node perturbation learning.
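
To make the described dynamics concrete, below is a minimal sketch of a two-compartment leaky-integrator update with node perturbation at the apical compartment and a three-factor weight update. All names, dimensions, and constants (e.g., TAU, SIGMA, the tanh rate function, the additive form of the apical modulation, and the synapse-level eligibility trace standing in for the per-neuron activity trace) are illustrative assumptions, not details taken from the poster.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions and constants (not specified in the abstract).
N = 64                 # number of recurrent neurons
N_IN = 8               # dimensionality of the external (e.g., target) input
DT, TAU = 1.0, 10.0    # time step and membrane time constant
TAU_E = 50.0           # time constant of the eligibility trace
SIGMA = 0.1            # amplitude of apical node perturbations
ETA = 1e-3             # learning rate

W_rec = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))       # recurrent (basal) weights
W_in = rng.normal(0.0, 1.0 / np.sqrt(N_IN), (N, N_IN))  # external input weights

v = np.zeros(N)        # somatic membrane potentials
E = np.zeros((N, N))   # eligibility trace per synapse
r_bar = 0.0            # running reward average for the prediction error


def step(x_ext, reward=None, feedback=None, alpha=0.0):
    """One network update; alpha mixes perturbation (0) and feedback (1)."""
    global v, E, r_bar, W_rec

    rate = np.tanh(v)  # presynaptic firing rates from current potentials

    # Basal compartment: recurrent drive plus external input.
    basal = W_rec @ rate + W_in @ x_ext

    # Apical compartment: random perturbation early in learning, gradually
    # replaced by feedback from deeper layers.
    perturb = SIGMA * rng.normal(size=N)
    if feedback is None:
        apical = perturb
    else:
        apical = (1.0 - alpha) * perturb + alpha * feedback

    # The apical signal modulates the basal membrane potential
    # (here simply additively, as one possible choice).
    v += (DT / TAU) * (-v + basal + apical)

    # Low-pass correlation of the apical (postsynaptic) fluctuation with the
    # presynaptic rate: the trace exploited by the neo-Hebbian rule.
    E += (DT / TAU_E) * (-E + np.outer(apical, rate))

    if reward is not None:
        rpe = reward - r_bar       # third factor: reward prediction error
        r_bar += 0.05 * rpe        # update the running reward baseline
        W_rec += ETA * rpe * E     # three-factor recurrent weight update

    return np.tanh(v)
```

A mixing factor such as alpha could then be scheduled from 0 towards 1 over training to implement the described transition from pure node perturbation to feedback-driven apical signals; how this transition is gated by the deeper-layer neurons' energy content is not specified in the abstract.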

Unique ID: bernstein-24/biological-plausible-learning-with-49d8d8d2