ePoster

Brain-like learning with exponentiated gradients

Jonathan Cornford, Roman Pogodin, Arna Ghosh, Kaiwen Sheng, Brendan Bicknell, Oliver Codol, Beverly Clark, Guillaume Lajoie, Blake Richards
COSYNE 2025 (2025)
Montreal, Canada

Authors & Affiliations

Jonathan Cornford, Roman Pogodin, Arna Ghosh, Kaiwen Sheng, Brendan Bicknell, Oliver Codol, Beverly Clark, Guillaume Lajoie, Blake Richards

Abstract

Computational neuroscience increasingly leverages gradient descent (GD) for training artificial neural network (ANN) models of the brain. GD’s advantage is that it is very effective at learning difficult tasks. This is important for the role of computational models as the interface between theory and empirical data (Levenstein et al., 2023), since models that can learn realistic tasks from realistic, high-dimensional inputs can be more directly compared to experimental data (Doerig et al., 2023). However, GD produces ANNs that are poor phenomenological fits to biology, making them less relevant as models of the brain. Specifically, GD violates basic physiology such as Dale’s law by allowing synapses to change from excitatory to inhibitory, and it leads to synaptic weights that are not log-normally distributed, contradicting experimental data. Here we present an alternative learning algorithm called exponentiated gradient (EG), first proposed by Kivinen and Warmuth in 1997. We show that EG respects Dale’s law and maintains log-normal weights, while remaining as powerful as GD for learning. Moreover, we show that EG outperforms GD in biologically relevant settings, including learning from sparsely relevant signals and dealing with synaptic pruning. Altogether, our results show that EG is a superior learning algorithm for modelling the brain with ANNs.
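
For readers unfamiliar with the update rule, the sketch below contrasts a standard additive GD step with the unnormalized exponentiated-gradient step of Kivinen and Warmuth (1997). This is only an illustration of the general multiplicative idea, not the authors' implementation from the poster; the function names, learning rate, and the sign-handling used for inhibitory (negative) weights are assumptions made here for clarity.

```python
import numpy as np

def gd_step(w, grad, lr=0.1):
    """Standard (additive) gradient descent: weight signs can flip."""
    return w - lr * grad

def eg_step(w, grad, lr=0.1):
    """Unnormalized exponentiated-gradient step (Kivinen & Warmuth, 1997).

    The update is multiplicative in the weight magnitude, so a positive
    (excitatory) weight stays positive and a negative (inhibitory) weight
    stays negative, consistent with Dale's law. Because updates act
    additively on log-magnitudes, weight magnitudes tend to stay
    log-normally distributed. The sign factor below is one illustrative
    way to handle signed weights and is an assumption, not taken from
    the poster.
    """
    return w * np.exp(-lr * grad * np.sign(w))

# Toy comparison: an inhibitory (negative) weight with a gradient that
# pushes it towards positive values.
w, grad = -0.05, -1.0
print(gd_step(w, grad))  # 0.05   -> GD flips the sign
print(eg_step(w, grad))  # -0.045 -> EG moves the same direction but stays inhibitory
```

The toy example illustrates the key phenomenological difference claimed in the abstract: the additive update can carry a synapse across zero, whereas the multiplicative update only rescales its magnitude.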

Unique ID: cosyne-25/brain-like-learning-with-exponentiated-ff7d7c75