ePoster

Sparse neural engagement in connectome-based reservoir computing networks

James McAllister, John Wade, Conor Houghton, Cian O'Donnell
COSYNE 2025 (2025)
Montreal, Canada

Abstract

Reservoir computing is a machine learning framework in which only the output layer is trained to perform a task; the rest, composed of an input layer and a recurrent neural network, has fixed connections. The benefit of restricting training to the output layer is that it avoids the computationally expensive and biologically implausible backpropagation typically used to train deep-learning networks. However, while reservoirs can perform tasks well, their performance is unreliable: because no learning occurs within the reservoir itself, their capabilities vary wildly depending on the random initialisation of the fixed weights. Finding a good initial network is therefore important. The choice of size, spectral radius, small-worldness, modularity, and sparsity all affect performance, but the role of network connectivity remains an open question. In particular, it is unclear whether topologies can be initialised for task-specific or task-generic contexts. In a non-reservoir setting, recurrent neural networks trained on multiple tasks develop functionally specific neuron clusters. Inspired by this, we asked whether synapse-resolution connectome-based reservoir networks also demonstrate functional specificity or generality at the neural level. We used the larval Drosophila melanogaster connectome, which exhibits a hierarchical modular structure according to neuron type, class, and function. We built nine connectome-based reservoirs from subnetworks we discovered in the Drosophila larva connectome via a hierarchical stochastic block model. We compared neural specificity metrics between connectome-based and equivalent random networks across tasks in memory, decision-making, and time-series prediction. Connectome-based reservoirs contained smaller subsets of highly task-selective neurons, while random networks exhibited more widely involved and "general" groups.
These findings indicate that biologically inspired connectivity can enable sparsely selective and compact artificial neural networks, which may reduce energy consumption and improve continual learning. Neurobiologically, these results suggest that the sparse neural selectivity observed experimentally is not necessary for good circuit performance but instead derives from the brain's detailed connectivity.

Unique ID: cosyne-25/sparse-neural-engagement-connectome-based-fce9cd6e