Resources
Authors & Affiliations
Victor Geadah, Giancarlo Kerg, Stefan Horoi, Guy Wolf, Guillaume Lajoie
Abstract
Spike frequency adaptation (SFA) is a well-studied physiological mechanism with established computational properties at the single-neuron level, including noise-mitigating effects based on efficient coding principles. Network models with adaptive neurons have revealed advantages including modulation of total activity, support for Bayesian inference, and computation over distributed timescales. Such efforts are bottom-up: they model adaptive mechanisms from physiology and analyze their effects. How top-down environmental and functional pressures influence the specificity of adaptation remains largely unexplored. In this work, we use deep learning to uncover optimal adaptation strategies from scratch, in recurrent neural networks (RNNs) performing perceptual tasks. In our RNN, each neuron's activation function (AF) is taken from a parametrized family to allow modulation mimicking SFA. An additional RNN, the adaptation controller, is trained end-to-end to control an AF in real time, based on the pre-activation inputs to a neuron. Crucially, each neuron in the network operates with a private copy of this controller, conceptually similar to genetically encoded SFA mechanisms. When trained on temporal perception tasks (sequential MNIST/CIFAR10), our network of adaptive recurrent units (ARUs) shows much improved robustness to noise and to changes in input statistics. Remarkably, we find that ARUs implement precise SFA mechanisms from biological neurons, including fractional input differentiation. This suggests that even in simplified models, environmental pressures and objective-based optimization are enough for sophisticated biological mechanisms to emerge. We further find that task statistics lead to distinct orders of fractional differentiation in ARUs, prompting the experimental prediction that an animal's environment and behavior would selectively influence SFA tuning.
While deep networks trained on perceptual tasks have been shown to predict tuning properties of single neurons (e.g., in the visual system), our result is, to our knowledge, the first to show that end-to-end optimization can recover dynamic coding mechanisms from the brain.
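The ARU setup described in the abstract, where each neuron's activation function is drawn from a parametrized family and modulated in real time by a small controller RNN whose weights are shared across neurons but whose state is private to each neuron, could be sketched roughly as follows. This is a minimal illustrative assumption, not the paper's actual implementation: the class and function names (`ARULayer`, `adaptive_act`) and the specific softplus-like activation family with gain `s` and saturation `n` are hypothetical stand-ins for the parametrization used in the work.

```python
import numpy as np

def adaptive_act(x, n, s):
    # Hypothetical parametrized AF family: gain s and saturation n
    # modulate a softplus-like nonlinearity. Stand-in for the
    # activation family actually used in the paper.
    return (1.0 / n) * np.log1p(np.exp(n * s * x))

class ARULayer:
    """Sketch of a layer of adaptive recurrent units (ARUs).

    The controller weights (C_in, C_rec, C_out) are shared across all
    neurons, but each neuron keeps a private controller state, echoing
    a genetically encoded, per-neuron SFA mechanism.
    """
    def __init__(self, n_in, hidden, ctrl_dim=4, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(0.0, 0.3, (hidden, n_in))
        self.W_rec = rng.normal(0.0, 0.3, (hidden, hidden))
        # Controller RNN, shared across neurons.
        self.C_in = rng.normal(0.0, 0.3, (ctrl_dim, 1))
        self.C_rec = rng.normal(0.0, 0.3, (ctrl_dim, ctrl_dim))
        self.C_out = rng.normal(0.0, 0.3, (2, ctrl_dim))
        self.hidden, self.ctrl_dim = hidden, ctrl_dim

    def forward(self, xs):
        h = np.zeros(self.hidden)
        # One private controller state per neuron: (hidden, ctrl_dim).
        c = np.zeros((self.hidden, self.ctrl_dim))
        states = []
        for x in xs:
            pre = self.W_in @ x + self.W_rec @ h        # pre-activations
            # Controller reads each neuron's pre-activation (vectorized
            # over neurons) and updates its private state.
            c = np.tanh(pre[:, None] * self.C_in.T + c @ self.C_rec.T)
            params = c @ self.C_out.T                   # (hidden, 2)
            n = 1.0 + np.exp(params[:, 0])              # saturation > 1
            s = np.exp(params[:, 1])                    # gain > 0
            h = adaptive_act(pre, n, s)                 # per-neuron AF
            states.append(h)
        return np.stack(states)
```

In this sketch, adaptation emerges because the controller state integrates each neuron's recent input history, so the effective gain and saturation drift with input statistics; in the paper this controller is trained end-to-end alongside the main RNN.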