Authors & Affiliations
Alexander Rivkind, Daniel Wolpert, Guillaume Hennequin, Mate Lengyel
Abstract
Humans maintain a repertoire of memories, which are continuously acquired, updated, and expressed as appropriate for the current context to meet behavioral demands. This continual learning remains a major challenge for artificial neural networks and has been suggested to require purpose-built mechanisms, such as memory replay or carefully designed learning rules. Sensorimotor learning provides a particularly useful testbed for theories of continual learning, as previous experiments have revealed a rich phenomenology of the complex and often intriguing dependence of motor adaptation on both recent and more distant experiences. While many of these phenomena were successfully captured by abstract Bayesian models, the neural principles that underlie them remain unknown. Here, we show that several hallmarks of continual motor learning are explained by a simple principle, without purpose-built mechanisms: continual error-driven learning in a neural network interacting with its environment in closed loop. Specifically, at any given time, the input to the network is a combination of an efference copy of its actual output and a supervisory error signal from the previous time step (trial). Synaptic weights are continually updated to minimize the error between desired and actual output using standard gradient-based learning (backpropagation). We show through numerical simulations, supported by analytical derivations using methods from deep learning theory, that this network qualitatively exhibits several empirical signatures of continual learning: savings (faster learning upon repeated exposure to an environmental change), the ‘boiling frog’ effect (differences in memory expression when an environmental change is introduced gradually versus abruptly), and spontaneous as well as evoked recovery (memory expression in the apparent absence of the original learning context).
Our work suggests that closed-loop interaction with the environment may play a crucial role in continual learning, and shows how empirical data from behavioral experiments can inform theories of neural network learning.
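The closed-loop principle described in the abstract can be illustrated with a toy simulation. The sketch below is our own minimal construction, not the authors' actual model: a linear "network" whose input on each trial combines a constant cue with an efference copy of its previous output and the previous trial's error signal, and whose weights are updated by a standard gradient step on the squared error. All names and parameter values (learning rate, trial counts, perturbation size) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_block(w, perturbation, target=1.0, eta=0.05, n_trials=200):
    """Simulate one block of trials in closed loop; return weights and errors.

    Illustrative toy dynamics (our assumption): the environment adds a
    constant perturbation to the network's output, and the network sees
    its own previous output (efference copy) and previous error as input.
    """
    y_prev, e_prev = 0.0, 0.0
    errors = []
    for _ in range(n_trials):
        x = np.array([1.0, y_prev, e_prev])  # [cue, efference copy, last error]
        y = float(w @ x)                     # network's motor output
        e = target - (y + perturbation)      # error after the perturbed outcome
        w = w + eta * e * x                  # gradient step on squared error
        y_prev, e_prev = y, e
        errors.append(abs(e))
    return w, np.array(errors)

w = rng.normal(scale=0.1, size=3)           # small random initial weights
w, err1 = run_block(w, perturbation=-0.5)   # first exposure to the perturbation
w, _    = run_block(w, perturbation=0.0)    # washout block
w, err2 = run_block(w, perturbation=-0.5)   # re-exposure (probe for savings)
```

In this toy setting, errors shrink within the first block as the weights adapt, and re-exposure after washout starts from a smaller error than the first exposure did; probing such re-learning curves is, loosely, how savings is assessed, though the paper's actual networks and training protocol are more elaborate than this sketch.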