Authors & Affiliations
Akshay Bedhotiya, Emre Neftci
Abstract
Continual learning involves a network adapting to new data while retaining previously learned information. In artificial neural networks, this process faces the challenge of catastrophic forgetting, where updating parameters on new data causes a rapid decline in accuracy on previously learned tasks. Several methods have been proposed to address this problem and enable continual learning on future neuromorphic hardware. One such neuroscience-inspired method is metaplasticity, which refers to a network's ability to change the learning capacity of its synapses based on its current state and past history.
In this work, we focus on implementing metaplasticity in binary networks by using the posterior of a binary Bayesian network as a "hidden weight." The posterior represents the system's state, and its update rule is derived from Bayes' rule. By modifying this update rule into a chosen function of the posterior, we can change the learning rate and hence introduce metaplasticity. We demonstrate that various models of consolidation or gradual forgetting can be integrated into this framework by mapping the posterior to the learning parameter of each model.
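To illustrate the idea of scaling a synapse's learning rate by a function of its hidden state, the following is a minimal sketch, not the update rule derived in this work: the hidden weight h stands in for the posterior, the binary weight used at inference is sign(h), and the metaplasticity function f(h) = 1 - tanh(m h)^2 is an illustrative assumption, as are the function and parameter names.

```python
import numpy as np

def metaplastic_update(h, grad, lr=0.1, m=1.0):
    # Metaplasticity factor: a large |h| (a confident hidden state) suppresses
    # further change; f(h) = 1 - tanh(m*h)^2 is one illustrative choice.
    f = 1.0 - np.tanh(m * h) ** 2
    # Gradient step on the hidden weight, scaled by the metaplastic factor.
    return h - lr * f * grad

# Toy usage: repeated consistent updates consolidate h, so the effective
# learning rate for this synapse shrinks over time.
h = 0.0
for _ in range(20):
    h = metaplastic_update(h, grad=-1.0)
binary_weight = np.sign(h)  # the binary weight actually used at inference
```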
To evaluate the temporal behavior of task memories, we model continual learning as a biased random walk on the posterior. Consequently, the expectation value of task accuracy over a sequence of random tasks can be described by a Fokker-Planck equation, with parameters defined by the metaplasticity model.
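For concreteness, a generic drift-diffusion form of such a description is sketched below, where p(h, t) is the density of the hidden weight (posterior) h at time t, the drift mu(h) and diffusion D(h) would be set by the metaplasticity rule and the task statistics, and A(h) is a per-synapse contribution to task accuracy; these symbols and coefficients are illustrative assumptions, not the ones derived in this work.

```latex
\frac{\partial p(h,t)}{\partial t}
  = -\frac{\partial}{\partial h}\!\left[\mu(h)\,p(h,t)\right]
  + \frac{\partial^{2}}{\partial h^{2}}\!\left[D(h)\,p(h,t)\right],
\qquad
\langle \mathrm{accuracy}(t) \rangle = \int A(h)\, p(h,t)\, \mathrm{d}h
```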
We conclude that the metaplastic behavior of a model or update rule can be evaluated through the time dependence of the expectation value of the task outputs over the posterior.