Authors & Affiliations
Farhad Pashakhanloo
Abstract
The mechanisms of representational drift are still under investigation. Under one set of hypotheses, drift is a result of noisy continual learning in the brain. Previous computational studies have demonstrated that drift can occur under different types of noise, such as synaptic noise or sample stochasticity. However, the exact contribution of each noise source remains elusive. For example, it is unclear how drift depends on stimuli that are relevant to the task versus stimuli that are continually present in the environment but that the agent learns to ignore in a given context. In this study, using theory and simulations, we demonstrate that the presence of task-irrelevant stimuli can itself create fluctuations and drift in the network parameters, which in turn lead to drift of the task-relevant representations over time. By analytically solving for drift in different settings, we show that this phenomenon holds across architectures and learning rules, for both supervised and unsupervised learning. Specifically, we study the multi-dimensional Oja network, a similarity matching network with a biologically plausible Hebbian learning rule, an autoencoder, and a two-layer supervised network, the latter two trained with backpropagation. The results show that in all cases the drift rate increases with the variance and the dimension of the data in the task-irrelevant subspace. Overall, our study links the structure of the stimuli, the task, and learning to the phenomenon of drift, and could pave the way for using drift as a signal for uncovering the underlying computation in the brain.
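The qualitative effect described above can be illustrated with a minimal simulation sketch. The setup below is an assumption for illustration, not the paper's exact model: a single-neuron Oja rule (online PCA) is driven by one high-variance task-relevant direction plus several lower-variance task-irrelevant directions, and drift is measured as the mean squared change of the weight vector between checkpoints.

```python
import numpy as np

def simulate_oja(n_irrelevant, var_irrelevant, n_steps=20000, eta=0.01, seed=0):
    """Single-neuron Oja's rule with one task-relevant input direction and
    n_irrelevant task-irrelevant directions of variance var_irrelevant.
    (Hypothetical toy setup for illustration.)"""
    rng = np.random.default_rng(seed)
    d = 1 + n_irrelevant
    stds = np.empty(d)
    stds[0] = 2.0                        # task-relevant direction: largest variance
    stds[1:] = np.sqrt(var_irrelevant)   # task-irrelevant subspace
    w = rng.normal(size=d)
    w /= np.linalg.norm(w)
    checkpoints = []
    for t in range(n_steps):
        x = stds * rng.normal(size=d)    # zero-mean input sample
        y = w @ x
        w += eta * y * (x - y * w)       # Oja update: converges to the top PC
        if t % 100 == 0:
            checkpoints.append(w.copy())
    traj = np.array(checkpoints)
    # drift rate: mean squared weight change between successive checkpoints
    return np.mean(np.sum(np.diff(traj, axis=0) ** 2, axis=1))

low = simulate_oja(n_irrelevant=5, var_irrelevant=0.1)
high = simulate_oja(n_irrelevant=5, var_irrelevant=1.0)
print(low, high)  # drift grows with the irrelevant-subspace variance
```

Consistent with the abstract's claim, the measured drift increases both with `var_irrelevant` and with `n_irrelevant`: the weights fluctuate around the top principal component, and the fluctuation magnitude scales with the variance and dimension of the ignored subspace.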