ePoster

'Reusers' and 'Unlearners' display distinct effects of forgetting on reversal learning in neural networks

Jonas Elpelt, Jens-Bastian Eppler, Johannes Seiler, Simon Rumpel, Matthias Kaschube
Bernstein Conference 2024 (2024)
Goethe University, Frankfurt, Germany

Abstract

Neural networks need to continuously adapt their representations to survive in a constantly changing environment. Reversal learning paradigms are often used to test this behavioral flexibility [1]. The flexible adaptation of entrained representations can be mediated by unlearning and forgetting [2], and one possible factor in passive forgetting is the spontaneous remodeling of neuronal circuitry [3]. Since previous studies have presented conflicting views on the effects of forgetting during reversal learning, we consider two hypotheses:

Hypothesis I ('Reusing'): Memory of a previously learned task could be beneficial for the reversal task, because parts of the structure learned for the initial task could be reused [4,5,6]. In this case, greater forgetting of the initial task would slow learning of the reversal task.

Hypothesis II ('Unlearning'): Reusing previously learned representations could offer no advantage for reversal learning, because representations of the new task could interfere with previously learned associations [7,8,9]. In this case, greater forgetting of the initial task could even accelerate learning of the reversal task.

Here, we explore the role of forgetting initially learned task representations in reversal learning, in both mice and artificial recurrent neural networks. We observe both scenarios in mice trained on an auditory go/no-go task and in artificial neural networks trained on a binary discrimination task: beneficial reuse as well as disadvantageous unlearning of the previously learned network configuration. We find that the performance of networks on the reversal task depends on the level of forgetting and on the random initial weight configuration. Moreover, perturbations in the model suggest that a small subset of strongly changing synapses has a large impact on the reuse of representations. Our findings shed light on the use of initial representations during reversal learning and could provide insights into cognitive flexibility in both biological and artificial neural networks.
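
To make the paradigm concrete, below is a minimal sketch of the train-forget-reverse protocol described above. It is not the authors' model: as simplifying assumptions, a linear readout stands in for the recurrent network, the binary discrimination task uses fixed random input patterns, passive forgetting is modeled as additive Gaussian weight noise scaled by a hypothetical `forgetting_level` parameter, and learning speed is measured as gradient steps to criterion accuracy. The sketch only illustrates the manipulation; it does not reproduce the study's results.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_to_criterion(w, X, y, lr=0.5, criterion=0.99, max_steps=5000):
    """Logistic training on (X, y); returns final weights and steps to criterion."""
    for step in range(max_steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))            # sigmoid readout
        if np.mean((p > 0.5) == y) >= criterion:    # criterion accuracy reached
            return w, step
        w = w - lr * X.T @ (p - y) / len(y)         # cross-entropy gradient step
    return w, max_steps

n_inputs, n_patterns = 50, 20
X = rng.standard_normal((n_patterns, n_inputs))     # stimuli
y = rng.integers(0, 2, n_patterns).astype(float)    # initial go/no-go labels

w0 = 0.1 * rng.standard_normal(n_inputs)            # random initial weight configuration
w_learned, _ = train_to_criterion(w0, X, y)         # learn the initial task

for forgetting_level in (0.0, 0.5, 2.0):
    # Passive forgetting: spontaneous synaptic remodeling modeled as weight noise.
    w_forgot = w_learned + forgetting_level * rng.standard_normal(n_inputs)
    # Reversal task: identical stimuli, flipped reward contingencies.
    _, steps = train_to_criterion(w_forgot, X, 1.0 - y)
    print(f"forgetting {forgetting_level:.1f}: reversal criterion reached in {steps} steps")
```

Under Hypothesis I, reversal learning should slow as `forgetting_level` grows; under Hypothesis II, it should speed up. Which regime the sketch lands in will depend on the random initial configuration, mirroring the dependence reported in the abstract.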

Unique ID: bernstein-24/reusers-unlearners-display-distinct-c691c083