Authors & Affiliations
Samia Mohinta, Dabal Pedamonti, Martin Dimitrov, Hugo Malagon-Vina, Stephane Ciocchi, Rui Ponte Costa
Abstract
The ability to continually adapt to the environment is key for survival. How the brain learns continuously while retaining previous knowledge is not known. Using a combination of neural and behavioural data analysis together with deep learning modelling, we studied the role of the hippocampus in continual spatial reinforcement learning. First, we introduce a deep recurrent Q-learning agent consistent with the hippocampal architecture (hcDRQN), along with three control models: a non-recurrent model (hcDQN) and two standard continual learning algorithms from machine learning. We trained all of these reinforcement learning agents in a virtual environment with partial observability, mimicking the experimental neuroscience setup. Our results show that not only does the hcDRQN model achieve the best performance across tasks, but it is also the model that best captures animal behaviour during continually interleaved ego- and allocentric tasks. Next, we used demixed Principal Component Analysis (dPCA) to analyse neural recordings from 612 neurons of hippocampal CA1 as animals were trained over multiple days on the same ego-allocentric tasks. We found that CA1 neurons encode reward-, task-rule- and time-specific components. We then performed the same dPCA analyses in our reinforcement learning models, which showed that hcDRQN is the model that best captures the neural data. Finally, to test how well the different models learnt ego-allocentric tasks, we conducted a range of generalisation tests in which hcDRQN clearly outperformed all the other models. Overall, our results suggest that hippocampal networks support continual learning in the brain.
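The abstract does not specify the hcDRQN architecture, but the core idea of a recurrent Q-learning agent under partial observability can be illustrated generically: a hidden state integrates a sequence of partial observations, and Q-values are read out from that state. The following is a minimal pure-Python sketch of that idea; all dimensions, weights, and names are arbitrary placeholders for illustration, not the authors' model.

```python
import math
import random

random.seed(0)

OBS_DIM, HID_DIM, N_ACTIONS = 3, 4, 2  # toy sizes, chosen only for illustration

# Fixed random weights; in a trained agent these would be learned by gradient descent.
W_xh = [[random.uniform(-0.5, 0.5) for _ in range(OBS_DIM)] for _ in range(HID_DIM)]
W_hh = [[random.uniform(-0.5, 0.5) for _ in range(HID_DIM)] for _ in range(HID_DIM)]
W_hq = [[random.uniform(-0.5, 0.5) for _ in range(HID_DIM)] for _ in range(N_ACTIONS)]

def step(h, obs):
    """One recurrent step: the hidden state integrates the partial observation,
    and Q-values for each action are read out linearly from the new state."""
    new_h = []
    for i in range(HID_DIM):
        s = sum(W_xh[i][j] * obs[j] for j in range(OBS_DIM))
        s += sum(W_hh[i][j] * h[j] for j in range(HID_DIM))
        new_h.append(math.tanh(s))
    q = [sum(W_hq[a][i] * new_h[i] for i in range(HID_DIM)) for a in range(N_ACTIONS)]
    return new_h, q

# A short sequence of partial observations, as in a partially observable maze.
h = [0.0] * HID_DIM
for obs in ([1, 0, 0], [0, 1, 0], [0, 0, 1]):
    h, q = step(h, obs)

action = max(range(N_ACTIONS), key=lambda a: q[a])  # greedy action from recurrent Q-values
```

Because the hidden state carries information across timesteps, the Q-values at the final step depend on the whole observation history rather than only the current input, which is what distinguishes a recurrent agent (such as hcDRQN) from a feedforward one (such as hcDQN) under partial observability.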