Authors & Affiliations
Ian Hawes, Matthew Nolan
Abstract
What circuit mechanisms underlie the encoding and retrieval of spatial memory? In standard models, neurons have discrete firing fields that map onto locations, which are in turn associated with actions. It is unclear whether this feedforward mechanism is the only way in which spatial memories can be stored. Here, by reverse-engineering trained recurrent neural networks (RNNs), we identify alternative mechanisms for the storage of location memories. We train RNNs on a spatial navigation task in which an agent must navigate to a hidden goal location. Reverse-engineering these networks reveals activity similar to that recorded in the medial entorhinal cortex of mice performing the same task. The network activity is low-dimensional: track position is represented on a circular manifold, and speed is represented along the height of that manifold. To understand the computations underlying memory of multiple reward locations, we train the agent to navigate to a context-dependent reward location. Surprisingly, the manifold is reused across contexts, and memory is implemented by changing the gain with which velocity pushes the neural trajectory toward the reward zone, such that the reward zone is represented at the same manifold location across contexts. Thus, rather than storing context-specific information by associating locations with outcomes, neural circuits can instead learn context-specific spatial dynamics. Applied more generally, our results suggest a novel framework for understanding semantic memory, for example the learning of concepts, in which specific instances are associated with specific dynamics of a general low-dimensional ‘concept model’.
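The gain mechanism described above can be illustrated with a minimal sketch: velocity is integrated into a phase on a circular manifold, and scaling the integration gain in inverse proportion to the reward distance makes every context's reward zone land at the same manifold location. All names, speeds, and distances here are hypothetical illustrations, not the paper's actual model or parameters.

```python
import numpy as np

def ring_trajectory(velocity, gain):
    """Integrate gain-scaled velocity into a phase on a circular manifold."""
    return (gain * np.cumsum(velocity)) % (2 * np.pi)

# Hypothetical setup: two contexts hide the reward at different distances.
velocity = np.full(32, 0.5)              # constant speed, 0.5 units per step
contexts = {"near": 4.0, "far": 8.0}     # reward distance in each context

for name, d in contexts.items():
    gain = 2 * np.pi / d                 # context-specific velocity gain
    phase = ring_trajectory(velocity, gain)
    step = int(d / 0.5)                  # step at which distance d is reached
    print(name, phase[step - 1])         # reward reached at phase 0 (a full 2π lap)
```

With a fixed gain, the "far" reward would sit at a different phase than the "near" one; rescaling the gain per context lets a single manifold represent both memories, which is the dynamics-based alternative to storing separate location–outcome associations.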