Authors & Affiliations
Krubeal Danieli, Mikkel Lepperød, Marianne Fyhn
Abstract
During navigation, animals dynamically create rich representations of the environment, forming personalized cognitive maps. The hippocampal area CA1 features spatial cells that adapt based on behavior and internal states. Computational models have usually obtained spatial tuning by training a deep recurrent network to solve path integration, over numerous epochs, using backpropagation [1, 2, 3]. However, such methods do not align closely with the real-time, local learning used by animals. Moreover, the resulting spatial maps are oriented solely toward solving a specific task and fail to capture the full richness of non-spatial features that may be relevant for more complex behaviors.
This study introduces a rate model that dynamically generates place cells as the agent navigates the environment. Online tuning is achieved through rapid Hebbian plasticity, lateral competition triggered by a shortage of plasticity resources (modulators) [4], and homeostatic mechanisms. The model successfully creates a representation of visited areas and consolidates recurrent connections among similarly tuned neurons. Importantly, plasticity hyper-parameters such as the equilibrium concentration and decay time constant of the modulators influence the density of the place fields, and thereby the encoding of information [5, 6].
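The mechanism described above can be illustrated with a minimal sketch: a Hebbian update gated by a shared pool of modulators, so that a shortage of the resource induces lateral competition, with homeostatic weight normalization. The variable names, pool dynamics, and all parameter values here are hypothetical illustrations, not the model's actual equations.

```python
import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_cells = 64, 32
W = rng.normal(0.0, 0.1, size=(n_cells, n_inputs))  # feedforward weights

# Hypothetical modulator dynamics: a shared pool of plasticity resources.
M = 1.0       # current modulator concentration
M_eq = 1.0    # equilibrium concentration (a hyper-parameter in the text)
tau_m = 50.0  # decay time constant (a hyper-parameter in the text)
eta = 0.05    # Hebbian learning rate
dt = 1.0

def step(x, W, M):
    """One online update: Hebbian growth limited by the shared modulator pool."""
    r = np.maximum(W @ x, 0.0)   # rectified firing rates
    demand = eta * r             # plasticity each cell requests
    total = demand.sum()
    # Shortage of modulators induces lateral competition: when total demand
    # exceeds the pool, each cell is granted only its proportional share.
    grant = demand if total <= M else demand * (M / total)
    W = W + grant[:, None] * x[None, :]            # gated Hebbian update
    W /= np.linalg.norm(W, axis=1, keepdims=True)  # homeostatic normalization
    M = M - grant.sum() + dt / tau_m * (M_eq - M)  # pool relaxes toward M_eq
    return W, M, r

for _ in range(200):  # online learning along a random input stream
    W, M, r = step(rng.random(n_inputs), W, M)
```

A larger `M_eq` or shorter `tau_m` replenishes the pool faster, permitting more simultaneous plasticity; in this toy picture that is one route by which such hyper-parameters could shape place-field density.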
We conducted a quantitative analysis of the model's representational capacity and observed that the formed place representation carries a Shannon information content comparable to that of a network with hard-coded place fields. Furthermore, the model captures the topological structure of the environment, and, under normal conditions, the geometry of the neural manifold is approximately Euclidean. When salient events occur, however, the place cells become more clustered, resulting in a locally curved space [7]. Finally, we trained a reinforcement learning agent to solve goal-directed tasks in a variety of environments and gave it control over the plasticity hyper-parameters. The agent learned to navigate to a goal location and to define a policy through which it could adapt the neural geometry to better represent the space around it. The next step is to test the relevance of this ability for tasks with different input features and multiple objectives.
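One common way to quantify the Shannon information of a place representation is the Skaggs-style spatial information per spike; the abstract does not specify which measure was used, so the metric choice and the tuning curves below are illustrative assumptions.

```python
import numpy as np

def spatial_information(rates, occupancy):
    """Spatial information (bits/spike) of a tuning curve.

    rates: mean firing rate per spatial bin; occupancy: time spent per bin.
    """
    p = occupancy / occupancy.sum()          # occupancy probability per bin
    mean_rate = np.sum(p * rates)
    if mean_rate == 0:
        return 0.0
    nz = rates > 0                           # 0 * log(0) contributes nothing
    ratio = rates[nz] / mean_rate
    return np.sum(p[nz] * ratio * np.log2(ratio))

# Hypothetical 1-D environment: a hard-coded Gaussian place field versus a
# spatially flat cell with the same mean rate.
bins = np.linspace(0.0, 1.0, 50)
occupancy = np.ones_like(bins)                           # uniform exploration
place = np.exp(-((bins - 0.5) ** 2) / (2 * 0.05 ** 2))   # Gaussian field
flat = np.full_like(bins, place.mean())                  # uninformative cell
```

Here the flat cell carries zero bits per spike while the sharply tuned field carries several; comparing such per-cell values (or their population sum) against a hard-coded-field network is one way to make the comparison stated above concrete.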
Overall, this model provides a biologically plausible framework for the generation of cognitive maps reflecting what is relevant for the agent.