Authors & Affiliations
Margaret Lane, Merkourios Simos, James Priestley
Abstract
Sparse coding is an effective computational strategy for ensuring robust storage and recall of correlated memory patterns. In the hippocampus, many neurons fire in localised receptive fields that tile an animal's physical environment, forming a sparse spatial population code, but the underlying computations and their functional purpose remain unclear. Various artificial neural networks (ANNs) have been proposed to learn localised, manifold-tiling codes, purportedly due to explicit constraints on network activity levels or to specific task objectives, such as predicting future observations. Here, we show that in simple navigation simulations these models often fail to learn place fields. We introduce a novel autoencoder that is regularised by the dimensionality of its latent representations, and find that it robustly learns localised representations across tasks. Comparing model classes, we find divergent representational geometry and learning dynamics between networks that do or do not learn place fields. In particular, effective competition between neurons and compression of correlated, nearby sensory inputs drive place field formation. We highlight the interaction between the activation function and dimensionality: sparse solutions naturally arise in non-negative networks that balance input reconstruction against dimensionality expansion. Using large-scale neural recordings of mice navigating in virtual reality, we connect these learning processes to place field evolution in novel environments, where we measure pattern decorrelation and competitive dynamics that increase the dimensionality of the neural code with learning. Overall, we suggest that place coding may reflect generalised computations that optimise high-dimensional representations from the statistics of ongoing experience, supporting computational flexibility and robust memory storage.
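To make the dimensionality-regularised autoencoder concrete, the sketch below illustrates one plausible reading of the abstract: a non-negative (ReLU) encoder trained to reconstruct its inputs while a dimensionality term pushes the latent code toward higher effective dimensionality. The abstract does not specify the regulariser, so the participation-ratio measure, the class name `DimRegAutoencoder`, and the weight `beta` are all illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class DimRegAutoencoder(nn.Module):
    """Minimal sketch of an autoencoder with non-negative latents,
    trained with a dimensionality regulariser (assumed form below)."""

    def __init__(self, n_inputs: int, n_latent: int):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_inputs, n_latent), nn.ReLU())
        self.decoder = nn.Linear(n_latent, n_inputs)

    def forward(self, x):
        z = self.encoder(x)  # non-negative latent code
        return self.decoder(z), z


def participation_ratio(z, eps=1e-8):
    """Effective dimensionality of a batch of latent activity:
    PR = (sum_i lambda_i)^2 / sum_i lambda_i^2, where lambda_i are
    eigenvalues of the latent covariance. Needs batch size > 1."""
    z = z - z.mean(dim=0, keepdim=True)
    cov = z.T @ z / (z.shape[0] - 1)
    eig = torch.linalg.eigvalsh(cov).clamp(min=0)
    return eig.sum() ** 2 / (eig.pow(2).sum() + eps)


def loss_fn(model, x, beta=0.1):
    """Balance input reconstruction against dimensionality *expansion*:
    minimising this loss lowers reconstruction error while raising
    the participation ratio of the latent code."""
    x_hat, z = model(x)
    recon = ((x_hat - x) ** 2).mean()
    return recon - beta * participation_ratio(z)
```

Under these assumptions, gradient descent on `loss_fn` spreads latent variance across many directions while the ReLU non-negativity constraint keeps individual units from mixing signs, a combination that, per the abstract's argument, tends to favour sparse, localised solutions.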