Navigation Tasks
Mouse visual cortex as a limited resource system that self-learns an ecologically-general representation
Studies of the mouse visual system have revealed a variety of visual brain areas in a roughly hierarchical arrangement, together with a multitude of behavioral capacities, ranging from stimulus-reward associations to goal-directed navigation and object-centric discriminations. However, an overall understanding of how mouse visual cortex is organized, and how this organization supports visual behaviors, is still lacking. Here, we take a computational approach to help address these questions, providing a high-fidelity quantitative model of mouse visual cortex. By analyzing the factors contributing to model fidelity, we identify key principles underlying the organization of mouse visual cortex. Structurally, we find that comparatively low resolution and a shallow architecture are both important for model fidelity. Functionally, we find that models trained with task-agnostic, unsupervised objective functions based on contrastive embeddings are substantially better than models trained with supervised objectives. Finally, the unsupervised objective builds a general-purpose visual representation that enables better transfer to out-of-distribution visual scene-understanding and reward-based navigation tasks. Our results suggest that mouse visual cortex is a low-resolution, shallow network that makes the best use of the mouse’s limited resources to create a lightweight, general-purpose visual system, in contrast to the deep, high-resolution, and more task-specific visual system of primates.
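To make the two structural and functional ingredients named above concrete, the sketch below shows a deliberately shallow, low-resolution convolutional encoder trained with a contrastive (InfoNCE-style) unsupervised objective. This is a minimal illustration of the general technique, not the paper's actual model: the layer sizes, input resolution, augmentation scheme, and hyperparameters are all assumptions chosen for brevity.

```python
# Illustrative sketch only (PyTorch): a shallow, low-resolution encoder plus a
# SimCLR-style contrastive (InfoNCE) loss. All sizes/hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ShallowEncoder(nn.Module):
    """A deliberately small encoder operating on low-resolution (e.g. 64x64) input."""
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.projector = nn.Linear(64, embed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)
        return F.normalize(self.projector(h), dim=1)  # unit-norm embeddings

def info_nce_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """Contrastive objective: each embedding's positive is the other augmented view
    of the same image; every other image in the batch serves as a negative."""
    logits = z1 @ z2.t() / temperature     # pairwise cosine similarities (unit-norm inputs)
    targets = torch.arange(z1.size(0))     # the matching index is the positive pair
    return F.cross_entropy(logits, targets)

# Usage: two augmented views of the same low-resolution batch (random stand-in data here).
encoder = ShallowEncoder()
x1 = torch.rand(8, 3, 64, 64)
x2 = torch.rand(8, 3, 64, 64)
loss = info_nce_loss(encoder(x1), encoder(x2))
loss.backward()
```

Because the objective never references labels, the resulting representation is task-agnostic, which is the property the abstract credits for improved transfer to out-of-distribution scene-understanding and navigation tasks.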
Abstraction and Inference in the Prefrontal Hippocampal Circuitry
The cellular representations and computations that allow rodents to navigate in space have been described with beautiful precision. In this talk, I will show that some of these same computations can be found in humans performing tasks that appear very different from spatial navigation. I will describe some theory that allows us to think about spatial and non-spatial problems in the same framework, and I will try to use this theory to give a new perspective on the beautiful spatial computations that inspired it. The overall goal of this work is to find a framework in which we can talk about complicated non-spatial inference problems with the same precision that is currently available only for space.
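As a purely illustrative example of a formalism that treats spatial and non-spatial problems uniformly (it is not necessarily the specific theory described in the talk), the successor representation summarizes expected discounted future state occupancy from a transition structure, regardless of whether the states are locations in an arena or stages of an abstract task. The sketch below computes it for a small graph; the graph and discount factor are assumptions chosen for illustration.

```python
# Illustrative sketch only: the successor representation (SR) as one example of a
# framework shared by spatial and non-spatial inference. Not the speaker's theory.
import numpy as np

def successor_representation(T: np.ndarray, gamma: float = 0.9) -> np.ndarray:
    """Expected discounted future occupancy: M = (I - gamma * T)^(-1),
    where T is a row-stochastic transition matrix over states."""
    n = T.shape[0]
    return np.linalg.inv(np.eye(n) - gamma * T)

# A 4-state ring: read the states as corners of an arena (spatial) or as stages
# of an abstract task sequence (non-spatial); the SR computation is identical.
T = np.array([
    [0.0, 0.5, 0.0, 0.5],
    [0.5, 0.0, 0.5, 0.0],
    [0.0, 0.5, 0.0, 0.5],
    [0.5, 0.0, 0.5, 0.0],
])
M = successor_representation(T)
print(np.round(M, 2))
```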