Internally Organized Abstract Task Maps in the Mouse Medial Frontal Cortex
New tasks are often similar in structure to old ones. Animals that take advantage of such conserved or “abstract” task structures can master new tasks with minimal training. To understand the neural basis of this abstraction, we developed a novel behavioural paradigm for mice, the “ABCD” task, and recorded from their medial frontal neurons as they learned. Animals learned multiple tasks in which they had to visit four rewarded locations on a spatial maze in sequence, defining a sequence of four “task states” (ABCD). Tasks shared the same circular transition structure (… ABCDABCD …) but differed in the spatial arrangement of rewards. As well as improving across tasks, mice inferred that A followed D (i.e. completed the loop) on the very first trial of a new task. This “zero-shot inference” was only possible because the animals had learned the abstract structure of the task. Across tasks, individual medial frontal cortex (mFC) neurons maintained their tuning to the phase of an animal’s trajectory between rewards but not their tuning to task states, even in the absence of spatial tuning. Intriguingly, groups of mFC neurons formed modules of coherently remapping neurons that maintained their tuning relationships across tasks. These tuning relationships were expressed as replay/preplay during sleep, consistent with an internal organisation of activity into multiple, task-matched ring attractors. Remarkably, these modules were anchored to spatial locations: neurons were tuned to specific task-space “distances” from a particular spatial location. These newly discovered “Spatially Anchored Task clocks” (SATs) suggest a novel algorithm for solving abstraction tasks. Using computational modelling, we show that SATs can perform zero-shot inference on new tasks in the absence of plasticity and guide optimal policy in the absence of continual planning. These findings provide novel insights into the frontal mechanisms mediating abstraction and flexible behaviour.
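To make the zero-shot claim concrete, here is a deliberately minimal sketch, my own toy illustration rather than the authors' model: a ring attractor reduced to a single activity bump stepping around the four task states. Because the ABCD loop is conserved across tasks, a clock carried over from an old task predicts that A follows D on a brand-new spatial layout with no plasticity. The `TaskClock` class and the coordinates in `new_task` are hypothetical placeholders.

```python
# Toy sketch (not the authors' model): a ring attractor abstracted to its
# fixed points. The conserved circular ABCD structure lives in the ring
# itself, so closing the loop on a new maze needs no weight changes.

N_STATES = 4
STATES = ["A", "B", "C", "D"]

class TaskClock:
    """One activity bump that steps around the four task states."""

    def __init__(self):
        self.phase = 0  # bump starts at state A

    def advance(self):
        # Each reward pushes the bump one step around the ring.
        self.phase = (self.phase + 1) % N_STATES

    def predicted_next_state(self):
        return STATES[(self.phase + 1) % N_STATES]

# A new task: the same ABCD loop, but rewards at unfamiliar maze
# locations (coordinates are arbitrary placeholders).
new_task = {"A": (2, 2), "B": (0, 4), "C": (4, 0), "D": (1, 1)}

clock = TaskClock()
clock.advance()  # reward collected at A -> bump at B
clock.advance()  # reward collected at B -> bump at C
clock.advance()  # reward collected at C -> bump at D

# On the very first trial of the new task, the clock already predicts
# that A follows D -- zero-shot inference without any plasticity.
next_state = clock.predicted_next_state()
print(next_state, "at", new_task[next_state])  # -> A at (2, 2)
```

The point of the sketch is only that structure and content are factored apart: the ring encodes the abstract transition structure, while the task-specific reward locations are looked up separately, so swapping the spatial layout leaves the inference intact.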
Smart perception? Gestalt grouping, perceptual averaging, and memory capacity
It seems we see the world in full detail. However, the eye is not a camera, nor is the brain a computer. Severe metabolic constraints render us unable to encode more than a fraction of the information available in each glance. Instead, our illusion of stable and complete perception is achieved through parsimonious representations that exploit the natural order inherent in the surrounding environment. I will begin by discussing previous behavioral work from our lab demonstrating one such strategy, by which the visual system represents the average properties of Gestalt-grouped sets of individual objects, warping individual object representations toward the Gestalt-defined mean. I will then discuss ongoing work that combines a behavioral index of averaging Gestalt-grouped information, established in our previous work, with an ERP index of visual short-term memory (VSTM) capacity, the contralateral delay activity (CDA), to measure whether the Gestalt-grouping and perceptual-averaging strategy boosts memory capacity above the classic “four-item” limit. Finally, I will outline our pre-registered study to determine whether this perceptual strategy is indeed engaged in a “smart” manner under normal circumstances, or whether it trades fidelity for capacity by perceptually averaging even in trials with only four items that could otherwise be represented individually.
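As a concrete illustration of the averaging strategy, the sketch below implements a simple weighted-average rule. This is my own toy model: the `remembered` function, the weight `w`, and the stimulus values are hypothetical, and the lab's actual paradigm and analyses will differ. Each item's remembered value is pulled toward its Gestalt-group mean, which is exactly the fidelity-for-capacity trade the pre-registered study probes.

```python
import numpy as np

# Toy model (assumed weighted-average rule, not the lab's analysis):
# grouping items by a Gestalt cue lets the system store one cheap summary
# statistic, and each remembered item is warped toward that group mean.

rng = np.random.default_rng(0)

def remembered(sizes, w=0.3):
    """Pull each item's representation toward the Gestalt-group mean.

    w = 0 stores every item faithfully; w = 1 keeps only the average.
    """
    return (1 - w) * sizes + w * sizes.mean()

# Six item sizes belonging to one Gestalt group (e.g. grouped by proximity).
true_sizes = rng.uniform(1.0, 5.0, size=6)
reports = remembered(true_sizes)

print("true     :", np.round(true_sizes, 2))
print("reported :", np.round(reports, 2))
# Items above the group mean come out underestimated and items below it
# overestimated: a compression-toward-the-mean signature of averaging,
# and the fidelity cost at stake when only four items are shown.
```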