ePoster

Effect of experience on context-dependent learning in recurrent networks

John Bowler, Hyunwoo Lee, James Heys
Bernstein Conference 2024
Goethe University, Frankfurt, Germany

Abstract

The entorhinal cortex (EC) sits at a critical junction where sensory input and higher-level processing converge and is necessary for learning interval timing and relating durations in distinct temporal contexts [1]. “Time cells” in the medial subdivision of the EC (MEC) tile task durations during a temporal variant of the Delay Non-match to Sample (tDNMS) task, in which animals must detect non-matching durations. Over training, subpopulations of MEC time cells become context-dependent, showing differential activity across trial types. While attractor models of EC activity predict the experimentally observed sequential, time cell-like responses [2], it is unclear what mechanisms drive network activity along learned context-dependent trajectories. To answer this question, we trained recurrent neural networks (RNNs) on the tDNMS task (Fig 1a,b). The units within the network exhibit context-dependent sequences of sparsely active time fields, highly reminiscent of time cell activity [1] (Fig 1c,d). Further, the training curriculum has a lasting impact on model performance. Pre-training the network on both non-match temporal contexts results in faster learning of the full task with fewer errors (Fig 1e). The pre-trained model is also more robust to noise and to penalties on overall network activity. This pre-training closely parallels the shaping tasks used in animal experiments. Training an RNN on a simpler Cue-Response (CR) task also results in the formation of temporal sequences spanning the cue-presentation and response windows (Fig 1f). Sorting the units in the network by their peak firing time reveals that this activity is driven in part by an asymmetric pattern of local excitation and surround inhibition in the recurrent weight matrix (Fig 1j,k). These connection weights agree with attractor models in which asymmetric connectivity pushes activity through the network. Despite the similar sequential activation pattern, pre-training an RNN on the CR task impairs its ability to subsequently learn the tDNMS task (Fig 1g), indicating that additional structure embedded in the recurrent connectivity is necessary to learn context-dependent trajectories (Fig 1h,i). Our results reveal how prior experience with distinct contexts shapes network connectivity in ways that facilitate learning of future, more complex tasks. Additionally, the RNN model suggests novel protocols for shaping behavior prior to starting animal experiments.
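To make the modeling approach concrete, below is a minimal, hypothetical sketch (not the authors' code) of the kind of pipeline described above: a vanilla rate RNN trained on a toy tDNMS-style task, after which hidden units are sorted by their peak activation time and the recurrent weight matrix is reordered to look for asymmetric local-excitation/surround-inhibition structure. The task timing, network size, training schedule, and names such as RateRNN and make_trial are illustrative assumptions, not details taken from the poster.

```python
# Minimal sketch (assumptions, not the authors' code): train a vanilla RNN on a
# toy temporal Delay Non-match to Sample (tDNMS) task, then sort hidden units by
# peak activation time and reorder the recurrent weight matrix for inspection.
import numpy as np
import torch
import torch.nn as nn

T = 60                    # time steps per trial (assumed)
SHORT, LONG = 10, 25      # candidate cue durations in steps (assumed)

def make_trial():
    """One toy tDNMS trial: two cue intervals; respond only if durations differ."""
    d1, d2 = np.random.choice([SHORT, LONG], size=2)
    x = np.zeros((T, 1), dtype=np.float32)
    x[:d1, 0] = 1.0                          # first interval cue
    x[d1 + 5:d1 + 5 + d2, 0] = 1.0           # second interval cue after a gap
    y = np.zeros((T, 1), dtype=np.float32)
    y[-10:, 0] = float(d1 != d2)             # report "non-match" at trial end
    return x, y

class RateRNN(nn.Module):
    def __init__(self, n_hidden=128):
        super().__init__()
        self.rnn = nn.RNN(1, n_hidden, nonlinearity="tanh", batch_first=True)
        self.readout = nn.Linear(n_hidden, 1)

    def forward(self, x):
        h, _ = self.rnn(x)                   # hidden states: (batch, T, n_hidden)
        return self.readout(h), h

model = RateRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):                     # training length is illustrative
    xs, ys = zip(*[make_trial() for _ in range(32)])
    x, y = torch.tensor(np.stack(xs)), torch.tensor(np.stack(ys))
    out, _ = model(x)
    loss = loss_fn(out, y)
    opt.zero_grad(); loss.backward(); opt.step()

# Sort units by the time of their peak activity on a probe trial and reorder the
# recurrent weight matrix accordingly (analogous to the analysis behind Fig 1j,k).
with torch.no_grad():
    x, _ = make_trial()
    _, h = model(torch.tensor(x[None]))
    peak_times = h[0].abs().argmax(dim=0)            # peak time per hidden unit
    order = torch.argsort(peak_times)
    W_sorted = model.rnn.weight_hh_l0[order][:, order]
    print(W_sorted.shape)                            # (128, 128), ready to plot
```

In such a sorted weight matrix, a band of positive weights offset to one side of the diagonal, flanked by net inhibition, would be consistent with the asymmetric local-excitation/surround-inhibition pattern the abstract describes as pushing activity forward through the sequence.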

Unique ID: bernstein-24/effect-experience-context-dependent-6c48dbb2