ePoster

Time cell encoding in deep reinforcement learning agents depends on mnemonic demands

Dongyan Lin, Blake Richards
COSYNE 2022 (2022)
Lisbon, Portugal

Abstract

The representation of “what happened when” is central to encoding episodic and working memories. Recently discovered hippocampal “time cells” are theorized to provide the neural substrate for such representations by forming distinct sequences that encode both elapsed time and sensory content. However, multiple neurophysiological studies have presented contradictory results on the role of “time cells” in memory, and little work has directly addressed this discrepancy. Here, we hypothesize that the discrepancy arises because different studies use tasks with different cognitive demands, which affect what the brain optimizes for and thus the emergence and informativeness of “time cells”. To test this, we trained deep reinforcement learning (DRL) agents on a simulated trial-unique nonmatch-to-location (TUNL) task and analyzed the activity of the artificial recurrent units using neuroscience-based methods. We show that, after training, representations resembling “time cells” naturally emerged in the artificial recurrent units. Furthermore, using a modified version of the TUNL task that did not require the agent to remember the sample during the delay period, we show that memory demands are necessary for “time cells” to encode information about the sensory stimuli over long periods of time. Our findings help reconcile current discrepancies regarding the involvement of “time cells” in memory encoding by providing a computational, normative explanation. Our modelling results also yield concrete experimental predictions for future studies.
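
The abstract does not detail the analysis pipeline, but the sketch below illustrates one standard way to screen recurrent units for “time cell”-like tuning: record hidden states across the delay period and flag units whose activity peaks at a consistent time across trials. All names, array shapes, and the peak-reliability criterion here are illustrative assumptions, not the authors' method.

# Illustrative sketch (not the authors' code): screening recurrent units
# for "time cell"-like tuning from hidden states recorded during the delay.
# Assumed input: `states` with shape (n_trials, n_timesteps, n_units).
import numpy as np

def find_time_cells(states, tolerance=1, reliability_threshold=0.5):
    """Return indices of units whose activity peaks at a consistent delay time.

    A unit is flagged if, on at least `reliability_threshold` of trials,
    its peak activity falls within `tolerance` steps of its median peak time.
    """
    n_trials, n_timesteps, n_units = states.shape
    flagged = []
    for u in range(n_units):
        peaks = states[:, :, u].argmax(axis=1)   # per-trial peak timestep
        median_peak = np.median(peaks)
        reliability = np.mean(np.abs(peaks - median_peak) <= tolerance)
        if reliability >= reliability_threshold:
            flagged.append(u)
    return flagged

# Toy usage; real inputs would be hidden states from a trained DRL agent.
rng = np.random.default_rng(0)
states = rng.random((100, 40, 128))              # trials x timesteps x units
print(find_time_cells(states))

A sequence-like tiling of peak times across flagged units, and whether their activity also discriminates the sample stimulus, would correspond to the joint coding of elapsed time and sensory content that the abstract describes.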

Unique ID: cosyne-22/time-cell-encoding-deep-reinforcement-040bde41