ePoster

Automated discovery of interpretable cognitive programs underlying reward-guided behavior

Pablo Samuel Castro, Nenad Tomasev, Ankit Anand, Navodita Sharma, Alexander Novikov, Kuba Perlin, Noemi Elteto, Siddhant Jain, Kyle Levin, Maria Eckstein, Will Dabney, Nathaniel Daw, Kimberly Stachenfeld, Kevin J Miller
COSYNE 2025 (2025)
Montreal, Canada

Abstract

A major goal of neuroscience is to discover mathematical models that describe how the brain implements cognitive processes like learning and decision-making. Historically, the field has relied on handcrafted models, often motivated by normative considerations about optimal choice behavior, which are then modified to fit the idiosyncrasies of animal behavior. These models require substantial effort to devise, and are often limited in their ability to fully predict animal behavior. However, these programmatic models provide insight and afford interpretation: variables like “prediction error” are computationally meaningful. Data-driven approaches invert this process by considering a very large model space, in conjunction with sufficiently large datasets, in the hope of discovering models that better capture the data [1]. A key challenge at the interface of data-driven and model-driven approaches is articulating a comprehensive model space in which models are both identifiable from data and human-interpretable as scientific theories [2]. Our goal is to discover programmatic models that describe reward-guided behavior by repurposing FunSearch [3], a recently developed tool that leverages Large Language Models (LLMs) to generate and evolve Python programs. We find that CogFunSearch reliably discovers programs that match or outperform the state-of-the-art RL cognitive model for predicting choices by rats in a reward-guided decision-making task (2-arm drifting bandit) [4]. CogFunSearch is capable of leveraging human-provided information in the prompt, with more informative prompts resulting in better discovered programs. We characterize the complexity of the discovered programs, and find that while some programs are hard to parse, other programs yield intriguing insights. Broadly, these results provide early insights into the use of LLM-based program discovery tools to identify models of cognition.
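To make the setup concrete, the sketch below shows the kind of candidate "cognitive program" a FunSearch-style loop could evolve for a 2-arm drifting bandit, together with a log-likelihood score for ranking candidates against recorded rat choices. This is a minimal illustration, not the authors' code: the function names, parameters, and the simple Q-learning-plus-softmax form are assumptions chosen to resemble the handcrafted RL baselines the abstract refers to.

```python
# Illustrative sketch only (assumed names and model form, not CogFunSearch itself).
import numpy as np

def candidate_program(choices, rewards, alpha=0.3, beta=3.0):
    """Return per-trial probabilities of choosing arm 1.

    choices: array of 0/1 arm choices; rewards: array of 0/1 outcomes.
    A delta-rule value update with a softmax choice rule -- roughly the
    class of handcrafted cognitive models discovered programs are compared to.
    """
    q = np.zeros(2)                       # value estimates for the two arms
    p_choose_1 = np.zeros(len(choices))
    for t, (c, r) in enumerate(zip(choices, rewards)):
        logits = beta * q                 # choice probabilities from current values
        p = np.exp(logits - logits.max())
        p /= p.sum()
        p_choose_1[t] = p[1]
        q[c] += alpha * (r - q[c])        # "prediction error" update after the outcome
    return p_choose_1

def negative_log_likelihood(program, choices, rewards):
    """Score a candidate by how well it predicts the observed choices."""
    choices = np.asarray(choices)
    p1 = program(choices, np.asarray(rewards))
    p_obs = np.where(choices == 1, p1, 1.0 - p1)
    return -np.sum(np.log(np.clip(p_obs, 1e-12, None)))
```

In a FunSearch-style loop, an LLM would propose many variants of `candidate_program`, each would be scored by a function like `negative_log_likelihood` on held-out behavioral data, and the best-scoring programs would seed the next round of prompts.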

Unique ID: cosyne-25/automated-discovery-interpretable-f2ce9341