ePoster

Using transfer learning to identify a neural system's algorithm

John Morrison, Benjamin Peters
COSYNE 2025 (2025)
Montreal, Canada

Abstract

Algorithms generate input-output mappings through step-by-step operations on representations. Cognitive scientists use algorithms to explain mental processes. For example, they use tree-search algorithms to explain planning, reinforcement learning algorithms to explain exploration, and Bayesian algorithms to explain categorization. The standard approach is to search for parts in the brain corresponding to the steps of the algorithm. But we haven’t been able to find many such parts. This has led some to deny that algorithms are useful to systems neuroscience, and to interpret neural systems using other frameworks. But this comes at a cost, because attributing algorithms to neural systems would help us predict, explain, and control them. Our alternative approach is to identify a neural system's algorithm by assessing how quickly it learns alternative input-output mappings, that is, its transfer learning profile. We use artificial neural networks to demonstrate that this proposal productively applies to multiple networks and tasks. In one experiment, we use transfer learning to determine whether a network implements 2x+2 or 2(x+1). In another experiment, we use transfer learning to determine whether a network separately or jointly detects two independent features of an object. We believe that transfer learning is a promising framework for integrating algorithms and neural networks, and thus for integrating cognitive science and systems neuroscience.
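As a rough illustration of the transfer-learning probe described in the abstract (not the authors' actual networks, tasks, or results), the sketch below compares two toy parametrizations of the same mapping y = 2x + 2: one structured as "multiply, then add" (2x + 2) and one as "add, then multiply" (2(x + 1)). It measures how many gradient steps each needs to fit two transfer targets, where each target is a one-parameter edit of exactly one parametrization. The model names, targets, and hyperparameters are illustrative assumptions; the specific step counts depend on the learning rate and data, and the point is only the shape of the resulting transfer profile.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=500)  # toy training inputs

def steps_to_fit(forward, grads, init, target_fn, lr=0.05, tol=1e-4, max_steps=10000):
    """Gradient-descent steps needed to bring mean squared error below tol."""
    p = dict(init)
    y = target_fn(x)
    for step in range(max_steps):
        err = forward(p, x) - y
        if np.mean(err ** 2) < tol:
            return step
        # simultaneous update of all parameters from the same error signal
        updates = {k: lr * np.mean(err * g(p, x)) for k, g in grads.items()}
        for k, u in updates.items():
            p[k] -= u
    return max_steps

# Parametrization A: y = a*x + b ("multiply, then add"), initialized as 2x + 2
model_A = dict(
    forward=lambda p, x: p["a"] * x + p["b"],
    grads={"a": lambda p, x: x, "b": lambda p, x: np.ones_like(x)},
    init={"a": 2.0, "b": 2.0},
)

# Parametrization B: y = c*(x + d) ("add, then multiply"), initialized as 2(x + 1)
model_B = dict(
    forward=lambda p, x: p["c"] * (x + p["d"]),
    grads={"c": lambda p, x: x + p["d"], "d": lambda p, x: np.full_like(x, p["c"])},
    init={"c": 2.0, "d": 1.0},
)

# Two transfer targets; each changes only the multiplier of one parametrization:
#   3x + 2    is a one-parameter edit of "2x + 2"
#   3(x + 1)  is a one-parameter edit of "2(x + 1)"
targets = {
    "3x + 2": lambda x: 3.0 * x + 2.0,
    "3(x + 1)": lambda x: 3.0 * (x + 1.0),
}

for name, model in [("A: 2x + 2", model_A), ("B: 2(x + 1)", model_B)]:
    profile = {t: steps_to_fit(model["forward"], model["grads"], model["init"], fn)
               for t, fn in targets.items()}
    print(name, "transfer profile (steps to criterion):", profile)
```

Both models start from the same input-output behavior, so only their adaptation speeds across the alternative targets, that is, their transfer learning profiles, can distinguish them; that relative pattern is what the abstract proposes to read off a neural system.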

Unique ID: cosyne-25/using-transfer-learning-identify-fda07b67