ePoster

Measuring and Controlling Solution Degeneracy across Task-Trained RNNs

Ann Huang and 2 co-authors

Presenting Author

Conference
COSYNE 2025 (2025)
Montreal, Canada



Authors & Affiliations

Ann Huang, Satpreet Singh, Kanaka Rajan

Abstract

Task-trained recurrent neural networks (RNNs) are versatile models of dynamical processes widely used in machine learning and neuroscience. While RNNs are easily trained to perform a wide range of tasks, the nature and extent of the degeneracy in the resulting solutions (i.e., the variability across trained RNNs) remain poorly understood. Here, we provide a unified framework for analyzing degeneracy across three levels: behavior, neural dynamics, and weight space. We analyzed RNNs trained on diverse tasks spanning machine learning and neuroscience domains, including N-bit flip-flop, sine wave generation, delayed discrimination, and path integration. Our key finding is that the variability across RNN solutions depends primarily on network capacity and task complexity. We introduce information-theoretic measures to quantify task complexity and demonstrate that increasing task complexity consistently reduces degeneracy in neural dynamics and generalization behavior while increasing degeneracy in weight space. Furthermore, we provide several strategies to control solution degeneracy at both the dynamical and weight levels, such as changing task complexity, adding auxiliary losses, modifying network capacity, and imposing structural constraints on RNN training.
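The abstract does not specify which information-theoretic complexity measures the authors use, but the general idea of scoring a task by the entropy of its target distribution can be sketched as follows. This is a hypothetical illustration, not the poster's method: it estimates the Shannon entropy (in bits) of a task's target outputs via histogram binning, so a constant-output task scores 0 while a task with widely varying targets scores higher.

```python
import numpy as np

def sequence_entropy(targets, n_bins=8):
    """Estimate Shannon entropy (bits) of target output values by
    histogram binning -- a crude proxy for task output complexity.
    (Illustrative only; not the measure from the poster.)"""
    targets = np.asarray(targets, dtype=float).ravel()
    counts, _ = np.histogram(targets, bins=n_bins)
    p = counts / counts.sum()
    p = p[p > 0]  # drop empty bins before taking the log
    return float(-(p * np.log2(p)).sum())

# A constant target (trivial task) has zero entropy; a target
# spread uniformly across the bins approaches log2(n_bins) bits.
flat = sequence_entropy(np.zeros(1000))
varied = sequence_entropy(np.linspace(-1.0, 1.0, 1000))
```

Under this kind of measure, "increasing task complexity" (e.g., more flip-flop bits or more target frequencies) would raise the entropy score, which the poster links to lower degeneracy in dynamics and higher degeneracy in weights.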

Unique ID: cosyne-25/measuring-controlling-solution-degeneracy-f236df26