ePoster

Probing learning dynamics via task-relevant manifold untangling: A case study in lazy versus rich learning

SueYeon Chung, Chi-Ning Chou, Hang Le
COSYNE 2025 (2025)
Montreal, Canada

Abstract

Integrating task-relevant information into neural representations is essential for both biological and artificial intelligence. However, current experimental limitations in tracking synaptic weight changes make it challenging to understand learning mechanisms in animals. To address this, we propose a framework based on neural manifold geometry to study representation learning directly from neural activity. A neural manifold represents the collection of population responses to the same task condition (e.g., a cat manifold for responses to images of cats) and can be estimated from recorded activity. We hypothesize that, as learning progresses, these manifolds gradually untangle, allowing downstream neurons to better differentiate task conditions. Analyzing manifold properties (e.g., radius, dimension, and alignment) offers deeper insights into the structure and dynamics of learning. We apply our framework to investigate lazy learning (where the circuit primarily adjusts readout weights) versus rich learning (where it learns task-relevant internal representations; Chizat et al., 2019). Our contributions are threefold. First, we use manifold capacity theory to demonstrate mathematically and empirically that task-relevant manifolds progressively untangle as richer representations develop. Second, we show that manifold geometric measures capture differences in learned features and representational changes throughout training, helping characterize learning rules at the manifold level. Finally, we apply our method to recurrent neural networks (RNNs) solving cognitive neuroscience tasks, revealing how changes in manifold geometry reflect the influence of circuit properties, such as how initial weight rank shapes learning dynamics. In summary, our data-driven framework leverages manifold geometry to quantify representation learning and provides a basis for formulating testable hypotheses about candidate learning mechanisms.
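
As a concrete illustration of the geometric measures named above, the following minimal Python/NumPy sketch (not the authors' code; the simulated responses, condition labels, and the participation-ratio definition of dimension are illustrative assumptions) estimates the radius and dimensionality of two condition manifolds from a trials-by-neurons activity matrix.

```python
# Illustrative only: simulate population responses for two task conditions
# and estimate each condition's manifold radius and dimensionality.
import numpy as np

rng = np.random.default_rng(0)

# 200 trials x 50 neurons; two task conditions, i.e., two neural manifolds.
n_trials, n_neurons = 200, 50
labels = rng.integers(0, 2, size=n_trials)
responses = rng.normal(size=(n_trials, n_neurons)) + 3.0 * labels[:, None]

def manifold_geometry(points):
    """Radius (mean distance to centroid) and participation-ratio dimension."""
    deltas = points - points.mean(axis=0)
    radius = np.sqrt((deltas ** 2).sum(axis=1)).mean()
    # Participation ratio: (sum of covariance eigenvalues)^2 / sum of squares.
    eig = np.linalg.eigvalsh(np.cov(deltas, rowvar=False))
    dimension = eig.sum() ** 2 / (eig ** 2).sum()
    return radius, dimension

for c in (0, 1):
    r, d = manifold_geometry(responses[labels == c])
    print(f"condition {c}: radius ~ {r:.2f}, participation ratio ~ {d:.1f}")
```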

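The lazy-versus-rich contrast itself can be reproduced in miniature. The sketch below is a hedged illustration in the spirit of Chizat et al. (2019), not the authors' setup: it trains the same two-layer ReLU network at two output scales alpha, with the learning rate rescaled by 1/alpha^2 so the function-space dynamics are comparable across scales. At large alpha the internal weights barely move (lazy), while at alpha = 1 they change appreciably (rich). The toy task, network size, and hyperparameters are arbitrary choices.

```python
# Illustrative only: lazy vs. rich training of a two-layer ReLU network,
# following the output-scaling convention of Chizat et al. (2019).
import numpy as np

data_rng = np.random.default_rng(0)

# Toy 1-D regression task.
X = data_rng.uniform(-1, 1, size=(64, 1))
y = np.sin(3 * X[:, 0])
m = 128  # hidden units

def train(alpha, steps=4000, lr=0.05, seed=1):
    rng = np.random.default_rng(seed)    # same initialization for every alpha
    W1 = rng.normal(size=(m, 1))
    b1 = rng.uniform(-1, 1, size=m)
    w2 = rng.normal(size=m) / np.sqrt(m)
    W1_init = W1.copy()

    def hidden(W1_, b1_):
        return np.maximum(W1_ @ X.T + b1_[:, None], 0.0)  # (m, n_samples)

    # Centered model f = alpha * (g(theta) - g(theta_0)), so f = 0 at init.
    g0 = w2 @ hidden(W1, b1)

    for _ in range(steps):
        h = hidden(W1, b1)
        f = alpha * (w2 @ h - g0)
        err = alpha * (f - y) / len(y)   # d(MSE loss)/dg, averaged over samples
        mask = w2[:, None] * (h > 0)     # backprop through the ReLU
        grad_w2 = h @ err
        grad_W1 = (mask * err) @ X
        grad_b1 = mask @ err
        # Lazy-training convention: step size scaled by 1/alpha^2 so that
        # function-space dynamics match across alpha to first order.
        W1 -= lr / alpha**2 * grad_W1
        b1 -= lr / alpha**2 * grad_b1
        w2 -= lr / alpha**2 * grad_w2

    loss = 0.5 * np.mean((alpha * (w2 @ hidden(W1, b1) - g0) - y) ** 2)
    moved = np.linalg.norm(W1 - W1_init) / np.linalg.norm(W1_init)
    return loss, moved

for alpha in (1.0, 100.0):
    loss, moved = train(alpha)
    print(f"alpha={alpha:>5.0f}: loss {loss:.4f}, "
          f"relative change in input weights {moved:.4f}")
```
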
Unique ID: cosyne-25/probing-learning-dynamics-task-relevant-9c1e79a8