Shared representations
The Neural Race Reduction: Dynamics of nonlinear representation learning in deep architectures
What is the relationship between task, network architecture, and population activity in nonlinear deep networks? I will describe the Gated Deep Linear Network framework, which schematizes how pathways of information flow within an architecture shape its learning dynamics. Because of the gating, these networks can compute nonlinear functions of their input. We derive an exact reduction of the learning dynamics and, for certain cases, exact solutions. The reduction takes the form of a neural race with an implicit bias towards shared representations, which in turn govern the model’s ability to generalize systematically, multi-task, and transfer. We show how appropriate network architectures can help factorize and abstract knowledge. Together, these results begin to shed light on the links between architecture, learning dynamics, and network performance.
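As a concrete illustration of the kind of model the framework covers, here is a minimal sketch of a two-layer gated deep linear network, written in plain NumPy. All names, dimensions, and the particular gating scheme are hypothetical; this is an illustration of the general idea from the abstract, not the authors' implementation. Input-dependent binary gates multiply an otherwise linear pathway, so each gating pattern selects a linear subnetwork while the overall input-output map is nonlinear.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-layer gated deep linear network:
#   y = W2 @ (g(context) * (W1 @ x))
# The binary gates g depend on the input's context, so each gating
# pattern picks out a linear pathway, but the map from input to
# output is nonlinear overall.
d_in, d_hid, d_out = 4, 8, 3
W1 = rng.normal(scale=0.1, size=(d_hid, d_in))
W2 = rng.normal(scale=0.1, size=(d_out, d_hid))

def gates(context):
    """Input-dependent gates: each context opens half the hidden units."""
    g = np.zeros(d_hid)
    g[context * (d_hid // 2):(context + 1) * (d_hid // 2)] = 1.0
    return g

def forward(x, context):
    h = gates(context) * (W1 @ x)  # gated hidden layer
    return W2 @ h                  # linear readout

def sgd_step(x, y, context, lr=0.1):
    """One gradient step on squared error for a single example."""
    global W1, W2
    g = gates(context)
    h = g * (W1 @ x)
    err = W2 @ h - y
    grad_W2 = np.outer(err, h)                # dL/dW2
    grad_W1 = np.outer(g * (W2.T @ err), x)   # dL/dW1, masked by gates
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1

x = rng.normal(size=d_in)
y = rng.normal(size=d_out)
for _ in range(500):
    sgd_step(x, y, context=0)
print(np.round(forward(x, 0) - y, 4))  # residual shrinks toward zero
```

Note that for a fixed gating pattern the active pathway is linear, so the gradients above are those of a deep linear network restricted to that pathway; on my reading of the abstract, this per-pattern linearity is what makes an exact reduction of the learning dynamics possible.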
Discovering individual differences in shared representations of neural dynamics and ethological behaviors
FENS Forum 2024