ePoster

Insight moments in neural networks and humans

Anika T. Löwe, Andrew Saxe, Nicolas W. Schuck, Léo Touzo, Paul S. Muhle-Karbe, Christopher Summerfield
COSYNE 2022 (2022)
Lisbon, Portugal
Presented: Mar 17, 2022

Abstract

The success of neural networks derives from their ability to learn useful representations of observed inputs. However, the dynamics of when and how fast a network discovers good representations are not well understood, even when training is governed by gradient descent. The potential complexity of learning dynamics is particularly apparent in so-called insight moments, when useful task representations are discovered suddenly after a delay during which no learning is noticeable. Such insights are commonly observed in animals and humans, where they are often taken to reflect explicit strategy discovery or shifts of attention, but whether they arise in neural networks trained with incremental gradient descent is unknown. Here, we study how and when insight moments arise in neural networks and humans. We employ a two-alternative forced-choice task in which input feature relevance changes after initial training, such that previously learned input representations can be relearned to improve behavioural efficiency. We reasoned that this non-stationary feature relevance poses unique computational challenges that could trigger insight-like learning dynamics. In line with previous research, we show that about half of the human volunteers performing this task exhibited insight-like learning about newly relevant features. A simple linear neural network with three nodes, trained on the same task with baseline performance matched to humans and with regularised gate modulation on the two input nodes, exhibited abrupt learning dynamics resembling insight-like behaviour, despite its gradual learning rule and simple architecture. Finally, we show analytically that L1 regularisation of gain factors is a core mechanism behind insight-like learning in neural networks, with the frequency and delay of insights depending on the regularisation parameter lambda.
Our results suggest that insight phenomena can arise from regularised gradual learning mechanisms and shed light on learning dynamics and representation formation in intelligent agents more generally.

Unique ID: cosyne-22/insight-moments-neural-networks-humans-2ce3071e