Authors & Affiliations
Charlotte Volk, Christopher C. Pack, Shahab Bakhtiari
Abstract
Practicing a visual task leads to long-lasting perceptual improvements in humans and other animals. However, the generalization of visual learning (VL) to unseen conditions varies across tasks. In particular, learning ‘easy’ visual tasks has been shown to generalize better than learning ‘hard’ ones. Moreover, experimental evidence suggests that greater generalizability in VL can be achieved through sequential curriculum training, i.e., training on ‘easier’ versions of a task before progressing to ‘harder’ versions. Yet a neurocomputational explanation for this curriculum learning phenomenon remains elusive. In this study, we leveraged an artificial neural network (ANN) model of VL to gain insight into the variability of generalization in VL. Our findings indicate that the subspace of visual representations that influences the model's behavior, known as the readout subspace, plays a pivotal role in generalization: training on easier tasks results in a lower-dimensional readout subspace and yields enhanced generalization. Furthermore, within the sequential curriculum learning paradigm, harder tasks can ‘piggyback’ on the low-dimensional subspace established through learning an initial, easier task, thereby enhancing their generalization. Finally, we find that when ‘easy’ and ‘hard’ tasks are interleaved in random order, an implicit curriculum emerges during training, though it is less sample-efficient than an explicit sequential curriculum.
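To make the sequential-curriculum idea concrete, the sketch below sets up a toy version of the paradigm: a two-alternative orientation-discrimination task whose difficulty is controlled by the angular offset between the two classes, with a linear readout trained either directly on the hard task or first on an easy version and then fine-tuned on the hard one. This is purely illustrative and is not the authors' ANN model; the task construction, the logistic-regression readout, and all function names (`make_task`, `train_readout`, `accuracy`) are assumptions introduced for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(delta_deg, n=500, noise=0.3):
    """Toy orientation-discrimination task (an assumed stand-in for the
    paper's visual tasks): classify whether a noisy 2-D 'orientation'
    vector is tilted +delta or -delta from vertical.
    Smaller delta_deg -> harder task."""
    labels = rng.integers(0, 2, n)                       # 0: -delta, 1: +delta
    angles = np.deg2rad(np.where(labels == 1, delta_deg, -delta_deg))
    x = np.stack([np.sin(angles), np.cos(angles)], axis=1)
    x += noise * rng.normal(size=x.shape)                # sensory noise
    return x, labels

def train_readout(w, x, y, lr=0.5, epochs=200):
    """Logistic-regression readout trained by plain gradient descent."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(x @ w)))               # sigmoid output
        w = w - lr * x.T @ (p - y) / len(y)              # logistic-loss gradient
    return w

def accuracy(w, x, y):
    return float((((x @ w) > 0).astype(int) == y).mean())

easy, hard = 20.0, 2.0                                   # angular offsets in degrees

# Baseline: train the readout on the hard task from scratch.
w_scratch = train_readout(np.zeros(2), *make_task(hard))

# Sequential curriculum: train on the easy task first, then
# fine-tune the same readout weights on the hard task.
w_curr = train_readout(np.zeros(2), *make_task(easy))
w_curr = train_readout(w_curr, *make_task(hard))

x_test, y_test = make_task(hard, n=2000)
print("scratch:", accuracy(w_scratch, x_test, y_test))
print("curriculum:", accuracy(w_curr, x_test, y_test))
```

The key design point mirrors the abstract: in the curriculum condition the hard task starts from (and can ‘piggyback’ on) the readout weights shaped by the easy task, rather than from a random or zero initialization.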