Schedule
Friday, April 21, 2023
2:30 AM America/New_York
Domain
Neuroscience
Host
NYU Swartz
Duration
70 minutes
Young children have sophisticated representations of their visual and linguistic environment. Where do these representations come from? How much knowledge arises through generic learning mechanisms applied to sensory data, and how much requires more substantive (possibly innate) inductive biases? We examine these questions by training neural networks solely on longitudinal data collected from a single child (Sullivan et al., 2020), consisting of egocentric video and audio streams. Our principal findings are as follows: 1) From visual-only training, neural networks can acquire high-level visual features that are broadly useful across categorization and segmentation tasks. 2) From language-only training, networks can acquire meaningful clusters of words and sentence-level syntactic sensitivity. 3) From paired visual and language training, networks can acquire word-referent mappings from tens of noisy examples and align their multimodal conceptual systems. Taken together, our results show how sophisticated visual and linguistic representations can arise through data-driven learning applied to one child's first-person experience.
Brenden Lake
Prof.
NYU