Human Learning

Topic spotlight · World Wide

Discover seminars, jobs, and research tagged with human learning across World Wide.

6 seminars · Updated 7 months ago
Seminar · Neuroscience

Richly structured reward predictions in dopaminergic learning circuits

Angela J. Langdon
National Institute of Mental Health, National Institutes of Health (NIH)
May 16, 2023

Theories from reinforcement learning have been highly influential for interpreting neural activity in the biological circuits critical for animal and human learning. Central among these is the identification of phasic activity in dopamine neurons as a reward prediction error signal that drives learning in basal ganglia and prefrontal circuits. However, recent findings suggest that dopaminergic prediction error signals have access to complex, structured reward predictions and are sensitive to more properties of outcomes than learning theories with simple scalar value predictions might suggest. Here, I will present recent work in which we probed the identity-specific structure of reward prediction errors in an odor-guided choice task and found evidence for multiple predictive “threads” that segregate reward predictions, and reward prediction errors, according to the specific sensory features of anticipated outcomes. Our results point to an expanded class of neural reinforcement learning algorithms in which biological agents learn rich associative structure from their environment and leverage it to build reward predictions that include information about the specific, and perhaps idiosyncratic, features of available outcomes. These structured predictions guide behavior even in quite simple reward learning tasks.
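The scalar-value picture this abstract pushes back against can be written in a few lines. Below is a minimal sketch of classic temporal-difference (TD) updating, not the richer identity-specific model presented in the talk; the learning rate and reward values are invented for illustration:

```python
# Minimal sketch of classic TD learning: a single scalar value estimate V
# is nudged by a reward prediction error. The talk argues that dopaminergic
# signals carry more structure than this one scalar.
def td_update(V, reward, alpha=0.1):
    """One TD step: move the value estimate toward the observed reward."""
    rpe = reward - V           # reward prediction error (the "dopamine signal")
    return V + alpha * rpe     # learning is driven entirely by the error

V = 0.0                        # initial value estimate for a predictive cue
for _ in range(100):           # repeated pairings of cue and reward
    V = td_update(V, reward=1.0)
# V approaches the expected reward, so the prediction error shrinks to zero
```

In an identity-specific account like the one described above, the single V would be replaced by separate predictions for each sensory feature of the anticipated outcome, each with its own error signal.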

Seminar · Neuroscience

Computation noise in human learning and decision-making: origin, impact, function

Valentin Wyart
École Normale Supérieure, Paris, France
May 30, 2021
Seminar · Neuroscience · Recording

Structure-mapping in Human Learning

Dedre Gentner
Northwestern University
Apr 1, 2021

Across species, humans are uniquely able to acquire deep relational systems of the kind needed for mathematics, science, and human language. Analogical comparison processes are a major contributor to this ability. Analogical comparison engages a structure-mapping process (Gentner, 1983) that fosters learning in at least three ways: first, it highlights common relational systems and thereby promotes abstraction; second, it promotes inferences from known situations to less familiar situations; and, third, it reveals potentially important differences between examples. In short, structure-mapping is a domain-general learning process by which abstract, portable knowledge can arise from experience. It is operative from early infancy on, and is critical to the rapid learning we see in human children. Although structure-mapping processes are present pre-linguistically, their scope is greatly amplified by language. Analogical processes are instrumental in learning relational language, and the reverse is also true: relational language acts to preserve relational abstractions and render them accessible for future learning and reasoning.

Seminar · Neuroscience

Generalization guided exploration

Charley Wu
Max Planck
Dec 15, 2020

How do people learn in real-world environments where the space of possible actions can be vast or even infinite? The study of human learning has made rapid progress in past decades, from discovering the neural substrate of reward prediction errors, to building AI capable of mastering the game of Go. Yet this line of research has primarily focused on learning through repeated interactions with the same stimuli. How are humans able to rapidly adapt to novel situations and learn from such sparse examples? I propose a theory of how generalization guides human learning, by making predictions about which unobserved options are most promising to explore. Inspired by Roger Shepard’s law of generalization, I show how a Bayesian function learning model provides a mechanism for generalizing limited experiences to a wide set of novel possibilities, based on the simple principle that similar actions produce similar outcomes. This model of generalization generates predictions about the expected reward and underlying uncertainty of unexplored options, where both are vital components in how people actively explore the world. This model allows us to explain developmental differences in the explorative behavior of children, and suggests a general principle of learning across spatial, conceptual, and structured domains.
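The principle that "similar actions produce similar outcomes" can be illustrated with a short sketch. This is my own toy example, not the speaker's model: Shepard-style exponentially decaying similarity generalizes a handful of observed rewards to unexplored options, and an upper-confidence-bound rule trades expected reward against uncertainty. All option values, rewards, and parameters are invented:

```python
import math

def shepard_kernel(x1, x2, length=1.0):
    """Shepard's law: generalization decays exponentially with distance."""
    return math.exp(-abs(x1 - x2) / length)

def predict(x, observations):
    """Kernel-weighted mean reward and a crude uncertainty estimate at x."""
    weights = [shepard_kernel(x, xi) for xi, _ in observations]
    total = sum(weights)
    mean = sum(w * r for w, (_, r) in zip(weights, observations)) / total
    uncertainty = 1.0 - max(weights)  # far from all data = highly uncertain
    return mean, uncertainty

def ucb_choice(options, observations, beta=0.5):
    """Pick the option maximizing expected reward plus an uncertainty bonus."""
    scores = {x: m + beta * u
              for x in options
              for m, u in [predict(x, observations)]}
    return max(scores, key=scores.get)

obs = [(0.0, 0.2), (5.0, 1.0)]            # (option, observed reward) pairs
choice = ucb_choice([1.0, 4.0, 9.0], obs)  # score three unexplored options
```

The choice rule favors options near the high-reward observation, with an extra bonus for regions the learner has never sampled, which is the sense in which generalization guides exploration here.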