Relational Learning
How Children Discover Mathematical Structure through Relational Mapping
A core question in human development is how we bring meaning to conventional symbols. This question is deeply connected to understanding how children learn mathematics—a symbol system with unique vocabularies, syntaxes, and written forms. In this talk, I will present findings from a program of research focused on children’s acquisition of place value symbols (i.e., multidigit number meanings). The base-10 symbol system presents a variety of obstacles to children, particularly in English. Children who cannot overcome these obstacles face years of struggle as they progress through the mathematics curriculum of the upper elementary and middle school grades. Through a combination of longitudinal, cross-sectional, and pretest-training-posttest approaches, I aim to illuminate relational learning mechanisms by which children sometimes succeed in mastering the place value system, as well as instructional techniques we might use to help those who do not.
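To make the target structure concrete, here is a minimal sketch (my illustration, not the speaker's materials) of the relational mapping a child must acquire: from a written multidigit numeral to the base-10 quantities its digits denote, a structure that English number words partly obscure (e.g., "fourteen" names the ones digit before the tens).

```python
# Illustrative only: the base-10 structure behind a multidigit numeral.
def place_value_parts(numeral: str) -> list[tuple[int, int]]:
    """'347' -> [(3, 100), (4, 10), (7, 1)]: each digit with its place value."""
    n = len(numeral)
    return [(int(d), 10 ** (n - 1 - i)) for i, d in enumerate(numeral)]

parts = place_value_parts("347")
print(parts)                                          # [(3, 100), (4, 10), (7, 1)]
print(sum(digit * place for digit, place in parts))   # 347
```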
Abstraction doesn't happen all at once (despite what some models of concept learning suggest)
In the past few years, evidence has grown that the basic ability for relational generalization starts in early infancy, with 3-month-olds seeming to learn relational abstractions with little training. Further, work with toddlers suggests that relational generalizations are no more difficult than object-based ones, and that toddlers can readily consider both simultaneously. Likewise, causal learning research with adults suggests that people infer causal relationships at multiple levels of abstraction simultaneously as they learn about novel causal systems. These findings appear to run counter to theories of concept learning which posit that concepts are concrete when first learned (tied to specific contexts and features) and that abstraction proceeds incrementally as learners encounter more examples. The current talk will not question the veracity of any of these findings, but will present several others, from my and others' research on relational learning, suggesting that when the perceptual or conceptual content becomes more complex, patterns of incremental abstraction re-emerge. Further, the specific contexts and task parameters that support or hinder abstraction reveal the underlying cognitive processes. I will then consider whether models that posit simultaneous, immediate learning at multiple levels of abstraction can accommodate these more complex patterns.
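As a hedged toy sketch of the kind of model at issue (my own construction, not a model from the talk), the following Bayesian learner scores hypotheses at several levels of abstraction simultaneously from the same positive examples; the size principle initially favors the most concrete consistent hypothesis, and the abstract ones take over as evidence accumulates. The hypothesis space and all names are illustrative.

```python
from itertools import product

shapes, colors = ("circle", "square"), ("red", "blue")
objects = list(product(shapes, colors))          # e.g. ('circle', 'red')
pairs = list(product(objects, objects))          # all 16 ordered pairs
RED_CIRCLE = ("circle", "red")

# Candidate concepts at several levels of abstraction, as sets of pairs.
hypotheses = {
    "same shape": {p for p in pairs if p[0][0] == p[1][0]},
    "same color": {p for p in pairs if p[0][1] == p[1][1]},
    "identical":  {p for p in pairs if p[0] == p[1]},
    "pair of red circles": {(RED_CIRCLE, RED_CIRCLE)},
}

def posterior(examples):
    """Size-principle likelihood: p(example | h) = 1/|h| if the example fits h."""
    scores = {}
    for name, extension in hypotheses.items():
        likelihood = 1.0
        for ex in examples:
            likelihood *= 1 / len(extension) if ex in extension else 0.0
        scores[name] = likelihood / len(hypotheses)      # uniform prior
    total = sum(scores.values())
    return {name: round(s / total, 3) for name, s in scores.items()}

one = [(RED_CIRCLE, RED_CIRCLE)]
print(posterior(one))   # the fully concrete hypothesis dominates at first
two = one + [(("square", "blue"), ("square", "blue"))]
print(posterior(two))   # 'identical' overtakes; the concrete reading is ruled out
```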
Infant Relational Learning: Interactions with Visual and Linguistic Factors
Humans are incredible learners, a talent supported by our ability to detect and transfer relational similarities between items and events. Spotting these common relations despite perceptual differences is challenging, yet there is evidence that this ability begins early, with infants as young as 3 months discriminating same from different (Anderson et al., 2018; Ferry et al., 2015). How? To understand the underlying mechanisms, I examine how learning outcomes in the first year correspond with changes in input and in infant age. I discuss the commonalities between this process and that seen in older children and adults, as well as differences arising from interactions with other maturing processes, such as language and visual attention.
Making neural nets simple enough to succeed at universal relational generalization
Traditional brain-style (connectionist) approaches have largely hit a wall when it comes to relational cognition. As an alternative to the well-known approaches of structured connectionism and deep learning, I present an engine for relational pattern recognition based on minimalist reinterpretations of the first principles of connectionism. I will discuss results of computational experiments on problems testing relational learning and universal generalization.
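As a generic illustration of why relational coding matters here (a toy sketch under my own assumptions, not the engine presented in the talk), the following compares an identity-bound pair encoding with a purely relational one on a same/different task; only the relational code generalizes to symbols never seen during training.

```python
import numpy as np

def onehot(i, n=8):
    v = np.zeros(n)
    v[i] = 1.0
    return v

def relational_features(a, b):
    # Encode only the *relation* between the items (their overlap),
    # discarding item identity entirely.
    return np.array([a @ b])

def identity_features(a, b):
    return np.concatenate([a, b])    # identity-bound encoding

def make_pairs(symbols):
    return [(onehot(i), onehot(j), float(i == j))
            for i in symbols for j in symbols]

def train_and_test(featurize):
    # Train on symbols 0-3, test on entirely novel symbols 4-7.
    train, test = make_pairs(range(4)), make_pairs(range(4, 8))
    X = np.array([featurize(a, b) for a, b, _ in train])
    y = np.array([label for _, _, label in train])
    w, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)
    Xt = np.array([featurize(a, b) for a, b, _ in test])
    yt = np.array([label for _, _, label in test])
    pred = np.c_[Xt, np.ones(len(Xt))] @ w > 0.5
    return (pred == (yt > 0.5)).mean()

# The relational code transfers perfectly to the novel symbols; the
# identity-bound code calls every novel pair "different".
print("relational:", train_and_test(relational_features))    # 1.0
print("identity-bound:", train_and_test(identity_features))  # 0.75
```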
Abstract Semantic Relations in Mind, Brain, and Machines
Abstract semantic relations (e.g., category membership, part-whole, antonymy, cause-effect) are central to human intelligence, underlying the distinctively human ability to reason by analogy. I will describe a computational project, Bayesian Analogy with Relational Transformations (BART), that aims to extract explicit representations of abstract semantic relations from non-relational inputs automatically generated by machine learning. BART's representations predict patterns of typicality and similarity for semantic relations, as well as similarity of neural signals triggered by semantic relations during analogical reasoning. In this approach, analogy emerges from the ability to learn and compare relations; mapping emerges later from the ability to compare patterns of relations.
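To convey the flavor of this pipeline (a simplified sketch under my own assumptions, not the BART implementation), the following trains a logistic classifier on difference vectors of simulated word embeddings; real distributional vectors such as Word2Vec would play the role of the simulated ones, and the learned weight vector serves as an explicit, reusable representation of the relation.

```python
import numpy as np

rng = np.random.default_rng(1)
DIM = 50
offset = rng.normal(size=DIM)      # latent direction encoding the relation

def related_pairs(n):
    """Pairs (a, b) with b ~ a + offset instantiate the relation."""
    a = rng.normal(size=(n, DIM))
    return a, a + offset + 0.1 * rng.normal(size=(n, DIM))

def unrelated_pairs(n):
    return rng.normal(size=(n, DIM)), rng.normal(size=(n, DIM))

# Each pair is represented by the difference of its two "word" vectors.
a_pos, b_pos = related_pairs(200)
a_neg, b_neg = unrelated_pairs(200)
X = np.vstack([b_pos - a_pos, b_neg - a_neg])
y = np.concatenate([np.ones(200), np.zeros(200)])

# Logistic regression by gradient descent; the weight vector w becomes an
# explicit representation of the relation itself.
w, b = np.zeros(DIM), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.1 * X.T @ (p - y) / len(y)
    b -= 0.1 * (p - y).mean()

# The learned relation vector generalizes to pairs it has never seen.
a_new, b_new = related_pairs(100)
p_new = 1 / (1 + np.exp(-((b_new - a_new) @ w + b)))
print("mean p(related) for novel related pairs:", round(p_new.mean(), 2))  # close to 1
```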
The role of hippocampal CA1 in relational learning in mice
COSYNE 2022
Neural mechanisms of relational learning and fast knowledge reassembly
COSYNE 2025