Latest

Seminar · Neuroscience · Recording

Children’s inference of verb meanings: Inductive, analogical and abductive inference

Mutsumi Imai
Keio University
May 18, 2022

Children need inference in order to learn the meanings of words. They must infer a word's referent from the situation in which the word is said, and, to be able to use the word in other situations, they must also infer which other referents the word can be generalized to. Because verbs refer to relations between arguments, verb learning requires relational analogical inference, which is challenging for young children. To overcome this difficulty, young children recruit a diverse range of cues in their inference of verb meanings, including, but not limited to, syntactic, social, pragmatic, and statistical cues. They also utilize perceptual similarity (object similarity) in progressive alignment to extract relational verb meanings and to gain further insight into them. However, merely having a list of these cues is not enough: the cues must be selected, combined, and coordinated to produce the optimal interpretation in a particular context. This process involves abductive reasoning, similar to what scientists do when forming hypotheses from a range of facts or evidence. In this talk, I discuss how children use a chain of inferences to learn the meanings of verbs. I consider not only the process of analogical mapping and progressive alignment, but also how children use abductive inference to find the source of an analogy and to gain insight into the general principles underlying verb learning. I also present recent findings from my laboratory showing that prelinguistic human infants use a rudimentary form of abductive reasoning, which enables the first step of word learning.

Seminar · Neuroscience · Recording

Zero-shot visual reasoning with probabilistic analogical mapping

Taylor Webb
UCLA
Jul 1, 2021

There has been a recent surge of interest in whether and how deep learning algorithms might be capable of abstract reasoning, much of it centered on datasets based on Raven's Progressive Matrices (RPM), a visual analogy problem set commonly employed to assess fluid intelligence. This has led to the development of algorithms that can solve RPM-like problems directly from pixel-level inputs. However, these algorithms require extensive direct training on analogy problems and typically generalize poorly to novel problem types. This is in stark contrast to human reasoners, who can solve RPM and other analogy problems zero-shot, that is, with no direct training on those problems. Indeed, it is this capacity for zero-shot reasoning about novel problem types, i.e. fluid intelligence, that RPM was originally designed to measure. I will present results from our recent efforts to model this capacity for zero-shot reasoning, based on an extension of a recently proposed approach to analogical mapping that we refer to as Probabilistic Analogical Mapping (PAM). Our RPM model uses deep learning to extract attributed graph representations from pixel-level inputs, and then aligns objects between source and target analogs using gradient descent to optimize a graph-matching objective. This extended version of PAM features a number of new capabilities that underscore the flexibility of the overall approach, including 1) the capacity to discover solutions that emphasize either object similarity or relation similarity, depending on the demands of a given problem, 2) the ability to extract a schema representing the overall abstract pattern that characterizes a problem, and 3) the ability to directly infer the answer to a problem rather than relying on a set of possible answer choices. This work suggests that PAM is a promising framework for modeling human zero-shot reasoning.
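The alignment step described in this abstract, matching objects across analogs by gradient-based optimization of a graph-matching objective, can be sketched in a few lines. The sketch below is an illustrative reconstruction, not the authors' implementation: the quadratic-assignment objective, the Sinkhorn projection, and the function names (pam_align, sinkhorn) are all assumptions made for the example.

```python
import numpy as np

def sinkhorn(P, n_iter=20):
    # Alternately normalize rows and columns so that P stays close to the
    # set of doubly stochastic (soft one-to-one) correspondence matrices.
    for _ in range(n_iter):
        P = P / P.sum(axis=1, keepdims=True)
        P = P / P.sum(axis=0, keepdims=True)
    return P

def pam_align(S, A_src, A_tgt, lam=1.0, lr=0.1, steps=200):
    # Hypothetical sketch of gradient-based graph matching (not PAM's code).
    # S     : object (node) similarity matrix, n_src x n_tgt
    # A_src : symmetric relation matrix among source objects
    # A_tgt : symmetric relation matrix among target objects
    # lam   : weight on relation similarity vs. object similarity
    # Maximizes <P, S> + lam * trace(A_src @ P @ A_tgt @ P.T) by
    # multiplicative gradient ascent, re-projecting with Sinkhorn each step.
    n, m = S.shape
    P = np.full((n, m), 1.0 / m)                  # start from a uniform soft mapping
    for _ in range(steps):
        grad = S + 2.0 * lam * A_src @ P @ A_tgt  # gradient of the objective in P
        P = sinkhorn(P * np.exp(lr * grad))       # ascent step, then re-project
    return P

# Toy usage: two three-object scenes sharing a chain relation a-b-c.
S = np.eye(3)                                     # each object most like itself
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
P = pam_align(S, A, A)
print(P.round(2))                                 # converges toward the identity mapping
```

In this toy run the soft correspondence P settles near the identity mapping because the object-similarity term S and the shared relational structure A agree; raising lam relative to S lets relation similarity dominate instead, loosely mirroring the object-versus-relation trade-off the abstract describes.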

Seminar · Neuroscience · Recording

Is Rule Learning Like Analogy?

Stella Christie
Tsinghua University
Jul 16, 2020

Humans’ ability to perceive and abstract relational structure is fundamental to our learning. It allows us to acquire knowledge ranging from linguistic grammar to spatial layouts to social structures. How does a learner begin to perceive structure in the world? Why do we sometimes fail to see structural commonalities across events? To begin to answer these questions, I attempt to bridge two large, yet somewhat separate, research traditions in understanding humans’ structural abstraction: rule learning (Marcus et al., 1999) and analogical learning (Gentner, 1989). On the one hand, rule learning research has shown the domain-general ability and ease with which humans, as early as 7 months of age, abstract structure from limited experience. On the other hand, analogical learning research has shown robust constraints on structural abstraction: young learners prefer object similarity over relational similarity. To understand this seeming paradox between ease and difficulty, we conducted a series of studies using the classic rule learning paradigm (Marcus et al., 1999) with an analogical (object vs. relation) twist. Adults were presented with two minutes of sentences or visual events (syllables or shapes) instantiating a rule. At test, they had to choose between a rule abstraction and an object match, the same syllable or shape they had seen before. Surprisingly, while adults were perfectly capable of abstracting the rule in the absence of object matches, their ability to do so declined sharply when object matches were present. Our initial results suggest that rule learning may be subject to the usual constraints and signatures of analogical learning: a preference for object similarity can dampen rule generalization. Human abstraction, it seems, is concrete at the same time.
