Abstract Semantic Relations
Abstract Semantic Relations in Mind, Brain, and Machines
Abstract semantic relations (e.g., category membership, part-whole, antonymy, cause-effect) are central to human intelligence, underlying the distinctively human ability to reason by analogy. I will describe a computational project, Bayesian Analogy with Relational Transformations (BART), that aims to extract explicit representations of abstract semantic relations from non-relational inputs automatically generated by machine learning. BART’s representations predict patterns of typicality and similarity for semantic relations, as well as the similarity of neural signals triggered by semantic relations during analogical reasoning. In this approach, analogy emerges from the ability to learn and compare relations; mapping emerges later from the ability to compare patterns of relations.
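As a rough illustration of the general idea, not BART’s actual algorithm, the minimal sketch below derives an explicit relation representation for a word pair from non-relational word vectors and compares two pairs by the similarity of their relation vectors. The tiny embeddings, random weights, and function names are all hypothetical placeholders.

```python
import numpy as np

# Toy 4-dimensional stand-ins for word embeddings; a real system would use
# high-dimensional vectors produced by machine learning (e.g., Word2Vec).
EMB = {
    "hot":    np.array([ 0.9, -0.8,  0.1,  0.0]),
    "cold":   np.array([-0.9,  0.8,  0.1,  0.0]),
    "wet":    np.array([ 0.2, -0.7, -0.9,  0.1]),
    "dry":    np.array([-0.2,  0.7, -0.9,  0.1]),
    "hand":   np.array([ 0.5,  0.3,  0.2, -0.6]),
    "finger": np.array([ 0.4,  0.2,  0.3, -0.7]),
}

def pair_features(a, b):
    """Non-relational input for a word pair: the two word vectors concatenated."""
    return np.concatenate([EMB[a], EMB[b]])

def relation_vector(pair, W):
    """Explicit relation representation: one score per candidate relation.
    Each row of W would hold learned weights for one abstract relation
    (e.g., contrast, part-whole); here the weights are random placeholders."""
    logits = W @ pair_features(*pair)
    return 1.0 / (1.0 + np.exp(-logits))

def relational_similarity(p1, p2, W):
    """Analogy as cosine similarity between explicit relation vectors."""
    r1, r2 = relation_vector(p1, W), relation_vector(p2, W)
    return float(r1 @ r2 / (np.linalg.norm(r1) * np.linalg.norm(r2)))

W = np.random.default_rng(0).normal(size=(2, 8))  # 2 stand-in relations x 8 pair features
print(relational_similarity(("hot", "cold"), ("wet", "dry"), W))     # contrast vs. contrast
print(relational_similarity(("hot", "cold"), ("hand", "finger"), W)) # contrast vs. part-whole
```

With trained rather than random relation weights, relationally matched pairs would be expected to yield higher scores than mismatched ones; the sketch only shows the representational format.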
Predicting Patterns of Similarity Among Abstract Semantic Relations
In this talk, I will present data showing that people’s similarity judgments among word pairs reflect distinctions between abstract semantic relations such as contrast, cause-effect, and part-whole. Further, the extent to which individual participants’ similarity judgments discriminated between abstract semantic relations was linearly associated with both fluid and crystallized verbal intelligence, albeit more strongly with fluid intelligence. Finally, I will compare three models on their ability to predict these similarity judgments. All models take as input vector representations of individual word meanings, but they differ in how they represent relations: one model does not represent relations at all, a second represents relations implicitly, and a third represents relations explicitly. The model with explicit relation representations was the best predictor of human similarity judgments, suggesting that explicit relation representations are needed to fully account for human semantic cognition.
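The three kinds of models can be caricatured as follows. This is a hypothetical sketch rather than the models evaluated in the talk: the use of cosine similarity, difference vectors, and a tiny relation classifier are illustrative assumptions, and all names are made up.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def sim_no_relation(p1, p2, emb):
    """No relation representation: average similarity of corresponding words."""
    (a, b), (c, d) = p1, p2
    return 0.5 * (cosine(emb[a], emb[c]) + cosine(emb[b], emb[d]))

def sim_implicit_relation(p1, p2, emb):
    """Implicit relation representation: compare within-pair difference vectors."""
    (a, b), (c, d) = p1, p2
    return cosine(emb[b] - emb[a], emb[d] - emb[c])

def sim_explicit_relation(p1, p2, emb, W):
    """Explicit relation representation: compare vectors of relation-classifier
    scores (one score per learned relation), as in the earlier sketch."""
    def rel_vec(a, b):
        logits = W @ np.concatenate([emb[a], emb[b]])
        return 1.0 / (1.0 + np.exp(-logits))
    (a, b), (c, d) = p1, p2
    return cosine(rel_vec(a, b), rel_vec(c, d))

# Example with random placeholder embeddings and relation weights.
rng = np.random.default_rng(1)
emb = {w: rng.normal(size=50) for w in ["hot", "cold", "wet", "dry"]}
W = rng.normal(size=(10, 100))  # 10 stand-in relations x 100 pair features
p1, p2 = ("hot", "cold"), ("wet", "dry")
print(sim_no_relation(p1, p2, emb))
print(sim_implicit_relation(p1, p2, emb))
print(sim_explicit_relation(p1, p2, emb, W))
```

Each model outputs a predicted similarity for a pair of word pairs; only the third model grounds that prediction in an explicit representation of the relation holding within each pair.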