Topic spotlight

fluid intelligence

Discover seminars, jobs, and research tagged with fluid intelligence across World Wide.
4 curated items · 4 seminars
Updated over 2 years ago
Seminar · Psychology

Brain and Behavior: Employing Frequency Tagging as a Tool for Measuring Cognitive Abilities

Stefanie Peykarjou
University of Heidelberg
May 23, 2023

Frequency tagging based on fast periodic visual stimulation (FPVS) provides a window into ongoing visual and cognitive processing and can be leveraged to measure rule learning and high-level categorization. In this talk, I will present data demonstrating highly proficient categorization of objects as living or non-living in preschool children, and characterize the development of this ability during infancy. In addition to associating cognitive functions with development, an intriguing question is whether frequency tagging also captures enduring individual differences, e.g., in general cognitive abilities. Initial studies indicate high psychometric quality of FPVS categorization responses (Xu et al.; Dzhelyova), providing a basis for research on individual differences. I will present results from a pilot study demonstrating high correlations between FPVS categorization responses and behavioral measures of processing speed and fluid intelligence. Drawing on this first evidence, I will discuss the potential of frequency tagging for diagnosing cognitive functions across development.
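
To make the measurement concrete: in a frequency-tagging design, responses locked to the stimulation frequency are read out from the EEG amplitude spectrum and compared against neighboring frequency bins. The Python sketch below illustrates this generic quantification (a signal-to-noise ratio at the tagged bin) on synthetic single-channel data; the function name, parameters, and setup are illustrative assumptions, not the speaker's analysis pipeline.

import numpy as np

def fpvs_snr(eeg, sfreq, tag_freq, n_neighbors=10):
    # Amplitude spectrum of a single EEG channel.
    spectrum = np.abs(np.fft.rfft(eeg)) / len(eeg)
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / sfreq)
    # Frequency bin closest to the tagging (stimulation) frequency.
    target = int(np.argmin(np.abs(freqs - tag_freq)))
    # Surrounding bins used as the noise estimate, skipping the bins
    # immediately adjacent to the target.
    noise = np.r_[target - n_neighbors:target - 1, target + 2:target + n_neighbors + 1]
    return spectrum[target] / spectrum[noise].mean()

# Synthetic illustration: 20 s at 250 Hz with a 1.2 Hz "tagged" response added to noise.
sfreq, tag_freq = 250.0, 1.2
t = np.arange(0, 20, 1 / sfreq)
eeg = 0.5 * np.sin(2 * np.pi * tag_freq * t) + np.random.randn(t.size)
print(fpvs_snr(eeg, sfreq, tag_freq))  # values well above 1 indicate a tagged response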

Seminar · Neuroscience · Recording

Zero-shot visual reasoning with probabilistic analogical mapping

Taylor Webb
UCLA
Jun 30, 2021

There has been a recent surge of interest in the question of whether and how deep learning algorithms might be capable of abstract reasoning, much of which has centered around datasets based on Raven’s Progressive Matrices (RPM), a visual analogy problem set commonly employed to assess fluid intelligence. This has led to the development of algorithms that are capable of solving RPM-like problems directly from pixel-level inputs. However, these algorithms require extensive direct training on analogy problems, and typically generalize poorly to novel problem types. This is in stark contrast to human reasoners, who are capable of solving RPM and other analogy problems zero-shot — that is, with no direct training on those problems. Indeed, it’s this capacity for zero-shot reasoning about novel problem types, i.e. fluid intelligence, that RPM was originally designed to measure. I will present some results from our recent efforts to model this capacity for zero-shot reasoning, based on an extension of a recently proposed approach to analogical mapping we refer to as Probabilistic Analogical Mapping (PAM). Our RPM model uses deep learning to extract attributed graph representations from pixel-level inputs, and then performs alignment of objects between source and target analogs using gradient descent to optimize a graph-matching objective. This extended version of PAM features a number of new capabilities that underscore the flexibility of the overall approach, including 1) the capacity to discover solutions that emphasize either object similarity or relation similarity, based on the demands of a given problem, 2) the ability to extract a schema representing the overall abstract pattern that characterizes a problem, and 3) the ability to directly infer the answer to a problem, rather than relying on a set of possible answer choices. This work suggests that PAM is a promising framework for modeling human zero-shot reasoning.
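
For readers unfamiliar with graph-matching objectives of the kind mentioned above, the toy Python sketch below optimizes a soft assignment between source and target objects by gradient ascent on a combination of attribute similarity and relational consistency. It is a generic illustration under simplifying assumptions (row-wise softmax assignments, fixed node attributes), not the speaker's PAM implementation; all names and hyperparameters are invented for the example.

import numpy as np

def soft_graph_match(A_src, A_tgt, X_src, X_tgt, steps=500, lr=0.05, beta=1.0):
    # A_src, A_tgt: (n, n) pairwise relation matrices for the two analogs.
    # X_src, X_tgt: (n, d) attribute vectors for the objects in each analog.
    n = A_src.shape[0]
    node_sim = X_src @ X_tgt.T                 # object-attribute similarity term
    logits = np.zeros((n, n))
    for _ in range(steps):
        # Row-wise softmax turns unconstrained logits into a soft assignment P.
        z = beta * (logits - logits.max(axis=1, keepdims=True))
        P = np.exp(z)
        P /= P.sum(axis=1, keepdims=True)
        # Objective: sum(node_sim * P) + tr(A_src @ P @ A_tgt.T @ P.T)
        grad_P = node_sim + A_src.T @ P @ A_tgt + A_src @ P @ A_tgt.T
        # Backpropagate through the row-wise softmax.
        grad_logits = beta * P * (grad_P - (P * grad_P).sum(axis=1, keepdims=True))
        logits += lr * grad_logits
    return P                                   # soft object-to-object mapping

# Matching an analog to itself should recover an (approximately) identity mapping.
rng = np.random.default_rng(0)
X = rng.standard_normal((3, 4))
A = rng.standard_normal((3, 3))
print(soft_graph_match(A, A, X, X).round(2))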

Seminar · Neuroscience · Recording

Predicting Patterns of Similarity Among Abstract Semantic Relations

Nick Ichien
UCLA
Jul 8, 2020

In this talk, I will present data showing that people’s similarity judgments among word pairs reflect distinctions between abstract semantic relations such as contrast, cause-effect, or part-whole. Further, the extent to which individual participants’ similarity judgments discriminated between abstract semantic relations was linearly associated with both fluid and crystallized verbal intelligence, albeit more strongly with fluid intelligence. Finally, I will compare three models according to their ability to predict these similarity judgments. All models take as input vector representations of individual word meanings, but they differ in how they represent relations: one model does not represent relations at all, a second represents relations implicitly, and a third represents relations explicitly. Of the three, the model with explicit relation representations was the best predictor of human similarity judgments, suggesting that explicit relation representation is needed to fully account for human semantic cognition.
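
As a concrete illustration of the first two modelling choices, the Python sketch below scores word-pair similarity either without any relation representation (averaging word-to-word similarities) or with an implicit relation representation (comparing embedding difference vectors); an explicit-relation model would substitute a trained relation encoder for the difference vector. This is an illustrative assumption about how such models can be set up, not the models compared in the talk.

import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def pair_similarity(pair1, pair2, emb, mode="implicit"):
    # pair1, pair2: (word_a, word_b) tuples; emb: dict mapping words to vectors.
    a1, b1 = emb[pair1[0]], emb[pair1[1]]
    a2, b2 = emb[pair2[0]], emb[pair2[1]]
    if mode == "none":
        # No relation representation: average the word-to-word similarities.
        return 0.5 * (cosine(a1, a2) + cosine(b1, b2))
    if mode == "implicit":
        # Implicit relation representation: compare the difference vectors b - a.
        return cosine(b1 - a1, b2 - a2)
    raise ValueError("mode must be 'none' or 'implicit'")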