Human intelligence
Zoran Tiganj
The College of Arts and Sciences and the Luddy School of Informatics, Computing, and Engineering at Indiana University Bloomington invite applications for three tenured Associate Professor positions in one or more of the following areas: human intelligence, artificial intelligence, and machine learning, to begin in Fall 2024 or after. Appointments will be in one or more departments, including Cognitive Science, Computer Science, Informatics, and Psychological and Brain Sciences. The positions are part of a new initiative that aims to transform our understanding of human and artificial intelligence, centered around the new Mind Brain Machine Quadrangle and the Luddy Artificial Intelligence Center. IU has long been an international leader in research on cognition across humans, animals, and artificial systems, and on how intelligence manifests in embodied cognition. These hires build on existing strengths to position IU at the forefront of innovation in our understanding of human and animal cognition, the development of intelligent computing technologies, and the application of machine learning to a wide range of phenomena.
Zoran Tiganj, PhD
The College of Arts and Sciences and the Luddy School of Informatics, Computing, and Engineering at Indiana University Bloomington invite applications for multiple open-rank, tenured or tenure-track faculty positions in one or more of the following areas: artificial intelligence, human intelligence, and machine learning to begin in Fall 2025 or after. Appointments will be in one or more departments, including Cognitive Science, Computer Science, Informatics, Intelligent Systems Engineering, Mathematics, and Psychological and Brain Sciences. We encourage applications from scholars who apply interdisciplinary perspectives across these fields to a variety of domains, including cognitive science, computational social sciences, computer vision, education, engineering, healthcare, mathematics, natural language processing, neuroscience, psychology, robotics, virtual reality, and beyond. Reflecting IU’s strong tradition of interdisciplinary research, we encourage diverse perspectives and innovative research that may intersect with or extend beyond these areas. The positions are part of a new university-wide initiative that aims to transform our understanding of human and artificial intelligence, involving multiple departments and schools, as well as the new Luddy Artificial Intelligence Center.
Do large language models solve verbal analogies like children do?
Analogical reasoning – learning about new things by relating them to previous knowledge – lies at the heart of human intelligence and creativity and forms the core of educational practice. Children start creating and using analogies early on, making remarkable progress from associative processes to successful analogical reasoning. For example, if we ask a four-year-old “Horse belongs to stable like chicken belongs to …?” they may use association and reply “egg”, whereas older children will likely give the intended relational response “chicken coop” (or another term for a chicken’s home). Interestingly, despite state-of-the-art AI language models having superhuman encyclopedic knowledge and superior memory and computational power, our pilot studies show that these large language models often make mistakes, providing associative rather than relational responses to verbal analogies. For example, when we asked four- to eight-year-olds to solve the analogy “body is to feet as tree is to …?” they responded “roots” without hesitation, but large language models tend to provide more associative responses such as “leaves”. In this study we examine the similarities and differences between the responses of children and of several Dutch/multilingual language models (RobBERT, BERTje, mBERT, GPT-2, mGPT, Word2Vec, and FastText) to verbal analogies extracted from an online adaptive learning environment, in which more than 14,000 7- to 12-year-olds from the Netherlands each solved 20 or more items from a database of 900 Dutch verbal analogies.
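The associative-versus-relational contrast above can be illustrated with the vector-arithmetic analogy method commonly used with word embeddings such as Word2Vec. The sketch below is minimal and hypothetical: it uses made-up 3-dimensional vectors, whereas real embeddings are learned from corpora and have hundreds of dimensions.

```python
import math

# Toy word vectors standing in for Word2Vec-style embeddings
# (hypothetical values chosen for illustration only).
vectors = {
    "horse":   [0.9, 0.1, 0.2],
    "stable":  [0.8, 0.2, 0.7],
    "chicken": [0.7, 0.6, 0.1],
    "coop":    [0.6, 0.7, 0.6],
    "egg":     [0.9, 0.8, 0.0],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def solve_analogy(a, b, c, vocab):
    """3CosAdd: 'a is to b as c is to ?' -> the word whose vector
    is most similar to b - a + c, excluding the cue words."""
    target = [vb - va + vc
              for va, vb, vc in zip(vocab[a], vocab[b], vocab[c])]
    candidates = {w: v for w, v in vocab.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: cosine(candidates[w], target))

print(solve_analogy("horse", "stable", "chicken", vectors))  # → "coop"
```

With these toy vectors the method lands on the relational answer ("coop") rather than the associative one ("egg"); whether a real embedding model does the same is exactly the empirical question the study asks.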
Implementing structure mapping as a prior in deep learning models for abstract reasoning
Building conceptual abstractions from sensory information and then reasoning about them is central to human intelligence. Abstract reasoning both relies on, and is facilitated by, our ability to make analogies about concepts from known domains to novel domains. The Structure Mapping Theory of human analogical reasoning posits that analogical mappings rely on (higher-order) relations and not on the sensory content of the domain. This enables humans to reason systematically about novel domains, a problem with which machine learning (ML) models tend to struggle. We introduce a two-stage neural network framework, which we label Neural Structure Mapping (NSM), to learn visual analogies from Raven's Progressive Matrices, an abstract visual reasoning test of fluid intelligence. Our framework uses (1) a multi-task visual relationship encoder to extract constituent concepts from raw visual input in the source domain, and (2) a neural module network analogy inference engine to reason compositionally about the inferred relation in the target domain. Our NSM approach (a) isolates the relational structure from the source domain with high accuracy, and (b) successfully utilizes this structure for analogical reasoning in the target domain.
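The two-stage division of labor can be caricatured with a toy, non-neural example (hypothetical; NSM itself learns both stages from raw pixels): stage 1 extracts the relation governing a source row, and stage 2 applies that relation to complete a partial target row.

```python
# Toy sketch of the two-stage idea on numeric sequences
# (hypothetical stand-in for rows of a Raven's-style matrix).

def extract_relation(row):
    """Stage 1 (relation encoder): infer the rule governing a
    source sequence; here only arithmetic progressions with a
    fixed step are considered (step 0 = constant row)."""
    steps = {b - a for a, b in zip(row, row[1:])}
    if len(steps) == 1:
        return ("progression", steps.pop())
    raise ValueError("no single relation fits this row")

def apply_relation(relation, prefix):
    """Stage 2 (analogy inference): use the source-domain relation
    to fill in the missing final cell of a target sequence."""
    _, step = relation
    return prefix[-1] + step

source = [1, 3, 5]          # complete source-domain row
target_prefix = [10, 12]    # target row with the last cell missing
rule = extract_relation(source)
print(apply_relation(rule, target_prefix))  # → 14
```

The point of the caricature is the separation itself: the relation ("add 2") is represented explicitly and independently of the content it was extracted from, so it can be reused in a new domain, which is the systematicity that the abstract says ML models tend to lack.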
Cross Domain Generalisation in Humans and Machines
Recent advances in deep learning have produced models that far outstrip human performance in a number of domains. However, machine learning approaches still fall far short of human-level performance in their capacity to transfer knowledge across domains. While a human learner will happily apply knowledge acquired in one domain (e.g., mathematics) to a different domain (e.g., cooking; a vinaigrette is really just a ratio between edible fat and acid), machine learning models still struggle profoundly at such tasks. I will present a case that human intelligence might be (at least partially) usefully characterised by our ability to transfer knowledge widely, and a framework that we have developed for learning representations that support such transfer. The model is compared to current machine learning approaches.
Abstract Semantic Relations in Mind, Brain, and Machines
Abstract semantic relations (e.g., category membership, part-whole, antonymy, cause-effect) are central to human intelligence, underlying the distinctively human ability to reason by analogy. I will describe a computational project, Bayesian Analogy with Relational Transformations (BART), that aims to extract explicit representations of abstract semantic relations from non-relational inputs automatically generated by machine learning. BART’s representations predict patterns of typicality and similarity for semantic relations, as well as similarity of neural signals triggered by semantic relations during analogical reasoning. In this approach, analogy emerges from the ability to learn and compare relations; mapping emerges later from the ability to compare patterns of relations.
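One way to see how analogy can emerge from learning and comparing relations is to represent each word pair's relation as a vector and score an analogy by how similar the two relation vectors are. The sketch below uses hypothetical hand-set features; BART itself learns relation representations from distributional word vectors rather than taking simple differences.

```python
# Toy sketch of relation-based analogy scoring
# (hypothetical feature values, e.g., size, animacy, containment).

features = {
    "body":   [1.0, 1.0, 0.0],
    "feet":   [0.3, 1.0, 0.0],
    "tree":   [1.0, 0.0, 0.0],
    "roots":  [0.3, 0.0, 0.0],
    "leaves": [0.5, 0.0, 0.0],
}

def relation_vector(pair, feats):
    """Represent a word pair's relation as the elementwise
    difference of the two words' feature vectors."""
    a, b = pair
    return [x - y for x, y in zip(feats[a], feats[b])]

def relational_similarity(pair1, pair2, feats):
    """Analogy strength = negative squared distance between the
    two pairs' relation vectors (higher is more analogous)."""
    r1 = relation_vector(pair1, feats)
    r2 = relation_vector(pair2, feats)
    return -sum((x - y) ** 2 for x, y in zip(r1, r2))

# The relational completion ("roots") matches the source relation
# body:feet better than the associative one ("leaves").
s_roots = relational_similarity(("body", "feet"), ("tree", "roots"), features)
s_leaves = relational_similarity(("body", "feet"), ("tree", "leaves"), features)
print(s_roots > s_leaves)  # → True
```

Mapping, in this picture, would then operate one level up: comparing whole patterns of such relation vectors across two domains rather than a single pair.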
A Reservoir Model of Explicit Human Intelligence
Neuromatch 5