Abstract Reasoning
Two Postdoctoral Research Associates in Neurorobotics are required for a period of 48 months to work on the Horizon/InnovateUK project “PRIMI: Performance in Robots Interaction via Mental Imagery”. This is a collaborative project of the University of Manchester’s Cognitive Robotics Lab with various academic and industry partners in the UK and Europe. PRIMI will synergistically combine research and development in neurophysiology, psychology, machine intelligence, cognitive mechatronics, neuromorphic engineering, and humanoid robotics to build developmental models of higher-cognition abilities – mental imagery, abstract reasoning, and theory of mind – boosted by energy-efficient event-driven computing and sensing. You will carry out research on robot neuro/cognitive architectures, using a combination of machine learning and robotics methodologies. You will work collaboratively as part of the Cognitive Robotics Lab in the Department of Computer Science at the University of Manchester under the supervision of Professor Angelo Cangelosi.
Angelo Cangelosi
A Postdoctoral Research Associate in Neuromorphic Systems and/or Computational Neuroscience for robotics is required for a period of 3.5 years to work on the Horizon/InnovateUK project “PRIMI: Performance in Robots Interaction via Mental Imagery”. This is a collaborative project of the University of Manchester’s Cognitive Robotics Lab with various academic and industry partners in the UK and Europe. PRIMI will synergistically combine research and development in neurophysiology, psychology, machine intelligence, cognitive mechatronics, neuromorphic engineering, and humanoid robotics to build developmental models of higher-cognition abilities – mental imagery, abstract reasoning, and theory of mind – boosted by energy-efficient event-driven computing and sensing. You will carry out research on the design of neuromorphic systems models for robotics, working collaboratively with the other postdocs and PhD students in the PRIMI project. This post requires expertise in computational neuroscience (e.g. spiking neural networks) for robotics and/or neuromorphic systems.
Do Capuchin Monkeys, Chimpanzees and Children form Overhypotheses from Minimal Input? A Hierarchical Bayesian Modelling Approach
Abstract concepts are a powerful tool for storing information efficiently and for making wide-ranging predictions in new situations from sparse data. Whereas looking-time studies point towards an early emergence of this ability in human infancy, other paradigms, such as the relational match-to-sample task, often fail to detect abstract concepts like same and different until the late preschool years. Similarly, non-human animals have difficulty solving these tasks and often succeed only after long training regimes. Given the large influence of small task modifications, there is an ongoing debate about how conclusive these findings are for the development and phylogenetic distribution of abstract reasoning abilities. Here, we applied the concept of “overhypotheses”, well known in the infant and cognitive-modeling literature, to study the capabilities of 3- to 5-year-old children, chimpanzees, and capuchin monkeys in a unified and more ecologically valid task design. In a series of studies, participants either sampled reward items themselves from multiple containers or witnessed the sampling process. Only when they detected the abstract pattern governing the reward distributions within and across containers could they optimally guide their behavior and maximize the reward outcome in a novel test situation. We compared each species’ performance to the predictions of a probabilistic hierarchical Bayesian model capable of forming overhypotheses at a first and second level of abstraction and adapted to each species’ reward preferences.
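The core idea of an overhypothesis — inferring from a few containers whether containers in general are homogeneous or mixed, and using that abstraction to predict a brand-new container — can be sketched with a minimal two-level beta-binomial model on a parameter grid. This is only an illustrative toy, not the authors' model: the grid values, the (mean, concentration) parameterization, and the function names are my own choices, and rewards are simplified to binary draws.

```python
import math
from itertools import product

def beta_binom_logpmf(k, n, a, b):
    """Log-probability of k rewards in n draws when the container's
    reward rate is drawn from Beta(a, b) (beta-binomial marginal)."""
    lg = math.lgamma
    return (lg(n + 1) - lg(k + 1) - lg(n - k + 1)
            + lg(k + a) + lg(n - k + b) - lg(n + a + b)
            + lg(a + b) - lg(a) - lg(b))

# Overhypothesis grid: mean reward rate m across containers and
# concentration c. Small c means containers tend to be all-or-nothing
# (homogeneous within, variable across); large c means every container
# shares roughly the same mixed rate.
MEANS = [i / 10 for i in range(1, 10)]
CONCS = [0.5, 1.0, 2.0, 5.0, 10.0, 50.0]

def posterior(containers):
    """Posterior over (m, c) given [(rewards, draws), ...] per container,
    under a uniform prior on the grid."""
    logp = {}
    for m, c in product(MEANS, CONCS):
        a, b = m * c, (1 - m) * c
        logp[(m, c)] = sum(beta_binom_logpmf(k, n, a, b)
                           for k, n in containers)
    top = max(logp.values())
    w = {p: math.exp(v - top) for p, v in logp.items()}
    z = sum(w.values())
    return {p: v / z for p, v in w.items()}

def expected_concentration(post):
    """Posterior mean of c: the learned overhypothesis about homogeneity."""
    return sum(c * w for (_, c), w in post.items())

# Homogeneous evidence (each container all-reward or no-reward) should
# pull the posterior toward small c; uniformly mixed containers should
# pull it toward large c.
homog = posterior([(6, 6), (0, 6), (6, 6), (0, 6)])
mixed = posterior([(3, 6), (3, 6), (3, 6), (3, 6)])
```

After homogeneous evidence, a single rewarded draw from a novel container licenses a strong prediction that the rest of that container is rewarded too — exactly the behavioral signature the task is designed to elicit.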
Implementing structure mapping as a prior in deep learning models for abstract reasoning
Building conceptual abstractions from sensory information and then reasoning about them is central to human intelligence. Abstract reasoning both relies on, and is facilitated by, our ability to draw analogies from known domains to novel domains. The Structure Mapping Theory of human analogical reasoning posits that analogical mappings rely on (higher-order) relations and not on the sensory content of the domain. This enables humans to reason systematically about novel domains, a problem with which machine learning (ML) models tend to struggle. We introduce a two-stage neural net framework, which we label Neural Structure Mapping (NSM), to learn visual analogies from Raven's Progressive Matrices, an abstract visual reasoning test of fluid intelligence. Our framework uses (1) a multi-task visual relationship encoder to extract constituent concepts from raw visual input in the source domain, and (2) a neural module network analogy inference engine to reason compositionally about the inferred relation in the target domain. Our NSM approach (a) isolates the relational structure from the source domain with high accuracy, and (b) successfully utilizes this structure for analogical reasoning in the target domain.
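The essential point of the two-stage split — first extract an abstract relation from the source, then apply it compositionally in a target with different surface content — can be caricatured without any neural machinery. The sketch below is a deliberately non-neural stand-in using integer rows in place of visual panels; the function names and the single supported relation type are hypothetical illustrations, not part of the NSM architecture.

```python
def infer_relation(row):
    """Stage 1 (stand-in for the relation encoder): extract the abstract
    relation governing a source row, independent of its actual values."""
    d = row[1] - row[0]
    if all(b - a == d for a, b in zip(row, row[1:])):
        # covers constant rows (d == 0) and arithmetic progressions
        return ("progression", d)
    raise ValueError("no supported relation found")

def apply_relation(rel, partial_row, length=3):
    """Stage 2 (stand-in for the inference engine): apply the inferred
    relation in a target domain with different content."""
    kind, d = rel
    assert kind == "progression"
    row = list(partial_row)
    while len(row) < length:
        row.append(row[-1] + d)
    return row

# The relation "+2" learned from the source row [2, 4, 6] transfers to a
# target row that shares no values with the source.
rel = infer_relation([2, 4, 6])
completed = apply_relation(rel, [10])
```

Because stage 2 only ever sees the relation, not the source values, the transfer is systematic by construction — the property the neural framework aims to obtain from learned representations.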
Zero-shot visual reasoning with probabilistic analogical mapping
There has been a recent surge of interest in the question of whether and how deep learning algorithms might be capable of abstract reasoning, much of which has centered around datasets based on Raven’s Progressive Matrices (RPM), a visual analogy problem set commonly employed to assess fluid intelligence. This has led to the development of algorithms that are capable of solving RPM-like problems directly from pixel-level inputs. However, these algorithms require extensive direct training on analogy problems, and typically generalize poorly to novel problem types. This is in stark contrast to human reasoners, who are capable of solving RPM and other analogy problems zero-shot — that is, with no direct training on those problems. Indeed, it is this capacity for zero-shot reasoning about novel problem types, i.e. fluid intelligence, that RPM was originally designed to measure. I will present some results from our recent efforts to model this capacity for zero-shot reasoning, based on an extension of a recently proposed approach to analogical mapping that we refer to as Probabilistic Analogical Mapping (PAM). Our RPM model uses deep learning to extract attributed graph representations from pixel-level inputs, and then aligns objects between source and target analogs using gradient descent to optimize a graph-matching objective. This extended version of PAM features a number of new capabilities that underscore the flexibility of the overall approach, including 1) the capacity to discover solutions that emphasize either object similarity or relation similarity, depending on the demands of a given problem, 2) the ability to extract a schema representing the overall abstract pattern that characterizes a problem, and 3) the ability to directly infer the answer to a problem, rather than relying on a set of possible answer choices. This work suggests that PAM is a promising framework for modeling human zero-shot reasoning.
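The central mechanical step — aligning objects between two attributed graphs by gradient-based optimization of a matching objective that trades off attribute similarity against relation consistency — can be sketched on toy graphs. This is a generic softassign-style sketch under my own assumptions (multiplicative updates plus Sinkhorn renormalization, hand-built one-hot attributes), not the PAM implementation itself.

```python
import numpy as np

def sinkhorn(M, iters=60):
    """Approximately project a positive matrix onto the set of
    doubly-stochastic matrices by alternating row/column normalization."""
    for _ in range(iters):
        M = M / M.sum(axis=1, keepdims=True)
        M = M / M.sum(axis=0, keepdims=True)
    return M

def soft_match(S, A1, A2, lam=1.0, lr=0.5, steps=200):
    """Soft object alignment P maximizing tr(P^T S) (attribute similarity)
    plus lam * sum_ij,kl A1[i,j] A2[k,l] P[i,k] P[j,l] (relation
    consistency), via multiplicative gradient ascent with Sinkhorn steps."""
    n = S.shape[0]
    P = np.full((n, n), 1.0 / n)          # start from the uniform mapping
    for _ in range(steps):
        grad = S + 2.0 * lam * A1 @ P @ A2.T   # gradient of both terms
        P = sinkhorn(P * np.exp(lr * grad))
    return P

# Source: a 4-node chain with one-hot node attributes. Target: the same
# graph under a known permutation. The optimizer should recover it.
perm = [2, 0, 3, 1]
A1 = np.zeros((4, 4))
for i in range(3):
    A1[i, i + 1] = A1[i + 1, i] = 1.0
Pi = np.zeros((4, 4))
for i, j in enumerate(perm):
    Pi[i, j] = 1.0
A2 = Pi.T @ A1 @ Pi                  # permuted adjacency
S = Pi.copy()                        # attribute similarity peaks at the true match
P = soft_match(S, A1, A2)
mapping = [int(np.argmax(P[i])) for i in range(4)]
```

Scaling `lam` up or down shifts the solution toward relation-dominated or attribute-dominated alignments — a crude analogue of the first flexibility the abstract describes.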