Human Cognition
Steven Frankland, Jonathan Phillips
Our labs seek to better understand the normative principles and mechanistic underpinnings of human cognition. Some topics of particular interest are: how we dynamically assemble new thoughts, how we think about alternative possibilities, and how/why processing capacity is often so limited. We encourage students interested in pursuing computational, behavioral, and neuroscientific work on these and related questions to apply to Dartmouth’s Psychological and Brain Sciences PhD Program. In addition to psychology and neuroscience, our labs draw broadly on work in computer science, philosophy, and linguistics. The Cognitive Science Program at Dartmouth maintains close ties to these departments, which provide a breadth of resources for Ph.D. candidates affiliated with the Program in Cognitive Science.
N/A
A post-doctoral position in theoretical neuroscience is open to explore the impact of cardiac inputs on cortical dynamics. Understanding the role of internal states in human cognition has become a hot topic, with a wealth of experimental results but limited attempts at analyzing the computations that underlie the link between bodily organs and brain. Our particular focus is on elucidating how the different mechanisms for heart-to-cortex coupling (e.g., phase-resetting, gating, phasic arousal) can account for human behavioral and neural data, from somatosensory detection to higher-level concepts such as self-relevance, using data-based dynamical models.
Haim Sompolinsky, Kenneth Blum
The Swartz Program at Harvard University seeks applicants for a postdoctoral fellow in theoretical and computational neuroscience. Based on a grant from the Swartz Foundation, a Swartz postdoctoral fellowship is available at Harvard University with a start date in the summer or fall of 2024. Postdocs join a vibrant group of theoretical and experimental neuroscientists plus theorists in allied fields at Harvard’s Center for Brain Science. The Center for Brain Science includes faculty doing research on a wide variety of topics, including neural mechanisms of rodent learning, decision-making, and sex-specific and social behaviors; reinforcement learning in rodents and humans; human motor control; behavioral and fMRI studies of human cognition; circuit mechanisms of learning and behavior in worms, larval flies, and larval zebrafish; circuit mechanisms of individual differences in flies and humans; rodent and fly olfaction; inhibitory circuit development; retinal circuits; and large-scale reconstruction of detailed brain circuitry.
Tom Griffiths
The Department of Computer Science invites applications for a postdoctoral or more senior research position in Computational Cognitive Science, under the direction of Tom Griffiths. The position requires a Ph.D. and is focused on using mathematical, computational, and behavioral methods to understand the nature of intelligence. Specific research areas of interest include applications of large language models in cognitive science and use of Bayesian methods and metalearning to understand human cognition and AI systems.
Tom Griffiths
Princeton University seeks to hire Postdoctoral Research Associates as part of a new initiative exploring the applications and implications of Artificial Intelligence. The Natural and Artificial Minds initiative takes advantage of advances in AI to study natural and artificial minds in parallel, creating the opportunity to make discoveries about ourselves and to find new ways to understand and improve AI systems.
Coraline Rinn Iordan
The University of Rochester’s Department of Brain and Cognitive Sciences seeks to hire an outstanding early-career candidate in the area of Human Cognition. Areas of study may center on any aspect of higher-level cognitive processes such as decision-making, learning and memory, concepts, language and communication, development, reasoning, metacognition, and collective cognition. We particularly welcome applications from candidates researching cognition in human subjects through behavioral, computational or neuroimaging methods. Successful candidates will develop a research program that establishes new collaborations within the department and across the university, and will also be part of a university-wide community engaged in graduate and undergraduate education.
Janet Hsiao
The joint research groups of Professors Janet Hsiao (SOSC), Stuart Gietel-Basten (SOSC), Xiaojuan Ma (CSE), Hao Chen (CSE), and Yangqiu Song (CSE) at Hong Kong University of Science and Technology have openings for multiple new research staff positions at different levels, ranging from Post-doctoral Fellows to Research Assistants. The primary responsibility of the appointees is to develop human-centric benchmarks of AI and robotics technology for geriatric care on the dimensions of being robust, safe, privacy-preserving, helpful, understandable, trustworthy, and sustainable, through investigating their comparability to human cognition and compatibility with human society. Post-doctoral Fellows/Research Associates will also assist in managing and coordinating the development of the collaborative projects and in supervising Research Assistants/research students.
Cameron Buckner
The Department of Philosophy in the College of Liberal Arts and Sciences at the University of Florida invites applications for a Post-doctoral Associate to work on research projects in the philosophy and ethics of artificial intelligence led by Dr. Cameron Buckner, Professor of Philosophy and the Donald F. Cronin Chair in the Humanities, beginning August 16, 2025. We are especially interested in individuals with both a philosophical background and an understanding of recent machine learning technologies to work on topics related to explainability, interpretability, and/or the use of machine learning methods to model human cognition, as well as related ethical and epistemic issues. The department has an established strength in the philosophy of AI, and the associate will have the opportunity to interact and potentially collaborate with other department members working in this area, including David Grant, Duncan Purves, and Amber Ross, as well as numerous AI researchers in other disciplines. The University of Florida has for the last several years been engaged in an ambitious artificial intelligence initiative for research and teaching, including interdisciplinary research. This initiative includes access to HiPerGator, one of the most powerful high-performance computers at a US public university, and NaviGator AI, an API providing easy access to many state-of-the-art Large Language Models and multimodal generative AI systems.
How AI is advancing Clinical Neuropsychology and Cognitive Neuroscience
This talk aims to highlight the immense potential of Artificial Intelligence (AI) in advancing the fields of psychology and cognitive neuroscience. Through the integration of machine learning algorithms, big data analytics, and neuroimaging techniques, AI has the potential to revolutionize the way we study human cognition and brain characteristics. In this talk, I will highlight our latest scientific advancements in utilizing AI to gain deeper insights into variations in cognitive performance across the lifespan and along the continuum from healthy to pathological functioning. The presentation will showcase cutting-edge examples of AI-driven applications, such as deep learning for automated scoring of neuropsychological tests, natural language processing to characterize the semantic coherence of patients with psychosis, and other applications to diagnose and treat psychiatric and neurological disorders. Furthermore, the talk will address the challenges and ethical considerations associated with using AI in psychological research, such as data privacy, bias, and interpretability. Finally, the talk will discuss future directions and opportunities for further advancements in this dynamic field.
Spatial matching tasks for insect minds: relational similarity in bumblebees
Understanding what makes humans unique is a fundamental research drive for comparative psychologists. Cognitive abilities such as theory of mind, cooperation, or mental time travel have been considered uniquely human. Despite empirical evidence showing that animals other than humans are capable (to some extent) of these cognitive achievements, such findings are still heavily contested. In this context, the ability to abstract relations of similarity has also been considered one of the hallmarks of human cognition. While previous research has shown that other animals (e.g., primates) can attend to relational similarity, less is known about what invertebrates can do. In this talk, I will present a series of spatial matching tasks that were previously used with children and great apes and that I adapted for use with wild-caught bumblebees. The findings from these studies suggest striking similarities between vertebrates and invertebrates in their abilities to attend to relational similarity.
From spikes to factors: understanding large-scale neural computations
It is widely accepted that human cognition is the product of spiking neurons. Yet even for basic cognitive functions, such as the ability to make decisions or prepare and execute a voluntary movement, the gap between spikes and computation is vast. Only for very simple circuits and reflexes can one explain computations neuron-by-neuron and spike-by-spike. This approach becomes infeasible when neurons are numerous and the flow of information is recurrent. To understand computation, one thus requires appropriate abstractions. An increasingly common abstraction is the neural ‘factor’. Factors are central to many explanations in systems neuroscience. Factors provide a framework for describing computational mechanism, and offer a bridge between data and concrete models. Yet there remains some discomfort with this abstraction, and with any attempt to provide mechanistic explanations above the level of spikes, neurons, cell-types, and other comfortingly concrete entities. I will explain why, for many networks of spiking neurons, factors are not only a well-defined abstraction, but are critical to understanding computation mechanistically. Indeed, factors are as real as other abstractions we now accept: pressure, temperature, conductance, and even the action potential itself. I will use recent empirical results to illustrate how factor-based hypotheses have become essential to the forming and testing of scientific hypotheses. I will also show how embracing factor-level descriptions affords remarkable power when decoding neural activity for neural engineering purposes.
The functional architecture of the human entorhinal-hippocampal circuitry
Cognitive functions like episodic memory require the formation of cohesive representations. Critical for that process is the entorhinal-hippocampal circuitry’s interaction with cortical information streams and the circuitry’s inner communication. With ultra-high field functional imaging, we investigated the functional architecture of the human entorhinal-hippocampal circuitry. We identified an organization that is consistent with convergence of information in anterior and lateral entorhinal subregions and at the subiculum/CA1 border, while a second route remains specific for scene processing in a posterior-medial entorhinal subregion and the distal subiculum. Our findings agree with information flow along processing routes that functionally split the entorhinal-hippocampal circuitry along its transversal axis. My talk will demonstrate how ultra-high field imaging in humans can bridge the gap between anatomical and electrophysiological findings in rodents and our understanding of human cognition. Moreover, I will point out the implications that basic research on functional architecture has for cognitive and clinical research perspectives.
Neural circuits for novel choices and for choice speed and accuracy changes in macaques
While most experimental tasks aim at isolating simple cognitive processes to study their neural bases, naturalistic behaviour is often complex and multidimensional. I will present two studies revealing previously uncharacterised neural circuits for decision-making in macaques. This was possible thanks to innovative experimental tasks eliciting sophisticated behaviour, bridging the human and non-human primate research traditions. Firstly, I will describe a specialised medial frontal circuit for novel choice in macaques. Traditionally, monkeys receive extensive training before neural data can be acquired, while a hallmark of human cognition is the ability to act in novel situations. I will show how this medial frontal circuit can combine the values of multiple attributes for each available novel item on-the-fly to enable efficient novel choices. This integration process is associated with a hexagonal symmetry pattern in the BOLD response, consistent with a grid-like representation of the space of all available options. We prove the causal role played by this circuit by showing that focussed transcranial ultrasound neuromodulation impairs optimal choice based on attribute integration and forces the subjects to default to a simpler heuristic decision strategy. Secondly, I will present an ongoing project addressing the neural mechanisms driving behaviour shifts during an evidence accumulation task that requires subjects to trade speed for accuracy. While perceptual decision-making in general has been thoroughly studied, both cognitively and neurally, the reasons why speed and/or accuracy are adjusted, and the associated neural mechanisms, have received little attention. We describe two orthogonal dimensions in which behaviour can vary (traditional speed-accuracy trade-off and efficiency) and we uncover independent neural circuits concerned with changes in strategy and fluctuations in the engagement level. 
The former involves the frontopolar cortex, while the latter is associated with the insula and a network of subcortical structures including the habenula.
Neural replay in human cognition
The Limits of Causal Reasoning in Human and Machine Learning
A key purpose of causal reasoning by individuals and by collectives is to enhance action, to give humans yet more control over their environment. As a result, causal reasoning serves as the infrastructure of both thought and discourse. Humans represent causal systems accurately in some ways, but also show some systematic biases (we tend to neglect causal pathways other than the one we are thinking about). Even when accurate, people’s understanding of causal systems tends to be superficial; we depend on our communities for most of our causal knowledge and reasoning. Nevertheless, we are better causal reasoners than machines. Modern machine learners do not come close to matching human abilities.
Towards a Theory of Human Visual Reasoning
Many tasks that are easy for humans are difficult for machines. In particular, while humans excel at tasks that require generalising across problems, machine systems notably struggle. One such task that has received a good amount of attention is the Synthetic Visual Reasoning Test (SVRT). The SVRT consists of a range of problems where simple visual stimuli must be categorised into one of two categories based on an unknown rule that must be induced. Conventional machine learning approaches perform well only when trained to categorise based on a single rule and are unable to generalise without extensive additional training to tasks with any additional rules. Multiple theories of higher-level cognition posit that humans solve such tasks using structured relational representations. Specifically, people learn rules based on structured representations that generalise to novel instances quickly and easily. We believe it is possible to model this approach in a single system which learns all the required relational representations from scratch and performs tasks such as SVRT in a single run. Here, we present a system which expands the DORA/LISA architecture and augments the existing model with principally novel components, namely (a) visual reasoning based on established theories of recognition by components, and (b) the process of learning complex relational representations by synthesis (in addition to learning by analysis). The proposed augmented model matches human behaviour on SVRT problems. Moreover, the proposed system stands as perhaps a more realistic account of human cognition: rather than using tools that have proven successful in the machine learning field to inform psychological theorising, we use established psychological theories to inform the development of a machine system.
Achieving Abstraction: Early Competence & the Role of the Learning Context
Children's emerging ability to acquire and apply relational same-different concepts is often cited as a defining feature of human cognition, providing the foundation for abstract thought. Yet, young learners often struggle to ignore irrelevant surface features to attend to structural similarity instead. I will argue that young children have, and retain, genuine relational concepts from a young age, but tend to neglect abstract similarity due to a learned bias to attend to objects and their properties. Critically, this account predicts that differences in the structure of children's environmental input should lead to differences in the type of hypotheses they privilege and apply. I will review empirical support for this proposal that has (1) evaluated the robustness of early competence in relational reasoning, (2) identified cross-cultural differences in relational and object bias, and (3) provided evidence that contextual factors play a causal role in relational reasoning. Together, these studies suggest that the development of abstract thought may be more malleable and context-sensitive than initially believed.
Representations of abstract relations in infancy
Abstract relations are considered the pinnacle of human cognition, allowing analogical and logical reasoning, and possibly setting humans apart from other animal species. Such relations cannot be represented in a perceptual code but can easily be represented in a propositional language of thought, where relations between objects are represented by abstract discrete symbols. Focusing on the abstract relations same and different, I will show that (1) there is a discontinuity along ontogeny with respect to the representations of abstract relations, but (2) young infants already possess representations of same and different. Finally, (3) I will investigate the format of representation of abstract relations in young infants, arguing that those representations are not discrete, but rather built by juxtaposing abstract representations of entities.
Mental Simulation, Imagination, and Model-Based Deep RL
Mental simulation—the capacity to imagine what will or what could be—is a salient feature of human cognition, playing a key role in a wide range of cognitive abilities. In artificial intelligence, the last few years have seen the development of methods which are analogous to mental models and mental simulation. In this talk, I will discuss recent methods in deep learning for constructing such models from data and learning to use them via reinforcement learning, and compare such approaches to human mental simulation. While a number of challenges remain in matching the capacity of human mental simulation, I will highlight some recent progress on developing more compositional and efficient model-based algorithms through the use of graph neural networks and tree search.
Sensory-motor control, cognition and brain evolution: exploring the links
Drawing on recent findings from evolutionary anthropology and neuroscience, Professor Barton will lead us through the amazing story of the evolution of human cognition. Using statistical, phylogenetic analyses that tease apart the variation associated with different neural systems and with different selection pressures, he will address intriguing questions such as ‘Why are there so many neurons in the cerebellum?’, ‘Is the neocortex the “intelligent” bit of the brain?’, and ‘Why is human recognition of emotional expressions disrupted by transcranial magnetic stimulation of the somatosensory cortex?’ Could, as Professor Barton suggests, the cerebellum (modestly concealed beneath the volumetrically dominant neocortex, and largely ignored) turn out to be the Cinderella of the study of brain evolution?
Exploration beyond bandits
Machine learning researchers frequently focus on human-level performance, in particular in games. However, in these applications human (or human-level) behavior is commonly reduced to a simple dot on a performance graph. Cognitive science, in particular theories of learning and decision making, could hold the key to unlocking what lies behind this dot, thereby gaining further insights into human cognition and the design principles of intelligent algorithms. However, cognitive experiments commonly focus on relatively simple paradigms such as restricted multi-armed bandit tasks. In this talk, I will argue that cognitive science can turn its lens to more complex scenarios to study exploration in real-world domains and online games. I will show, in one large data set of online food delivery orders and across many online games, how current cognitive theories of learning and exploration can describe human behavior in the wild, but also how these tasks require us to expand our theoretical toolkit to describe a rich repertoire of real-world behaviors such as empowerment and fun.
Multitask performance in humans and deep neural networks
Humans and other primates exhibit rich and versatile behaviour, switching nimbly between tasks as the environmental context requires. I will discuss the neural coding patterns that make this possible in humans and deep networks. First, using deep network simulations, I will characterise two distinct solutions to task acquisition (“lazy” and “rich” learning) which trade off learning speed for robustness, and which depend on the initial weight scale and network sparsity. I will chart the predictions of these two schemes for a context-dependent decision-making task, showing that the rich solution is to project task representations onto orthogonal planes in a low-dimensional embedding space. Using behavioural testing and functional neuroimaging in humans, we observe BOLD signals in human prefrontal cortex whose dimensionality and neural geometry are consistent with the rich learning regime. Next, I will discuss the problem of continual learning, showing that behaviourally, humans (unlike vanilla neural networks) learn more effectively when conditions are blocked than interleaved. I will show how this counterintuitive pattern of behaviour can be recreated in neural networks by assuming that information is normalised and temporally clustered (via Hebbian learning) alongside supervised training. Together, this work offers a picture of how humans learn to partition knowledge in the service of structured behaviour, and offers a roadmap for building neural networks that adopt similar principles in the service of multitask learning. This is work with Andrew Saxe, Timo Flesch, David Nagy, and others.
The Structural Anchoring of Spontaneous Analogies
It is generally acknowledged that analogy is a core mechanism of human cognition, but paradoxically, analogies based on structural similarities are rarely drawn spontaneously (i.e., without an explicit invitation to compare two representations). The scarcity of deep spontaneous analogies is at odds with the demonstration that familiar concepts from our daily lives are spontaneously used to encode the structure of our experiences. Building on this idea, we will present experimental work highlighting the predominant role of structural similarities in analogical retrieval. The educational stakes lurking behind the tendency to encode problem structures through familiar concepts will also be addressed.
Human voluntary action: from thought to movement
The ability to decide and act autonomously is a distinctive feature of human cognition. From a motor neurophysiology viewpoint, these 'voluntary' actions can be distinguished by the lack of an obvious triggering sensory stimulus: the action is considered a product of thought, rather than a reflex response to a specific input. A reverse-engineering approach shows that such actions are caused by neurons of the primary motor cortex, which in turn depend on medial frontal areas, and finally on a combination of prefrontal cortical connections and subcortical drive from basal ganglia loops. One traditional marker of voluntary action is the EEG readiness potential (RP), recorded over the frontal cortex prior to voluntary actions. However, the interpretation of this signal remains controversial, and very few experimental studies have attempted to link the RP to the thought process that leads to voluntary action. In this talk, I will report new studies showing that learning an internal model of the optimum delay at which to act influences the amplitude of the RP. More generally, a scientific understanding of voluntariness and autonomy will require new neurocognitive paradigms connecting thought and action.
What can we further learn from the brain for artificial intelligence?
Deep learning is a prime example of how brain-inspired computing can benefit development of artificial intelligence. But what else can we learn from the brain for bringing AI and robotics to the next level? Energy efficiency and data efficiency are the major features of the brain and human cognition that today’s deep learning has yet to deliver. The brain can be seen as a multi-agent system of heterogeneous learners using different representations and algorithms. The flexible use of reactive, model-free control and model-based “mental simulation” appears to be the basis for computational and data efficiency of the brain. How the brain efficiently acquires and flexibly combines prediction and control modules is a major open problem in neuroscience and its solution should help developments of more flexible and autonomous AI and robotics.
Domain Specificity in the Human Brain: What, Whether, and Why?
The last quarter century has provided extensive evidence that some regions of the human cortex are selectively engaged in processing a single specific domain of information, from faces, places, and bodies to language, music, and other people’s thoughts. This work dovetails with earlier theories in cognitive science highlighting domain specificity in human cognition, development, and evolution. But many questions remain unanswered about even the clearest cases of domain specificity in the brain, the selective engagement of the FFA, PPA, and EBA in the perception of faces, places, and bodies, respectively. First, these claims lack precision, saying little about what is computed and how, and relying on human judgements to decide what counts as a face, place, or body. Second, they provide no account of the reliably varying responses of these regions across different “preferred” images, or across different “nonpreferred” images for each category. Third, the category selectivity of each region is vulnerable to refutation if any of the vast set of as-yet-untested nonpreferred images turns out to produce a stronger response than preferred images for that region. Fourth, and most fundamentally, they provide no account of why, from a computational point of view, brains should exhibit this striking degree of functional specificity in the first place, and why we should have the particular visual specializations we do, for faces, places, and bodies, but not (apparently) for food or snakes. The advent of convolutional neural networks (CNNs) to model visual processing in the ventral pathway has opened up many opportunities to address these long-standing questions in new ways. I will describe ongoing efforts in our lab to harness CNNs to do just that.
Exploring the impact of interthalamic adhesion on human cognition: Insights from healthy subjects and thalamic stroke patients
FENS Forum 2024