Prof Zoe Kourtzi
Post-doctoral position in Cognitive Computational Neuroscience at the Adaptive Brain Lab. The role involves combining high-field brain imaging (7T fMRI, MR spectroscopy), electrophysiology (EEG), computational modelling (machine learning, reinforcement learning) and interventions (TMS, tDCS, pharmacology) to understand network dynamics for learning and brain plasticity. The research programme bridges work across scales (local circuits, global networks) and species (humans, rodents) to uncover the neurocomputations that support learning and brain plasticity.
Dr Silvia Maggi, Professor Mark Humphries, Dr Hazem Toutonji
A fully-funded PhD is available with Dr Silvia Maggi and Professor Mark Humphries (University of Nottingham) and Dr Hazem Toutonji (University of Sheffield). The project involves understanding how subjects respond to dynamic environments and requires approaches that can track subjects' choice strategies at the resolution of single trials. The team recently developed a Bayesian inference algorithm that enables trial-resolution tracking of learning and exploration. This project will build on that work to solve two crucial problems: determining which of a set of behavioural strategies a subject is using, and incorporating evidence uncertainty into the detection of strategies and of transitions between them. Applying the extended algorithm to datasets of rodents and humans performing decision tasks will let us test a range of hypotheses for how correct decisions are learnt and what innate strategies are used.
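One way to realise such trial-resolution strategy tracking is to maintain, for each candidate strategy, a Beta distribution over the probability that the subject's choices agree with that strategy, and to decay old evidence so the estimate can follow switches between strategies. The sketch below is illustrative only, not the team's published implementation; the decay parameter `gamma` and the prior bookkeeping are assumptions.

```python
import numpy as np

def track_strategy(matches, gamma=0.9, a0=1.0, b0=1.0):
    """Trial-resolution estimate of P(subject is using this strategy).

    matches: 1/0 per trial, whether the observed choice agreed with the
             strategy's prediction. gamma < 1 decays accumulated evidence
             back toward the Beta(a0, b0) prior, so the estimate can track
             switches between strategies rather than averaging over them.
    Returns the posterior-mean probability after every trial.
    """
    a, b = float(a0), float(b0)
    out = []
    for x in matches:
        # decay old evidence toward the prior, then add the new observation
        a = a0 + gamma * (a - a0) + x
        b = b0 + gamma * (b - b0) + (1 - x)
        out.append(a / (a + b))
    return np.array(out)
```

Running this tracker in parallel for several candidate strategies (e.g. 'go left', 'win-stay') and comparing the traces indicates which strategy best explains behaviour on each trial.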
“Brain theory, what is it or what should it be?”
In the neurosciences the need for some 'overarching' theory is sometimes expressed, but it is not always obvious what is meant by this. One can perhaps agree that in modern science observation and experimentation are normally complemented by 'theory', i.e. the development of theoretical concepts that help guide and evaluate experiments and measurements. A deeper discussion of 'brain theory' will require the clarification of some further distinctions, in particular: theory vs. model, and brain research (and its theory) vs. neuroscience. Other questions are: Does a theory require mathematics? Or even differential equations? Today it is often taken for granted that the whole universe, including everything in it (for example humans, animals, and plants), can be adequately treated by physics, and that theoretical physics is therefore the overarching theory. Even if this is the case, it has turned out that in some parts of physics (the historical example is thermodynamics) it may be useful to simplify the theory by introducing additional theoretical concepts that can in principle be 'reduced' to more complex descriptions at the 'microscopic' level of basic physical particles and forces. In this sense, brain theory may be regarded as part of theoretical neuroscience, which sits within biophysics and therefore within physics, or theoretical physics. Still, in neuroscience and brain research, additional concepts that are 'outside' physics are typically used to describe results and to help guide experimentation, beginning with neurons and synapses and the names of brain parts and areas, up to concepts like 'learning', 'motivation', and 'attention'. Certainly, we do not yet have one theory that includes all these concepts. So 'brain theory' is still in a 'pre-Newtonian' state.
However, it may still be useful to understand in general the relations between a larger theory and its 'parts', or between microscopic and macroscopic theories, or between theories at different 'levels' of description. This is what I plan to do.
Neural circuits underlying sleep structure and functions
Sleep is an active state critical for processing emotional memories encoded during waking in both humans and animals. There is a remarkable overlap between the brain structures and circuits active during sleep, particularly rapid eye-movement (REM) sleep, and those encoding emotions. Accordingly, disruptions in sleep quality or quantity, including REM sleep, are often associated with, and precede the onset of, nearly all affective psychiatric and mood disorders. In this context, a major biomedical challenge is to better understand the mechanisms underlying the relationship between (REM) sleep and emotion encoding, in order to improve treatments for mental health. This lecture will summarize our investigation of the cellular and circuit mechanisms underlying sleep architecture, sleep oscillations, and local brain dynamics across sleep-wake states using electrophysiological recordings combined with single-cell calcium imaging or optogenetics. The presentation will detail the discovery of a 'somato-dendritic decoupling' in prefrontal cortex pyramidal neurons underlying REM sleep-dependent stabilization of optimal emotional memory traces. This decoupling reflects a tonic inhibition at the somas of pyramidal cells, occurring simultaneously with a disinhibition of their dendritic arbors selectively during REM sleep. Recent findings on REM sleep-dependent subcortical inputs and neuromodulation of this decoupling will be discussed in the context of synaptic plasticity and the optimization of emotional responses in the maintenance of mental health.
“Development and application of gaze control models for active perception”
Gaze shifts in humans serve to direct the high-resolution vision provided by the fovea towards areas in the environment. Gaze can be considered a proxy for attention or an indicator of the relative importance of different parts of the environment. In this talk, we discuss the development of generative models of human gaze in response to visual input. We discuss how such models can be learned, both using supervised learning and using implicit feedback as an agent interacts with the environment, the latter being more plausible in biological agents. We also discuss two ways such models can be used. First, they can improve the performance of artificial autonomous systems, in applications such as autonomous navigation. Second, because these models are contingent on the human's task, goals, and/or state in the context of the environment, observations of gaze can be used to infer information about user intent. This information can be used to improve human-machine and human-robot interaction, by making interfaces more anticipative. We discuss example applications in gaze-typing, robotic tele-operation and human-robot interaction.
Expanding mechanisms and therapeutic targets for neurodegenerative disease
A hallmark pathological feature of the neurodegenerative diseases amyotrophic lateral sclerosis (ALS) and frontotemporal dementia (FTD) is the depletion of RNA-binding protein TDP-43 from the nucleus of neurons in the brain and spinal cord. A major function of TDP-43 is as a repressor of cryptic exon inclusion during RNA splicing. By re-analyzing RNA-sequencing datasets from human FTD/ALS brains, we discovered dozens of novel cryptic splicing events in important neuronal genes. Single nucleotide polymorphisms in UNC13A are among the strongest hits associated with FTD and ALS in human genome-wide association studies, but how those variants increase risk for disease is unknown. We discovered that TDP-43 represses a cryptic exon-splicing event in UNC13A. Loss of TDP-43 from the nucleus in human brain, neuronal cell lines and motor neurons derived from induced pluripotent stem cells resulted in the inclusion of a cryptic exon in UNC13A mRNA and reduced UNC13A protein expression. The top variants associated with FTD or ALS risk in humans are located in the intron harboring the cryptic exon, and we show that they increase UNC13A cryptic exon splicing in the face of TDP-43 dysfunction. Together, our data provide a direct functional link between one of the strongest genetic risk factors for FTD and ALS (UNC13A genetic variants), and loss of TDP-43 function. Recent analyses have revealed even further changes in TDP-43 target genes, including widespread changes in alternative polyadenylation, impacting expression of disease-relevant genes (e.g., ELP1, NEFL, and TMEM106B) and providing evidence that alternative polyadenylation is a new facet of TDP-43 pathology.
Neurobiological Pathways to Tau-dependent Pathology: Perspectives from flies to humans
Gene regulatory mechanisms of neocortex development and evolution
The neocortex is considered to be the seat of higher cognitive functions in humans. During its evolution, most notably in humans, the neocortex has undergone considerable expansion, which is reflected by an increase in the number of neurons. Neocortical neurons are generated during development by neural stem and progenitor cells. Epigenetic mechanisms play a pivotal role in orchestrating the behaviour of stem cells during development. We are interested in the mechanisms that regulate gene expression in neural stem cells, which have implications for our understanding of neocortex development and evolution, neural stem cell regulation and neurodevelopmental disorders.
Decision and Behavior
This webinar addressed computational perspectives on how animals and humans make decisions, spanning normative, descriptive, and mechanistic models. Sam Gershman (Harvard) presented a capacity-limited reinforcement learning framework in which policies are compressed under an information bottleneck constraint. This approach predicts pervasive perseveration, stimulus‐independent “default” actions, and trade-offs between complexity and reward. Such policy compression reconciles observed action stochasticity and response time patterns with an optimal balance between learning capacity and performance. Jonathan Pillow (Princeton) discussed flexible descriptive models for tracking time-varying policies in animals. He introduced dynamic Generalized Linear Models (Sidetrack) and hidden Markov models (GLM-HMMs) that capture day-to-day and trial-to-trial fluctuations in choice behavior, including abrupt switches between “engaged” and “disengaged” states. These models provide new insights into how animals’ strategies evolve under learning. Finally, Kenji Doya (OIST) highlighted the importance of unifying reinforcement learning with Bayesian inference, exploring how cortical-basal ganglia networks might implement model-based and model-free strategies. He also described Japan’s Brain/MINDS 2.0 and Digital Brain initiatives, aiming to integrate multimodal data and computational principles into cohesive “digital brains.”
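The GLM-HMM idea described above can be sketched generatively: a hidden Markov chain over behavioural states, each equipped with its own logistic-regression policy. The two-state structure, weights, and transition probabilities below are illustrative assumptions, not fitted parameters from the work discussed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-state GLM-HMM: each latent state has its own
# logistic-regression policy mapping stimulus to P(choose right).
weights = {0: np.array([4.0, 0.0]),   # "engaged": strong stimulus weight
           1: np.array([0.0, 1.0])}   # "disengaged": stimulus ignored, bias only
trans = np.array([[0.98, 0.02],       # sticky transitions produce long
                  [0.02, 0.98]])      # runs of engagement/disengagement

def simulate(n_trials=500):
    """Generate choices from the hidden-state-dependent policies."""
    state, states, choices, stims = 0, [], [], []
    for _ in range(n_trials):
        s = rng.uniform(-1, 1)                     # signed stimulus strength
        x = np.array([s, 1.0])                     # [stimulus, bias] regressors
        p_right = 1 / (1 + np.exp(-weights[state] @ x))
        choices.append(int(rng.random() < p_right))
        stims.append(s); states.append(state)
        state = int(rng.choice(2, p=trans[state])) # Markov state transition
    return np.array(stims), np.array(choices), np.array(states)
```

Fitting such a model to real choice data (the inverse problem) is what recovers the trial-by-trial state sequence and the abrupt engaged/disengaged switches described in the talk.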
Dynamic neurochemistry in conscious humans during stereoEEG monitoring
Brain-Wide Compositionality and Learning Dynamics in Biological Agents
Biological agents continually reconcile the internal states of their brain circuits with incoming sensory and environmental evidence to evaluate when and how to act. The brains of biological agents, including animals and humans, exploit many evolutionary innovations, chiefly modularity—observable at the level of anatomically-defined brain regions, cortical layers, and cell types among others—that can be repurposed in a compositional manner to endow the animal with a highly flexible behavioral repertoire. Accordingly, their behaviors show their own modularity, yet such behavioral modules seldom correspond directly to traditional notions of modularity in brains. It remains unclear how to link neural and behavioral modularity in a compositional manner. We propose a comprehensive framework—compositional modes—to identify overarching compositionality spanning specialized submodules, such as brain regions. Our framework directly links the behavioral repertoire with distributed patterns of population activity, brain-wide, at multiple concurrent spatial and temporal scales. Using whole-brain recordings of zebrafish brains, we introduce an unsupervised pipeline based on neural network models, constrained by biological data, to reveal highly conserved compositional modes across individuals despite the naturalistic (spontaneous or task-independent) nature of their behaviors. These modes provided a scaffolding for other modes that account for the idiosyncratic behavior of each fish. We then demonstrate experimentally that compositional modes can be manipulated in a consistent manner by behavioral and pharmacological perturbations. Our results demonstrate that even natural behavior in different individuals can be decomposed and understood using a relatively small number of neurobehavioral modules—the compositional modes—and elucidate a compositional neural basis of behavior. 
This approach aligns with recent progress in understanding how reasoning capabilities and internal representational structures develop over the course of learning or training, offering insights into the modularity and flexibility in artificial and biological agents.
Decomposing motivation into value and salience
Humans and other animals approach reward and avoid punishment and pay attention to cues predicting these events. Such motivated behavior thus appears to be guided by value, which directs behavior towards or away from positively or negatively valenced outcomes. Moreover, it is facilitated by (top-down) salience, which enhances attention to behaviorally relevant learned cues predicting the occurrence of valenced outcomes. Using human neuroimaging, we recently separated value (ventral striatum, posterior ventromedial prefrontal cortex) from salience (anterior ventromedial cortex, occipital cortex) in the domain of liquid reward and punishment. Moreover, we investigated potential drivers of learned salience: the probability and uncertainty with which valenced and non-valenced outcomes occur. We find that the brain dissociates valenced from non-valenced probability and uncertainty, which indicates that reinforcement matters for the brain, in addition to information provided by probability and uncertainty alone, regardless of valence. Finally, we assessed learning signals (unsigned prediction errors) that may underpin the acquisition of salience. Particularly the insula appears to be central for this function, encoding a subjective salience prediction error, similarly at the time of positively and negatively valenced outcomes. However, it appears to employ domain-specific time constants, leading to stronger salience signals in the aversive than the appetitive domain at the time of cues. These findings explain why previous research associated the insula with both valence-independent salience processing and with preferential encoding of the aversive domain. More generally, the distinction of value and salience appears to provide a useful framework for capturing the neural basis of motivated behavior.
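The value/salience distinction above maps naturally onto signed versus unsigned prediction errors. A minimal sketch of one learning step (the delta-rule form and learning rates are illustrative assumptions, not the study's fitted model):

```python
def update(value, salience, outcome, alpha_v=0.1, alpha_s=0.1):
    """One trial of value (signed PE) and salience (unsigned PE) learning.

    Value learning uses the signed prediction error delta, so appetitive
    and aversive outcomes push value in opposite directions. Salience
    learning uses |delta|, so any surprising outcome, regardless of
    valence, increases the cue's learned salience.
    """
    delta = outcome - value
    value += alpha_v * delta                        # signed: tracks valence
    salience += alpha_s * (abs(delta) - salience)   # unsigned: tracks surprise
    return value, salience
```

Under this scheme a cue followed by unpredictable mixed outcomes keeps a near-zero value but a high salience, which is exactly the dissociation the imaging work exploits; domain-specific time constants, as reported for the insula, would correspond to different `alpha_s` values for appetitive and aversive outcomes.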
Comparing supervised learning dynamics: Deep neural networks match human data efficiency but show a generalisation lag
Recent research has seen many behavioral comparisons between humans and deep neural networks (DNNs) in the domain of image classification. Often, comparison studies focus on the end-result of the learning process by measuring and comparing the similarities in the representations of object categories once they have been formed. However, the process of how these representations emerge—that is, the behavioral changes and intermediate stages observed during the acquisition—is less often directly and empirically compared. In this talk, I'm going to report a detailed investigation of the learning dynamics in human observers and various classic and state-of-the-art DNNs. We develop a constrained supervised learning environment to align learning-relevant conditions such as starting point, input modality, available input data and the feedback provided. Across the whole learning process we evaluate and compare how well learned representations can be generalized to previously unseen test data. Comparisons across the entire learning process indicate that DNNs demonstrate a level of data efficiency comparable to human learners, challenging some prevailing assumptions in the field. However, our results also reveal representational differences: while DNNs' learning is characterized by a pronounced generalisation lag, humans appear to immediately acquire generalizable representations without a preliminary phase of learning training set-specific information that is only later transferred to novel data.
Error Consistency between Humans and Machines as a function of presentation duration
Within the last decade, Deep Artificial Neural Networks (DNNs) have emerged as powerful computer vision systems that match or exceed human performance on many benchmark tasks such as image classification. But whether current DNNs are suitable computational models of the human visual system remains an open question: while DNNs have proven capable of predicting neural activations in primate visual cortex, psychophysical experiments have shown behavioral differences between DNNs and human subjects, as quantified by error consistency. Error consistency is typically measured by briefly presenting natural or corrupted images to human subjects and asking them to perform an n-way classification task under time pressure. But for how long should stimuli ideally be presented to guarantee a fair comparison with DNNs? Here we investigate the influence of presentation time on error consistency, to test the hypothesis that higher-level processing drives behavioral differences. We systematically vary presentation times of backward-masked stimuli from 8.3 ms to 266 ms and measure human performance and reaction times on natural, lowpass-filtered and noisy images. Our experiment constitutes a fine-grained analysis of human image classification under both image corruptions and time pressure, showing that even drastically time-constrained humans who are exposed to the stimuli for only two frames, i.e. 16.6 ms, can still solve our 8-way classification task with success rates well above chance. We also find that human-to-human error consistency is already stable at 16.6 ms.
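Error consistency is commonly computed as a Cohen's-kappa-style score that corrects the observed trial-by-trial error overlap between two observers for the overlap expected by chance given their accuracies. A minimal sketch:

```python
import numpy as np

def error_consistency(correct_a, correct_b):
    """Kappa-style error consistency between two observers.

    correct_a, correct_b: per-trial correctness (booleans) of each observer
    on the same trials. kappa = 0 means their errors overlap no more than
    expected by chance given each observer's accuracy; kappa = 1 means
    identical trial-by-trial error patterns.
    """
    a = np.asarray(correct_a, dtype=bool)
    b = np.asarray(correct_b, dtype=bool)
    c_obs = np.mean(a == b)                      # observed agreement
    p1, p2 = a.mean(), b.mean()
    c_exp = p1 * p2 + (1 - p1) * (1 - p2)        # chance agreement
    return (c_obs - c_exp) / (1 - c_exp)
```

The chance-correction is what makes the measure informative: two observers can agree on most trials simply because both are highly accurate, and kappa discounts exactly that.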
Navigating semantic spaces: recycling the brain GPS for higher-level cognition
Humans share with other animals a complex neuronal machinery that evolved to support navigation in physical space, underpinning wayfinding and path integration. In my talk I will present a series of recent neuroimaging studies in humans, performed in my lab, aimed at investigating the idea that this same neural navigation system (the “brain GPS”) is also used to organize and navigate concepts and memories, and that abstract and spatial representations rely on a common neural fabric. I will argue that this might represent a novel example of “cortical recycling”, in which the neuronal machinery that primarily evolved, in lower-level animals, to represent relationships between spatial locations and to navigate space is reused in humans to encode relationships between concepts in an internal abstract representational space of meaning.
This decision matters: Sorting out the variables that lead to a single choice
Towards Human Systems Biology of Sleep/Wake Cycles: Phosphorylation Hypothesis of Sleep
The field of human biology faces three major technological challenges. Firstly, the causation problem: causation is difficult to address in humans compared to model animals. Secondly, the complexity problem: although the human body is composed of cells, a comprehensive cell atlas for it is still lacking. Lastly, the heterogeneity problem: both genetic and environmental factors vary significantly among individuals. To tackle these challenges, we have developed innovative approaches. These include 1) mammalian next-generation genetics, such as Triple CRISPR for knockout (KO) mice and ES mice for knock-in (KI) mice, which enables causation studies without traditional breeding; 2) whole-body/brain cell-profiling techniques, such as CUBIC, to unravel the complexity of cellular composition; and 3) accurate and user-friendly technologies for measuring sleep and wake states, exemplified by ACCEL, to facilitate the monitoring of fundamental brain states in real-world settings and thus address heterogeneity in humans.
Inducing short to medium neuroplastic effects with Transcranial Ultrasound Stimulation
Sound waves can be used to modify brain activity safely and transiently with unprecedented precision even deep in the brain - unlike traditional brain stimulation methods. In a series of studies in humans and non-human primates, I will show that Transcranial Ultrasound Stimulation (TUS) can have medium- to long-lasting effects. Multiple read-outs allow us to conclude that TUS can perturb neuronal tissues up to 2h after intervention, including changes in local and distributed brain network configurations, behavioural changes, task-related neuronal changes and chemical changes in the sonicated focal volume. Combined with multiple neuroimaging techniques (resting state functional Magnetic Resonance Imaging [rsfMRI], Spectroscopy [MRS] and task-related fMRI changes), this talk will focus on recent human TUS studies.
Great ape interaction: Ladyginian but not Gricean
Non-human great apes inform one another in ways that can seem very humanlike. Especially in the gestural domain, their behavior exhibits many similarities with human communication, meeting widely used empirical criteria for intentionality. At the same time, there remain some manifest differences. How to account for these similarities and differences in a unified way remains a major challenge. This presentation will summarise the arguments developed in a recent paper with Christophe Heintz. We make a key distinction between the expression of intentions (Ladyginian) and the expression of specifically informative intentions (Gricean), and we situate this distinction within a ‘special case of’ framework for classifying different modes of attention manipulation. The paper also argues that the attested tendencies of great ape interaction—for instance, to be dyadic rather than triadic, and to be about the here-and-now rather than ‘displaced’—are products of its Ladyginian but not Gricean character. I will reinterpret video footage of great ape gesture as Ladyginian but not Gricean, and distinguish several varieties of meaning that are continuous with one another. We conclude that the evolutionary origins of linguistic meaning lie in gradual changes not in communication systems as such, but rather in social cognition, and specifically in what modes of attention manipulation are enabled by a species’ cognitive phenotype: first Ladyginian and in turn Gricean. The second of these shifts rendered humans, and only humans, ‘language ready’.
A recurrent network model of planning predicts hippocampal replay and human behavior
When interacting with complex environments, humans can rapidly adapt their behavior to changes in task or context. To facilitate this adaptation, we often spend substantial periods of time contemplating possible futures before acting. For such planning to be rational, the benefits of planning to future behavior must at least compensate for the time spent thinking. Here we capture these features of human behavior by developing a neural network model where not only actions, but also planning, are controlled by prefrontal cortex. This model consists of a meta-reinforcement learning agent augmented with the ability to plan by sampling imagined action sequences drawn from its own policy, which we refer to as 'rollouts'. Our results demonstrate that this agent learns to plan when planning is beneficial, explaining the empirical variability in human thinking times. Additionally, the patterns of policy rollouts employed by the artificial agent closely resemble patterns of rodent hippocampal replays recently recorded in a spatial navigation task, in terms of both their spatial statistics and their relationship to subsequent behavior. Our work provides a new theory of how the brain could implement planning through prefrontal-hippocampal interactions, where hippocampal replays are triggered by, and in turn adaptively affect, prefrontal dynamics.
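The core mechanism of planning by sampling rollouts from one's own policy can be illustrated in a toy setting. Everything below is an illustrative assumption (a small deterministic chain world, a fixed stochastic policy, a greedy choice over imagined returns), not the paper's meta-reinforcement-learning agent:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy deterministic chain: states 0..5, reward 1 on reaching state 5.
N, GOAL = 6, 5

def step(s, a):
    """Environment dynamics; a: 0 = move left, 1 = move right."""
    s2 = min(max(s + (1 if a == 1 else -1), 0), N - 1)
    return s2, float(s2 == GOAL)

def rollout(s, policy, horizon=8):
    """Sample one imagined action sequence from the agent's own policy."""
    total, first = 0.0, None
    for _ in range(horizon):
        a = int(rng.random() < policy[s])   # policy[s] = P(go right | s)
        if first is None:
            first = a                       # remember the rollout's first action
        s, r = step(s, a)
        total += r
        if r > 0:
            break
    return first, total

def plan(s, policy, n_rollouts=20):
    """Execute the first action of the highest-return imagined rollout."""
    best_a, best_r = 1, -np.inf
    for _ in range(n_rollouts):
        a, r = rollout(s, policy)
        if r > best_r:
            best_a, best_r = a, r
    return best_a
```

Even with a mediocre base policy, the rollout step sharpens behavior, which is the sense in which planning "pays for" thinking time; in the paper this trade-off is itself learned rather than hard-coded.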
Rodents to Investigate the Neural Basis of Audiovisual Temporal Processing and Perception
To form a coherent perception of the world around us, we are constantly processing and integrating sensory information from multiple modalities. In fact, when auditory and visual stimuli occur within ~100 ms of each other, individuals tend to perceive the stimuli as a single event, even though they occurred separately. In recent years, our lab, and others, have developed rat models of audiovisual temporal perception using behavioural tasks such as temporal order judgments (TOJs) and synchrony judgments (SJs). While these rodent models demonstrate metrics that are consistent with humans (e.g., perceived simultaneity, temporal acuity), we have sought to confirm whether rodents demonstrate the hallmarks of audiovisual temporal perception, such as predictable shifts in their perception based on experience and sensitivity to alterations in neurochemistry. Ultimately, our findings indicate that rats serve as an excellent model to study the neural mechanisms underlying audiovisual temporal perception, which to date remain relatively unknown. Using our validated translational audiovisual behavioural tasks, in combination with optogenetics, neuropharmacology and in vivo electrophysiology, we aim to uncover the mechanisms by which inhibitory neurotransmission and top-down circuits finely control one's perception. This research will significantly advance our understanding of the neuronal circuitry underlying audiovisual temporal perception, and will be the first to establish the role of interneurons in regulating the synchronized neural activity that is thought to contribute to the precise binding of audiovisual stimuli.
How Intermittent Bioenergetic Challenges Enhance Brain and Body Health
Humans and other animals evolved in habitats fraught with a range of environmental challenges to their bodies and brains. Accordingly, cells and organ systems possess adaptive stress-responsive signaling pathways that enable them to not only withstand environmental challenges, but also to prepare for future challenges and function more efficiently. These phylogenetically conserved processes are the foundation of the hormesis principle in which repeated exposures to low to moderate amounts of an environmental challenge improve cellular and organismal fitness. Here I describe cellular and molecular mechanisms by which cells in the brain and body respond to intermittent fasting and exercise in ways that enhance performance and counteract aging and disease processes. Switching back and forth between adaptive stress response (during fasting and exercise) and growth and plasticity (eating, resting, sleeping) modes enhances the performance and resilience of various organ systems. While pharmacological interventions that engage a particular hormetic mechanism are being developed, it seems unlikely that any will prove superior to fasting and exercise.
Social and non-social learning: Common, or specialised, mechanisms? (BACN Early Career Prize Lecture 2022)
The last decade has seen a burgeoning interest in studying the neural and computational mechanisms that underpin social learning (learning from others). Many findings support the view that learning from other people is underpinned by the same, ‘domain-general’, mechanisms underpinning learning from non-social stimuli. Despite this, the idea that humans possess social-specific learning mechanisms - adaptive specializations moulded by natural selection to cope with the pressures of group living - persists. In this talk I explore the persistence of this idea. First, I present dissociations between social and non-social learning - patterns of data which are difficult to explain under the domain-general thesis and which therefore support the idea that we have evolved special mechanisms for social learning. Subsequently, I argue that most studies that have dissociated social and non-social learning have employed paradigms in which social information comprises a secondary, additional, source of information that can be used to supplement learning from non-social stimuli. Thus, in most extant paradigms, social and non-social learning differ both in terms of social nature (social or non-social) and status (primary or secondary). I conclude that status is an important driver of apparent differences between social and non-social learning. When we account for differences in status, we see that social and non-social learning share common (dopamine-mediated) mechanisms.
Doubting the neurofeedback double-blind: do participants have residual awareness of experimental purposes in neurofeedback studies?
Neurofeedback provides a feedback display linked with ongoing brain activity and thus allows self-regulation of neural activity in specific brain regions associated with certain cognitive functions; it is considered a promising tool for clinical interventions. Recent reviews of neurofeedback have stressed the importance of applying a “double-blind” experimental design in which, critically, the patient is unaware of the neurofeedback treatment condition. An important question then becomes: is a double-blind even possible, or are subjects aware of the purposes of the neurofeedback experiment? This question is related to the issue of how we assess awareness, or the absence of awareness, of certain information in human subjects. Methods have been developed that employ neurofeedback implicitly, where the subject is claimed to have no awareness of experimental purposes when performing the neurofeedback. Implicit neurofeedback is intriguing and controversial because it runs counter to the first neurofeedback study, which showed a link between awareness of being in a certain brain state and control of the neurofeedback-derived brain activity. Claiming that humans are unaware of a specific type of mental content is a notoriously difficult endeavor. For instance, what were long held to be wholly unconscious phenomena, such as dreams or subliminal perception, have been overturned by more sensitive measures showing that degrees of awareness can be detected. In this talk, I will critically examine the claim that we can know for certain that a neurofeedback experiment was performed in an unconscious manner. I will present evidence that in certain neurofeedback experiments, such as manipulations of attention, participants display residual degrees of awareness of the experimental contingencies used to alter their cognition.
Decoding mental conflict between reward and curiosity in decision-making
Humans and animals are not always rational. They not only rationally exploit rewards but also explore an environment owing to their curiosity. However, the mechanism of such curiosity-driven irrational behavior is largely unknown. Here, we developed a decision-making model for a two-choice task based on the free energy principle, which is a theory integrating recognition and action selection. The model describes irrational behaviors depending on the curiosity level. We also proposed a machine learning method to decode temporal curiosity from behavioral data. By applying it to rat behavioral data, we found that the rat had negative curiosity, reflecting conservative selection sticking to more certain options and that the level of curiosity was upregulated by the expected future information obtained from an uncertain environment. Our decoding approach can be a fundamental tool for identifying the neural basis for reward–curiosity conflicts. Furthermore, it could be effective in diagnosing mental disorders.
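The reward-curiosity trade-off described above can be caricatured as a choice rule in which each option's value is its estimated reward plus a curiosity-weighted information term. This softmax form and its parameters are illustrative assumptions for intuition only; the authors' model is derived from the free energy principle rather than this shortcut:

```python
import numpy as np

def choose_prob(reward_est, uncertainty, curiosity, beta=3.0):
    """P(choose each option) when value trades off against information.

    reward_est  : estimated reward of each option
    uncertainty : expected information gain from sampling each option
    curiosity   : weight on information; negative curiosity makes the
                  agent avoid uncertain options (conservative choice,
                  as reported for the rats in this work)
    beta        : inverse temperature of the softmax choice rule
    """
    v = np.asarray(reward_est) + curiosity * np.asarray(uncertainty)
    z = np.exp(beta * (v - v.max()))    # subtract max for numerical stability
    return z / z.sum()
```

Decoding then runs in the opposite direction: given observed choices, infer the time-varying `curiosity` that best explains them.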
Movement planning as a window into hierarchical motor control
The ability to organise one's body for action without having to think about it is taken for granted, whether it is handwriting, typing on a smartphone or computer keyboard, tying a shoelace or playing the piano. When compromised, e.g. in stroke, neurodegenerative and developmental disorders, the individuals’ study, work and day-to-day living are impacted with high societal costs. Until recently, indirect methods such as invasive recordings in animal models, computer simulations, and behavioural markers during sequence execution have been used to study covert motor sequence planning in humans. In this talk, I will demonstrate how multivariate pattern analyses of non-invasive neurophysiological recordings (MEG/EEG), fMRI, and muscular recordings, combined with a new behavioural paradigm, can help us investigate the structure and dynamics of motor sequence control before and after movement execution. Across paradigms, participants learned to retrieve and produce sequences of finger presses from long-term memory. Our findings suggest that sequence planning involves parallel pre-ordering of serial elements of the upcoming sequence, rather than a preparation of a serial trajectory of activation states. Additionally, we observed that the human neocortex automatically reorganizes the order and timing of well-trained movement sequences retrieved from memory into lower and higher-level representations on a trial-by-trial basis. This echoes behavioural transfer across task contexts and flexibility in the final hundreds of milliseconds before movement execution. These findings strongly support a hierarchical and dynamic model of skilled sequence control across the peri-movement phase, which may have implications for clinical interventions.
A recurrent network model of planning explains hippocampal replay and human behavior
When interacting with complex environments, humans can rapidly adapt their behavior to changes in task or context. To facilitate this adaptation, we often spend substantial periods of time contemplating possible futures before acting. For such planning to be rational, the benefits of planning to future behavior must at least compensate for the time spent thinking. Here we capture these features of human behavior by developing a neural network model where not only actions, but also planning, are controlled by prefrontal cortex. This model consists of a meta-reinforcement learning agent augmented with the ability to plan by sampling imagined action sequences drawn from its own policy, which we refer to as 'rollouts'. Our results demonstrate that this agent learns to plan when planning is beneficial, explaining the empirical variability in human thinking times. Additionally, the patterns of policy rollouts employed by the artificial agent closely resemble patterns of rodent hippocampal replays recently recorded in a spatial navigation task, in terms of both their spatial statistics and their relationship to subsequent behavior. Our work provides a new theory of how the brain could implement planning through prefrontal-hippocampal interactions, where hippocampal replays are triggered by - and in turn adaptively affect - prefrontal dynamics.
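A minimal sketch of the rollout mechanism (a toy, not the paper's meta-reinforcement-learning agent): the agent imagines action sequences sampled from its own policy (here simply uniform) in an internal world model, and commits to the first action of the best imagined trajectory. The 1-D track, horizon, and rollout count are arbitrary choices for illustration.

```python
import random

random.seed(1)
GOAL = 4

def model_step(s, a):
    # Internal world model: a 1-D track clamped to [0, GOAL].
    return max(0, min(GOAL, s + a))

def sample_rollout(s, horizon=8):
    """Imagine an action sequence drawn from the agent's (uniform) policy."""
    actions = []
    for t in range(horizon):
        a = random.choice([-1, +1])
        actions.append(a)
        s = model_step(s, a)
        if s == GOAL:
            return actions, horizon - t    # reaching the goal sooner is better
    return actions, 0                      # goal not reached in imagination

def plan(s, n_rollouts=50):
    # Sample rollouts and commit to the first action of the best one.
    best_actions, best_ret = None, -1
    for _ in range(n_rollouts):
        acts, ret = sample_rollout(s)
        if ret > best_ret:
            best_actions, best_ret = acts, ret
    return best_actions[0], best_ret

print(plan(0))
```

In a fuller treatment, rollouts would only be sampled when their expected benefit exceeds the time cost of thinking, which is the rationality condition the abstract describes.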
Internal representation of musical rhythm: transformation from sound to periodic beat
When listening to music, humans readily perceive and move along with a periodic beat. Critically, perception of a periodic beat is commonly elicited by rhythmic stimuli with physical features arranged in a way that is not strictly periodic. Hence, beat perception must capitalize on mechanisms that transform stimulus features into a temporally recurrent format with emphasized beat periodicity. Here, I will present a line of work that aims to clarify the nature and neural basis of this transformation. In these studies, electrophysiological activity was recorded as participants listened to rhythms known to induce perception of a consistent beat across healthy Western adults. The results show that the human brain selectively emphasizes beat representation when it is not acoustically prominent in the stimulus, and this transformation (i) can be captured non-invasively using surface EEG in adult participants, (ii) is already in place in 5- to 6-month-old infants, and (iii) cannot be fully explained by subcortical auditory nonlinearities. Moreover, as revealed by human intracerebral recordings, a prominent beat representation emerges already in the primary auditory cortex. Finally, electrophysiological recordings from the auditory cortex of a rhesus monkey show a significant enhancement of beat periodicities in this area, similar to humans. Taken together, these findings indicate an early, general auditory cortical stage of processing by which rhythmic inputs are rendered more temporally recurrent than they are in reality. Already present in non-human primates and human infants, this "periodized" default format could then be shaped by higher-level associative sensory-motor areas and guide movement in individuals with strongly coupled auditory and motor systems. Together, this highlights the multiplicity of neural processes supporting coordinated musical behaviors widely observed across human cultures.
The experiments herein include: a motor timing task comparing the effects of movement vs non-movement with and without feedback (Exp. 1A & 1B), a transcranial magnetic stimulation (TMS) study on the role of the supplementary motor area (SMA) in transforming temporal information (Exp. 2), and a perceptual timing task investigating the effect of noisy movement on time perception in both visual and auditory modalities (Exp. 3A & 3B). Together, the results of these studies support the Bayesian cue combination framework, in that: movement improves the precision of time perception not only in perceptual timing tasks but also in motor timing tasks (Exp. 1A & 1B), stimulating the SMA appears to disrupt the transformation of temporal information (Exp. 2), and when movement becomes unreliable or noisy there is no longer an improvement in the precision of time perception (Exp. 3A & 3B). Although there is support for the proposed framework, more studies (e.g., fMRI, TMS, EEG) need to be conducted to better understand where and how this may be instantiated in the brain; however, this work provides a starting point for better understanding the intrinsic connection between time and movement.
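The precision-weighting at the heart of the Bayesian cue combination framework can be written in a few lines. This is the standard two-cue fusion formula under Gaussian assumptions, with made-up numbers; it is not the fitted model from the experiments.

```python
def combine(mu_a, var_a, mu_m, var_m):
    """Precision-weighted fusion of an auditory estimate (mu_a, var_a)
    and a movement-based estimate (mu_m, var_m) of a duration."""
    w = (1 / var_a) / (1 / var_a + 1 / var_m)   # weight on the auditory cue
    mu = w * mu_a + (1 - w) * mu_m              # combined estimate
    var = 1 / (1 / var_a + 1 / var_m)           # combined variance never exceeds either cue's
    return mu, var

# A reliable movement cue sharpens the estimate...
print(combine(1.0, 0.04, 1.1, 0.04))   # variance drops to 0.02
# ...but a noisy movement cue barely helps, echoing Exp. 3A & 3B.
print(combine(1.0, 0.04, 1.1, 1.00))   # variance stays near 0.04
```

The key prediction illustrated here is the one the experiments test: movement improves timing precision only while the movement-derived cue itself remains reliable.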
Seeing slowly - how inner retinal photoreceptors support vision and circadian rhythms in mice and humans
Epigenomic (re)programming of the brain and behavior by ovarian hormones
Rhythmic changes in sex hormone levels across the ovarian cycle exert powerful effects on the brain and behavior, and confer female-specific risks for neuropsychiatric conditions. In this talk, Dr. Kundakovic will discuss the role of fluctuating ovarian hormones as a critical biological factor contributing to the increased depression and anxiety risk in women. Cycling ovarian hormones drive brain and behavioral plasticity in both humans and rodents, and the talk will focus on animal studies in Dr. Kundakovic’s lab that are revealing the molecular and receptor mechanisms that underlie this female-specific brain dynamic. She will highlight the lab’s discovery of sex hormone-driven epigenetic mechanisms, namely chromatin accessibility and 3D genome changes, that dynamically regulate neuronal gene expression and brain plasticity but may also prime the (epi)genome for psychopathology. She will then describe functional studies, including hormone replacement experiments and the overexpression of an estrous cycle stage-dependent transcription factor, which provide the causal link(s) between hormone-driven chromatin dynamics and sex-specific anxiety behavior. Dr. Kundakovic will also highlight an unconventional role that chromatin dynamics may have in regulating neuronal function across the ovarian cycle, including in sex hormone-driven X chromosome plasticity and hormonally-induced epigenetic priming. In summary, these studies provide a molecular framework to understand ovarian hormone-driven brain plasticity and increased female risk for anxiety and depression, opening new avenues for sex- and gender-informed treatments for brain disorders.
Face and voice perception as a tool for characterizing perceptual decisions and metacognitive abilities across the general population and psychosis spectrum
Humans constantly make perceptual decisions on human faces and voices. These regularly come with the challenge of receiving only uncertain sensory evidence, resulting from noisy input and noisy neural processes. Efficiently adapting one’s internal decision system including prior expectations and subsequent metacognitive assessments to these challenges is crucial in everyday life. However, the exact decision mechanisms and whether these represent modifiable states remain unknown in the general population and clinical patients with psychosis. Using data from a laboratory-based sample of healthy controls and patients with psychosis as well as a complementary, large online sample of healthy controls, I will demonstrate how a combination of perceptual face and voice recognition decision fidelity, metacognitive ratings, and Bayesian computational modelling may be used as indicators to differentiate between non-clinical and clinical states in the future.
The sense of agency as an explorative role in our perception and action
The sense of agency refers to the subjective feeling of controlling one's own behavior and, through it, external events. Why is this subjective feeling important for humans? Is it just a by-product of our actions? Previous studies have shown that the sense of agency can affect the intensity of sensory input because we predict the input from our motor intention. However, my research has found that the sense of agency plays more roles than prediction alone. It enhances perceptual processing of sensory input and potentially helps us harvest more information about the link between the external world and the self. Furthermore, our recent research found both indirect and direct evidence that the sense of agency is important for people's exploratory behaviors, which may be linked to proximal exploitation of one's control over the environment. In this talk, I will also introduce the paradigms we use to study the sense of agency as a result of perceptual processes, and our findings on individual differences in this sense and their implications.
Obesity and Brain – Bidirectional Influences
The regulation of body weight relies on homeostatic mechanisms that use a combination of internal signals and external cues to initiate and terminate food intake. Homeostasis depends on intricate communication between the body and the hypothalamus involving numerous neural and hormonal signals. However, there is growing evidence that higher-level cognitive function may also influence energy balance. For instance, research has shown that BMI is consistently linked to various brain, cognitive, and personality measures, implicating executive, reward, and attentional systems. Moreover, the rise in obesity rates over the past half-century is attributed to the affordability and widespread availability of highly processed foods, a phenomenon that contradicts the idea that food intake is solely regulated by homeostasis. I will suggest that prefrontal systems involved in value computation and motivation act to limit food overconsumption when food is scarce or expensive, but promote over-eating when food is abundant, an optimal strategy from an economic standpoint. I will review the genetic and neuroscience literature on the CNS control of body weight. I will present recent studies supporting a role of prefrontal systems in weight control. I will also present contradictory evidence showing that frontal executive and cognitive findings in obesity may be a consequence, not a cause, of increased hunger. Finally, I will review the effects of obesity on brain anatomy and function. Chronic adiposity leads to cerebrovascular dysfunction, cortical thinning, and cognitive impairment. As the most common preventable risk factor for dementia, obesity poses a significant threat to brain health. I will conclude by reviewing evidence for treatment of obesity in adults to prevent brain disease.
Relations and Predictions in Brains and Machines
Humans and animals learn and plan with flexibility and efficiency well beyond that of modern Machine Learning methods. This is hypothesized to owe in part to the ability of animals to build structured representations of their environments, and modulate these representations to rapidly adapt to new settings. In the first part of this talk, I will discuss theoretical work describing how learned representations in hippocampus enable rapid adaptation to new goals by learning predictive representations, while entorhinal cortex compresses these predictive representations with spectral methods that support smooth generalization among related states. I will also cover recent work extending this account, in which we show how the predictive model can be adapted to the probabilistic setting to describe a broader array of generalization results in humans and animals, and how entorhinal representations can be modulated to support sample generation optimized for different behavioral states. In the second part of the talk, I will overview some of the ways in which we have combined many of the same mathematical concepts with state-of-the-art deep learning methods to improve efficiency and performance in machine learning applications like physical simulation, relational reasoning, and design.
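One concrete instance of "predictive representations plus spectral compression" is the successor representation (SR): each state is represented by its expected discounted future state occupancies, so values for a new goal follow immediately from a matrix product, and the SR's low-frequency eigenvectors give the smooth basis functions attributed to entorhinal cortex. The ring-world, discount factor, and sizes below are illustrative choices, not the talk's models.

```python
import numpy as np

n, gamma = 8, 0.9

# Transition matrix for a random walk on a ring of n states.
T = np.zeros((n, n))
for s in range(n):
    T[s, (s - 1) % n] = T[s, (s + 1) % n] = 0.5

# Successor representation: M[s, s'] = expected discounted visits to s' from s.
M = np.linalg.inv(np.eye(n) - gamma * T)

# Rapid adaptation to a new goal: values are just M @ R, so moving the
# reward re-computes values without re-learning the environment.
for goal in (2, 5):
    R = np.zeros(n)
    R[goal] = 1.0
    V = M @ R
    print("goal", goal, "-> best state", np.argmax(V))

# Spectral compression: smooth eigenvectors of M serve as a low-dimensional,
# generalization-friendly basis over the state space.
evals, evecs = np.linalg.eigh((M + M.T) / 2)
```

The point of the example is the separation of roles the talk describes: the SR (hippocampus-like) caches predictions about where the agent will be, while the reward vector can change on the fly, and the eigendecomposition (entorhinal-like) compresses those predictions.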
Spatial matching tasks for insect minds: relational similarity in bumblebees
Understanding what makes humans unique is a fundamental research drive for comparative psychologists. Cognitive abilities such as theory of mind, cooperation or mental time travel have been considered uniquely human. Despite empirical evidence showing that animals other than humans are capable (to some extent) of these cognitive achievements, findings are still heavily contested. In this context, the ability to abstract relations of similarity has also been considered one of the hallmarks of human cognition. While previous research has shown that other animals (e.g., primates) can attend to relational similarity, less is known about what invertebrates can do. In this talk, I will present a series of spatial matching tasks that were previously used with children and great apes and that I adapted for use with wild-caught bumblebees. The findings from these studies suggest striking similarities between vertebrates and invertebrates in their abilities to attend to relational similarity.
Analogical Reasoning and Generalization for Interactive Task Learning in Physical Machines
Humans are natural teachers; learning through instruction is one of the most fundamental ways that we learn. Interactive Task Learning (ITL) is an emerging research agenda that studies the design of complex intelligent robots that can acquire new knowledge through natural human teacher-robot learner interactions. ITL methods are particularly useful for designing intelligent robots whose behavior can be adapted by humans collaborating with them. In this talk, I will summarize our recent findings on the structure that human instruction naturally has and motivate an intelligent system design that can exploit this structure. The system – AILEEN – is being developed using the Common Model of Cognition. Architectures that implement the Common Model of Cognition - Soar, ACT-R, and Sigma - have a prominent place in research on cognitive modeling as well as on designing complex intelligent agents. However, they miss a critical piece of intelligent behavior – analogical reasoning and generalization. I will introduce a new memory – concept memory – that integrates with a Common Model of Cognition architecture and supports ITL.
Effect of Different Influences on Temporal Error Monitoring
Metacognition has long been defined as “cognition about cognition”. One of its aspects is error monitoring, the ability to be aware of one’s own errors without external feedback. This ability is mostly investigated in two-alternative forced choice tasks, where performance has an all-or-none nature in terms of accuracy. The previous literature documents the effects of different influences on error monitoring, such as working memory, feedback and sensorimotor involvement. However, these demonstrations fall short of generalizing to real-life scenarios, where errors often have a magnitude and a direction. To bridge this gap, recent studies showed that humans can keep track of the magnitude and direction of their errors in temporal, spatial and numerical domains using two metrics: confidence and short-long/few-more judgements. This talk will cover how the effects documented in two-alternative forced choice tasks apply to temporal error monitoring. Finally, I will discuss how magnitude and direction monitoring (i.e., confidence and short-long judgements) can be differentiated as two indices of the temporal error monitoring ability.
Hallucinating mice, dopamine and immunity; towards mechanistic treatment targets for psychosis
Hallucinations are a core symptom of psychotic disorders and have traditionally been difficult to study biologically. We developed a new behavioral computational approach to measure hallucination-like perception in humans and mice alike. Using targeted neural circuit manipulations, we identified a causal role for striatal dopamine in mediating hallucination-like perception. Building on this, we are currently investigating the neural and immunological upstream regulators of these dopaminergic circuits, with the goal of identifying new biological treatment targets for psychosis.
Learning to see stuff
Humans are very good at visually recognizing materials and inferring their properties. Without touching surfaces, we can usually tell what they would feel like, and we enjoy vivid visual intuitions about how they typically behave. This is impressive because the retinal image that the visual system receives as input is the result of complex interactions between many physical processes. Somehow the brain has to disentangle these different factors. I will present some recent work in which we show that an unsupervised neural network trained on images of surfaces spontaneously learns to disentangle reflectance, lighting and shape. However, the disentanglement is not perfect, and we find that as a result the network not only predicts the broad successes of human gloss perception, but also the specific pattern of errors that humans exhibit on an image-by-image basis. I will argue this has important implications for thinking about appearance and vision more broadly.
Toward the neural basis of joint attention: studies in humans and monkeys
PIEZO2 in somatosensory neurons coordinates gastrointestinal transit
The transit of food through the gastrointestinal tract is critical for nutrient absorption and survival, and the gastrointestinal tract has the ability to initiate motility reflexes triggered by luminal distention. This complex function depends on the crosstalk between extrinsic and intrinsic neuronal innervation within the intestine, as well as local specialized enteroendocrine cells. However, the molecular mechanisms and the subset of sensory neurons underlying the initiation and regulation of intestinal motility remain largely unknown. Here, we show that humans lacking PIEZO2 exhibit impaired bowel sensation and motility. Piezo2 in mouse dorsal root but not nodose ganglia is required to sense gut content, and this activity slows down food transit rates in the stomach, small intestine, and colon. Indeed, Piezo2 is directly required to detect colon distension in vivo. Our study unveils the mechanosensory mechanisms that regulate the transit of luminal contents throughout the gut, which is a critical process to ensure proper digestion, nutrient absorption, and waste removal. These findings set the foundation for future work to identify the highly regulated interactions between sensory neurons, enteric neurons and non-neuronal cells that control gastrointestinal motility.
Sampling the environment with body-brain rhythms
Since Darwin, comparative research has shown that most animals share basic timing capacities, such as the ability to process temporal regularities and produce rhythmic behaviors. What seems to be more exclusive, however, are the capacities to generate temporal predictions and to display anticipatory behavior at salient time points. These abilities are associated with subcortical structures like the basal ganglia (BG) and cerebellum (CE), which are more developed in humans than in nonhuman animals. In the first research line, we investigated the basic capacities to extract temporal regularities from the acoustic environment and produce temporal predictions. We did so by adopting a comparative and translational approach, making use of a unique EEG dataset including 2 macaque monkeys, 20 healthy young participants, 11 healthy old participants and 22 stroke patients, 11 with focal lesions in the BG and 11 in the CE. In the second research line, we holistically explore the functional relevance of body-brain physiological interactions in human behavior. A series of planned studies investigates the functional mechanisms by which body signals (e.g., respiratory and cardiac rhythms) interact with and modulate neurocognitive functions, from rest and sleep states to action and perception. This project supports the effort towards individual profiling: are individuals’ timing capacities (e.g., rhythm perception and production) and general behavior (e.g., individual walking and speaking rates) influenced or shaped by body-brain interactions?
Life Perceives
Life Perceives is a symposium bringing together scientists and artists for an open exploration of how “perception” can be understood as a phenomenon that does not only belong to humans, or even the so-called “higher organisms”, but exists across the entire spectrum of life in a myriad of forms. The symposium invites leading practitioners from the arts and sciences to present unique insights through short talks, open discussions, and artistic interventions that bring us slightly closer to the life worlds of plants and fungi, microbial communities and immune systems, cuttlefish and crows. What do we mean when we talk about perception in other species? Do other organisms have an experience of the world? Or does our human-centred perspective make understanding other forms of life on their own terms an impossible dream? Whatever your answers to these questions may be, we hope to unsettle them, and leave you more curious than when you arrived.
Roots of Analogy
Can nonhuman animals perceive the relation-between-relations? This intriguing question has been studied over the last 40 years; nonetheless, the extent to which nonhuman species can do so remains controversial. Here, I review empirical evidence suggesting that pigeons, parrots, crows, and baboons join humans in reliably acquiring and transferring relational matching-to-sample (RMTS). Many theorists consider that RMTS captures the essence of analogy, because basic to analogy is appreciating the ‘relation between relations.’ Factors affecting RMTS performance include: prior training experience, the entropy of the sample stimulus, and whether the items that serve as sample stimuli can also serve as choice stimuli.
Analyzing artificial neural networks to understand the brain
In the first part of this talk I will present work showing that recurrent neural networks can replicate broad behavioral patterns associated with dynamic visual object recognition in humans. An analysis of these networks shows that different types of recurrence use different strategies to solve the object recognition problem. The similarities between artificial neural networks and the brain present another opportunity, beyond using them just as models of biological processing. In the second part of this talk, I will discuss—and solicit feedback on—a proposed research plan for testing a wide range of analysis tools frequently applied to neural data on artificial neural networks. I will present the motivation for this approach as well as the form the results could take and how this would benefit neuroscience.
Motor contribution to auditory temporal predictions
Temporal predictions are fundamental instruments for facilitating sensory selection, allowing humans to exploit regularities in the world. Recent evidence indicates that the motor system instantiates predictive timing mechanisms, helping to synchronize temporal fluctuations of attention with the timing of events in a task-relevant stream, thus facilitating sensory selection. Accordingly, in the auditory domain auditory-motor interactions are observed during perception of speech and music, two temporally structured sensory streams. I will present a behavioral and neurophysiological account for this theory and will detail the parameters governing the emergence of this auditory-motor coupling, through a set of behavioral and magnetoencephalography (MEG) experiments.
Prefrontal top-down projections control context-dependent strategy selection
The rules governing behavior often vary with behavioral contexts. As a result, an action rewarded in one context may be discouraged in another. Animals and humans are capable of switching between behavioral strategies under different contexts and acting adaptively according to the variable rules, a flexibility that is thought to be mediated by the prefrontal cortex (PFC). However, how the PFC orchestrates the context-dependent switch of strategies remains unclear. Here we show that pathway-specific projection neurons in the medial PFC (mPFC) differentially contribute to context-instructed strategy selection. In mice trained in a decision-making task in which a previously established rule and a newly learned rule are associated with distinct contexts, the activity of mPFC neurons projecting to the dorsomedial striatum (mPFC-DMS) encodes the contexts and further represents decision strategies conforming to the old and new rules. Moreover, mPFC-DMS neuron activity is required for the context-instructed strategy selection. In contrast, the activity of mPFC neurons projecting to the ventral midline thalamus (mPFC-VMT) does not discriminate between the contexts, and represents the old rule even if mice have adopted the new one. Furthermore, these neurons act to prevent the strategy switch under the new rule. Our results suggest that mPFC-DMS neurons promote flexible strategy selection guided by contexts, whereas mPFC-VMT neurons favor fixed strategy selection by preserving old rules.
The Effects of Negative Emotions on Mental Representation of Faces
Face detection is an initial step in many social interactions, involving a comparison between a visual input and a mental representation of faces built from previous experience. Whilst emotional state has been found to affect the way humans attend to faces, little research has explored the effects of emotions on the mental representation of faces. Here, we examined how state anxiety and state depression modulate geometric properties of the mental representations underlying face detection, and compared the emotional expressions of these representations. To this end, we used an adaptation of the reverse correlation technique inspired by Gosselin and Schyns’ (2003) ‘Superstitious Approach’ to reconstruct observers’ mental representations of faces and to relate these to their mental states. In two sessions, on separate days, participants were presented with ‘colourful’ noise stimuli and asked to detect faces, which they were told were present. Based on the noise fragments that were identified as faces, we reconstructed the pictorial mental representation utilised by each participant in each session. We found a significant correlation between the size of the mental representation of faces and participants’ level of depression. Our findings provide preliminary insight into the way emotions affect appearance expectations of faces. To further understand whether the facial expressions of participants’ mental representations reflect their emotional states, we are conducting a validation study in which a group of naïve observers is asked to classify the reconstructed face images by emotion. Thus, we assess whether the faces communicate participants’ emotional states to others.
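The logic of the reverse-correlation reconstruction can be simulated in a few lines: a synthetic observer with a hidden face "template" reports a face whenever random noise happens to correlate with it, and averaging the accepted noise images recovers the template. Everything here (template layout, criterion, trial count) is invented for the demonstration and is unrelated to the study's actual stimuli.

```python
import numpy as np

rng = np.random.default_rng(0)

size = 16
template = np.zeros((size, size))           # hidden internal face template
template[4:6, 4:6] = template[4:6, 10:12] = 1.0   # "eyes"
template[10:12, 6:10] = 1.0                       # "mouth"

def observer_says_face(noise, criterion=0.1):
    # Report "face" when the noise correlates with the internal template.
    return (noise * template).sum() / template.sum() > criterion

yes_trials = []
for _ in range(20000):
    noise = rng.standard_normal((size, size))
    if observer_says_face(noise):
        yes_trials.append(noise)

# Classification image: the average of all noise images labeled "face".
ci = np.mean(yes_trials, axis=0)
print(ci[template > 0].mean(), ci[template == 0].mean())
```

Template pixels come out reliably brighter than background pixels in the classification image, which is the principle that lets the study read out each participant's mental representation from noise alone.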
On the link between conscious function and general intelligence in humans and machines
In popular media, there is often a connection drawn between the advent of awareness in artificial agents and those same agents simultaneously achieving human or superhuman level intelligence. In this talk, I will examine the validity and potential application of this seemingly intuitive link between consciousness and intelligence. I will do so by examining the cognitive abilities associated with three contemporary theories of conscious function: Global Workspace Theory (GWT), Information Generation Theory (IGT), and Attention Schema Theory (AST), and demonstrating that all three theories specifically relate conscious function to some aspect of domain-general intelligence in humans. With this insight, we will turn to the field of Artificial Intelligence (AI) and find that, while still far from demonstrating general intelligence, many state-of-the-art deep learning methods have begun to incorporate key aspects of each of the three functional theories. Given this apparent trend, I will use the motivating example of mental time travel in humans to propose ways in which insights from each of the three theories may be combined into a unified model. I believe that doing so can enable the development of artificial agents which are not only more generally intelligent but are also consistent with multiple current theories of conscious function.
It’s All About Motion: Functional organization of the multisensory motion system at 7T
The human middle temporal complex (hMT+) has crucial biological relevance for the processing and detection of the direction and speed of motion in visual stimuli. In both humans and monkeys, it has been extensively investigated in terms of its retinotopic properties and selectivity for the direction of moving stimuli; however, only in recent years has there been increasing interest in how neurons in MT encode the speed of motion. In this talk, I will explore the proposed mechanism of speed encoding, questioning whether hMT+ neuronal populations encode stimulus speed directly, or whether they separate motion into its spatial and temporal components. I will characterize how neuronal populations in hMT+ encode the speed of moving visual stimuli using electrocorticography (ECoG) and 7T fMRI. I will illustrate that the neuronal populations measured in hMT+ are not directly tuned to stimulus speed, but instead encode speed through separate and independent spatial and temporal frequency tuning. Finally, I will suggest that this mechanism may play a role in evaluating multisensory responses to visual, tactile and auditory stimuli in hMT+.
Exploring emotion in the expression of ape gesture
Language appears to be the most complex system of animal communication described to date. However, its precursors were present in the communication of our evolutionary ancestors and are likely shared by our modern ape cousins. All great apes, including humans, employ a rich repertoire of vocalizations, facial expressions, and gestures. Great ape gestural repertoires are particularly elaborate, with ape species employing over 80 different gesture types intentionally: that is, towards a recipient and with a specific goal in mind. Intentional usage allows us to ask not only what information is encoded in ape gestures, but what apes mean when they use them. I will discuss recent research on ape gesture, how we approach the question of decoding meaning, and how, with new methods, we are starting to integrate long-overlooked aspects of ape gesture, such as group and individual variation, and expression and emotion, into our study of these signals.
The multimodal number sense: spanning space, time, sensory modality, and action
Humans and other animals can rapidly estimate the number of items in a scene, flashes or tones in a sequence, and motor actions. Adaptation techniques provide clear evidence in humans for the existence of specialized numerosity mechanisms that make up the number sense. This sense of number is truly general, encoding the numerosity of both spatial arrays and sequential sets, in vision and audition, and interacting strongly with action. The adaptation (cross-sensory and cross-format) acts on sensory mechanisms rather than decisional processes, pointing to a truly general sense.
From Machine Learning to Autonomous Intelligence
How could machines learn as efficiently as humans and animals? How could machines learn to reason and plan? How could machines learn representations of percepts and action plans at multiple levels of abstraction, enabling them to reason, predict, and plan at multiple time horizons? I will propose a possible path towards autonomous intelligent agents, based on a new modular cognitive architecture and a somewhat new self-supervised training paradigm. The centerpiece of the proposed architecture is a configurable predictive world model that allows the agent to plan. Behavior and learning are driven by a set of differentiable intrinsic cost functions. The world model uses a new type of energy-based model architecture called H-JEPA (Hierarchical Joint Embedding Predictive Architecture). H-JEPA learns hierarchical abstract representations of the world that are simultaneously maximally informative and maximally predictable.
Learning Relational Rules from Rewards
Humans perceive the world in terms of objects and relations between them. In fact, for any given pair of objects, there is a myriad of relations that apply to them. How does the cognitive system learn which relations are useful to characterize the task at hand? And how can it use these representations to build a relational policy to interact effectively with the environment? In this paper we propose that this problem can be understood through the lens of a sub-field of symbolic machine learning called relational reinforcement learning (RRL). To demonstrate the potential of our approach, we build a simple model of relational policy learning based on a function approximator developed in RRL. We trained and tested our model in three Atari games that required considering an increasing number of potential relations: Breakout, Pong and Demon Attack. In each game, our model was able to select adequate relational representations and build a relational policy incrementally. We discuss the relationship between our model and models of relational and analogical reasoning, as well as its limitations and future directions for research.
Social Curiosity
In this lecture, I would like to share with the broad audience the empirical results gathered and the theoretical advancements made in the framework of the Lendület project entitled ‘The cognitive basis of human sociality’. The main objective of this project was to understand the mechanisms that enable the unique sociality of humans, from the angle of cognitive science. In my talk, I will focus on recent empirical evidence in the study of three fundamental social cognitive functions (social categorization, theory of mind and social learning; mainly through the empirical lens of developmental psychology) in order to outline a theory that emphasizes the need to consider their interconnectedness. The proposal is that the ability to represent the social world along categories and the capacity to read others’ minds are used in an integrated way to efficiently assess the epistemic states of fellow humans by creating a shared representational space. The emergence of this shared representational space is both the result of and a prerequisite to efficient learning about the physical and social environment.
Learning predictive maps in the brain for spatial navigation
The predictive map hypothesis provides a promising framework to model representations in the hippocampal formation. I will introduce a tractable implementation of a predictive map called the successor representation (SR), before presenting data showing that rats and humans display SR-like navigational choices on a novel open-field maze. Next, I will show how such a predictive map could be implemented using spatial representations found in the hippocampal formation, before finally presenting how such learning might be well approximated by phenomena that exist in the spatial memory system - namely spike-timing dependent plasticity and theta phase precession.
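The successor representation mentioned above has a simple formal core: for a fixed policy with state-transition matrix T and discount gamma, the SR matrix M(s, s') gives the expected discounted future occupancy of s' starting from s, with closed form M = (I - gamma T)^(-1), and it can also be learned online with a TD(0)-style update. The following is a minimal illustrative sketch of both routes (not the speaker's code; the toy ring environment and parameter values are assumptions for demonstration):

```python
import numpy as np

def sr_closed_form(T, gamma):
    """Closed-form successor representation: M = (I - gamma*T)^(-1)."""
    n = T.shape[0]
    return np.linalg.inv(np.eye(n) - gamma * T)

def sr_td_learning(transitions, n_states, gamma, alpha=0.3, n_epochs=1000):
    """Learn the SR from sampled (s, s_next) pairs with a TD(0)-style update:
    M[s] <- M[s] + alpha * (one_hot(s) + gamma * M[s_next] - M[s])."""
    M = np.eye(n_states)
    I = np.eye(n_states)
    for _ in range(n_epochs):
        for s, s_next in transitions:
            M[s] += alpha * (I[s] + gamma * M[s_next] - M[s])
    return M

# Toy deterministic 3-state ring: 0 -> 1 -> 2 -> 0
T = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])
M_exact = sr_closed_form(T, gamma=0.9)
M_learned = sr_td_learning([(0, 1), (1, 2), (2, 0)], 3, gamma=0.9)
```

Because T is row-stochastic, each row of M sums to 1/(1 - gamma); the TD estimate converges to the closed form given enough sweeps.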
From Machine Learning to Autonomous Intelligence
How could machines learn as efficiently as humans and animals? How could machines learn to reason and plan? How could machines learn representations of percepts and action plans at multiple levels of abstraction, enabling them to reason, predict, and plan at multiple time horizons? I will propose a possible path towards autonomous intelligent agents, based on a new modular cognitive architecture and a somewhat new self-supervised training paradigm. The centerpiece of the proposed architecture is a configurable predictive world model that allows the agent to plan. Behavior and learning are driven by a set of differentiable intrinsic cost functions. The world model uses a new type of energy-based model architecture called H-JEPA (Hierarchical Joint Embedding Predictive Architecture). H-JEPA learns hierarchical abstract representations of the world that are simultaneously maximally informative and maximally predictable. The corresponding working paper is available here: https://openreview.net/forum?id=BZ5a1r-kVsf
Development and evolution of neuronal connectivity
In most animal species, including humans, commissural axons connect neurons on the left and right side of the nervous system. In humans, abnormal axon midline crossing during development causes a whole range of neurological disorders, including congenital mirror movements, horizontal gaze palsy, scoliosis and binocular vision deficits. The mechanisms that guide axons across the CNS midline were thought to be evolutionarily conserved, but our recent results suggest that they differ across vertebrates. I will discuss how visual projection laterality evolved across vertebrates. In most vertebrates, camera-style eyes contain retinal ganglion cell (RGC) neurons projecting to visual centers on both sides of the brain. However, in fish, RGCs are thought to innervate only the contralateral side. Using 3D imaging and tissue clearing, we found that bilateral visual projections exist in non-teleost fishes. We also found that the developmental program specifying visual system laterality differs between fishes and mammals. We are currently using various strategies to discover genes controlling the development of visual projections. I will also present ongoing work using 3D imaging techniques to study the development of the visual system in human embryos.
Nonlinear neural network dynamics accounts for human confidence in a sequence of perceptual decisions
Electrophysiological recordings during perceptual decision tasks in monkeys suggest that the degree of confidence in a decision is based on a simple neural signal produced by the neural decision process. Attractor neural networks provide an appropriate biophysical modeling framework and account for the experimental results very well. However, it remains unclear whether attractor neural networks can account for confidence reports in humans. We present the results from an experiment in which participants are asked to perform an orientation discrimination task, followed by a confidence judgment. Here we show that an attractor neural network model quantitatively reproduces, for each participant, the relations between accuracy, response times and confidence. We show that the attractor neural network also accounts for confidence-specific sequential effects observed in the experiment (participants are faster on trials following high-confidence trials), as well as non-confidence-specific sequential effects. Remarkably, this is obtained as an inevitable outcome of the network dynamics, without any feedback specific to the previous decision (that would result in, e.g., a change in the model parameters before the onset of the next trial). Our results thus suggest that a metacognitive process such as confidence in one’s decision is linked to the intrinsically nonlinear dynamics of the decision-making neural network.
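The attractor dynamics invoked here can be illustrated with a reduced two-population rate model: each population receives stimulus input, excites itself and inhibits its competitor, so the network falls into a winner-take-all attractor; the choice is the winning population and a simple confidence proxy is the activity difference at decision time. This is only a minimal sketch under assumed parameters, not the speakers' actual model:

```python
import numpy as np

def simulate_decision(coherence, w_self=2.2, w_inh=1.8, noise=0.02,
                      dt=1e-3, t_max=2.0, threshold=15.0, seed=0):
    """Two-population winner-take-all rate model (illustrative parameters).

    Returns (choice, reaction_time, confidence_proxy), where the confidence
    proxy is the firing-rate difference when the decision bound is reached.
    """
    rng = np.random.default_rng(seed)
    # Sigmoidal rate nonlinearity (assumed form, max rate ~50 Hz)
    f = lambda x: 50.0 / (1.0 + np.exp(-0.25 * (x - 20.0)))
    r1 = r2 = 5.0        # initial firing rates (Hz)
    tau = 0.06           # population time constant (s)
    for step in range(int(t_max / dt)):
        I1 = 10.0 * (1 + coherence)   # stimulus favors population 1
        I2 = 10.0 * (1 - coherence)
        dr1 = (-r1 + f(w_self * r1 - w_inh * r2 + I1)) / tau
        dr2 = (-r2 + f(w_self * r2 - w_inh * r1 + I2)) / tau
        r1 += dt * dr1 + noise * rng.standard_normal()  # noise scale arbitrary
        r2 += dt * dr2 + noise * rng.standard_normal()
        if abs(r1 - r2) > threshold:   # decision bound on rate difference
            return (1 if r1 > r2 else 2), step * dt, abs(r1 - r2)
    return 0, t_max, abs(r1 - r2)      # no decision within t_max

choice, rt, confidence = simulate_decision(coherence=0.5)
```

Because the end state of one trial biases the start of the next in such networks, sequential effects on speed and confidence can emerge without any explicit trial-to-trial feedback, which is the point the abstract emphasizes.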
Brain-muscle signaling coordinates exercise adaptations in Drosophila
Chronic exercise is a powerful intervention that lowers the incidence of most age-related diseases while promoting healthy metabolism in humans. However, illness, injury or age prevent many humans from consistently exercising. Thus, identification of molecular targets that can mimic the benefits of exercise would be a valuable tool to improve health outcomes of humans with neurodegenerative or mitochondrial diseases, or those with enforced sedentary lifestyles. Using a novel exercise platform for Drosophila, we have identified octopaminergic neurons as a key subset of neurons that are critical for the exercise response, and shown that periodic daily stimulation of these neurons can induce a systemic exercise response in sedentary flies. Octopamine is released into circulation where it signals through various octopamine receptors in target tissues and induces gene expression changes similar to exercise. In particular, we have identified several key molecules that respond to octopamine in skeletal muscle, including the mTOR modulator Sestrin, the PGC-1α homolog Spargel, and the FNDC5/Irisin homolog Iditarod. We are currently testing these molecules as potential therapies for multiple diseases that reduce mobility, including the PolyQ disease SCA2 and the mitochondrial disease Barth syndrome.
Flexible multitask computation in recurrent networks utilizes shared dynamical motifs
Flexible computation is a hallmark of intelligent behavior. Yet, little is known about how neural networks contextually reconfigure for different computations. Humans are able to perform a new task without extensive training, presumably through the composition of elementary processes that were previously learned. Cognitive scientists have long hypothesized the possibility of a compositional neural code, where complex neural computations are made up of constituent components; however, the neural substrate underlying this structure remains elusive in biological and artificial neural networks. Here we identified an algorithmic neural substrate for compositional computation through the study of multitasking artificial recurrent neural networks. Dynamical systems analyses of networks revealed learned computational strategies that mirrored the modular subtask structure of the task-set used for training. Dynamical motifs such as attractors, decision boundaries and rotations were reused across different task computations. For example, tasks that required memory of a continuous circular variable repurposed the same ring attractor. We show that dynamical motifs are implemented by clusters of units and are reused across different contexts, allowing for flexibility and generalization of previously learned computation. Lesioning these clusters resulted in modular effects on network performance: a lesion that destroyed one dynamical motif only minimally perturbed the structure of other dynamical motifs. Finally, modular dynamical motifs could be reconfigured for fast transfer learning. After slow initial learning of dynamical motifs, a subsequent faster stage of learning reconfigured motifs to perform novel tasks. This work contributes to a more fundamental understanding of compositional computation underlying flexible general intelligence in neural systems. 
We present a conceptual framework that establishes dynamical motifs as a fundamental unit of computation, intermediate between the neuron and the network. As more whole brain imaging studies record neural activity from multiple specialized systems simultaneously, the framework of dynamical motifs will guide questions about specialization and generalization across brain regions.
Epigenome regulation in neocortex expansion and generation of neuronal subtypes
Evolutionarily, the expansion of the human neocortex accounts for many of the unique cognitive abilities of humans. This expansion appears to reflect the increased proliferative potential of basal progenitors (BPs) in mammalian evolution. Furthermore, cortical progenitors generate both glutamatergic excitatory neurons (ENs) and GABAergic inhibitory interneurons (INs) in the human cortex, whereas they produce exclusively ENs in rodents. The increased proliferative capacity and neuronal subtype generation of cortical progenitors in mammalian evolution may have evolved through epigenetic alterations. However, whether or how the epigenome in cortical progenitors differs between humans and other species is unknown. Here, we report that histone H3 acetylation is a key epigenetic regulator of BP amplification, neuronal subtype generation and cortical expansion. Through epigenetic profiling of sorted BPs, we show that H3K9 acetylation is low in murine BPs and high in human BPs. Elevated H3K9ac preferentially increases BP proliferation, increasing the size and folding of the normally smooth mouse neocortex. Furthermore, we found that elevated H3 acetylation activates expression of IN genes in the developing mouse cortex and promotes proliferation of IN progenitor-like cells in the cortex of Pax6 mutant mouse models. Mechanistically, H3K9ac drives BP amplification and proliferation of these IN progenitor-like cells by increasing expression of the evolutionarily regulated gene TRNP1. Our findings demonstrate a previously unknown mechanism that controls neocortex expansion and generation of neuronal subtypes. Keywords: cortical development, neurogenesis, basal progenitors, cortical size, gyrification, excitatory neuron, inhibitory interneuron, epigenetic profiling, epigenetic regulation, H3 acetylation, H3K9ac, TRNP1, PAX6
Behavioural probing of learned statistical structure in humans
COSYNE 2022
Identifying the control strategies of monkeys and humans in a virtual balancing task
COSYNE 2022
Insight moments in neural networks and humans
COSYNE 2022
Integrating information and reward into subjective value: humans, monkeys, and the lateral habenula
COSYNE 2022
Near-optimal time investments under uncertainty in humans, rats, and mice
COSYNE 2022
The timescale and magnitude of 1/f aperiodic activity decrease with cortical depth in humans, macaques, and mice
COSYNE 2022
Alignment of ANN Language Models with Humans After a Developmentally Realistic Amount of Training
COSYNE 2023
Biased AI systems produce biased humans
COSYNE 2023
Intracranial electrophysiological evidence for a novel neuro-computational mechanism of cognitive flexibility in humans
COSYNE 2023
Spatial-frequency channels for object recognition by neural networks are twice as wide as those of humans
COSYNE 2023
Violations of transitivity disrupt relational inference in humans and reinforcement learning models
COSYNE 2023
Expectation management in humans and LLMs
COSYNE 2025
Humans can use positive and negative spectrotemporal correlations to detect rising and falling pitch
COSYNE 2025
Humans forage for reward in classic reinforcement learning tasks
COSYNE 2025
Persistent decision-making in mice, monkeys, and humans
COSYNE 2025
Adaptation of rats and humans to a volatile hidden Markov model for reward collection
FENS Forum 2024
Airway manipulations of respiratory-modulated brain oscillations in humans
FENS Forum 2024
Astroglial connexins differentially drive chronic seizures in mice and humans
FENS Forum 2024
Behavioral and electrophysiological characteristics of real-world head movement during gaze shift in humans
FENS Forum 2024
Cognitive flexibility and anterior cingulate gray matter volumes correlate with serum levels of brevican in healthy humans
FENS Forum 2024
α5-containing nicotinic acetylcholine receptors are important modulators of aggressive and dominant-like behaviors in rodents and humans
FENS Forum 2024
Coupling between global grey matter and fourth ventricle fMRI signals links with brain clearance in humans
FENS Forum 2024
Distinct claustrum-cortex connections are involved in cognitive control performance and habitual sleep in humans
FENS Forum 2024
The effect of stimulus modality and stimulus complexity on associative equivalence learning in healthy humans
FENS Forum 2024
The influence of the time of day on the coupling between global grey matter BOLD and CSF flow signal in healthy humans: Preliminary results
FENS Forum 2024
Lymphatic vessels accompanying the internal carotid and vertebral arteries in humans
FENS Forum 2024
The neural bases of how dogs and humans navigate their social environment
FENS Forum 2024
Neurochemical mechanisms underlying serotonergic modulation of neuroplasticity in humans
FENS Forum 2024
Neurochemistry and functional connectivity of the nucleus incertus–ventral hippocampal pathway: Possible involvement in anxiety control in rats and humans
FENS Forum 2024
Neuroplasticity and early cochlear implant use in adult-deafened humans and rats
FENS Forum 2024
Observation of social and non-social interactions in dogs and humans: Results from fMRI and eyetracking
FENS Forum 2024
Predicting memory performances in humans using cortically distributed sEEG signals
FENS Forum 2024
Self-produced bodily actions and rhythms modulate fear in humans
FENS Forum 2024
Static magnetic fields to treat refractory epilepsy in humans
FENS Forum 2024
Tactile perception and memory in rats and humans: A general framework across tasks and species
FENS Forum 2024