Computational Mechanisms
Computational Mechanisms of Predictive Processing in Brains and Machines
Predictive processing offers a unifying view of neural computation, proposing that brains continuously anticipate sensory input and update internal models based on prediction errors. In this talk, I will present converging evidence for the computational mechanisms underlying this framework across human neuroscience and deep neural networks. I will begin with recent work showing that large-scale distributed prediction-error encoding in the human brain directly predicts how sensory representations reorganize through predictive learning. I will then turn to PredNet, a popular predictive-coding-inspired deep network that has been widely used to model biological vision. Using dynamic stimuli generated with our Spatiotemporal Style Transfer algorithm, we demonstrate that PredNet relies primarily on low-level spatiotemporal structure and remains insensitive to high-level content, revealing limits in its capacity to generalize. Finally, I will discuss new recurrent vision models that integrate top-down feedback connections with intrinsic neural variability, uncovering a dual mechanism for robust sensory coding in which neural variability decorrelates unit responses while top-down feedback stabilizes network dynamics. Together, these results outline how prediction-error signaling and top-down feedback pathways shape adaptive sensory processing in biological and artificial systems.
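As a minimal illustration of the prediction-error loop at the heart of this framework, the sketch below implements a toy linear predictive-coding model (not PredNet itself; all sizes and learning rates are illustrative assumptions). An internal state `r` generates a prediction of the sensory input `x`, and the prediction error drives both inference (updating `r`) and learning (updating the generative weights `W`):

```python
import numpy as np

# Toy predictive-coding loop: the same error signal updates the internal
# state (fast inference) and the generative model (slow learning).
rng = np.random.default_rng(0)
dim_input, dim_state = 8, 3
W = rng.normal(scale=0.1, size=(dim_input, dim_state))  # generative weights
x = rng.normal(size=dim_input)                          # fixed sensory input

r = np.zeros(dim_state)
lr_state, lr_weights = 0.1, 0.01
for _ in range(500):
    error = x - W @ r                      # prediction error
    r += lr_state * (W.T @ error)          # inference: adjust state to reduce error
    W += lr_weights * np.outer(error, r)   # learning: improve the generative model

final_error = np.linalg.norm(x - W @ r)
print(final_error)  # smaller than the initial error norm ||x||
```

Both update rules are gradient steps on the same squared prediction-error objective, which is why a single error signal can support perception and learning at once in this scheme.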
Prosocial Learning and Motivation across the Lifespan
2024 BACN Early-Career Prize Lecture
Many of our decisions affect other people. Our choices can decelerate climate change, stop the spread of infectious diseases, and directly help or harm others. Prosocial behaviours (decisions that help others) could contribute to reducing the impact of these challenges, yet their computational and neural mechanisms remain poorly understood. I will present recent work that examines prosocial motivation (how willing we are to incur costs to help others), prosocial learning (how we learn from the outcomes of our choices when they affect other people), and prosocial preferences (our self-reports of helping others). Throughout the talk, I will outline the possible computational and neural bases of these behaviours and how they may differ from young adulthood to old age.
Social and non-social learning: Common, or specialised, mechanisms? (BACN Early Career Prize Lecture 2022)
The last decade has seen a burgeoning interest in studying the neural and computational mechanisms that underpin social learning (learning from others). Many findings support the view that learning from other people is underpinned by the same, ‘domain-general’, mechanisms underpinning learning from non-social stimuli. Despite this, the idea that humans possess social-specific learning mechanisms - adaptive specializations moulded by natural selection to cope with the pressures of group living - persists. In this talk I explore the persistence of this idea. First, I present dissociations between social and non-social learning - patterns of data which are difficult to explain under the domain-general thesis and which therefore support the idea that we have evolved special mechanisms for social learning. Subsequently, I argue that most studies that have dissociated social and non-social learning have employed paradigms in which social information comprises a secondary, additional, source of information that can be used to supplement learning from non-social stimuli. Thus, in most extant paradigms, social and non-social learning differ both in terms of social nature (social or non-social) and status (primary or secondary). I conclude that status is an important driver of apparent differences between social and non-social learning. When we account for differences in status, we see that social and non-social learning share common (dopamine-mediated) mechanisms.
The role of sub-population structure in computations through neural dynamics
Neural computations are currently conceptualised using two separate approaches: sorting neurons into functional sub-populations or examining distributed collective dynamics. Whether and how these two aspects interact to shape computations is currently unclear. Using a novel approach to extract computational mechanisms from recurrent networks trained on neuroscience tasks, we show that the collective dynamics and sub-population structure play fundamentally complementary roles. Although various tasks can be implemented in networks with fully random population structure, we found that flexible input–output mappings instead require a non-random population structure that can be described in terms of multiple sub-populations. Our analyses revealed that such a sub-population organisation enables flexible computations through a mechanism based on gain-controlled modulations that flexibly shape the collective dynamics.
Extracting computational mechanisms from neural data using low-rank RNNs
An influential theory in systems neuroscience suggests that brain function can be understood through low-dimensional dynamics [Vyas et al. 2020]. However, a challenge in this framework is that a single computational task may involve a range of dynamic processes. To understand which processes are at play in the brain, it is important to use data on neural activity to constrain models. In this study, we present a method for extracting low-dimensional dynamics from data using low-rank recurrent neural networks (lrRNNs), a class of models that is both highly expressive and interpretable [Mastrogiuseppe & Ostojic 2018; Dubreuil, Valente et al. 2022]. We first test our approach on synthetic data generated from full-rank RNNs trained on various neuroscience tasks. We find that lrRNNs fitted to neural activity allow us to identify the collective computational processes and make new predictions for inactivations in the original RNNs. We then apply our method to data recorded from the prefrontal cortex of primates during a context-dependent decision-making task. Our approach enables us to assign computational roles to the different latent variables and provides a mechanistic model of the recorded dynamics, which can be used to perform in silico experiments such as inactivations and to provide testable predictions.
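To make the low-rank idea concrete, here is a minimal rank-1 RNN of the kind this line of work builds on (sizes and parameters are illustrative assumptions, not the fitted models from the study). The connectivity is an outer product J = m nᵀ / N, so the recurrent dynamics collapse onto the direction m and the network state is summarised by a single latent variable κ = n·tanh(x)/N:

```python
import numpy as np

# Rank-1 RNN: recurrent activity is confined to the direction m, which is
# what makes the latent dynamics explicit and interpretable.
rng = np.random.default_rng(1)
N = 200
m = rng.normal(size=N)      # output direction: spans the recurrent subspace
n = rng.normal(size=N)      # input-selection direction
J = np.outer(m, n) / N      # rank-1 recurrent connectivity

x = rng.normal(size=N)
dt = 0.1
for _ in range(300):
    x = x + dt * (-x + J @ np.tanh(x))  # standard rate dynamics

# After the initial transient decays, activity lies along m:
orthogonal_part = x - m * (x @ m) / (m @ m)
print(np.linalg.norm(orthogonal_part))  # close to zero
```

Because any component of the state orthogonal to m decays at the leak rate, the asymptotic dynamics are genuinely one-dimensional; a rank-R network generalises this to an R-dimensional latent space.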
The role of population structure in computations through neural dynamics
Neural computations are currently investigated using two separate approaches: sorting neurons into functional subpopulations or examining the low-dimensional dynamics of collective activity. Whether and how these two aspects interact to shape computations is currently unclear. Using a novel approach to extract computational mechanisms from networks trained on neuroscience tasks, here we show that the dimensionality of the dynamics and subpopulation structure play fundamentally complementary roles. Although various tasks can be implemented by increasing the dimensionality in networks with fully random population structure, flexible input–output mappings instead require a non-random population structure that can be described in terms of multiple subpopulations. Our analyses revealed that such a subpopulation structure enables flexible computations through a mechanism based on gain-controlled modulations that flexibly shape the collective dynamics. Our results lead to task-specific predictions for the structure of neural selectivity, for inactivation experiments and for the implication of different neurons in multi-tasking.
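A schematic of the gain-modulation mechanism summarised above (all numbers are illustrative assumptions): the synaptic weights stay fixed, units are split into two subpopulations, and a contextual gain applied to one subpopulation reshapes the effective recurrent coupling, and with it the collective dynamics:

```python
import numpy as np

# Gain modulation of a subpopulation changes the collective state
# without touching the synaptic weight matrix J.
rng = np.random.default_rng(2)
N = 100
J = rng.normal(scale=1.0 / np.sqrt(N), size=(N, N))  # fixed random weights
in_pop_a = np.arange(N) < N // 2                     # two subpopulations

def steady_state_norm(gain_a, steps=300, dt=0.1):
    gain = np.where(in_pop_a, gain_a, 1.0)  # modulate subpopulation A only
    x = rng.normal(size=N)
    for _ in range(steps):
        x = x + dt * (-x + J @ (gain * np.tanh(x)))
    return np.linalg.norm(x)

low_gain, high_gain = steady_state_norm(0.2), steady_state_norm(2.0)
print(low_gain, high_gain)  # low gain quenches activity; high gain sustains it
```

Scaling the gain of one subpopulation moves the effective spectral radius of the coupled system above or below the stability boundary, so the same wiring supports qualitatively different collective regimes.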
A predictive-processing account of psychosis
There has been increasing interest in the neurocomputational mechanisms underlying psychotic disorders in recent years. One promising approach is based on the theoretical framework of predictive processing, which proposes that inferences regarding the state of the world are made by combining prior beliefs with sensory signals. Delusions and hallucinations are the core symptoms of psychosis and often co-occur. Yet, different predictive-processing alterations have been proposed for these two symptom dimensions, according to which the relative weighting of prior beliefs in perceptual inference is decreased or increased, respectively. I will present recent behavioural, neuroimaging, and computational work that investigated perceptual decision-making under uncertainty and ambiguity to elucidate the changes in predictive processing that may give rise to psychotic experiences. Based on the empirical findings presented, I will provide a more nuanced predictive-processing account that suggests a common mechanism for delusions and hallucinations at low levels of the predictive-processing hierarchy, but still has the potential to reconcile apparently contradictory findings in the literature. This account may help to understand the heterogeneity of psychotic phenomenology and explain changes in symptomatology over time.
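The core predictive-processing computation at issue can be written in two lines. Below is a toy illustration (not the specific models from the talk) of how a percept arises as a precision-weighted combination of a prior belief and a sensory signal, so that shifting the relative weight of the prior moves inference between belief-dominated and input-dominated regimes:

```python
# Bayes-optimal estimate for a Gaussian prior and Gaussian likelihood:
# a precision-weighted average of the prior mean and the sensory sample.
def posterior_mean(prior_mu, prior_precision, sensory, sensory_precision):
    total = prior_precision + sensory_precision
    return (prior_precision * prior_mu + sensory_precision * sensory) / total

prior_mu, sensory = 0.0, 1.0
strong_prior = posterior_mean(prior_mu, 4.0, sensory, 1.0)   # 0.2: belief dominates
weak_prior = posterior_mean(prior_mu, 0.25, sensory, 1.0)    # 0.8: input dominates
print(strong_prior, weak_prior)
```

The proposed alterations in psychosis correspond to shifting these precision weights: over-weighted priors would pull percepts toward expectations, under-weighted priors toward raw sensory input.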
Peripersonal space (PPS) as a primary interface for self-environment interactions
Peripersonal space (PPS) defines the portion of space where interactions between our body and the external environment are most likely to occur. There is no physical boundary separating PPS from extrapersonal space; instead, PPS is continuously constructed by a dedicated neural system that integrates external stimuli with tactile stimuli on the body, as a function of their potential interaction. This mechanism represents a primary interface between the individual and the environment. In this talk, I will present the most recent evidence and highlight the current debate about the neural and computational mechanisms of PPS and its main functions and properties. I will discuss novel data showing how PPS dynamically adapts to optimize body-environment interactions. I will describe a novel electrophysiological paradigm to study and measure PPS, and show how it has been used to search for a basic marker of the potential for self-environment interaction in newborns and in patients with disorders of consciousness. Finally, I will discuss how PPS is also involved in, and in turn shaped by, social interactions. In light of these findings, I will discuss how PPS plays a key role in self-consciousness.
Can I be bothered? Neural and computational mechanisms underlying the dynamics of effort processing (BACN Early-career Prize Lecture 2021)
From a workout at the gym to helping a colleague with their work, every day we make decisions about whether we are willing to exert effort to obtain some benefit. Increases in how effortful actions and cognitive processes are perceived to be have been linked to clinically severe impairments of motivation, such as apathy and fatigue, across many neurological and psychiatric conditions. However, the vast majority of neuroscience research has focused on understanding the benefits of acting (the rewards), not the effort required. As a result, the computational and neural mechanisms underlying how effort is processed are poorly understood. How do we compute how effortful we perceive a task to be? How does this feed into our motivation and our decisions about whether to act? How are such computations implemented in the brain? And how do they change in different environments? I will present a series of studies examining these questions using novel behavioural tasks, computational modelling, fMRI, pharmacological manipulations, and testing in a range of different populations. These studies highlight how the brain represents the costs of exerting effort, and the dynamic processes underlying how our sensitivity to effort changes as a function of our goals, traits, and socio-cognitive processes. This work provides new computational frameworks for understanding and examining impaired motivation across psychiatric and neurological conditions, as well as why all of us, sometimes, can’t be bothered.
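One standard form of the computational models fitted to choice data in this literature is parabolic effort discounting (a generic sketch, not the authors' specific model; the parameter values are illustrative assumptions): the subjective value of an offer is its reward minus an effort cost that grows quadratically, scaled by an individual effort-sensitivity parameter k.

```python
# Parabolic effort discounting: SV = reward - k * effort^2.
def subjective_value(reward, effort, k):
    return reward - k * effort ** 2

# An agent acts when the discounted value beats the value of doing nothing.
def accepts_offer(reward, effort, k, baseline=0.0):
    return subjective_value(reward, effort, k) > baseline

# A more effort-sensitive agent (larger k) rejects offers that a less
# sensitive agent accepts.
print(accepts_offer(reward=5.0, effort=2.0, k=0.5))  # 5 - 2 = 3 > 0 -> True
print(accepts_offer(reward=5.0, effort=2.0, k=2.0))  # 5 - 8 = -3    -> False
```

Fitting k per participant is what allows such studies to quantify how effort sensitivity varies with goals, traits, and clinical state.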
NMC4 Short Talk: Neurocomputational mechanisms of causal inference during multisensory processing in the macaque brain
Natural perception relies inherently on inferring causal structure in the environment. However, the neural mechanisms and functional circuits that are essential for representing and updating the hidden causal structure during multisensory processing are unknown. To address this, monkeys were trained to infer the probability of a potential common source from visual and proprioceptive signals on the basis of their spatial disparity in a virtual reality system. The proprioceptive drift reported by monkeys demonstrated that they combined historical information and current multisensory signals to estimate the hidden common source and subsequently updated both the causal structure and sensory representation. Single-unit recordings in premotor and parietal cortices revealed that neural activity in premotor cortex represents the core computation of causal inference, characterizing the estimation and update of the likelihood of integrating multiple sensory inputs at a trial-by-trial level. In response to signals from premotor cortex, neural activity in parietal cortex also represents the causal structure and further dynamically updates the sensory representation to maintain consistency with the causal inference structure. Thus, our results indicate how premotor cortex integrates historical information and sensory inputs to infer hidden variables and selectively updates sensory representations in parietal cortex to support behavior. This dynamic loop of frontal-parietal interactions in the causal inference framework may provide the neural mechanism to answer long-standing questions regarding how neural circuits represent hidden structures for body-awareness and agency.
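A schematic version of the causal-inference computation described above, simplified to the disparity between the two cues (noise parameters are illustrative assumptions): under a common source, the visual-proprioceptive disparity reflects sensory noise alone, while under separate sources it also reflects the spread of two independent source locations, so larger disparities lower the posterior probability of a common source.

```python
import math

def gauss(d, var):
    """Gaussian density of a disparity d with variance var."""
    return math.exp(-d * d / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def p_common(disparity, sigma_vis=1.0, sigma_prop=1.0, sigma_source=5.0,
             prior_common=0.5):
    var_noise = sigma_vis ** 2 + sigma_prop ** 2
    like_common = gauss(disparity, var_noise)                   # one hidden source
    like_separate = gauss(disparity, var_noise + 2 * sigma_source ** 2)  # two sources
    post = prior_common * like_common
    return post / (post + (1.0 - prior_common) * like_separate)

print(p_common(0.5), p_common(10.0))  # high for small disparity, near zero for large
```

Updating this posterior trial by trial, and weighting sensory integration by it, is the kind of computation the premotor recordings are interpreted as implementing.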
Neurocomputational mechanisms underlying developmental psychiatric disorders
Dorothy J Killam Lecture: Cell Type Classification and Circuit Mapping in the Mouse Brain
To understand the function of the brain and how its dysfunction leads to brain diseases, it is essential to have a deep understanding of the cell type composition of the brain, how the cell types are connected with each other, and what their roles are in circuit function. At the Allen Institute, we have built multiple platforms, including single-cell transcriptomics, single and multi-patching electrophysiology, 3D reconstruction of neuronal morphology, high-throughput brain-wide connectivity mapping, and large-scale neuronal activity imaging, to characterize the transcriptomic, physiological, morphological, and connectional properties of different types of neurons in a standardized way, working towards a taxonomy of cell types and a description of their wiring diagram for the mouse brain, with a focus on the visual cortico-thalamic system. Building such a knowledge base lays the foundation for understanding the computational mechanisms of brain circuit function.
Towards better interoceptive biomarkers in computational psychiatry
Empirical evidence and theoretical models both increasingly emphasize the importance of interoceptive processing in mental health. Indeed, many mood and psychiatric disorders involve disturbed feelings and/or beliefs about the visceral body. However, current methods to measure interoceptive ability are limited in a number of ways, restricting the utility and interpretation of interoceptive biomarkers in psychiatry. I will present some newly developed measures and models which aim to improve our understanding of disordered brain-body interaction in psychiatric illnesses.
Computational mechanisms of odor perception and representational drift in rodent olfactory systems
Bernstein Conference 2024
Computational mechanisms underlying thalamic regulation of prefrontal signal-to-noise ratio in decision making
COSYNE 2023
Computational mechanisms underlying latent inverse value updating of unchosen actions
Neuromatch 5