Computational Mechanisms of Predictive Processing in Brains and Machines
Predictive processing offers a unifying view of neural computation, proposing that brains continuously anticipate sensory input and update internal models based on prediction errors. In this talk, I will present converging evidence for the computational mechanisms underlying this framework across human neuroscience and deep neural networks. I will begin with recent work showing that large-scale distributed prediction-error encoding in the human brain directly predicts how sensory representations reorganize through predictive learning. I will then turn to PredNet, a popular predictive-coding-inspired deep network that has been widely used to model real-world biological vision systems. Using dynamic stimuli generated with our Spatiotemporal Style Transfer algorithm, we demonstrate that PredNet relies primarily on low-level spatiotemporal structure and remains insensitive to high-level content, revealing limits in its generalization capacity. Finally, I will discuss new recurrent vision models that integrate top-down feedback connections with intrinsic neural variability, uncovering a dual mechanism for robust sensory coding in which neural variability decorrelates unit responses while top-down feedback stabilizes network dynamics. Together, these results outline how prediction-error signaling and top-down feedback pathways shape adaptive sensory processing in biological and artificial systems.
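As a concrete illustration of the generic update rule that predictive-processing accounts assume (a minimal sketch with toy dimensions and learning rates, not the specific models discussed in the talk), a single-layer predictive-coding loop lets a latent estimate generate a prediction of the input, with the prediction error driving both fast inference and slow learning:

```python
import numpy as np

# Minimal predictive-coding sketch (illustrative only). A latent estimate `r`
# predicts the input via generative weights W; the prediction error updates
# the latent estimate quickly and the weights slowly.
rng = np.random.default_rng(0)
n_input, n_latent = 16, 4
W = rng.normal(scale=0.1, size=(n_input, n_latent))  # generative weights
r = np.zeros(n_latent)                               # latent representation
x = rng.normal(size=n_input)                         # toy sensory input

lr_r, lr_W = 0.1, 0.01
for step in range(200):
    prediction = W @ r
    error = x - prediction          # prediction-error signal
    r += lr_r * (W.T @ error)       # fast inference: update latent estimate
    W += lr_W * np.outer(error, r)  # slow learning: update generative weights

print("remaining error norm:", np.linalg.norm(x - W @ r))
```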
A human stem cell-derived organoid model of the trigeminal ganglion
SISSA cognitive neuroscience PhD
Up to 2 PhD positions in Cognitive Neuroscience are available at SISSA, Trieste, starting October 2024. SISSA is an elite postgraduate research institution for Maths, Physics and Neuroscience, located in Trieste, Italy. SISSA operates in English, and its faculty and student community is diverse and strongly international. The Cognitive Neuroscience group (https://phdcns.sissa.it/) hosts 6 research labs that study the neuronal bases of time and magnitude processing, visual perception, motivation and intelligence, language, tactile perception and learning, and neural computation. Our research is highly interdisciplinary; our approaches include behavioural, psychophysics, and neurophysiological experiments with humans and animals, as well as computational, statistical and mathematical models. Students from a broad range of backgrounds (physics, maths, medicine, psychology, biology) are encouraged to apply. The selection procedure is now open. The application deadline is 27 August 2024. Please apply here (https://www.sissa.it/bandi/ammissione-ai-corsi-di-philosophiae-doctor-posizioni-cofinanziate-dal-fondo-sociale-europeo), and see the admission procedure page (https://phdcns.sissa.it/admission-procedure) for more information. Note that the positions available for the Fall admission round are those funded by the "Fondo Sociale Europeo Plus", accessible through the first link above. Please contact the PhD Coordinator Mathew Diamond (diamond@sissa.it) and/or your prospective supervisor for more information and informal inquiries.
Anne Urai
Full listing: https://www.medewerkers.universiteitleiden.nl/vacatures/2022/kwartaal-2/22-25911465postdoc-in-cognitive-and-computational-neuroscience The way that neural computations give rise to behavior is shaped by ever-fluctuating internal states. These states (such as arousal, fear, stress, hunger, motivation, engagement, or drowsiness) are characterized by spontaneous neural dynamics that arise independent of task demands. Across subfields of neuroscience, internal states have been quantified using a variety of measurements and markers (based on physiology, brain activity or behavioral motifs), but these are rarely explicitly compared or integrated. It is thus unclear if such different state markers quantify the same, or even related underlying processes. Instead, the simplified concept of internal states likely obscures a multi-dimensional set of biologically relevant processes, which may affect behavior in distinct ways. In this project, we will take an integrative approach to quantify the structure and dimensionality of internal states and their effects on decision-making behavior. We will apply several state-of-the-art methods to extract different markers of internal states from facial video data, pupillometry, and high-density neural recordings. We will then quantify the unique and shared dimensionality of internal states, and their relevance for predicting choice behavior. By combining existing, publicly available datasets in mice with additional experiments in humans, we will directly test the cross-species relevance of our findings. Lastly, we will investigate how internal states change over a range of timescales: from sub-second fluctuations relevant for choice behavior to the very slow changes that take place with aging. This project is a collaboration between the Cognitive, Computational and Systems Neuroscience lab led by Dr. Anne Urai (daily supervisor) and the Temporal Attention Lab led by Prof. Sander Nieuwenhuis. We are based in Leiden University’s Cognitive Psychology Unit, and we participate in the Leiden Institute for Brain and Cognition (LIBC), an interfaculty center for interdisciplinary research on brain and cognition ( https://www.libc-leiden.nl ). There are further options for collaborating with the International Brain Laboratory ( https://www.internationalbrainlab.com ). Leiden is a small, friendly town near the beach, with great public transport connections to larger cities nearby. The Netherlands has excellent support for families. The working language at the university is English, and you can comfortably get by with only minimal knowledge of Dutch. Our team is small, and we value a collegial and supportive environment. Open science is a core value in our work, and we actively pursue ways to make academia a better place. We support postdocs in developing their own ideas and research line, and we offer opportunities to gain small-scale teaching and grant writing experience. More information on our groups’ research interests, scientific vision and working environment can be found at https://anneurai.net, https://anne-urai.github.io/lab_wiki/Vision.html and https://www.temporalattentionlab.com If you like asking hard questions, making things work, and pursuing creative ideas in a collaborative team, then this position may be for you. Please do not be discouraged from applying if your current CV is not a ‘perfect fit’. 
This job could suit someone from a range of different career backgrounds, and there is great scope for the right applicant to develop the role and make it their own.
Dr. Jorge Mejias
The Computational Neuroscience Lab, recently established within the Cognitive and Systems Neuroscience Group at the University of Amsterdam (UvA), is seeking a highly qualified and motivated candidate for a postdoctoral position in computational neuroscience, under the project 'Translational biomarkers for compulsivity across large-scale brain networks'. The aim of this project is to understand the neurobiological roots of compulsivity by identifying the neural signatures of compulsive behavior in cortical and subcortical brain regions. A combination of experimental and computational work will be used, with the presently advertised position being associated with the computational modeling part. You will develop and analyze computational models of large-scale brain networks of rodents and humans, following previous work in macaques (Mejias et al., Science Advances 2016). These new models will explicitly replicate neural dynamics underlying compulsive behavior and will be constrained by existing anatomical, electrophysiological and clinical data from the experimental partners of the project. You will be supervised by Dr. Jorge Mejias, head of the Computational Neuroscience Lab, and the work will be carried out in close collaboration with Drs. Ingo Willuhn and Tara Arbab from the Netherlands Institute for Neuroscience. You will also closely collaborate with other computational neuroscientists, experimental neuroscientists, clinicians, theoreticians, and machine learning experts at the UvA. You are expected: to perform research on computational neuroscience; to review relevant literature and acquire knowledge on neurobiology, compulsivity and computational neuroscience; to build biologically realistic multi-area computational models of cortical circuits and compare their predictions with experimental findings; to collaborate and discuss regularly with other researchers in the project; to take part in the teaching efforts of the Computational Neuroscience Lab, including supervision of Bachelor and Master students; and to write scientific manuscripts and present your results at meetings and conferences. Our offer: a temporary contract for 38 hours a week, preferably starting on 1 November 2021. The duration of the contract is 18 months (with a two-month probation period). An extension of the contract is possible, subject to a positive evaluation of the candidate's performance and further availability of funds. The salary, depending on relevant work experience before the beginning of the employment contract, will be €2,836 to €4,474 (scale 10) gross per month, based on a full-time contract (38 hours a week). This is exclusive of 8% holiday allowance and 8.3% end-of-year bonus. A favorable tax agreement, the ‘30% ruling’, may apply to non-Dutch applicants. The Collective Labor Agreement of Dutch Universities is applicable.
Prof. Pierre Mégevand
The Human Neuron Lab (@LabNeuron), led by Prof. Pierre Mégevand, is dedicated to advancing the detection and prediction of epileptic seizures. The lab also investigates the neuronal basis of human cognitive brain functions. For that purpose, the lab focuses on invasive neurophysiology in the human brain, including ECoG and stereo-EEG. Additionally, unique microelectrode recordings (using Utah arrays and microwire electrodes) give access to the activity of dozens of single neurons in the patient's brain in order to reveal novel markers of epileptic seizures at the neuronal population level. The lab is equipped with state-of-the-art technology for human invasive neurophysiology. It benefits from the powerful computing infrastructure of the University. Importantly, the lab is fully integrated with the epilepsy monitoring unit of Geneva University Hospitals, and thus boasts exceptional access to patients and recordings. This project focuses on defining novel markers of seizures in patients who suffer from epilepsy. Continuous intracranial EEG and microelectrode recordings will be acquired for several weeks. Single-unit activity will be tracked over time for multiple neurons. Activity within the neuronal population will be examined for the presence of patterns that are specific to the patient’s seizures. The performance of seizure detection and prediction using microelectrode recordings will be compared to existing algorithms based on intracranial EEG data. Research tasks: - Acquire, analyze, and curate a uniquely rich dataset of human intracranial EEG and microelectrode recordings - Build a pipeline for semi-automated single-neuron identification and tracking - Establish novel markers of neuronal population activity that identify seizures - Participate in the mapping of sensory, motor and language functions in epilepsy patients - Daily interactions with the patients and staff of the epilepsy monitoring unit Work environment: The University of Geneva is a prestigious research hub in neuroscience, federating many labs that cover the full spectrum from basic to cognitive, translational and clinical research. The neuroscience community in Geneva is also strengthened by rich collaborations with other research institutions, including Campus Biotech, the Wyss Center, and the EPFL. This project is fully funded by a grant from the Swiss National Science Foundation. The PhD and post-doc positions are open for up to 4 years each. Swiss salaries are very attractive in international comparison. The positions will open from May 2021 onwards. Please send your application, including a letter of intent, curriculum vitae, list of publications, and at least two references, by e-mail to: Prof. Pierre Mégevand Division of neurology, Geneva University Hospitals Rue Gabrielle-Perret-Gentil 4, 1205 Geneva, Switzerland pierre.megevand@unige.ch
Biomolecular condensates as drivers of neuroinflammation
Organization of thalamic networks and mechanisms of dysfunction in schizophrenia and autism
Thalamic networks, at the core of thalamocortical and thalamosubcortical communications, underlie processes of perception, attention, memory, emotions, and the sleep-wake cycle, and are disrupted in mental disorders, including schizophrenia and autism. However, the underlying mechanisms of pathology are unknown. I will present novel evidence on key organizational principles and structural and molecular features of thalamocortical networks, as well as critical thalamic pathway interactions that are likely affected in disorders. These data can facilitate modeling of typical and abnormal brain function and can provide the foundation to understand heterogeneous disruption of these networks in sleep disorders, attention deficits, and cognitive and affective impairments in schizophrenia and autism, with important implications for the design of targeted therapeutic interventions.
Cellular Crosstalk in Brain Development, Evolution and Disease
Cellular crosstalk is an essential process during brain development and is influenced by numerous factors, including cell morphology, adhesion, the local extracellular matrix and secreted vesicles. Inspired by mutations associated with neurodevelopmental disorders, we focus on understanding the role of extracellular mechanisms essential for the proper development of the human brain. Therefore, we combine 2D and 3D in vitro human models to better understand the molecular and cellular mechanisms involved in progenitor proliferation and fate, migration and maturation of excitatory and inhibitory neurons during human brain development and tackle the causes of neurodevelopmental disorders.
How the presynapse forms and functions
Nervous system function relies on the polarized architecture of neurons, established by directional transport of pre- and postsynaptic cargoes. While delivery of postsynaptic components depends on the secretory pathway, the identity of the membrane compartment(s) that supply presynaptic active zone (AZ) and synaptic vesicle (SV) proteins is largely unknown. I will discuss recent advances in our understanding of how key components of the presynaptic machinery for neurotransmitter release are transported and assembled, focusing on our studies in genome-engineered human induced pluripotent stem cell-derived neurons. Specifically, I will focus on the composition and cell biological identity of the axonal transport vesicles that shuttle key components of neurotransmission to nascent synapses, and on the machinery for axonal transport and its control by signaling lipids. Our studies identify a crucial mechanism mediating the delivery of SV and active zone proteins to developing synapses and reveal connections to neurological disorders. In the second part of my talk, I will discuss how exocytosis and endocytosis are coupled to maintain presynaptic membrane homeostasis. I will present unpublished data regarding the role of membrane tension in the coupling of exocytosis and endocytosis at synapses. We have identified an endocytic BAR domain protein that is capable of sensing alterations in membrane tension caused by the exocytotic fusion of SVs and that initiates compensatory endocytosis to restore plasma membrane area. Interference with this mechanism results in defects in the coupling of presynaptic exocytosis and SV recycling at human synapses.
Non-invasive human neuroimaging studies of motor plasticity have predominantly focused on the cerebral cortex due to the low signal-to-noise ratio of blood oxygen level-dependent (BOLD) signals in subcortical structures and the small effect sizes typically observed in plasticity paradigms. Precision functional mapping can help overcome these challenges and has revealed significant and reversible functional alterations in the cortico-subcortical motor circuit during arm immobilization.
Digital Traces of Human Behaviour: From Political Mobilisation to Conspiracy Narratives
Digital platforms generate unprecedented traces of human behaviour, offering new methodological approaches to understanding collective action, polarisation, and social dynamics. Through analysis of millions of digital traces across multiple studies, we demonstrate how online behaviours predict offline action: Brexit-related tribal discourse responds to real-world events, machine learning models achieve 80% accuracy in predicting real-world protest attendance from digital signals, and social validation through "likes" emerges as a key driver of mobilization. Extending this approach to conspiracy narratives reveals how digital traces illuminate psychological mechanisms of belief and community formation. Longitudinal analysis of YouTube conspiracy content demonstrates how narratives systematically address existential, epistemic, and social needs, while examination of alt-tech platforms shows how emotions of anger, contempt, and disgust correlate with violence-legitimating discourse, with significant differences between narratives associated with offline violence versus peaceful communities. This work establishes digital traces as both methodological innovation and theoretical lens, demonstrating that computational social science can illuminate fundamental questions about polarisation, mobilisation, and collective behaviour across contexts from electoral politics to conspiracy communities.
Representational drift in human visual cortex
“Brain theory, what is it or what should it be?”
In the neurosciences the need for some 'overarching' theory is sometimes expressed, but it is not always obvious what is meant by this. One can perhaps agree that in modern science observation and experimentation are normally complemented by 'theory', i.e. the development of theoretical concepts that help guide and evaluate experiments and measurements. A deeper discussion of 'brain theory' will require the clarification of some further distinctions, in particular: theory vs. model, and brain research (and its theory) vs. neuroscience. Other questions are: Does a theory require mathematics? Or even differential equations? Today it is often taken for granted that the whole universe, including everything in it (for example humans, animals, and plants), can be adequately treated by physics, and that theoretical physics is therefore the overarching theory. Even if this is the case, it has turned out that in some particular parts of physics (the historical example is thermodynamics) it may be useful to simplify the theory by introducing additional theoretical concepts that can in principle be 'reduced' to more complex descriptions on the 'microscopic' level of basic physical particles and forces. In this sense, brain theory may be regarded as part of theoretical neuroscience, which is inside biophysics and therefore inside physics, or theoretical physics. Still, in neuroscience and brain research, additional concepts that are 'outside' physics are typically used to describe results and help guide experimentation, beginning with neurons and synapses and names of brain parts and areas, up to concepts like 'learning', 'motivation', and 'attention'. Certainly, we do not yet have one theory that includes all these concepts. So 'brain theory' is still in a 'pre-Newtonian' state. However, it may still be useful to understand in general the relations between a larger theory and its 'parts', or between microscopic and macroscopic theories, or between theories at different 'levels' of description. This is what I plan to do.
Functional Imaging of the Human Brain: A Window into the Organization of the Human Mind
Neural circuits underlying sleep structure and functions
Sleep is an active state critical for processing emotional memories encoded during waking in both humans and animals. There is a remarkable overlap between the brain structures and circuits active during sleep, particularly rapid eye-movement (REM) sleep, and those encoding emotions. Accordingly, disruptions in sleep quality or quantity, including REM sleep, are often associated with, and precede the onset of, nearly all affective psychiatric and mood disorders. In this context, a major biomedical challenge is to better understand the underlying mechanisms of the relationship between (REM) sleep and emotion encoding to improve treatments for mental health. This lecture will summarize our investigation of the cellular and circuit mechanisms underlying sleep architecture, sleep oscillations, and local brain dynamics across sleep-wake states using electrophysiological recordings combined with single-cell calcium imaging or optogenetics. The presentation will detail the discovery of a 'somato-dendritic decoupling' in prefrontal cortex pyramidal neurons underlying REM sleep-dependent stabilization of optimal emotional memory traces. This decoupling reflects a tonic inhibition at the somas of pyramidal cells, occurring simultaneously with a selective disinhibition of their dendritic arbors during REM sleep. Recent findings on REM sleep-dependent subcortical inputs and neuromodulation of this decoupling will be discussed in the context of synaptic plasticity and the optimization of emotional responses in the maintenance of mental health.
“Development and application of gaze control models for active perception”
Gaze shifts in humans serve to direct the high-resolution vision provided by the fovea towards areas of interest in the environment. Gaze can be considered a proxy for attention or an indicator of the relative importance of different parts of the environment. In this talk, we discuss the development of generative models of human gaze in response to visual input. We discuss how such models can be learned, both using supervised learning and using implicit feedback as an agent interacts with the environment, the latter being more plausible in biological agents. We also discuss two ways such models can be used. First, they can be used to improve the performance of artificial autonomous systems, in applications such as autonomous navigation. Second, because these models are contingent on the human’s task, goals, and/or state in the context of the environment, observations of gaze can be used to infer information about user intent. This information can be used to improve human-machine and human-robot interaction, by making interfaces more anticipative. We discuss example applications in gaze-typing, robotic tele-operation, and human-robot interaction.
Expanding mechanisms and therapeutic targets for neurodegenerative disease
A hallmark pathological feature of the neurodegenerative diseases amyotrophic lateral sclerosis (ALS) and frontotemporal dementia (FTD) is the depletion of RNA-binding protein TDP-43 from the nucleus of neurons in the brain and spinal cord. A major function of TDP-43 is as a repressor of cryptic exon inclusion during RNA splicing. By re-analyzing RNA-sequencing datasets from human FTD/ALS brains, we discovered dozens of novel cryptic splicing events in important neuronal genes. Single nucleotide polymorphisms in UNC13A are among the strongest hits associated with FTD and ALS in human genome-wide association studies, but how those variants increase risk for disease is unknown. We discovered that TDP-43 represses a cryptic exon-splicing event in UNC13A. Loss of TDP-43 from the nucleus in human brain, neuronal cell lines and motor neurons derived from induced pluripotent stem cells resulted in the inclusion of a cryptic exon in UNC13A mRNA and reduced UNC13A protein expression. The top variants associated with FTD or ALS risk in humans are located in the intron harboring the cryptic exon, and we show that they increase UNC13A cryptic exon splicing in the face of TDP-43 dysfunction. Together, our data provide a direct functional link between one of the strongest genetic risk factors for FTD and ALS (UNC13A genetic variants), and loss of TDP-43 function. Recent analyses have revealed even further changes in TDP-43 target genes, including widespread changes in alternative polyadenylation, impacting expression of disease-relevant genes (e.g., ELP1, NEFL, and TMEM106B) and providing evidence that alternative polyadenylation is a new facet of TDP-43 pathology.
Functional Plasticity in the Language Network – evidence from Neuroimaging and Neurostimulation
Efficient cognition requires flexible interactions between distributed neural networks in the human brain. These networks adapt to challenges by flexibly recruiting different regions and connections. In this talk, I will discuss how we study functional network plasticity and reorganization with combined neurostimulation and neuroimaging across the adult life span. I will argue that short-term plasticity enables flexible adaptation to challenges, via functional reorganization. My key hypothesis is that disruption of higher-level cognitive functions such as language can be compensated for by the recruitment of domain-general networks in our brain. Examples from healthy young brains illustrate how neurostimulation can be used to temporarily interfere with efficient processing, probing short-term network plasticity at the systems level. Examples from people with dyslexia help to better understand network disorders in the language domain and outline the potential of facilitatory neurostimulation for treatment. I will also discuss examples from aging brains where plasticity helps to compensate for loss of function. Finally, examples from lesioned brains after stroke provide insight into the brain’s potential for long-term reorganization and recovery of function. Collectively, these results challenge the view of a modular organization of the human brain and argue for a flexible redistribution of function via systems plasticity.
Single-neuron correlates of perception and memory in the human medial temporal lobe
The human medial temporal lobe contains neurons that respond selectively to the semantic contents of a presented stimulus. These "concept cells" may respond to very different pictures of a given person and even to their written or spoken name. Their response latency is far longer than necessary for object recognition, they follow subjective, conscious perception, and they are found in brain regions that are crucial for declarative memory formation. It has thus been hypothesized that they may represent the semantic "building blocks" of episodic memories. In this talk I will present data from single unit recordings in the hippocampus, entorhinal cortex, parahippocampal cortex, and amygdala during paradigms involving object recognition and conscious perception as well as encoding of episodic memories in order to characterize the role of concept cells in these cognitive functions.
Cognitive maps, navigational strategies, and the human brain
Harnessing Big Data in Neuroscience: From Mapping Brain Connectivity to Predicting Traumatic Brain Injury
Neuroscience is experiencing unprecedented growth in dataset size, both within individual brains and across populations. Large-scale, multimodal datasets are transforming our understanding of brain structure and function, creating opportunities to address previously unexplored questions. However, managing this increasing data volume requires new approaches to training and technology. Modern data technologies are reshaping neuroscience by enabling researchers to tackle complex questions within a Ph.D. or postdoctoral timeframe. I will discuss cloud-based platforms such as brainlife.io, which provide scalable, reproducible, and accessible computational infrastructure. Modern data technology can democratize neuroscience, accelerate discovery, and foster scientific transparency and collaboration. Concrete examples will illustrate how these technologies can be applied to mapping brain connectivity, studying human learning and development, and developing predictive models for traumatic brain injury (TBI). By integrating cloud computing and scalable data-sharing frameworks, neuroscience can become more impactful, inclusive, and data-driven.
Rejuvenating the Alzheimer’s brain: Challenges & Opportunities
Unlocking the Secrets of Microglia in Neurodegenerative diseases: Mechanisms of resilience to AD pathologies
Human Fear and Memory: Insights and Treatments Using Mobile Implantable Neurotechnologies
The representation of speech conversations in the human auditory cortex
Magnetic Resonance and Remote Detection: You Don't Need to Be So Close
Nuclear magnetic resonance is based on the phenomenon of nuclear magnetism that has found the most applications in the study of human disease. Usually, the MR signal is received and transmitted at distances close to the object being imaged. An alternative is to transmit and receive the same signal remotely using waveguides. This approach has the advantages that it can be applied at high magnetic fields, energy absorption is lower, larger regions of interest can be covered, and it is more comfortable for the patient. On the other hand, it suffers from low image quality in some cases. In this talk I will discuss our experience with this approach, using an open waveguide and metamaterials for both clinical and preclinical MRI systems.
The speed of prioritizing information for consciousness: A robust and mysterious human trait
Cognitive maps as expectations learned across episodes – a model of the two dentate gyrus blades
How can the hippocampal system transition from episodic one-shot learning to a multi-shot learning regime and what is the utility of the resultant neural representations? This talk will explore the role of the dentate gyrus (DG) anatomy in this context. The canonical DG model suggests it performs pattern separation. More recent experimental results challenge this standard model, suggesting DG function is more complex and also supports the precise binding of objects and events to space and the integration of information across episodes. Very recent studies attribute pattern separation and pattern integration to anatomically distinct parts of the DG (the suprapyramidal blade vs the infrapyramidal blade). We propose a computational model that investigates this distinction. In the model the two processing streams (potentially localized in separate blades) contribute to the storage of distinct episodic memories, and the integration of information across episodes, respectively. The latter forms generalized expectations across episodes, eventually forming a cognitive map. We train the model with two data sets, MNIST and plausible entorhinal cortex inputs. The comparison between the two streams allows for the calculation of a prediction error, which can drive the storage of poorly predicted memories and the forgetting of well-predicted memories. We suggest that differential processing across the DG aids in the iterative construction of spatial cognitive maps to serve the generation of location-dependent expectations, while at the same time preserving episodic memory traces of idiosyncratic events.
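A minimal sketch of the prediction-error gating described above, under strong simplifying assumptions (the averaging "expectation" stream, the storage threshold, and all toy values below are illustrative choices, not the published two-blade model):

```python
import numpy as np

# Toy sketch of prediction-error-gated storage across episodes: one stream
# integrates episodes into a generalized expectation, the other stores
# individual traces; only poorly predicted episodes are stored. A complementary
# forgetting rule for well-predicted traces is omitted for brevity.
rng = np.random.default_rng(1)
prototype = rng.normal(size=8)        # structure shared across most episodes
expectation = np.zeros(8)             # generalized expectation, learned slowly
episodic_store = []                   # indices of stored episodic traces
alpha, threshold = 0.1, 1.0

for t in range(50):
    surprising = (t % 10 == 5)        # occasional idiosyncratic events
    episode = rng.normal(size=8) if surprising else prototype + 0.1 * rng.normal(size=8)
    prediction_error = np.linalg.norm(episode - expectation)
    if prediction_error > threshold:  # poorly predicted -> keep an episodic trace
        episodic_store.append(t)
    expectation += alpha * (episode - expectation)  # integrate across episodes

print("episodes stored:", episodic_store)  # early and surprising episodes dominate
```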
Constructing and deconstructing the human nervous system to study development and disease
Pharmacological exploitation of neurotrophins and their receptors to develop novel therapeutic approaches against neurodegenerative diseases and brain trauma
Neurotrophins (NGF, BDNF, NT-3) are endogenous growth factors that exert neuroprotective effects by preventing neuronal death and promoting neurogenesis. They act by binding to their respective high-affinity, pro-survival receptors TrkA, TrkB or TrkC, as well as to the p75NTR death receptor. While these molecules have been shown to significantly slow or prevent neurodegeneration, their reduced bioavailability and inability to penetrate the blood-brain barrier limit their use as potential therapeutics. To bypass these limitations, our research team has developed and patented small-sized, lipophilic compounds that selectively mimic the effects of neurotrophins, present favorable pharmacological properties, and promote neuroprotection and repair against neurodegeneration. In addition, the combination of these molecules with 3D-cultured human neuronal cells, and their targeted delivery into the brain ventricles through soft robotic systems, could offer novel therapeutic approaches against neurodegenerative diseases and brain trauma.
Oligodendrocyte dysfunction drives human cognitive decline
Learning Representations of Complex Meaning in the Human Brain
Analyzing Network-Level Brain Processing and Plasticity Using Molecular Neuroimaging
Behavior and cognition depend on the integrated action of neural structures and populations distributed throughout the brain. We recently developed a set of molecular imaging tools that enable multiregional processing and plasticity in neural networks to be studied at a brain-wide scale in rodents and nonhuman primates. Here we will describe how a novel genetically encoded activity reporter enables information flow in virally labeled neural circuitry to be monitored by fMRI. Using the reporter to perform functional imaging of synaptically defined neural populations in the rat somatosensory system, we show how activity is transformed within brain regions to yield characteristics specific to distinct output projections. We also show how this approach enables regional activity to be modeled in terms of inputs, in a paradigm that we are extending to address circuit-level origins of functional specialization in marmoset brains. In the second part of the talk, we will discuss how another genetic tool for MRI enables systematic studies of the relationship between anatomical and functional connectivity in the mouse brain. We show that variations in physical and functional connectivity can be dissociated both across individual subjects and over experience. We also use the tool to examine brain-wide relationships between plasticity and activity during an opioid treatment. This work demonstrates the possibility of studying diverse brain-wide processing phenomena using molecular neuroimaging.
Contentopic mapping and object dimensionality - a novel understanding of the organization of object knowledge
Our ability to recognize an object amongst many others is one of the most important features of the human mind. However, object recognition requires tremendous computational effort, as we need to make sense of a complex and recursive environment with ease and proficiency. This challenging feat depends on the implementation of an effective organization of knowledge in the brain. Here I put forth a novel understanding of how object knowledge is organized in the brain, proposing that this organization follows key object-related dimensions, analogously to how sensory information is organized in the brain. Moreover, I will propose that this knowledge is topographically laid out on the cortical surface according to these object-related dimensions, which code for different types of representational content – I call this contentopic mapping. I will show a combination of fMRI and behavioral data to support these hypotheses and present a principled way to explore the multidimensionality of object processing.
Neurobiological Pathways to Tau-dependent Pathology: Perspectives from flies to humans
Exploration and Exploitation in Human Joint Decisions
Gene regulatory mechanisms of neocortex development and evolution
The neocortex is considered to be the seat of higher cognitive functions in humans. During its evolution, most notably in humans, the neocortex has undergone considerable expansion, which is reflected by an increase in the number of neurons. Neocortical neurons are generated during development by neural stem and progenitor cells. Epigenetic mechanisms play a pivotal role in orchestrating the behaviour of stem cells during development. We are interested in the mechanisms that regulate gene expression in neural stem cells, which have implications for our understanding of neocortex development and evolution, neural stem cell regulation and neurodevelopmental disorders.
Genetic and epigenetic underpinnings of neurodegenerative disorders
Pluripotent cells, including embryonic stem (ES) and induced pluripotent stem (iPS) cells, are used to investigate the genetic and epigenetic underpinnings of human diseases such as Parkinson’s, Alzheimer’s, autism, and cancer. Mechanisms of somatic cell reprogramming to an embryonic pluripotent state are explored, utilizing patient-specific pluripotent cells to model and analyze neurodegenerative diseases.
Continuous guidance of human goal-directed movements
Decision and Behavior
This webinar addressed computational perspectives on how animals and humans make decisions, spanning normative, descriptive, and mechanistic models. Sam Gershman (Harvard) presented a capacity-limited reinforcement learning framework in which policies are compressed under an information bottleneck constraint. This approach predicts pervasive perseveration, stimulus‐independent “default” actions, and trade-offs between complexity and reward. Such policy compression reconciles observed action stochasticity and response time patterns with an optimal balance between learning capacity and performance. Jonathan Pillow (Princeton) discussed flexible descriptive models for tracking time-varying policies in animals. He introduced dynamic Generalized Linear Models (Sidetrack) and hidden Markov models (GLM-HMMs) that capture day-to-day and trial-to-trial fluctuations in choice behavior, including abrupt switches between “engaged” and “disengaged” states. These models provide new insights into how animals’ strategies evolve under learning. Finally, Kenji Doya (OIST) highlighted the importance of unifying reinforcement learning with Bayesian inference, exploring how cortical-basal ganglia networks might implement model-based and model-free strategies. He also described Japan’s Brain/MINDS 2.0 and Digital Brain initiatives, aiming to integrate multimodal data and computational principles into cohesive “digital brains.”
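For readers unfamiliar with the policy-compression idea mentioned above, the sketch below shows the generic rate-distortion form of a compressed policy, in which actions are biased toward a marginal "default" distribution; the reward table, state distribution, and Blahut-Arimoto-style iteration are illustrative assumptions, not Gershman's exact implementation:

```python
import numpy as np

# Policy compression sketch: pi(a|s) is proportional to p(a) * exp(beta * Q(s, a)),
# where p(a) is the marginal "default" action distribution. Low beta means strong
# compression of I(S;A) and hence more state-independent (perseverative) behavior.
Q = np.array([[1.0, 0.0, 0.0],    # toy state-by-action reward table
              [0.0, 1.0, 0.0],
              [0.2, 0.2, 0.6]])
p_s = np.ones(3) / 3               # state distribution
beta = 2.0                         # inverse temperature / complexity budget

policy = np.ones_like(Q) / Q.shape[1]
for _ in range(100):               # Blahut-Arimoto-style alternation
    marginal = p_s @ policy                       # default action distribution p(a)
    policy = marginal * np.exp(beta * Q)          # p(a) * exp(beta * Q(s, a))
    policy /= policy.sum(axis=1, keepdims=True)   # normalize per state

print(np.round(policy, 2))  # rows = states; note the pull toward the marginal
```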
LLMs and Human Language Processing
This webinar convened researchers at the intersection of Artificial Intelligence and Neuroscience to investigate how large language models (LLMs) can serve as valuable “model organisms” for understanding human language processing. Presenters showcased evidence that brain recordings (fMRI, MEG, ECoG) acquired while participants read or listened to unconstrained speech can be predicted by representations extracted from state-of-the-art text- and speech-based LLMs. In particular, text-based LLMs tend to align better with higher-level language regions, capturing more semantic aspects, while speech-based LLMs excel at explaining early auditory cortical responses. However, purely low-level features can drive part of these alignments, complicating interpretations. New methods, including perturbation analyses, highlight which linguistic variables matter for each cortical area and time scale. Further, “brain tuning” of LLMs—fine-tuning on measured neural signals—can improve semantic representations and downstream language tasks. Despite open questions about interpretability and exact neural mechanisms, these results demonstrate that LLMs provide a promising framework for probing the computations underlying human language comprehension and production at multiple spatiotemporal scales.
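A typical encoding-model analysis of the kind summarized above can be sketched as follows; the synthetic data, the fixed ridge penalty, and the voxel-wise correlation score are stand-in assumptions, not any particular presenter's pipeline:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Encoding-model sketch: map LLM-derived features to brain responses with ridge
# regression and score alignment on held-out data (synthetic data throughout).
rng = np.random.default_rng(0)
n_samples, n_features, n_voxels = 500, 64, 10
llm_features = rng.normal(size=(n_samples, n_features))   # stand-in for LLM embeddings
true_weights = rng.normal(size=(n_features, n_voxels))
brain = llm_features @ true_weights + rng.normal(scale=5.0, size=(n_samples, n_voxels))

X_tr, X_te, y_tr, y_te = train_test_split(llm_features, brain, test_size=0.2, random_state=0)
model = Ridge(alpha=10.0).fit(X_tr, y_tr)   # in practice the penalty is cross-validated
pred = model.predict(X_te)

# Voxel-wise correlation between predicted and measured responses ("brain score")
scores = [np.corrcoef(pred[:, v], y_te[:, v])[0, 1] for v in range(n_voxels)]
print("mean held-out correlation:", round(float(np.mean(scores)), 3))
```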
Dynamic neurochemistry in conscious humans during stereoEEG monitoring
Brain-Wide Compositionality and Learning Dynamics in Biological Agents
Biological agents continually reconcile the internal states of their brain circuits with incoming sensory and environmental evidence to evaluate when and how to act. The brains of biological agents, including animals and humans, exploit many evolutionary innovations, chiefly modularity—observable at the level of anatomically-defined brain regions, cortical layers, and cell types among others—that can be repurposed in a compositional manner to endow the animal with a highly flexible behavioral repertoire. Accordingly, their behaviors show their own modularity, yet such behavioral modules seldom correspond directly to traditional notions of modularity in brains. It remains unclear how to link neural and behavioral modularity in a compositional manner. We propose a comprehensive framework—compositional modes—to identify overarching compositionality spanning specialized submodules, such as brain regions. Our framework directly links the behavioral repertoire with distributed patterns of population activity, brain-wide, at multiple concurrent spatial and temporal scales. Using whole-brain recordings of zebrafish brains, we introduce an unsupervised pipeline based on neural network models, constrained by biological data, to reveal highly conserved compositional modes across individuals despite the naturalistic (spontaneous or task-independent) nature of their behaviors. These modes provided a scaffolding for other modes that account for the idiosyncratic behavior of each fish. We then demonstrate experimentally that compositional modes can be manipulated in a consistent manner by behavioral and pharmacological perturbations. Our results demonstrate that even natural behavior in different individuals can be decomposed and understood using a relatively small number of neurobehavioral modules—the compositional modes—and elucidate a compositional neural basis of behavior. This approach aligns with recent progress in understanding how reasoning capabilities and internal representational structures develop over the course of learning or training, offering insights into the modularity and flexibility in artificial and biological agents.
Decomposing motivation into value and salience
Humans and other animals approach reward and avoid punishment and pay attention to cues predicting these events. Such motivated behavior thus appears to be guided by value, which directs behavior towards or away from positively or negatively valenced outcomes. Moreover, it is facilitated by (top-down) salience, which enhances attention to behaviorally relevant learned cues predicting the occurrence of valenced outcomes. Using human neuroimaging, we recently separated value (ventral striatum, posterior ventromedial prefrontal cortex) from salience (anterior ventromedial cortex, occipital cortex) in the domain of liquid reward and punishment. Moreover, we investigated potential drivers of learned salience: the probability and uncertainty with which valenced and non-valenced outcomes occur. We find that the brain dissociates valenced from non-valenced probability and uncertainty, which indicates that reinforcement matters for the brain, in addition to information provided by probability and uncertainty alone, regardless of valence. Finally, we assessed learning signals (unsigned prediction errors) that may underpin the acquisition of salience. Particularly the insula appears to be central for this function, encoding a subjective salience prediction error, similarly at the time of positively and negatively valenced outcomes. However, it appears to employ domain-specific time constants, leading to stronger salience signals in the aversive than the appetitive domain at the time of cues. These findings explain why previous research associated the insula with both valence-independent salience processing and with preferential encoding of the aversive domain. More generally, the distinction of value and salience appears to provide a useful framework for capturing the neural basis of motivated behavior.
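The distinction between signed (value) and unsigned (salience) prediction errors can be made concrete with a textbook Rescorla-Wagner-style update; the learning rate and outcome sequence below are toy values, not the study's model:

```python
# Signed vs. unsigned prediction errors in a simple value-learning loop.
alpha = 0.2          # learning rate
V = 0.0              # learned value of a cue
outcomes = [1.0, 1.0, -1.0, 1.0, 0.0, -1.0]   # toy positive and negative outcomes

for r in outcomes:
    value_pe = r - V            # signed prediction error: drives value learning
    salience_pe = abs(r - V)    # unsigned prediction error: drives learned salience
    V += alpha * value_pe
    print(f"outcome={r:+.1f}  value_PE={value_pe:+.2f}  salience_PE={salience_pe:.2f}  V={V:.2f}")
```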
Comparing supervised learning dynamics: Deep neural networks match human data efficiency but show a generalisation lag
Recent research has seen many behavioral comparisons between humans and deep neural networks (DNNs) in the domain of image classification. Often, comparison studies focus on the end-result of the learning process by measuring and comparing the similarities in the representations of object categories once they have been formed. However, the process of how these representations emerge—that is, the behavioral changes and intermediate stages observed during the acquisition—is less often directly and empirically compared. In this talk, I'm going to report a detailed investigation of the learning dynamics in human observers and various classic and state-of-the-art DNNs. We develop a constrained supervised learning environment to align learning-relevant conditions such as starting point, input modality, available input data and the feedback provided. Across the whole learning process we evaluate and compare how well learned representations can be generalized to previously unseen test data. Comparisons across the entire learning process indicate that DNNs demonstrate a level of data efficiency comparable to human learners, challenging some prevailing assumptions in the field. However, our results also reveal representational differences: while DNNs' learning is characterized by a pronounced generalisation lag, humans appear to immediately acquire generalizable representations without a preliminary phase of learning training set-specific information that is only later transferred to novel data.
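The logic of such a learning-dynamics comparison can be sketched as follows, with a linear classifier and synthetic data standing in for the human observers, DNNs, and stimuli of the actual study; tracking accuracy on seen versus unseen items at every step is what would expose a generalisation lag as a persistent gap between the two curves:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

# Track train vs. held-out accuracy across incremental learning (toy setup).
X, y = make_classification(n_samples=2000, n_features=20, n_informative=10, random_state=0)
X_train, y_train, X_test, y_test = X[:1500], y[:1500], X[1500:], y[1500:]

clf = SGDClassifier(random_state=0)
classes = np.unique(y_train)
for epoch in range(5):
    for i in range(0, len(X_train), 100):          # mini-batch updates
        clf.partial_fit(X_train[i:i+100], y_train[i:i+100], classes=classes)
    train_acc = clf.score(X_train, y_train)        # performance on seen data
    test_acc = clf.score(X_test, y_test)           # generalization to unseen data
    print(f"epoch {epoch}: train={train_acc:.2f}  test={test_acc:.2f}")
```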
Personalized medicine and predictive health and wellness: Adding the chemical component
Wearable sensors that detect and quantify biomarkers in retrievable biofluids (e.g., interstitial fluid, sweat, tears) provide information on human dynamic physiological and psychological states. This information can transform health and wellness by providing actionable feedback. Due to outdated and insufficiently sensitive technologies, current on-body sensing systems have capabilities limited to pH, and a few high-concentration electrolytes, metabolites, and nutrients. As such, wearable sensing systems cannot detect key low-concentration biomarkers indicative of stress, inflammation, metabolic, and reproductive status. We are revolutionizing sensing. Our electronic biosensors detect virtually any signaling molecule or metabolite at ultra-low levels. We have monitored serotonin, dopamine, cortisol, phenylalanine, estradiol, progesterone, and glucose in blood, sweat, interstitial fluid, and tears. The sensors are based on modern nanoscale semiconductor transistors that are straightforwardly scalable for manufacturing. We are developing sensors for >40 biomarkers for personalized continuous monitoring (e.g., smartwatch, wearable patch) that will provide feedback for treating chronic health conditions (e.g., perimenopause, stress disorders, phenylketonuria). Moreover, our sensors will enable female fertility monitoring and the adoption of more healthy lifestyles to prevent disease and improve physical and cognitive performance.
Reactivation in the human brain connects the past with the present
Error Consistency between Humans and Machines as a function of presentation duration
Within the last decade, Deep Artificial Neural Networks (DNNs) have emerged as powerful computer vision systems that match or exceed human performance on many benchmark tasks such as image classification. But whether current DNNs are suitable computational models of the human visual system remains an open question: While DNNs have proven to be capable of predicting neural activations in primate visual cortex, psychophysical experiments have shown behavioral differences between DNNs and human subjects, as quantified by error consistency. Error consistency is typically measured by briefly presenting natural or corrupted images to human subjects and asking them to perform an n-way classification task under time pressure. But for how long should stimuli ideally be presented to guarantee a fair comparison with DNNs? Here we investigate the influence of presentation time on error consistency, to test the hypothesis that higher-level processing drives behavioral differences. We systematically vary presentation times of backward-masked stimuli from 8.3ms to 266ms and measure human performance and reaction times on natural, lowpass-filtered and noisy images. Our experiment constitutes a fine-grained analysis of human image classification under both image corruptions and time pressure, showing that even drastically time-constrained humans who are exposed to the stimuli for only two frames, i.e. 16.6ms, can still solve our 8-way classification task with success rates way above chance. We also find that human-to-human error consistency is already stable at 16.6ms.
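Error consistency in the sense used here (trial-by-trial agreement in correctness, corrected for the agreement expected from the two observers' accuracies alone) can be computed as in the sketch below; the simulated responses are toy data, not the experiment's results:

```python
import numpy as np

# Error consistency (kappa): observed trial-wise agreement in correctness,
# corrected for the agreement expected by chance given each observer's accuracy.
def error_consistency(correct_a, correct_b):
    correct_a, correct_b = np.asarray(correct_a), np.asarray(correct_b)
    p_a, p_b = correct_a.mean(), correct_b.mean()
    c_obs = np.mean(correct_a == correct_b)          # observed agreement
    c_exp = p_a * p_b + (1 - p_a) * (1 - p_b)        # agreement expected by chance
    return (c_obs - c_exp) / (1 - c_exp)

rng = np.random.default_rng(0)
human = rng.random(200) < 0.80    # toy: human correct on ~80% of trials
dnn = rng.random(200) < 0.75      # toy: DNN correct on ~75% of trials
print("kappa:", round(error_consistency(human, dnn), 3))
```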
Neural mechanisms governing the learning and execution of avoidance behavior
The nervous system orchestrates adaptive behaviors by intricately coordinating responses to internal cues and environmental stimuli. This involves integrating sensory input, managing competing motivational states, and drawing on past experiences to anticipate future outcomes. While traditional models attribute this complexity to interactions between the mesocorticolimbic system and hypothalamic centers, the specific nodes of integration have remained elusive. Recent research, including our own, sheds light on the midline thalamus's overlooked role in this process. We propose that the midline thalamus integrates internal states with memory and emotional signals to guide adaptive behaviors. Our investigations into midline thalamic neuronal circuits have provided crucial insights into the neural mechanisms behind flexibility and adaptability. Understanding these processes is essential for deciphering human behavior and conditions marked by impaired motivation and emotional processing. Our research aims to contribute to this understanding, paving the way for targeted interventions and therapies to address such impairments.
Navigating semantic spaces: recycling the brain GPS for higher-level cognition
Humans share with other animals a complex neuronal machinery that evolved to support navigation in physical space and that supports wayfinding and path integration. In my talk I will present a series of recent neuroimaging studies in humans performed in my lab aimed at investigating the idea that this same neural navigation system (the “brain GPS”) is also used to organize and navigate concepts and memories, and that abstract and spatial representations rely on a common neural fabric. I will argue that this might represent a novel example of “cortical recycling”, in which the neuronal machinery that primarily evolved, in lower-level animals, to represent relationships between spatial locations and to navigate space is reused in humans to encode relationships between concepts in an internal, abstract representational space of meaning.
Generative models for video games (rescheduled)
Developing agents capable of modeling complex environments and human behaviors within them is a key goal of artificial intelligence research. Progress towards this goal has exciting potential for applications in video games, from new tools that empower game developers to realize new creative visions, to enabling new kinds of immersive player experiences. This talk focuses on recent advances of my team at Microsoft Research towards scalable machine learning architectures that effectively capture human gameplay data. In the first part of my talk, I will focus on diffusion models as generative models of human behavior. Previously shown to have impressive image generation capabilities, I present insights that unlock applications to imitation learning for sequential decision making. In the second part of my talk, I discuss a recent project taking ideas from language modeling to build a generative sequence model of an Xbox game.
Spatial Organization of Cellular Reactive States in Human Brain Cancer
Applied cognitive neuroscience to improve learning and therapeutics
Advancements in cognitive neuroscience have provided profound insights into the workings of the human brain and the methods used offer opportunities to enhance performance, cognition, and mental health. Drawing upon interdisciplinary collaborations in the University of California San Diego, Human Performance Optimization Lab, this talk explores the application of cognitive neuroscience principles in three domains to improve human performance and alleviate mental health challenges. The first section will discuss studies addressing the role of vision and oculomotor function in athletic performance and the potential to train these foundational abilities to improve performance and sports outcomes. The second domain considers the use of electrophysiological measurements of the brain and heart to detect, and possibly predict, errors in manual performance, as shown in a series of studies with surgeons as they perform robot-assisted surgery. Lastly, findings from clinical trials testing personalized interventional treatments for mood disorders will be discussed in which the temporal and spatial parameters of transcranial magnetic stimulation (TMS) are individualized to test if personalization improves treatment response and can be used as predictive biomarkers to guide treatment selection. Together, these translational studies use the measurement tools and constructs of cognitive neuroscience to improve human performance and well-being.
Characterizing the causal role of large-scale network interactions in supporting complex cognition
Neuroimaging has greatly extended our capacity to study the workings of the human brain. Despite the wealth of knowledge this tool has generated, however, there are still critical gaps in our understanding. While tremendous progress has been made in mapping areas of the brain that are specialized for particular stimuli or cognitive processes, we still know very little about how large-scale interactions between different cortical networks facilitate the integration of information and the execution of complex tasks. Yet even the simplest behavioral tasks are complex, requiring integration across multiple cognitive domains. Our knowledge falls short not only in understanding how this integration takes place, but also in what drives the profound variation in behavior observed on almost every task, even within the typically developing (TD) population. The search for the neural underpinnings of individual differences is important not only philosophically, but also in the service of precision medicine. We address these questions using a three-pronged approach. First, we create a battery of behavioral tasks from which we can calculate objective measures for different aspects of the behaviors of interest, with sufficient variance across the TD population. Second, using these individual differences in behavior, we identify the neural variance that explains the behavioral variance at the network level. Finally, using covert neurofeedback, we perturb the networks hypothesized to correspond to each of these components, thus directly testing their causal contribution. I will discuss our overall approach, as well as a few of the new directions we are currently pursuing.
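The second step described above, identifying neural variance that explains behavioral variance across individuals, is commonly operationalized as a cross-validated prediction of behavioral scores from network-level neural features. The sketch below illustrates that generic logic on simulated data; it is not the lab's actual pipeline, and the feature matrix and behavioral scores are invented.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

# Simulated stand-ins: one row per subject of network-level neural features
# (e.g., between-network connectivity values) and one behavioral score per subject.
rng = np.random.default_rng(0)
n_subjects, n_features = 120, 30
X = rng.normal(size=(n_subjects, n_features))            # neural features
w = rng.normal(size=n_features) * (rng.random(n_features) < 0.2)
y = X @ w + rng.normal(size=n_subjects)                  # behavioral measure

# Held-out R^2: how much behavioral variance the neural features explain in
# subjects not used for fitting, which protects against overfitting.
model = RidgeCV(alphas=np.logspace(-3, 3, 13))
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"held-out R^2: {scores.mean():.2f} ± {scores.std():.2f}")
```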
Generative models for video games
Developing agents capable of modeling complex environments and human behaviors within them is a key goal of artificial intelligence research. Progress towards this goal has exciting potential for applications in video games, from new tools that empower game developers to realize new creative visions, to new kinds of immersive player experiences. This talk focuses on recent advances by my team at Microsoft Research towards scalable machine learning architectures that effectively capture human gameplay data. In the first part of my talk, I will focus on diffusion models as generative models of human behavior: diffusion models are best known for their impressive image generation capabilities, and I will present insights that unlock their application to imitation learning for sequential decision making. In the second part of my talk, I will discuss a recent project that takes ideas from language modeling to build a generative sequence model of an Xbox game.
Use of human systems for neuroinflammatory/neurodegenerative diseases
Modeling human brain development and disease: the role of primary cilia
Neurodevelopmental disorders (NDDs) impose a global burden, affecting an increasing number of individuals. While some causative genes have been identified, our understanding of the human-specific mechanisms involved in these disorders remains limited. Traditional gene-driven approaches for modeling brain diseases have failed to capture the diverse and convergent mechanisms at play. Centrosomes and cilia act as intermediaries between environmental and intrinsic signals, regulating cellular behavior. Mutations or dosage variations that disrupt their function have been linked to deficits in brain formation, highlighting their importance, yet their precise contributions remain largely unknown. We therefore aim to investigate whether the centrosome/cilia axis is crucial for brain development and serves as a hub for human-specific mechanisms disrupted in NDDs. To this end, we first demonstrated species-specific and cell-type-specific differences in cilia-gene expression during mouse and human corticogenesis. Then, to dissect the role of these genes, we induced their ectopic overexpression or silencing in the developing mouse cortex or in human brain organoids. Our findings suggest that manipulating cilia genes alters both the number and the position of NPCs and neurons in the developing cortex. Interestingly, primary cilium morphology is also affected: we find changes in cilium length, orientation and number that disrupt the apical belt and alter delamination profiles during development. Our results give insight into the role of primary cilia in human cortical development and address fundamental questions regarding the diversity and convergence of gene function in development and disease manifestation. This work has the potential to uncover novel pharmacological targets, facilitate personalized medicine, and improve the lives of individuals affected by NDDs through targeted cilia-based therapies.
Enabling witnesses to actively explore faces and reinstate study-test pose during a lineup increases discrimination accuracy
In 2014, the US National Research Council called for the development of new lineup technologies to increase eyewitness identification accuracy (National Research Council, 2014). In a police lineup, a suspect is presented alongside multiple individuals, known as fillers, who are known to be innocent and who resemble the suspect in physical appearance. A correct identification decision by an eyewitness can lead to a guilty suspect being convicted or an innocent suspect being exonerated from suspicion. An incorrect decision can result in the perpetrator remaining at large, or even the wrongful conviction of a mistakenly identified person. Incorrect decisions carry considerable human and financial costs, so it is essential to develop and enact lineup procedures that maximise discrimination accuracy, the witness’ ability to distinguish guilty from innocent suspects. This talk focuses on new technology and innovation in the field of eyewitness identification. We will focus on the interactive lineup, a procedure that we developed based on research and theory from the basic science literature on face perception and recognition. The interactive lineup enables witnesses to actively explore and dynamically view the lineup members, and it has been shown to increase discrimination accuracy. The talk will conclude by reflecting on emerging technological frontiers and research opportunities.
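Discrimination accuracy in this literature is typically quantified with signal-detection or ROC measures computed over target-present and target-absent lineups. The snippet below shows one simple such measure, d' derived from suspect-identification rates; the rates are hypothetical and do not come from the studies described above.

```python
from statistics import NormalDist

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    """Signal-detection sensitivity: z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Hypothetical rates: guilty-suspect IDs in target-present lineups (hits) versus
# innocent-suspect IDs in target-absent lineups (false alarms).
print(f"standard lineup    d' = {d_prime(0.55, 0.10):.2f}")
print(f"interactive lineup d' = {d_prime(0.62, 0.09):.2f}")
```

A higher d' means the procedure makes it easier to tell guilty from innocent suspects, independent of how willing witnesses are to make any identification at all.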
This decision matters: Sorting out the variables that lead to a single choice
Contrasting developmental principles of human brain development and their relevance to neurodevelopmental disorders
Mitochondrial diversity in the mouse and human brain
The basis of the mind, of mental states, and of complex behaviors is the flow of energy through microscopic and macroscopic brain structures. Energy flow through brain circuits is powered by the thousands of mitochondria populating every neuron, glial cell, and other nucleated cells across the brain-body unit. This seminar will cover emerging approaches to study the mind-mitochondria connection and present early attempts to map the distribution and diversity of mitochondria across brain tissue. In rodents, I will present convergent multimodal evidence, anchored in enzyme activities, gene expression, and animal behavior, that distinct behaviorally relevant mitochondrial phenotypes exist across large-scale mouse brain networks. Extending these findings to the human brain, I will present a developing systematic biochemical and molecular map of mitochondrial variation across cortical and subcortical brain structures, providing a foundation for understanding the origin of the complex energy patterns that give rise to the human mind.
Preserving microbial diversity as a keystone of human and planetary health
Activity-Dependent Gene Regulation in Health and Disease
In the last of this year’s Brain Prize webinars, Elizabeth Pollina (Washington University, USA), Eric Nestler (Icahn School of Medicine at Mount Sinai, USA) and Michelle Monje (Stanford University, USA) will present their work on activity-dependent gene regulation in health and disease. Each speaker will present for 25 minutes, and the webinar will conclude with an open discussion. The webinar will be moderated by the winners of the 2023 Brain Prize, Michael Greenberg, Erin Schuman and Christine Holt.
Causal role of human frontopolar cortex in information integration during complex decision making
Bernstein Conference 2024
Human-like Behavior and Neural Representations Emerge in a Goal-driven Model of Overt Visual Search for Natural Objects
Bernstein Conference 2024
Identifying patterns across brains from 10 years of human single-neuron recordings
Bernstein Conference 2024
Human iPSC-derived cell grafts promote functional recovery by molecular interaction with stroke-injured brain
FENS Forum 2024
Joint-Horizon: Random and Directed Exploration in Human Dyads
Bernstein Conference 2024
Modularity of the human connectome enables dual attentional modes by frustrating synchronization
Bernstein Conference 2024
3D Movement Analysis of the Ruhr Hand Motion Catalog of Human Center-Out Transport Trajectories
Bernstein Conference 2024
Non-Human Recognition of Orthography: How is it implemented and how does it differ from Human orthographic processing
Bernstein Conference 2024
The role of gamma oscillations in stimulus encoding during a sequential memory task in the human Medial Temporal Lobe
Bernstein Conference 2024
Balancing safety and efficiency in human decision-making.
COSYNE 2022
Behavioural probing of learned statistical structure in humans
COSYNE 2022
Deliberation gated by opportunity cost adapts to context with urgency in non-human primates
COSYNE 2022
A hierarchical representation of sequences in human entorhinal cortex
COSYNE 2022
Identifying the control strategies of monkeys and humans in a virtual balancing task
COSYNE 2022
Insight moments in neural networks and humans
COSYNE 2022
Integrating information and reward into subjective value: humans, monkeys, and the lateral habenula
COSYNE 2022
The interplay between prediction and integration processes in human perception
COSYNE 2022
Long-term consequences of actions affect human exploration in structured environments
COSYNE 2022
Multi-task representations across human cortex transform along a sensory-to-motor hierarchy
COSYNE 2022
Near-optimal time investments under uncertainty in humans, rats, and mice
COSYNE 2022
Neural Representation of Hand Gestures in Human Premotor Cortex
COSYNE 2022
Occam’s razor guides intuitive human inference
COSYNE 2022
Oscillatory and fractal biomarkers of human memory
COSYNE 2022
Thalamic role in human cognitive flexibility and routing of abstract information.
COSYNE 2022
The timescale and magnitude of 1/f aperiodic activity decrease with cortical depth in humans, macaques, and mice
COSYNE 2022
Tracking human skill learning with a hierarchical Bayesian sequence model
COSYNE 2022
Adaptive probabilistic regression for real-time motor excitability state prediction from human EEG
Bernstein Conference 2024