Computational Mechanisms of Predictive Processing in Brains and Machines
Predictive processing offers a unifying view of neural computation, proposing that brains continuously anticipate sensory input and update internal models based on prediction errors. In this talk, I will present converging evidence for the computational mechanisms underlying this framework across human neuroscience and deep neural networks. I will begin with recent work showing that large-scale distributed prediction-error encoding in the human brain directly predicts how sensory representations reorganize through predictive learning. I will then turn to PredNet, a popular predictive-coding-inspired deep network that has been widely used to model real-world biological vision systems. Using dynamic stimuli generated with our Spatiotemporal Style Transfer algorithm, we demonstrate that PredNet relies primarily on low-level spatiotemporal structure and remains insensitive to high-level content, revealing limits in its generalization capacity. Finally, I will discuss new recurrent vision models that integrate top-down feedback connections with intrinsic neural variability, uncovering a dual mechanism for robust sensory coding in which neural variability decorrelates unit responses, while top-down feedback stabilizes network dynamics. Together, these results outline how prediction error signaling and top-down feedback pathways shape adaptive sensory processing in biological and artificial systems.
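At its core, the framework posits a simple loop: predict, compare, and update on the prediction error. A minimal sketch of that loop, with an invented learning rate, signal, and noise level (this is an illustrative toy, not the models discussed in the talk):

```python
import numpy as np

# Minimal predictive-processing loop: an internal estimate mu serves as the
# prediction of sensory input; each sample nudges mu by the prediction error.
# Learning rate, signal, and noise level are invented for illustration.
def predictive_update(mu, x, lr=0.1):
    err = x - mu           # prediction error
    return mu + lr * err   # error-driven model update

rng = np.random.default_rng(0)
mu = 0.0
for _ in range(500):
    x = 2.0 + rng.normal(scale=0.5)  # noisy samples of a hidden cause at 2.0
    mu = predictive_update(mu, x)
# mu has converged near the hidden cause
```

The estimate settles near the true value because, at equilibrium, the average prediction error is zero.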
Prof. Amir Raz
We seek individuals proficient with the development and testing of novel transcranial magnetic stimulation (TMS) methods to evaluate research questions related to free will, consciousness, sense of agency, and higher brain functions.
Prof. Shu-Chen Li
The Chair of Lifespan Developmental Neuroscience investigates neurocognitive mechanisms underlying perceptual, cognitive, and motivational development across the lifespan. The main themes of our research are neurofunctional mechanisms underlying lifespan development of episodic and spatial memory, cognitive control, reward processing, decision making, perception and action. We also pursue applied research to study effects of behavioral intervention, non-invasive brain stimulation, or digital technologies in enhancing functional plasticity for individuals of different ages. We utilize a broad range of neurocognitive (e.g., EEG, fNIRS, fMRI, tDCS) and computational methods. The position announced here is embedded in a newly established research group funded by the DFG (FOR5429), with a focus on modulating brain networks for memory and learning by using focalized transcranial electrical stimulation (tES). The subproject with which this position is associated will study effects of focalized tES on value-based sequential learning at the behavioral and brain levels in adults. The data collection for this subproject will mainly be carried out at the Berlin site (Center for Cognitive Neuroscience, FU Berlin).
Prefrontal-thalamic goal-state coding segregates navigation episodes into spatially consistent parallel hippocampal maps
Biomolecular condensates as drivers of neuroinflammation
Astrocytes: From Metabolism to Cognition
Different brain cell types exhibit distinct metabolic signatures that link energy economy to cellular function. Astrocytes and neurons, for instance, diverge dramatically in their reliance on glycolysis versus oxidative phosphorylation, underscoring that metabolic fuel efficiency is not uniform across cell types. A key factor shaping this divergence is the structural organization of the mitochondrial respiratory chain into supercomplexes. Specifically, complexes I (CI) and III (CIII) form a CI–CIII supercomplex, but the degree of this assembly varies by cell type. In neurons, CI is predominantly integrated into supercomplexes, resulting in highly efficient mitochondrial respiration and minimal reactive oxygen species (ROS) generation. Conversely, in astrocytes, a larger fraction of CI remains unassembled, freely existing apart from CIII, leading to reduced respiratory efficiency and elevated mitochondrial ROS production. Despite this apparent inefficiency, astrocytes boast a highly adaptable metabolism capable of responding to diverse stressors. Their looser CI–CIII organization allows for flexible ROS signaling, which activates antioxidant programs via transcription factors like Nrf2. This modular architecture enables astrocytes not only to balance energy production but also to support neuronal health and influence complex organismal behaviors.
Scaling Up Bioimaging with Microfluidic Chips
Explore how microfluidic chips can enhance your imaging experiments by increasing control, throughput, or flexibility. In this remote, personalized workshop, participants will receive expert guidance, support and chips to run tests on their own microscopes.
OpenNeuro FitLins GLM: An Accessible, Semi-Automated Pipeline for OpenNeuro Task fMRI Analysis
In this talk, I will discuss the OpenNeuro FitLins GLM package and provide an illustration of the analytic workflow. OpenNeuro FitLins GLM is a semi-automated pipeline that reduces barriers to analyzing task-based fMRI data from OpenNeuro's 600+ task datasets. Created for psychology, psychiatry, and cognitive neuroscience researchers without extensive computational expertise, this tool automates what is otherwise a largely manual process of compiling in-house scripts for data retrieval, validation, quality control, statistical modeling, and reporting that, in some cases, may require weeks of effort. The workflow abides by open-science practices, enhances reproducibility, and incorporates community feedback for model improvement. The pipeline integrates BIDS-compliant datasets and fMRIPrep preprocessed derivatives, and dynamically creates BIDS Statistical Model specifications (with FitLins) to perform common mass univariate GLM analyses. To enhance and standardize reporting, it generates comprehensive reports that include design matrices, statistical maps, and COBIDAS-aligned reporting that is fully reproducible from the model specifications and derivatives. OpenNeuro FitLins GLM has been tested on over 30 datasets spanning 50+ unique fMRI tasks (e.g., working memory, social processing, emotion regulation, decision-making, motor paradigms), reducing analysis times from weeks to hours when using high-performance computers, thereby enabling researchers to conduct robust single-study, meta- and mega-analyses of task fMRI data with significantly improved accessibility, standardized reporting, and reproducibility.
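As a rough illustration of the mass univariate GLM that such pipelines run at every voxel, the sketch below fits one design matrix to many simulated voxel time series in a single least-squares call; the shapes, regressors, and threshold are invented for illustration and this is not the FitLins API:

```python
import numpy as np

# Toy mass univariate GLM: one design matrix regressed against every voxel's
# time series at once. Shapes, regressors, and the threshold are invented
# for illustration; this is not the FitLins API.
rng = np.random.default_rng(1)
n_tr, n_vox = 200, 50
task = (np.arange(n_tr) % 20 < 10).astype(float)  # boxcar task regressor
X = np.column_stack([task, np.ones(n_tr)])        # design matrix: task + intercept
beta_true = np.zeros(n_vox)
beta_true[:10] = 2.0                              # ten simulated "active" voxels
Y = np.outer(task, beta_true) + rng.normal(size=(n_tr, n_vox))

beta_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)  # fit all voxels in one call
active = beta_hat[0] > 1.0                        # crude threshold on task betas
```

Real pipelines add HRF convolution, confound regressors, and proper inference on the contrast maps; the point here is only that the per-voxel fit is an ordinary linear regression.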
Digital Traces of Human Behaviour: From Political Mobilisation to Conspiracy Narratives
Digital platforms generate unprecedented traces of human behaviour, offering new methodological approaches to understanding collective action, polarisation, and social dynamics. Through analysis of millions of digital traces across multiple studies, we demonstrate how online behaviours predict offline action: Brexit-related tribal discourse responds to real-world events, machine learning models achieve 80% accuracy in predicting real-world protest attendance from digital signals, and social validation through "likes" emerges as a key driver of mobilization. Extending this approach to conspiracy narratives reveals how digital traces illuminate psychological mechanisms of belief and community formation. Longitudinal analysis of YouTube conspiracy content demonstrates how narratives systematically address existential, epistemic, and social needs, while examination of alt-tech platforms shows how emotions of anger, contempt, and disgust correlate with violence-legitimating discourse, with significant differences between narratives associated with offline violence versus peaceful communities. This work establishes digital traces as both methodological innovation and theoretical lens, demonstrating that computational social science can illuminate fundamental questions about polarisation, mobilisation, and collective behaviour across contexts from electoral politics to conspiracy communities.
FLUXSynID: High-Resolution Synthetic Face Generation for Document and Live Capture Images
Synthetic face datasets are increasingly used to overcome the limitations of real-world biometric data, including privacy concerns, demographic imbalance, and high collection costs. However, many existing methods lack fine-grained control over identity attributes and fail to produce paired, identity-consistent images under structured capture conditions. In this talk, I will present FLUXSynID, a framework for generating high-resolution synthetic face datasets with user-defined identity attribute distributions and paired document-style and trusted live capture images. The dataset generated using FLUXSynID shows improved alignment with real-world identity distributions and greater diversity compared to prior work. I will also discuss how FLUXSynID’s dataset and generation tools can support research in face recognition and morphing attack detection (MAD), enhancing model robustness in both academic and practical applications.
OpenSPM: A Modular Framework for Scanning Probe Microscopy
OpenSPM aims to democratize innovation in the field of scanning probe microscopy (SPM), which is currently dominated by a few proprietary, closed systems that limit user-driven development. Our platform includes a high-speed OpenAFM head and base optimized for small cantilevers, an OpenAFM controller, a high-voltage amplifier, and interfaces compatible with several commercial AFM systems such as the Bruker Multimode, Nanosurf DriveAFM, Witec Alpha SNOM, Zeiss FIB-SEM XB550, and Nenovision Litescope. We have created a fully documented and community-driven OpenSPM platform, with training resources and sourcing information, which has already enabled the construction of more than 15 systems outside our lab. The controller is integrated with open-source tools like Gwyddion, HDF5, and Pycroscopy. We have also engaged external companies, two of which are integrating our controller into their products or interfaces. We see growing interest in applying parts of the OpenSPM platform to related techniques such as correlated microscopy, nanoindentation, and scanning electron/confocal microscopy. To support this, we are developing more generic and modular software, alongside a structured development workflow. A key feature of the OpenSPM system is its Python-based API, which makes the platform fully scriptable and ideal for AI and machine learning applications. This enables, for instance, automatic control and optimization of PID parameters, setpoints, and experiment workflows. With a growing contributor base and industry involvement, OpenSPM is well positioned to become a global, open platform for next-generation SPM innovation.
Neural control of internal affective states
Neural circuits underlying sleep structure and functions
Sleep is an active state critical for processing emotional memories encoded during waking in both humans and animals. There is a remarkable overlap between the brain structures and circuits active during sleep, particularly rapid eye-movement (REM) sleep, and those encoding emotions. Accordingly, disruptions in sleep quality or quantity, including REM sleep, are often associated with, and precede the onset of, nearly all affective psychiatric and mood disorders. In this context, a major biomedical challenge is to better understand the underlying mechanisms of the relationship between (REM) sleep and emotion encoding to improve treatments for mental health. This lecture will summarize our investigation of the cellular and circuit mechanisms underlying sleep architecture, sleep oscillations, and local brain dynamics across sleep-wake states using electrophysiological recordings combined with single-cell calcium imaging or optogenetics. The presentation will detail the discovery of a 'somato-dendritic decoupling' in prefrontal cortex pyramidal neurons underlying REM sleep-dependent stabilization of optimal emotional memory traces. This decoupling reflects a tonic inhibition at the somas of pyramidal cells, occurring simultaneously with a selective disinhibition of their dendritic arbors during REM sleep. Recent findings on REM sleep-dependent subcortical inputs and neuromodulation of this decoupling will be discussed in the context of synaptic plasticity and the optimization of emotional responses in the maintenance of mental health.
Developmental and evolutionary perspectives on thalamic function
Brain organization and function are complex topics. We are good at establishing correlates of perception and behavior across forebrain circuits, as well as manipulating activity in these circuits to affect behavior. However, we still lack good models for the large-scale organization and function of the forebrain. What are the contributions of the cortex, basal ganglia, and thalamus to behavior? In addressing these questions, we often ascribe function to each area as if it were an independent processing unit. However, we know from the anatomy that the cortex, basal ganglia, and thalamus are massively interconnected in a large network. One way to generate insight into these questions is to consider the evolution and development of forebrain systems. In this talk, I will discuss the developmental and evolutionary (comparative anatomy) data on the thalamus, and how it fits within forebrain networks. I will address questions including: when did the thalamus appear in evolution, how is the thalamus organized across the vertebrate lineage, and how can changes in the organization of forebrain networks affect behavioral repertoires?
Neurobiological constraints on learning: bug or feature?
Understanding how brains learn requires bridging evidence across scales—from behaviour and neural circuits to cells, synapses, and molecules. In our work, we use computational modelling and data analysis to explore how the physical properties of neurons and neural circuits constrain learning. These include limits imposed by brain wiring, energy availability, molecular noise, and the 3D structure of dendritic spines. In this talk I will describe one such project testing if wiring motifs from fly brain connectomes can improve performance of reservoir computers, a type of recurrent neural network. The hope is that these insights into brain learning will lead to improved learning algorithms for artificial systems.
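A reservoir computer of the kind tested in this project can be sketched as a minimal echo state network: a fixed recurrent network whose only trained part is a linear readout. The random recurrent matrix W below is a stand-in for the wiring under study; a connectome-derived motif would be substituted in its place. Sizes, scaling, and the delayed-recall task are illustrative assumptions, not the models from the talk:

```python
import numpy as np

# Minimal echo state network (reservoir computer). The random recurrent
# matrix W is a placeholder where a connectome-derived wiring motif would
# be substituted. Sizes and the delayed-recall task are illustrative.
rng = np.random.default_rng(2)
N = 100
W = rng.normal(size=(N, N)) / np.sqrt(N)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # set spectral radius to 0.9
W_in = rng.normal(size=N)

def run_reservoir(u):
    """Drive the reservoir with input sequence u; collect hidden states."""
    x = np.zeros(N)
    states = []
    for ut in u:
        x = np.tanh(W @ x + W_in * ut)
        states.append(x.copy())
    return np.array(states)

# Only the linear readout is trained: here, to recall the input 3 steps back.
u = rng.uniform(-1, 1, 500)
S = run_reservoir(u)
target = np.roll(u, 3)
W_out, *_ = np.linalg.lstsq(S[50:], target[50:], rcond=None)  # skip warm-up
pred = S[50:] @ W_out
```

Because the recurrent weights stay fixed, swapping in different wiring motifs and comparing readout performance on the same tasks isolates the contribution of the wiring itself.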
Astrocytes release glutamate by regulated exocytosis in health and disease
Vladimir Parpura, International Translational Neuroscience Research Institute, Zhejiang Chinese Medical University, Hangzhou, P.R. China. Parpura will present evidence that astrocytes, a subtype of glial cells in the brain, can exocytotically release the neurotransmitter glutamate, and will explain how this release is regulated. The spatiotemporal characteristics of the vesicular fusion that underlies glutamate release in astrocytes will be discussed. He will also present data on a translational project in which this release pathway is targeted for the treatment of glioblastoma, the deadliest brain cancer.
Restoring Sight to the Blind: Effects of Structural and Functional Plasticity
Visual restoration after decades of blindness is now becoming possible by means of retinal and cortical prostheses, as well as emerging stem cell and gene therapeutic approaches. After restoring visual perception, however, a key question remains. Are there optimal means and methods for retraining the visual cortex to process visual inputs, and for learning or relearning to “see”? Up to this point, it has been largely assumed that if the sensory loss is visual, then the rehabilitation focus should also be primarily visual. However, the other senses play a key role in visual rehabilitation due to the plastic repurposing of visual cortex during blindness by audition and somatosensation, and also to the reintegration of restored vision with the other senses. I will present multisensory neuroimaging results, cortical thickness changes, as well as behavioral outcomes for patients with Retinitis Pigmentosa (RP), which causes blindness by destroying photoreceptors in the retina. These patients have had their vision partially restored by the implantation of a retinal prosthesis, which electrically stimulates still viable retinal ganglion cells in the eye. Our multisensory and structural neuroimaging and behavioral results suggest a new, holistic concept of visual rehabilitation that leverages rather than neglects audition, somatosensation, and other sensory modalities.
Single-neuron correlates of perception and memory in the human medial temporal lobe
The human medial temporal lobe contains neurons that respond selectively to the semantic contents of a presented stimulus. These "concept cells" may respond to very different pictures of a given person and even to their written or spoken name. Their response latency is far longer than necessary for object recognition, they follow subjective, conscious perception, and they are found in brain regions that are crucial for declarative memory formation. It has thus been hypothesized that they may represent the semantic "building blocks" of episodic memories. In this talk I will present data from single unit recordings in the hippocampus, entorhinal cortex, parahippocampal cortex, and amygdala during paradigms involving object recognition and conscious perception as well as encoding of episodic memories in order to characterize the role of concept cells in these cognitive functions.
Fear learning induces synaptic potentiation between engram neurons in the rat lateral amygdala
This study by Marios Abatis et al. demonstrates how fear conditioning strengthens synaptic connections between engram cells in the lateral amygdala, revealed through optogenetic identification of neuronal ensembles and electrophysiological measurements. The work provides crucial insights into memory formation mechanisms at the synaptic level, with implications for understanding anxiety disorders and developing targeted interventions. Presented by Dr. Kenneth Hayworth, this journal club will explore the paper's methodology linking engram cell reactivation with synaptic plasticity measurements, and discuss implications for memory decoding research.
Computational modelling of ocular pharmacokinetics
Pharmacokinetics in the eye is an important factor for the success of ocular drug delivery and treatment. Pharmacokinetic features determine the feasible routes of drug administration, dosing levels and intervals, and they influence eventual drug responses. Several physical, biochemical, and flow-related barriers limit drug exposure of anterior and posterior ocular target tissues during local (topical, subconjunctival, intravitreal) and systemic (intravenous, per oral) administration. Mathematical models integrate the joint impact of various barriers on ocular pharmacokinetics (PK), thereby helping drug development. The models are useful in describing (top-down) and predicting (bottom-up) the pharmacokinetics of ocular drugs. This is useful also in the design and development of new drug molecules and drug delivery systems. Furthermore, the models can be used for interspecies translation and probing of disease effects on pharmacokinetics. In this lecture, ocular pharmacokinetics and current modelling methods (noncompartmental analyses, compartmental, physiologically based, and finite element models) are introduced. Future challenges are also highlighted (e.g., intra-tissue distribution, prediction of drug responses, active transport).
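As a minimal example of the compartmental approach mentioned above, consider a one-compartment model with first-order elimination after a bolus dose; the rate constant, dose, and volume below are hypothetical placeholders, not ocular measurements:

```python
import numpy as np

# One-compartment model with first-order elimination after a bolus dose:
# dC/dt = -k_el * C, so C(t) = C0 * exp(-k_el * t).
# The rate constant, dose, and volume are hypothetical placeholders,
# not ocular measurements.
k_el = 0.3                      # elimination rate constant (1/h)
dose, volume = 1.0, 4.0e-3      # dose (mg) and distribution volume (L)
C0 = dose / volume              # initial concentration (mg/L)

t = np.linspace(0.0, 24.0, 100)         # hours
C = C0 * np.exp(-k_el * t)              # analytical concentration profile
half_life = np.log(2) / k_el            # time for C to halve (h)
```

Physiologically based and finite element models extend this idea to many coupled compartments and spatially resolved tissues, but each still rests on mass-balance equations of this form.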
Deepfake emotional expressions trigger the uncanny valley brain response, even when they are not recognised as fake
Facial expressions are inherently dynamic, and our visual system is sensitive to subtle changes in their temporal sequence. However, researchers often use dynamic morphs of photographs—simplified, linear representations of motion—to study the neural correlates of dynamic face perception. To explore the brain's sensitivity to natural facial motion, we constructed a novel dynamic face database using generative neural networks, trained on a verified set of video-recorded emotional expressions. The resulting deepfakes, consciously indistinguishable from videos, enabled us to separate biological motion from photorealistic form. Results showed that conventional dynamic morphs elicit distinct responses in the brain compared to videos and photos, suggesting they violate expectations (N400) and have reduced social salience (late positive potential). This suggests that dynamic morphs misrepresent facial dynamism, resulting in misleading insights about the neural and behavioural correlates of face perception. Deepfakes and videos elicited largely similar neural responses, suggesting they could be used as a proxy for real faces in vision research, where video recordings cannot be experimentally manipulated. And yet, despite being consciously undetectable as fake, deepfakes elicited an expectation violation response in the brain. This points to a neural sensitivity to naturalistic facial motion, beyond conscious awareness. Despite some differences in neural responses, the realism and manipulability of deepfakes make them a valuable asset for research where videos are unfeasible. Using these stimuli, we proposed a novel marker for the conscious perception of naturalistic facial motion – frontal delta activity – which was elevated for videos and deepfakes, but not for photos or dynamic morphs.
Decoding ketamine: Neurobiological mechanisms underlying its rapid antidepressant efficacy
Unlike traditional monoamine-based antidepressants that require weeks to exert effects, ketamine alleviates depression within hours, though its clinical use is limited by side effects. While ketamine was initially thought to work primarily through NMDA receptor (NMDAR) inhibition, our research reveals a more complex mechanism. We demonstrate that NMDAR inhibition alone cannot explain ketamine's sustained antidepressant effects, as other NMDAR antagonists like MK-801 lack similar efficacy. Instead, the (2R,6R)-hydroxynorketamine (HNK) metabolite appears critical, exhibiting antidepressant effects without ketamine's side effects. Paradoxically, our findings suggest an inverted U-shaped dose-response relationship where excessive NMDAR inhibition may actually impede antidepressant efficacy, while some level of NMDAR activation is necessary. The antidepressant actions of ketamine and (2R,6R)-HNK require AMPA receptor activation, leading to synaptic potentiation and upregulation of AMPA receptor subunits GluA1 and GluA2. Furthermore, NMDAR subunit GluN2A appears necessary and possibly sufficient for these effects. This research establishes NMDAR-GluN2A activation as a common downstream effector for rapid-acting antidepressants, regardless of their initial targets, offering promising directions for developing next-generation antidepressants with improved efficacy and reduced side effects.
Impact of High Fat Diet on Central Cardiac Circuits: When The Wanderer is Lost
Cardiac vagal motor drive originates in the brainstem's cardiac vagal motor neurons (CVNs). Despite well-established cardioinhibitory functions in health, our understanding of CVNs in disease is limited. There is a clear connection of cardiovascular regulation with metabolic and energy expenditure systems. Using high fat diet as a model, this talk will explore how metabolic dysfunction impacts the regulation of cardiac tissue through robust inhibition of CVNs. Specifically, it will present an often overlooked modality of inhibition, tonic gamma-aminobutyric acid (GABA) A-type neurotransmission, using an array of techniques from single cell patch clamp electrophysiology to transgenic in vivo whole animal physiology. It will also highlight a unique interaction with the delta isoform of protein kinase C to facilitate GABA A-type receptor expression.
Cognitive maps as expectations learned across episodes – a model of the two dentate gyrus blades
How can the hippocampal system transition from episodic one-shot learning to a multi-shot learning regime and what is the utility of the resultant neural representations? This talk will explore the role of the dentate gyrus (DG) anatomy in this context. The canonical DG model suggests it performs pattern separation. More recent experimental results challenge this standard model, suggesting DG function is more complex and also supports the precise binding of objects and events to space and the integration of information across episodes. Very recent studies attribute pattern separation and pattern integration to anatomically distinct parts of the DG (the suprapyramidal blade vs the infrapyramidal blade). We propose a computational model that investigates this distinction. In the model the two processing streams (potentially localized in separate blades) contribute to the storage of distinct episodic memories, and the integration of information across episodes, respectively. The latter forms generalized expectations across episodes, eventually forming a cognitive map. We train the model with two data sets, MNIST and plausible entorhinal cortex inputs. The comparison between the two streams allows for the calculation of a prediction error, which can drive the storage of poorly predicted memories and the forgetting of well-predicted memories. We suggest that differential processing across the DG aids in the iterative construction of spatial cognitive maps to serve the generation of location-dependent expectations, while at the same time preserving episodic memory traces of idiosyncratic events.
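The prediction-error-gated storage rule described above can be sketched abstractly: episodes close to a learned expectation are not stored, while poorly predicted ones are. The prototype, noise level, and threshold below are illustrative stand-ins, not the model's actual implementation:

```python
import numpy as np

# Sketch of prediction-error-gated storage: episodes close to the learned
# expectation (prototype) are not stored; poorly predicted ones are.
# Prototype, noise level, and threshold are illustrative stand-ins.
rng = np.random.default_rng(3)

prototype = np.ones(10)     # generalized expectation learned across episodes
store = []
threshold = 1.5             # prediction-error threshold for storage

for _ in range(20):
    episode = prototype + rng.normal(scale=0.2, size=10)  # familiar episode
    error = np.linalg.norm(episode - prototype)           # prediction error
    if error > threshold:
        store.append(episode)                             # poorly predicted: store

novel = -np.ones(10)                                      # unexpected pattern
if np.linalg.norm(novel - prototype) > threshold:
    store.append(novel)
```

In the talk's model the expectation itself is learned across episodes rather than fixed, but the gating principle is the same: storage effort is spent on what the cognitive map fails to predict.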
Regulation of cortical circuit maturation and plasticity by oligodendrocytes and myelin
A Novel Neurophysiological Approach to Assessing Distractibility within the General Population
Vulnerability to distraction varies across the general population and significantly affects one’s capacity to stay focused on and successfully complete the task at hand, whether at school, on the road, or at work. In this talk, I will begin by discussing how distractibility is typically assessed in the literature and introduce our innovative ERP approach to measuring it. Since distractibility is a cardinal symptom of ADHD, I will introduce its most widely used paper-and-pencil screening tool for the general population as external validation. Following that, I will present the Load Theory of Attention and explain how we used perceptual load to test the reliability of our neural marker of distractibility. Finally, I will highlight potential future applications of this marker in clinical and educational settings.
Digital Minds: Brain Development in the Age of Technology
Digital Minds: Brain Development in the Age of Technology examines how our increasingly connected world shapes mental and cognitive health. From screen time and social media to virtual interactions, this seminar delves into the latest research on how technology influences brain development, relationships, and emotional well-being. Join us to explore strategies for harnessing technology's benefits while mitigating its potential challenges, empowering you to thrive in a digital age.
Vision for perception versus vision for action: dissociable contributions of visual sensory drives from primary visual cortex and superior colliculus neurons to orienting behaviors
The primary visual cortex (V1) directly projects to the superior colliculus (SC) and is believed to provide sensory drive for eye movements. Consistent with this, a majority of saccade-related SC neurons also exhibit short-latency, stimulus-driven visual responses, which are additionally feature-tuned. However, direct neurophysiological comparisons of the visual response properties of the two anatomically-connected brain areas are surprisingly lacking, especially with respect to active looking behaviors. I will describe a series of experiments characterizing visual response properties in primate V1 and SC neurons, exploring feature dimensions like visual field location, spatial frequency, orientation, contrast, and luminance polarity. The results suggest a substantial, qualitative reformatting of SC visual responses when compared to V1. For example, SC visual response latencies are actively delayed, independent of individual neuron tuning preferences, as a function of increasing spatial frequency, and this phenomenon is directly correlated with saccadic reaction times. Such “coarse-to-fine” rank ordering of SC visual response latencies as a function of spatial frequency is much weaker in V1, suggesting a dissociation of V1 responses from saccade timing. Consistent with this, when we next explored trial-by-trial correlations of individual neurons’ visual response strengths and visual response latencies with saccadic reaction times, we found that most SC neurons exhibited, on a trial-by-trial basis, stronger and earlier visual responses for faster saccadic reaction times. Moreover, these correlations were substantially higher for visual-motor neurons in the intermediate and deep layers than for more superficial visual-only neurons. No such correlations existed systematically in V1. Thus, visual responses in SC and V1 serve fundamentally different roles in active vision: V1 jumpstarts sensing and image analysis, but SC jumpstarts moving. 
I will finish by demonstrating, using V1 reversible inactivation, that, despite reformatting of signals from V1 to the brainstem, V1 is still a necessary gateway for visually-driven oculomotor responses to occur, even for the most reflexive of eye movement phenomena. This is a fundamental difference from rodent studies demonstrating clear V1-independent processing in afferent visual pathways bypassing the geniculostriate one, and it demonstrates the importance of multi-species comparisons in the study of oculomotor control.
Circuit Mechanisms of Remote Memory
Memories of emotionally-salient events are long-lasting, guiding behavior from minutes to years after learning. The prelimbic cortex (PL) is required for fear memory retrieval across time and is densely interconnected with many subcortical and cortical areas involved in recent and remote memory recall, including the temporal association area (TeA). While the behavioral expression of a memory may remain constant over time, the neural activity mediating memory-guided behavior is dynamic. In PL, different neurons underlie recent and remote memory retrieval and remote memory-encoding neurons have preferential functional connectivity with cortical association areas, including TeA. TeA plays a preferential role in remote compared to recent memory retrieval, yet how TeA circuits drive remote memory retrieval remains poorly understood. Here we used a combination of activity-dependent neuronal tagging, viral circuit mapping and miniscope imaging to investigate the role of the PL-TeA circuit in fear memory retrieval across time in mice. We show that PL memory ensembles recruit PL-TeA neurons across time, and that PL-TeA neurons have enhanced encoding of salient cues and behaviors at remote timepoints. This recruitment depends upon ongoing synaptic activity in the learning-activated PL ensemble. Our results reveal a novel circuit encoding remote memory and provide insight into the principles of memory circuit reorganization across time.
Analyzing Network-Level Brain Processing and Plasticity Using Molecular Neuroimaging
Behavior and cognition depend on the integrated action of neural structures and populations distributed throughout the brain. We recently developed a set of molecular imaging tools that enable multiregional processing and plasticity in neural networks to be studied at a brain-wide scale in rodents and nonhuman primates. Here we will describe how a novel genetically encoded activity reporter enables information flow in virally labeled neural circuitry to be monitored by fMRI. Using the reporter to perform functional imaging of synaptically defined neural populations in the rat somatosensory system, we show how activity is transformed within brain regions to yield characteristics specific to distinct output projections. We also show how this approach enables regional activity to be modeled in terms of inputs, in a paradigm that we are extending to address circuit-level origins of functional specialization in marmoset brains. In the second part of the talk, we will discuss how another genetic tool for MRI enables systematic studies of the relationship between anatomical and functional connectivity in the mouse brain. We show that variations in physical and functional connectivity can be dissociated both across individual subjects and over experience. We also use the tool to examine brain-wide relationships between plasticity and activity during an opioid treatment. This work demonstrates the possibility of studying diverse brain-wide processing phenomena using molecular neuroimaging.
Mouse Motor Cortex Circuits and Roles in Oromanual Behavior
I’m interested in structure-function relationships in neural circuits and behavior, with a focus on motor and somatosensory areas of the mouse’s cortex involved in controlling forelimb movements. In one line of investigation, we take a bottom-up, cellularly oriented approach and use optogenetics, electrophysiology, and related slice-based methods to dissect cell-type-specific circuits of corticospinal and other neurons in forelimb motor cortex. In another, we take a top-down ethologically oriented approach and analyze the kinematics and cortical correlates of “oromanual” dexterity as mice handle food. I'll discuss recent progress on both fronts.
Rethinking Attention: Dynamic Prioritization
Decades of research on understanding the mechanisms of attentional selection have focused on identifying the units (representations) on which attention operates in order to guide prioritized sensory processing. These attentional units fit neatly within our understanding of how attention is allocated in a top-down, bottom-up, or historical fashion. In this talk, I will focus on attentional phenomena that are not easily accommodated within current theories of attentional selection – the “attentional platypuses,” as they allude to the observation that within biological taxonomies the platypus fits into neither the mammal nor the bird category. Similarly, attentional phenomena that do not fit neatly within current attentional models suggest that those models need to be revised. I list a few instances of the “attentional platypuses” and then offer a new approach, Dynamically Weighted Prioritization, stipulating that multiple factors impinge on the attentional priority map, each with a corresponding weight. The interaction between factors and their corresponding weights determines the current state of the priority map, which subsequently constrains and guides attention allocation. I propose that this new approach be considered a supplement to existing models of attention, especially those that emphasize categorical organizations.
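The core idea of a dynamically weighted priority map can be illustrated with a minimal sketch, assuming a simple linear combination of factor maps; the factor names, weights, and combination rule below are illustrative assumptions, not the model's actual specification:

```python
import numpy as np

def priority_map(factor_maps, weights):
    """Combine factor maps (e.g. bottom-up salience, top-down goals,
    selection history) into one priority map via a weighted sum.
    Factor names and the linear rule are illustrative assumptions."""
    maps = np.stack(list(factor_maps.values()))      # (n_factors, H, W)
    w = np.array([weights[k] for k in factor_maps])  # weights in matching order
    return np.tensordot(w, maps, axes=1)             # weighted sum -> (H, W)

# Toy example: a 2x2 "visual field" with three hypothetical factors.
factors = {
    "bottom_up": np.array([[1.0, 0.0], [0.0, 0.0]]),
    "top_down":  np.array([[0.0, 1.0], [0.0, 0.0]]),
    "history":   np.array([[0.0, 0.0], [1.0, 0.0]]),
}
weights = {"bottom_up": 0.5, "top_down": 0.3, "history": 0.2}
p = priority_map(factors, weights)
target = np.unravel_index(np.argmax(p), p.shape)  # location attention selects
```

Changing the weights (e.g. boosting "history") shifts the winning location, which is the sense in which the prioritization is dynamic.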
Mapping the neural dynamics of dominance and defeat
Social experiences can have lasting changes on behavior and affective state. In particular, repeated wins and losses during fighting can facilitate and suppress future aggressive behavior, leading to persistent high aggression or low aggression states. We use a combination of techniques for multi-region neural recording, perturbation, behavioral analysis, and modeling to understand how nodes in the brain’s subcortical “social decision-making network” encode and transform aggressive motivation into action, and how these circuits change following social experience.
The Brain Prize winners' webinar
This webinar brings together three leaders in theoretical and computational neuroscience—Larry Abbott, Haim Sompolinsky, and Terry Sejnowski—to discuss how neural circuits generate fundamental aspects of the mind. Abbott illustrates mechanisms in electric fish that differentiate self-generated electric signals from external sensory cues, showing how predictive plasticity and two-stage signal cancellation mediate a sense of self. Sompolinsky explores attractor networks, revealing how discrete and continuous attractors can stabilize activity patterns, enable working memory, and incorporate chaotic dynamics underlying spontaneous behaviors. He further highlights the concept of object manifolds in high-level sensory representations and raises open questions on integrating connectomics with theoretical frameworks. Sejnowski bridges these motifs with modern artificial intelligence, demonstrating how large-scale neural networks capture language structures through distributed representations that parallel biological coding. Together, their presentations emphasize the synergy between empirical data, computational modeling, and connectomics in explaining the neural basis of cognition—offering insights into perception, memory, language, and the emergence of mind-like processes.
Learning and Memory
This webinar on learning and memory features three experts—Nicolas Brunel, Ashok Litwin-Kumar, and Julijana Gjorgieva—who present theoretical and computational approaches to understanding how neural circuits acquire and store information across different scales. Brunel discusses calcium-based plasticity and how standard “Hebbian-like” plasticity rules inferred from in vitro or in vivo datasets constrain synaptic dynamics, aligning with classical observations (e.g., STDP) and explaining how synaptic connectivity shapes memory. Litwin-Kumar explores insights from the fruit fly connectome, emphasizing how the mushroom body—a key site for associative learning—implements a high-dimensional, random representation of sensory features. Convergent dopaminergic inputs gate plasticity, reflecting a high-dimensional “critic” that refines behavior. Feedback loops within the mushroom body further reveal sophisticated interactions between learning signals and action selection. Gjorgieva examines how activity-dependent plasticity rules shape circuitry from the subcellular (e.g., synaptic clustering on dendrites) to the cortical network level. She demonstrates how spontaneous activity during development, Hebbian competition, and inhibitory-excitatory balance collectively establish connectivity motifs responsible for key computations such as response normalization.
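The timing-dependent plasticity rules mentioned above (e.g. STDP) can be hinted at with a minimal pair-based sketch; the amplitudes and time constant below are generic textbook-style assumptions, not values from the talk:

```python
import numpy as np

def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for a single pre/post spike pair under a
    pair-based STDP rule. delta_t = t_post - t_pre in ms.
    Parameter values are illustrative assumptions."""
    if delta_t > 0:                              # pre before post -> LTP
        return a_plus * np.exp(-delta_t / tau)
    return -a_minus * np.exp(delta_t / tau)      # post before pre -> LTD

ltp = stdp_dw(10.0)   # pre leads post by 10 ms: potentiation
ltd = stdp_dw(-10.0)  # post leads pre by 10 ms: depression
```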
Sensory cognition
This webinar features presentations from SueYeon Chung (New York University) and Srinivas Turaga (HHMI Janelia Research Campus) on theoretical and computational approaches to sensory cognition. Chung introduced a “neural manifold” framework to capture how high-dimensional neural activity is structured into meaningful manifolds reflecting object representations. She demonstrated that manifold geometry—shaped by radius, dimensionality, and correlations—directly governs a population’s capacity for classifying or separating stimuli under nuisance variations. Applying these ideas as a data analysis tool, she showed how measuring object-manifold geometry can explain transformations along the ventral visual stream and suggested that manifold principles also yield better self-supervised neural network models resembling mammalian visual cortex. Turaga described simulating the entire fruit fly visual pathway using its connectome, modeling 64 key cell types in the optic lobe. His team’s systematic approach—combining sparse connectivity from electron microscopy with simple dynamical parameters—recapitulated known motion-selective responses and produced novel testable predictions. Together, these studies underscore the power of combining connectomic detail, task objectives, and geometric theories to unravel neural computations bridging from stimuli to cognitive functions.
Decision and Behavior
This webinar addressed computational perspectives on how animals and humans make decisions, spanning normative, descriptive, and mechanistic models. Sam Gershman (Harvard) presented a capacity-limited reinforcement learning framework in which policies are compressed under an information bottleneck constraint. This approach predicts pervasive perseveration, stimulus‐independent “default” actions, and trade-offs between complexity and reward. Such policy compression reconciles observed action stochasticity and response time patterns with an optimal balance between learning capacity and performance. Jonathan Pillow (Princeton) discussed flexible descriptive models for tracking time-varying policies in animals. He introduced dynamic Generalized Linear Models (Sidetrack) and hidden Markov models (GLM-HMMs) that capture day-to-day and trial-to-trial fluctuations in choice behavior, including abrupt switches between “engaged” and “disengaged” states. These models provide new insights into how animals’ strategies evolve under learning. Finally, Kenji Doya (OIST) highlighted the importance of unifying reinforcement learning with Bayesian inference, exploring how cortical-basal ganglia networks might implement model-based and model-free strategies. He also described Japan’s Brain/MINDS 2.0 and Digital Brain initiatives, aiming to integrate multimodal data and computational principles into cohesive “digital brains.”
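The GLM-HMM idea described above can be sketched generatively: each latent state has its own Bernoulli-GLM mapping stimuli to choices, and states switch via a sticky transition matrix. The two-state labels, weights, and transition probabilities below are hand-picked illustrative assumptions, not fitted values from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

P = np.array([[0.98, 0.02],    # "engaged" state is sticky
              [0.05, 0.95]])   # "disengaged" state
W = np.array([[4.0, 0.0],      # engaged: strong stimulus weight, no bias
              [0.0, 1.5]])     # disengaged: ignores stimulus, biased choice

def simulate(n_trials=500):
    """Generate choices from a 2-state GLM-HMM (illustrative parameters)."""
    states, choices = [], []
    z = 0
    for _ in range(n_trials):
        x = rng.uniform(-1, 1)                       # signed stimulus strength
        w_stim, w_bias = W[z]
        p_right = 1.0 / (1.0 + np.exp(-(w_stim * x + w_bias)))
        choices.append(rng.random() < p_right)       # Bernoulli choice
        states.append(z)
        z = rng.choice(2, p=P[z])                    # latent state transition
    return np.array(states), np.array(choices)

states, choices = simulate()
```

Fitting such a model to real choice data (the direction described in the talk) would then recover the state sequence and per-state weights from choices alone.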
Understanding the complex behaviors of the ‘simple’ cerebellar circuit
Every movement we make requires us to precisely coordinate muscle activity across our body in space and time. In this talk I will describe our efforts to understand how the brain generates flexible, coordinated movement. We have taken a behavior-centric approach to this problem, starting with the development of quantitative frameworks for mouse locomotion (LocoMouse; Machado et al., eLife 2015, 2020) and locomotor learning, in which mice adapt their locomotor symmetry in response to environmental perturbations (Darmohray et al., Neuron 2019). Combined with genetic circuit dissection, these studies reveal specific, cerebellum-dependent features of these complex, whole-body behaviors. This provides a key entry point for understanding how neural computations within the highly stereotyped cerebellar circuit support the precise coordination of muscle activity in space and time. Finally, I will present recent unpublished data that provide surprising insights into how cerebellar circuits flexibly coordinate whole-body movements in dynamic environments.
Brain-Wide Compositionality and Learning Dynamics in Biological Agents
Biological agents continually reconcile the internal states of their brain circuits with incoming sensory and environmental evidence to evaluate when and how to act. The brains of biological agents, including animals and humans, exploit many evolutionary innovations, chiefly modularity—observable at the level of anatomically-defined brain regions, cortical layers, and cell types among others—that can be repurposed in a compositional manner to endow the animal with a highly flexible behavioral repertoire. Accordingly, their behaviors show their own modularity, yet such behavioral modules seldom correspond directly to traditional notions of modularity in brains. It remains unclear how to link neural and behavioral modularity in a compositional manner. We propose a comprehensive framework—compositional modes—to identify overarching compositionality spanning specialized submodules, such as brain regions. Our framework directly links the behavioral repertoire with distributed patterns of population activity, brain-wide, at multiple concurrent spatial and temporal scales. Using whole-brain recordings of zebrafish brains, we introduce an unsupervised pipeline based on neural network models, constrained by biological data, to reveal highly conserved compositional modes across individuals despite the naturalistic (spontaneous or task-independent) nature of their behaviors. These modes provided a scaffolding for other modes that account for the idiosyncratic behavior of each fish. We then demonstrate experimentally that compositional modes can be manipulated in a consistent manner by behavioral and pharmacological perturbations. Our results demonstrate that even natural behavior in different individuals can be decomposed and understood using a relatively small number of neurobehavioral modules—the compositional modes—and elucidate a compositional neural basis of behavior. 
This approach aligns with recent progress in understanding how reasoning capabilities and internal representational structures develop over the course of learning or training, offering insights into the modularity and flexibility in artificial and biological agents.
Unmotivated bias
In this talk, I will explore how social affective biases arise even in the absence of motivational factors, as an emergent outcome of the basic structure of social learning. In several studies, we found that initial negative interactions with some members of a group can cause subsequent avoidance of the entire group, and that this avoidance perpetuates stereotypes. Additional cognitive modeling revealed that approach and avoidance behavior based on biased beliefs not only influences the evaluative (positive or negative) impressions of group members, but also shapes the depth of the cognitive representations available to learn about individuals. In other words, people have richer cognitive representations of members of groups that are not avoided, akin to individualized vs. group-level categories. I will end by presenting a series of multi-agent reinforcement learning simulations that demonstrate the emergence of these social-structural feedback loops in the development and maintenance of affective biases.
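The feedback loop described above, in which avoidance prevents biased beliefs from ever being corrected, can be sketched with a minimal single-agent simulation (a variant of the classic "hot stove" effect); the group labels, learning rate, and payoffs are illustrative assumptions, not the talk's multi-agent setup:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(n_trials=2000, lr=0.1, true_mean=0.5):
    """Two groups with identical true payoff; group B starts from a bad
    first impression. Greedy approach/avoidance means the avoided
    group's estimate is never updated. Parameters are illustrative."""
    q = np.array([0.5, -1.0])               # value estimates for groups A, B
    visits = np.zeros(2, dtype=int)
    for _ in range(n_trials):
        g = int(np.argmax(q))               # approach the group believed better
        visits[g] += 1
        r = true_mean + rng.normal(0, 0.1)  # both groups equally rewarding
        q[g] += lr * (r - q[g])             # only the approached group updates
    return q, visits

q, visits = simulate()
# Group B keeps its unfounded negative estimate because it is never sampled.
```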
Decomposing motivation into value and salience
Humans and other animals approach reward and avoid punishment and pay attention to cues predicting these events. Such motivated behavior thus appears to be guided by value, which directs behavior towards or away from positively or negatively valenced outcomes. Moreover, it is facilitated by (top-down) salience, which enhances attention to behaviorally relevant learned cues predicting the occurrence of valenced outcomes. Using human neuroimaging, we recently separated value (ventral striatum, posterior ventromedial prefrontal cortex) from salience (anterior ventromedial cortex, occipital cortex) in the domain of liquid reward and punishment. Moreover, we investigated potential drivers of learned salience: the probability and uncertainty with which valenced and non-valenced outcomes occur. We find that the brain dissociates valenced from non-valenced probability and uncertainty, which indicates that reinforcement matters for the brain, in addition to information provided by probability and uncertainty alone, regardless of valence. Finally, we assessed learning signals (unsigned prediction errors) that may underpin the acquisition of salience. Particularly the insula appears to be central for this function, encoding a subjective salience prediction error, similarly at the time of positively and negatively valenced outcomes. However, it appears to employ domain-specific time constants, leading to stronger salience signals in the aversive than the appetitive domain at the time of cues. These findings explain why previous research associated the insula with both valence-independent salience processing and with preferential encoding of the aversive domain. More generally, the distinction of value and salience appears to provide a useful framework for capturing the neural basis of motivated behavior.
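The value/salience distinction above can be summarized in a minimal learning-signal sketch: the signed prediction error updates value, while its unsigned magnitude is a candidate salience signal. The update rule and learning rate are illustrative assumptions, not the study's fitted model:

```python
def update(v, outcome, lr=0.2):
    """One learning step. Returns updated value, the signed prediction
    error (value signal), and its absolute magnitude (salience signal).
    Learning rate is an illustrative assumption."""
    pe = outcome - v          # signed prediction error -> drives value
    salience_pe = abs(pe)     # unsigned prediction error -> drives salience
    return v + lr * pe, pe, salience_pe

v, pe_reward, sal_reward = update(0.0, +1.0)   # unexpected reward
_, pe_punish, sal_punish = update(0.0, -1.0)   # unexpected punishment
# Value signals have opposite signs; salience signals are identical,
# mirroring the reported insula response to both valences.
```

Domain-specific time constants, as reported for the insula, would correspond to using different learning rates for appetitive and aversive outcomes.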
Trackoscope: A low-cost, open, autonomous tracking microscope for long-term observations of microscale organisms
Cells and microorganisms are motile, yet the stationary nature of conventional microscopes impedes comprehensive, long-term behavioral and biomechanical analysis. The limitations are twofold: a narrow focus permits high-resolution imaging but sacrifices the broader context of organism behavior, while a wider focus compromises microscopic detail. This trade-off is especially problematic when investigating rapidly motile ciliates, which often have to be confined to small volumes between coverslips, affecting their natural behavior. To address this challenge, we introduce Trackoscope, a 2-axis autonomous tracking microscope designed to follow swimming organisms ranging from 10μm to 2mm across a 325 square centimeter area for extended durations—ranging from hours to days—at high resolution. Using Trackoscope, we captured a diverse array of behaviors, from the air-water swimming locomotion of Amoeba to bacterial hunting dynamics in Actinosphaerium, walking gait in Tardigrada, and binary fission in motile Blepharisma. Trackoscope is a cost-effective solution well-suited for diverse settings, from high school labs to resource-constrained research environments. Its capability to capture diverse behaviors in larger, more realistic ecosystems extends our understanding of the physics of living systems. The low-cost, open architecture democratizes scientific discovery, offering a dynamic window into the lives of previously inaccessible small aquatic organisms.
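The closed-loop tracking principle behind such a microscope can be sketched as follows; the centroid detector, proportional gain, and stage interface are illustrative assumptions, not Trackoscope's actual implementation:

```python
import numpy as np

def centroid(frame):
    """Intensity-weighted centroid (row, col) of a grayscale frame."""
    total = frame.sum()
    rows, cols = np.indices(frame.shape)
    return (rows * frame).sum() / total, (cols * frame).sum() / total

def stage_command(frame, gain=0.5):
    """Proportional correction (dy, dx) that re-centers the organism.
    The gain value is an illustrative assumption."""
    cy, cx = centroid(frame)
    ty, tx = (frame.shape[0] - 1) / 2, (frame.shape[1] - 1) / 2
    return gain * (ty - cy), gain * (tx - cx)

# Toy frame: a bright "organism" above and left of center, so the
# commanded move is down and to the right.
frame = np.zeros((11, 11))
frame[2, 3] = 1.0
dy, dx = stage_command(frame)
```

Run per frame, this loop keeps a fast-moving organism centered under a high-magnification objective while the stage, not the optics, covers the large arena.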
How Generative AI is Revolutionizing the Software Developer Industry
Generative AI is fundamentally transforming the software development industry by improving processes such as software testing, bug detection, bug fixes, and developer productivity. This talk explores how AI-driven techniques, particularly large language models (LLMs), are being utilized to generate realistic test scenarios, automate bug detection and repair, and streamline development workflows. As these technologies evolve, they promise to improve software quality and efficiency significantly. The discussion will cover key methodologies, challenges, and the future impact of generative AI on the software development lifecycle, offering a comprehensive overview of its revolutionary potential in the industry.
Bernstein Conference 2024
Each year the Bernstein Network invites the international computational neuroscience community to the annual Bernstein Conference for intensive scientific exchange. The Bernstein Conference 2024, held in Frankfurt am Main, featured discussions, keynote lectures, and poster sessions; the conference has established itself as one of the most renowned worldwide in this field.
How the brain barriers ensure CNS “immune privilege”
Britta Engelhardt’s research is devoted to understanding the function of the different brain barriers in regulating CNS immune surveillance and how their impaired function contributes to neuroinflammatory diseases such as multiple sclerosis (MS) or Alzheimer’s disease (AD). Her laboratory combines expertise in vascular biology, neuroimmunology, and live cell imaging and has developed sophisticated in vitro and in vivo approaches to study immune cell interactions with the brain barriers in health and neuroinflammation.
Comparing supervised learning dynamics: Deep neural networks match human data efficiency but show a generalisation lag
Recent research has seen many behavioral comparisons between humans and deep neural networks (DNNs) in the domain of image classification. Often, comparison studies focus on the end-result of the learning process by measuring and comparing the similarities in the representations of object categories once they have been formed. However, the process of how these representations emerge—that is, the behavioral changes and intermediate stages observed during the acquisition—is less often directly and empirically compared. In this talk, I'm going to report a detailed investigation of the learning dynamics in human observers and various classic and state-of-the-art DNNs. We develop a constrained supervised learning environment to align learning-relevant conditions such as starting point, input modality, available input data and the feedback provided. Across the whole learning process we evaluate and compare how well learned representations can be generalized to previously unseen test data. Comparisons across the entire learning process indicate that DNNs demonstrate a level of data efficiency comparable to human learners, challenging some prevailing assumptions in the field. However, our results also reveal representational differences: while DNNs' learning is characterized by a pronounced generalisation lag, humans appear to immediately acquire generalizable representations without a preliminary phase of learning training set-specific information that is only later transferred to novel data.
Influence of the context of administration in the antidepressant-like effects of the psychedelic 5-MeO-DMT
Psychedelics like psilocybin have shown rapid and long-lasting efficacy on depressive and anxiety symptoms. Other psychedelics with shorter half-lives, such as DMT and 5-MeO-DMT, have also shown promising preliminary outcomes in major depression, making them interesting candidates for clinical practice. Despite several promising clinical studies, the influence of the context on therapeutic responses or adverse effects remains poorly documented. To address this, we conducted preclinical studies evaluating the psychopharmacological profile of 5-MeO-DMT in contexts previously validated in mice as either pleasant (positive setting) or aversive (negative setting). Healthy C57BL/6J male mice received a single intraperitoneal (i.p.) injection of 5-MeO-DMT at doses of 0.5, 5, and 10 mg/kg, with assessments at 2 hours, 24 hours, and one week post-administration. In a corticosterone (CORT) mouse model of depression, 5-MeO-DMT was administered in different settings, and behavioral tests mimicking core symptoms of depression and anxiety were conducted. In CORT-exposed mice, an acute dose of 0.5 mg/kg administered in a neutral setting produced antidepressant-like effects at 24 hours, as observed by reduced immobility time in the Tail Suspension Test (TST). In a positive setting, the drug also reduced latency to first immobility and total immobility time in the TST. However, these beneficial effects were negated in a negative setting, where 5-MeO-DMT failed to produce antidepressant-like effects and instead elicited an anxiogenic response in the Elevated Plus Maze (EPM). Our results indicate a strong influence of setting on the psychopharmacological profile of 5-MeO-DMT. Future experiments will examine cortical markers of pre- and post-synaptic density to correlate neuroplasticity changes with the behavioral effects of 5-MeO-DMT in different settings.
Why age-related macular degeneration is a mathematically tractable disease
Among all prevalent diseases with a central neurodegeneration, AMD can be considered the most promising in terms of prevention and early intervention, due to several factors surrounding the neural geometry of the foveal singularity.
• Steep gradients of cell density, deployed in a radially symmetric fashion, can be modeled with a difference of Gaussian curves.
• These steep gradients give rise to huge, spatially aligned biologic effects, summarized as the Center of Cone Resilience, Surround of Rod Vulnerability.
• Widely used clinical imaging technology provides cellular and subcellular level information.
• Data are now available at all timelines: clinical, lifespan, evolutionary.
• Snapshots are available from tissues (histology, analytic chemistry, gene expression).
• A viable biogenesis model exists for drusen, the largest population-level intraocular risk factor for progression.
• The biogenesis model shares molecular commonality with atherosclerotic cardiovascular disease, for which there have been decades of public health success.
• Animal and cell model systems are emerging to test these ideas.
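The difference-of-Gaussians density model mentioned above can be sketched numerically; the amplitudes and widths below are illustrative assumptions, not values fitted to retinal cell counts:

```python
import numpy as np

def dog(r, a1=10.0, s1=0.3, a2=4.0, s2=1.2):
    """Radially symmetric density profile as a narrow positive Gaussian
    minus a broader one. r is eccentricity in arbitrary units; all
    amplitudes and widths are illustrative assumptions."""
    return a1 * np.exp(-r**2 / (2 * s1**2)) - a2 * np.exp(-r**2 / (2 * s2**2))

r = np.linspace(0, 3, 301)
profile = dog(r)
peak_r = r[np.argmax(profile)]  # steep central peak sits at the fovea (r = 0)
```

Because the narrow Gaussian dominates at the center and the broad one at mid eccentricities, the model reproduces both the steep central gradient and the surrounding trough that such center/surround descriptions rely on.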
Personalized medicine and predictive health and wellness: Adding the chemical component
Wearable sensors that detect and quantify biomarkers in retrievable biofluids (e.g., interstitial fluid, sweat, tears) provide information on human dynamic physiological and psychological states. This information can transform health and wellness by providing actionable feedback. Due to outdated and insufficiently sensitive technologies, current on-body sensing systems have capabilities limited to pH and a few high-concentration electrolytes, metabolites, and nutrients. As such, wearable sensing systems cannot detect key low-concentration biomarkers indicative of stress, inflammation, metabolic, and reproductive status. We are revolutionizing sensing. Our electronic biosensors detect virtually any signaling molecule or metabolite at ultra-low levels. We have monitored serotonin, dopamine, cortisol, phenylalanine, estradiol, progesterone, and glucose in blood, sweat, interstitial fluid, and tears. The sensors are based on modern nanoscale semiconductor transistors that are straightforwardly scalable for manufacturing. We are developing sensors for >40 biomarkers for personalized continuous monitoring (e.g., smartwatch, wearable patch) that will provide feedback for treating chronic health conditions (e.g., perimenopause, stress disorders, phenylketonuria). Moreover, our sensors will enable female fertility monitoring and the adoption of healthier lifestyles to prevent disease and improve physical and cognitive performance.
Error Consistency between Humans and Machines as a function of presentation duration
Within the last decade, Deep Artificial Neural Networks (DNNs) have emerged as powerful computer vision systems that match or exceed human performance on many benchmark tasks such as image classification. But whether current DNNs are suitable computational models of the human visual system remains an open question: While DNNs have proven to be capable of predicting neural activations in primate visual cortex, psychophysical experiments have shown behavioral differences between DNNs and human subjects, as quantified by error consistency. Error consistency is typically measured by briefly presenting natural or corrupted images to human subjects and asking them to perform an n-way classification task under time pressure. But for how long should stimuli ideally be presented to guarantee a fair comparison with DNNs? Here we investigate the influence of presentation time on error consistency, to test the hypothesis that higher-level processing drives behavioral differences. We systematically vary presentation times of backward-masked stimuli from 8.3ms to 266ms and measure human performance and reaction times on natural, lowpass-filtered and noisy images. Our experiment constitutes a fine-grained analysis of human image classification under both image corruptions and time pressure, showing that even drastically time-constrained humans who are exposed to the stimuli for only two frames, i.e. 16.6ms, can still solve our 8-way classification task with success rates way above chance. We also find that human-to-human error consistency is already stable at 16.6ms.
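Error consistency, as used in these human-vs-DNN comparisons, is commonly computed as trial-by-trial agreement of correct/incorrect responses, corrected for the agreement expected from the two accuracies alone (a kappa-like score). A minimal sketch:

```python
import numpy as np

def error_consistency(correct_a, correct_b):
    """Chance-corrected agreement between two observers' trial-wise
    correctness vectors (1 = correct, 0 = error)."""
    a = np.asarray(correct_a, dtype=bool)
    b = np.asarray(correct_b, dtype=bool)
    c_obs = np.mean(a == b)                    # observed agreement
    p_a, p_b = a.mean(), b.mean()
    c_exp = p_a * p_b + (1 - p_a) * (1 - p_b)  # agreement expected by chance
    return (c_obs - c_exp) / (1 - c_exp)       # kappa-like consistency score

same = error_consistency([1, 1, 0, 0], [1, 1, 0, 0])   # identical errors -> 1
indep = error_consistency([1, 1, 0, 0], [1, 0, 1, 0])  # chance-level -> 0
```

The chance correction is what makes the measure informative: two observers can agree often simply because both are accurate, so only above-chance overlap in *which* trials they fail counts as shared processing.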
Metabolic-functional coupling of parvalbumin-positive GABAergic interneurons in the injured and epileptic brain
Parvalbumin-positive GABAergic interneurons (PV-INs) provide inhibitory control of excitatory neuron activity, coordinate circuit function, and regulate behavior and cognition. PV-INs are uniquely susceptible to loss and dysfunction in traumatic brain injury (TBI) and epilepsy but the cause of this susceptibility is unknown. One hypothesis is that PV-INs use specialized metabolic systems to support their high-frequency action potential firing and that metabolic stress disrupts these systems, leading to their dysfunction and loss. Metabolism-based therapies can restore PV-IN function after injury in preclinical TBI models. Based on these findings, we hypothesize that (1) PV-INs are highly metabolically specialized, (2) these specializations are lost after TBI, and (3) restoring PV-IN metabolic specializations can improve PV-IN function as well as TBI-related outcomes. Using novel single-cell approaches, we can now quantify cell-type-specific metabolism in complex tissues to determine whether PV-IN metabolic dysfunction contributes to the pathophysiology of TBI.
Neural mechanisms governing the learning and execution of avoidance behavior
The nervous system orchestrates adaptive behaviors by intricately coordinating responses to internal cues and environmental stimuli. This involves integrating sensory input, managing competing motivational states, and drawing on past experiences to anticipate future outcomes. While traditional models attribute this complexity to interactions between the mesocorticolimbic system and hypothalamic centers, the specific nodes of integration have remained elusive. Recent research, including our own, sheds light on the midline thalamus's overlooked role in this process. We propose that the midline thalamus integrates internal states with memory and emotional signals to guide adaptive behaviors. Our investigations into midline thalamic neuronal circuits have provided crucial insights into the neural mechanisms behind flexibility and adaptability. Understanding these processes is essential for deciphering human behavior and conditions marked by impaired motivation and emotional processing. Our research aims to contribute to this understanding, paving the way for targeted interventions and therapies to address such impairments.
Gender, trait anxiety and attentional processing in healthy young adults: is a moderated moderation theory possible?
Three studies conducted in the context of PhD work (UNIL) aimed at providing evidence to address the question of potential gender differences in trait anxiety and executive control biases on behavioral efficacy. In scope were male and female non-clinical samples of young adults who performed non-emotional tasks assessing basic attentional functioning (Attention Network Test – Interactions, ANT-I), sustained attention (Test of Variables of Attention, TOVA), and visual recognition abilities (Object in Location Recognition Task, OLRT). Results confirmed the intricate nature of the relationship between gender and trait anxiety in healthy individuals through the lens of their impact on processing efficacy in males and females. The possibility of a gendered theory of trait anxiety biases is discussed.
How to tell if someone is hiding something from you? An overview of the scientific basis of deception and concealed information detection
In my talk I will give an overview of recent research on deception and concealed information detection. I will start with a short introduction to the problems and shortcomings of traditional deception detection tools and why those still prevail in many recent approaches (e.g., in AI-based deception detection). I want to argue for the importance of more fundamental deception research and give some examples of insights gained from it. In the second part of the talk, I will introduce the Concealed Information Test (CIT), a promising paradigm for research and applied contexts to investigate whether someone actually recognizes information that they do not want to reveal. The CIT is based on solid scientific theory and produces large effect sizes in laboratory studies with a number of different measures (e.g., behavioral, psychophysiological, and neural measures). I will highlight some challenges a forensic application of the CIT still faces and how scientific research could assist in overcoming them.
Spatial Organization of Cellular Reactive States in Human Brain Cancer
Applied cognitive neuroscience to improve learning and therapeutics
Advancements in cognitive neuroscience have provided profound insights into the workings of the human brain, and the methods used offer opportunities to enhance performance, cognition, and mental health. Drawing upon interdisciplinary collaborations at the University of California San Diego Human Performance Optimization Lab, this talk explores the application of cognitive neuroscience principles in three domains to improve human performance and alleviate mental health challenges. The first section will discuss studies addressing the role of vision and oculomotor function in athletic performance and the potential to train these foundational abilities to improve performance and sports outcomes. The second domain considers the use of electrophysiological measurements of the brain and heart to detect, and possibly predict, errors in manual performance, as shown in a series of studies with surgeons as they perform robot-assisted surgery. Lastly, findings from clinical trials testing personalized interventional treatments for mood disorders will be discussed, in which the temporal and spatial parameters of transcranial magnetic stimulation (TMS) are individualized to test whether personalization improves treatment response and can be used as a predictive biomarker to guide treatment selection. Together, these translational studies use the measurement tools and constructs of cognitive neuroscience to improve human performance and well-being.
The multi-phase plasticity supporting the winner effect
Aggression is an innate behavior across animal species. It is essential for competing for food, defending territory, securing mates, and protecting families and oneself. Since initiating an attack requires no explicit learning, the neural circuit underlying aggression is believed to be genetically and developmentally hardwired. Despite being innate, aggression is highly plastic. It is influenced by a wide variety of experiences, particularly winning and losing previous encounters. Numerous studies have shown that winning leads to an increased tendency to fight, while losing leads to flight in future encounters. In this talk, I will present our recent findings regarding the neural mechanisms underlying the behavioral changes caused by winning.
The Role of Cognitive Appraisal in the Relationship between Personality and Emotional Reactivity
Emotion is defined as a rapid psychological process involving experiential, expressive, and physiological responses. These emerge following an appraisal process that cognitively evaluates the environment for its relevance, implications, coping potential, and normative significance. It has been suggested that changes in appraisal processes lead to changes in the nature of the resulting emotion. At the same time, personality can be seen as a predisposition to feel certain emotions more frequently, yet the personality-appraisal-emotional response chain is rarely investigated in full. The present project therefore sought to determine the extent to which personality traits influence certain appraisals, which in turn shape the subsequent emotional reactions, via a systematic analysis of the links between personality traits from different current models, specific appraisals, and emotional response patterns at the experiential, expressive, and physiological levels. Major results include the coherent clustering of emotion components; the centrality, in context, of the pleasantness, coping potential, and consequences appraisals; and the differentiated mediating role of cognitive appraisal in the relation between personality and the intensity, duration, and autonomic arousal of an emotional state (e.g., Extraversion-pleasantness-experience, Neuroticism-powerlessness-arousal). Elucidating these relationships deepens our understanding of individual differences in emotional reactivity and identifies routes for acting on appraisal processes to modify upcoming adverse emotional responses, with a broader societal impact on clinical and non-clinical populations.
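The personality-appraisal-emotion chain described in this abstract is a mediation hypothesis. As a purely illustrative sketch (simulated data and effect sizes are invented here, not taken from the study), it can be caricatured with a product-of-coefficients mediation analysis, where the indirect effect is the path from trait to appraisal times the path from appraisal to response controlling for the trait:

```python
import random
import statistics

def slope(xs, ys):
    """OLS slope of ys on xs (with intercept)."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

def residuals(xs, ys):
    """Residuals of ys after regressing out xs."""
    b = slope(xs, ys)
    a = statistics.fmean(ys) - b * statistics.fmean(xs)
    return [y - (a + b * x) for x, y in zip(xs, ys)]

# Toy simulated data: trait X (e.g., Extraversion), appraisal M
# (e.g., pleasantness), emotional response Y (e.g., experienced intensity).
random.seed(0)
n = 2000
X = [random.gauss(0, 1) for _ in range(n)]
M = [0.6 * x + random.gauss(0, 1) for x in X]
Y = [0.5 * m + 0.1 * x + random.gauss(0, 1) for x, m in zip(X, M)]

a_path = slope(X, M)  # trait -> appraisal
# appraisal -> response controlling for trait (Frisch-Waugh: regress
# the X-residuals of Y on the X-residuals of M)
b_path = slope(residuals(X, M), residuals(X, Y))
indirect = a_path * b_path  # mediated (indirect) effect, ~0.6 * 0.5
print(round(indirect, 2))
```

With enough data the estimated indirect effect recovers the product of the simulated path coefficients; in a real analysis its significance would be assessed, e.g., by bootstrapping.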
Characterizing the causal role of large-scale network interactions in supporting complex cognition
Neuroimaging has greatly extended our capacity to study the workings of the human brain. Despite the wealth of knowledge this tool has generated, however, there are still critical gaps in our understanding. While tremendous progress has been made in mapping areas of the brain that are specialized for particular stimuli or cognitive processes, we still know very little about how large-scale interactions between different cortical networks facilitate the integration of information and the execution of complex tasks. Yet even the simplest behavioral tasks are complex, requiring integration over multiple cognitive domains. Our knowledge falls short not only in understanding how this integration takes place, but also in what drives the profound variation in behavior that can be observed on almost every task, even within the typically developing (TD) population. The search for the neural underpinnings of individual differences is important not only philosophically, but also in the service of precision medicine. We approach these questions using a three-pronged approach. First, we create a battery of behavioral tasks from which we can calculate objective measures for different aspects of the behaviors of interest, with sufficient variance across the TD population. Second, using these individual differences in behavior, we identify the neural variance that explains the behavioral variance at the network level. Finally, using covert neurofeedback, we perturb the networks hypothesized to correspond to each of these components, thus directly testing their causal contribution. I will discuss our overall approach, as well as a few of the new directions we are currently pursuing.
Exploring Lifespan Memory Development and Intervention Strategies for Memory Decline through a Unified Model-Based Assessment
Understanding and potentially reversing memory decline necessitates a comprehensive examination of memory's evolution throughout life. Traditional memory assessments, however, suffer from a lack of comparability across different age groups due to the diverse nature of the tests employed. Addressing this gap, our study introduces a novel, ACT-R model-based memory assessment designed to provide a consistent metric for evaluating memory function across the lifespan, from 5- to 85-year-olds. This approach allows for direct comparison across various tasks and materials tailored to specific age groups. Our findings reveal a pronounced U-shaped trajectory of long-term memory function, with performance at age 5 mirroring that observed in elderly individuals with impairments, highlighting critical periods of memory development and decline. Leveraging this unified assessment method, we further investigate the therapeutic potential of rs-fMRI-guided TBS targeting area 8AV in individuals with early-onset Alzheimer's Disease, a region implicated in memory deterioration and mood disturbances in this population. This research not only advances our understanding of memory's lifespan dynamics but also opens new avenues for targeted interventions in Alzheimer's Disease, marking a significant step forward in the quest to mitigate memory decay.
Combined electrophysiological and optical recording of multi-scale neural circuit dynamics
This webinar will showcase new approaches for electrophysiological recordings using our silicon neural probes and surface arrays, combined with diverse optical methods such as wide-field or two-photon imaging, fiber photometry, and optogenetic perturbations in awake, behaving mice. Multi-modal recordings of single units and local field potentials across cortex, hippocampus, and thalamus, alongside calcium activity recorded via GCaMP6f in cortical neurons of triple-transgenic animals or in hippocampal astrocytes via viral transduction, are brought to bear to reveal hitherto inaccessible and under-appreciated aspects of coordinated dynamics in the brain.
Evolution of convulsive therapy from electroconvulsive therapy to Magnetic Seizure Therapy; Interventional Neuropsychiatry
In April, we will host Nolan Williams and Mustafa Husain. Be prepared to embark on a journey from early brain stimulation with ECT to state-of-the-art TMS protocols and magnetic seizure therapy! The talks will be held on Thursday, April 25th at noon ET / 6PM CET. Nolan Williams, MD, is an associate professor of Psychiatry and Behavioral Science at Stanford University. He developed the SAINT protocol, which is the first FDA-cleared non-invasive, rapid-acting neuromodulation treatment for treatment-resistant depression. Mustafa Husain, MD, is an adjunct professor of Psychiatry and Behavioral Sciences at Duke University and a professor of Psychiatry and Neurology at UT Southwestern Medical Center, Dallas. He will tell us about "Evolution of convulsive therapy from electroconvulsive therapy to Magnetic Seizure Therapy". As always, we will also get a glimpse at the "Person behind the science". Please register via talks.stimulatingbrains.org to receive the (free) Zoom link, subscribe to our newsletter, or follow us on Twitter/X for further updates!
Improving Language Understanding by Generative Pre-Training
Natural language understanding comprises a wide range of diverse tasks such as textual entailment, question answering, semantic similarity assessment, and document classification. Although large unlabeled text corpora are abundant, labeled data for learning these specific tasks is scarce, making it challenging for discriminatively trained models to perform adequately. We demonstrate that large gains on these tasks can be realized by generative pre-training of a language model on a diverse corpus of unlabeled text, followed by discriminative fine-tuning on each specific task. In contrast to previous approaches, we make use of task-aware input transformations during fine-tuning to achieve effective transfer while requiring minimal changes to the model architecture. We demonstrate the effectiveness of our approach on a wide range of benchmarks for natural language understanding. Our general task-agnostic model outperforms discriminatively trained models that use architectures specifically crafted for each task, significantly improving upon the state of the art in 9 out of the 12 tasks studied. For instance, we achieve absolute improvements of 8.9% on commonsense reasoning (Stories Cloze Test), 5.7% on question answering (RACE), and 1.5% on textual entailment (MultiNLI).
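The two-stage recipe in this abstract (unsupervised generative pre-training on unlabeled text, then supervised use of the learned model) can be caricatured with a toy count-based bigram language model; the tiny corpus, the add-alpha smoothing, and the scalar log-likelihood score below are illustrative stand-ins for the paper's Transformer and its discriminative fine-tuning objective:

```python
import math
from collections import Counter, defaultdict

def pretrain_bigram_lm(corpus):
    """Stage 1: fit next-token (bigram) statistics on unlabeled text."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        tokens = sentence.split()
        for prev, nxt in zip(tokens, tokens[1:]):
            counts[prev][nxt] += 1
    return counts

def log_likelihood(counts, sentence, alpha=1.0, vocab_size=1000):
    """Smoothed log-probability of a sentence under the pre-trained LM;
    a downstream task could consume scores like this as features."""
    tokens = sentence.split()
    ll = 0.0
    for prev, nxt in zip(tokens, tokens[1:]):
        c = counts[prev]
        ll += math.log((c[nxt] + alpha) / (sum(c.values()) + alpha * vocab_size))
    return ll

corpus = ["the cat sat on the mat", "the dog sat on the rug"]
lm = pretrain_bigram_lm(corpus)
# A sentence built from seen bigrams scores higher than a shuffled one.
seen = log_likelihood(lm, "the cat sat")
unseen = log_likelihood(lm, "sat cat the")
print(seen > unseen)  # True
```

The actual work replaces the bigram counts with a Transformer trained on the same next-token objective, and replaces the scalar score with fine-tuned task heads over the model's representations.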
Bistability at the cellular level promotes robust and tunable criticality at the circuit level
Bernstein Conference 2024
Computing mutual-information rates by maximum-entropy-inspired models
Bernstein Conference 2024
Deep Brain Stimulation in the Globus Pallidus internus Promotes Habitual Behavior by Modulating Cortico-Thalamic Shortcuts and Basal Ganglia Plasticity
Bernstein Conference 2024
Dendrites endow artificial neural networks with accurate, robust and parameter-efficient learning
Bernstein Conference 2024
Spatial scale and coordinates of motion representation in the mouse Nucleus of the Optic Tract
Bernstein Conference 2024
Synaptic modulation facilitates adaptation in cortical networks
Bernstein Conference 2024
Uncovering neural circuit motifs and animal states using higher-order interactions
Bernstein Conference 2024
Basal ganglia feedback loops as possible candidates for the generation of beta oscillations
COSYNE 2022
Cellular mechanisms of dorsal horn neurons shape the functional states of nociceptive circuits
COSYNE 2022
How cerebellar architecture facilitates rapid online learning
COSYNE 2022
Computational strategies and neural correlates of probabilistic reversal learning in mice
COSYNE 2022
Clustered recurrent connectivity promotes the development of E/I co-tuning via synaptic plasticity
COSYNE 2022
Data-driven dynamical systems model of epilepsy development simulates intervention strategies
COSYNE 2022
Dentate gyrus inhibitory microcircuit promotes network mechanisms underlying memory consolidation
COSYNE 2022
Deliberation gated by opportunity cost adapts to context with urgency in non-human primates
COSYNE 2022
Distinct aversive states in the mouse medial prefrontal cortex
COSYNE 2022
Distinct neural substrates for flexible and automatic motor sequence execution
COSYNE 2022
Diverse covariates modulate neural variability: a widespread (sub)cortical phenomenon
COSYNE 2022
Electrical but not optogenetic stimulation drives nonlinear contraction of neural states
COSYNE 2022
Environmental complexity modulates the arbitration between deliberative and habitual decision-making
COSYNE 2022
Facial movements and their neural correlates reveal latent decision variables in mice
COSYNE 2022
A hindbrain ring attractor network that integrates heading direction in the larval zebrafish
COSYNE 2022
Identifying latent states in decision-making from cortical inactivation data
COSYNE 2022
Imagining what was there: looking at an absent offer location modulates neural responses in OFC
COSYNE 2022
Integration of infant sensory cues and internal states for maternal motivated behaviors
COSYNE 2022
Isolated correlates of somatosensory perception in the posterior mouse cortex
COSYNE 2022
Long-term motor learning creates structure within neural space that shapes motor adaptation
COSYNE 2022
A manifold of heterogeneous vigilance states across cortical areas
COSYNE 2022
Motor cortex isolates skill-specific dynamics in a context switching task
COSYNE 2022
Multi-region Poisson GPFA isolates shared and independent latent structure in sensorimotor tasks
COSYNE 2022
Behavioral and Neuronal Correlates of Exploration and Goal-Directed Navigation
Bernstein Conference 2024