Decoding stress vulnerability
Although stress can be considered an ongoing process that helps an organism cope with present and future challenges, when it is too intense or uncontrollable it can have adverse consequences for physical and mental health. Social stress in particular is a highly prevalent traumatic experience, present in multiple contexts such as war, bullying, and interpersonal violence, and it has been linked with increased risk for major depression and anxiety disorders. Nevertheless, not all individuals exposed to strong stressful events develop psychopathology, and the mechanisms of resilience and vulnerability are still under investigation. During this talk, I will identify key gaps in our knowledge about stress vulnerability and present recent data from our contextual fear learning protocol based on social defeat stress in mice.
Consciousness at the edge of chaos
Over the last 20 years, neuroimaging and electrophysiology techniques have become central to understanding the mechanisms that accompany loss and recovery of consciousness. Much of this research is performed in healthy individuals with neurotypical brain dynamics. Yet, a true understanding of how consciousness emerges from the joint action of neurons has to account for how severely pathological brains, often showing phenotypes typical of unconsciousness, can nonetheless generate a subjective viewpoint. In this presentation, I will start from the context of Disorders of Consciousness and discuss recent work aimed at finding generalizable signatures of consciousness that are reliable across a spectrum of brain electrophysiological phenotypes, focusing in particular on the notion of edge-of-chaos criticality.
Top-down control of neocortical threat memory
Accurate perception of the environment is a constructive process that requires integration of external bottom-up sensory signals with internally generated top-down information reflecting past experiences and current aims. Decades of work have elucidated how sensory neocortex processes physical stimulus features. In contrast, examining how memory-related top-down information is encoded and integrated with bottom-up signals has long been challenging. Here, I will discuss our recent work pinpointing the outermost layer 1 of neocortex as a central hotspot for the processing of experience-dependent top-down information during threat perception, one of the most fundamentally important forms of sensation.
Biomolecular condensates as drivers of neuroinflammation
Memory Decoding Journal Club: Functional connectomics reveals general wiring rule in mouse visual cortex
Astrocytes: From Metabolism to Cognition
Different brain cell types exhibit distinct metabolic signatures that link energy economy to cellular function. Astrocytes and neurons, for instance, diverge dramatically in their reliance on glycolysis versus oxidative phosphorylation, underscoring that metabolic fuel efficiency is not uniform across cell types. A key factor shaping this divergence is the structural organization of the mitochondrial respiratory chain into supercomplexes. Specifically, complexes I (CI) and III (CIII) form a CI–CIII supercomplex, but the degree of this assembly varies by cell type. In neurons, CI is predominantly integrated into supercomplexes, resulting in highly efficient mitochondrial respiration and minimal reactive oxygen species (ROS) generation. Conversely, in astrocytes, a larger fraction of CI remains unassembled, freely existing apart from CIII, leading to reduced respiratory efficiency and elevated mitochondrial ROS production. Despite this apparent inefficiency, astrocytes boast a highly adaptable metabolism capable of responding to diverse stressors. Their looser CI–CIII organization allows for flexible ROS signaling, which activates antioxidant programs via transcription factors like Nrf2. This modular architecture enables astrocytes not only to balance energy production but also to support neuronal health and influence complex organismal behaviors.
Low intensity rTMS: age-dependent effects and mechanisms underlying neural plasticity
Neuroplasticity is essential for the establishment and strengthening of neural circuits. Repetitive transcranial magnetic stimulation (rTMS) is commonly used to modulate cortical excitability and shows promise in the treatment of some neurological disorders. Low-intensity magnetic stimulation (LI-rTMS), which does not directly elicit action potentials in the stimulated neurons, has also shown some therapeutic effects, and it is important to determine the biological mechanisms underlying the effects of these low-intensity magnetic fields, such as would occur in the regions surrounding the central high-intensity focus of rTMS. Our team has used a focal low-intensity (10 mT) magnetic stimulation approach to address some of these questions and to identify cellular mechanisms. I will present several studies from our laboratory addressing (1) the effects of LI-rTMS on neuronal activity and excitability, and (2) its effects on neuronal morphology and post-lesion repair. Together, our results indicate that the effects of LI-rTMS depend upon the stimulation pattern, the age of the animal, and the presence of cellular magnetoreceptors.
Non-invasive human neuroimaging studies of motor plasticity have predominantly focused on the cerebral cortex due to the low signal-to-noise ratio of blood oxygen level-dependent (BOLD) signals in subcortical structures and the small effect sizes typically observed in plasticity paradigms. Precision functional mapping can help overcome these challenges and has revealed significant and reversible functional alterations in the cortico-subcortical motor circuit during arm immobilization.
“Brain theory, what is it or what should it be?”
In the neurosciences the need for some 'overarching' theory is sometimes expressed, but it is not always obvious what is meant by this. One can perhaps agree that in modern science observation and experimentation are normally complemented by 'theory', i.e. the development of theoretical concepts that help guide and evaluate experiments and measurements. A deeper discussion of 'brain theory' will require the clarification of some further distinctions, in particular: theory vs. model, and brain research (and its theory) vs. neuroscience. Other questions are: Does a theory require mathematics? Or even differential equations? Today it is often taken for granted that the whole universe, including everything in it, for example humans, animals, and plants, can be adequately treated by physics, and that theoretical physics is therefore the overarching theory. Even if this is the case, it has turned out that in some particular parts of physics (the historical example is thermodynamics) it may be useful to simplify the theory by introducing additional theoretical concepts that can in principle be 'reduced' to more complex descriptions at the 'microscopic' level of basic physical particles and forces. In this sense, brain theory may be regarded as part of theoretical neuroscience, which sits inside biophysics and therefore inside physics, or theoretical physics. Still, in neuroscience and brain research, additional concepts are typically used to describe results and help guide experimentation that are 'outside' physics, beginning with neurons and synapses, names of brain parts and areas, up to concepts like 'learning', 'motivation', and 'attention'. Certainly, we do not yet have one theory that includes all these concepts, so 'brain theory' is still in a 'pre-Newtonian' state. However, it may still be useful to understand in general the relations between a larger theory and its 'parts', or between microscopic and macroscopic theories, or between theories at different 'levels' of description. This is what I plan to do.
Neural circuits underlying sleep structure and functions
Sleep is an active state critical for processing emotional memories encoded during waking in both humans and animals. There is a remarkable overlap between the brain structures and circuits active during sleep, particularly rapid eye-movement (REM) sleep, and those encoding emotions. Accordingly, disruptions in sleep quality or quantity, including REM sleep, are often associated with, and precede the onset of, nearly all affective psychiatric and mood disorders. In this context, a major biomedical challenge is to better understand the mechanisms underlying the relationship between (REM) sleep and emotion encoding in order to improve treatments for mental health. This lecture will summarize our investigation of the cellular and circuit mechanisms underlying sleep architecture, sleep oscillations, and local brain dynamics across sleep-wake states using electrophysiological recordings combined with single-cell calcium imaging or optogenetics. The presentation will detail the discovery of a 'somato-dendritic decoupling' in prefrontal cortex pyramidal neurons underlying REM sleep-dependent stabilization of optimal emotional memory traces. This decoupling reflects a tonic inhibition at the somas of pyramidal cells occurring simultaneously with a selective disinhibition of their dendritic arbors during REM sleep. Recent findings on REM sleep-dependent subcortical inputs and neuromodulation of this decoupling will be discussed in the context of synaptic plasticity and the optimization of emotional responses in the maintenance of mental health.
“Development and application of gaze control models for active perception”
Gaze shifts in humans serve to direct the high-resolution vision provided by the fovea towards areas in the environment. Gaze can thus be considered a proxy for attention, or an indicator of the relative importance of different parts of the environment. In this talk, we discuss the development of generative models of human gaze in response to visual input. We discuss how such models can be learned, both using supervised learning and using implicit feedback as an agent interacts with the environment, the latter being more plausible in biological agents. We also discuss two ways such models can be used. First, they can improve the performance of artificial autonomous systems, in applications such as autonomous navigation. Second, because these models are contingent on the human’s task, goals, and/or state in the context of the environment, observations of gaze can be used to infer information about user intent. This information can be used to improve human-machine and human-robot interaction by making interfaces more anticipative. We discuss example applications in gaze-typing, robotic tele-operation, and human-robot interaction.
Astrocytes release glutamate by regulated exocytosis in health and disease
Vladimir Parpura, International Translational Neuroscience Research Institute, Zhejiang Chinese Medical University, Hangzhou, P.R. China. Parpura will present evidence that astrocytes, a subtype of glial cells in the brain, can exocytotically release the neurotransmitter glutamate, and discuss how this release is regulated. Spatiotemporal characteristics of the vesicular fusion events that underlie glutamate release in astrocytes will be discussed. He will also present data on a translational project in which this release pathway can be targeted for the treatment of glioblastoma, the deadliest brain cancer.
Expanding mechanisms and therapeutic targets for neurodegenerative disease
A hallmark pathological feature of the neurodegenerative diseases amyotrophic lateral sclerosis (ALS) and frontotemporal dementia (FTD) is the depletion of RNA-binding protein TDP-43 from the nucleus of neurons in the brain and spinal cord. A major function of TDP-43 is as a repressor of cryptic exon inclusion during RNA splicing. By re-analyzing RNA-sequencing datasets from human FTD/ALS brains, we discovered dozens of novel cryptic splicing events in important neuronal genes. Single nucleotide polymorphisms in UNC13A are among the strongest hits associated with FTD and ALS in human genome-wide association studies, but how those variants increase risk for disease is unknown. We discovered that TDP-43 represses a cryptic exon-splicing event in UNC13A. Loss of TDP-43 from the nucleus in human brain, neuronal cell lines and motor neurons derived from induced pluripotent stem cells resulted in the inclusion of a cryptic exon in UNC13A mRNA and reduced UNC13A protein expression. The top variants associated with FTD or ALS risk in humans are located in the intron harboring the cryptic exon, and we show that they increase UNC13A cryptic exon splicing in the face of TDP-43 dysfunction. Together, our data provide a direct functional link between one of the strongest genetic risk factors for FTD and ALS (UNC13A genetic variants), and loss of TDP-43 function. Recent analyses have revealed even further changes in TDP-43 target genes, including widespread changes in alternative polyadenylation, impacting expression of disease-relevant genes (e.g., ELP1, NEFL, and TMEM106B) and providing evidence that alternative polyadenylation is a new facet of TDP-43 pathology.
Neural mechanisms of optimal performance
When we attend to a demanding task, our performance is poor at low arousal (when drowsy) or high arousal (when anxious), but optimal at intermediate arousal. This celebrated inverted-U relationship between arousal and performance, the Yerkes-Dodson law, is colloquially referred to as being "in the zone." In this talk, I will elucidate the behavioral and neural mechanisms linking arousal and performance under the Yerkes-Dodson law in a mouse model. During decision-making tasks, mice express an array of discrete strategies, whereby the optimal strategy occurs at intermediate arousal, measured by pupil size, consistent with the inverted-U law. Population recordings from the auditory cortex (A1) further revealed that sound encoding is optimal at intermediate arousal. To explain the computational principle underlying this inverted-U law, we modeled the A1 circuit as a spiking network with excitatory/inhibitory clusters, based on the observed functional clusters in A1. Arousal induced a transition from a multi-attractor phase (low arousal) to a single-attractor phase (high arousal), and performance was optimized at the transition point. The model also predicts stimulus- and arousal-induced modulations of neural variability, which we confirmed in the data. Our theory suggests that a single unifying dynamical principle, phase transitions in metastable dynamics, underlies both the inverted-U law of optimal performance and state-dependent modulations of neural variability.
Restoring Sight to the Blind: Effects of Structural and Functional Plasticity
Visual restoration after decades of blindness is now becoming possible by means of retinal and cortical prostheses, as well as emerging stem cell and gene therapeutic approaches. After restoring visual perception, however, a key question remains. Are there optimal means and methods for retraining the visual cortex to process visual inputs, and for learning or relearning to “see”? Up to this point, it has been largely assumed that if the sensory loss is visual, then the rehabilitation focus should also be primarily visual. However, the other senses play a key role in visual rehabilitation due to the plastic repurposing of visual cortex during blindness by audition and somatosensation, and also to the reintegration of restored vision with the other senses. I will present multisensory neuroimaging results, cortical thickness changes, as well as behavioral outcomes for patients with Retinitis Pigmentosa (RP), which causes blindness by destroying photoreceptors in the retina. These patients have had their vision partially restored by the implantation of a retinal prosthesis, which electrically stimulates still viable retinal ganglion cells in the eye. Our multisensory and structural neuroimaging and behavioral results suggest a new, holistic concept of visual rehabilitation that leverages rather than neglects audition, somatosensation, and other sensory modalities.
Functional Plasticity in the Language Network – evidence from Neuroimaging and Neurostimulation
Efficient cognition requires flexible interactions between distributed neural networks in the human brain. These networks adapt to challenges by flexibly recruiting different regions and connections. In this talk, I will discuss how we study functional network plasticity and reorganization with combined neurostimulation and neuroimaging across the adult life span. I will argue that short-term plasticity enables flexible adaptation to challenges, via functional reorganization. My key hypothesis is that disruption of higher-level cognitive functions such as language can be compensated for by the recruitment of domain-general networks in our brain. Examples from healthy young brains illustrate how neurostimulation can be used to temporarily interfere with efficient processing, probing short-term network plasticity at the systems level. Examples from people with dyslexia help to better understand network disorders in the language domain and outline the potential of facilitatory neurostimulation for treatment. I will also discuss examples from aging brains where plasticity helps to compensate for loss of function. Finally, examples from lesioned brains after stroke provide insight into the brain’s potential for long-term reorganization and recovery of function. Collectively, these results challenge the view of a modular organization of the human brain and argue for a flexible redistribution of function via systems plasticity.
Neural mechanisms of rhythmic motor control in Drosophila
All animal locomotion is rhythmic, whether it is achieved through undulatory movement of the whole body or the coordination of articulated limbs. Neurobiologists have long studied locomotor circuits that produce rhythmic activity from non-rhythmic input, also called central pattern generators (CPGs). However, the cellular and microcircuit implementation of a walking CPG has not been described for any limbed animal. New comprehensive connectomes of the fruit fly ventral nerve cord (VNC) provide an opportunity to study rhythmogenic walking circuits at synaptic scale. We use a data-driven network modeling approach to identify and characterize a putative walking CPG in the Drosophila leg motor system.
Recent views on pre-registration
A discussion of some recent perspectives on pre-registration, which has become a growing trend in the past few years. This is not limited to neuroimaging; it applies to most scientific fields. We will start with this overview editorial by Simmons et al. (2021): https://faculty.wharton.upenn.edu/wp-content/uploads/2016/11/34-Simmons-Nelson-Simonsohn-2021a.pdf, and also talk about a more critical perspective by Pham & Oh (2021): https://www.researchgate.net/profile/Michel-Pham/publication/349545600_Preregistration_Is_Neither_Sufficient_nor_Necessary_for_Good_Science/links/60fb311e2bf3553b29096aa7/Preregistration-Is-Neither-Sufficient-nor-Necessary-for-Good-Science.pdf. I would like us to discuss the pros and cons of pre-registration, and if we have time, I may do a demonstration of how to perform a pre-registration through the Open Science Framework.
Decoding ketamine: Neurobiological mechanisms underlying its rapid antidepressant efficacy
Unlike traditional monoamine-based antidepressants that require weeks to exert effects, ketamine alleviates depression within hours, though its clinical use is limited by side effects. While ketamine was initially thought to work primarily through NMDA receptor (NMDAR) inhibition, our research reveals a more complex mechanism. We demonstrate that NMDAR inhibition alone cannot explain ketamine's sustained antidepressant effects, as other NMDAR antagonists like MK-801 lack similar efficacy. Instead, the (2R,6R)-hydroxynorketamine (HNK) metabolite appears critical, exhibiting antidepressant effects without ketamine's side effects. Paradoxically, our findings suggest an inverted U-shaped dose-response relationship where excessive NMDAR inhibition may actually impede antidepressant efficacy, while some level of NMDAR activation is necessary. The antidepressant actions of ketamine and (2R,6R)-HNK require AMPA receptor activation, leading to synaptic potentiation and upregulation of AMPA receptor subunits GluA1 and GluA2. Furthermore, NMDAR subunit GluN2A appears necessary and possibly sufficient for these effects. This research establishes NMDAR-GluN2A activation as a common downstream effector for rapid-acting antidepressants, regardless of their initial targets, offering promising directions for developing next-generation antidepressants with improved efficacy and reduced side effects.
Impact of High Fat Diet on Central Cardiac Circuits: When The Wanderer is Lost
Cardiac vagal motor drive originates in the brainstem's cardiac vagal motor neurons (CVNs). Despite their well-established cardioinhibitory functions in health, our understanding of CVNs in disease is limited. There is a clear connection between cardiovascular regulation and metabolic and energy expenditure systems. Using high fat diet as a model, this talk will explore how metabolic dysfunction impacts the regulation of cardiac tissue through robust inhibition of CVNs. Specifically, it will present an often overlooked modality of inhibition, tonic gamma-aminobutyric acid (GABA) A-type neurotransmission, using an array of techniques from single-cell patch clamp electrophysiology to transgenic in vivo whole-animal physiology. It will also highlight a unique interaction with the delta isoform of protein kinase C that facilitates GABA A-type receptor expression.
Cognitive maps as expectations learned across episodes – a model of the two dentate gyrus blades
How can the hippocampal system transition from episodic one-shot learning to a multi-shot learning regime, and what is the utility of the resultant neural representations? This talk will explore the role of dentate gyrus (DG) anatomy in this context. The canonical DG model suggests it performs pattern separation. More recent experimental results challenge this standard model, suggesting that DG function is more complex and also supports the precise binding of objects and events to space and the integration of information across episodes. Very recent studies attribute pattern separation and pattern integration to anatomically distinct parts of the DG (the suprapyramidal blade vs the infrapyramidal blade). We propose a computational model that investigates this distinction. In the model, the two processing streams (potentially localized in separate blades) contribute to the storage of distinct episodic memories and to the integration of information across episodes, respectively. The latter forms generalized expectations across episodes, eventually forming a cognitive map. We train the model with two data sets, MNIST and plausible entorhinal cortex inputs. The comparison between the two streams allows for the calculation of a prediction error, which can drive the storage of poorly predicted memories and the forgetting of well-predicted memories. We suggest that differential processing across the DG aids in the iterative construction of spatial cognitive maps to serve the generation of location-dependent expectations, while at the same time preserving episodic memory traces of idiosyncratic events.
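To make the storage/forgetting logic concrete, here is a minimal sketch (not the authors' model) of how a prediction error from a generalizing stream could gate episodic storage. The linear predictor, the threshold, and all names are illustrative assumptions.

```python
import numpy as np

def update_memory(pattern, episodic_store, generalizer, threshold=0.3, lr=0.01):
    """Toy prediction-error-gated storage: a generalizing stream predicts the
    input; poorly predicted patterns are stored as episodes, well-predicted
    ones are dropped (forgotten). Illustrative only."""
    prediction = generalizer @ pattern              # linear "cognitive map" prediction
    error = np.linalg.norm(pattern - prediction) / (np.linalg.norm(pattern) + 1e-12)
    if error > threshold:
        episodic_store.append(pattern.copy())       # store the poorly predicted episode
    # slowly fold every pattern into the generalized expectation (delta rule)
    generalizer += lr * np.outer(pattern - prediction, pattern)
    return error

# usage: generalizer = np.zeros((d, d)); episodic_store = []; call per input pattern
```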
What it’s like is all there is: The value of Consciousness
Over the past thirty years or so, cognitive neuroscience has made spectacular progress in understanding the biological mechanisms of consciousness. Consciousness science, as this field is now sometimes called, was not only nonexistent thirty years ago, but its very name seemed like an oxymoron: how can there be a science of consciousness? And yet, despite this scepticism, we are now equipped with a rich set of sophisticated behavioural paradigms, with an impressive array of techniques making it possible to see the brain in action, and with an ever-growing collection of theories and speculations about the putative biological mechanisms through which information processing becomes conscious. This is all good and fine, even promising, but we also seem to have thrown the baby out with the bathwater, or at least to have forgotten it in the crib: consciousness is not just mechanisms, it’s what it feels like. In other words, while we have thousands of informative studies about access-consciousness, we have little in the way of work on phenomenal consciousness. But that — what it feels like — is truly what “consciousness” is about. Understanding why it feels like something to be me and nothing (panpsychists notwithstanding) for a stone to be a stone is what the field has always been after. However, while it is relatively easy to study access-consciousness through the contrastive approach applied to reports, it is much less clear how to study phenomenology, its structure and its function. Here, I first overview work on what consciousness does (the "how"). Next, I ask what difference feeling things makes and what function phenomenology might play. I argue that subjective experience has intrinsic value and plays a functional role in everything that we do.
Structural & Functional Neuroplasticity in Children with Hemiplegia
About 30% of children with cerebral palsy have congenital hemiplegia, resulting from periventricular white matter injury, which impairs the use of one hand and disrupts bimanual coordination. Congenital hemiplegia has a profound effect on each child's life and is therefore of great importance to public health. Changes in brain organization (neuroplasticity) often occur following periventricular white matter injury. These changes vary widely depending on the timing, location, and extent of the injury, as well as the functional system involved. Currently, we have limited knowledge of neuroplasticity in children with congenital hemiplegia. As a result, we provide rehabilitation treatment to these children almost blindly, based exclusively on behavioral data. In this talk, I will present recent research evidence from my team on understanding neuroplasticity in children with congenital hemiplegia using a multimodal neuroimaging approach that combines data from structural and functional neuroimaging methods. I will further present preliminary data on improvements in upper-extremity motor and sensory function resulting from rehabilitation with a robotic system that involves active participation of the child in a video-game setup. Our research is essential for the development of novel or improved neurological rehabilitation strategies for children with congenital hemiplegia.
Vision for perception versus vision for action: dissociable contributions of visual sensory drives from primary visual cortex and superior colliculus neurons to orienting behaviors
The primary visual cortex (V1) directly projects to the superior colliculus (SC) and is believed to provide sensory drive for eye movements. Consistent with this, a majority of saccade-related SC neurons also exhibit short-latency, stimulus-driven visual responses, which are additionally feature-tuned. However, direct neurophysiological comparisons of the visual response properties of the two anatomically-connected brain areas are surprisingly lacking, especially with respect to active looking behaviors. I will describe a series of experiments characterizing visual response properties in primate V1 and SC neurons, exploring feature dimensions like visual field location, spatial frequency, orientation, contrast, and luminance polarity. The results suggest a substantial, qualitative reformatting of SC visual responses when compared to V1. For example, SC visual response latencies are actively delayed, independent of individual neuron tuning preferences, as a function of increasing spatial frequency, and this phenomenon is directly correlated with saccadic reaction times. Such “coarse-to-fine” rank ordering of SC visual response latencies as a function of spatial frequency is much weaker in V1, suggesting a dissociation of V1 responses from saccade timing. Consistent with this, when we next explored trial-by-trial correlations of individual neurons’ visual response strengths and visual response latencies with saccadic reaction times, we found that most SC neurons exhibited, on a trial-by-trial basis, stronger and earlier visual responses for faster saccadic reaction times. Moreover, these correlations were substantially higher for visual-motor neurons in the intermediate and deep layers than for more superficial visual-only neurons. No such correlations existed systematically in V1. Thus, visual responses in SC and V1 serve fundamentally different roles in active vision: V1 jumpstarts sensing and image analysis, but SC jumpstarts moving. I will finish by demonstrating, using V1 reversible inactivation, that, despite reformatting of signals from V1 to the brainstem, V1 is still a necessary gateway for visually-driven oculomotor responses to occur, even for the most reflexive of eye movement phenomena. This is a fundamental difference from rodent studies demonstrating clear V1-independent processing in afferent visual pathways bypassing the geniculostriate one, and it demonstrates the importance of multi-species comparisons in the study of oculomotor control.
SRF | Dreem EEG Headband to Assess Sleep EEG Biomarkers in SYNGAP1
Contentopic mapping and object dimensionality - a novel understanding of the organization of object knowledge
Our ability to recognize an object amongst many others is one of the most important features of the human mind. However, object recognition requires tremendous computational effort, as we need to make sense of a complex and recursive environment with ease and proficiency. This challenging feat depends on the implementation of an effective organization of knowledge in the brain. Here I put forth a novel understanding of how object knowledge is organized in the brain, proposing that this organization follows key object-related dimensions, analogously to how sensory information is organized in the brain. Moreover, I will also put forth that this knowledge is topographically laid out on the cortical surface according to these object-related dimensions that code for different types of representational content – I call this contentopic mapping. I will show a combination of fMRI and behavioral data to support these hypotheses and present a principled way to explore the multidimensionality of object processing.
CNS Control of Peripheral Mitochondrial Form and Function: Mitokines
My laboratory has made an intriguing discovery that mitochondrial stress in one tissue can be communicated to distal tissues. We find that mitochondrial stress in the nervous system triggers the production of entities known as mitokines. These mitokines are discharged from the nervous system, orchestrating a response in peripheral tissues that extends the lifespan of C. elegans. The revelation came as a surprise, given the prevalent belief that cell autonomous mechanisms would underlie the relationship between mitochondrial function and aging. It was also surprising given the prevailing dogma that mitochondrial function must be increased, not decreased, to improve health and longevity. Our work also underscores the fact that mitochondria, which originated as a microbial entity and later evolved into an intracellular symbiont, have retained their capacity for intercommunication, now facilitated by signals from the nervous system. We hypothesize that this communication has evolved as a mechanism to reduce infection from pathogens.
Analyzing Network-Level Brain Processing and Plasticity Using Molecular Neuroimaging
Behavior and cognition depend on the integrated action of neural structures and populations distributed throughout the brain. We recently developed a set of molecular imaging tools that enable multiregional processing and plasticity in neural networks to be studied at a brain-wide scale in rodents and nonhuman primates. Here we will describe how a novel genetically encoded activity reporter enables information flow in virally labeled neural circuitry to be monitored by fMRI. Using the reporter to perform functional imaging of synaptically defined neural populations in the rat somatosensory system, we show how activity is transformed within brain regions to yield characteristics specific to distinct output projections. We also show how this approach enables regional activity to be modeled in terms of inputs, in a paradigm that we are extending to address circuit-level origins of functional specialization in marmoset brains. In the second part of the talk, we will discuss how another genetic tool for MRI enables systematic studies of the relationship between anatomical and functional connectivity in the mouse brain. We show that variations in physical and functional connectivity can be dissociated both across individual subjects and over experience. We also use the tool to examine brain-wide relationships between plasticity and activity during an opioid treatment. This work demonstrates the possibility of studying diverse brain-wide processing phenomena using molecular neuroimaging.
The neural basis of exploration and decision-making in individuals and groups
Screen Savers: Protecting adolescent mental health in a digital world
In our rapidly evolving digital world, there is increasing concern about the impact of digital technologies and social media on the mental health of young people. Policymakers and the public are nervous. Psychologists are facing mounting pressure to deliver evidence that can inform policies and practices to safeguard both young people and society at large. However, research progress is slow while technological change is accelerating. My talk will reflect on this, both as a question of psychological science and of metascience. Digital companies have designed highly popular environments that differ in important ways from traditional offline spaces. By revisiting the foundations of psychology (e.g. development and cognition) and considering the impact of digital changes on theories and findings, we gain deeper insights into questions such as the following. (1) How do digital environments exacerbate developmental vulnerabilities that predispose young people to mental health conditions? (2) How do digital designs interact with cognitive and learning processes, formalised through computational approaches such as reinforcement learning or Bayesian modelling? However, we also need to face deeper questions about what it means to do science about new technologies and the challenge of keeping pace with technological advancements. Therefore, I discuss the concept of ‘fast science’, where, during crises, scientists might lower their standards of evidence to reach conclusions more quickly. Might psychologists want to take this approach in the face of technological change and looming concerns? The talk concludes with a discussion of such strategies for 21st-century psychology research in the era of digitalization.
The Brain Prize winners' webinar
This webinar brings together three leaders in theoretical and computational neuroscience—Larry Abbott, Haim Sompolinsky, and Terry Sejnowski—to discuss how neural circuits generate fundamental aspects of the mind. Abbott illustrates mechanisms in electric fish that differentiate self-generated electric signals from external sensory cues, showing how predictive plasticity and two-stage signal cancellation mediate a sense of self. Sompolinsky explores attractor networks, revealing how discrete and continuous attractors can stabilize activity patterns, enable working memory, and incorporate chaotic dynamics underlying spontaneous behaviors. He further highlights the concept of object manifolds in high-level sensory representations and raises open questions on integrating connectomics with theoretical frameworks. Sejnowski bridges these motifs with modern artificial intelligence, demonstrating how large-scale neural networks capture language structures through distributed representations that parallel biological coding. Together, their presentations emphasize the synergy between empirical data, computational modeling, and connectomics in explaining the neural basis of cognition—offering insights into perception, memory, language, and the emergence of mind-like processes.
Decision and Behavior
This webinar addressed computational perspectives on how animals and humans make decisions, spanning normative, descriptive, and mechanistic models. Sam Gershman (Harvard) presented a capacity-limited reinforcement learning framework in which policies are compressed under an information bottleneck constraint. This approach predicts pervasive perseveration, stimulus‐independent “default” actions, and trade-offs between complexity and reward. Such policy compression reconciles observed action stochasticity and response time patterns with an optimal balance between learning capacity and performance. Jonathan Pillow (Princeton) discussed flexible descriptive models for tracking time-varying policies in animals. He introduced dynamic Generalized Linear Models (Sidetrack) and hidden Markov models (GLM-HMMs) that capture day-to-day and trial-to-trial fluctuations in choice behavior, including abrupt switches between “engaged” and “disengaged” states. These models provide new insights into how animals’ strategies evolve under learning. Finally, Kenji Doya (OIST) highlighted the importance of unifying reinforcement learning with Bayesian inference, exploring how cortical-basal ganglia networks might implement model-based and model-free strategies. He also described Japan’s Brain/MINDS 2.0 and Digital Brain initiatives, aiming to integrate multimodal data and computational principles into cohesive “digital brains.”
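To illustrate the capacity-limited idea in Gershman's presentation, the sketch below iterates a generic rate-distortion-style fixed point for a policy penalized by the mutual information I(S;A). It is an information-bottleneck-flavored toy, not his published implementation; function and parameter names are assumptions.

```python
import numpy as np

def compress_policy(Q, p_s, beta, n_iters=200):
    """Capacity-limited policy via a Blahut-Arimoto-style fixed point:
    trade expected reward against policy complexity I(S;A).
    Q: (n_states, n_actions) expected rewards; p_s: marginal state probabilities;
    beta: larger values allow more complex (state-dependent) policies."""
    n_s, n_a = Q.shape
    p_a = np.full(n_a, 1.0 / n_a)                     # marginal action distribution
    for _ in range(n_iters):
        logits = np.log(p_a)[None, :] + beta * Q
        pi = np.exp(logits - logits.max(axis=1, keepdims=True))
        pi /= pi.sum(axis=1, keepdims=True)           # pi(a|s) proportional to p(a) exp(beta * Q(s,a))
        p_a = p_s @ pi                                 # update the action marginal
    reward = float(np.sum(p_s[:, None] * pi * Q))
    complexity = float(np.sum(p_s[:, None] * pi * (np.log(pi) - np.log(p_a)[None, :])))  # I(S;A) in nats
    return pi, reward, complexity

# Example: low beta yields a near-uniform "default" policy (perseveration-like),
# high beta approaches the greedy, uncompressed policy.
Q = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 0.2]])
pi, R, I = compress_policy(Q, p_s=np.ones(3) / 3, beta=1.0)
```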
Brain circuits for spatial navigation
In this webinar on spatial navigation circuits, three researchers—Ann Hermundstad, Ila Fiete, and Barbara Webb—discussed how diverse species solve navigation problems using specialized yet evolutionarily conserved brain structures. Hermundstad illustrated the fruit fly’s central complex, focusing on how hardwired circuit motifs (e.g., sinusoidal steering curves) enable rapid, flexible learning of goal-directed navigation. This framework combines internal heading representations with modifiable goal signals, leveraging activity-dependent plasticity to adapt to new environments. Fiete explored the mammalian head-direction system, demonstrating how population recordings reveal a one-dimensional ring attractor underlying continuous integration of angular velocity. She showed that key theoretical predictions—low-dimensional manifold structure, isometry, uniform stability—are experimentally validated, underscoring parallels to insect circuits. Finally, Webb described honeybee navigation, featuring path integration, vector memories, route optimization, and the famous waggle dance. She proposed that allocentric velocity signals and vector manipulation within the central complex can encode and transmit distances and directions, enabling both sophisticated foraging and inter-bee communication via dance-based cues.
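A minimal rate-model sketch of the one-dimensional ring attractor idea discussed by Fiete (a generic textbook construction, not her analysis pipeline): a bump of activity is sustained by a symmetric cosine kernel and shifted by an asymmetric, velocity-gated weight component. Sign conventions and the drift gain are illustrative and not calibrated to physical units.

```python
import numpy as np

N = 100
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)       # preferred headings
W_sym = np.cos(theta[:, None] - theta[None, :])             # local excitation / distal inhibition
W_shift = np.sin(theta[:, None] - theta[None, :])           # asymmetric component for velocity shifts

r = np.exp(np.cos(theta - np.pi) / 0.1)                     # initialize a bump near pi
r /= r.sum()

def step(r, ang_vel, dt=0.01, tau=0.1, gain=0.2):
    inp = W_sym @ r + gain * ang_vel * (W_shift @ r)
    r = r + dt / tau * (-r + np.maximum(inp, 0.0))
    return r / r.sum()                                       # crude normalization keeps the bump bounded

headings = []
for t in range(2000):
    r = step(r, ang_vel=1.0)                                 # constant turning input
    headings.append(np.angle(np.sum(r * np.exp(1j * theta))))  # bump position (circular mean)
```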
Sensory cognition
This webinar features presentations from SueYeon Chung (New York University) and Srinivas Turaga (HHMI Janelia Research Campus) on theoretical and computational approaches to sensory cognition. Chung introduced a “neural manifold” framework to capture how high-dimensional neural activity is structured into meaningful manifolds reflecting object representations. She demonstrated that manifold geometry—shaped by radius, dimensionality, and correlations—directly governs a population’s capacity for classifying or separating stimuli under nuisance variations. Applying these ideas as a data analysis tool, she showed how measuring object-manifold geometry can explain transformations along the ventral visual stream and suggested that manifold principles also yield better self-supervised neural network models resembling mammalian visual cortex. Turaga described simulating the entire fruit fly visual pathway using its connectome, modeling 64 key cell types in the optic lobe. His team’s systematic approach—combining sparse connectivity from electron microscopy with simple dynamical parameters—recapitulated known motion-selective responses and produced novel testable predictions. Together, these studies underscore the power of combining connectomic detail, task objectives, and geometric theories to unravel neural computations bridging from stimuli to cognitive functions.
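As a rough illustration of the geometric quantities involved, this sketch computes simple descriptors (centroid norm, relative radius, participation-ratio dimension) of one object manifold from population responses. These are illustrative summaries only, not the replica-theory capacity measure used in Chung's framework.

```python
import numpy as np

def manifold_geometry(X):
    """Descriptive geometry of one object manifold.
    X: (n_samples, n_neurons) population responses to variations of one object."""
    centroid = X.mean(axis=0)
    deviations = X - centroid
    # radius of the point cloud relative to its distance from the origin
    radius = np.linalg.norm(deviations, axis=1).mean() / (np.linalg.norm(centroid) + 1e-12)
    cov = np.cov(deviations, rowvar=False)
    eig = np.clip(np.linalg.eigvalsh(cov), 0, None)
    dim = eig.sum() ** 2 / (np.sum(eig ** 2) + 1e-12)        # participation-ratio dimensionality
    return np.linalg.norm(centroid), radius, dim
```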
LLMs and Human Language Processing
This webinar convened researchers at the intersection of Artificial Intelligence and Neuroscience to investigate how large language models (LLMs) can serve as valuable “model organisms” for understanding human language processing. Presenters showcased evidence that brain recordings (fMRI, MEG, ECoG) acquired while participants read or listened to unconstrained speech can be predicted by representations extracted from state-of-the-art text- and speech-based LLMs. In particular, text-based LLMs tend to align better with higher-level language regions, capturing more semantic aspects, while speech-based LLMs excel at explaining early auditory cortical responses. However, purely low-level features can drive part of these alignments, complicating interpretations. New methods, including perturbation analyses, highlight which linguistic variables matter for each cortical area and time scale. Further, “brain tuning” of LLMs—fine-tuning on measured neural signals—can improve semantic representations and downstream language tasks. Despite open questions about interpretability and exact neural mechanisms, these results demonstrate that LLMs provide a promising framework for probing the computations underlying human language comprehension and production at multiple spatiotemporal scales.
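The standard analysis behind such brain-LLM alignments is an encoding model: regularized regression from LLM-layer features to each voxel's (or sensor's) time course. The sketch below shows the generic recipe with assumed variable names; real pipelines also add hemodynamic delays, nested cross-validation, and noise-ceiling normalization.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold

def encoding_model_score(llm_features, bold, alphas=np.logspace(0, 4, 9)):
    """Fit a ridge encoding model from LLM features (n_TRs, n_dims) to voxel
    responses (n_TRs, n_voxels); return per-voxel prediction correlations."""
    preds = np.zeros_like(bold)
    cv = KFold(n_splits=5, shuffle=False)        # keep temporal order within folds
    for train, test in cv.split(llm_features):
        model = RidgeCV(alphas=alphas)
        model.fit(llm_features[train], bold[train])
        preds[test] = model.predict(llm_features[test])
    scores = np.array([np.corrcoef(preds[:, v], bold[:, v])[0, 1]
                       for v in range(bold.shape[1])])
    return scores
```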
Learning and Memory
This webinar on learning and memory features three experts—Nicolas Brunel, Ashok Litwin-Kumar, and Julijana Gjorgieva—who present theoretical and computational approaches to understanding how neural circuits acquire and store information across different scales. Brunel discusses calcium-based plasticity and how standard “Hebbian-like” plasticity rules inferred from in vitro or in vivo datasets constrain synaptic dynamics, aligning with classical observations (e.g., STDP) and explaining how synaptic connectivity shapes memory. Litwin-Kumar explores insights from the fruit fly connectome, emphasizing how the mushroom body—a key site for associative learning—implements a high-dimensional, random representation of sensory features. Convergent dopaminergic inputs gate plasticity, reflecting a high-dimensional “critic” that refines behavior. Feedback loops within the mushroom body further reveal sophisticated interactions between learning signals and action selection. Gjorgieva examines how activity-dependent plasticity rules shape circuitry from the subcellular (e.g., synaptic clustering on dendrites) to the cortical network level. She demonstrates how spontaneous activity during development, Hebbian competition, and inhibitory-excitatory balance collectively establish connectivity motifs responsible for key computations such as response normalization.
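To give a flavor of the calcium-based rules Brunel discusses, here is a heavily simplified threshold sketch (loosely in the spirit of Graupner-Brunel-type models, not their published equations): spikes add calcium, and the synapse potentiates or depresses while calcium sits above illustrative thresholds.

```python
import numpy as np

def calcium_plasticity(pre_spikes, post_spikes, T=1000.0, dt=0.1,
                       C_pre=1.0, C_post=2.0, tau_ca=20.0,
                       theta_p=1.3, theta_d=1.0, gamma_p=0.01, gamma_d=0.005):
    """Minimal calcium-threshold plasticity sketch. Each spike adds calcium;
    the weight is potentiated while calcium exceeds theta_p and depressed
    while it exceeds theta_d. All parameter values are illustrative.
    pre_spikes, post_spikes: spike times in ms."""
    c, w = 0.0, 0.5
    pre = set(np.round(np.asarray(pre_spikes) / dt).astype(int))
    post = set(np.round(np.asarray(post_spikes) / dt).astype(int))
    for i in range(int(T / dt)):
        if i in pre:
            c += C_pre
        if i in post:
            c += C_post
        if c > theta_p:
            w += gamma_p * (1.0 - w) * dt          # potentiation above the high threshold
        elif c > theta_d:
            w -= gamma_d * w * dt                  # depression in the intermediate range
        c -= c / tau_ca * dt                        # exponential calcium decay
        w = min(max(w, 0.0), 1.0)
    return w
```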
How do we sleep?
There is no consensus on whether sleep is for the brain, the body, or both. But the difference in how we feel following disrupted sleep versus a good night of continuous sleep is striking. Understanding how and why we sleep will likely give insights into many aspects of health. In this talk I will outline our recent work on how the prefrontal cortex can signal to the hypothalamus to regulate sleep preparatory behaviours and sleep itself, and how other brain regions, including the ventral tegmental area, respond to psychosocial stress to induce beneficial sleep. I will also outline our work examining the function of the glymphatic system, and whether clearance of molecules from the brain is enhanced during sleep or wakefulness.
Brain-Wide Compositionality and Learning Dynamics in Biological Agents
Biological agents continually reconcile the internal states of their brain circuits with incoming sensory and environmental evidence to evaluate when and how to act. The brains of biological agents, including animals and humans, exploit many evolutionary innovations, chiefly modularity—observable at the level of anatomically-defined brain regions, cortical layers, and cell types among others—that can be repurposed in a compositional manner to endow the animal with a highly flexible behavioral repertoire. Accordingly, their behaviors show their own modularity, yet such behavioral modules seldom correspond directly to traditional notions of modularity in brains. It remains unclear how to link neural and behavioral modularity in a compositional manner. We propose a comprehensive framework—compositional modes—to identify overarching compositionality spanning specialized submodules, such as brain regions. Our framework directly links the behavioral repertoire with distributed patterns of population activity, brain-wide, at multiple concurrent spatial and temporal scales. Using whole-brain recordings of zebrafish brains, we introduce an unsupervised pipeline based on neural network models, constrained by biological data, to reveal highly conserved compositional modes across individuals despite the naturalistic (spontaneous or task-independent) nature of their behaviors. These modes provided a scaffolding for other modes that account for the idiosyncratic behavior of each fish. We then demonstrate experimentally that compositional modes can be manipulated in a consistent manner by behavioral and pharmacological perturbations. Our results demonstrate that even natural behavior in different individuals can be decomposed and understood using a relatively small number of neurobehavioral modules—the compositional modes—and elucidate a compositional neural basis of behavior. This approach aligns with recent progress in understanding how reasoning capabilities and internal representational structures develop over the course of learning or training, offering insights into the modularity and flexibility in artificial and biological agents.
Unmotivated bias
In this talk, I will explore how social affective biases arise even in the absence of motivational factors, as an emergent outcome of the basic structure of social learning. In several studies, we found that initial negative interactions with some members of a group can cause subsequent avoidance of the entire group, and that this avoidance perpetuates stereotypes. Additional cognitive modeling revealed that approach and avoidance behavior based on biased beliefs not only influences the evaluative (positive or negative) impressions of group members, but also shapes the depth of the cognitive representations available to learn about individuals. In other words, people have richer cognitive representations of members of groups that are not avoided, akin to individualized vs. group-level categories. I will end by presenting a series of multi-agent reinforcement learning simulations that demonstrate the emergence of these social-structural feedback loops in the development and maintenance of affective biases.
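A toy approach-avoidance simulation (not the authors' multi-agent reinforcement-learning model) makes the feedback loop explicit: a negative initial belief blocks interaction, so it never gets corrected, while the other group's belief converges to its true value. Group labels, thresholds, and noise levels are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_steps, lr = 1000, 0.1
true_value = {"A": 0.5, "B": 0.5}        # both groups are, in fact, equally positive
belief = {"A": 0.5, "B": -0.5}           # group B starts with a negative first impression
n_interactions = {"A": 0, "B": 0}

for t in range(n_steps):
    group = rng.choice(["A", "B"])       # a member of a random group is encountered
    if belief[group] < 0.0:
        continue                         # avoidance: no interaction, hence no belief update
    n_interactions[group] += 1
    outcome = true_value[group] + rng.normal(0.0, 0.2)
    belief[group] += lr * (outcome - belief[group])

print(belief, n_interactions)            # B's negative belief is never corrected
```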
Decomposing motivation into value and salience
Humans and other animals approach reward and avoid punishment and pay attention to cues predicting these events. Such motivated behavior thus appears to be guided by value, which directs behavior towards or away from positively or negatively valenced outcomes. Moreover, it is facilitated by (top-down) salience, which enhances attention to behaviorally relevant learned cues predicting the occurrence of valenced outcomes. Using human neuroimaging, we recently separated value (ventral striatum, posterior ventromedial prefrontal cortex) from salience (anterior ventromedial cortex, occipital cortex) in the domain of liquid reward and punishment. Moreover, we investigated potential drivers of learned salience: the probability and uncertainty with which valenced and non-valenced outcomes occur. We find that the brain dissociates valenced from non-valenced probability and uncertainty, which indicates that reinforcement matters for the brain, in addition to information provided by probability and uncertainty alone, regardless of valence. Finally, we assessed learning signals (unsigned prediction errors) that may underpin the acquisition of salience. Particularly the insula appears to be central for this function, encoding a subjective salience prediction error, similarly at the time of positively and negatively valenced outcomes. However, it appears to employ domain-specific time constants, leading to stronger salience signals in the aversive than the appetitive domain at the time of cues. These findings explain why previous research associated the insula with both valence-independent salience processing and with preferential encoding of the aversive domain. More generally, the distinction of value and salience appears to provide a useful framework for capturing the neural basis of motivated behavior.
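The value/salience distinction can be written down with a toy Rescorla-Wagner learner in which value tracks the signed prediction error while salience tracks its magnitude (an unsigned prediction error). This is a generic illustration, not the neuroimaging model used in the studies described above.

```python
import numpy as np

def value_and_salience_updates(outcomes, alpha=0.1):
    """Track a signed value signal and an unsigned salience signal
    from the same sequence of prediction errors.
    outcomes: array of experienced outcomes (+ reward, - punishment)."""
    V, S = 0.0, 0.0
    signed_pe, unsigned_pe = [], []
    for r in outcomes:
        delta = r - V
        signed_pe.append(delta)            # value learning uses the signed error
        unsigned_pe.append(abs(delta))     # salience learning uses its magnitude
        V += alpha * delta
        S += alpha * (abs(delta) - S)      # learned salience: expected surprise
    return V, S, signed_pe, unsigned_pe
```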
Localisation of Seizure Onset Zone in Epilepsy Using Time Series Analysis of Intracranial Data
There are over 30 million people with drug-resistant epilepsy worldwide. When neuroimaging and non-invasive neural recordings fail to localise seizure onset zones (SOZ), intracranial recordings become the best chance for localisation and seizure freedom in those patients. However, intracranial neural activity remains hard to discriminate visually across recording channels, which limits the success of visual inspection of intracranial data. In this presentation, I present methods which quantify intracranial neural time series and combine them with explainable machine learning algorithms to localise the SOZ in the epileptic brain. I discuss the potential and limitations of our methods for SOZ localisation in epilepsy, providing insights for future research in this area.
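As a generic illustration of this kind of pipeline (not the presenter's specific method), the sketch below extracts simple per-channel time-series features from intracranial recordings and feeds them to an interpretable classifier. The sampling rate, channel labels, and feature set are assumptions.

```python
import numpy as np
from scipy import signal, stats
from sklearn.ensemble import RandomForestClassifier

def channel_features(x, fs=512):
    """Simple per-channel features (illustrative choices only):
    line length, variance, kurtosis, and spectral band powers."""
    f, pxx = signal.welch(x, fs=fs, nperseg=fs * 2)
    bands = {"theta": (4, 8), "beta": (13, 30), "gamma": (30, 80), "hfo": (80, 250)}
    band_power = [np.trapz(pxx[(f >= lo) & (f < hi)], f[(f >= lo) & (f < hi)])
                  for lo, hi in bands.values()]
    return np.array([np.sum(np.abs(np.diff(x))), np.var(x), stats.kurtosis(x)] + band_power)

def fit_soz_classifier(X, y, fs=512):
    """X: (n_channels, n_samples) iEEG; y: 1 if the channel lies in the SOZ."""
    feats = np.stack([channel_features(ch, fs) for ch in X])
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(feats, y)
    return clf, clf.feature_importances_   # importances give a coarse "explanation"
```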
On finding what you’re (not) looking for: prospects and challenges for AI-driven discovery
Recent high-profile scientific achievements by machine learning (ML) and especially deep learning (DL) systems have reinvigorated interest in ML for automated scientific discovery (e.g., Wang et al. 2023). Much of this work is motivated by the thought that DL methods might facilitate the discovery of phenomena, hypotheses, or even models or theories more efficiently than traditional, theory-driven approaches to discovery. This talk considers some of the more specific obstacles to automated, DL-driven discovery in frontier science, focusing on gravitational-wave astrophysics (GWA) as a representative case study. In the first part of the talk, we argue that despite these efforts, prospects for DL-driven discovery in GWA remain uncertain. In the second part, we advocate a shift in focus towards the ways DL can be used to augment or enhance existing discovery methods, and the epistemic virtues and vices associated with these uses. We argue that the primary epistemic virtue of many such uses is to decrease the opportunity costs associated with investigating puzzling or anomalous signals, and that the right framework for evaluating these uses comes from philosophical work on pursuitworthiness.
Influence of the context of administration on the antidepressant-like effects of the psychedelic 5-MeO-DMT
Psychedelics like psilocybin have shown rapid and long-lasting efficacy against depressive and anxiety symptoms. Other psychedelics with shorter half-lives, such as DMT and 5-MeO-DMT, have also shown promising preliminary outcomes in major depression, making them interesting candidates for clinical practice. Despite several promising clinical studies, the influence of the context of administration on therapeutic responses or adverse effects remains poorly documented. To address this, we conducted preclinical studies evaluating the psychopharmacological profile of 5-MeO-DMT in contexts previously validated in mice as either pleasant (positive setting) or aversive (negative setting). Healthy C57BL/6J male mice received a single intraperitoneal (i.p.) injection of 5-MeO-DMT at doses of 0.5, 5, or 10 mg/kg, with assessments at 2 hours, 24 hours, and one week post-administration. In a corticosterone (CORT) mouse model of depression, 5-MeO-DMT was administered in different settings, and behavioral tests mimicking core symptoms of depression and anxiety were conducted. In CORT-exposed mice, an acute dose of 0.5 mg/kg administered in a neutral setting produced antidepressant-like effects at 24 hours, as shown by reduced immobility time in the Tail Suspension Test (TST). In a positive setting, the drug also reduced latency to first immobility and total immobility time in the TST. However, these beneficial effects were negated in a negative setting, where 5-MeO-DMT failed to produce antidepressant-like effects and instead elicited an anxiogenic response in the Elevated Plus Maze (EPM). Our results indicate a strong influence of setting on the psychopharmacological profile of 5-MeO-DMT. Future experiments will examine cortical markers of pre- and post-synaptic density to correlate neuroplasticity changes with the behavioral effects of 5-MeO-DMT in different settings.
How can marsupials help us to understand neocortical evolution and plasticity?
Neural mechanisms governing the learning and execution of avoidance behavior
The nervous system orchestrates adaptive behaviors by intricately coordinating responses to internal cues and environmental stimuli. This involves integrating sensory input, managing competing motivational states, and drawing on past experiences to anticipate future outcomes. While traditional models attribute this complexity to interactions between the mesocorticolimbic system and hypothalamic centers, the specific nodes of integration have remained elusive. Recent research, including our own, sheds light on the midline thalamus's overlooked role in this process. We propose that the midline thalamus integrates internal states with memory and emotional signals to guide adaptive behaviors. Our investigations into midline thalamic neuronal circuits have provided crucial insights into the neural mechanisms behind flexibility and adaptability. Understanding these processes is essential for deciphering human behavior and conditions marked by impaired motivation and emotional processing. Our research aims to contribute to this understanding, paving the way for targeted interventions and therapies to address such impairments.
Probing neural population dynamics with recurrent neural networks
Large-scale recordings of neural activity are providing new opportunities to study network-level dynamics in unprecedented detail. However, the sheer volume of data and its dynamical complexity are major barriers to uncovering and interpreting these dynamics. I will present latent factor analysis via dynamical systems (LFADS), a sequential autoencoding approach that enables inference of dynamics from neuronal population spiking activity on single trials and at millisecond timescales. I will also discuss recent adaptations of the method to uncover dynamics from neural activity recorded via two-photon (2P) calcium imaging. Finally, time permitting, I will mention recent efforts to improve the interpretability of deep-learning-based dynamical systems models.
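For readers unfamiliar with the approach, here is a stripped-down sequential-autoencoder sketch in PyTorch (no controller network and no variational machinery, so it is not the published LFADS implementation): an encoder RNN maps a spike-count trial to an initial condition, a generator RNN unrolls autonomous dynamics, and low-dimensional factors set Poisson firing rates.

```python
import torch
import torch.nn as nn

class SequentialAutoencoder(nn.Module):
    """Minimal LFADS-style sketch: encode a trial of spike counts into an
    initial condition, unroll a generator RNN, and read out Poisson rates
    through a low-dimensional factor bottleneck."""
    def __init__(self, n_neurons, enc_dim=64, gen_dim=64, factor_dim=8):
        super().__init__()
        self.encoder = nn.GRU(n_neurons, enc_dim, batch_first=True, bidirectional=True)
        self.to_ic = nn.Linear(2 * enc_dim, gen_dim)
        self.generator = nn.GRUCell(1, gen_dim)        # autonomous dynamics (dummy input)
        self.to_factors = nn.Linear(gen_dim, factor_dim)
        self.to_rates = nn.Linear(factor_dim, n_neurons)

    def forward(self, spikes):                         # spikes: float (batch, time, neurons)
        _, h = self.encoder(spikes)                    # h: (2, batch, enc_dim)
        g = self.to_ic(torch.cat([h[0], h[1]], dim=-1))
        dummy = spikes.new_zeros(spikes.shape[0], 1)
        rates = []
        for t in range(spikes.shape[1]):
            g = self.generator(dummy, g)
            f = self.to_factors(g)
            rates.append(torch.exp(self.to_rates(f)))  # Poisson rates must be positive
        return torch.stack(rates, dim=1)

# training objective: Poisson negative log-likelihood of the observed spike counts
# loss = nn.PoissonNLLLoss(log_input=False)(model(spikes), spikes)
```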
Retinal Photoreceptor Diversity Across Mammals
Navigating semantic spaces: recycling the brain GPS for higher-level cognition
Humans share with other animals a complex neuronal machinery that evolved to support navigation in physical space, underpinning wayfinding and path integration. In my talk I will present a series of recent neuroimaging studies in humans, performed in my lab, aimed at investigating the idea that this same neural navigation system (the “brain GPS”) is also used to organize and navigate concepts and memories, and that abstract and spatial representations rely on a common neural fabric. I will argue that this might represent a novel example of “cortical recycling”, in which the neuronal machinery that primarily evolved, in lower-level animals, to represent relationships between spatial locations and to navigate space is reused in humans to encode relationships between concepts in an internal abstract representational space of meaning.
Exploring the cerebral mechanisms of acoustically-challenging speech comprehension - successes, failures and hope
Comprehending speech under acoustically challenging conditions is an everyday task that we can often execute with ease. However, accomplishing this requires the engagement of cognitive resources, such as auditory attention and working memory. The mechanisms that contribute to the robustness of speech comprehension are of substantial interest in the context of mild to moderate hearing impairment, in which affected individuals typically report specific difficulties understanding speech in background noise. Although hearing aids can help to mitigate this, they do not represent a universal solution; finding alternative interventions is therefore necessary. Given that age-related hearing loss (“presbycusis”) is inevitable, developing new approaches is all the more important in the context of aging populations. Moreover, untreated hearing loss in middle age has been identified as the most significant potentially modifiable predictor of dementia in later life. I will present research that has used a multi-methodological approach (fMRI, EEG, MEG and non-invasive brain stimulation) to try to elucidate the mechanisms that comprise the cognitive “last mile” of acoustically challenging speech comprehension and to find ways to enhance them.
Applied cognitive neuroscience to improve learning and therapeutics
Advancements in cognitive neuroscience have provided profound insights into the workings of the human brain, and the methods used offer opportunities to enhance performance, cognition, and mental health. Drawing upon interdisciplinary collaborations at the University of California San Diego Human Performance Optimization Lab, this talk explores the application of cognitive neuroscience principles in three domains to improve human performance and alleviate mental health challenges. The first section will discuss studies addressing the role of vision and oculomotor function in athletic performance and the potential to train these foundational abilities to improve performance and sports outcomes. The second domain considers the use of electrophysiological measurements of the brain and heart to detect, and possibly predict, errors in manual performance, as shown in a series of studies with surgeons performing robot-assisted surgery. Lastly, I will discuss findings from clinical trials testing personalized interventional treatments for mood disorders, in which the temporal and spatial parameters of transcranial magnetic stimulation (TMS) are individualized to test whether personalization improves treatment response and whether these parameters can serve as predictive biomarkers to guide treatment selection. Together, these translational studies use the measurement tools and constructs of cognitive neuroscience to improve human performance and well-being.
Competition and integration of sensory signals in a deep reinforcement learning agent
Bernstein Conference 2024
Controlled sampling of non-equilibrium brain dynamics: modeling and estimation from neuroimaging signals
Bernstein Conference 2024
Computational modelling of dentate granule cells reveals Pareto optimal trade-off between pattern separation and energy efficiency (economy)
Bernstein Conference 2024
Deep inverse modeling reveals dynamic-dependent invariances in neural circuit mechanisms
Bernstein Conference 2024
Distinct contributions of prefrontal, parietal, and cingulate signals to exploratory decisions
Bernstein Conference 2024
Knocking out co-active plasticity rules in neural networks reveals synapse type-specific contributions for learning and memory
Bernstein Conference 2024
Multi-scale single-cycle analysis of cortex-wide wave dynamics reveals complex spatio-temporal structure
Bernstein Conference 2024
Reverse engineering recurrent network models reveals mechanisms for location memory
Bernstein Conference 2024
A Single-Layer Neuromorphic Encoder Maps EMG Signals into Wrist Kinematics
Bernstein Conference 2024
Structured signals by a loss of structure: causes of burst-suppression EEG
Bernstein Conference 2024
Unsupervised clustering of burst shapes reveals the increasing complexity of developing networks in vitro
Bernstein Conference 2024
Biological multi-task learning with top-down signals
COSYNE 2022
Chromatic contrast and angle of polarization signals are integrated in the Drosophila central complex
COSYNE 2022
Dendritic integration of thalamic HD signals and retrosplenial input in presubicular neurons
COSYNE 2022
Dynamical systems analysis reveals a novel hypothalamic encoding of state in nodes controlling social behavior
COSYNE 2022
Exploiting color space geometry for visual stimulus design across animals
COSYNE 2022
Fast ACh signals and the optimal control of attention in a detection task
COSYNE 2022
Flexible cue anchoring strategies enable stable head direction coding in blind animals
COSYNE 2022
Hierarchical modularity in Drosophila brain reveals novel organizational principles
COSYNE 2022
High-level prediction signals cascade through the macaque face-processing hierarchy
COSYNE 2022
Holographic activation of neural ensembles reveals both space and feature based cortical microcircuitry
COSYNE 2022
Metastable circuit dynamics explains optimal coding of auditory stimuli at moderate arousals
COSYNE 2022
Novelty modulates neural coding and reveals functional diversity within excitatory and inhibitory populations in the visual cortex
COSYNE 2022
Representations of supra-second time intervals in the cerebellum of larval zebrafish
COSYNE 2022
Social cues modulate circuit dynamics to control the choice between communication signals in flies
COSYNE 2022
A new tool for automated annotation of complex birdsong reveals dynamics of canary syntax rules
COSYNE 2022
Automated neuron tracking inside moving and deforming animals using deep learning and targeted augmentation
COSYNE 2023
Blazed oblique plane microscopy reveals scale-invariant predictions of brain-wide activity
COSYNE 2023
Cross-trial alignment reveals a low-dimensional cortical manifold of naturalistic speech production
COSYNE 2023
Detecting rhythmic spiking through the power spectra of point process model residuals
COSYNE 2023
Adaptive brain-computer interfaces based on error-related potentials and reinforcement learning
Bernstein Conference 2024