Spike train structure of cortical transcriptomic populations in vivo
The cortex comprises many neuronal types, which can be distinguished by their transcriptomes: the sets of genes they express. Little is known about the in vivo activity of these cell types, particularly as regards the structure of their spike trains, which might provide clues to cortical circuit function. To address this question, we used Neuropixels electrodes to record layer 5 excitatory populations in mouse V1, then transcriptomically identified the recorded cell types. To do so, we performed a subsequent recording of the same cells using 2-photon (2p) calcium imaging, identifying neurons between the two recording modalities by fingerprinting their responses to a “zebra noise” stimulus and estimating the path of the electrode through the 2p stack with a probabilistic method. We then cut brain slices and performed in situ transcriptomics to localize ~300 genes using coppaFISH3d, a new open source method, and aligned the transcriptomic data to the 2p stack. Analysis of the data is ongoing, and suggests substantial differences in spike time coordination between ET and IT neurons, as well as between transcriptomic subtypes of both these excitatory types.
Low intensity rTMS: age dependent effects, and mechanisms underlying neural plasticity
Neuroplasticity is essential for the establishment and strengthening of neural circuits. Repetitive transcranial magnetic stimulation (rTMS) is commonly used to modulate cortical excitability and shows promise in the treatment of some neurological disorders. Low-intensity magnetic stimulation (LI-rTMS), which does not directly elicit action potentials in the stimulated neurons, has also shown some therapeutic effects, and it is important to determine the biological mechanisms underlying the effects of these low-intensity magnetic fields, such as would occur in the regions surrounding the central high-intensity focus of rTMS. Our team has used a focal low-intensity (10 mT) magnetic stimulation approach to address some of these questions and to identify cellular mechanisms. I will present several studies from our laboratory addressing (1) effects of LI-rTMS on neuronal activity and excitability; and (2) effects on neuronal morphology and post-lesion repair. Taken together, our results indicate that the effects of LI-rTMS depend upon the stimulation pattern, the age of the animal, and the presence of cellular magnetoreceptors.
Understanding reward-guided learning using large-scale datasets
Understanding the neural mechanisms of reward-guided learning is a long-standing goal of computational neuroscience. Recent methodological innovations enable us to collect ever larger neural and behavioral datasets. This presents opportunities to achieve greater understanding of learning in the brain at scale, as well as methodological challenges. In the first part of the talk, I will discuss our recent insights into the mechanisms by which zebra finch songbirds learn to sing. Dopamine has long been thought to guide reward-based trial-and-error learning by encoding reward prediction errors. However, it is unknown whether the learning of natural behaviours, such as developmental vocal learning, occurs through dopamine-based reinforcement. Longitudinal recordings of dopamine and bird songs reveal that dopamine activity is indeed consistent with encoding a reward prediction error during naturalistic learning. In the second part of the talk, I will talk about recent work we are doing at DeepMind to develop tools for automatically discovering interpretable models of behavior directly from animal choice data. Our method, dubbed CogFunSearch, uses LLMs within an evolutionary search process in order to "discover" novel models in the form of Python programs that excel at accurately predicting animal behavior during reward-guided learning. The discovered programs reveal novel patterns of learning and choice behavior that update our understanding of how the brain solves reinforcement learning problems.
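The discovered programs themselves are not reproduced here; purely as an illustration of the kind of compact, interpretable Python program such a search operates over (and of the reward-prediction-error idea discussed in the first part of the talk), a minimal trial-and-error learner for a two-armed bandit might look like the sketch below. All names and parameter values are hypothetical.

```python
import numpy as np

def simulate_rpe_learner(rewards, alpha=0.1, beta=3.0, seed=0):
    """Minimal reward-prediction-error (Q-learning) model of choice.

    rewards : array of shape (n_trials, 2), reward available on each arm.
    Returns the chosen arm and the prediction error on every trial.
    """
    rng = np.random.default_rng(seed)
    q = np.zeros(2)                      # value estimate for each arm
    choices, rpes = [], []
    for r in rewards:
        p_right = 1.0 / (1.0 + np.exp(-beta * (q[1] - q[0])))  # softmax over two arms
        c = int(rng.random() < p_right)  # sampled choice
        rpe = r[c] - q[c]                # reward prediction error
        q[c] += alpha * rpe              # value update
        choices.append(c)
        rpes.append(rpe)
    return np.array(choices), np.array(rpes)

# Example: arm 1 pays off more often, so choices should drift toward it.
rng = np.random.default_rng(1)
rewards = np.stack([rng.random(500) < 0.2, rng.random(500) < 0.8], axis=1).astype(float)
choices, rpes = simulate_rpe_learner(rewards)
print("fraction of choices to the better arm:", choices.mean())
```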
Representational drift in human visual cortex
Developmental and evolutionary perspectives on thalamic function
The organization and function of the brain are complex topics. We are good at establishing correlates of perception and behavior across forebrain circuits, as well as manipulating activity in these circuits to affect behavior. However, we still lack good models for the large-scale organization and function of the forebrain. What are the contributions of the cortex, basal ganglia, and thalamus to behavior? In addressing these questions, we often ascribe function to each area as if it were an independent processing unit. However, we know from the anatomy that the cortex, basal ganglia, and thalamus are massively interconnected in a large network. One way to generate insight into these questions is to consider the evolution and development of forebrain systems. In this talk, I will discuss developmental and evolutionary (comparative anatomy) data on the thalamus and how it fits within forebrain networks. I will address questions including when the thalamus appeared in evolution, how the thalamus is organized across the vertebrate lineage, and how changes in the organization of forebrain networks can affect behavioral repertoires.
The Direct Impact of Amyloid-Beta Oligomers on Neuronal Activity and Neurotransmitter Release: An In Vivo Analysis
Neural mechanisms of rhythmic motor control in Drosophila
All animal locomotion is rhythmic, whether it is achieved through undulatory movement of the whole body or the coordination of articulated limbs. Neurobiologists have long studied locomotor circuits that produce rhythmic activity with non-rhythmic input, also called central pattern generators (CPGs). However, the cellular and microcircuit implementation of a walking CPG has not been described for any limbed animal. New comprehensive connectomes of the fruit fly ventral nerve cord (VNC) provide an opportunity to study rhythmogenic walking circuits at a synaptic scale. We use a data-driven network modeling approach to identify and characterize a putative walking CPG in the Drosophila leg motor system.
Relating circuit dynamics to computation: robustness and dimension-specific computation in cortical dynamics
Neural dynamics represent the hard-to-interpret substrate of circuit computations. Advances in large-scale recordings have highlighted the sheer spatiotemporal complexity of circuit dynamics within and across circuits, portraying in detail the difficulty of interpreting such dynamics and relating them to computation. Indeed, even in extremely simplified experimental conditions, one observes high-dimensional temporal dynamics in the relevant circuits. This complexity can potentially be addressed by the notion that not all changes in population activity have equal meaning, i.e., a small change in the evolution of activity along a particular dimension may have a bigger effect on a given computation than a large change in another. We term such conditions dimension-specific computation. Considering motor preparatory activity in a delayed response task, we utilized neural recordings performed simultaneously with optogenetic perturbations to probe circuit dynamics. First, we revealed a remarkable robustness in the detailed evolution of certain dimensions of the population activity, beyond what was thought to be the case experimentally and theoretically. Second, the robust dimension in activity space carries nearly all of the decodable behavioral information, whereas other, non-robust dimensions contain nearly no decodable information, as if the circuit were set up to make informative dimensions stiff, i.e., resistive to perturbations, leaving uninformative dimensions sloppy, i.e., sensitive to perturbations. Third, we show that this robustness can be achieved by a modular organization of circuitry, whereby modules whose dynamics normally evolve independently can correct each other’s dynamics when an individual module is perturbed, a common design feature in robust systems engineering. Finally, we will present recent work extending this framework to understanding the neural dynamics underlying the preparation of speech.
Deepfake emotional expressions trigger the uncanny valley brain response, even when they are not recognised as fake
Facial expressions are inherently dynamic, and our visual system is sensitive to subtle changes in their temporal sequence. However, researchers often use dynamic morphs of photographs—simplified, linear representations of motion—to study the neural correlates of dynamic face perception. To explore the brain's sensitivity to natural facial motion, we constructed a novel dynamic face database using generative neural networks, trained on a verified set of video-recorded emotional expressions. The resulting deepfakes, consciously indistinguishable from videos, enabled us to separate biological motion from photorealistic form. Results showed that conventional dynamic morphs elicit distinct responses in the brain compared to videos and photos, suggesting that they violate expectations (N400) and have reduced social salience (late positive potential). This suggests that dynamic morphs misrepresent facial dynamism, resulting in misleading insights about the neural and behavioural correlates of face perception. Deepfakes and videos elicited largely similar neural responses, suggesting they could be used as a proxy for real faces in vision research where video recordings cannot be experimentally manipulated. And yet, despite being consciously undetectable as fake, deepfakes elicited an expectation-violation response in the brain. This points to a neural sensitivity to naturalistic facial motion beyond conscious awareness. Despite some differences in neural responses, the realism and manipulability of deepfakes make them a valuable asset for research where videos are unfeasible. Using these stimuli, we proposed a novel marker for the conscious perception of naturalistic facial motion – frontal delta activity – which was elevated for videos and deepfakes, but not for photos or dynamic morphs.
An inconvenient truth: pathophysiological remodeling of the inner retina in photoreceptor degeneration
Photoreceptor loss is the primary cause of vision impairment and blindness in diseases such as retinitis pigmentosa and age-related macular degeneration. Beyond removing light-evoked input, the death of rods and cones allows retinoids to permeate the inner retina, causing retinal ganglion cells to become spontaneously hyperactive, severely reducing the signal-to-noise ratio and interfering with communication between the surviving retina and the brain. Treatments aimed at blocking or reducing this hyperactivity improve vision initiated from surviving photoreceptors and could enhance the signal fidelity generated by vision restoration methodologies.
Circuit Mechanisms of Remote Memory
Memories of emotionally salient events are long-lasting, guiding behavior from minutes to years after learning. The prelimbic cortex (PL) is required for fear memory retrieval across time and is densely interconnected with many subcortical and cortical areas involved in recent and remote memory recall, including the temporal association area (TeA). While the behavioral expression of a memory may remain constant over time, the neural activity mediating memory-guided behavior is dynamic. In PL, different neurons underlie recent and remote memory retrieval, and remote memory-encoding neurons have preferential functional connectivity with cortical association areas, including TeA. TeA plays a preferential role in remote compared to recent memory retrieval, yet how TeA circuits drive remote memory retrieval remains poorly understood. Here we used a combination of activity-dependent neuronal tagging, viral circuit mapping and miniscope imaging to investigate the role of the PL-TeA circuit in fear memory retrieval across time in mice. We show that PL memory ensembles recruit PL-TeA neurons across time, and that PL-TeA neurons have enhanced encoding of salient cues and behaviors at remote timepoints. This recruitment depends upon ongoing synaptic activity in the learning-activated PL ensemble. Our results reveal a novel circuit encoding remote memory and provide insight into the principles of memory circuit reorganization across time.
Predicting traveling waves: a new mathematical technique to link the structure of a network to the specific patterns of neural activity
Dimensionality reduction beyond neural subspaces
Over the past decade, neural representations have been studied through the lens of low-dimensional subspaces defined by the co-activation of neurons. However, this view has overlooked other forms of covarying structure in neural activity, including i) condition-specific high-dimensional neural sequences, and ii) representations that change over time due to learning or drift. In this talk, I will present a new framework that extends the classic view towards additional types of covariability that are not constrained to a fixed, low-dimensional subspace. In addition, I will present sliceTCA, a new tensor decomposition that captures and demixes these different types of covariability to reveal task-relevant structure in neural activity. Finally, I will close with some thoughts regarding the circuit mechanisms that could generate mixed covariability. Together, this work points to a need to consider new possibilities for how neural populations encode sensory, cognitive, and behavioral variables beyond neural subspaces.
Analyzing Network-Level Brain Processing and Plasticity Using Molecular Neuroimaging
Behavior and cognition depend on the integrated action of neural structures and populations distributed throughout the brain. We recently developed a set of molecular imaging tools that enable multiregional processing and plasticity in neural networks to be studied at a brain-wide scale in rodents and nonhuman primates. Here we will describe how a novel genetically encoded activity reporter enables information flow in virally labeled neural circuitry to be monitored by fMRI. Using the reporter to perform functional imaging of synaptically defined neural populations in the rat somatosensory system, we show how activity is transformed within brain regions to yield characteristics specific to distinct output projections. We also show how this approach enables regional activity to be modeled in terms of inputs, in a paradigm that we are extending to address circuit-level origins of functional specialization in marmoset brains. In the second part of the talk, we will discuss how another genetic tool for MRI enables systematic studies of the relationship between anatomical and functional connectivity in the mouse brain. We show that variations in physical and functional connectivity can be dissociated both across individual subjects and over experience. We also use the tool to examine brain-wide relationships between plasticity and activity during an opioid treatment. This work demonstrates the possibility of studying diverse brain-wide processing phenomena using molecular neuroimaging.
Enhancing Real-World Event Memory
Memory is essential for shaping how we interpret the world, plan for the future, and understand ourselves, yet effective cognitive interventions for real-world episodic memory loss remain scarce. This talk introduces HippoCamera, a smartphone-based intervention inspired by how the brain supports memory, designed to enhance real-world episodic recollection by replaying high-fidelity autobiographical cues. It will showcase how our approach improves memory, mood, and hippocampal activity while uncovering links between memory distinctiveness, well-being, and the perception of time.
The Brain Prize winners' webinar
This webinar brings together three leaders in theoretical and computational neuroscience—Larry Abbott, Haim Sompolinsky, and Terry Sejnowski—to discuss how neural circuits generate fundamental aspects of the mind. Abbott illustrates mechanisms in electric fish that differentiate self-generated electric signals from external sensory cues, showing how predictive plasticity and two-stage signal cancellation mediate a sense of self. Sompolinsky explores attractor networks, revealing how discrete and continuous attractors can stabilize activity patterns, enable working memory, and incorporate chaotic dynamics underlying spontaneous behaviors. He further highlights the concept of object manifolds in high-level sensory representations and raises open questions on integrating connectomics with theoretical frameworks. Sejnowski bridges these motifs with modern artificial intelligence, demonstrating how large-scale neural networks capture language structures through distributed representations that parallel biological coding. Together, their presentations emphasize the synergy between empirical data, computational modeling, and connectomics in explaining the neural basis of cognition—offering insights into perception, memory, language, and the emergence of mind-like processes.
Brain circuits for spatial navigation
In this webinar on spatial navigation circuits, three researchers—Ann Hermundstad, Ila Fiete, and Barbara Webb—discussed how diverse species solve navigation problems using specialized yet evolutionarily conserved brain structures. Hermundstad illustrated the fruit fly’s central complex, focusing on how hardwired circuit motifs (e.g., sinusoidal steering curves) enable rapid, flexible learning of goal-directed navigation. This framework combines internal heading representations with modifiable goal signals, leveraging activity-dependent plasticity to adapt to new environments. Fiete explored the mammalian head-direction system, demonstrating how population recordings reveal a one-dimensional ring attractor underlying continuous integration of angular velocity. She showed that key theoretical predictions—low-dimensional manifold structure, isometry, uniform stability—are experimentally validated, underscoring parallels to insect circuits. Finally, Webb described honeybee navigation, featuring path integration, vector memories, route optimization, and the famous waggle dance. She proposed that allocentric velocity signals and vector manipulation within the central complex can encode and transmit distances and directions, enabling both sophisticated foraging and inter-bee communication via dance-based cues.
Sensory cognition
This webinar features presentations from SueYeon Chung (New York University) and Srinivas Turaga (HHMI Janelia Research Campus) on theoretical and computational approaches to sensory cognition. Chung introduced a “neural manifold” framework to capture how high-dimensional neural activity is structured into meaningful manifolds reflecting object representations. She demonstrated that manifold geometry—shaped by radius, dimensionality, and correlations—directly governs a population’s capacity for classifying or separating stimuli under nuisance variations. Applying these ideas as a data analysis tool, she showed how measuring object-manifold geometry can explain transformations along the ventral visual stream and suggested that manifold principles also yield better self-supervised neural network models resembling mammalian visual cortex. Turaga described simulating the entire fruit fly visual pathway using its connectome, modeling 64 key cell types in the optic lobe. His team’s systematic approach—combining sparse connectivity from electron microscopy with simple dynamical parameters—recapitulated known motion-selective responses and produced novel testable predictions. Together, these studies underscore the power of combining connectomic detail, task objectives, and geometric theories to unravel neural computations bridging from stimuli to cognitive functions.
Learning and Memory
This webinar on learning and memory features three experts—Nicolas Brunel, Ashok Litwin-Kumar, and Julijana Gjorgieva—who present theoretical and computational approaches to understanding how neural circuits acquire and store information across different scales. Brunel discusses calcium-based plasticity and how standard “Hebbian-like” plasticity rules inferred from in vitro or in vivo datasets constrain synaptic dynamics, aligning with classical observations (e.g., STDP) and explaining how synaptic connectivity shapes memory. Litwin-Kumar explores insights from the fruit fly connectome, emphasizing how the mushroom body—a key site for associative learning—implements a high-dimensional, random representation of sensory features. Convergent dopaminergic inputs gate plasticity, reflecting a high-dimensional “critic” that refines behavior. Feedback loops within the mushroom body further reveal sophisticated interactions between learning signals and action selection. Gjorgieva examines how activity-dependent plasticity rules shape circuitry from the subcellular (e.g., synaptic clustering on dendrites) to the cortical network level. She demonstrates how spontaneous activity during development, Hebbian competition, and inhibitory-excitatory balance collectively establish connectivity motifs responsible for key computations such as response normalization.
Understanding the complex behaviors of the ‘simple’ cerebellar circuit
Every movement we make requires us to precisely coordinate muscle activity across our body in space and time. In this talk I will describe our efforts to understand how the brain generates flexible, coordinated movement. We have taken a behavior-centric approach to this problem, starting with the development of quantitative frameworks for mouse locomotion (LocoMouse; Machado et al., eLife 2015, 2020) and locomotor learning, in which mice adapt their locomotor symmetry in response to environmental perturbations (Darmohray et al., Neuron 2019). Combined with genetic circuit dissection, these studies reveal specific, cerebellum-dependent features of these complex, whole-body behaviors. This provides a key entry point for understanding how neural computations within the highly stereotyped cerebellar circuit support the precise coordination of muscle activity in space and time. Finally, I will present recent unpublished data that provide surprising insights into how cerebellar circuits flexibly coordinate whole-body movements in dynamic environments.
Brain-Wide Compositionality and Learning Dynamics in Biological Agents
Biological agents continually reconcile the internal states of their brain circuits with incoming sensory and environmental evidence to evaluate when and how to act. The brains of biological agents, including animals and humans, exploit many evolutionary innovations, chiefly modularity—observable at the level of anatomically-defined brain regions, cortical layers, and cell types among others—that can be repurposed in a compositional manner to endow the animal with a highly flexible behavioral repertoire. Accordingly, their behaviors show their own modularity, yet such behavioral modules seldom correspond directly to traditional notions of modularity in brains. It remains unclear how to link neural and behavioral modularity in a compositional manner. We propose a comprehensive framework—compositional modes—to identify overarching compositionality spanning specialized submodules, such as brain regions. Our framework directly links the behavioral repertoire with distributed patterns of population activity, brain-wide, at multiple concurrent spatial and temporal scales. Using whole-brain recordings of zebrafish brains, we introduce an unsupervised pipeline based on neural network models, constrained by biological data, to reveal highly conserved compositional modes across individuals despite the naturalistic (spontaneous or task-independent) nature of their behaviors. These modes provided a scaffolding for other modes that account for the idiosyncratic behavior of each fish. We then demonstrate experimentally that compositional modes can be manipulated in a consistent manner by behavioral and pharmacological perturbations. Our results demonstrate that even natural behavior in different individuals can be decomposed and understood using a relatively small number of neurobehavioral modules—the compositional modes—and elucidate a compositional neural basis of behavior. This approach aligns with recent progress in understanding how reasoning capabilities and internal representational structures develop over the course of learning or training, offering insights into the modularity and flexibility in artificial and biological agents.
Use case determines the validity of neural systems comparisons
Deep learning provides new data-driven tools to relate neural activity to perception and cognition, aiding scientists in developing theories of neural computation that increasingly resemble biological systems both at the level of behavior and of neural activity. But what in a deep neural network should correspond to what in a biological system? This question is addressed implicitly in the use of comparison measures that relate specific neural or behavioral dimensions via a particular functional form. However, distinct comparison methodologies can give conflicting results in recovering even a known ground-truth model in an idealized setting, leaving open the question of what to conclude from the outcome of a systems comparison using any given methodology. Here, we develop a framework to make explicit and quantitative the effect of both hypothesis-driven aspects—such as details of the architecture of a deep neural network—as well as methodological choices in a systems comparison setting. We demonstrate via the learning dynamics of deep neural networks that, while the role of the comparison methodology is often de-emphasized relative to hypothesis-driven aspects, this choice can impact and even invert the conclusions to be drawn from a comparison between neural systems. We provide evidence that the right way to adjudicate a comparison depends on the use case—the scientific hypothesis under investigation—which could range from identifying single-neuron or circuit-level correspondences to capturing generalizability to new stimulus properties.
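For concreteness, one widely used comparison measure is linear centered kernel alignment (CKA); a minimal implementation is sketched below. This is illustrative only and is not necessarily one of the measures evaluated in the talk.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two response matrices of shape (n_stimuli, n_units).

    Returns a similarity score in [0, 1]; invariant to rotations and
    isotropic scaling of either representation.
    """
    X = X - X.mean(axis=0, keepdims=True)   # center each unit
    Y = Y - Y.mean(axis=0, keepdims=True)
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, "fro")
    norm_y = np.linalg.norm(Y.T @ Y, "fro")
    return hsic / (norm_x * norm_y)

# Two systems' responses to the same 200 stimuli (toy data).
rng = np.random.default_rng(0)
model = rng.standard_normal((200, 50))
brain = model @ rng.standard_normal((50, 30)) + 0.5 * rng.standard_normal((200, 30))
print("CKA(model, brain) =", round(linear_cka(model, brain), 3))
```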
Beyond Homogeneity: Characterizing Brain Disorder Heterogeneity through EEG and Normative Modeling
Electroencephalography (EEG) has been thoroughly studied for decades in psychiatry research, yet its integration into clinical practice as a diagnostic or prognostic tool remains unachieved. We hypothesize that a key reason is underlying patient heterogeneity, which is overlooked in psychiatric EEG research relying on a case-control approach. We combine HD-EEG with normative modeling to quantify this heterogeneity using two well-established and extensively investigated EEG characteristics, spectral power and functional connectivity, across a cohort of 1674 patients with attention-deficit/hyperactivity disorder, autism spectrum disorder, learning disorder, or anxiety, and 560 matched controls. Normative models showed that deviations from population norms among patients were highly heterogeneous and frequency-dependent: the spatial overlap of deviations across patients did not exceed 40% for spectral power and 24% for functional connectivity. Accounting for individual deviations significantly enhanced comparative analyses, and patient-specific deviation markers correlated with clinical assessments, representing a crucial step towards precision psychiatry through EEG.
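A minimal sketch of the deviation-scoring logic is given below. It is illustrative only: real normative models typically regress out covariates such as age and sex rather than using a simple control-group mean, and the array shapes here are placeholders.

```python
import numpy as np

def deviation_zscores(controls, patients, threshold=2.0):
    """Z-score each patient feature against the control distribution.

    controls : (n_controls, n_features) e.g. spectral power per electrode/band
    patients : (n_patients, n_features)
    Returns z-scores and a boolean mask of extreme deviations (|z| > threshold).
    """
    mu = controls.mean(axis=0)
    sd = controls.std(axis=0, ddof=1)
    z = (patients - mu) / sd
    return z, np.abs(z) > threshold

def spatial_overlap(deviant_mask):
    """Fraction of patients showing a deviation at each feature (electrode)."""
    return deviant_mask.mean(axis=0)

rng = np.random.default_rng(0)
controls = rng.standard_normal((560, 64))          # 64 electrodes, one frequency band
patients = rng.standard_normal((1674, 64)) * 1.3   # broader spread than controls
z, mask = deviation_zscores(controls, patients)
print("max spatial overlap across patients:", spatial_overlap(mask).max())
```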
Optogenetic control of Nodal signaling patterns
Embryos issue instructions to their cells in the form of patterns of signaling activity. Within these patterns, the distribution of signaling in time and space directs the fate of embryonic cells. Tools to perturb developmental signaling with high resolution in space and time can help reveal how these patterns are decoded to make appropriate fate decisions. In this talk, I will present new optogenetic reagents and an experimental pipeline for creating designer Nodal signaling patterns in live zebrafish embryos. Our improved optoNodal reagents eliminate dark activity and improve response kinetics, without sacrificing dynamic range. We adapted an ultra-widefield microscopy platform for parallel light patterning in up to 36 embryos and demonstrated precise spatial control over Nodal signaling activity and downstream gene expression. Using this system, we demonstrate that patterned Nodal activation can initiate specification and internalization movements of endodermal precursors. Further, we used patterned illumination to generate synthetic signaling patterns in Nodal signaling mutants, rescuing several characteristic developmental defects. This study establishes an experimental toolkit for systematic exploration of Nodal signaling patterns in live embryos.
Physical Activity, Sedentary Behaviour and Brain Health
Marsupial joeys illuminate the onset of neural activity patterns in the developing neocortex
Metabolic-functional coupling of parvalbumin-positive GABAergic interneurons in the injured and epileptic brain
Parvalbumin-positive GABAergic interneurons (PV-INs) provide inhibitory control of excitatory neuron activity, coordinate circuit function, and regulate behavior and cognition. PV-INs are uniquely susceptible to loss and dysfunction in traumatic brain injury (TBI) and epilepsy, but the cause of this susceptibility is unknown. One hypothesis is that PV-INs use specialized metabolic systems to support their high-frequency action potential firing and that metabolic stress disrupts these systems, leading to their dysfunction and loss. Metabolism-based therapies can restore PV-IN function after injury in preclinical TBI models. Based on these findings, we hypothesize that (1) PV-INs are highly metabolically specialized, (2) these specializations are lost after TBI, and (3) restoring PV-IN metabolic specializations can improve PV-IN function as well as TBI-related outcomes. Using novel single-cell approaches, we can now quantify cell-type-specific metabolism in complex tissues to determine whether PV-IN metabolic dysfunction contributes to the pathophysiology of TBI.
Probing neural population dynamics with recurrent neural networks
Large-scale recordings of neural activity are providing new opportunities to study network-level dynamics with unprecedented detail. However, the sheer volume of data and its dynamical complexity are major barriers to uncovering and interpreting these dynamics. I will present latent factor analysis via dynamical systems, a sequential autoencoding approach that enables inference of dynamics from neuronal population spiking activity on single trials and millisecond timescales. I will also discuss recent adaptations of the method to uncover dynamics from neural activity recorded via two-photon (2P) calcium imaging. Finally, time permitting, I will mention recent efforts to improve the interpretability of deep learning-based dynamical systems models.
Homeostatic Neural Responses to Photic Stimulation
This talk presents findings from open- and closed-loop neural stimulation experiments using EEG. Fixed-frequency (10 Hz) stimulation revealed cross-cortical alpha power suppression post-stimulation, modulated by the difference between the individual's alpha frequency and the stimulation frequency. Closed-loop stimulation demonstrated phase-dependent effects: trough stimulation enhanced lower alpha activity, while peak stimulation suppressed high alpha to beta activity. These findings provide evidence for homeostatic mechanisms in the brain's response to photic stimulation, with implications for neuromodulation applications.
Modelling the fruit fly brain and body
Through recent advances in microscopy, we now have an unprecedented view of the brain and body of the fruit fly Drosophila melanogaster. We now know the connectivity at single neuron resolution across the whole brain. How do we translate these new measurements into a deeper understanding of how the brain processes sensory information and produces behavior? I will describe two computational efforts to model the brain and the body of the fruit fly. First, I will describe a new modeling method which makes highly accurate predictions of neural activity in the fly visual system as measured in the living brain, using only measurements of its connectivity from a dead brain [1], joint work with Jakob Macke. Second, I will describe a whole body physics simulation of the fruit fly which can accurately reproduce its locomotion behaviors, both flight and walking [2], joint work with Google DeepMind.
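A toy version of the connectome-constrained modelling idea is sketched below: take a signed connectivity matrix, assume a shared nonlinearity and time constant, and integrate a rate equation driven by visual input. This is not the published model referenced above; the connectivity, parameters, and stimulus here are made up for illustration.

```python
import numpy as np

def simulate_rates(W, stimulus, tau=0.02, dt=0.001):
    """Integrate dr/dt = (-r + relu(W @ r + stimulus(t))) / tau.

    W        : (n_neurons, n_neurons) signed connectivity (e.g. scaled synapse counts)
    stimulus : (n_steps, n_neurons) external drive to each neuron
    Returns the rate trajectory of shape (n_steps, n_neurons).
    """
    n_steps, n = stimulus.shape
    r = np.zeros(n)
    rates = np.empty((n_steps, n))
    for t in range(n_steps):
        drive = W @ r + stimulus[t]
        r = r + dt * (-r + np.maximum(drive, 0.0)) / tau
        rates[t] = r
    return rates

rng = np.random.default_rng(0)
n = 100
W = rng.standard_normal((n, n)) * (rng.random((n, n)) < 0.1) * 0.1  # sparse, weak coupling
stim = np.zeros((500, n))
stim[100:300, :10] = 1.0     # transient input to ten "input" neurons
rates = simulate_rates(W, stim)
print("peak network response:", rates.max())
```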
The Role of Cognitive Appraisal in the Relationship between Personality and Emotional Reactivity
Emotion is defined as a rapid psychological process involving experiential, expressive and physiological responses. These emerge following an appraisal process involving cognitive evaluations of the environment that assess its relevance, implications, coping potential, and normative significance. It has been suggested that changes in appraisal processes lead to changes in the nature of the resulting emotion. At the same time, personality has been shown to act as a predisposition to feel certain emotions more frequently, yet the personality-appraisal-emotional response chain is rarely investigated in full. The present project therefore sought to determine the extent to which personality traits influence specific appraisals, which in turn shape the subsequent emotional reaction, through a systematic analysis of the links between personality traits from different current models, specific appraisals, and emotional response patterns at the experiential, expressive, and physiological levels. Major results include the coherent clustering of emotion components; the centrality, in context, of the pleasantness, coping potential and consequences appraisals; and a differentiated mediating role of cognitive appraisal in the relation between personality and the intensity, duration, and autonomic arousal of an emotional state (for example, Extraversion-pleasantness-experience and Neuroticism-powerlessness-arousal). Elucidating these relationships deepens our understanding of individual differences in emotional reactivity and identifies routes of action on appraisal processes to modify upcoming adverse emotional responses, with a broader societal impact on clinical and non-clinical populations.
Combined electrophysiological and optical recording of multi-scale neural circuit dynamics
This webinar will showcase new approaches for electrophysiological recordings using our silicon neural probes and surface arrays combined with diverse optical methods such as wide-field or 2-photon imaging, fiber photometry, and optogenetic perturbations in awake, behaving mice. Multi-modal recording of single units and local field potentials across cortex, hippocampus and thalamus, alongside calcium activity via GCaMP6F in cortical neurons of triple-transgenic animals or in hippocampal astrocytes via viral transduction, is brought to bear to reveal hitherto inaccessible and under-appreciated aspects of coordinated dynamics in the brain.
Roles of inhibition in stabilizing and shaping the response of cortical networks
Inhibition has long been thought to stabilize the activity of cortical networks at low rates and to shape significantly their response to sensory inputs. In this talk, I will describe three recent collaborative projects that shed light on these issues. (1) I will show how optogenetic excitation of inhibitory neurons is consistent with cortex being inhibition stabilized even in the absence of sensory inputs, and how these data can constrain the coupling strengths of E-I cortical network models. (2) Recent analysis of the effects of optogenetic excitation of pyramidal cells in V1 of mice and monkeys shows that in some cases this optogenetic input reshuffles the firing rates of neurons of the network, leaving the distribution of rates unaffected. I will show how this surprising effect can be reproduced in sufficiently strongly coupled E-I networks. (3) Another puzzle has been to understand the respective roles of different inhibitory subtypes in network stabilization. Recent data reveal a novel, state-dependent, paradoxical effect of weakening AMPAR-mediated synaptic currents onto SST cells. Mathematical analysis of a network model with multiple inhibitory cell types shows that this effect tells us under which conditions SST cells are required for network stabilization.
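A minimal two-population rate model illustrates the inhibition-stabilized regime referred to in (1): when recurrent excitation is strong, adding external drive to the inhibitory population paradoxically lowers its steady-state rate. This is a sketch with made-up parameters, not the network models presented in the talk.

```python
import numpy as np

def steady_state(w_ee, w_ei, w_ie, w_ii, ext_e, ext_i, dt=0.001, steps=20000):
    """Relax a two-population linear-threshold rate model to steady state."""
    re, ri = 1.0, 1.0
    tau_e, tau_i = 0.02, 0.01
    for _ in range(steps):
        de = -re + max(w_ee * re - w_ei * ri + ext_e, 0.0)
        di = -ri + max(w_ie * re - w_ii * ri + ext_i, 0.0)
        re += dt * de / tau_e
        ri += dt * di / tau_i
    return re, ri

# Strong recurrent excitation (w_ee > 1) puts the network in the ISN regime.
params = dict(w_ee=2.0, w_ei=2.5, w_ie=2.5, w_ii=2.0)
re0, ri0 = steady_state(**params, ext_e=1.0, ext_i=1.0)
re1, ri1 = steady_state(**params, ext_e=1.0, ext_i=1.1)  # extra drive to inhibition
print(f"inhibitory rate: {ri0:.3f} -> {ri1:.3f} (paradoxical decrease)")
```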
Stability of visual processing in passive and active vision
The visual system faces a dual challenge. On the one hand, features of the natural visual environment should be stably processed - irrespective of ongoing wiring changes, representational drift, and behavior. On the other hand, eye, head, and body motion require a robust integration of pose and gaze shifts in visual computations for a stable perception of the world. We address these dimensions of stable visual processing by studying the circuit mechanism of long-term representational stability, focusing on the role of plasticity, network structure, experience, and behavioral state while recording large-scale neuronal activity with miniature two-photon microscopy.
Activity-Dependent Gene Regulation in Health and Disease
In the last of this year’s Brain Prize webinars, Elizabeth Pollina (Washington University, USA), Eric Nestler (Icahn School of Medicine Mount Sinai, USA) and Michelle Monje (Stanford University, USA) will present their work on activity-dependent gene regulation in health and disease. Each speaker will present for 25 minutes, and the webinar will conclude with an open discussion. The webinar will be moderated by the winners of the 2023 Brain Prize, Michael Greenberg, Erin Schuman and Christine Holt.
Executive functions in the brain of deaf individuals – sensory and language effects
Executive functions are cognitive processes that allow us to plan, monitor and execute our goals. Using fMRI, we investigated how early deafness influences crossmodal plasticity and the organisation of executive functions in the adult human brain. Results from a range of visual executive function tasks (working memory, task switching, planning, inhibition) show that deaf individuals specifically recruit superior temporal “auditory” regions during task switching. Neural activity in auditory regions predicts behavioural performance during task switching in deaf individuals, highlighting the functional relevance of the observed cortical reorganisation. Furthermore, language grammatical skills were correlated with the level of activation and functional connectivity of fronto-parietal networks. Together, these findings show the interplay between sensory and language experience in the organisation of executive processing in the brain.
Investigating activity-dependent processes during cortical neuronal assembly in development and disease
Epileptic micronetworks and their clinical relevance
A core aspect of clinical epileptology revolves around relating epileptic field potentials to underlying neural sources (e.g. an “epileptogenic focus”). Yet how neural population activity relates to epileptic field potentials, and ultimately to clinical phenomenology, remains far from understood. After a brief overview of this topic, this seminar will focus on unpublished work, with an emphasis on seizure-related focal spreading depression. The presented results will include hippocampal and neocortical chronic in vivo two-photon population imaging and local field potential recordings of epileptic micronetworks in mice, in the context of viral encephalitis or optogenetic stimulation. The findings are corroborated by invasive depth electrode recordings (macroelectrodes and BF microwires) in epilepsy patients during pre-surgical evaluation. The presented work carries general implications for clinical epileptology and basic epilepsy research.
Brain-heart interactions at the edges of consciousness
Various clinical cases have provided evidence linking cardiovascular, neurological, and psychiatric disorders to changes in brain-heart interactions. Our recent experimental evidence in patients with disorders of consciousness revealed that observing brain-heart interactions helps to detect residual consciousness, even in patients with no behavioral signs of consciousness. These findings support hypotheses suggesting that visceral activity is involved in the neurobiology of consciousness and add to existing evidence in healthy participants, in whom neural responses to heartbeats reveal perceptual and self-consciousness. Furthermore, the presence of non-linear, complex, and bidirectional communication between brain and heartbeat dynamics can provide further insights into the physiological state of the patient following severe brain injury. These developments in methodologies to analyze brain-heart interactions open new avenues for understanding neural functioning at a large-scale level, uncovering that peripheral bodily activity can influence brain homeostatic processes, cognition, and behavior.
Learning produces a hippocampal cognitive map in the form of an orthogonalized state machine
Cognitive maps confer animals with flexible intelligence by representing spatial, temporal, and abstract relationships that can be used to shape thought, planning, and behavior. Cognitive maps have been observed in the hippocampus, but their algorithmic form and the processes by which they are learned remain obscure. Here, we employed large-scale, longitudinal two-photon calcium imaging to record activity from thousands of neurons in the CA1 region of the hippocampus while mice learned to efficiently collect rewards from two subtly different versions of linear tracks in virtual reality. The results provide a detailed view of the formation of a cognitive map in the hippocampus. Throughout learning, both the animal behavior and hippocampal neural activity progressed through multiple intermediate stages, gradually revealing improved task representation that mirrored improved behavioral efficiency. The learning process led to progressive decorrelations in initially similar hippocampal neural activity within and across tracks, ultimately resulting in orthogonalized representations resembling a state machine capturing the inherent structure of the task. We show that a Hidden Markov Model (HMM) and a biologically plausible recurrent neural network trained using Hebbian learning can both capture core aspects of the learning dynamics and the orthogonalized representational structure in neural activity. In contrast, we show that gradient-based learning of sequence models such as Long Short-Term Memory networks (LSTMs) and Transformers does not naturally produce such orthogonalized representations. We further demonstrate that mice exhibited adaptive behavior in novel task settings, with neural activity reflecting flexible deployment of the state machine. These findings shed light on the mathematical form of cognitive maps, the learning rules that sculpt them, and the algorithms that promote adaptive behavior in animals. The work thus charts a course toward a deeper understanding of biological intelligence and offers insights toward developing more robust learning algorithms in artificial intelligence.
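One simple way to quantify the progressive decorrelation described above is to correlate position-binned population vectors between the two tracks and watch cross-track similarity fall with learning. The sketch below is illustrative only and is not the paper's analysis pipeline; the data are synthetic.

```python
import numpy as np

def cross_track_similarity(track_a, track_b):
    """Mean correlation of position-binned population vectors across two tracks.

    track_a, track_b : (n_position_bins, n_neurons) trial-averaged activity maps.
    """
    sims = []
    for pa, pb in zip(track_a, track_b):
        pa = pa - pa.mean()
        pb = pb - pb.mean()
        denom = np.linalg.norm(pa) * np.linalg.norm(pb)
        if denom > 0:
            sims.append(float(pa @ pb) / denom)
    return float(np.mean(sims))

rng = np.random.default_rng(0)
shared = rng.standard_normal((50, 300))                                # 50 bins, 300 cells
early_a, early_b = shared + 0.2 * rng.standard_normal((2, 50, 300))    # nearly identical maps early on
late_a = rng.standard_normal((50, 300))                                # orthogonalized maps after learning
late_b = rng.standard_normal((50, 300))
print("early cross-track similarity:", round(cross_track_similarity(early_a, early_b), 2))
print("late cross-track similarity: ", round(cross_track_similarity(late_a, late_b), 2))
```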
Unifying the mechanisms of hippocampal episodic memory and prefrontal working memory
Remembering events in the past is crucial to intelligent behaviour. Flexible memory retrieval, beyond simple recall, requires a model of how events relate to one another. Two key brain systems are implicated in this process: the hippocampal episodic memory (EM) system and the prefrontal working memory (WM) system. While an understanding of the hippocampal system, from computation to algorithm and representation, is emerging, less is understood about how the prefrontal WM system can give rise to flexible computations beyond simple memory retrieval, and even less is understood about how the two systems relate to each other. Here we develop a mathematical theory relating the algorithms and representations of EM and WM by showing a duality between storing memories in synapses versus neural activity. In doing so, we develop a formal theory of the algorithm and representation of prefrontal WM as structured, and controllable, neural subspaces (termed activity slots). By building models using this formalism, we elucidate the differences, similarities, and trade-offs between the hippocampal and prefrontal algorithms. Lastly, we show that several prefrontal representations in tasks ranging from list learning to cue-dependent recall are unified as controllable activity slots. Our results unify frontal and temporal representations of memory, and offer a new basis for understanding the prefrontal representation of WM.
Trends in NeuroAI - Meta's MEG-to-image reconstruction
Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri).
Title: Brain-optimized inference improves reconstructions of fMRI brain activity
Abstract: The release of large datasets and developments in AI have led to dramatic improvements in decoding methods that reconstruct seen images from human brain activity. We evaluate the prospect of further improving recent decoding methods by optimizing for consistency between reconstructions and brain activity during inference. We sample seed reconstructions from a base decoding method, then iteratively refine these reconstructions using a brain-optimized encoding model that maps images to brain activity. At each iteration, we sample a small library of images from an image distribution (a diffusion model) conditioned on a seed reconstruction from the previous iteration. We select those that best approximate the measured brain activity when passed through our encoding model, and use these images for structural guidance during the generation of the small library in the next iteration. We reduce the stochasticity of the image distribution at each iteration, and stop when a criterion on the "width" of the image distribution is met. We show that when this process is applied to recent decoding methods, it outperforms the base decoding method as measured by human raters, a variety of image feature metrics, and alignment to brain activity. These results demonstrate that reconstruction quality can be significantly improved by explicitly aligning decoding distributions to brain activity distributions, even when the seed reconstruction is output from a state-of-the-art decoding algorithm. Interestingly, the rate of refinement varies systematically across visual cortex, with earlier visual areas generally converging more slowly and preferring narrower image distributions, relative to higher-level brain areas. Brain-optimized inference thus offers a succinct and novel method for improving reconstructions and exploring the diversity of representations across visual brain areas.
Speaker: Reese Kneeland is a Ph.D. student at the University of Minnesota working in the Naselaris lab.
Paper link: https://arxiv.org/abs/2312.07705
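Abstracting away the specific models, the refinement loop described in the abstract can be sketched roughly as below. Function names such as generate_conditioned and encode_to_brain are placeholders, not the paper's API, and the dummy stand-ins exist only to make the sketch runnable.

```python
import numpy as np

def brain_optimized_inference(seed_image, measured_activity,
                              generate_conditioned, encode_to_brain,
                              n_iters=10, library_size=32,
                              noise_schedule=np.linspace(1.0, 0.1, 10)):
    """Iteratively refine a seed reconstruction against measured brain activity.

    generate_conditioned(seed, noise, n) -> list of candidate images
    encode_to_brain(image)               -> predicted voxel pattern (1D array)
    """
    current = seed_image
    for it in range(n_iters):
        candidates = generate_conditioned(current, noise_schedule[it], library_size)
        # Score each candidate by correlation between predicted and measured activity.
        scores = [np.corrcoef(encode_to_brain(img), measured_activity)[0, 1]
                  for img in candidates]
        current = candidates[int(np.argmax(scores))]   # best candidate seeds the next iteration
    return current

# Dummy stand-ins just to exercise the loop shape.
rng = np.random.default_rng(0)
dummy_generate = lambda seed, noise, n: [seed + noise * rng.standard_normal(seed.shape) for _ in range(n)]
dummy_encode = lambda img: img.ravel()[:100]
target = rng.standard_normal(100)
out = brain_optimized_inference(rng.standard_normal((10, 10)), target, dummy_generate, dummy_encode)
```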
Characterising Representations of Goal Obstructiveness and Uncertainty Across Behavior, Physiology, and Brain Activity Through a Video Game Paradigm
The nature of emotions and their neural underpinnings remain debated. Appraisal theories such as the component process model propose that the perception and evaluation of events (appraisal) is the key to eliciting the range of emotions we experience. Here we study whether the framework of appraisal theories provides a clearer account for the differentiation of emotional episodes and their functional organisation in the brain. We developed a stealth game to manipulate appraisals in a systematic yet immersive way. The interactive nature of video games heightens self-relevance through the experience of goal-directed action or reaction, evoking strong emotions. We show that our manipulations led to changes in behaviour, physiology and brain activations.
Astrocyte reprogramming / activation and brain homeostasis
Astrocytes are multifunctional glial cells implicated in neurogenesis and synaptogenesis, supporting and fine-tuning neuronal activity and maintaining brain homeostasis by controlling blood-brain barrier permeability. In recent years, a number of studies have shown that astrocytes can also be converted into neurons if they are forced to express neurogenic transcription factors or miRNAs. Direct astrocytic reprogramming to induced neurons (iNs) is a powerful approach for manipulating cell fate, as it takes advantage of the intrinsic neural stem cell (NSC) potential of brain-resident reactive astrocytes. To this end, astrocytic cell fate conversion to iNs has been well established in vitro and in vivo using combinations of transcription factors (TFs) or chemical cocktails. Changing the expression of lineage-specific TFs is accompanied by changes in the expression of miRNAs, which post-transcriptionally modulate large numbers of neurogenesis-promoting factors and have therefore been used, in addition or as an alternative to TFs, to instruct direct neuronal reprogramming. The neurogenic miRNA miR-124 has been employed in direct reprogramming protocols together with neurogenic TFs and other miRNAs to enhance neurogenic conversion by suppressing multiple non-neuronal targets. In our group we aimed to investigate whether miR-124 is sufficient on its own to drive direct reprogramming of astrocytes to iNs both in vitro and in vivo, and to elucidate its independent mechanism of reprogramming action. Our in vitro data indicate that miR-124 is a potent driver of the reprogramming switch of astrocytes towards an immature neuronal fate. RNA-seq analysis of the molecular pathways triggered by miR-124 revealed that miR-124 is sufficient to instruct reprogramming of cortical astrocytes to immature iNs in vitro by down-regulating genes with important regulatory roles in astrocytic function. Among these, the RNA-binding protein Zfp36l1, implicated in ARE-mediated mRNA decay, was found to be a direct target of miR-124; Zfp36l1 in turn targets neuronal-specific proteins participating in cortical development, which become de-repressed in miR-124-iNs. Furthermore, miR-124 is able to guide direct neuronal reprogramming of reactive astrocytes to iNs of cortical identity following cortical trauma, a novel finding confirming its robust reprogramming action within the cortical microenvironment under neuroinflammatory conditions. In parallel to their reprogramming potential, astrocytes also participate in the maintenance of blood-brain barrier integrity, which ensures the physiological functioning of the central nervous system and is compromised in several neurodegenerative diseases, contributing to their pathology. To study in real time the dynamic physical interactions of astrocytes with brain vasculature under homeostatic and pathological conditions, we performed 2-photon intravital brain imaging in a mouse model of systemic neuroinflammation, known to trigger astrogliosis and microgliosis and to evoke changes in astrocytic contact with brain vasculature. Our in vivo findings indicate that following neuroinflammation the endfeet of activated perivascular astrocytes lose their close proximity and physiological cross-talk with the vasculature; however, this is compensated by cross-talk between astrocytes and activated microglia, safeguarding blood vessel coverage and maintaining blood-brain barrier integrity.
Neuronal population interactions between brain areas
Most brain functions involve interactions among multiple, distinct areas or nuclei. Yet our understanding of how populations of neurons in interconnected brain areas communicate is in its infancy. Using a population approach, we found that interactions between early visual cortical areas (V1 and V2) occur through a low-dimensional bottleneck, termed a communication subspace. In this talk, I will focus on the statistical methods we have developed for studying interactions between brain areas. First, I will describe Delayed Latents Across Groups (DLAG), designed to disentangle concurrent, bi-directional (i.e., feedforward and feedback) interactions between areas. Second, I will describe an extension of DLAG applicable to three or more areas, and demonstrate its utility for studying simultaneous Neuropixels recordings in areas V1, V2, and V3. Our results provide a framework for understanding how neuronal population activity is gated and selectively routed across brain areas.
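The communication-subspace finding mentioned at the start can be illustrated with reduced-rank regression from source- to target-area activity: a generic sketch in the spirit of that line of work, not the DLAG method described in the talk. The data below are synthetic, with three shared latent dimensions standing in for the low-dimensional bottleneck.

```python
import numpy as np

def reduced_rank_regression(X, Y, rank):
    """Predict target-area activity Y from source-area activity X with a rank constraint.

    X : (n_trials, n_source_neurons), Y : (n_trials, n_target_neurons)
    Returns the rank-constrained weight matrix B such that Y ~= X @ B.
    """
    B_ols = np.linalg.pinv(X) @ Y                 # full-rank least-squares solution
    U, s, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
    V = Vt[:rank].T                               # top predictive dimensions of Y
    return B_ols @ V @ V.T                        # project predictions onto that subspace

rng = np.random.default_rng(0)
latent = rng.standard_normal((1000, 3))           # 3 shared (communication) dimensions
X = latent @ rng.standard_normal((3, 40)) + 0.5 * rng.standard_normal((1000, 40))   # "V1"
Y = latent @ rng.standard_normal((3, 30)) + 0.5 * rng.standard_normal((1000, 30))   # "V2"
for r in (1, 3, 10):
    B = reduced_rank_regression(X, Y, r)
    resid = Y - X @ B
    print(f"rank {r}: residual variance {resid.var():.3f}")
```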
Trends in NeuroAI - Meta's MEG-to-image reconstruction
Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri). This will be an informal journal club presentation; we do not have an author of the paper joining us.
Title: Brain decoding: toward real-time reconstruction of visual perception
Abstract: In the past five years, the use of generative and foundational AI systems has greatly improved the decoding of brain activity. Visual perception, in particular, can now be decoded from functional Magnetic Resonance Imaging (fMRI) with remarkable fidelity. This neuroimaging technique, however, suffers from a limited temporal resolution (≈0.5 Hz) and thus fundamentally constrains its real-time usage. Here, we propose an alternative approach based on magnetoencephalography (MEG), a neuroimaging device capable of measuring brain activity with high temporal resolution (≈5,000 Hz). For this, we develop an MEG decoding model trained with both contrastive and regression objectives and consisting of three modules: i) pretrained embeddings obtained from the image, ii) an MEG module trained end-to-end and iii) a pretrained image generator. Our results are threefold: First, our MEG decoder shows a 7X improvement of image-retrieval over classic linear decoders. Second, late brain responses to images are best decoded with DINOv2, a recent foundational image model. Third, image retrievals and generations both suggest that MEG signals primarily contain high-level visual features, whereas the same approach applied to 7T fMRI also recovers low-level features. Overall, these results provide an important step towards the decoding - in real time - of the visual processes continuously unfolding within the human brain.
Speaker: Dr. Paul Scotti (Stability AI, MedARC)
Paper link: https://arxiv.org/abs/2310.19812
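Retrieval performance of the kind reported (the 7X improvement over linear decoders) is typically quantified along the lines sketched below: rank all candidate image embeddings by similarity to the embedding predicted from MEG and check whether the presented image lands in the top k. This is an illustrative sketch; the embedding dimensions and noise levels are assumptions, not the paper's code.

```python
import numpy as np

def retrieval_accuracy(pred_embeddings, true_embeddings, top_k=5):
    """Top-k retrieval: does the predicted embedding rank its own image in the top k?

    pred_embeddings : (n_trials, d) embeddings predicted from MEG
    true_embeddings : (n_trials, d) embeddings of the actually presented images
    """
    pred = pred_embeddings / np.linalg.norm(pred_embeddings, axis=1, keepdims=True)
    true = true_embeddings / np.linalg.norm(true_embeddings, axis=1, keepdims=True)
    sims = pred @ true.T                                  # cosine similarity matrix
    ranks = (-sims).argsort(axis=1)                       # best match first
    hits = [i in ranks[i, :top_k] for i in range(len(ranks))]
    return float(np.mean(hits))

rng = np.random.default_rng(0)
true = rng.standard_normal((200, 768))                    # e.g. image-model-sized embeddings
pred = true + 2.0 * rng.standard_normal((200, 768))       # noisy predictions from MEG
print("top-5 retrieval accuracy:", retrieval_accuracy(pred, true))
print("chance level:", 5 / 200)
```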
Inducing short to medium neuroplastic effects with Transcranial Ultrasound Stimulation
Sound waves can be used to modify brain activity safely and transiently with unprecedented precision even deep in the brain - unlike traditional brain stimulation methods. In a series of studies in humans and non-human primates, I will show that Transcranial Ultrasound Stimulation (TUS) can have medium- to long-lasting effects. Multiple read-outs allow us to conclude that TUS can perturb neuronal tissues up to 2h after intervention, including changes in local and distributed brain network configurations, behavioural changes, task-related neuronal changes and chemical changes in the sonicated focal volume. Combined with multiple neuroimaging techniques (resting state functional Magnetic Resonance Imaging [rsfMRI], Spectroscopy [MRS] and task-related fMRI changes), this talk will focus on recent human TUS studies.
Event-related frequency adjustment (ERFA): A methodology for investigating neural entrainment
Neural entrainment has become a phenomenon of exceptional interest to neuroscience, given its involvement in rhythm perception, production, and overt synchronized behavior. Yet, traditional methods fail to quantify neural entrainment due to a misalignment with its fundamental definition (e.g., see Novembre and Iannetti, 2018; Rajendran and Schnupp, 2019). The definition of entrainment assumes that endogenous oscillatory brain activity undergoes dynamic frequency adjustments to synchronize with environmental rhythms (Lakatos et al., 2019). Following this definition, we recently developed a method sensitive to this process. Our aim was to isolate from the electroencephalographic (EEG) signal an oscillatory component that is attuned to the frequency of a rhythmic stimulation, hypothesizing that the oscillation would adaptively speed up and slow down to achieve stable synchronization over time. To induce and measure these adaptive changes in a controlled fashion, we developed the event-related frequency adjustment (ERFA) paradigm (Rosso et al., 2023). A total of twenty healthy participants took part in our study. They were instructed to tap their finger synchronously with an isochronous auditory metronome, which was unpredictably perturbed by phase-shifts and tempo-changes in both positive and negative directions across different experimental conditions. EEG was recorded during the task, and ERFA responses were quantified as changes in the instantaneous frequency of the entrained component. Our results indicate that ERFAs track the stimulus dynamics in accordance with the perturbation type and direction, preferentially for a sensorimotor component. The clear and consistent patterns confirm that our method is sensitive to the process of frequency adjustment that defines neural entrainment. In this Virtual Journal Club, the discussion of our findings will be complemented by methodological insights beneficial to researchers in the fields of rhythm perception and production, as well as timing in general. We discuss the dos and don’ts of using instantaneous frequency to quantify oscillatory dynamics, the advantages of adopting a multivariate approach to source separation, the robustness against the confound of responses evoked by periodic stimulation, and provide an overview of domains and concrete examples where the methodological framework can be applied.
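As a minimal illustration of the kind of measure ERFA relies on, the sketch below computes the instantaneous frequency of a narrow-band component via the analytic (Hilbert) signal and shows it tracking a small tempo change. It is not the authors' pipeline, which involves multivariate source separation and perturbation-locked analysis; the sampling rate, frequencies, and signal here are illustrative assumptions.

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_frequency(component, fs):
    """Instantaneous frequency (Hz) of a narrow-band component via the
    analytic signal; assumes the input has already been band-limited
    around the stimulation rate (e.g., by a source-separation step)."""
    phase = np.unwrap(np.angle(hilbert(component)))
    return np.diff(phase) * fs / (2 * np.pi)

# Toy example: a 2 Hz oscillation that speeds up to 2.2 Hz halfway through,
# mimicking a frequency adjustment after a tempo change in the metronome.
fs = 500
t = np.arange(0, 20, 1 / fs)
freq = np.where(t < 10, 2.0, 2.2)
component = np.sin(2 * np.pi * np.cumsum(freq) / fs)
inst_f = instantaneous_frequency(component, fs)      # ~2.0 then ~2.2 Hz
```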
Neural Mechanisms of Subsecond Temporal Encoding in Primary Visual Cortex
Subsecond timing underlies nearly all sensory and motor activities across species and is critical to survival. While subsecond temporal information has been found across cortical and subcortical regions, it is unclear whether it is generated locally and intrinsically or whether it is a readout of a centralized clock-like mechanism. Indeed, mechanisms of subsecond timing at the circuit level are largely obscure. Primary sensory areas are well suited to address these questions as they have early access to sensory information and apply minimal processing to it: if temporal information is found in these regions, it is likely to be generated intrinsically and locally. We test this hypothesis by training mice to perform an audio-visual temporal pattern sensory discrimination task while using 2-photon calcium imaging, a technique capable of recording population-level activity at single-cell resolution, to record activity in primary visual cortex (V1). We found significant changes in network dynamics as mice learned the task, progressing from naive to intermediate to expert performance. Changes in network dynamics and behavioral performance are well accounted for by an intrinsic model of timing in which the trajectory of the network through high-dimensional state space represents temporal sensory information. Conversely, while we found evidence for other temporal encoding models, such as oscillatory activity, they did not account for the improvement in performance and were in fact correlated with the intrinsic model itself. These results provide insight into how subsecond temporal information is encoded mechanistically at the circuit level.
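A simple way to probe the population-clock idea described here is to ask whether elapsed time within a trial can be decoded from the instantaneous population state. The sketch below does this on synthetic "time field" data with a cross-validated ridge regression; the data, cell counts, and decoder choice are illustrative assumptions rather than the study's actual analysis.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

# Synthetic "population clock": each cell has a Gaussian time field tiling
# the trial, so the population trajectory encodes elapsed time. A
# cross-validated decoder recovering time from single frames is the basic
# signature of the intrinsic temporal code this abstract appeals to.
rng = np.random.default_rng(0)
n_trials, n_frames, n_cells = 60, 30, 200
centers = rng.uniform(0, n_frames, n_cells)
t = np.arange(n_frames)
template = np.exp(-(t[:, None] - centers[None, :]) ** 2 / 20.0)
X = np.concatenate([template + 0.3 * rng.normal(size=template.shape)
                    for _ in range(n_trials)])       # frames x cells
y = np.tile(t, n_trials)                             # elapsed time per frame
print(cross_val_score(Ridge(alpha=1.0), X, y, cv=5).mean())
```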
Bio-realistic multiscale modeling of cortical circuits
A central question in neuroscience is how the structure of brain circuits determines their activity and function. To explore this systematically, we developed a 230,000-neuron model of mouse primary visual cortex (area V1). The model integrates a broad array of experimental data: the distribution and morpho-electric properties of different neuron types in V1.
Prefrontal mechanisms involved in learning distractor-resistant working memory in a dual task
Working memory (WM) is a cognitive function that allows the short-term maintenance and manipulation of information when no longer accessible to the senses. It relies on temporarily storing stimulus features in the activity of neuronal populations. To protect these dynamics from distraction, it has been proposed that pre- and post-distraction population activity decomposes into orthogonal subspaces. If orthogonalization is necessary to avoid WM distraction, it should emerge as performance in the task improves. We sought evidence of learned WM orthogonalization and its underlying mechanisms by analyzing calcium imaging data from the prelimbic (PrL) and anterior cingulate (ACC) cortices of mice as they learned to perform an olfactory dual task. The dual task combines an outer Delayed Paired-Association task (DPA) with an inner Go-NoGo task. We examined how neuronal activity reflected the process of protecting the DPA sample information against Go/NoGo distractors. As mice learned the task, we measured the overlap of neural activity with the low-dimensional subspaces that encode the sample or distractor odors. Early in the training, pre-distraction activity overlapped with both sample and distractor subspaces. Later in the training, pre-distraction activity was strictly confined to the sample subspace, resulting in a more robust sample code. To gain mechanistic insight into how these low-dimensional WM representations evolve with learning, we built a recurrent spiking network model of excitatory and inhibitory neurons with low-rank connections. The model links learning to (1) the orthogonalization of sample and distractor WM subspaces and (2) the orthogonalization of each subspace with respect to irrelevant inputs. We validated (1) by measuring the angular distance between the sample and distractor subspaces through learning in the data. Prediction (2) was validated in PrL through the photoinhibition of ACC to PrL inputs, which induced early-training neural dynamics in well-trained animals. In the model, learning drives the network from a double-well attractor toward a more continuous ring attractor regime. We tested for signatures of this dynamical evolution in the experimental data by estimating the energy landscape of the dynamics on a one-dimensional ring. In sum, our study defines network dynamics underlying the process of learning to shield WM representations from distracting tasks.
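One standard way to quantify the orthogonalization described here is to estimate the sample- and distractor-coding subspaces from condition-averaged population activity and measure the principal angles between them. The sketch below does this on synthetic data; the subspace-estimation recipe, dimensions, and data are illustrative assumptions, not the authors' exact analysis.

```python
import numpy as np
from scipy.linalg import subspace_angles

def encoding_subspace(X, labels, n_dims=1):
    """Subspace spanned by condition-averaged activity differences
    (X: trials x neurons; labels: stimulus condition per trial)."""
    means = np.stack([X[labels == c].mean(0) for c in np.unique(labels)])
    means -= means.mean(0, keepdims=True)
    _, _, Vt = np.linalg.svd(means, full_matrices=False)
    return Vt[:n_dims].T                       # neurons x n_dims basis

# Synthetic example: sample and distractor odors drive two random directions
# in a 50-neuron population; the principal angle between the two coding
# subspaces is near 90 degrees, i.e. an orthogonal (well-protected) code.
rng = np.random.default_rng(0)
n_trials, n_neurons = 400, 50
sample_id = rng.integers(0, 2, n_trials)
distractor_id = rng.integers(0, 2, n_trials)
axes = rng.normal(size=(2, n_neurons))
X = (sample_id[:, None] * axes[0] + distractor_id[:, None] * axes[1]
     + 0.5 * rng.normal(size=(n_trials, n_neurons)))
angle = subspace_angles(encoding_subspace(X, sample_id),
                        encoding_subspace(X, distractor_id))
print(np.rad2deg(angle))
```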
Virtual Brain Twins for Brain Medicine and Epilepsy
Over the past decade we have demonstrated that the fusion of subject-specific structural information of the human brain with mathematical dynamic models allows building biologically realistic brain network models, which have predictive value beyond the explanatory power of either approach alone. The network nodes hold neural population models, which are derived using mean-field techniques from statistical physics expressing ensemble activity via collective variables. Our hybrid approach fuses data-driven with forward-modeling-based techniques and has been successfully applied to explain healthy brain function and to clinical translation, including aging, stroke and epilepsy. Here we illustrate the workflow using the example of epilepsy: we reconstruct personalized connectivity matrices of human epileptic patients using Diffusion Tensor Imaging (DTI). Subsets of brain regions generating seizures in patients with refractory partial epilepsy are referred to as the epileptogenic zone (EZ). During a seizure, paroxysmal activity is not restricted to the EZ, but may recruit other healthy brain regions and propagate activity through large brain networks. The identification of the EZ is crucial for the success of neurosurgery and presents one of the historically difficult questions in clinical neuroscience. The application of the latest techniques in Bayesian inference and model inversion, in particular Hamiltonian Monte Carlo, allows estimation of the EZ, including estimates of confidence and diagnostics of the performance of the inference. The example of epilepsy nicely underscores the predictive value of personalized large-scale brain network models. The workflow of end-to-end modeling is an integral part of the European neuroinformatics platform EBRAINS and enables neuroscientists worldwide to build and estimate personalized virtual brains.
Movements and engagement during decision-making
When experts are immersed in a task, a natural assumption is that their brains prioritize task-related activity. Accordingly, most efforts to understand neural activity during well-learned tasks focus on cognitive computations and task-related movements. Surprisingly, we observed that during decision-making, the cortex-wide activity of multiple cell types is dominated by movements, especially “uninstructed movements” that are spontaneously expressed. These observations argue that animals execute expert decisions while performing richly varied, uninstructed movements that profoundly shape neural activity. To understand the relationship between these movements and decision-making, we examined the movements more closely. We tested whether the magnitude or the timing of the movements was correlated with decision-making performance. To do this, we partitioned movements into two groups: task-aligned movements that were well predicted by task events (such as the onset of the sensory stimulus or choice) and task-independent movement (TIM) that occurred independently of task events. TIM had a reliable, inverse correlation with performance in head-restrained mice and freely moving rats. This hinted that the timing of spontaneous movements could indicate periods of disengagement. To confirm this, we compared TIM to the latent behavioral states recovered by a hidden Markov model with Bernoulli generalized linear model observations (GLM-HMM) and found these, again, to be inversely correlated. Finally, we examined the impact of these behavioral states on neural activity. Surprisingly, we found that the same movement impacts neural activity more strongly when animals are disengaged. An intriguing possibility is that these larger movement signals disrupt cognitive computations, leading to poor decision-making performance. Taken together, these observations argue that movements and cognition are closely intertwined, even during expert decision-making.
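A simple way to operationalize the task-aligned vs. task-independent split described here is to regress the movement signal on task-event regressors and treat the residual as TIM, whose per-trial magnitude can then be related to performance or to GLM-HMM engagement states. The sketch below illustrates this idea on synthetic data; the regressors, ridge penalty, and movement measure are assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge

def split_movements(movement, task_design):
    """Split a per-trial movement-energy trace into a task-aligned part
    (predictable from task-event regressors) and a residual, task-independent
    part (TIM). movement: trials x timepoints; task_design: trials x regressors
    built from task events (stimulus onset, choice, reward, ...)."""
    model = Ridge(alpha=1.0).fit(task_design, movement)
    task_aligned = model.predict(task_design)
    return task_aligned, movement - task_aligned

# Toy usage; in practice per-trial TIM magnitude would then be correlated
# with performance or compared with GLM-HMM engagement states.
rng = np.random.default_rng(0)
design = rng.normal(size=(300, 6))                   # 300 trials, 6 regressors
movement = design @ rng.normal(size=(6, 100)) + rng.normal(size=(300, 100))
_, tim = split_movements(movement, design)
tim_magnitude = np.abs(tim).mean(axis=1)             # one value per trial
```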
Effect of nutrient sensing by microglia on mouse behavior
Microglia are the brain's resident macrophages, performing multifaceted functions to maintain brain homeostasis across the lifespan. To achieve this, microglia are able to sense a plethora of signals in their local environment. In the lab, we investigate the effect of nutrients on microglia function for several reasons: 1) Microglia express all the cellular machinery required to sense nutrients; 2) Eating habits have changed considerably over the last century, towards diets rich in fats and sugars; 3) This so-called "Western diet" is accompanied by an increase in the occurrence of neuropathologies, in which microglia are known to play a role. In my talk, I will present data showing how variations in nutrient intake alter microglia function, including exacerbation of synaptic pruning, with profound consequences for neuronal activity and behavior. I will also show unpublished data on the mechanisms underlying the effects of nutrients on microglia, notably through the regulation of their metabolic activity.
Identifying mechanisms of cognitive computations from spikes
Higher cortical areas carry a wide range of sensory, cognitive, and motor signals supporting complex goal-directed behavior. These signals mix in heterogeneous responses of single neurons, making it difficult to untangle underlying mechanisms. I will present two approaches for revealing interpretable circuit mechanisms from heterogeneous neural responses during cognitive tasks. First, I will show a flexible nonparametric framework for simultaneously inferring population dynamics on single trials and tuning functions of individual neurons to the latent population state. When applied to recordings from the premotor cortex during decision-making, our approach revealed that populations of neurons encoded the same dynamic variable predicting choices, and heterogeneous firing rates resulted from the diverse tuning of single neurons to this decision variable. The inferred dynamics indicated an attractor mechanism for decision computation. Second, I will show an approach for inferring an interpretable network model of a cognitive task—the latent circuit—from neural response data. We developed a theory to causally validate latent circuit mechanisms via patterned perturbations of activity and connectivity in the high-dimensional network. This work opens new possibilities for deriving testable mechanistic hypotheses from complex neural response data.
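As a crude stand-in for the first approach, the sketch below estimates a one-dimensional latent decision variable by projecting population activity onto a choice-predictive axis and then characterizes each neuron's tuning to that latent by binning. The actual method is nonparametric and infers dynamics and tuning jointly on single trials; everything here (the projection axis, the binning, the toy data) is a simplifying assumption for illustration only.

```python
import numpy as np

def latent_and_tuning(rates, choices, n_bins=10):
    """Estimate a 1-D latent decision variable as the projection of population
    activity onto a choice-predictive axis, then characterize each neuron's
    tuning to that latent by binning.
    rates: trials x time x neurons; choices: binary choice per trial."""
    X = rates.reshape(-1, rates.shape[-1])
    axis = (rates.mean(1)[choices == 1].mean(0)
            - rates.mean(1)[choices == 0].mean(0))
    axis /= np.linalg.norm(axis)
    latent = X @ axis                                  # latent value per frame
    edges = np.quantile(latent, np.linspace(0, 1, n_bins + 1))
    bins = np.clip(np.digitize(latent, edges) - 1, 0, n_bins - 1)
    tuning = np.stack([X[bins == b].mean(0) for b in range(n_bins)])
    return latent.reshape(rates.shape[:2]), tuning     # (trials x time), (bins x neurons)

# Toy usage: a ramping latent whose sign depends on choice drives the neurons.
rng = np.random.default_rng(0)
choices = rng.integers(0, 2, 200)
drive = np.linspace(0, 1, 40)[None, :] * (2 * choices[:, None] - 1)
weights = rng.normal(size=80)
rates = drive[:, :, None] * weights + 0.5 * rng.normal(size=(200, 40, 80))
latent, tuning = latent_and_tuning(rates, choices)
```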
Metabolic Remodelling in the Developing Forebrain in Health and Disease
Little is known about the critical metabolic changes that neural cells have to undergo during development and how temporary shifts in this program can influence brain circuitries and behavior. Motivated by the identification of autism-associated mutations in SLC7A5, a transporter for metabolically essential large neutral amino acids (LNAAs), we utilized metabolomic profiling to investigate the metabolic states of the cerebral cortex across various developmental stages. Our findings reveal significant metabolic restructuring occurring in the forebrain throughout development, with specific groups of metabolites exhibiting stage-specific changes. Through the manipulation of Slc7a5 expression in neural cells, we discovered an interconnected relationship between the metabolism of LNAAs and lipids within the cortex. Neuronal deletion of Slc7a5 influences the postnatal metabolic state, resulting in a shift in lipid metabolism and a cell-type-specific modification in neuronal activity patterns. This ultimately gives rise to enduring circuit dysfunction.
The Brain Prize winner's webinar
In 2023, Michael Greenberg (Harvard, USA), Erin Schuman (Max Planck Institute for Brain Research, Germany) and Christine Holt (University of Cambridge, UK) were awarded The Brain Prize for their pioneering work on activity-dependent gene transcription and local mRNA translation. In this webinar, all 3 Brain Prize winners will present their work. Each speaker will present for 25 minutes and the webinar will conclude with an open discussion. The webinar will be moderated by Kelsey Martin from the Simons Foundation.
Location, time and type of epileptic activity influence how sleep modulates epilepsy
Sleep and epilepsy are tightly interconnected: on the one hand, disturbed sleep is known to negatively affect epilepsy; on the other, epilepsy negatively impacts sleep. In this talk, we leverage the unique opportunity provided by simultaneous stereo-EEG and sleep recordings to disentangle these relationships. We will discuss the latest evidence on whether location (temporal vs. extratemporal), time (early vs. late sleep), and type of epileptic activity (ictal vs. interictal) influence how epileptic activity is modulated by sleep. After this talk, attendees will have a more nuanced understanding of the contributions of location, time and type of epileptic activity in the relationship between sleep and epilepsy.
Rodents to Investigate the Neural Basis of Audiovisual Temporal Processing and Perception
To form a coherent perception of the world around us, we are constantly processing and integrating sensory information from multiple modalities. In fact, when auditory and visual stimuli occur within ~100 ms of each other, individuals tend to perceive the stimuli as a single event, even though they occurred separately. In recent years, our lab and others have developed rat models of audiovisual temporal perception using behavioural tasks such as temporal order judgments (TOJs) and synchrony judgments (SJs). While these rodent models demonstrate metrics that are consistent with humans (e.g., perceived simultaneity, temporal acuity), we have sought to confirm whether rodents demonstrate the hallmarks of audiovisual temporal perception, such as predictable shifts in their perception based on experience and sensitivity to alterations in neurochemistry. Ultimately, our findings indicate that rats serve as an excellent model to study the neural mechanisms underlying audiovisual temporal perception, which to date remain relatively unknown. Using our validated translational audiovisual behavioural tasks, in combination with optogenetics, neuropharmacology and in vivo electrophysiology, we aim to uncover the mechanisms by which inhibitory neurotransmission and top-down circuits finely control one's perception. This research will significantly advance our understanding of the neuronal circuitry underlying audiovisual temporal perception, and will be the first to establish the role of interneurons in regulating the synchronized neural activity that is thought to contribute to the precise binding of audiovisual stimuli.
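For readers unfamiliar with synchrony judgment (SJ) analyses, the sketch below fits a Gaussian-shaped psychometric function to the proportion of "synchronous" reports across audiovisual stimulus-onset asynchronies, yielding a point of subjective simultaneity and a width related to the temporal binding window. The functional form and the data points are illustrative assumptions, not values from the lab's experiments.

```python
import numpy as np
from scipy.optimize import curve_fit

def sj_curve(soa, pss, width, lapse):
    """Probability of a 'synchronous' report as a Gaussian function of the
    audiovisual stimulus-onset asynchrony (SOA, ms): pss is the point of
    subjective simultaneity, width scales with the temporal binding window,
    and lapse bounds the asymptotes."""
    return lapse + (1 - 2 * lapse) * np.exp(-((soa - pss) ** 2) / (2 * width ** 2))

# Fit to made-up proportions of 'synchronous' reports at each SOA.
soas = np.array([-300, -200, -100, -50, 0, 50, 100, 200, 300], dtype=float)
p_sync = np.array([0.05, 0.15, 0.55, 0.80, 0.90, 0.85, 0.65, 0.20, 0.08])
(pss, width, lapse), _ = curve_fit(sj_curve, soas, p_sync, p0=[0.0, 100.0, 0.02])
print(pss, width)
```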
A bottom-up approach to Activity Dependent and Activity Independent Synaptic Turnover
Bernstein Conference 2024
Chronic optogenetic stimulation has the potential to shape the collective activity of neuronal cell cultures
Bernstein Conference 2024
Connectome and task predict neural activity across the fly visual system
Bernstein Conference 2024
Activity-Dependent Network Development in Silico: The Role of Inhibition in Neuronal Growth and Migration
Bernstein Conference 2024
Cortical feedback shapes high order structure of population activity to improve sensory coding
Bernstein Conference 2024
Distinct patterns of default mode network activity differentially represent divergent thinking and mathematical reasoning.
Bernstein Conference 2024
Age Effects on Eye Blink-Related Neural Activity and Functional Connectivity in Driving
Bernstein Conference 2024
Electrogenic Na+/K+-ATPases constrain excitable cell activity and pose additional evolutionary pressure
Bernstein Conference 2024
Endogenous and exogenous brain fluctuations induce and block alpha activity
Bernstein Conference 2024
Exploring behavioral correlations with neuron activity through synaptic plasticity.
Bernstein Conference 2024
Forecasting motor cortex activity with a nonlinear latent dynamical system model
Bernstein Conference 2024
Increase in dimensionality and sparsification of neural activity over development across diverse cortical areas
Bernstein Conference 2024
Integrating activity measurements into connectome-constrained and task-optimized models
Bernstein Conference 2024
Linking Spontaneous Synaptic Activity to Learning
Bernstein Conference 2024
Modulation of Spontaneous Activity Patterns in Developing Sensory Cortices via Inhibition
Bernstein Conference 2024
Presynaptic Activity-dependent calcium dynamics in cytosol & ER, and a brief proposal for a morphodynamic model of growth cone motility
Bernstein Conference 2024
Psychedelic space of neuronal population activity: emerging and disappearing contrastive dimensions
Bernstein Conference 2024
Recognizing relevant information in neural activity
Bernstein Conference 2024
State-dependent population activity, dimensionality and communication in the visual cortex
Bernstein Conference 2024
Synaptic Plasticity Mechanisms Enable Incremental Learning of Spatio-Temporal Activity Patterns
Bernstein Conference 2024
Unified C. elegans Neural Activity and Connectivity Datasets for Building Foundation Models of a Small Nervous System
Bernstein Conference 2024
Activity-dependent dendrite growth through formation and removal of synapses
COSYNE 2022
Action recognition best explains neural activity in cuneate nucleus
COSYNE 2022
Bias-free estimation of information content in temporally sparse neuronal activity
COSYNE 2022
GABAA receptors modulate anxiety-like behavior through the central amygdala area in rats with higher physical activity
FENS Forum 2024
Efficient learning of low dimensional latent dynamics in multiscale spiking and LFP population activity
COSYNE 2022
Emergence of functional circuits in the absence of neural activity
COSYNE 2022
Emergence of modular patterned activity in developing cortex through intracortical network interactions
COSYNE 2022
Environmental Statistics of Temporally Ordered Stimuli Modify Activity in the Primary Visual Cortex
COSYNE 2022
Evolution of neural activity in circuits bridging sensory and abstract knowledge
COSYNE 2022
Flygenvectors: The spatial and temporal structure of neural activity across the fly brain
COSYNE 2022
Impact of single gene mutation on circuit structure and spontaneous activity in the developing cortex
COSYNE 2022
Indirect-projecting striatal neurons constrain timed action via ‘ramping’ activity.
COSYNE 2022
Input-specific regulation of locus coeruleus activity for mouse maternal behavior
COSYNE 2022
Inter-areal patterned microstimulation selectively drives PFC activity and behavior in a memory task
COSYNE 2022
Intrinsic dimension of neural activity: comparing artificial and biological neural networks
Bernstein Conference 2024