Consciousness at the edge of chaos
Over the last 20 years, neuroimaging and electrophysiology techniques have become central to understanding the mechanisms that accompany loss and recovery of consciousness. Much of this research is performed in the context of healthy individuals with neurotypical brain dynamics. Yet, a true understanding of how consciousness emerges from the joint action of neurons has to account for how severely pathological brains, often showing phenotypes typical of unconsciousness, can nonetheless generate a subjective viewpoint. In this presentation, I will start from the context of Disorders of Consciousness and will discuss recent work aimed at finding generalizable signatures of consciousness that are reliable across a spectrum of brain electrophysiological phenotypes, focusing in particular on the notion of edge-of-chaos criticality.
Organization of thalamic networks and mechanisms of dysfunction in schizophrenia and autism
Thalamic networks, at the core of thalamocortical and thalamosubcortical communications, underlie processes of perception, attention, memory, emotions, and the sleep-wake cycle, and are disrupted in mental disorders, including schizophrenia and autism. However, the underlying mechanisms of pathology are unknown. I will present novel evidence on key organizational principles, structural, and molecular features of thalamocortical networks, as well as critical thalamic pathway interactions that are likely affected in disorders. These data can facilitate modeling of typical and abnormal brain function and can provide the foundation to understand heterogeneous disruption of these networks in sleep disorders, attention deficits, and cognitive and affective impairments in schizophrenia and autism, with important implications for the design of targeted therapeutic interventions.
Astrocytes: From Metabolism to Cognition
Different brain cell types exhibit distinct metabolic signatures that link energy economy to cellular function. Astrocytes and neurons, for instance, diverge dramatically in their reliance on glycolysis versus oxidative phosphorylation, underscoring that metabolic fuel efficiency is not uniform across cell types. A key factor shaping this divergence is the structural organization of the mitochondrial respiratory chain into supercomplexes. Specifically, complexes I (CI) and III (CIII) form a CI–CIII supercomplex, but the degree of this assembly varies by cell type. In neurons, CI is predominantly integrated into supercomplexes, resulting in highly efficient mitochondrial respiration and minimal reactive oxygen species (ROS) generation. Conversely, in astrocytes, a larger fraction of CI remains unassembled, freely existing apart from CIII, leading to reduced respiratory efficiency and elevated mitochondrial ROS production. Despite this apparent inefficiency, astrocytes boast a highly adaptable metabolism capable of responding to diverse stressors. Their looser CI–CIII organization allows for flexible ROS signaling, which activates antioxidant programs via transcription factors like Nrf2. This modular architecture enables astrocytes not only to balance energy production but also to support neuronal health and influence complex organismal behaviors.
Low intensity rTMS: age dependent effects, and mechanisms underlying neural plasticity
Neuroplasticity is essential for the establishment and strengthening of neural circuits. Repetitive transcranial magnetic stimulation (rTMS) is commonly used to modulate cortical excitability and shows promise in the treatment of some neurological disorders. Low intensity magnetic stimulation (LI-rTMS), which does not directly elicit action potentials in the stimulated neurons, has also shown some therapeutic effects, and it is important to determine the biological mechanisms underlying the effects of these low intensity magnetic fields, such as would occur in the regions surrounding the central high-intensity focus of rTMS. Our team has used a focal low-intensity (10 mT) magnetic stimulation approach to address some of these questions and to identify cellular mechanisms. I will present several studies from our laboratory, addressing (1) effects of LI-rTMS on neuronal activity and excitability; and (2) neuronal morphology and post-lesion repair. The ensemble of our results indicates that the effects of LI-rTMS depend upon the stimulation pattern, the age of the animal, and the presence of cellular magnetoreceptors.
Development and application of gaze control models for active perception
Gaze shifts in humans serve to direct the high-resolution vision provided by the fovea towards areas in the environment. Gaze can be considered a proxy for attention or an indicator of the relative importance of different parts of the environment. In this talk, we discuss the development of generative models of human gaze in response to visual input. We discuss how such models can be learned, both using supervised learning and using implicit feedback as an agent interacts with the environment, the latter being more plausible in biological agents. We also discuss two ways such models can be used. First, they can be used to improve the performance of artificial autonomous systems, in applications such as autonomous navigation. Second, because these models are contingent on the human’s task, goals, and/or state in the context of the environment, observations of gaze can be used to infer information about user intent. This information can be used to improve human-machine and human-robot interaction, by making interfaces more anticipative. We discuss example applications in gaze-typing, robotic tele-operation and human-robot interaction.
Functional Plasticity in the Language Network – evidence from Neuroimaging and Neurostimulation
Efficient cognition requires flexible interactions between distributed neural networks in the human brain. These networks adapt to challenges by flexibly recruiting different regions and connections. In this talk, I will discuss how we study functional network plasticity and reorganization with combined neurostimulation and neuroimaging across the adult life span. I will argue that short-term plasticity enables flexible adaptation to challenges, via functional reorganization. My key hypothesis is that disruption of higher-level cognitive functions such as language can be compensated for by the recruitment of domain-general networks in our brain. Examples from healthy young brains illustrate how neurostimulation can be used to temporarily interfere with efficient processing, probing short-term network plasticity at the systems level. Examples from people with dyslexia help to better understand network disorders in the language domain and outline the potential of facilitatory neurostimulation for treatment. I will also discuss examples from aging brains where plasticity helps to compensate for loss of function. Finally, examples from lesioned brains after stroke provide insight into the brain’s potential for long-term reorganization and recovery of function. Collectively, these results challenge the view of a modular organization of the human brain and argue for a flexible redistribution of function via systems plasticity.
Active Predictive Coding and the Primacy of Actions in Natural and Artificial Intelligence
Decoding ketamine: Neurobiological mechanisms underlying its rapid antidepressant efficacy
Unlike traditional monoamine-based antidepressants that require weeks to exert effects, ketamine alleviates depression within hours, though its clinical use is limited by side effects. While ketamine was initially thought to work primarily through NMDA receptor (NMDAR) inhibition, our research reveals a more complex mechanism. We demonstrate that NMDAR inhibition alone cannot explain ketamine's sustained antidepressant effects, as other NMDAR antagonists like MK-801 lack similar efficacy. Instead, the (2R,6R)-hydroxynorketamine (HNK) metabolite appears critical, exhibiting antidepressant effects without ketamine's side effects. Paradoxically, our findings suggest an inverted U-shaped dose-response relationship where excessive NMDAR inhibition may actually impede antidepressant efficacy, while some level of NMDAR activation is necessary. The antidepressant actions of ketamine and (2R,6R)-HNK require AMPA receptor activation, leading to synaptic potentiation and upregulation of AMPA receptor subunits GluA1 and GluA2. Furthermore, NMDAR subunit GluN2A appears necessary and possibly sufficient for these effects. This research establishes NMDAR-GluN2A activation as a common downstream effector for rapid-acting antidepressants, regardless of their initial targets, offering promising directions for developing next-generation antidepressants with improved efficacy and reduced side effects.
Impact of High Fat Diet on Central Cardiac Circuits: When The Wanderer is Lost
Cardiac vagal motor drive originates in the brainstem's cardiac vagal motor neurons (CVNs). Despite well-established cardioinhibitory functions in health, our understanding of CVNs in disease is limited. There is a clear connection of cardiovascular regulation with metabolic and energy expenditure systems. Using high fat diet as a model, this talk will explore how metabolic dysfunction impacts the regulation of cardiac tissue through robust inhibition of CVNs. Specifically, it will present an often overlooked modality of inhibition, tonic gamma-aminobutyric acid (GABA) A-type neurotransmission, using an array of techniques from single cell patch clamp electrophysiology to transgenic in vivo whole animal physiology. It will also highlight a unique interaction with the delta isoform of protein kinase C to facilitate GABA A-type receptor expression.
What it’s like is all there is: The value of Consciousness
Over the past thirty years or so, cognitive neuroscience has made spectacular progress understanding the biological mechanisms of consciousness. Consciousness science, as this field is now sometimes called, was not only nonexistent thirty years ago, but its very name seemed like an oxymoron: how can there be a science of consciousness? And yet, despite this scepticism, we are now equipped with a rich set of sophisticated behavioural paradigms, with an impressive array of techniques making it possible to see the brain in action, and with an ever-growing collection of theories and speculations about the putative biological mechanisms through which information processing becomes conscious. This is all well and good, even promising, but we also seem to have thrown the baby out with the bathwater, or at least to have forgotten it in the crib: consciousness is not just mechanisms, it’s what it feels like. In other words, while we have thousands of informative studies of access-consciousness, we have little in the way of work on phenomenal consciousness. But that — what it feels like — is truly what “consciousness” is about. Understanding why it feels like something to be me and nothing (panpsychists notwithstanding) for a stone to be a stone is what the field has always been after. However, while it is relatively easy to study access-consciousness through the contrastive approach applied to reports, it is much less clear how to study phenomenology, its structure and its function. Here, I first overview work on what consciousness does (the "how"). Next, I ask what difference feeling things makes and what function phenomenology might play. I argue that subjective experience has intrinsic value and plays a functional role in everything that we do.
Regulation of cortical circuit maturation and plasticity by oligodendrocytes and myelin
Spatio-temporal Regulation of Gene Expression in Neurons: Insights from Imaging mRNAs Live in Action
The synaptic functions of Alpha Synuclein and Lrrk2
Alpha synuclein and Lrrk2 are key players in Parkinson's disease and related disorders, but their normal role has been confusing and controversial. Data from acute gene-editing based knockdown, followed by functional assays, will be presented.
Digital Minds: Brain Development in the Age of Technology
Digital Minds: Brain Development in the Age of Technology examines how our increasingly connected world shapes mental and cognitive health. From screen time and social media to virtual interactions, this seminar delves into the latest research on how technology influences brain development, relationships, and emotional well-being. Join us to explore strategies for harnessing technology's benefits while mitigating its potential challenges, empowering you to thrive in a digital age.
Vision for perception versus vision for action: dissociable contributions of visual sensory drives from primary visual cortex and superior colliculus neurons to orienting behaviors
The primary visual cortex (V1) directly projects to the superior colliculus (SC) and is believed to provide sensory drive for eye movements. Consistent with this, a majority of saccade-related SC neurons also exhibit short-latency, stimulus-driven visual responses, which are additionally feature-tuned. However, direct neurophysiological comparisons of the visual response properties of the two anatomically-connected brain areas are surprisingly lacking, especially with respect to active looking behaviors. I will describe a series of experiments characterizing visual response properties in primate V1 and SC neurons, exploring feature dimensions like visual field location, spatial frequency, orientation, contrast, and luminance polarity. The results suggest a substantial, qualitative reformatting of SC visual responses when compared to V1. For example, SC visual response latencies are actively delayed, independent of individual neuron tuning preferences, as a function of increasing spatial frequency, and this phenomenon is directly correlated with saccadic reaction times. Such “coarse-to-fine” rank ordering of SC visual response latencies as a function of spatial frequency is much weaker in V1, suggesting a dissociation of V1 responses from saccade timing. Consistent with this, when we next explored trial-by-trial correlations of individual neurons’ visual response strengths and visual response latencies with saccadic reaction times, we found that most SC neurons exhibited, on a trial-by-trial basis, stronger and earlier visual responses for faster saccadic reaction times. Moreover, these correlations were substantially higher for visual-motor neurons in the intermediate and deep layers than for more superficial visual-only neurons. No such correlations existed systematically in V1. Thus, visual responses in SC and V1 serve fundamentally different roles in active vision: V1 jumpstarts sensing and image analysis, but SC jumpstarts moving. 
I will finish by demonstrating, using V1 reversible inactivation, that, despite reformatting of signals from V1 to the brainstem, V1 is still a necessary gateway for visually-driven oculomotor responses to occur, even for the most reflexive of eye movement phenomena. This is a fundamental difference from rodent studies demonstrating clear V1-independent processing in afferent visual pathways bypassing the geniculostriate one, and it demonstrates the importance of multi-species comparisons in the study of oculomotor control.
Where are you Moving? Assessing Precision, Accuracy, and Temporal Dynamics in Multisensory Heading Perception Using Continuous Psychophysics
Analyzing Network-Level Brain Processing and Plasticity Using Molecular Neuroimaging
Behavior and cognition depend on the integrated action of neural structures and populations distributed throughout the brain. We recently developed a set of molecular imaging tools that enable multiregional processing and plasticity in neural networks to be studied at a brain-wide scale in rodents and nonhuman primates. Here we will describe how a novel genetically encoded activity reporter enables information flow in virally labeled neural circuitry to be monitored by fMRI. Using the reporter to perform functional imaging of synaptically defined neural populations in the rat somatosensory system, we show how activity is transformed within brain regions to yield characteristics specific to distinct output projections. We also show how this approach enables regional activity to be modeled in terms of inputs, in a paradigm that we are extending to address circuit-level origins of functional specialization in marmoset brains. In the second part of the talk, we will discuss how another genetic tool for MRI enables systematic studies of the relationship between anatomical and functional connectivity in the mouse brain. We show that variations in physical and functional connectivity can be dissociated both across individual subjects and over experience. We also use the tool to examine brain-wide relationships between plasticity and activity during an opioid treatment. This work demonstrates the possibility of studying diverse brain-wide processing phenomena using molecular neuroimaging.
Rethinking Attention: Dynamic Prioritization
Decades of research on understanding the mechanisms of attentional selection have focused on identifying the units (representations) on which attention operates in order to guide prioritized sensory processing. These attentional units fit neatly within our understanding of how attention is allocated in a top-down, bottom-up, or historical fashion. In this talk, I will focus on attentional phenomena that are not easily accommodated within current theories of attentional selection – the “attentional platypuses,” as they allude to the observation that within biological taxonomies the platypus does not fit into either mammal or bird categories. Similarly, attentional phenomena that do not fit neatly within current attentional models suggest that those models need to be revised. I list a few instances of the “attentional platypuses” and then offer a new approach, Dynamically Weighted Prioritization, stipulating that multiple factors impinge onto the attentional priority map, each with a corresponding weight. The interaction between factors and their corresponding weights determines the current state of the priority map, which subsequently constrains/guides attention allocation. I propose that this new approach should be considered as a supplement to existing models of attention, especially those that emphasize categorical organizations.
Mapping the neural dynamics of dominance and defeat
Social experiences can produce lasting changes in behavior and affective state. In particular, repeated wins and losses during fighting can facilitate and suppress future aggressive behavior, leading to persistent high aggression or low aggression states. We use a combination of techniques for multi-region neural recording, perturbation, behavioral analysis, and modeling to understand how nodes in the brain’s subcortical “social decision-making network” encode and transform aggressive motivation into action, and how these circuits change following social experience.
SWEBAGS conference 2024: The basal ganglia in action
Decision and Behavior
This webinar addressed computational perspectives on how animals and humans make decisions, spanning normative, descriptive, and mechanistic models. Sam Gershman (Harvard) presented a capacity-limited reinforcement learning framework in which policies are compressed under an information bottleneck constraint. This approach predicts pervasive perseveration, stimulus‐independent “default” actions, and trade-offs between complexity and reward. Such policy compression reconciles observed action stochasticity and response time patterns with an optimal balance between learning capacity and performance. Jonathan Pillow (Princeton) discussed flexible descriptive models for tracking time-varying policies in animals. He introduced dynamic Generalized Linear Models (Sidetrack) and hidden Markov models (GLM-HMMs) that capture day-to-day and trial-to-trial fluctuations in choice behavior, including abrupt switches between “engaged” and “disengaged” states. These models provide new insights into how animals’ strategies evolve under learning. Finally, Kenji Doya (OIST) highlighted the importance of unifying reinforcement learning with Bayesian inference, exploring how cortical-basal ganglia networks might implement model-based and model-free strategies. He also described Japan’s Brain/MINDS 2.0 and Digital Brain initiatives, aiming to integrate multimodal data and computational principles into cohesive “digital brains.”
Learning and Memory
This webinar on learning and memory features three experts—Nicolas Brunel, Ashok Litwin-Kumar, and Julijana Gjorgieva—who present theoretical and computational approaches to understanding how neural circuits acquire and store information across different scales. Brunel discusses calcium-based plasticity and how standard “Hebbian-like” plasticity rules inferred from in vitro or in vivo datasets constrain synaptic dynamics, aligning with classical observations (e.g., STDP) and explaining how synaptic connectivity shapes memory. Litwin-Kumar explores insights from the fruit fly connectome, emphasizing how the mushroom body—a key site for associative learning—implements a high-dimensional, random representation of sensory features. Convergent dopaminergic inputs gate plasticity, reflecting a high-dimensional “critic” that refines behavior. Feedback loops within the mushroom body further reveal sophisticated interactions between learning signals and action selection. Gjorgieva examines how activity-dependent plasticity rules shape circuitry from the subcellular (e.g., synaptic clustering on dendrites) to the cortical network level. She demonstrates how spontaneous activity during development, Hebbian competition, and inhibitory-excitatory balance collectively establish connectivity motifs responsible for key computations such as response normalization.
Unmotivated bias
In this talk, I will explore how social affective biases arise even in the absence of motivational factors as an emergent outcome of the basic structure of social learning. In several studies, we found that initial negative interactions with some members of a group can cause subsequent avoidance of the entire group, and that this avoidance perpetuates stereotypes. Additional cognitive modeling revealed that approach and avoidance behavior based on biased beliefs not only influences the evaluative (positive or negative) impressions of group members, but also shapes the depth of the cognitive representations available to learn about individuals. In other words, people have richer cognitive representations of members of groups that are not avoided, akin to individualized vs group level categories. I will end by presenting a series of multi-agent reinforcement learning simulations that demonstrate the emergence of these social-structural feedback loops in the development and maintenance of affective biases.
How the brain barriers ensure CNS immune privilege
Britta Engelhardt’s research is devoted to understanding the function of the different brain barriers in regulating CNS immune surveillance and how their impaired function contributes to neuroinflammatory diseases such as Multiple Sclerosis (MS) or Alzheimer’s disease (AD). Her laboratory combines expertise in vascular biology, neuroimmunology and live cell imaging and has developed sophisticated in vitro and in vivo approaches to study immune cell interactions with the brain barriers in health and neuroinflammation.
Principles of Cognitive Control over Task Focus and Task Switching
2024 BACN Mid-Career Prize Lecture
Adaptive behavior requires the ability to focus on a current task and protect it from distraction (cognitive stability), and to rapidly switch tasks when circumstances change (cognitive flexibility). How people control task focus and switch-readiness has therefore been the target of burgeoning research literatures. Here, I review and integrate these literatures to derive a cognitive architecture and functional rules underlying the regulation of stability and flexibility. I propose that task focus and switch-readiness are supported by independent mechanisms whose strategic regulation is nevertheless governed by shared principles: both stability and flexibility are matched to anticipated challenges via an incremental, online learner that nudges control up or down based on the recent history of task demands (a recency heuristic), as well as via episodic reinstatement when the current context matches a past experience (a recognition heuristic).
Personalized medicine and predictive health and wellness: Adding the chemical component
Wearable sensors that detect and quantify biomarkers in retrievable biofluids (e.g., interstitial fluid, sweat, tears) provide information on dynamic human physiological and psychological states. This information can transform health and wellness by providing actionable feedback. Due to outdated and insufficiently sensitive technologies, current on-body sensing systems have capabilities limited to pH and a few high-concentration electrolytes, metabolites, and nutrients. As such, wearable sensing systems cannot detect key low-concentration biomarkers indicative of stress, inflammation, and metabolic and reproductive status. We are revolutionizing sensing. Our electronic biosensors detect virtually any signaling molecule or metabolite at ultra-low levels. We have monitored serotonin, dopamine, cortisol, phenylalanine, estradiol, progesterone, and glucose in blood, sweat, interstitial fluid, and tears. The sensors are based on modern nanoscale semiconductor transistors that are straightforwardly scalable for manufacturing. We are developing sensors for >40 biomarkers for personalized continuous monitoring (e.g., smartwatch, wearable patch) that will provide feedback for treating chronic health conditions (e.g., perimenopause, stress disorders, phenylketonuria). Moreover, our sensors will enable female fertility monitoring and the adoption of healthier lifestyles to prevent disease and improve physical and cognitive performance.
Metabolic-functional coupling of parvalbumin-positive GABAergic interneurons in the injured and epileptic brain
Parvalbumin-positive GABAergic interneurons (PV-INs) provide inhibitory control of excitatory neuron activity, coordinate circuit function, and regulate behavior and cognition. PV-INs are uniquely susceptible to loss and dysfunction in traumatic brain injury (TBI) and epilepsy, but the cause of this susceptibility is unknown. One hypothesis is that PV-INs use specialized metabolic systems to support their high-frequency action potential firing and that metabolic stress disrupts these systems, leading to their dysfunction and loss. Metabolism-based therapies can restore PV-IN function after injury in preclinical TBI models. Based on these findings, we hypothesize that (1) PV-INs are highly metabolically specialized, (2) these specializations are lost after TBI, and (3) restoring PV-IN metabolic specializations can improve PV-IN function as well as TBI-related outcomes. Using novel single-cell approaches, we can now quantify cell-type-specific metabolism in complex tissues to determine whether PV-IN metabolic dysfunction contributes to the pathophysiology of TBI.
Neural mechanisms governing the learning and execution of avoidance behavior
The nervous system orchestrates adaptive behaviors by intricately coordinating responses to internal cues and environmental stimuli. This involves integrating sensory input, managing competing motivational states, and drawing on past experiences to anticipate future outcomes. While traditional models attribute this complexity to interactions between the mesocorticolimbic system and hypothalamic centers, the specific nodes of integration have remained elusive. Recent research, including our own, sheds light on the midline thalamus's overlooked role in this process. We propose that the midline thalamus integrates internal states with memory and emotional signals to guide adaptive behaviors. Our investigations into midline thalamic neuronal circuits have provided crucial insights into the neural mechanisms behind flexibility and adaptability. Understanding these processes is essential for deciphering human behavior and conditions marked by impaired motivation and emotional processing. Our research aims to contribute to this understanding, paving the way for targeted interventions and therapies to address such impairments.
Visuomotor learning of location, action, and prediction
Cerebellum-Basal Ganglia Interactions
Characterizing the causal role of large-scale network interactions in supporting complex cognition
Neuroimaging has greatly extended our capacity to study the workings of the human brain. Despite the wealth of knowledge this tool has generated, however, there are still critical gaps in our understanding. While tremendous progress has been made in mapping areas of the brain that are specialized for particular stimuli or cognitive processes, we still know very little about how large-scale interactions between different cortical networks facilitate the integration of information and the execution of complex tasks. Yet even the simplest behavioral tasks are complex, requiring integration over multiple cognitive domains. Our knowledge falls short not only in understanding how this integration takes place, but also in what drives the profound variation in behavior that can be observed on almost every task, even within the typically developing (TD) population. The search for the neural underpinnings of individual differences is important not only philosophically, but also in the service of precision medicine. We approach these questions using a three-pronged approach. First, we create a battery of behavioral tasks from which we can calculate objective measures for different aspects of the behaviors of interest, with sufficient variance across the TD population. Second, using these individual differences in behavior, we identify the neural variance which explains the behavioral variance at the network level. Finally, using covert neurofeedback, we perturb the networks hypothesized to correspond to each of these components, thus directly testing their causal contribution. I will discuss our overall approach, as well as a few of the new directions we are currently pursuing.
Dopamine Acetylcholine interactions
Time perception in film viewing as a function of film editing
Filmmakers and editors have empirically developed techniques to ensure the spatiotemporal continuity of a film's narration. In terms of time, editing techniques (e.g., elliptical, overlapping, or cut minimization) allow for the manipulation of the perceived duration of events as they unfold on screen. More specifically, a scene can be edited to be time compressed, expanded, or real-time in terms of its perceived duration. Despite the consistent application of these techniques in filmmaking, their perceptual outcomes have not been experimentally validated. Given that viewing a film is experienced as a precise simulation of the physical world, the use of cinematic material to examine aspects of time perception allows for experimentation with high ecological validity, while filmmakers gain more insight into how empirically developed techniques influence viewers' time percept. Here, we investigated how such time manipulation techniques affect a scene's perceived duration. Specifically, we presented videos depicting different actions (e.g., a woman talking on the phone), edited according to the techniques applied for temporal manipulation, and asked participants to make verbal estimations of the presented scenes' perceived durations. Analysis of the data revealed that the duration of expanded scenes was significantly overestimated as compared to that of compressed and real-time scenes, as was the duration of real-time scenes as compared to that of compressed scenes. Therefore, our results validate the empirical techniques applied for the modulation of a scene's perceived duration. We also found that time estimates reflected interactions between scene type and editing technique, varying with the characteristics and the action of the scene presented. Thus, these findings add to the discussion that the content and characteristics of a scene, along with the editing technique applied, can also modulate perceived duration.
Our findings are discussed by considering current timing frameworks, as well as attentional saliency algorithms measuring the visual saliency of the presented stimuli.
Brain-heart interactions at the edges of consciousness
Various clinical cases have provided evidence linking cardiovascular, neurological, and psychiatric disorders to changes in brain-heart interactions. Our recent experimental evidence in patients with disorders of consciousness revealed that observing brain-heart interactions helps to detect residual consciousness, even in patients with no behavioral signs of consciousness. Those findings support hypotheses suggesting that visceral activity is involved in the neurobiology of consciousness and add to existing evidence from healthy participants, in whom neural responses to heartbeats reflect perceptual and self-consciousness. Furthermore, the presence of non-linear, complex, and bidirectional communication between brain and heartbeat dynamics can provide further insights into the physiological state of the patient following severe brain injury. These methodological developments for analyzing brain-heart interactions open new avenues for understanding neural functioning at a large-scale level, uncovering that peripheral bodily activity can influence brain homeostatic processes, cognition, and behavior.
The Mirror Mechanism
Neurovascular Interactions: Mechanisms, Imaging, Therapeutics
Visual mechanisms for flexible behavior
Perhaps the most impressive aspect of the way the brain enables us to act on the sensory world is its flexibility. We can make a general inference about many sensory features (rating the ripeness of mangoes or avocados) and map a single stimulus onto many choices (slicing or blending mangoes). These can be thought of as flexible many-to-one (many features to one inference) and one-to-many (one feature to many choices) mappings from sensory inputs to actions. Both theoretical and experimental investigations of this sort of flexible sensorimotor mapping tend to treat sensory areas as relatively static. Models typically instantiate flexibility through changing interactions (or weights) between units that encode sensory features and those that plan actions. Experimental investigations often focus on association areas involved in decision-making that show pronounced modulations by cognitive processes. I will present evidence that the flexible formatting of visual information in visual cortex can support both generalized inference and choice mapping. Our results suggest that visual cortex mediates many forms of cognitive flexibility that have traditionally been ascribed to other areas or mechanisms. Further, we find that a primary difference between visual and putative decision areas is not what information they encode, but how that information is formatted in the responses of neural populations, which relates to differences in the impact of causally manipulating each area on behavior. This scenario allows for flexibility in the mapping between stimuli and behavior while maintaining stability in the information encoded in each area and in the mappings between groups of neurons.
Using Adversarial Collaboration to Harness Collective Intelligence
There are many mysteries in the universe. One of the most significant, often considered the final frontier in science, is understanding how our subjective experience, or consciousness, emerges from the collective action of neurons in biological systems. While substantial progress has been made over the past decades, a unified and widely accepted explanation of the neural mechanisms underpinning consciousness remains elusive. The field is rife with theories that frequently provide contradictory explanations of the phenomenon. To accelerate progress, we have adopted a new model of science: adversarial collaboration in team science. Our goal is to test theories of consciousness in an adversarial setting. Adversarial collaboration offers a unique way to bolster creativity and rigor in scientific research by merging the expertise of teams with diverse viewpoints. Ideally, we aim to harness collective intelligence, embracing various perspectives, to expedite the uncovering of scientific truths. In this talk, I will highlight the effectiveness (and challenges) of this approach using selected case studies, showcasing its potential to counter biases, challenge traditional viewpoints, and foster innovative thought. Through the joint design of experiments, teams incorporate a competitive aspect, ensuring comprehensive exploration of problems. This method underscores the importance of structured conflict and diversity in propelling scientific advancement and innovation.
Measures and models of multisensory integration in reaction times
First, a new measure of multisensory integration (MI) for reaction times (RTs) is proposed that takes the entire RT distribution into account. Second, we present some recent developments in time-window-of-integration (TWIN) modeling, including a new proposal for the sound-induced flash illusion (SIFI).
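The abstract does not specify the new measure, but a classical distribution-level benchmark in this literature is the race-model inequality, which compares the redundant-target (audiovisual) RT distribution against the summed unimodal distributions at every time point rather than at a single quantile. A minimal numpy sketch of that idea (function names and simulated data are illustrative, not from the talk):

```python
import numpy as np

def ecdf(samples, t_grid):
    """Empirical cumulative distribution of RTs evaluated on a time grid."""
    samples = np.sort(np.asarray(samples, dtype=float))
    return np.searchsorted(samples, t_grid, side="right") / len(samples)

def race_violation_area(rt_av, rt_a, rt_v, t_grid):
    """Total area by which the audiovisual RT distribution exceeds the
    race-model bound min(F_A(t) + F_V(t), 1), integrated over the grid."""
    excess = np.clip(
        ecdf(rt_av, t_grid)
        - np.minimum(ecdf(rt_a, t_grid) + ecdf(rt_v, t_grid), 1.0),
        0.0, None,
    )
    # trapezoidal integration over the whole RT range
    return float(np.sum(0.5 * (excess[1:] + excess[:-1]) * np.diff(t_grid)))
```

A value of zero is consistent with a parallel race between the modalities; a positive area summarizes integration across the full distribution rather than at one point.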
Machine learning for reconstructing, understanding and intervening on neural interactions
Astrocyte reprogramming / activation and brain homeostasis
Astrocytes are multifunctional glial cells, implicated in neurogenesis and synaptogenesis, supporting and fine-tuning neuronal activity and maintaining brain homeostasis by controlling blood-brain barrier permeability. In recent years, a number of studies have shown that astrocytes can also be converted into neurons when forced to express neurogenic transcription factors or miRNAs. Direct astrocytic reprogramming to induced neurons (iNs) is a powerful approach for manipulating cell fate, as it takes advantage of the intrinsic neural stem cell (NSC) potential of brain-resident reactive astrocytes. To this end, astrocytic cell fate conversion to iNs has been well established in vitro and in vivo using combinations of transcription factors (TFs) or chemical cocktails. Changes in the expression of lineage-specific TFs are accompanied by changes in the expression of miRNAs, which post-transcriptionally modulate large numbers of neurogenesis-promoting factors and have therefore been introduced, supplementary or alternatively to TFs, to instruct direct neuronal reprogramming. The neurogenic miRNA miR-124 has been employed in direct reprogramming protocols alongside neurogenic TFs and other miRNAs to enhance direct neurogenic conversion by suppressing multiple non-neuronal targets. In our group, we aimed to investigate whether miR-124 is sufficient on its own to drive direct reprogramming of astrocytes to iNs both in vitro and in vivo, and to elucidate its independent mechanism of reprogramming action. Our in vitro data indicate that miR-124 is a potent driver of the reprogramming switch of astrocytes towards an immature neuronal fate. Elucidation of the molecular pathways triggered by miR-124 through RNA-seq analysis revealed that miR-124 is sufficient to instruct reprogramming of cortical astrocytes to immature iNs in vitro by down-regulating genes with important regulatory roles in astrocytic function.
Among these, the RNA-binding protein Zfp36l1, implicated in ARE-mediated mRNA decay, was found to be a direct target of miR-124; Zfp36l1 in turn targets neuronal-specific proteins participating in cortical development, which become de-repressed in miR-124-iNs. Furthermore, miR-124 is sufficient to guide direct neuronal reprogramming of reactive astrocytes to iNs of cortical identity following cortical trauma, a novel finding confirming its robust reprogramming action within the cortical microenvironment under neuroinflammatory conditions. In parallel to their reprogramming properties, astrocytes also participate in the maintenance of blood-brain barrier integrity, which ensures the physiological functioning of the central nervous system and whose disruption contributes to the pathology of several neurodegenerative diseases. To study in real time the dynamic physical interactions of astrocytes with the brain vasculature under homeostatic and pathological conditions, we performed two-photon intravital brain imaging in a mouse model of systemic neuroinflammation, known to trigger astrogliosis and microgliosis and to evoke changes in astrocytic contact with the brain vasculature. Our in vivo findings indicate that following neuroinflammation the endfeet of activated perivascular astrocytes lose their close proximity and physiological cross-talk with the vasculature; this loss is, however, at least partly compensated by the cross-talk of astrocytes with activated microglia, safeguarding blood vessel coverage and the maintenance of blood-brain barrier integrity.
Sensory Consequences of Visual Actions
We use rapid eye, head, and body movements to extract information from a new part of the visual scene upon each new gaze fixation. But the consequences of such visual actions go beyond their intended sensory outcomes. On the one hand, intrinsic consequences accompany movement preparation as covert internal processes (e.g., predictive changes in the deployment of visual attention). On the other hand, visual actions have incidental consequences, side effects of moving the sensory surface to its intended goal (e.g., global motion of the retinal image during saccades). In this talk, I will present studies in which we investigated intrinsic and incidental sensory consequences of visual actions and their sensorimotor functions. Our results provide insights into continuously interacting top-down and bottom-up sensory processes, and they underscore the necessity to study perception in connection to the motor behavior that shapes its fundamental processes.
Neuronal population interactions between brain areas
Most brain functions involve interactions among multiple, distinct areas or nuclei. Yet our understanding of how populations of neurons in interconnected brain areas communicate is in its infancy. Using a population approach, we found that interactions between early visual cortical areas (V1 and V2) occur through a low-dimensional bottleneck, termed a communication subspace. In this talk, I will focus on the statistical methods we have developed for studying interactions between brain areas. First, I will describe Delayed Latents Across Groups (DLAG), designed to disentangle concurrent, bi-directional (i.e., feedforward and feedback) interactions between areas. Second, I will describe an extension of DLAG applicable to three or more areas, and demonstrate its utility for studying simultaneous Neuropixels recordings in areas V1, V2, and V3. Our results provide a framework for understanding how neuronal population activity is gated and selectively routed across brain areas.
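DLAG itself is detailed in the speaker's publications; as background, the communication-subspace idea it builds on can be illustrated with reduced-rank regression, which constrains the linear map from source-area to target-area population activity to a low-dimensional bottleneck. A minimal numpy sketch (illustrative only, not the authors' implementation):

```python
import numpy as np

def reduced_rank_regression(X, Y, rank):
    """Least-squares map from source-area activity X (trials x neurons) to
    target-area activity Y, constrained to pass through a `rank`-dimensional
    bottleneck (the putative communication subspace)."""
    B_ols = np.linalg.pinv(X) @ Y                # unconstrained fit
    _, _, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
    V_k = Vt[:rank].T                            # top predictive dimensions
    return B_ols @ V_k @ V_k.T                   # rank-constrained map
```

Comparing prediction performance as `rank` grows is how a low-dimensional bottleneck is diagnosed: if a small rank predicts the target area as well as the full-rank model, the interaction is confined to a communication subspace.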
Multisensory perception, learning, and memory
Note the later start time!
Event-related frequency adjustment (ERFA): A methodology for investigating neural entrainment
Neural entrainment has become a phenomenon of exceptional interest to neuroscience, given its involvement in rhythm perception, production, and overt synchronized behavior. Yet, traditional methods fail to quantify neural entrainment due to a misalignment with its fundamental definition (e.g., see Novembre and Iannetti, 2018; Rajendran and Schnupp, 2019). The definition of entrainment assumes that endogenous oscillatory brain activity undergoes dynamic frequency adjustments to synchronize with environmental rhythms (Lakatos et al., 2019). Following this definition, we recently developed a method sensitive to this process. Our aim was to isolate from the electroencephalographic (EEG) signal an oscillatory component that is attuned to the frequency of a rhythmic stimulation, hypothesizing that the oscillation would adaptively speed up and slow down to achieve stable synchronization over time. To induce and measure these adaptive changes in a controlled fashion, we developed the event-related frequency adjustment (ERFA) paradigm (Rosso et al., 2023). A total of twenty healthy participants took part in our study. They were instructed to tap their finger synchronously with an isochronous auditory metronome, which was unpredictably perturbed by phase shifts and tempo changes in both positive and negative directions across different experimental conditions. EEG was recorded during the task, and ERFA responses were quantified as changes in instantaneous frequency of the entrained component. Our results indicate that ERFAs track the stimulus dynamics in accordance with the perturbation type and direction, preferentially for a sensorimotor component. The clear and consistent patterns confirm that our method is sensitive to the process of frequency adjustment that defines neural entrainment.
In this Virtual Journal Club, the discussion of our findings will be complemented by methodological insights beneficial to researchers in the fields of rhythm perception and production, as well as timing in general. We discuss the dos and don’ts of using instantaneous frequency to quantify oscillatory dynamics, the advantages of adopting a multivariate approach to source separation, the robustness against the confounder of responses evoked by periodic stimulation, and provide an overview of domains and concrete examples where the methodological framework can be applied.
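As rough background to the quantity at the heart of the method, instantaneous frequency is typically obtained as the derivative of the unwrapped phase of a narrow-band component's analytic signal. The numpy-only sketch below (an FFT-based equivalent of `scipy.signal.hilbert`; the simulated tempo change is illustrative, not the authors' pipeline) recovers a frequency adjustment from a synthetic "entrained" oscillation:

```python
import numpy as np

def analytic_signal(x):
    """FFT-based analytic signal (a numpy-only stand-in for scipy.signal.hilbert)."""
    n = len(x)
    spectrum = np.fft.fft(x)
    gain = np.zeros(n)
    gain[0] = 1.0
    gain[1:(n + 1) // 2] = 2.0            # double positive frequencies
    if n % 2 == 0:
        gain[n // 2] = 1.0                # keep Nyquist bin as-is
    return np.fft.ifft(spectrum * gain)

def instantaneous_frequency(x, fs):
    """Instantaneous frequency (Hz) as the derivative of unwrapped phase."""
    phase = np.unwrap(np.angle(analytic_signal(x)))
    return np.diff(phase) * fs / (2.0 * np.pi)

# Synthetic 'entrained' component: a 2.0 Hz oscillation that adjusts to
# 2.2 Hz halfway through, mimicking a response to an unpredicted tempo change.
fs = 250
t = np.arange(0, 20, 1.0 / fs)
freq = np.where(t < 10, 2.0, 2.2)
signal = np.sin(2.0 * np.pi * np.cumsum(freq) / fs)
inst_f = instantaneous_frequency(signal, fs)
```

Averaging `inst_f` before and after the change point recovers the two tempi; in real EEG this estimate is only meaningful after the entrained component has been isolated, which is where the multivariate source separation discussed in the journal club comes in.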
Gut/Body interactions in health and disease
The adult intestine is a major barrier epithelium and coordinator of multi-organ functions. Stem cells constantly repair the intestinal epithelium by adjusting their proliferation and differentiation to tissue-intrinsic as well as micro- and macro-environmental signals. How these signals integrate to control intestinal and whole-body homeostasis is largely unknown. Addressing this gap in knowledge is central to an improved understanding of intestinal pathophysiology and its systemic consequences. Combining Drosophila and mammalian model systems, my laboratory has discovered fundamental mechanisms driving intestinal regeneration and tumourigenesis and outlined complex inter-organ signaling regulating health and disease. During my talk, I will discuss inter-related areas of research from my lab, including: (1) interactions between the intestine and its microenvironment influencing intestinal regeneration and tumourigenesis; and (2) long-range signals from the intestine impacting whole-body physiology in health and disease.
Trends in NeuroAI - SwiFT: Swin 4D fMRI Transformer
Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri). Title: SwiFT: Swin 4D fMRI Transformer Abstract: Modeling spatiotemporal brain dynamics from high-dimensional data, such as functional Magnetic Resonance Imaging (fMRI), is a formidable task in neuroscience. Existing approaches for fMRI analysis utilize hand-crafted features, but the process of feature extraction risks losing essential information in fMRI scans. To address this challenge, we present SwiFT (Swin 4D fMRI Transformer), a Swin Transformer architecture that can learn brain dynamics directly from fMRI volumes in a memory- and computation-efficient manner. SwiFT achieves this by implementing a 4D window multi-head self-attention mechanism and absolute positional embeddings. We evaluate SwiFT using multiple large-scale resting-state fMRI datasets, including the Human Connectome Project (HCP), Adolescent Brain Cognitive Development (ABCD), and UK Biobank (UKB) datasets, to predict sex, age, and cognitive intelligence. Our experimental outcomes reveal that SwiFT consistently outperforms recent state-of-the-art models. Furthermore, by leveraging its end-to-end learning capability, we show that contrastive loss-based self-supervised pre-training of SwiFT can enhance performance on downstream tasks. Additionally, we employ an explainable AI method to identify the brain regions associated with sex classification. To our knowledge, SwiFT is the first Swin Transformer architecture to process 4D spatiotemporal brain functional data in an end-to-end fashion. Our work holds substantial potential in facilitating scalable learning of functional brain imaging in neuroscience research by reducing the hurdles associated with applying Transformer models to high-dimensional fMRI. Speaker: Junbeom Kwon is a research associate working in Prof. Jiook Cha's lab at Seoul National University. Paper link: https://arxiv.org/abs/2307.05916
Prefrontal mechanisms involved in learning distractor-resistant working memory in a dual task
Working memory (WM) is a cognitive function that allows the short-term maintenance and manipulation of information when it is no longer accessible to the senses. It relies on temporarily storing stimulus features in the activity of neuronal populations. To preserve these dynamics from distraction, it has been proposed that pre- and post-distraction population activity decomposes into orthogonal subspaces. If orthogonalization is necessary to avoid WM distraction, it should emerge as performance in the task improves. We sought evidence of WM orthogonalization learning and the underlying mechanisms by analyzing calcium imaging data from the prelimbic (PrL) and anterior cingulate (ACC) cortices of mice as they learned to perform an olfactory dual task. The dual task combines an outer Delayed Paired-Association task (DPA) with an inner Go-NoGo task. We examined how neuronal activity reflected the process of protecting the DPA sample information against Go/NoGo distractors. As mice learned the task, we measured the overlap of the neural activity with the low-dimensional subspaces that encode sample or distractor odors. Early in training, pre-distraction activity overlapped with both sample and distractor subspaces. Later in training, pre-distraction activity was strictly confined to the sample subspace, resulting in a more robust sample code. To gain mechanistic insight into how these low-dimensional WM representations evolve with learning, we built a recurrent spiking network model of excitatory and inhibitory neurons with low-rank connections. The model links learning to (1) the orthogonalization of sample and distractor WM subspaces and (2) the orthogonalization of each subspace with irrelevant inputs. We validated (1) by measuring the angular distance between the sample and distractor subspaces through learning in the data.
Prediction (2) was validated in PrL through the photoinhibition of ACC to PrL inputs, which induced early-training neural dynamics in well-trained animals. In the model, learning drives the network from a double-well attractor toward a more continuous ring attractor regime. We tested signatures for this dynamical evolution in the experimental data by estimating the energy landscape of the dynamics on a one-dimensional ring. In sum, our study defines network dynamics underlying the process of learning to shield WM representations from distracting tasks.
Multisensory integration in peripersonal space (PPS) for action, perception and consciousness
Note the later time in the USA!
A recurrent network model of planning predicts hippocampal replay and human behavior
When interacting with complex environments, humans can rapidly adapt their behavior to changes in task or context. To facilitate this adaptation, we often spend substantial periods of time contemplating possible futures before acting. For such planning to be rational, the benefits of planning to future behavior must at least compensate for the time spent thinking. Here we capture these features of human behavior by developing a neural network model where not only actions, but also planning, are controlled by prefrontal cortex. This model consists of a meta-reinforcement learning agent augmented with the ability to plan by sampling imagined action sequences drawn from its own policy, which we refer to as "rollouts". Our results demonstrate that this agent learns to plan when planning is beneficial, explaining the empirical variability in human thinking times. Additionally, the patterns of policy rollouts employed by the artificial agent closely resemble patterns of rodent hippocampal replays recently recorded in a spatial navigation task, in terms of both their spatial statistics and their relationship to subsequent behavior. Our work provides a new theory of how the brain could implement planning through prefrontal-hippocampal interactions, where hippocampal replays are triggered by -- and in turn adaptively affect -- prefrontal dynamics.
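As a toy caricature of the rollout mechanism described above (not the meta-reinforcement-learning agent of the paper), the sketch below samples imagined action sequences from the agent's own stochastic policy inside an internal model and commits to the first action of the best imagined sequence; all names and the line-world model are illustrative:

```python
import random

def rollout(policy, model_step, state, depth):
    """Imagine an action sequence by sampling the agent's own policy inside
    an internal model, returning the sequence and its imagined return."""
    actions, total = [], 0.0
    for _ in range(depth):
        action = policy(state)
        state, reward = model_step(state, action)
        actions.append(action)
        total += reward
    return actions, total

def plan(policy, model_step, state, n_rollouts, depth):
    """Sample several rollouts and commit to the first action of the best one."""
    best_actions, _ = max(
        (rollout(policy, model_step, state, depth) for _ in range(n_rollouts)),
        key=lambda pair: pair[1],
    )
    return best_actions[0]

# Toy internal model: a walk on a line, with dense reward for being near position 3.
def model_step(state, action):
    new_state = state + action
    return new_state, -abs(new_state - 3)

def policy(state):
    return random.choice([-1, +1])     # untrained stochastic policy
```

In the paper's framing the interesting question is when such rollouts are worth their time cost; here the agent always plans, whereas the trained agent learns to invoke rollouts only when the expected policy improvement justifies the thinking time.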
Co-development of accommodation and vergence and quantification of their interaction
Bernstein Conference 2024
Cellular action potential generation: a key player in setting the network state
Bernstein Conference 2024
Circuit Mechanisms for Dynamic Social Interactions
Bernstein Conference 2024
Information transfer during dyadic interactions in perceptual decision-making.
Bernstein Conference 2024
Modulation of Spike-timing-dependent Plasticity via the Interaction of Astrocyte-regulated D-serine with NMDA Receptors
Bernstein Conference 2024
Role of local Kenyon cell – Kenyon Cell interactions in the γ lobe of Drosophila melanogaster for specificity in olfactory learning
Bernstein Conference 2024
Semantic Embodiment: Decoding Action Words through Topographic Neuronal Representation with Brain-Constrained Network
Bernstein Conference 2024
Timing and transmission: the role of axonal action potential propagation speed in the synchronization of foveal vision
Bernstein Conference 2024
Uncovering neural circuit’s motifs and animal states using higher-order interactions
Bernstein Conference 2024
Action recognition best explains neural activity in cuneate nucleus
COSYNE 2022
AutSim: Principled, data driven model development and abstraction for signaling in synaptic protein synthesis in Fragile X Syndrome (FXS) and healthy control.
COSYNE 2022
Correlation-based motion detectors in olfaction enable turbulent plume navigation
COSYNE 2022
Dynamic and structured action bias masks learned task contingencies early in learning
COSYNE 2022
Electrical but not optogenetic stimulation drives nonlinear contraction of neural states
COSYNE 2022
Emergence of modular patterned activity in developing cortex through intracortical network interactions
COSYNE 2022
Fitting recurrent spiking network models to study the interaction between cortical areas
COSYNE 2022
Gaussian Partial Information Decomposition: Quantifying Inter-areal Interactions in High-Dimensional Neural Data
COSYNE 2022
Hierarchical interaction between memory units with distinct dynamics enables higher-order learning
COSYNE 2022
Indirect-projecting striatal neurons constrain timed action via ‘ramping’ activity.
COSYNE 2022
Long-term consequences of actions affect human exploration in structured environments
COSYNE 2022
Mice identify subgoal locations through an action-driven mapping process
COSYNE 2022
Multiscale Hierarchical Modeling Framework For Fully Mapping a Social Interaction
COSYNE 2022
Regionally distinct striatal circuits support broadly opponent aspects of action suppression and production
COSYNE 2022
'Silent' olfactory bulb mitral cells emerge from common feature subtraction.
COSYNE 2022
Anisotropy in visual crowding is reflected in inter-laminar interactions of macaque V1
COSYNE 2023
Brain-wide, specialized and state-dependent cortical encoding of reward, value and action switching during reversal learning
COSYNE 2023
A causal inference model of spike train interactions in fast response regimes
COSYNE 2023
Computational and behavioral mechanisms underlying selecting, stopping, and switching of actions
COSYNE 2023
Dissecting multi-population interactions across cortical areas and layers
COSYNE 2023
Dissection of inter-area interactions of motor circuits
COSYNE 2023
Adversarial-inspired autoencoder framework for salient sensory feature extraction
Bernstein Conference 2024