Brain Regions
Single-neuron correlates of perception and memory in the human medial temporal lobe
The human medial temporal lobe contains neurons that respond selectively to the semantic contents of a presented stimulus. These "concept cells" may respond to very different pictures of a given person and even to their written or spoken name. Their response latency is far longer than necessary for object recognition, they follow subjective, conscious perception, and they are found in brain regions that are crucial for declarative memory formation. It has thus been hypothesized that they may represent the semantic "building blocks" of episodic memories. In this talk I will present data from single unit recordings in the hippocampus, entorhinal cortex, parahippocampal cortex, and amygdala during paradigms involving object recognition and conscious perception as well as encoding of episodic memories in order to characterize the role of concept cells in these cognitive functions.
Analyzing Network-Level Brain Processing and Plasticity Using Molecular Neuroimaging
Behavior and cognition depend on the integrated action of neural structures and populations distributed throughout the brain. We recently developed a set of molecular imaging tools that enable multiregional processing and plasticity in neural networks to be studied at a brain-wide scale in rodents and nonhuman primates. Here we will describe how a novel genetically encoded activity reporter enables information flow in virally labeled neural circuitry to be monitored by fMRI. Using the reporter to perform functional imaging of synaptically defined neural populations in the rat somatosensory system, we show how activity is transformed within brain regions to yield characteristics specific to distinct output projections. We also show how this approach enables regional activity to be modeled in terms of inputs, in a paradigm that we are extending to address circuit-level origins of functional specialization in marmoset brains. In the second part of the talk, we will discuss how another genetic tool for MRI enables systematic studies of the relationship between anatomical and functional connectivity in the mouse brain. We show that variations in physical and functional connectivity can be dissociated both across individual subjects and over experience. We also use the tool to examine brain-wide relationships between plasticity and activity during an opioid treatment. This work demonstrates the possibility of studying diverse brain-wide processing phenomena using molecular neuroimaging.
How do we sleep?
There is no consensus on whether sleep is for the brain, the body, or both. Yet the difference in how we feel after a night of disrupted sleep versus a good night of continuous sleep is striking. Understanding how and why we sleep will likely give insights into many aspects of health. In this talk I will outline our recent work on how the prefrontal cortex can signal to the hypothalamus to regulate sleep preparatory behaviours and sleep itself, and how other brain regions, including the ventral tegmental area, respond to psychosocial stress to induce beneficial sleep. I will also outline our work examining the function of the glymphatic system, and whether clearance of molecules from the brain is enhanced during sleep or wakefulness.
Brain-Wide Compositionality and Learning Dynamics in Biological Agents
Biological agents continually reconcile the internal states of their brain circuits with incoming sensory and environmental evidence to evaluate when and how to act. The brains of biological agents, including animals and humans, exploit many evolutionary innovations, chiefly modularity—observable at the level of anatomically-defined brain regions, cortical layers, and cell types among others—that can be repurposed in a compositional manner to endow the animal with a highly flexible behavioral repertoire. Accordingly, their behaviors show their own modularity, yet such behavioral modules seldom correspond directly to traditional notions of modularity in brains. It remains unclear how to link neural and behavioral modularity in a compositional manner. We propose a comprehensive framework—compositional modes—to identify overarching compositionality spanning specialized submodules, such as brain regions. Our framework directly links the behavioral repertoire with distributed patterns of population activity, brain-wide, at multiple concurrent spatial and temporal scales. Using whole-brain recordings of zebrafish brains, we introduce an unsupervised pipeline based on neural network models, constrained by biological data, to reveal highly conserved compositional modes across individuals despite the naturalistic (spontaneous or task-independent) nature of their behaviors. These modes provided a scaffolding for other modes that account for the idiosyncratic behavior of each fish. We then demonstrate experimentally that compositional modes can be manipulated in a consistent manner by behavioral and pharmacological perturbations. Our results demonstrate that even natural behavior in different individuals can be decomposed and understood using a relatively small number of neurobehavioral modules—the compositional modes—and elucidate a compositional neural basis of behavior. 
This approach aligns with recent progress in understanding how reasoning capabilities and internal representational structures develop over the course of learning or training, offering insights into the modularity and flexibility in artificial and biological agents.
Trends in NeuroAI - SwiFT: Swin 4D fMRI Transformer
Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri). Title: SwiFT: Swin 4D fMRI Transformer Abstract: Modeling spatiotemporal brain dynamics from high-dimensional data, such as functional Magnetic Resonance Imaging (fMRI), is a formidable task in neuroscience. Existing approaches for fMRI analysis utilize hand-crafted features, but the process of feature extraction risks losing essential information in fMRI scans. To address this challenge, we present SwiFT (Swin 4D fMRI Transformer), a Swin Transformer architecture that can learn brain dynamics directly from fMRI volumes in a memory- and computation-efficient manner. SwiFT achieves this by implementing a 4D window multi-head self-attention mechanism and absolute positional embeddings. We evaluate SwiFT using multiple large-scale resting-state fMRI datasets, including the Human Connectome Project (HCP), Adolescent Brain Cognitive Development (ABCD), and UK Biobank (UKB) datasets, to predict sex, age, and cognitive intelligence. Our experimental outcomes reveal that SwiFT consistently outperforms recent state-of-the-art models. Furthermore, by leveraging its end-to-end learning capability, we show that contrastive loss-based self-supervised pre-training of SwiFT can enhance performance on downstream tasks. Additionally, we employ an explainable AI method to identify the brain regions associated with sex classification. To our knowledge, SwiFT is the first Swin Transformer architecture to process 4D spatiotemporal functional brain data in an end-to-end fashion. Our work holds substantial potential in facilitating scalable learning of functional brain imaging in neuroscience research by reducing the hurdles associated with applying Transformer models to high-dimensional fMRI. Speaker: Junbeom Kwon is a research associate working in Prof. Jiook Cha’s lab at Seoul National University. Paper link: https://arxiv.org/abs/2307.05916
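The core trick SwiFT borrows from Swin Transformers, extended to 4D, is restricting self-attention to local, non-overlapping windows over (time, height, width, depth). A minimal sketch of that window partitioning, assuming a NumPy array layout of (T, H, W, D, C) with dimensions divisible by the window size (the actual SwiFT implementation differs in detail):

```python
import numpy as np

def window_partition_4d(x, window):
    """Partition a 4D fMRI volume sequence into non-overlapping 4D windows.

    x: array of shape (T, H, W, D, C): time, height, width, depth, channels.
    window: tuple (wt, wh, ww, wd); each dimension must divide evenly.
    Returns an array of shape (num_windows, wt*wh*ww*wd, C): one token
    sequence per window, ready for multi-head self-attention within windows.
    """
    T, H, W, D, C = x.shape
    wt, wh, ww, wd = window
    # split each spatiotemporal axis into (num_windows_along_axis, window_size)
    x = x.reshape(T // wt, wt, H // wh, wh, W // ww, ww, D // wd, wd, C)
    # group the window-index axes first, then the within-window axes
    x = x.transpose(0, 2, 4, 6, 1, 3, 5, 7, 8)
    return x.reshape(-1, wt * wh * ww * wd, C)
```

Attention is then computed independently within each window's token sequence, keeping the cost linear in volume size rather than quadratic over all voxels and timepoints.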
Virtual Brain Twins for Brain Medicine and Epilepsy
Over the past decade we have demonstrated that the fusion of subject-specific structural information of the human brain with mathematical dynamic models allows building biologically realistic brain network models, which have a predictive value, beyond the explanatory power of each approach independently. The network nodes hold neural population models, which are derived using mean field techniques from statistical physics expressing ensemble activity via collective variables. Our hybrid approach fuses data-driven with forward-modeling-based techniques and has been successfully applied to explain healthy brain function and clinical translation including aging, stroke and epilepsy. Here we illustrate the workflow along the example of epilepsy: we reconstruct personalized connectivity matrices of human epileptic patients using diffusion tensor imaging (DTI). Subsets of brain regions generating seizures in patients with refractory partial epilepsy are referred to as the epileptogenic zone (EZ). During a seizure, paroxysmal activity is not restricted to the EZ, but may recruit other healthy brain regions and propagate activity through large brain networks. The identification of the EZ is crucial for the success of neurosurgery and presents one of the historically difficult questions in clinical neuroscience. The application of the latest techniques in Bayesian inference and model inversion, in particular Hamiltonian Monte Carlo, allows the estimation of the EZ, including estimates of confidence and diagnostics of performance of the inference. The example of epilepsy nicely underscores the predictive value of personalized large-scale brain network models. The workflow of end-to-end modeling is an integral part of the European neuroinformatics platform EBRAINS and enables neuroscientists worldwide to build and estimate personalized virtual brains.
Doubting the neurofeedback double-blind: do participants have residual awareness of experimental purposes in neurofeedback studies?
Neurofeedback provides a feedback display linked to ongoing brain activity, allowing self-regulation of neural activity in specific brain regions associated with certain cognitive functions, and is considered a promising tool for clinical interventions. Recent reviews of neurofeedback have stressed the importance of applying a “double-blind” experimental design in which, critically, the patient is unaware of the neurofeedback treatment condition. An important question then becomes: is a double-blind even possible, or are subjects aware of the purposes of the neurofeedback experiment? This question is related to the issue of how we assess awareness, or the absence of awareness, of certain information in human subjects. Fortunately, methods have been developed that employ neurofeedback implicitly, where the subject is claimed to have no awareness of experimental purposes while performing the neurofeedback. Implicit neurofeedback is intriguing and controversial because it runs counter to the first neurofeedback study, which showed a link between awareness of being in a certain brain state and control of the neurofeedback-derived brain activity. Claiming that humans are unaware of a specific type of mental content is a notoriously difficult endeavor. For instance, phenomena long held to be wholly unconscious, such as dreams or subliminal perception, have been overturned by more sensitive measures showing that degrees of awareness can be detected. In this talk, I will critically examine the claim that we can know for certain that a neurofeedback experiment was performed in an unconscious manner. I will present evidence that in certain neurofeedback experiments, such as manipulations of attention, participants display residual degrees of awareness of experimental contingencies to alter their cognition.
Off-policy learning in the basal ganglia
I will discuss work with Jack Lindsey modeling reinforcement learning for action selection in the basal ganglia. I will argue that the presence of multiple brain regions, in addition to the basal ganglia, that contribute to motor control motivates the need for an off-policy basal ganglia learning algorithm. I will then describe a biological implementation of such an algorithm that predicts tuning of dopamine neurons to a quantity we call "action surprise," in addition to reward prediction error. In the same model, an implementation of learning from a motor efference copy also predicts a novel solution to the problem of multiplexing feedforward and efference-related striatal activity. The solution exploits the difference between D1 and D2-expressing medium spiny neurons and leads to predictions about striatal dynamics.
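For readers unfamiliar with the term: an off-policy algorithm learns the value of a target policy from actions generated by a different behavioral controller. Tabular Q-learning is the textbook example, sketched below only to illustrate the term (it is not the speaker's proposed basal ganglia model):

```python
def q_learning_step(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """One tabular Q-learning update. The target uses the greedy value
    max_a' Q(s', a') regardless of which policy actually produced the
    action -- the off-policy property the talk argues the basal ganglia
    needs when other motor systems also drive behavior.

    Q: dict mapping state -> {action: value}.
    """
    target = r + gamma * max(Q[s_next].values())  # greedy bootstrap target
    Q[s][a] += alpha * (target - Q[s][a])         # move toward the target
    return Q
```

Because the update is valid even for actions chosen by another controller, the learner can improve its policy while a separate system (here, other motor regions) is in charge of behavior.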
Nature over Nurture: Functional neuronal circuits emerge in the absence of developmental activity
During development, the complex neuronal circuitry of the brain arises from limited information contained in the genome. After the genetic code instructs the birth of neurons, the emergence of brain regions, and the formation of axon tracts, it is believed that neuronal activity plays a critical role in shaping circuits for behavior. Current AI technologies are modeled after the same principle: connections in an initial weight matrix are pruned and strengthened by activity-dependent signals until the network can sufficiently generalize a set of inputs into outputs. Here, we challenge these learning-dominated assumptions by quantifying the contribution of neuronal activity to the development of visually guided swimming behavior in larval zebrafish. Intriguingly, dark-rearing zebrafish revealed that visual experience has no effect on the emergence of the optomotor response (OMR). We then raised animals under conditions where neuronal activity was pharmacologically silenced from organogenesis onward using the sodium-channel blocker tricaine. Strikingly, after washout of the anesthetic, animals performed swim bouts and responded to visual stimuli with 75% accuracy in the OMR paradigm. After shorter periods of silenced activity OMR performance stayed above 90% accuracy, calling into question the importance and impact of classical critical periods for visual development. Detailed quantification of the emergence of functional circuit properties by brain-wide imaging experiments confirmed that neuronal circuits came ‘online’ fully tuned and without the requirement for activity-dependent plasticity. Thus, contrary to what you learned on your mother's knee, complex sensory guided behaviors can be wired up innately by activity-independent developmental mechanisms.
Self-perception: mechanosensation and beyond
Brain-organ communications play a crucial role in maintaining the body's physiological and psychological homeostasis and are controlled by complex neural and hormonal systems, including the internal mechanosensory organs. However, progress has been slow due to technical hurdles: the sensory neurons are deeply buried inside the body and are not readily accessible for direct observation; the projection patterns from different organs or body parts are complex rather than converging onto dedicated brain regions; and the coding principles cannot be directly adapted from those learned from conventional sensory pathways. Our lab applies the pipeline of "biophysics of receptors → cell biology of neurons → functionality of neural circuits → animal behaviors" to explore the molecular and neural mechanisms of self-perception. In the lab, we mainly focus on the following three questions: (1) the molecular and cellular basis of proprioception and interoception; (2) the circuit mechanisms of sensory coding and the integration of internal and external information; and (3) the function of interoception in regulating behavioral homeostasis.
A specialized role for entorhinal attractor dynamics in combining path integration and landmarks during navigation
During navigation, animals estimate their position using path integration and landmarks. In a series of two studies, we used virtual reality and electrophysiology to dissect how these inputs combine to generate the brain’s spatial representations. In the first study (Campbell et al., 2018), we focused on the medial entorhinal cortex (MEC) and its set of navigationally-relevant cell types, including grid cells, border cells, and speed cells. We discovered that attractor dynamics could explain an array of initially puzzling MEC responses to virtual reality manipulations. This theoretical framework successfully predicted both MEC grid cell responses to additional virtual reality manipulations, as well as mouse behavior in a virtual path integration task. In the second study (Campbell*, Attinger* et al., 2021), we asked whether these principles generalize to other navigationally-relevant brain regions. We used Neuropixels probes to record thousands of neurons from MEC, primary visual cortex (V1), and retrosplenial cortex (RSC). In contrast to the prevailing view that “everything is everywhere all at once,” we identified a unique population of MEC neurons, overlapping with grid cells, that became active with striking spatial periodicity while head-fixed mice ran on a treadmill in darkness. These neurons exhibited unique cue-integration properties compared to other MEC, V1, or RSC neurons: they remapped more readily in response to conflicts between path integration and landmarks; they coded position prospectively as opposed to retrospectively; they upweighted path integration relative to landmarks in conditions of low visual contrast; and as a population, they exhibited a lower-dimensional activity structure. Based on these results, our current view is that MEC attractor dynamics play a privileged role in resolving conflicts between path integration and landmarks during navigation. 
Future work should include carefully designed causal manipulations to rigorously test this idea, and expand the theoretical framework to incorporate notions of uncertainty and optimality.
The future of neuropsychology will be open, transdiagnostic, and FAIR - why it matters and how we can get there
Cognitive neuroscience has witnessed great progress since modern neuroimaging embraced an open science framework, with the adoption of shared principles (Wilkinson et al., 2016), standards (Gorgolewski et al., 2016), and ontologies (Poldrack et al., 2011), as well as practices of meta-analysis (Yarkoni et al., 2011; Dockès et al., 2020) and data sharing (Gorgolewski et al., 2015). However, while functional neuroimaging data provide correlational maps between cognitive functions and activated brain regions, their usefulness in determining causal links between specific brain regions and given behaviors or functions is disputed (Weber et al., 2010; Siddiqi et al., 2022). By contrast, neuropsychological data enable causal inference, highlighting critical neural substrates and opening a unique window into the inner workings of the brain (Price, 2018). Unfortunately, the adoption of Open Science practices in clinical settings is hampered by several ethical, technical, economic, and political barriers, and as a result, open platforms enabling access to and sharing of clinical (meta)data are scarce (e.g., Larivière et al., 2021). We are working with clinicians, neuroimagers, and software developers to develop an open-source platform for the storage, sharing, synthesis, and meta-analysis of human clinical data in the service of the clinical and cognitive neuroscience community, so that the future of neuropsychology can be transdiagnostic, open, and FAIR. We call it neurocausal (https://neurocausal.github.io).
Representations of people in the brain
Faces and voices convey much of the non-verbal information that we use when communicating with other people. We look at faces and listen to voices to recognize others, understand how they are feeling, and decide how to act. Recent research in my lab aims to investigate whether there are similar coding mechanisms to represent faces and voices, and whether there are brain regions that integrate information across the visual and auditory modalities. In the first part of my talk, I will focus on an fMRI study in which we found that a region of the posterior STS exhibits modality-general representations of familiar people that can be similarly driven by someone’s face and their voice (Tsantani et al. 2019). In the second part of the talk, I will describe our recent attempts to shed light on the type of information that is represented in different face-responsive brain regions (Tsantani et al., 2021).
Trial by trial predictions of subjective time from human brain activity
Our perception of time isn’t like a clock; it varies depending on other aspects of experience, such as what we see and hear in that moment. However, in everyday life, the properties of these simple features can change frequently, presenting a challenge to understanding real-world time perception based on simple lab experiments. We developed a computational model of human time perception based on tracking changes in neural activity across brain regions involved in sensory processing, using fMRI. By measuring changes in brain activity patterns across these regions, our approach accommodates the different and changing feature combinations present in natural scenarios, such as walking on a busy street. Our model reproduces people’s duration reports for natural videos (up to almost half a minute long) and, most importantly, predicts whether a person reports a scene as relatively shorter or longer: the biases in time perception that reflect how the natural experience of time deviates from clock time.
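The underlying intuition, that subjective duration amounts to a tally of salient changes in sensory-region activity patterns, can be sketched as follows, assuming activity arrives as a (time, units) array and using a hypothetical Euclidean change threshold (the published model is considerably richer):

```python
import numpy as np

def estimate_duration_units(activity, threshold):
    """Count salient changes in a population activity trace as units of
    subjective time. A change is registered whenever the Euclidean distance
    from the last registered pattern exceeds `threshold` (a hypothetical
    free parameter in this toy sketch).

    activity: array of shape (T, N): T timepoints, N units/voxels.
    """
    count = 0
    ref = activity[0]
    for pattern in activity[1:]:
        if np.linalg.norm(pattern - ref) > threshold:
            count += 1        # register a salient change...
            ref = pattern     # ...and reset the reference pattern
    return count
```

On this view, a busy street drives many pattern changes and so feels longer than a quiet room of the same clock duration.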
An open-source miniature two-photon microscope for large-scale calcium imaging in freely moving mice
Because benchtop imaging is unsuitable for tasks that require unrestrained movement, investigators have tried, for almost two decades, to develop miniature 2P microscopes (2P miniscopes) that can be carried on the head of freely moving animals. In this talk, I will first briefly review the development history of this technique, and then report our latest progress on developing the new generation of 2P miniscope, MINI2P, which overcomes the limits of previous versions by both meeting requirements for fatigue-free exploratory behavior during extended recording periods and satisfying demands for increasing the cell yield by an order of magnitude, to thousands of neurons. The performance and reliability of MINI2P are validated by recordings of spatially tuned neurons in three brain regions and in three behavioral assays. All information about MINI2P is open access, with instruction videos, code, and manuals on public repositories, and workshops will be organized to help new users get started. MINI2P permits large-scale and high-resolution calcium imaging in freely moving mice, and opens the door to investigating brain functions during unconstrained natural behaviors.
Flexible multitask computation in recurrent networks utilizes shared dynamical motifs
Flexible computation is a hallmark of intelligent behavior. Yet, little is known about how neural networks contextually reconfigure for different computations. Humans are able to perform a new task without extensive training, presumably through the composition of elementary processes that were previously learned. Cognitive scientists have long hypothesized the possibility of a compositional neural code, where complex neural computations are made up of constituent components; however, the neural substrate underlying this structure remains elusive in biological and artificial neural networks. Here we identified an algorithmic neural substrate for compositional computation through the study of multitasking artificial recurrent neural networks. Dynamical systems analyses of networks revealed learned computational strategies that mirrored the modular subtask structure of the task-set used for training. Dynamical motifs such as attractors, decision boundaries and rotations were reused across different task computations. For example, tasks that required memory of a continuous circular variable repurposed the same ring attractor. We show that dynamical motifs are implemented by clusters of units and are reused across different contexts, allowing for flexibility and generalization of previously learned computation. Lesioning these clusters resulted in modular effects on network performance: a lesion that destroyed one dynamical motif only minimally perturbed the structure of other dynamical motifs. Finally, modular dynamical motifs could be reconfigured for fast transfer learning. After slow initial learning of dynamical motifs, a subsequent faster stage of learning reconfigured motifs to perform novel tasks. This work contributes to a more fundamental understanding of compositional computation underlying flexible general intelligence in neural systems. 
We present a conceptual framework that establishes dynamical motifs as a fundamental unit of computation, intermediate between the neuron and the network. As more whole brain imaging studies record neural activity from multiple specialized systems simultaneously, the framework of dynamical motifs will guide questions about specialization and generalization across brain regions.
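The dynamical-systems analyses mentioned above typically begin by locating approximate fixed points of the trained network, around which motifs such as attractors and decision boundaries are characterized. A minimal sketch for a vanilla RNN map F(x) = tanh(Wx + b), minimizing the speed q(x) = ½‖F(x) − x‖² by gradient descent (a standard analysis in this literature, not the paper's exact pipeline):

```python
import numpy as np

def find_fixed_point(W, b, x0, lr=0.05, steps=5000):
    """Approximate a fixed point of the RNN map F(x) = tanh(W x + b) by
    gradient descent on the speed q(x) = 0.5 * ||F(x) - x||^2."""
    x = x0.astype(float).copy()
    for _ in range(steps):
        f = np.tanh(W @ x + b)
        r = f - x                       # residual: zero at a fixed point
        J = (1.0 - f ** 2)[:, None] * W # Jacobian of F at x
        x -= lr * (J - np.eye(len(x))).T @ r  # gradient of q w.r.t. x
    return x
```

Linearizing F around each fixed point found this way (via the Jacobian J) is what reveals whether the local motif is an attractor, a saddle-like decision boundary, or a rotation.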
Modularity and Robustness of Frontal Cortical Networks
Nuo Li (Baylor College of Medicine, USA) shares novel insights into coordinated interhemispheric large-scale neural network activity underpinning short-term memory in mice. Relevant techniques covered include: simultaneous multi-regional recordings using multiple 64-channel H probes during head-fixed behavior in mice; simultaneous optogenetics and population recording; and analysis of population recordings to infer interactions between brain regions. Reference: Chen G, Kang B, Lindsey J, Druckmann S, Li N (2021). Modularity and robustness of frontal cortex networks. Cell, 184(14):3717-3730.
How are nervous systems remodeled in complex metazoans?
Early in development the nervous system is constructed with far too many neurons that make an excessive number of synaptic connections. Later, a wave of neuronal remodeling radically reshapes nervous system wiring and cell numbers through the selective elimination of excess synapses, axons and dendrites, and even whole neurons. This remodeling is widespread across the nervous system, extensive in terms of how much individual brain regions can change (e.g. in some cases 50% of neurons integrated into a brain circuit are eliminated), and thought to be essential for optimizing nervous system function. Perturbations of neuronal remodeling are thought to underlie devastating neurodevelopmental disorders including autism spectrum disorder, schizophrenia, and epilepsy. This seminar will discuss our efforts to use the relatively simple nervous system of Drosophila to understand the mechanistic basis by which cells, or parts of cells, are specified for removal and eliminated from the nervous system.
Brain and Mind: Who is the Puppet and who the Puppeteer?
If the mind controls the brain, then there is free will and its corollaries, dignity and responsibility. You are king in your skull-sized kingdom and the architect of your destiny. If, on the other hand, the brain controls the mind, an incendiary conclusion follows: there can be no free will, no praise, no punishment and no purgatory. In this webinar, Professor George Paxinos will discuss his highly respected work on the construction of human and experimental animal brain atlases. He has discovered 94 brain regions and 64 homologies, and has published 58 books. His first book, The Rat Brain in Stereotaxic Coordinates, is the most cited publication in neuroscience and, for three decades, the third most cited book in science. Professor Paxinos will also present his recently published novel, A River Divided, which was 21 years in the making. Neuroscience principles were used in the formation of its characters, particularly in relation to the mind, soul, free will and consciousness. Environmental issues are at the heart of the novel, including the question of whether the brain is the right ‘size’ for survival. Professor Paxinos studied at Berkeley, McGill and Yale and is now Scientia Professor of Medical Sciences at Neuroscience Research Australia and The University of New South Wales in Sydney.
The functional connectome across temporal scales
The view of human brain function has drastically shifted over the last decade, owing to the observation that the majority of brain activity is intrinsic rather than driven by external stimuli or cognitive demands. Specifically, all brain regions continuously communicate in spatiotemporally organized patterns that constitute the functional connectome, with consequences for cognition and behavior. In this talk, I will argue that another shift is underway, driven by new insights from synergistic interrogation of the functional connectome using different acquisition methods. The human functional connectome is typically investigated with functional magnetic resonance imaging (fMRI) that relies on the indirect hemodynamic signal, thereby emphasizing very slow connectivity across brain regions. Conversely, more recent methodological advances demonstrate that fast connectivity within the whole-brain connectome can be studied with real-time methods such as electroencephalography (EEG). Our findings show that combining fMRI with scalp or intracranial EEG in humans, especially when recorded concurrently, paints a rich picture of neural communication across the connectome. Specifically, the connectome comprises both fast, oscillation-based connectivity observable with EEG, as well as extremely slow processes best captured by fMRI. While the fast and slow processes share an important degree of spatial organization, these processes unfold in a temporally independent manner. Our observations suggest that fMRI and EEG may be envisaged as capturing distinct aspects of functional connectivity, rather than intermodal measurements of the same phenomenon. Infraslow fluctuation-based and rapid oscillation-based connectivity of various frequency bands constitute multiple dynamic trajectories through a shared state space of discrete connectome configurations. 
The multitude of flexible trajectories may concurrently enable functional connectivity across multiple independent sets of distributed brain regions.
Orbitofrontal cortex and the integrative approach to functional neuroanatomy
The project of functional neuroanatomy typically considers single brain areas as the core functional unit of the brain. Functional neuroanatomists typically use specialized tasks that are designed to isolate hypothesized functions from other cognitive processes. Our lab takes a broader view; specifically, we consider brain regions as parts of larger circuits and we take cognitive processes as part of more complex behavioral repertoires. In my talk, I will discuss the ramifications of this perspective for thinking about the role of the orbitofrontal cortex. I will discuss results of recent experiments from my lab that tackle the question of OFC function within the context of larger brain networks and in freely moving foraging tasks. I will argue that this perspective challenges conventional accounts of the role of OFC and invites new ones. I will conclude by speculating on implications for the practice of functional neuroanatomy.
Integrators in short- and long-term memory
The accumulation and storage of information in memory is a fundamental computation underlying animal behavior. In many brain regions and task paradigms, ranging from motor control to navigation to decision-making, such accumulation is accomplished through neural integrator circuits that enable external inputs to move a system’s population-wide patterns of neural activity along a continuous attractor. In the first portion of the talk, I will discuss our efforts to dissect the circuit mechanisms underlying a neural integrator from a rich array of anatomical, physiological, and perturbation experiments. In the second portion of the talk, I will show how the accumulation and storage of information in long-term memory may also be described by attractor dynamics, but now within the space of synaptic weights rather than neural activity. Altogether, this work suggests a conceptual unification of seemingly distinct short- and long-term memory processes.
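The integrator concept in the first part of the talk can be sketched in one dimension: with zero leak, the system holds whatever value it has accumulated after the input ends, which is the hallmark of continuous-attractor storage (a toy sketch, not the circuit model discussed in the talk):

```python
import numpy as np

def simulate_integrator(inputs, leak=0.0, dt=1.0):
    """Simulate a 1-D neural integrator: dx/dt = -leak * x + u(t).

    With leak == 0 the circuit is a perfect integrator (a line attractor):
    activity persists unchanged once input stops, storing the accumulated
    evidence. A nonzero leak makes the stored value decay over time.
    """
    x = 0.0
    trace = []
    for u in inputs:
        x += dt * (-leak * x + u)  # forward-Euler step of the dynamics
        trace.append(x)
    return np.array(trace)
```

The second part of the talk transposes this picture from neural activity space to synaptic weight space, with long-term memories as attractor states of the weights.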
Functional ultrasound imaging during behavior
The dream of a systems neuroscientist is to unravel the neural mechanisms that give rise to behavior. It is increasingly appreciated that behavior involves concerted, distributed activity across multiple brain regions, so a focus on one or a few brain areas might hinder our understanding. There have been several technological advances in this domain. Functional ultrasound imaging (fUSi) is an emerging technique that allows us to measure neural activity from medial frontal regions down to subcortical structures, up to a depth of 20 mm. It is a method for imaging transient changes in cerebral blood volume (CBV), which are proportional to changes in neural activity. It has excellent spatial resolution (~100 μm × 100 μm × 400 μm), and its temporal resolution can reach 100 milliseconds. In this talk, I will present its use in two model systems: marmoset monkeys and rats. In marmoset monkeys, we used it to delineate a social–vocal network involved in vocal communication, while in rats we used it to gain insight into the brain-wide networks involved in evidence-accumulation-based decision-making. fUSi has the potential to provide unprecedented access to brain-wide dynamics in freely moving animals performing complex behavioral tasks.
A Network for Computing Value Equilibrium in the Human Medial Prefrontal Cortex
Humans and other animals make decisions in order to satisfy their goals. However, it remains unknown how neural circuits compute which of multiple possible goals should be pursued (e.g., when balancing hunger and thirst) and how to combine these signals with estimates of available reward alternatives. Here, humans undergoing fMRI accumulated two distinct assets over a sequence of trials. Financial outcomes depended on the minimum cumulative total of either asset, creating a need to maintain “value equilibrium” by redressing any imbalance among the assets. Blood-oxygen-level-dependent (BOLD) signals in the rostral anterior cingulate cortex (rACC) tracked the level of imbalance among goals, whereas the ventromedial prefrontal cortex (vmPFC) signaled the level of redress incurred by a choice rather than the overall amount received. These results suggest that a network of medial frontal brain regions computes a value signal that maintains value equilibrium among internal goals.
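The task logic can be sketched in a few lines. This is an illustrative simulation under assumed parameters, not the actual experimental design; the simple "top up the scarcer asset" policy and all numbers are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Cumulative totals of two hypothetical assets.
assets = np.zeros(2)
imbalances, redresses = [], []

for _ in range(50):
    offers = rng.uniform(0.0, 1.0, size=2)   # reward available for each asset this trial
    # An equilibrium-seeking policy: always top up the scarcer asset.
    choice = int(np.argmin(assets))
    before = abs(assets[0] - assets[1])      # imbalance before the choice (rACC-like signal)
    assets[choice] += offers[choice]
    after = abs(assets[0] - assets[1])
    imbalances.append(before)
    redresses.append(before - after)         # how much the choice reduced the imbalance

payoff = assets.min()   # the outcome depends on the minimum cumulative asset
```

The key property is that the payoff rewards balance, not total accumulation, so the imbalance signal and the per-choice redress are the decision-relevant variables.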
Inhibitory connectivity and computations in olfaction
We use the olfactory system and forebrain of (adult) zebrafish as a model to analyze how relevant information is extracted from sensory inputs, how information is stored in memory circuits, and how sensory inputs inform behavior. A series of recent findings provides evidence that inhibition has not only homeostatic functions in neuronal circuits but makes highly specific, instructive contributions to behaviorally relevant computations in different brain regions. These observations imply that the connectivity among excitatory and inhibitory neurons exhibits essential higher-order structure that cannot be determined without dense network reconstructions. To analyze such connectivity we developed an approach referred to as “dynamical connectomics” that combines 2-photon calcium imaging of neuronal population activity with EM-based dense neuronal circuit reconstruction. In the olfactory bulb, this approach identified specific connectivity among co-tuned cohorts of excitatory and inhibitory neurons that can account for the decorrelation and normalization (“whitening”) of odor representations in this brain region. These results provide a mechanistic explanation for a fundamental neural computation that strictly requires specific network connectivity.
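The "whitening" computation referred to above can be illustrated with a generic ZCA-whitening sketch on synthetic data. This shows the statistical operation (decorrelation plus variance normalization) only, not the circuit mechanism identified in the olfactory bulb:

```python
import numpy as np

rng = np.random.default_rng(1)

# Correlated "odor representations": 200 samples of 5-neuron activity patterns.
mixing = rng.normal(size=(5, 5))
X = rng.normal(size=(200, 5)) @ mixing.T

def zca_whiten(X, eps=1e-8):
    """Decorrelate and normalize so the output covariance is ~identity."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / (len(Xc) - 1)
    vals, vecs = np.linalg.eigh(cov)
    W = vecs @ np.diag(1.0 / np.sqrt(vals + eps)) @ vecs.T
    return Xc @ W

Xw = zca_whiten(X)
cov_w = np.cov(Xw.T)
# After whitening, pairwise correlations vanish and variances are equalized,
# which is the decorrelation + normalization described for odor representations.
```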
Neural representations of space in the hippocampus of a food-caching bird
Spatial memory in vertebrates requires brain regions homologous to the mammalian hippocampus. Between vertebrate clades, however, these regions are anatomically distinct and appear to produce different spatial patterns of neural activity. We asked whether hippocampal activity is fundamentally different even between distant vertebrates that share a strong dependence on spatial memory. We studied tufted titmice – food-caching birds capable of remembering many concealed food locations. We found mammalian-like neural activity in the titmouse hippocampus, including sharp-wave ripples and anatomically organized place cells. In a non-food-caching bird species, spatial firing was less informative and was exhibited by fewer neurons. These findings suggest that hippocampal circuit mechanisms are similar between birds and mammals, but that the resulting patterns of activity may vary quantitatively with species-specific ethological needs.
Being awake while sleeping, being asleep while awake: consequences on cognition and consciousness
Sleep is classically presented as an all-or-nothing phenomenon. Yet, there is increasing evidence showing that sleep and wakefulness can actually intermingle and that wake-like and sleep-like activity can be observed concomitantly in different brain regions. I will here explore the implications of this conception of sleep as a local phenomenon for cognition and consciousness. In the first part of my presentation, I will show how local modulations of sleep depth during sleep could support the processing of sensory information by sleepers. I will also show how, under certain circumstances, sleepers can learn while sleeping, but also how they can forget. In the second part, I will show how the reverse phenomenon, sleep intrusions during waking, can explain modulations of attention. I will focus in particular on modulations of subjective experience and how the local sleep framework can inform our understanding of everyday phenomena such as mind wandering and mind blanking. Through this presentation and the exploration of both sleep and wakefulness, I will seek to connect changes in neurophysiology with changes in behaviour and subjective experience.
Neural mechanisms of altered states of consciousness under psychedelics
Interest in psychedelic compounds is growing due to their remarkable potential for understanding altered neural states and their breakthrough-therapy status for treating various psychiatric disorders. However, there are major knowledge gaps regarding how psychedelics affect the brain. The Computational Neuroscience Laboratory at the Turner Institute for Brain and Mental Health, Monash University, uses multimodal neuroimaging to test hypotheses about the brain’s functional reorganisation under psychedelics, informed by accounts of hierarchical predictive processing, using dynamic causal modelling (DCM). DCM is a generative modelling technique that allows one to infer the directed connectivity among brain regions from functional brain imaging measurements. In this webinar, Associate Professor Adeel Razi and PhD candidate Devon Stoliker will showcase a series of previous and new findings on how changes to synaptic mechanisms, under the control of serotonin receptors, across the brain hierarchy influence sensory and associative brain connectivity. Understanding these neural mechanisms of the subjective and therapeutic effects of psychedelics is critical for the rational development of novel treatments and for the design and success of future clinical trials. Associate Professor Adeel Razi is an NHMRC Investigator Fellow and CIFAR Azrieli Global Scholar at the Turner Institute for Brain and Mental Health, Monash University. He performs cross-disciplinary research combining engineering, physics, and machine learning. Devon Stoliker is a PhD candidate at the Turner Institute for Brain and Mental Health, Monash University. His interest in consciousness and psychiatry has led him to investigate the neural mechanisms of classic psychedelic effects in the brain.
Large-scale approaches for distributed circuits underlying visual decision-making
Mammalian vision and visually guided behavior rely on neurons distributed across diverse brain regions. In this talk I will describe our efforts to create tools that allow us to measure activity from these distributed circuits - Neuropixels probes for large-scale electrophysiology - and our findings from studies deploying these tools to study visual detection and discrimination in mice.
Removing information from working memory
Holding information in working memory is essential for cognition, but removing unwanted thoughts is equally important. There is great flexibility in how we can manipulate information in working memory, but the processes and consequences of these operations are poorly understood. In this talk I will discuss our recent findings using multivariate pattern analyses of fMRI brain data to demonstrate the successful removal of information from working memory using three different strategies: suppressing a specific thought, replacing a thought with a different one, and clearing the mind of all thought. These strategies are supported by distinct brain regions and have differential consequences on the encoding of new information. I will discuss implications of these results on theories of memory and I will highlight some new directions involving the use of real-time neurofeedback to investigate causal links between brain and behavior.
An Ideal Cortical Map: Towards a multi-dimensional account of cortical organisation
Von Economo stated that an "Ideal Cortical Map" would look very different to a parcellation. He suggested that an Ideal Cortical Map would involve the superimposition of many different cortical maps, with changes in each map shown at every single point. In line with this idea, I will discuss our recent research on identifying principal dimensions of cortical differentiation. In particular, I will highlight large-scale patterns of cytoarchitectural differentiation that can be observed using post mortem histology or in vivo microstructure-sensitive MRI. I aim to show how this approach provides a cohesive framework for understanding cortical organisation across multiple biological scales. This allows us to formulate new ideas on the organisation and function of brain regions (e.g., the mesiotemporal lobe), networks (e.g., the DMN), and the whole cortex.
In search of me: a theoretical approach to identify the neural substrate of consciousness
A major neuroscientific challenge is to identify the neural mechanisms that support consciousness. Though experimental studies have accumulated evidence about the location of the neural substrate of consciousness (NSC), we still lack a full understanding of why certain brain areas, but not others, can support consciousness. In this talk, I will give an overview of our approach, taking advantage of the theoretical framework provided by Integrated Information Theory (IIT). First, I will introduce results showing that a maximum of integrated information within the human brain matches our best evidence concerning the location of the NSC, supporting IIT’s prediction. Furthermore, I will discuss the possibility that the NSC can change its location and even split into two depending on task demands. Finally, based on graph-theoretical analyses, I will argue that the ability of different brain regions to contribute or not to consciousness depends on specific properties of their anatomical connectivity, which determine their ability to support high integrated information.
Introducing YAPiC: An Open Source tool for biologists to perform complex image segmentation with deep learning
Robust detection of biological structures such as neuronal dendrites in brightfield micrographs, tumor tissue in histological slides, or pathological brain regions in MRI scans is a fundamental task in bio-image analysis. Detecting such structures requires complex decision-making that is often impossible with current image analysis software, and it is therefore typically carried out by humans in a tedious, time-consuming manual procedure. Supervised pixel classification based on deep convolutional neural networks (DNNs) is currently emerging as the most promising technique for solving such complex region detection tasks. Here, a self-learning artificial neural network is trained with a small set of manually annotated images to eventually identify the trained structures in large image data sets in a fully automated way. While supervised pixel classification based on faster machine learning algorithms such as Random Forests is nowadays part of the standard toolbox of bio-image analysts (e.g., Ilastik), the emerging tools based on deep learning are still rarely used. There is also little experience in the community regarding how much training data must be collected to obtain a reasonable prediction result with deep learning-based approaches. Our software YAPiC (Yet Another Pixel Classifier) provides an easy-to-use Python and command-line interface and is designed purely for intuitive pixel classification of multidimensional images with DNNs. With the aim of integrating well into the current open source ecosystem, YAPiC utilizes the Ilastik user interface in combination with a high-performance GPU server for model training and prediction. Numerous research groups at our institute have already successfully applied YAPiC to a variety of tasks. In our experience, a surprisingly small amount of sparse label data is needed to train a sufficiently well-performing classifier for typical bioimaging applications.
Not least because of this, YAPiC has become the “standard weapon” for our core facility to detect objects in hard-to-segment images. We would like to present some use cases such as cell classification in high-content screening, tissue detection in histological slides, quantification of neural outgrowth in phase contrast time series, and actin filament detection in transmission electron microscopy.
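As a toy illustration of supervised pixel classification from sparse labels, here is a nearest-centroid stand-in for the DNNs YAPiC actually uses. The image, the scribble coordinates, and the intensity feature are all synthetic assumptions; a DNN would learn far richer per-pixel features:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic micrograph: a bright "structure" on a dark background, plus noise.
img = rng.normal(0.2, 0.05, size=(64, 64))
img[20:40, 20:40] += 0.6   # the structure to segment

# Sparse manual labels: a handful of scribbled pixels per class.
fg = [(25, 25), (30, 35), (38, 22)]   # foreground scribbles (inside the structure)
bg = [(5, 5), (60, 10), (10, 55)]     # background scribbles

# Per-pixel feature: raw intensity at the labeled locations.
fg_mean = np.mean([img[r, c] for r, c in fg])
bg_mean = np.mean([img[r, c] for r, c in bg])

# Nearest-centroid pixel classifier: label each pixel by the closer class mean.
mask = np.abs(img - fg_mean) < np.abs(img - bg_mean)

inside = mask[20:40, 20:40].mean()    # fraction of structure pixels labeled foreground
outside = mask[:10, :10].mean()       # fraction of background pixels labeled foreground
```

Even this crude classifier segments the synthetic structure from a few labeled pixels, which is the intuition behind training pixel classifiers on sparse annotations.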
A brain circuit for curiosity
Motivational drives are internal states that can differ even in similar interactions with external stimuli. Curiosity, the motivational drive for novelty seeking and investigating the surrounding environment, is as essential and intrinsic to survival as hunger. Curiosity, hunger, and appetitive aggression drive three different goal-directed behaviors—novelty seeking, food eating, and hunting—but these behaviors are composed of similar actions in animals. This similarity of actions has made it challenging to study novelty seeking and distinguish it from eating and hunting in nonarticulating animals. The brain mechanisms underlying this basic survival drive, curiosity, and novelty-seeking behavior have remained unclear. Despite well-developed techniques for studying mouse brain circuits, there are many controversial and conflicting results in the field of motivational behavior. This has left the functions of motivational brain regions such as the zona incerta (ZI) still uncertain. The lack of a transparent, nonreinforced, and easily replicable paradigm is one of the main causes of this uncertainty. We therefore chose a simple solution for our research: giving the mouse the freedom to choose what it wants—a double free-access choice. By examining mice in an experimental battery of object free-access double-choice (FADC) and social interaction tests—using optogenetics, chemogenetics, calcium fiber photometry, multichannel electrophysiological recording, and multicolor mRNA in situ hybridization—we uncovered a cell type–specific cortico-subcortical brain circuit underlying curiosity and novelty-seeking behavior. We found in mice that inhibitory neurons in the medial ZI (ZIm) are essential for the decision to investigate an object or a conspecific. These neurons receive excitatory input from the prelimbic cortex that signals the initiation of exploration. This signal is modulated in the ZIm by the level of investigatory motivation.
Increased activity in the ZIm instigates deep investigative action by inhibiting the periaqueductal gray region. A subpopulation of inhibitory ZIm neurons expressing tachykinin 1 (TAC1) modulates the investigatory behavior.
Using extra-hippocampal cognitive maps for goal-directed spatial navigation
Goal-directed navigation requires precise estimates of spatial relationships between current position and future goal, as well as planning of an associated route or action. While neurons in the hippocampal formation can represent the animal’s position and nearby trajectories, their role in determining the animal’s destination or action has been questioned. We thus hypothesize that brain regions outside the hippocampal formation may play complementary roles in navigation, particularly for guiding goal-directed behaviours based on the brain’s internal cognitive map. In this seminar, I will first describe a subpopulation of neurons in the retrosplenial cortex (RSC) that increase their firing when the animal approaches environmental boundaries, such as walls or edges. This boundary coding is independent of direct visual or tactile sensation but instead depends on inputs from the medial entorhinal cortex (MEC), which contains spatially tuned cells such as grid cells and border cells. However, unlike MEC border cells, we found that RSC border cells encode environmental boundaries in a self-centred, egocentric coordinate frame, which may allow an animal to efficiently avoid approaching walls or edges during navigation. I will then discuss whether the brain can possess a precise estimate of a remote target location during active environmental exploration. Such a spatial code has not been described in the hippocampal formation. However, we found that neurons in the rat orbitofrontal cortex (OFC) form spatial representations that persistently point to the animal’s subsequent goal destination throughout navigation. This destination coding emerges before navigation onset without direct sensory access to the distal goal, and is maintained via destination-specific neural ensemble dynamics.
These findings together suggest key roles for extra-hippocampal regions in spatial navigation, enabling animals to choose appropriate actions toward a desired destination by avoiding possible dangers.
Deciding to stop deciding: A cortical-subcortical circuit for forming and terminating a decision
The neurobiology of decision-making is informed by neurons capable of representing information over time scales of seconds. Such neurons were initially characterized in studies of spatial working memory, motor planning (e.g., Richard Andersen lab), and spatial attention. For decision-making, such neurons emit graded spike rates that represent the accumulated evidence for or against a choice. They establish the conduit between the formation of the decision and its completion, usually in the form of a commitment to an action, even if provisional. Indeed, many decisions appear to arise through an accumulation of noisy samples of evidence to a terminating threshold, or bound. Previous studies show that single neurons in the lateral intraparietal area (LIP) represent the accumulation of evidence when monkeys make decisions about the direction of random dot motion (RDM) and express their decision with a saccade to the neuron’s preferred target. The mechanism of termination (the bound) is elusive. LIP is interconnected with other brain regions that also display decision-related activity. Whether these areas play roles in the decision process that are similar to or fundamentally different from that of LIP is unclear. I will present new unpublished experiments that begin to resolve these issues by recording from populations of neurons simultaneously in LIP and one of its primary targets, the superior colliculus (SC), while monkeys make difficult perceptual decisions.
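The accumulation-to-bound mechanism can be sketched as a standard drift-diffusion simulation. This is an illustrative model with assumed parameters, not an analysis of the recorded data:

```python
import numpy as np

def ddm_trial(drift, bound=1.0, noise=1.0, dt=0.01, rng=None, max_t=10.0):
    """Accumulate noisy evidence until it reaches +bound or -bound.
    Returns (choice, decision_time); the bound terminates the decision."""
    if rng is None:
        rng = np.random.default_rng()
    x, t = 0.0, 0.0
    while abs(x) < bound and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
    return (1 if x >= bound else -1), t

rng = np.random.default_rng(3)
trials = [ddm_trial(drift=0.8, rng=rng) for _ in range(500)]
accuracy = np.mean([c == 1 for c, _ in trials])   # fraction of choices matching the drift's sign
mean_rt = np.mean([t for _, t in trials])
```

Raising the bound in this sketch trades speed for accuracy, which is the signature property that makes identifying the neural locus of termination interesting.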
Dr Lindsay reads from "Models of the Mind : How Physics, Engineering and Mathematics Shaped Our Understanding of the Brain" 📖
Though the term has many definitions, computational neuroscience is mainly about applying mathematics to the study of the brain. The brain—a jumble of all different kinds of neurons interconnected in countless ways that somehow produce consciousness—has been described as “the most complex object in the known universe”. Physicists for centuries have turned to mathematics to properly explain some of the most seemingly simple processes in the universe—how objects fall, how water flows, how the planets move. Equations have proved crucial in these endeavors because they capture relationships and make precise predictions possible. How could we expect to understand the most complex object in the universe without turning to mathematics? — The answer is we can’t, and that is why I wrote this book. While I’ve been studying and working in the field for over a decade, most people I encounter have no idea what “computational neuroscience” is or that it even exists. Yet a desire to understand how the brain works is a common and very human interest. I wrote this book to let people in on the ways in which the brain will ultimately be understood: through mathematical and computational theories. — At the same time, I know that both mathematics and brain science are on their own intimidating topics to the average reader and may seem downright prohibitive when put together. That is why I’ve avoided (many) equations in the book and focused instead on the driving reasons why scientists have turned to mathematical modeling, what these models have taught us about the brain, and how some surprising interactions between biologists, physicists, mathematicians, and engineers over centuries have laid the groundwork for the future of neuroscience. — Each chapter of Models of the Mind covers a separate topic in neuroscience, starting from individual neurons themselves and building up to the different populations of neurons and brain regions that support memory, vision, movement and more. 
These chapters document the history of how mathematics has woven its way into biology and the exciting advances this collaboration has in store.
How Brain Circuits Function in Health and Disease: Understanding Brain-wide Current Flow
Dr. Rajan and her lab design neural network models based on experimental data and reverse-engineer them to figure out how brain circuits function in health and disease. They recently developed a powerful framework for tracing neural paths across multiple brain regions, called Current-Based Decomposition (CURBD). This new approach enables the computation of the excitatory and inhibitory input currents that drive a given neuron, aiding in the discovery of how entire populations of neurons behave across multiple interacting brain regions. Dr. Rajan’s team has applied this method to studying the neural underpinnings of behavior. As an example, when CURBD was applied to data gathered from an animal model often used to study depression- and anxiety-like behaviors (i.e., learned helplessness), it revealed the underlying biology driving adaptive and maladaptive behaviors in the face of stress. With this framework, Dr. Rajan's team probes for mechanisms at work across brain regions that support both healthy and disease states, as well as identifying key divergences across multiple nervous systems, including zebrafish, mice, non-human primates, and humans.
Precision and Temporal Stability of Directionality Inferences from Group Iterative Multiple Model Estimation (GIMME) Brain Network Models
The Group Iterative Multiple Model Estimation (GIMME) framework has emerged as a promising method for characterizing connections between brain regions in functional neuroimaging data. Two of the most appealing features of this framework are its ability to estimate the directionality of connections between network nodes and its ability to determine whether those connections apply to everyone in a sample (group-level) or just to one person (individual-level). However, there are outstanding questions about the validity and stability of these estimates, including: 1) how recovery of connection directionality is affected by features of data sets such as scan length and autoregressive effects, which may be strong in some imaging modalities (resting state fMRI, fNIRS) but weaker in others (task fMRI); and 2) whether inferences about directionality at the group and individual levels are stable across time. This talk will provide an overview of the GIMME framework and describe relevant results from a large-scale simulation study that assesses directionality recovery under various conditions and a separate project that investigates the temporal stability of GIMME’s inferences in the Human Connectome Project data set. Analyses from these projects demonstrate that estimates of directionality are most precise when autoregressive and cross-lagged relations in the data are relatively strong, and that inferences about the directionality of group-level connections, specifically, appear to be stable across time. Implications of these findings for the interpretation of directional connectivity estimates in different types of neuroimaging data will be discussed.
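The autoregressive and cross-lagged relations discussed above can be illustrated with a toy two-node, first-order vector autoregression. This is a generic sketch of the data-generating structure and an ordinary least squares recovery, not the GIMME estimator itself; all coefficients are assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

# Two-node network: strong autoregression, plus a directed cross-lag from node 0 to node 1.
A = np.array([[0.6, 0.0],    # node 0 <- 0.6 * its own past
              [0.4, 0.5]])   # node 1 <- 0.4 * node 0's past + 0.5 * its own past
T = 2000
X = np.zeros((T, 2))
for t in range(1, T):
    X[t] = A @ X[t - 1] + rng.normal(scale=0.5, size=2)

# Ordinary least squares recovery of the lagged coefficient matrix:
# solve X[1:] ~ X[:-1] @ B, so that A_hat = B.T.
B, *_ = np.linalg.lstsq(X[:-1], X[1:], rcond=None)
A_hat = B.T
```

With strong autoregressive and cross-lagged effects and a long series, the directed coefficient (node 0 → node 1, not the reverse) is recovered precisely; weakening those effects or shortening the scan degrades exactly this directionality inference.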
Inferring brain-wide interactions using data-constrained recurrent neural network models
Behavior arises from the coordinated activity of numerous distinct brain regions. Modern experimental tools allow access to neural populations brain-wide, yet understanding such large-scale datasets necessitates scalable computational models to extract meaningful features of inter-region communication. In this talk, I will introduce Current-Based Decomposition (CURBD), an approach for inferring multi-region interactions using data-constrained recurrent neural network models. I will first show that CURBD accurately isolates inter-region currents in simulated networks with known dynamics. I will then apply CURBD to understand the brain-wide flow of information leading to behavioral state transitions in larval zebrafish. These examples will establish CURBD as a flexible, scalable framework to infer brain-wide interactions that are inaccessible from experimental measurements alone.
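The core decomposition arithmetic behind this approach can be sketched as follows: given a recurrent weight matrix partitioned into regions (here drawn randomly rather than constrained by data), the input current to a target region splits exactly into per-source-region currents. The region sizes and labels are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy "trained" RNN: 30 neurons grouped into three regions of 10.
N = 30
regions = {"A": slice(0, 10), "B": slice(10, 20), "C": slice(20, 30)}
J = rng.normal(scale=1.0 / np.sqrt(N), size=(N, N))   # recurrent weight matrix
r = np.tanh(rng.normal(size=N))                        # firing rates at one time point

total_current = J @ r

# Decompose the current into target region A by source region:
# current into A from region X = J[A, X] @ r[X].
currents_into_A = {src: J[regions["A"], sl] @ r[sl] for src, sl in regions.items()}

# The per-source currents sum exactly to the total current into region A,
# so inter-region contributions can be isolated without double counting.
recomposed = sum(currents_into_A.values())
```

In the full method the weight matrix is fit to recorded population activity first; the decomposition step shown here is then applied at every time point to trace inter-region currents.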
Hebbian learning, its inference, and brain oscillation
Despite the recent success of deep learning in artificial intelligence, the lack of biological plausibility and of labeled data in natural learning still poses a challenge to understanding biological learning. At the other extreme lies Hebbian learning, the simplest local and unsupervised rule, yet one considered computationally less efficient. In this talk, I will introduce a novel method to infer the form of Hebbian learning from in vivo data. Applying the method to data obtained from the monkey inferior temporal cortex during a recognition task indicates how Hebbian learning changes the dynamic properties of the circuits and may promote brain oscillation. Notably, recent electrophysiological data from rodent V1 showed that the effect of visual experience on direction selectivity was similar to that observed in the monkey data, providing strong validation of the asymmetric changes in feedforward and recurrent synaptic strengths inferred from the monkey data. This may suggest a general learning principle underlying the same computation, such as familiarity detection, across different features represented in different brain regions.
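A minimal Hebbian sketch of familiarity detection is shown below. This is illustrative only: the talk's method infers the form of the learning rule from data, which is not attempted here; the basic outer-product rule and all parameters are assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)

n = 50
W = rng.normal(scale=0.1, size=(n, n))   # recurrent weights
eta = 0.01                                # learning rate

def hebbian_update(W, x, eta):
    """Basic Hebbian rule: dW_ij = eta * x_i * x_j (outer product of activity)."""
    return W + eta * np.outer(x, x)

familiar = rng.choice([-1.0, 1.0], size=n)
response_before = familiar @ W @ familiar   # the pattern's recurrent drive before learning

for _ in range(20):                         # repeated exposure to the same stimulus
    W = hebbian_update(W, familiar, eta)

response_after = familiar @ W @ familiar
novel = rng.choice([-1.0, 1.0], size=n)
response_novel = novel @ W @ novel
# The familiar pattern now evokes a markedly stronger recurrent response than a
# novel one: a simple readout for familiarity detection.
```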
Mechanisms of cortical communication during decision-making
Regulation of information flow in the brain is critical for many forms of behavior. In the process of sensory-based decision-making, decisions about future actions are held in memory until enacted, making them potentially vulnerable to distracting sensory input. Therefore, gating of information flow from sensory to motor areas could protect memory from interference during decision-making, but the underlying network mechanisms are not understood. I will present our recent experimental and modeling work describing how information flow from the sensory cortex can be gated by state-dependent frontal cortex dynamics during decision-making in mice. Our results show that communication between brain regions can be regulated via attractor dynamics, which control the degree of commitment to an action, and reveal a novel mechanism of gating of neural information.
Deciphering the Dynamics of the Unconscious Brain Under General Anesthesia
General anesthesia is a drug-induced, reversible condition comprised of five behavioral states: unconsciousness, amnesia (loss of memory), antinociception (loss of pain sensation), akinesia (immobility), and hemodynamic stability with control of the stress response. Our work shows that a primary mechanism through which anesthetics create these altered states of arousal is by initiating and maintaining highly structured oscillations. These oscillations impair communication among brain regions. We illustrate this effect by presenting findings from our human studies of general anesthesia using high-density EEG recordings and intracranial recordings. These studies have allowed us to give a detailed characterization of the neurophysiology of loss and recovery of consciousness due to propofol. We show how these dynamics change systematically with different anesthetic classes and with age. As a consequence, we have developed a principled, neuroscience-based paradigm for using the EEG to monitor the brain states of patients receiving general anesthesia. We demonstrate that the state of general anesthesia can be rapidly reversed by activating specific brain circuits. Finally, we demonstrate that the state of general anesthesia can be controlled using closed loop feedback control systems. The success of our research has depended critically on tight coupling of experiments, signal processing research and mathematical modeling.
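The closed-loop control idea can be sketched with a toy proportional-integral controller acting on a one-compartment plant. This is a generic control-engineering illustration with assumed parameters, not the group's actual monitoring or control system:

```python
import numpy as np

def simulate_closed_loop(target, steps=500, dt=0.1, tau=5.0, kp=2.0, ki=0.5):
    """Toy closed loop: a measured 'depth' index relaxes toward the infusion
    rate with time constant tau; a PI controller sets the infusion from the
    error between target and measured depth."""
    depth, integral = 0.0, 0.0
    depth_log = []
    for _ in range(steps):
        error = target - depth
        integral += error * dt
        infusion = max(0.0, kp * error + ki * integral)   # PI control; no negative dosing
        depth += dt / tau * (infusion - depth)            # first-order plant (Euler step)
        depth_log.append(depth)
    return np.array(depth_log)

trace = simulate_closed_loop(target=0.8)
# The integral term drives the steady-state error to zero, so the measured
# depth settles at the target level.
```

The real systems described in the talk close the loop on EEG-derived brain-state markers rather than on a scalar toy index, but the feedback structure is the same: measure, compare to target, adjust dosing.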
Receptor Costs Determine Retinal Design
Our group is interested in discovering design principles that govern the structure and function of neurons and neural circuits. We record from well-defined neurons, mainly in flies’ visual systems, to measure the molecular and cellular factors that determine relevant measures of performance, such as representational capacity, dynamic range and accuracy. We combine this empirical approach with modelling to see how the basic elements of neural systems (ion channels, second-messenger systems, membranes, synapses, neurons, circuits and codes) combine to determine performance. We are investigating four general problems. How are circuits designed to integrate information efficiently? How do sensory adaptation and synaptic plasticity contribute to efficiency? How do the sizes of neurons and networks relate to energy consumption and representational capacity? To what extent have energy costs shaped neurons, sense organs and brain regions during evolution?
How Memory Guides Value-Based Decisions
From robots to humans, the ability to learn from experience turns a rigid response system into a flexible, adaptive one. In this talk, I will discuss emerging findings regarding the neural and cognitive mechanisms by which learning shapes decisions. The lecture will focus on how multiple brain regions interact to support learning, what this means for how memories are built, and the consequences for how decisions are made. Results emerging from this work challenge the traditional view of separate learning systems and advance understanding of how memory biases decisions in both adaptive and maladaptive ways.
Targeting the synapse in Alzheimer’s Disease
Alzheimer’s Disease is characterised by the accumulation of misfolded proteins, namely amyloid and tau; however, it is synapse loss that leads to the cognitive impairments associated with the disease. Many studies have focussed on single time points to determine the effects of pathology on synapses; however, this does not inform on the plasticity of the synapses, that is, how they behave in vivo as the pathology progresses. Here we used in vivo two-photon microscopy to assess the temporal dynamics of axonal boutons and dendritic spines in mouse models of tauopathy[1] (rTg4510) and amyloidopathy[2] (J20). This revealed that pre- and post-synaptic components are differentially affected in both AD models in response to pathology. In the rTg4510 model, differences in the stability and turnover of axonal boutons and dendritic spines immediately prior to neurite degeneration were revealed. Moreover, the dystrophic neurites could be partially rescued by transgene suppression. Understanding the imbalance in the response of pre- and post-synaptic components is crucial for drug discovery studies targeting the synapse in Alzheimer’s Disease. To investigate how sub-types of synapses are affected in human tissue, the Multi-‘omics Atlas Project, a UKDRI initiative to comprehensively map the pathology in human AD, will determine synaptome changes using imaging and synaptic proteomics in human post mortem AD tissue. The use of multiple brain regions and multiple stages of disease will enable a pseudotemporal profile of pathology and the associated synapse alterations to be determined. These data will be compared with data from preclinical models to determine the functional implications of the human findings, to better inform preclinical drug discovery studies, and to develop a therapeutic strategy to target synapses in Alzheimer’s Disease[3].
From oscillations to laminar responses – characterising the neural circuitry of autobiographical memories
Autobiographical memories are the ghosts of our past. Through them we visit places long departed, see faces once familiar, and hear voices now silent. These often decades-old personal experiences can be recalled on a whim or come unbidden into our everyday consciousness. Autobiographical memories are crucial to cognition because they facilitate almost everything we do, endow us with a sense of self, and underwrite our capacity for autonomy. They are often compromised by common neurological and psychiatric pathologies, with devastating effects. Despite autobiographical memories being central to everyday mental life, there is no agreed model of autobiographical memory retrieval, and we lack an understanding of the neural mechanisms involved. This precludes principled interventions to manage or alleviate memory deficits, and to test the efficacy of treatment regimens. This knowledge gap exists because autobiographical memories are challenging to study – they are immersive, multi-faceted, multi-modal, can stretch over long timescales and are grounded in the real world. One missing piece of the puzzle concerns the millisecond neural dynamics of autobiographical memory retrieval. Surprisingly, there are very few magnetoencephalography (MEG) studies examining such recall, despite the important insights this could offer into the activity and interactions of key brain regions such as the hippocampus and ventromedial prefrontal cortex. In this talk I will describe a series of MEG studies aimed at uncovering the neural circuitry underpinning the recollection of autobiographical memories, and how this changes as memories age.
I will end by describing our progress on leveraging an exciting new technology, optically pumped MEG (OP-MEG), which, when combined with virtual reality, offers the opportunity to examine millisecond neural responses from the whole brain, including deep structures, while participants move within a virtual environment, with the attendant head motion and vestibular inputs.
The Role of Hippocampal Replay in Memory Consolidation
The hippocampus lies at the centre of a network of brain regions thought to support spatial and episodic memory. Place cells, the principal cells of the hippocampus, represent information about an animal’s spatial location. Yet during rest and awake quiescence, place cells spontaneously recapitulate past trajectories (‘replay’). Replay has been hypothesised to support systems consolidation – the stabilisation of new memories via maturation of complementary cortical memory traces. Indeed, in recent work we found that hippocampal place cells and grid cells from the deep medial entorhinal cortex (dMEC, the principal cortical output region of the hippocampus) replayed coherently during rest periods. Importantly, dMEC grid cells lagged place cells by ~11 ms, suggesting the coordination may reflect consolidation. Moreover, preliminary data show that the dMEC-hippocampal coordination strengthens as an animal becomes familiar with a task, and that it may be led by directionally modulated cells. Finally, ongoing work in my recently established lab shows that replay may represent the mechanism underlying the maturation of episodic/spatial memory in pre-weanling pups. Together, these results indicate replay may play a central role in ensuring the permanency of memories.
Recurrent network models of adaptive and maladaptive learning
During periods of persistent and inescapable stress, animals can switch from active to passive coping strategies to manage effort expenditure. Such normally adaptive behavioural state transitions can become maladaptive in disorders such as depression. We developed a new class of multi-region recurrent neural network (RNN) models to infer brain-wide interactions driving such maladaptive behaviour. The models were trained to match experimental data across two levels simultaneously: brain-wide neural dynamics from 10–40,000 neurons and the real-time behaviour of the fish. Analysis of the trained RNN models revealed a specific change in inter-area connectivity between the habenula (Hb) and raphe nucleus during the transition into passivity. We then characterized the multi-region neural dynamics underlying this transition. Using the interaction weights derived from the RNN models, we calculated the input currents from different brain regions to each Hb neuron. We then computed neural manifolds spanning these input currents across all Hb neurons to define subspaces within the Hb activity that captured communication with each of the other brain regions independently. At the onset of stress, there was an immediate response within the Hb/raphe subspace alone. However, RNN models identified no early or fast-timescale change in the strengths of interactions between these regions. As the animal lapsed into passivity, the responses within the Hb/raphe subspace decreased, accompanied by a concomitant change in the interactions between the raphe and Hb inferred from the RNN weights. This innovative combination of network modeling and neural dynamics analysis points to dual mechanisms with distinct timescales driving the behavioural state transition: the early response to stress is mediated by reshaping the neural dynamics within a preserved network architecture, while long-term state changes correspond to altered connectivity between neural ensembles in distinct brain regions.
Identifying state-dependent interactions between brain regions during decision making
COSYNE 2023
Neural Population Geometry across model scale: A tool for cross-species functional comparison of visual brain regions
COSYNE 2023
Differential computations across multiple brain regions underlying dexterous movements
COSYNE 2025
Distributed engrams enable parallel memory generalization and discrimination across brain regions
COSYNE 2025
Combined expansion and STED microscopy reveals fingerprints of synaptic nanostructure across brain regions and in ASD-related SHANK3 deficiency
FENS Forum 2024
Common brain regions implicated in supramodal decision formation across visual, auditory, and motor-independent tasks
FENS Forum 2024
Employing CBF-SPECT imaging to examine the activation of brain regions across different stages of helping behavior in mice
FENS Forum 2024
Expression analysis of the glycine receptor subunit alpha 3 in projection neurons across brain regions
FENS Forum 2024
Neuronal encoding of threat is shaped by auditory properties of predictive cues across limbic brain regions
FENS Forum 2024
Non-dividing “immature” neurons in subcortical brain regions of mammals display phylogenetic variation with clear prevalence in primates
FENS Forum 2024
Probing the neuromodulatory effect of SSRIs on serotonin release across brain regions with improved iSeroSnFR
FENS Forum 2024
SHANK3 deficiency leads to GABAergic abnormalities and morphological changes in somatostatin-expressing interneurons in olfactory brain regions
FENS Forum 2024
Unveiling astrocyte dynamics in Parkinson's disease: Insights from single-nucleus profiling across brain regions
FENS Forum 2024