Circuit Dynamics
Germán Sumbre
Postdoc position in Paris to study neural circuit dynamics and behaviour in cavefish

The Mexican tetra, Astyanax mexicanus, is a leading model for studying the genetic mechanisms underlying trait evolution. A. mexicanus comprises a surface (river) population and several cave populations that evolved independently in largely isolated caves, allowing comparative approaches to identify genetic and neural variants associated with behavioral evolution. Cave populations of A. mexicanus exhibit prominent changes in sensory systems, including loss of vision and expansion of smell, taste, mechanosensation and the lateral line. Despite these robust changes in behavior and morphology, the accompanying shifts in sensory processing within the brain remain unexplored.

The Sumbre lab at the École normale supérieure, Paris, France is looking for a postdoc to study the evolution of brain processes and computations. For this purpose, we use transgenic fish expressing GCaMP in combination with light-sheet microscopy to monitor whole-brain activity with single-neuron resolution in intact, behaving larvae. We are studying differences in sensory processing (audition/vocalization, taste, lateral line, somatosensation and olfaction) between surface fish and cavefish, to shed light on the principles underlying the evolution of sensory systems.

The lab is located at the École normale supérieure, Paris, France: www.ibens.ens.fr

For the postdoc position, good programming skills and some background in neuroscience are necessary. For more information, contact Germán Sumbre, sumbre@ens.fr, www.zebrain.biologie.ens.fr
Relating circuit dynamics to computation: robustness and dimension-specific computation in cortical dynamics
Neural dynamics are the hard-to-interpret substrate of circuit computations. Advances in large-scale recordings have highlighted the sheer spatiotemporal complexity of dynamics within and across circuits, illustrating in detail the difficulty of interpreting such dynamics and relating them to computation. Indeed, even under extremely simplified experimental conditions, one observes high-dimensional temporal dynamics in the relevant circuits. This complexity can potentially be addressed by the notion that not all changes in population activity carry equal meaning: a small change in the evolution of activity along a particular dimension may have a bigger effect on a given computation than a large change along another. We term such conditions dimension-specific computation. Considering motor preparatory activity in a delayed-response task, we used neural recordings performed simultaneously with optogenetic perturbations to probe circuit dynamics. First, we revealed a remarkable robustness in the detailed evolution of certain dimensions of the population activity, beyond what had been thought possible experimentally and theoretically. Second, the robust dimension in activity space carried nearly all of the decodable behavioral information, whereas other, non-robust dimensions contained nearly none, as if the circuit were set up to make informative dimensions stiff, i.e., resistant to perturbations, while leaving uninformative dimensions sloppy, i.e., sensitive to perturbations. Third, we show that this robustness can be achieved by a modular organization of circuitry, whereby modules whose dynamics normally evolve independently can correct each other's dynamics when an individual module is perturbed, a common design feature in robust systems engineering. Finally, we will present recent work extending this framework to the neural dynamics underlying the preparation of speech.
Combined electrophysiological and optical recording of multi-scale neural circuit dynamics
This webinar will showcase new approaches for electrophysiological recordings using our silicon neural probes and surface arrays, combined with diverse optical methods such as wide-field or two-photon imaging, fiber photometry, and optogenetic perturbations in awake, behaving mice. Multi-modal recordings of single units and local field potentials across cortex, hippocampus and thalamus, alongside calcium activity reported by GCaMP6f in cortical neurons of triple-transgenic animals or in hippocampal astrocytes via viral transduction, reveal hitherto inaccessible and under-appreciated aspects of coordinated dynamics in the brain.
From Computation to Large-scale Neural Circuitry in Human Belief Updating
Many decisions under uncertainty entail dynamic belief updating: multiple pieces of evidence about the state of the environment are accumulated across time to infer that state and choose a corresponding action. Traditionally, this process has been conceptualized as a linear and perfect (i.e., lossless) integration of sensory information along purely feedforward sensory-motor pathways. Yet natural environments can undergo hidden changes in their state, which requires a non-linear accumulation of decision evidence that trades off stability against flexibility in response to change. How this adaptive computation is implemented in the brain has remained unknown. In this talk, I will present an approach that my laboratory has developed to identify evidence-accumulation signatures in human behavior and neural population activity (measured with magnetoencephalography, MEG) across a large number of cortical areas. Applying this approach to data recorded during visual evidence accumulation tasks with change-points, we find that behavior and neural activity in frontal and parietal regions involved in motor planning exhibit hallmark signatures of adaptive evidence accumulation. The same signatures of adaptive behavior and neural activity emerge naturally from simulations of a biophysically detailed model of a recurrent cortical microcircuit. The MEG data further show that decision dynamics in parietal and frontal cortex are mirrored by a selective modulation of the state of early visual cortex. This state modulation is (i) specifically expressed in the alpha frequency band, (ii) consistent with feedback of evolving belief states from frontal cortex, (iii) dependent on the environmental volatility, and (iv) amplified by pupil-linked arousal responses during evidence accumulation.
Together, our findings link normative decision computations to recurrent cortical circuit dynamics and highlight the adaptive nature of decision-related long-range feedback processing in the brain.
Optimal information loading into working memory in prefrontal cortex
Working memory involves the short-term maintenance of information and is critical in many tasks. The neural circuit dynamics underlying working memory remain poorly understood, with different aspects of prefrontal cortical (PFC) responses explained by different putative mechanisms. By mathematical analysis, numerical simulations, and using recordings from monkey PFC, we investigate a critical but hitherto ignored aspect of working memory dynamics: information loading. We find that, contrary to common assumptions, optimal information loading involves inputs that are largely orthogonal, rather than similar, to the persistent activities observed during memory maintenance. Using a novel, theoretically principled metric, we show that PFC exhibits the hallmarks of optimal information loading and we find that such dynamics emerge naturally as a dynamical strategy in task-optimized recurrent neural networks. Our theory unifies previous, seemingly conflicting theories of memory maintenance based on attractor or purely sequential dynamics, and reveals a normative principle underlying the widely observed phenomenon of dynamic coding in PFC.
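The advantage of loading inputs orthogonally to the persistent activity can be illustrated with a toy sketch (my own hypothetical two-dimensional linear network, not the authors' model): in a non-normal system, an input along a decaying direction that feeds the persistent mode can end up depositing far more activity in that mode than an equally strong input aligned with it.

```python
import numpy as np

# Hypothetical 2-D linear network dx/dt = W x. Mode 1 is persistent
# (eigenvalue 0); mode 2 decays (eigenvalue -1) but feeds mode 1 with gain 5.
W = np.array([[0.0, 5.0],
              [0.0, -1.0]])

def simulate(x0, T=10.0, dt=0.001):
    """Forward-Euler integration of the linear dynamics from state x0."""
    x = np.array(x0, dtype=float)
    for _ in range(int(T / dt)):
        x = x + dt * (W @ x)
    return x

aligned = simulate([1.0, 0.0])     # unit input along the persistent mode
orthogonal = simulate([0.0, 1.0])  # unit input orthogonal to it

# The orthogonal input leaves roughly 5x more activity in the persistent mode
print(aligned[0], orthogonal[0])
```

The same intuition scales to high-dimensional recurrent networks, where the optimal "loading" directions can be almost entirely orthogonal to the persistent subspace.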
Heterogeneity and non-random connectivity in reservoir computing
Reservoir computing is a promising framework to study cortical computation, as it is based on continuous, online processing and the requirements and operating principles are compatible with cortical circuit dynamics. However, the framework has issues that limit its scope as a generic model for cortical processing. The most obvious of these is that, in traditional models, learning is restricted to the output projections and takes place in a fully supervised manner. If such an output layer is interpreted at face value as downstream computation, this is biologically questionable. If it is interpreted merely as a demonstration that the network can accurately represent the information, this immediately raises the question of what would be biologically plausible mechanisms for transmitting the information represented by a reservoir and incorporating it in downstream computations. Another major issue is that we have as yet only modest insight into how the structural and dynamical features of a network influence its computational capacity, which is necessary not only for gaining an understanding of those features in biological brains, but also for exploiting reservoir computing as a neuromorphic application. In this talk, I will first demonstrate a method for quantifying the representational capacity of reservoirs without training them on tasks. Based on this technique, which allows systematic comparison of systems, I then present our recent work towards understanding the roles of heterogeneity and connectivity patterns in enhancing both the computational properties of a network and its ability to reliably transmit to downstream networks. Finally, I will give a brief taster of our current efforts to apply the reservoir computing framework to magnetic systems as an approach to neuromorphic computing.
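To make the traditional framework concrete, here is a minimal echo-state-network sketch in NumPy (an illustration of the generic setup criticized above, not the speaker's model): a fixed random recurrent reservoir, with supervised learning confined to a linear ridge-regression readout, trained to recall the input from a few steps back.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 200, 1000  # reservoir size, number of timesteps

# Fixed random recurrent weights, rescaled to spectral radius 0.9
# (a common heuristic for the echo-state property)
W = rng.normal(0.0, 1.0, (N, N)) / np.sqrt(N)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.normal(0.0, 1.0, (N, 1))

u = rng.uniform(-1.0, 1.0, (T, 1))   # scalar input stream
target = np.roll(u[:, 0], 3)         # task: recall the input from 3 steps ago

# Run the reservoir; only these states are ever used for learning
x = np.zeros(N)
states = np.empty((T, N))
for t in range(T):
    x = np.tanh(W @ x + W_in @ u[t])
    states[t] = x

# Learning is restricted to the output projection (ridge regression)
washout = 50                          # discard initial transient
X, y = states[washout:], target[washout:]
w_out = np.linalg.solve(X.T @ X + 1e-4 * np.eye(N), X.T @ y)
mse = np.mean((X @ w_out - y) ** 2)
```

The fully supervised readout `w_out` is exactly the biologically questionable ingredient the talk addresses: everything upstream of it is untrained.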
Malignant synaptic plasticity in pediatric high-grade gliomas
Pediatric high-grade gliomas (pHGG) are a devastating group of diseases that urgently require novel therapeutic options. We have previously demonstrated that pHGGs directly synapse onto neurons and that the subsequent tumor cell depolarization, mediated by calcium-permeable AMPA channels, promotes their proliferation. The regulatory mechanisms governing these postsynaptic connections are unknown. Here, we investigated the role of BDNF-TrkB signaling in modulating the plasticity of the malignant synapse. BDNF ligand activation of its canonical receptor, TrkB (encoded by the gene NTRK2), is one important modulator of synaptic regulation in the normal setting. Electrophysiological recordings of glioma cell membrane properties in response to acute neurotransmitter stimulation demonstrate an inward current resembling AMPA receptor (AMPAR)-mediated excitatory neurotransmission. Extracellular BDNF increases the amplitude of this glutamate-induced tumor cell depolarization, and this effect is abrogated in NTRK2-knockout glioma cells. Upon examining tumor cell excitability using in situ calcium imaging, we found that BDNF increases the intensity of glutamate-evoked calcium transients in GCaMP6s-expressing glioma cells. Western blot analysis indicates that the tumor's AMPAR properties are altered downstream of BDNF-induced TrkB activation. Cell membrane protein capture (via biotinylation) and live imaging of pH-sensitive GFP-tagged AMPAR subunits demonstrate an increase of calcium-permeable channels at the tumor's postsynaptic membrane in response to BDNF. We find that BDNF-TrkB signaling promotes neuron-to-glioma synaptogenesis, as measured by high-resolution confocal and electron microscopy in culture and in tumor xenografts. Our analysis of published pHGG transcriptomic datasets, together with brain slice conditioned-medium experiments in culture, indicates the tumor microenvironment as the chief source of BDNF ligand.
Disruption of the BDNF-TrkB pathway in patient-derived orthotopic glioma xenograft models, both genetically and pharmacologically, results in increased overall survival and a reduced tumor proliferation rate. These findings suggest that gliomas leverage normal mechanisms of plasticity to modulate the excitatory channels involved in synaptic neurotransmission, and they reveal the potential to target the regulatory components of glioma circuit dynamics as a therapeutic strategy for these lethal cancers.
Brain circuit dynamics in Action and Sleep
Our group focuses on brain computation, physiology and evolution, with a particular focus on network dynamics, sleep (evolution and mechanistic underpinnings), cortical computation (through the study of ancestral cortices), and sensorimotor processing. This talk will describe our recent results on the remarkable camouflage behavior of cuttlefish (action) and on brain activity in REM and NonREM in lizards (sleep). Both topics will focus on aspects of circuit dynamics.
Understanding neural dynamics in high dimensions across multiple timescales: from perception to motor control and learning
Remarkable advances in experimental neuroscience now enable us to simultaneously observe the activity of many neurons, providing an opportunity to understand how the moment-by-moment collective dynamics of the brain instantiate learning and cognition. However, efficiently extracting such a conceptual understanding from large, high-dimensional neural datasets requires concomitant advances in theoretically driven experimental design, data analysis, and neural circuit modeling. We will discuss how the modern frameworks of high-dimensional statistics and deep learning can aid us in this process. In particular, we will discuss: (1) how unsupervised tensor component analysis and time warping can extract unbiased and interpretable descriptions of how rapid single-trial circuit dynamics change slowly over many trials to mediate learning; (2) how to trade off very different experimental resources, such as numbers of recorded neurons and trials, to accurately discover the structure of collective dynamics and information in the brain, even without spike sorting; (3) deep learning models that accurately capture the retina's response to natural scenes as well as its internal structure and function; (4) algorithmic approaches for simplifying deep network models of perception; and (5) optimality approaches to explain cell-type diversity in the first steps of vision in the retina.
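The tensor component analysis mentioned in (1) factorizes a neurons × time × trials activity tensor into per-neuron, per-timepoint and per-trial factors. A bare-bones CP decomposition via alternating least squares (a sketch of the generic technique, not the authors' code, which also handles non-negativity and time warping) might look like:

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker product: (I, R), (J, R) -> (I*J, R)."""
    return (A[:, None, :] * B[None, :, :]).reshape(-1, A.shape[1])

def cp_als(X, rank, n_iter=200, seed=0):
    """Rank-R CP decomposition of a 3-way tensor (neurons x time x trials)
    by alternating least squares. Returns factor matrices A, B, C such
    that X[i,j,k] ~ sum_r A[i,r] * B[j,r] * C[k,r]."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    X0 = X.reshape(I, -1)                      # mode-0 unfolding
    X1 = np.moveaxis(X, 1, 0).reshape(J, -1)   # mode-1 unfolding
    X2 = np.moveaxis(X, 2, 0).reshape(K, -1)   # mode-2 unfolding
    for _ in range(n_iter):
        A = X0 @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = X1 @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = X2 @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C
```

The trial factor `C` is what exposes slow, across-trial changes (e.g. learning) while `A` and `B` capture the fast within-trial structure.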
Restless engrams: the origin of continually reconfiguring neural representations
During learning, populations of neurons alter their connectivity and activity patterns, enabling the brain to construct a model of the external world. Conventional wisdom holds that the durability of such a model is reflected in the stability of neural responses and of the synaptic connections that form memory engrams. However, recent experimental findings have challenged this idea, revealing that neural population activity in circuits involved in sensory perception, motor planning and spatial memory continually changes over time during familiar behavioural tasks. This continual change suggests significant redundancy in neural representations, with many circuit configurations providing equivalent function. I will describe recent work that explores the consequences of such redundancy for learning and for task representation. Despite large changes in neural activity, we find that cortical responses in sensorimotor tasks admit a relatively stable readout at the population level. Furthermore, we find that redundancy in circuit connectivity can make a task easier to learn and can compensate for deficiencies in biological learning rules. Finally, if neuronal connections are subject to an unavoidable level of turnover, the level of plasticity required to optimally maintain a memory is generally lower than the total change due to turnover itself, predicting continual reconfiguration of an engram.
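How representations can reconfigure while a fixed population readout stays stable can be shown with a toy sketch (all quantities hypothetical): if drift is confined to the null space of the readout vector, the activity patterns change substantially yet decoding is untouched.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
w = rng.standard_normal(n)
w /= np.linalg.norm(w)                 # fixed unit-norm population readout

# Two stimulus patterns on day 0, separable by their projection onto w
x_A = 2.0 * w + 0.1 * rng.standard_normal(n)
x_B = -2.0 * w + 0.1 * rng.standard_normal(n)

def drift(x, days, step=0.5):
    """Random-walk drift restricted to the readout's null space (an
    illustrative assumption): patterns move a lot, the readout does not."""
    for _ in range(days):
        d = rng.standard_normal(n)
        d -= (d @ w) * w               # project out the readout direction
        x = x + step * d / np.linalg.norm(d)
    return x

x_A30, x_B30 = drift(x_A, 30), drift(x_B, 30)
```

After 30 "days" the patterns have moved far in state space, but `x_A30 @ w` and `x_B30 @ w` are unchanged, so the same decoder still separates the two stimuli.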
The emergence and modulation of time in neural circuits and behavior
Spontaneous behavior in animals and humans shows a striking amount of variability both in the spatial domain (which actions to choose) and temporal domain (when to act). Concatenating actions into sequences and behavioral plans reveals the existence of a hierarchy of timescales ranging from hundreds of milliseconds to minutes. How do multiple timescales emerge from neural circuit dynamics? How do circuits modulate temporal responses to flexibly adapt to changing demands? In this talk, we will present recent results from experiments and theory suggesting a new computational mechanism generating the temporal variability underlying naturalistic behavior and cortical activity. We will show how neural activity from premotor areas unfolds through temporal sequences of attractors, which predict the intention to act. These sequences naturally emerge from recurrent cortical networks, where correlated neural variability plays a crucial role in explaining the observed variability in action timing. We will then discuss how reaction times can be accelerated or slowed down via gain modulation, flexibly induced by neuromodulation or perturbations; and how gain modulation may control response timing in the visual cortex. Finally, we will present a new biologically plausible way to generate a reservoir of multiple timescales in cortical circuits.
The emergence and modulation of time in neural circuits and behavior
Spontaneous behavior in animals and humans shows a striking amount of variability both in the spatial domain (which actions to choose) and temporal domain (when to act). Concatenating actions into sequences and behavioral plans reveals the existence of a hierarchy of timescales ranging from hundreds of milliseconds to minutes. How do multiple timescales emerge from neural circuit dynamics? How do circuits modulate temporal responses to flexibly adapt to changing demands? In this talk, we will present recent results from experiments and theory suggesting a new computational mechanism generating the temporal variability underlying naturalistic behavior. We will show how neural activity from premotor areas unfolds through temporal sequences of attractors, which predict the intention to act. These sequences naturally emerge from recurrent cortical networks, where correlated neural variability plays a crucial role in explaining the observed variability in action timing. We will then discuss how reaction times in these recurrent circuits can be accelerated or slowed down via gain modulation, induced by neuromodulation or perturbations. Finally, we will present a general mechanism producing a reservoir of multiple timescales in recurrent networks.
Dimensions of variability in circuit models of cortex
Cortical circuits receive multiple inputs from upstream populations with non-overlapping stimulus tuning preferences. Both the feedforward and recurrent architectures of the receiving cortical layer will reflect this diverse input tuning. We study how population-wide neuronal variability propagates through a hierarchical cortical network receiving multiple, independent, tuned inputs. We present new analysis of in vivo neural data from the primate visual system showing that the number of latent variables (the dimension) needed to describe population shared variability is smaller in V4 populations than in those of its downstream area PFC. We successfully reproduce this dimensionality expansion from our V4 to PFC neural data using a multi-layer spiking network with structured feedforward projections and recurrent assemblies of multiple, tuned neuron populations. We show that tuning-structured connectivity generates attractor dynamics within the recurrent PFC circuit, where attractor competition is reflected in the high-dimensional shared variability across the population. Indeed, restricting the dimensionality analysis to activity from one attractor state recovers the low-dimensional structure inherited from each of our tuned inputs. Our model thus introduces a framework in which high-dimensional cortical variability is understood as "time-sharing" between distinct low-dimensional, tuning-specific circuit dynamics.
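The dimensionality of shared (trial-to-trial) variability compared here between V4 and PFC can be estimated in several ways; a simple PCA-based sketch on the noise covariance follows (the study's actual estimator may differ, e.g. factor analysis, and the threshold is an arbitrary illustrative choice):

```python
import numpy as np

def shared_dimensionality(trials, var_threshold=0.9):
    """Count latent dimensions needed to capture a fraction of
    trial-to-trial variance, via PCA on the noise covariance.
    trials: (n_trials, n_neurons) responses to repeats of one stimulus."""
    resid = trials - trials.mean(axis=0)        # remove stimulus-locked mean
    cov = resid.T @ resid / (len(trials) - 1)
    evals = np.sort(np.linalg.eigvalsh(cov))[::-1]
    frac = np.cumsum(evals) / evals.sum()
    return int(np.searchsorted(frac, var_threshold) + 1)

# Synthetic check: 3 shared latent variables driving 50 neurons
rng = np.random.default_rng(3)
latents = rng.standard_normal((500, 3))
loadings = rng.standard_normal((3, 50))
trials = latents @ loadings + 0.01 * rng.standard_normal((500, 50))
dim = shared_dimensionality(trials)
```

Applying such an estimator separately to activity within one attractor state versus pooled across states is what distinguishes "time-shared" low-dimensional dynamics from genuinely high-dimensional variability.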
Theoretical and computational approaches to neuroscience with complex models in high dimensions across multiple timescales: from perception to motor control and learning
Remarkable advances in experimental neuroscience now enable us to simultaneously observe the activity of many neurons, providing an opportunity to understand how the moment-by-moment collective dynamics of the brain instantiate learning and cognition. However, efficiently extracting such a conceptual understanding from large, high-dimensional neural datasets requires concomitant advances in theoretically driven experimental design, data analysis, and neural circuit modeling. We will discuss how the modern frameworks of high-dimensional statistics and deep learning can aid us in this process. In particular, we will discuss: how unsupervised tensor component analysis and time warping can extract unbiased and interpretable descriptions of how rapid single-trial circuit dynamics change slowly over many trials to mediate learning; how to trade off very different experimental resources, such as numbers of recorded neurons and trials, to accurately discover the structure of collective dynamics and information in the brain, even without spike sorting; deep learning models that accurately capture the retina's response to natural scenes as well as its internal structure and function; algorithmic approaches for simplifying deep network models of perception; and optimality approaches to explain cell-type diversity in the first steps of vision in the retina.
Using evolutionary algorithms to explore single-cell heterogeneity and microcircuit operation in the hippocampus
The hippocampus-entorhinal system is critical for learning and memory. Recent cutting-edge single-cell technologies, from RNA-seq to electrophysiology, are disclosing a so-far unrecognized heterogeneity within the major cell types (1). Surprisingly, massive high-throughput recordings of these very same cells identify low-dimensional microcircuit dynamics (2,3). Reconciling both views is critical to understanding how the brain operates. The CA1 region is considered high in the hierarchy of the entorhinal-hippocampal system. Traditionally viewed as a single-layered structure, recent evidence has disclosed an exquisite laminar organization across deep and superficial pyramidal sublayers at the transcriptional, morphological and functional levels (1,4,5). Such a low-dimensional segregation may be driven by a combination of intrinsic, biophysical and microcircuit factors, but the mechanisms are unknown. Here, we exploit evolutionary algorithms to address the effect of single-cell heterogeneity on CA1 pyramidal cell activity (6). First, we developed a biophysically realistic model of CA1 pyramidal cells using the Hodgkin-Huxley multi-compartment formalism in the Neuron+Python platform and the morphological database Neuromorpho.org. We adopted genetic algorithms (GA) to identify passive, active and synaptic conductances resulting in realistic electrophysiological behavior. We then used the generated models to explore the functional effect of intrinsic, synaptic and morphological heterogeneity during oscillatory activities. By combining results from all simulations in a logistic regression model, we evaluated the effect of up-/down-regulation of different factors. We found that multidimensional excitatory and inhibitory inputs interact with morphological and intrinsic factors to determine a low-dimensional subset of output features (e.g., phase-locking preference) that matches non-fitted experimental data.
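The genetic-algorithm loop used for fitting conductances can be sketched schematically as follows (a toy stand-in: the one-line `firing_rate` function below is a hypothetical surrogate for a NEURON simulation, and all parameter names and units are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def firing_rate(g_leak, g_drive):
    """Toy stand-in for a compartmental simulation: maps two hypothetical
    conductance parameters to a firing rate (arbitrary units)."""
    return max(0.0, 100.0 * g_drive / (g_leak + g_drive) - 20.0)

target = 45.0                                   # target feature, e.g. rate in Hz
fitness = lambda p: -abs(firing_rate(*p) - target)

# GA: elitism + tournament selection + blend crossover + Gaussian mutation
pop = rng.uniform(0.01, 1.0, (60, 2))
for gen in range(100):
    scores = np.array([fitness(p) for p in pop])
    new_pop = [pop[scores.argmax()].copy()]     # keep the best individual
    while len(new_pop) < len(pop):
        i, j = rng.integers(len(pop), size=2)
        a = pop[i] if scores[i] > scores[j] else pop[j]   # tournament parent 1
        i, j = rng.integers(len(pop), size=2)
        b = pop[i] if scores[i] > scores[j] else pop[j]   # tournament parent 2
        child = 0.5 * (a + b) + rng.normal(0.0, 0.02, 2)  # crossover + mutation
        new_pop.append(np.clip(child, 1e-3, 1.0))
    pop = np.array(new_pop)

best = pop[np.array([fitness(p) for p in pop]).argmax()]
```

In a real fitting pipeline the scalar error would be replaced by a multi-objective score over several electrophysiological features, and each fitness evaluation would run the full compartmental model.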
Metastable circuit dynamics explains optimal coding of auditory stimuli at moderate arousals
COSYNE 2022
Social cues modulate circuit dynamics to control the choice between communication signals in flies
COSYNE 2022
Neural-astrocyte interaction enables contextually guided circuit dynamics
COSYNE 2023
Investigating prefronto-striatal circuit dynamics during flexible decision-making
FENS Forum 2024
Unraveling the impact of architectural complexity on cortical-hippocampal circuit dynamics with brain-on-a-chip models
FENS Forum 2024