Modules
Relating circuit dynamics to computation: robustness and dimension-specific computation in cortical dynamics
Neural dynamics represent the hard-to-interpret substrate of circuit computations. Advances in large-scale recordings have highlighted the sheer spatiotemporal complexity of circuit dynamics within and across circuits, portraying in detail the difficulty of interpreting such dynamics and relating them to computation. Indeed, even in extremely simplified experimental conditions, one observes high-dimensional temporal dynamics in the relevant circuits. This complexity can potentially be addressed by the notion that not all changes in population activity have equal meaning, i.e., a small change in the evolution of activity along a particular dimension may have a bigger effect on a given computation than a large change along another. We term such conditions dimension-specific computation. Considering motor preparatory activity in a delayed response task, we used neural recordings performed simultaneously with optogenetic perturbations to probe circuit dynamics. First, we revealed a remarkable robustness in the detailed evolution of certain dimensions of the population activity, beyond what was thought to be the case experimentally and theoretically. Second, the robust dimension in activity space carried nearly all of the decodable behavioral information, whereas other, non-robust dimensions contained almost none, as if the circuit were set up to make informative dimensions stiff, i.e., resistant to perturbations, leaving uninformative dimensions sloppy, i.e., sensitive to perturbations. Third, we show that this robustness can be achieved by a modular organization of circuitry, whereby modules whose dynamics normally evolve independently can correct each other's dynamics when an individual module is perturbed, a common design feature in robust systems engineering. Finally, we will present recent work extending this framework to understanding the neural dynamics underlying the preparation of speech.
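To make the stiff/sloppy distinction concrete, the toy sketch below (not the study's analysis pipeline; the population size, signal strengths, and noise levels are invented) simulates activity in which a low-variance coding dimension carries the choice signal while a high-variance orthogonal dimension carries none; a simple linear readout recovers the choice only along the stiff dimension.

```python
import numpy as np

# Toy illustration of dimension-specific computation (invented parameters).
# One low-variance "stiff" dimension carries the choice signal; an orthogonal
# high-variance "sloppy" dimension fluctuates strongly but carries none.

rng = np.random.default_rng(0)
n_trials, n_neurons = 200, 50
choice = rng.integers(0, 2, n_trials)                    # binary choice per trial

coding_dim = rng.standard_normal(n_neurons)
coding_dim /= np.linalg.norm(coding_dim)
sloppy_dim = rng.standard_normal(n_neurons)
sloppy_dim -= coding_dim * (sloppy_dim @ coding_dim)     # orthogonalize to the coding dimension
sloppy_dim /= np.linalg.norm(sloppy_dim)

activity = (np.outer(2 * choice - 1, coding_dim)                       # small, informative
            + np.outer(5 * rng.standard_normal(n_trials), sloppy_dim)  # large, uninformative
            + 0.5 * rng.standard_normal((n_trials, n_neurons)))        # private noise

for name, d in [("stiff (coding)", coding_dim), ("sloppy", sloppy_dim)]:
    acc = np.mean(((activity @ d) > 0).astype(int) == choice)
    print(f"{name:>15} dimension: decoding accuracy {acc:.2f}")
```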
Brain-Wide Compositionality and Learning Dynamics in Biological Agents
Biological agents continually reconcile the internal states of their brain circuits with incoming sensory and environmental evidence to evaluate when and how to act. The brains of biological agents, including animals and humans, exploit many evolutionary innovations, chiefly modularity—observable at the level of anatomically defined brain regions, cortical layers, and cell types, among others—that can be repurposed in a compositional manner to endow the animal with a highly flexible behavioral repertoire. Accordingly, their behaviors show their own modularity, yet such behavioral modules seldom correspond directly to traditional notions of modularity in the brain. It remains unclear how to link neural and behavioral modularity in a compositional manner. We propose a comprehensive framework—compositional modes—to identify overarching compositionality spanning specialized submodules, such as brain regions. Our framework directly links the behavioral repertoire with distributed patterns of population activity, brain-wide, at multiple concurrent spatial and temporal scales. Using whole-brain recordings of zebrafish, we introduce an unsupervised pipeline based on neural network models, constrained by biological data, to reveal highly conserved compositional modes across individuals despite the naturalistic (spontaneous or task-independent) nature of their behaviors. These modes provide a scaffolding for other modes that account for the idiosyncratic behavior of each fish. We then demonstrate experimentally that compositional modes can be manipulated in a consistent manner by behavioral and pharmacological perturbations. Our results demonstrate that even natural behavior in different individuals can be decomposed and understood using a relatively small number of neurobehavioral modules—the compositional modes—and elucidate a compositional neural basis of behavior. This approach aligns with recent progress in understanding how reasoning capabilities and internal representational structures develop over the course of learning or training, offering insights into modularity and flexibility in artificial and biological agents.
Trends in NeuroAI - Meta's MEG-to-image reconstruction
Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri). This will be an informal journal club presentation; an author of the paper will not be joining us.
Title: Brain decoding: toward real-time reconstruction of visual perception
Abstract: In the past five years, the use of generative and foundational AI systems has greatly improved the decoding of brain activity. Visual perception, in particular, can now be decoded from functional Magnetic Resonance Imaging (fMRI) with remarkable fidelity. This neuroimaging technique, however, suffers from a limited temporal resolution (≈0.5 Hz), which fundamentally constrains its real-time usage. Here, we propose an alternative approach based on magnetoencephalography (MEG), a neuroimaging device capable of measuring brain activity with high temporal resolution (≈5,000 Hz). For this, we develop an MEG decoding model trained with both contrastive and regression objectives and consisting of three modules: i) pretrained embeddings obtained from the image, ii) an MEG module trained end-to-end, and iii) a pretrained image generator. Our results are threefold: First, our MEG decoder shows a 7X improvement in image retrieval over classic linear decoders. Second, late brain responses to images are best decoded with DINOv2, a recent foundational image model. Third, image retrievals and generations both suggest that MEG signals primarily contain high-level visual features, whereas the same approach applied to 7T fMRI also recovers low-level features. Overall, these results provide an important step towards the real-time decoding of the visual processes continuously unfolding within the human brain.
Speaker: Dr. Paul Scotti (Stability AI, MedARC)
Paper link: https://arxiv.org/abs/2310.19812
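Since no author is presenting, a rough training sketch may help ground the discussion. The PyTorch code below illustrates an MEG encoder trained to predict pretrained image embeddings with a combined contrastive plus regression loss; it is an assumption-laden illustration, not the authors' implementation. Channel counts, layer sizes, the embedding dimension, and the loss weighting are invented, and the real pipeline additionally passes the predicted embeddings to a pretrained image generator.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Sketch (not the authors' code): an MEG encoder mapping sensor-by-time epochs
# to a pretrained image-embedding space, trained with contrastive + regression
# objectives. All shapes and layer choices are illustrative assumptions.

class MEGEncoder(nn.Module):
    def __init__(self, n_channels=272, n_times=181, emb_dim=768):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 320, kernel_size=3, padding=1), nn.GELU(),
            nn.Conv1d(320, 320, kernel_size=3, padding=1), nn.GELU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(320, emb_dim)

    def forward(self, meg):                      # meg: (batch, channels, time)
        return self.head(self.conv(meg).squeeze(-1))

def clip_plus_mse_loss(pred, target, temperature=0.07, alpha=0.5):
    """Contrastive (InfoNCE over the batch) + regression (MSE) objective."""
    p = F.normalize(pred, dim=-1)
    t = F.normalize(target, dim=-1)
    logits = p @ t.T / temperature
    labels = torch.arange(len(pred), device=pred.device)
    contrastive = F.cross_entropy(logits, labels)
    regression = F.mse_loss(pred, target)
    return alpha * contrastive + (1 - alpha) * regression

# Usage: image_emb would come from a frozen pretrained image model (e.g. DINOv2);
# the predicted embeddings would then condition a pretrained image generator.
encoder = MEGEncoder()
meg = torch.randn(8, 272, 181)
image_emb = torch.randn(8, 768)
loss = clip_plus_mse_loss(encoder(meg), image_emb)
loss.backward()
```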
Towards multi-system network models for cognitive neuroscience
Artificial neural networks can be useful for studying brain functions. In cognitive neuroscience, recurrent neural networks are often used to model cognitive functions. I will first offer my opinion on what is missing in the classical use of recurrent neural networks. Then I will discuss two lines of ongoing effort in our group to move beyond classical recurrent neural networks by studying multi-system neural networks (the talk will focus on two-system networks). These are networks that combine modules for several neural systems, such as the visual, auditory, prefrontal, and hippocampal systems. I will showcase how multi-system networks can potentially be constrained by experimental data in fundamental ways and at scale.
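As a heavily simplified, concrete example of what a two-system network can look like (an illustrative assumption, not the models discussed in the talk), the sketch below wires a small visual module into a prefrontal-like recurrent module that produces an action readout over time.

```python
import torch
import torch.nn as nn

# Minimal two-system network sketch: a convolutional "visual" module feeding a
# recurrent "prefrontal" module. Architecture and sizes are illustrative.

class TwoSystemNet(nn.Module):
    def __init__(self, n_visual_features=64, n_recurrent=128, n_actions=3):
        super().__init__()
        # Visual system: a small convolutional front end
        self.visual = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, n_visual_features),
        )
        # Prefrontal system: a recurrent module operating on the visual output
        self.prefrontal = nn.GRU(n_visual_features, n_recurrent, batch_first=True)
        self.readout = nn.Linear(n_recurrent, n_actions)

    def forward(self, frames):                   # frames: (batch, time, 1, H, W)
        b, t = frames.shape[:2]
        feats = self.visual(frames.flatten(0, 1)).view(b, t, -1)
        states, _ = self.prefrontal(feats)
        return self.readout(states)              # action logits per time step

net = TwoSystemNet()
out = net(torch.randn(2, 10, 1, 32, 32))         # -> shape (2, 10, 3)
```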
Internally Organized Abstract Task Maps in the Mouse Medial Frontal Cortex
New tasks are often similar in structure to old ones. Animals that take advantage of such conserved or “abstract” task structures can master new tasks with minimal training. To understand the neural basis of this abstraction, we developed a novel behavioural paradigm for mice, the “ABCD” task, and recorded from their medial frontal neurons as they learned. Animals learned multiple tasks in which they had to visit four rewarded locations on a spatial maze in sequence, which defined a sequence of four “task states” (ABCD). Tasks shared the same circular transition structure (… ABCDABCD …) but differed in the spatial arrangement of rewards. As well as improving across tasks, mice inferred that A followed D (i.e., completed the loop) on the very first trial of a new task. This “zero-shot inference” was only possible if the animals had learned the abstract structure of the task. Across tasks, individual medial frontal cortex (mFC) neurons maintained their tuning to the phase of an animal’s trajectory between rewards but not their tuning to task states, even in the absence of spatial tuning. Intriguingly, groups of mFC neurons formed modules of coherently remapping neurons that maintained their tuning relationships across tasks. Such tuning relationships were expressed as replay/preplay during sleep, consistent with an internal organisation of activity into multiple, task-matched ring attractors. Remarkably, these modules were anchored to spatial locations: neurons were tuned to specific task-space “distances” from a particular spatial location. These newly discovered “Spatially Anchored Task clocks” (SATs) suggest a novel algorithm for solving abstraction tasks. Using computational modelling, we show that SATs can perform zero-shot inference on new tasks in the absence of plasticity and guide optimal policy in the absence of continual planning. These findings provide novel insights into the frontal mechanisms mediating abstraction and flexible behaviour.
What the fly’s eye tells the fly’s brain…and beyond
Fly Escape Behaviors: Flexible and Modular
We have identified a set of escape maneuvers performed by a fly when confronted by a looming object. These escape responses can be divided into distinct behavioral modules. Some of the modules are very stereotyped, as when the fly rapidly extends its middle legs to jump off the ground. Other modules are more complex and require the fly to combine information about both the location of the threat and its own body posture. In response to an approaching object, a fly chooses some varying subset of these behaviors to perform. We would like to understand the neural process by which a fly chooses when to perform a given escape behavior. Beyond an appealing set of behaviors, this system has two other distinct advantages for probing neural circuitry. First, the fly will perform escape behaviors even when tethered such that its head is fixed and neural activity can be imaged or monitored using electrophysiology. Second, using Drosophila as an experimental animal makes available a rich suite of genetic tools to activate, silence, or image small numbers of cells potentially involved in the behaviors.
Neural Circuits for Escape
Until recently, visually induced escape responses have been considered a hardwired reflex in Drosophila. White-eyed flies with deficient visual pigment will perform a stereotyped middle-leg jump in response to a light-off stimulus, and this reflexive response is known to be coordinated by the well-studied giant fiber (GF) pathway. The GFs are a pair of electrically connected, large-diameter interneurons that traverse the cervical connective. A single GF spike results in a stereotyped pattern of muscle potentials on both sides of the body that extends the fly's middle pair of legs and starts the flight motor. Recently, we have found that a fly escaping a looming object displays many more behaviors than just leg extension. Most of these behaviors could not possibly be coordinated by the known anatomy of the GF pathway. Response to a looming threat thus appears to involve activation of numerous different neural pathways, and the fly may decide if and when to employ each of them. Our goal is to identify the descending pathways involved in coordinating these escape behaviors as well as the central brain circuits, if any, that govern their activation.
Automated Single-Fly Screening
We have developed a new kind of high-throughput genetic screen to automatically capture fly escape sequences and quantify individual behaviors. We use this system to perform a high-throughput genetic silencing screen to identify cell types of interest. Automation permits analysis at the level of individual fly movements, while retaining the capacity to screen through thousands of GAL4 promoter lines. Single-fly behavioral analysis is essential to detect more subtle changes in behavior during the silencing screen, and thus to identify more specific components of the contributing circuits than previously possible when screening populations of flies. Our goal is to identify candidate neurons involved in coordination and choice of escape behaviors.
Measuring Neural Activity During Behavior
We use whole-cell patch-clamp electrophysiology to determine the functional roles of any identified candidate neurons. Flies perform escape behaviors even when their head and thorax are immobilized for physiological recording. This allows us to link a neuron's responses directly to an action.
Mapping the Dynamics of the Linear and 3D Genome of Single Cells in the Developing Brain
Three intimately related dimensions of the mammalian genome—linear DNA sequence, gene transcription, and 3D genome architecture—are crucial for the development of nervous systems. Changes in the linear genome (e.g., de novo mutations), transcriptome, and 3D genome structure lead to debilitating neurodevelopmental disorders, such as autism and schizophrenia. However, current technologies and data are severely limited: (1) 3D genome structures of single brain cells have not been solved; (2) little is known about the dynamics of the single-cell transcriptome and 3D genome after birth; (3) true de novo mutations are extremely difficult to distinguish from false positives (DNA damage and/or amplification errors). Here, I filled in this longstanding technological and knowledge gap. I recently developed a high-resolution method—diploid chromatin conformation capture (Dip-C)—which resolved the first 3D structure of the human genome, tackling a longstanding problem dating back to the 1880s. Using Dip-C, I obtained the first 3D genome structure of a single brain cell, and created the first transcriptome and 3D genome atlas of the mouse brain during postnatal development. I found that in adults, 3D genome “structure types” delineate all major cell types, with high correlation between chromatin A/B compartments and gene expression. During development, both the transcriptome and the 3D genome are extensively transformed in the first month of life. In neurons, the 3D genome is rewired across scales, correlated with gene expression modules, and independent of sensory experience. Finally, I examined the allele-specific structure of imprinted genes, revealing local and chromosome-wide differences. More recently, I expanded my 3D genome atlas to the human and mouse cerebellum—the most consistently affected brain region in autism. I uncovered unique 3D genome rewiring throughout life, providing a structural basis for the cerebellum’s unique mode of development and aging. In addition, to accurately measure de novo mutations in a single cell, I developed a new method—multiplex end-tagging amplification of complementary strands (META-CS)—which eliminates nearly all false positives by virtue of DNA complementarity. Using META-CS, I determined the true mutation spectrum of single human brain cells, free from chemical artifacts. Together, my findings uncover a previously unknown dimension of neurodevelopment and open up opportunities for new treatments for autism and other developmental disorders.
Self-organized formation of discrete grid cell modules from smooth gradients
Modular structures in myriad forms — genetic, structural, functional — are ubiquitous in the brain. While modularization may be shaped by genetic instruction or extensive learning, the mechanisms of module emergence are poorly understood. Here, we explore complementary mechanisms in the form of bottom-up dynamics that push systems spontaneously toward modularization. As a paradigmatic example of modularity in the brain, we focus on the grid cell system. Grid cells of the mammalian medial entorhinal cortex (mEC) exhibit periodic lattice-like tuning curves in their encoding of space as animals navigate the world. Nearby grid cells have identical lattice periods, but at larger separations along the long axis of mEC the period jumps in discrete steps so that the full set of periods cluster into 5-7 discrete modules. These modules endow the grid code with many striking properties such as an exponential capacity to represent space and unprecedented robustness to noise. However, the formation of discrete modules is puzzling given that biophysical properties of mEC stellate cells (including inhibitory inputs from PV interneurons, time constants of EPSPs, intrinsic resonance frequency and differences in gene expression) vary smoothly in continuous topographic gradients along the mEC. How does discreteness in grid modules arise from continuous gradients? We propose a novel mechanism involving two simple types of lateral interaction that leads a continuous network to robustly decompose into discrete functional modules. We show analytically that this mechanism is a generic multi-scale linear instability that converts smooth gradients into discrete modules via a topological “peak selection” process. Further, this model generates detailed predictions about the sequence of adjacent period ratios, and explains existing grid cell data better than previous models. Thus, we contribute a robust new principle for bottom-up module formation in biology, and show that it might be leveraged by grid cells in the brain.
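To make the puzzle concrete: if each location's period were set purely by its local lateral-interaction scale, a smooth gradient would predict a smooth continuum of periods. The sketch below is a generic difference-of-Gaussians calculation with invented parameters, not the authors' peak-selection model; it computes the fastest-growing wavelength pointwise along a smooth gradient and illustrates that a purely local analysis yields smoothly drifting periods rather than discrete modules.

```python
import numpy as np

# Most-unstable wavelength of a difference-of-Gaussians (DoG) lateral kernel,
# evaluated pointwise along a smooth dorsoventral gradient in kernel width.
# Illustrates the puzzle only: a purely local analysis predicts a continuum of
# periods, not 5-7 discrete modules. Parameters are invented.

A, B = 1.0, 0.9                              # excitation / inhibition amplitudes (assumed)
sigma_i = np.linspace(0.02, 0.06, 200)       # smoothly varying inhibition width (a.u.)
sigma_e = 0.5 * sigma_i                      # excitation narrower than inhibition

k = np.linspace(1.0, 400.0, 4000)            # wavenumbers to test
periods = []
for se, si in zip(sigma_e, sigma_i):
    # Fourier transform of the DoG kernel (up to a constant factor)
    W_hat = A * se * np.exp(-(k * se) ** 2 / 2) - B * si * np.exp(-(k * si) ** 2 / 2)
    k_star = k[np.argmax(W_hat)]             # fastest-growing wavenumber at this location
    periods.append(2 * np.pi / k_star)

print(np.round(periods[::20], 3))            # periods drift smoothly along the gradient
```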
Space wrapped onto a grid cell torus
Entorhinal grid cells, so-called because of their hexagonally tiled spatial receptive fields, are organized in modules which, collectively, are believed to form a population code for the animal’s position. Here, we apply topological data analysis to simultaneous recordings of hundreds of grid cells and show that joint activity of grid cells within a module lies on a toroidal manifold. Each position of the animal in its physical environment corresponds to a single location on the torus, and each grid cell is preferentially active within a single “field” on the torus. Toroidal firing positions persist between environments, and between wakefulness and sleep, in agreement with continuous attractor models of grid cells.
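For readers unfamiliar with the method, here is a minimal sketch of the kind of topological data analysis involved, using the ripser package on a synthetic point cloud with toroidal structure as a stand-in for grid-cell population activity. It is not the authors' pipeline; the noise level, subsampling, and lifetime threshold are arbitrary choices.

```python
import numpy as np
from ripser import ripser   # pip install ripser

# Persistent homology on synthetic data with toroidal structure. A 2D torus has
# Betti numbers (1, 2, 1): one connected component, two independent loops, and
# one 2D cavity; long-lived features in the persistence diagrams approximate these.

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, (600, 2))            # two circular coordinates
# Embed the flat torus in 4D so distances respect both circles
cloud = np.column_stack([np.cos(theta[:, 0]), np.sin(theta[:, 0]),
                         np.cos(theta[:, 1]), np.sin(theta[:, 1])])
cloud += 0.05 * rng.standard_normal(cloud.shape)       # measurement noise

diagrams = ripser(cloud, maxdim=2, n_perm=250)["dgms"] # greedy subsampling for speed
for dim, dgm in enumerate(diagrams):
    lifetimes = dgm[:, 1] - dgm[:, 0]                  # the essential H0 class is infinite
    # Count features that persist well beyond the noise scale (heuristic threshold)
    print(f"H{dim}: {int(np.sum(lifetimes > 0.3))} long-lived feature(s)")
```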
SimBA for Behavioral Neuroscientists
Several excellent computational frameworks exist that enable high-throughput and consistent tracking of freely moving, unmarked animals. With SimBA, we introduce and distribute a plug-and-play pipeline that enables users to combine these pose-estimation approaches with behavioral annotation to generate supervised machine-learning behavioral classifiers. SimBA was developed for the analysis of complex social behaviors, but includes the flexibility for users to generate predictive classifiers for other behavioral modalities with minimal effort and no specialized computational background. SimBA has a variety of extended functions for large-scale batch video pre-processing, for generating descriptive statistics from movement features, and interactive modules for user-defined regions of interest and for visualizing classification probabilities and movement patterns.
Herbert Jasper Lecture
There is a long-standing tension between the notion that the hippocampal formation is essentially a spatial mapping system, and the notion that it plays an essential role in the establishment of episodic memory and the consolidation of such memory into structured knowledge about the world. One theory that resolves this tension is the notion that the hippocampus generates rather arbitrary 'index' codes that serve initially to link attributes of episodic memories that are stored in widely dispersed and only weakly connected neocortical modules. I will show how an essentially 'spatial' coding mechanism, with some tweaks, provides an ideal indexing system and discuss the neural coding strategies that the hippocampus apparently uses to overcome some biological constraints affecting the possibility of shipping the index code out widely to the neocortex. Finally, I will present new data suggesting that the hippocampal index code is indeed transferred to layer II-III of the neocortex.
Organization and control of hippocampal circuits in epilepsy
Basket cells are key GABAergic inhibitory interneurons that target the somata and proximal dendrites of their postsynaptic cells, enabling efficient control of the timing and rate of their spiking. In all cortical circuits, there are two major types of basket cell that exhibit striking developmental, molecular, anatomical, and physiological differences. In this talk, I will discuss recent results that reveal the tightly coupled complementarity of these two key microcircuit regulatory modules, demonstrating a novel form of brain-state-specific segregation of inhibition during spontaneous behavior, with implications for the assessment of dysregulated inhibition in epilepsy. In addition, I will describe recent advances in our understanding of the spatio-temporal dynamics of endocannabinoid signaling in hippocampal circuits and discuss how abnormal amplification of these activity-dependent signaling processes leads to surprising downstream effects in seizures.
Linking neural representations of space by multiple attractor networks in the entorhinal cortex and the hippocampus
In the past decade, evidence has accumulated in favor of the hypothesis that multiple sub-networks in the medial entorhinal cortex (MEC) are characterized by low-dimensional, continuous attractor dynamics. Much has been learned about the joint activity of grid cells within a module (a module consists of grid cells that share a common grid spacing), but little is known about the interactions between modules. Under typical conditions of spatial exploration, in which sensory cues are abundant, all grid cells in the MEC represent the animal’s position in space and their joint activity lies on a two-dimensional manifold. However, if the grid cells in each module mechanistically constitute an independent attractor network, then under conditions in which salient sensory cues are absent, errors could accumulate in the different modules in an uncoordinated manner. Such uncoordinated errors would give rise to catastrophic readout errors when attempting to decode position from the joint grid-cell activity. I will discuss recent theoretical work from our group, in which we explored different mechanisms that could impose coordination across the different modules. One of these mechanisms involves coordination with the hippocampus and must be set up such that it operates across multiple spatial maps that represent different environments. The other mechanism is internal to the entorhinal cortex and independent of the hippocampus.
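A toy numerical illustration of the readout problem that motivates this work (invented periods and noise levels, not the models discussed in the talk): when phase errors accumulate independently across two modules, the jointly decoded position can jump far from the true one and the average error inflates, whereas a shared, coordinated error only shifts the estimate slightly.

```python
import numpy as np

# Position is encoded by the phases of two grid modules. Independent phase drift
# across modules can produce catastrophic readout errors; a shared (coordinated)
# drift merely shifts the decoded position. All parameters are made up.

rng = np.random.default_rng(1)
periods = np.array([3.0, 4.2])               # module spacings (a.u.); combined period = 21
true_x = 10.0
candidates = np.arange(0.0, 21.0, 0.01)      # decoder searches one combined period

def decode(phases):
    # Position whose predicted phases best match the observed ones (all modules)
    err = np.zeros_like(candidates)
    for lam, ph in zip(periods, phases):
        d = (candidates / lam - ph) % 1.0
        err += np.minimum(d, 1.0 - d) ** 2    # squared circular phase distance
    return candidates[np.argmin(err)]

for case in ("independent", "coordinated"):
    errs = []
    for _ in range(200):
        if case == "independent":
            phases = (true_x / periods + 0.15 * rng.standard_normal(2)) % 1.0
        else:
            dx = 0.5 * rng.standard_normal()  # one shared path-integration error
            phases = ((true_x + dx) / periods) % 1.0
        errs.append(abs(decode(phases) - true_x))
    # Independent drift should give a much larger mean error than coordinated drift
    print(f"{case:>12} drift: mean decoding error {np.mean(errs):.2f} (a.u.)")
```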
Parallel ascending spinal pathways for affective touch and pain
Each day we experience myriad somatosensory stimuli: hugs from loved ones, warm showers, a mosquito bite, and sore muscles after a workout. These tactile, thermal, itch, and nociceptive signals are detected by peripheral sensory neuron terminals distributed throughout our body, propagated into the spinal cord, and then transmitted to the brain through ascending spinal pathways. Primary sensory neurons that detect a wide range of somatosensory stimuli have been identified and characterized. In contrast, very little is known about how peripheral signals are integrated and processed within the spinal cord and conveyed to the brain to generate somatosensory perception and behavioral responses. We tackled this question by developing new mouse genetic tools to define projection neuron (PN) subsets of the anterolateral pathway, a major ascending spinal cord pathway, and combining these new tools with advanced anatomical, physiological, and behavioral approaches. We found that Gpr83+ PNs, a newly identified subset of spinal cord output neurons, and Tacr1+ PNs are largely non-overlapping populations that innervate distinct sets of subnuclei within the lateral parabrachial nucleus (PBNL) of the pons in a zonally segregated manner. In addition, Gpr83+ PNs are highly sensitive to cutaneous mechanical stimuli, receive strong synaptic inputs from primary mechanosensory neurons, and convey tactile information bilaterally to the PBNL in a non-topographically organized manner. Remarkably, the Gpr83+ mechanosensory limb of the anterolateral pathway controls behaviors associated with different hedonic values (appetitive or aversive) in a scalable manner. This is the first study to identify a dedicated spinal cord output pathway that conveys affective touch signals to the brain and to define parallel ascending circuit modules that cooperate to convey tactile, thermal and noxious cutaneous signals from the spinal cord to the brain. This study has also revealed exciting new therapeutic opportunities for developing treatments for neurological disorders associated with pain and affective touch.
What can we further learn from the brain for artificial intelligence?
Deep learning is a prime example of how brain-inspired computing can benefit the development of artificial intelligence. But what else can we learn from the brain to bring AI and robotics to the next level? Energy efficiency and data efficiency are major features of the brain and human cognition that today’s deep learning has yet to deliver. The brain can be seen as a multi-agent system of heterogeneous learners using different representations and algorithms. The flexible use of reactive, model-free control and model-based “mental simulation” appears to be the basis for the computational and data efficiency of the brain. How the brain efficiently acquires and flexibly combines prediction and control modules is a major open problem in neuroscience, and its solution should help the development of more flexible and autonomous AI and robotics.
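A classic, concrete instance of combining reactive, model-free control with model-based mental simulation is Dyna-Q, sketched below on a made-up six-state chain task. This is a minimal illustration of the general idea, not a model proposed in the talk, and all parameters are arbitrary.

```python
import random
from collections import defaultdict

# Minimal Dyna-Q sketch: Q-learning from real experience (reactive, model-free)
# plus planning updates from a learned model ("mental simulation").

N_STATES, GOAL = 6, 5
ACTIONS = (-1, +1)                        # step left / right
alpha, gamma, eps, n_planning = 0.5, 0.95, 0.1, 10

Q = defaultdict(float)                    # model-free action values
model = {}                                # learned model: (s, a) -> (reward, next state)

def step(s, a):
    s2 = min(max(s + a, 0), N_STATES - 1)
    return (1.0 if s2 == GOAL else 0.0), s2

def greedy(s):
    best = max(Q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(s, a)] == best])  # random tie-break

random.seed(0)
for episode in range(50):
    s = 0
    while s != GOAL:
        a = random.choice(ACTIONS) if random.random() < eps else greedy(s)
        r, s2 = step(s, a)
        # Reactive, model-free update from real experience
        Q[(s, a)] += alpha * (r + gamma * Q[(s2, greedy(s2))] - Q[(s, a)])
        # Update the internal model, then "mentally simulate" with it
        model[(s, a)] = (r, s2)
        for _ in range(n_planning):
            (ps, pa), (pr, ps2) = random.choice(list(model.items()))
            Q[(ps, pa)] += alpha * (pr + gamma * Q[(ps2, greedy(ps2))] - Q[(ps, pa)])
        s = s2

print("learned policy:", {s: greedy(s) for s in range(N_STATES - 1)})  # should prefer +1
```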
Dynamics of specialization in neural modules under resource constraints
Bernstein Conference 2024
Rostro-caudal control of locomotor steering strategies by V2a reticulospinal modules
FENS Forum 2024
ModuleXplore: A user-friendly Shiny application to compare gene co-expression modules within and across transcriptomic datasets
Neuromatch 5