Modularity
Open Raman Microscopy (ORM): A modular Raman spectroscopy setup with an open-source controller
Raman spectroscopy is a powerful technique for identifying chemical species by probing their vibrational energy levels, offering exceptional specificity with a relatively simple setup involving a laser source, spectrometer, and microscope/probe. However, the high cost and limited modularity of commercial Raman systems often restrict exploratory research and hinder broader adoption. To address the need for an affordable, modular microscopy platform for multimodal imaging, we present a customizable confocal Raman spectroscopy setup alongside open-source acquisition software, the ORM (Open Raman Microscopy) Controller, developed in Python. This solution bridges the gap between expensive commercial systems and the complex, custom-built setups used by specialist research groups. In this presentation, we will cover the components of the setup, the design rationale, assembly methods, limitations, and the setup's modular potential for expanding functionality. Additionally, we will demonstrate ORM's capabilities for instrument control, 2D and 3D Raman mapping, region-of-interest selection, and its adaptability to various instrument configurations. We will conclude by showcasing practical applications of this setup across different research fields.
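To make the mapping workflow concrete, here is a minimal sketch of the raster-scan loop behind a 2D Raman map. The MockStage and MockSpectrometer classes are hypothetical stand-ins for hardware drivers, not the actual ORM Controller API.

```python
import numpy as np

# Minimal sketch of the raster-scan loop behind a 2D Raman map.
# MockStage and MockSpectrometer are hypothetical stand-ins for real
# hardware drivers; they are NOT the actual ORM Controller API.

class MockStage:
    def move_to(self, x_um, y_um):
        self.pos = (x_um, y_um)  # a real driver would command motors here

class MockSpectrometer:
    def acquire(self, integration_s, n_bins=1024):
        # Synthetic spectrum: shot noise around a flat baseline.
        return np.random.poisson(100 * integration_s, size=n_bins)

def raman_map(stage, spec, xs_um, ys_um, integration_s=0.5):
    """Visit every (x, y) grid point and stack the spectra into a
    (ny, nx, n_bins) hyperspectral cube."""
    cube = None
    for iy, y in enumerate(ys_um):
        for ix, x in enumerate(xs_um):
            stage.move_to(x, y)
            spectrum = spec.acquire(integration_s)
            if cube is None:
                cube = np.zeros((len(ys_um), len(xs_um), spectrum.size))
            cube[iy, ix] = spectrum
    return cube

cube = raman_map(MockStage(), MockSpectrometer(),
                 xs_um=np.linspace(0, 50, 11), ys_um=np.linspace(0, 50, 11))
print(cube.shape)  # (11, 11, 1024)
```

In this picture, a 3D map adds a z-stage axis to the same loop, and region-of-interest selection simply restricts the scan grid.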
Brain-Wide Compositionality and Learning Dynamics in Biological Agents
Biological agents continually reconcile the internal states of their brain circuits with incoming sensory and environmental evidence to evaluate when and how to act. The brains of biological agents, including animals and humans, exploit many evolutionary innovations, chiefly modularity—observable at the level of anatomically-defined brain regions, cortical layers, and cell types among others—that can be repurposed in a compositional manner to endow the animal with a highly flexible behavioral repertoire. Accordingly, their behaviors show their own modularity, yet such behavioral modules seldom correspond directly to traditional notions of modularity in brains. It remains unclear how to link neural and behavioral modularity in a compositional manner. We propose a comprehensive framework—compositional modes—to identify overarching compositionality spanning specialized submodules, such as brain regions. Our framework directly links the behavioral repertoire with distributed patterns of population activity, brain-wide, at multiple concurrent spatial and temporal scales. Using whole-brain recordings from zebrafish, we introduce an unsupervised pipeline based on neural network models, constrained by biological data, to reveal highly conserved compositional modes across individuals despite the naturalistic (spontaneous or task-independent) nature of their behaviors. These modes provide a scaffold for other modes that account for the idiosyncratic behavior of each fish. We then demonstrate experimentally that compositional modes can be manipulated in a consistent manner by behavioral and pharmacological perturbations. Our results demonstrate that even natural behavior in different individuals can be decomposed and understood using a relatively small number of neurobehavioral modules—the compositional modes—and elucidate a compositional neural basis of behavior. This approach aligns with recent progress in understanding how reasoning capabilities and internal representational structures develop over the course of learning or training, offering insights into the modularity and flexibility of artificial and biological agents.
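As a rough illustration of the decomposition idea (the actual pipeline uses biologically constrained neural networks, not plain SVD), the sketch below factorizes a synthetic neuron-by-time activity matrix into a few brain-wide modes and links their time courses to behavior with a linear readout. All sizes and variable names are illustrative.

```python
import numpy as np

# Illustrative stand-in for the mode-decomposition idea: factorize
# neuron-by-time activity into a few brain-wide modes and link their
# time courses to behavior. (The paper's pipeline uses constrained
# neural network models; SVD is used here only to convey the idea.)

rng = np.random.default_rng(0)
n_neurons, n_time, n_modes = 500, 2000, 5

# Synthetic data generated by a small number of latent modes.
latents = rng.standard_normal((n_modes, n_time))
loadings = rng.standard_normal((n_neurons, n_modes))
activity = loadings @ latents + 0.5 * rng.standard_normal((n_neurons, n_time))

# Candidate modes = top singular vectors of the activity matrix.
U, S, Vt = np.linalg.svd(activity, full_matrices=False)
mode_timecourses = Vt[:n_modes]          # one time course per mode

# Behavior as a linear combination of mode activity (least squares).
behavior = 0.8 * latents[0] + 0.2 * rng.standard_normal(n_time)
readout, *_ = np.linalg.lstsq(mode_timecourses.T, behavior, rcond=None)
print(readout.round(2))                  # readout weight on each mode
```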
Flexible multitask computation in recurrent networks utilizes shared dynamical motifs
Flexible computation is a hallmark of intelligent behavior. Yet, little is known about how neural networks contextually reconfigure for different computations. Humans are able to perform a new task without extensive training, presumably through the composition of elementary processes that were previously learned. Cognitive scientists have long hypothesized the possibility of a compositional neural code, where complex neural computations are made up of constituent components; however, the neural substrate underlying this structure remains elusive in biological and artificial neural networks. Here we identified an algorithmic neural substrate for compositional computation through the study of multitasking artificial recurrent neural networks. Dynamical systems analyses of networks revealed learned computational strategies that mirrored the modular subtask structure of the task-set used for training. Dynamical motifs such as attractors, decision boundaries and rotations were reused across different task computations. For example, tasks that required memory of a continuous circular variable repurposed the same ring attractor. We show that dynamical motifs are implemented by clusters of units and are reused across different contexts, allowing for flexibility and generalization of previously learned computation. Lesioning these clusters resulted in modular effects on network performance: a lesion that destroyed one dynamical motif only minimally perturbed the structure of other dynamical motifs. Finally, modular dynamical motifs could be reconfigured for fast transfer learning. After slow initial learning of dynamical motifs, a subsequent faster stage of learning reconfigured motifs to perform novel tasks. This work contributes to a more fundamental understanding of compositional computation underlying flexible general intelligence in neural systems. We present a conceptual framework that establishes dynamical motifs as a fundamental unit of computation, intermediate between the neuron and the network. As more whole brain imaging studies record neural activity from multiple specialized systems simultaneously, the framework of dynamical motifs will guide questions about specialization and generalization across brain regions.
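The dynamical-systems analyses referenced here typically locate fixed points and slow points by minimizing the speed of the dynamics. Below is a minimal sketch of that procedure for a generic vanilla RNN; the random weight matrix is a placeholder, not one of the trained multitask networks.

```python
import numpy as np

# Minimal sketch of fixed-point finding for a generic vanilla RNN,
#   dx/dt = F(x) = -x + tanh(W @ x),
# by gradient descent on the speed q(x) = 0.5 * ||F(x)||^2. Slow points
# with q near zero correspond to attractors and decision boundaries.
# W here is random; in the study it would come from a trained network.

rng = np.random.default_rng(1)
n = 50
W = rng.standard_normal((n, n)) / np.sqrt(n)

def F(x):
    return -x + np.tanh(W @ x)

def grad_q(x):
    # dq/dx = J_F(x).T @ F(x), with J_F = -I + diag(1 - tanh^2(Wx)) @ W
    h = np.tanh(W @ x)
    J = -np.eye(n) + (1 - h ** 2)[:, None] * W
    return J.T @ F(x)

x = rng.standard_normal(n)      # random initial state
for _ in range(5000):           # descend onto the slow manifold
    x -= 0.05 * grad_q(x)

print("speed q(x) =", 0.5 * np.sum(F(x) ** 2))  # ~0 at a fixed point
```

Running this from many initial states, and classifying the Jacobian at each slow point found, is what reveals the attractors, decision boundaries, and rotations described above.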
Modularity and Robustness of Frontal Cortical Networks
Nuo Li (Baylor College of Medicine, USA) shares novel insights into coordinated interhemispheric large-scale neural network activity underpinning short-term memory in mice. Relevant techniques covered include: simultaneous multi-regional recordings using multiple 64-channel H probes during head-fixed behavior in mice; simultaneous optogenetics and population recording; and analysis of population recordings to infer interactions between brain regions. Reference: Chen G, Kang B, Lindsey J, Druckmann S, Li N (2021). Modularity and robustness of frontal cortex networks. Cell, 184(14):3717-3730.
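As a rough sketch of what inferring interactions from simultaneous population recordings can look like (an illustrative stand-in, not the specific analysis of Chen et al. 2021), one can regress one region's activity on another region's past activity:

```python
import numpy as np

# Illustrative stand-in: infer a directed interaction by regressing
# region B's population activity at time t+1 on region A's activity
# at time t, using ridge regression. Not the analysis of Chen et al.

rng = np.random.default_rng(2)
T, nA, nB = 1000, 30, 30
A = rng.standard_normal((T, nA))                      # region A activity
coupling = 0.3 * rng.standard_normal((nA, nB))        # ground-truth A->B
B = np.vstack([np.zeros((1, nB)), A[:-1] @ coupling])
B += rng.standard_normal((T, nB))                     # private noise in B

lam = 1.0                                             # ridge penalty
X, Y = A[:-1], B[1:]
W = np.linalg.solve(X.T @ X + lam * np.eye(nA), X.T @ Y)

# The norm of W summarizes how strongly A's past activity predicts B.
print("estimated interaction strength:", np.linalg.norm(W).round(2))
```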
Flexible motor sequence generation by thalamic control of cortical dynamics through low-rank connectivity perturbations
One of the fundamental functions of the brain is to flexibly plan and control movement production at different timescales to efficiently shape structured behaviors. I will present a model that clarifies how these complex computations could be performed in the mammalian brain, with an emphasis on the learning of an extendable library of autonomous motor motifs and the flexible stringing of these motifs in motor sequences. To build this model, we took advantage of the fact that the anatomy of the circuits involved is well known. Our results show how these architectural constraints lead to a principled understanding of how strategically positioned plastic connections located within motif-specific thalamocortical loops can interact with cortical dynamics that are shared across motifs to create an efficient form of modularity. This occurs because the cortical dynamics can be controlled by the activation of as few as one thalamic unit, which induces a low-rank perturbation of the cortical connectivity, and significantly expands the range of outputs that the network can produce. Finally, our results show that transitions between any motifs can be facilitated by a specific thalamic population that participates in preparing cortex for the execution of the next motif. Taken together, our model sheds light on the neural network mechanisms that can generate flexible sequencing of varied motor motifs.
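The core mechanism can be sketched in a few lines: a thalamic unit that reads cortex out through one weight vector and projects back through another closes a loop equivalent to adding a rank-one term to the cortical connectivity. The vectors and parameters below are illustrative, not taken from the model.

```python
import numpy as np

# Sketch of the low-rank mechanism: a single thalamic unit reading
# cortex out through weights k and projecting back through weights b
# closes a loop equivalent to adding the rank-one term outer(b, k) to
# the cortical connectivity J. All vectors/parameters are illustrative.

rng = np.random.default_rng(3)
n = 100
J = rng.standard_normal((n, n)) / np.sqrt(n)   # shared cortical weights
b = rng.standard_normal(n) / np.sqrt(n)        # thalamo-cortical weights
k = rng.standard_normal(n) / np.sqrt(n)        # cortico-thalamic readout
x0 = 0.1 * rng.standard_normal(n)              # common initial state

def simulate(J_eff, T=200, dt=0.1):
    x = x0.copy()
    for _ in range(T):
        x = x + dt * (-x + J_eff @ np.tanh(x))
    return x

resting = simulate(J)                          # thalamic unit silent
engaged = simulate(J + np.outer(b, k))         # thalamic loop active

# Same cortical network, different effective dynamics and output.
print("state change:", np.linalg.norm(resting - engaged).round(2))
```

Switching which thalamic unit is active swaps in a different rank-one perturbation, which is how a small thalamic population can index a library of motifs sharing one cortical network.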
NMC4 Keynote: An all-natural deep recurrent neural network architecture for flexible navigation
A wide variety of animals and some artificial agents can adapt their behavior to changing cues, contexts, and goals. But what neural network architectures support such behavioral flexibility? Agents with loosely structured network architectures and random connections can be trained over millions of trials to display flexibility in specific tasks, but many animals must adapt and learn with much less experience just to survive. Further, it has been challenging to understand how the structure of trained deep neural networks relates to their functional properties, an important objective for neuroscience. In my talk, I will use a combination of behavioral, physiological and connectomic evidence from the fly to make the case that the built-in modularity and structure of its networks incorporate key aspects of the animal’s ecological niche, enabling rapid flexibility by constraining learning to operate on a restricted parameter set. This is likely also a feature of many biological neural networks across other animals, large and small, with and without vertebrae.
Representation transfer and signal denoising through topographic modularity
To prevail in a dynamic and noisy environment, the brain must create reliable and meaningful representations from sensory inputs that are often ambiguous or corrupt. Since only information that permeates the cortical hierarchy can influence sensory perception and decision-making, it is critical that noisy external stimuli are encoded and propagated through different processing stages with minimal signal degradation. Here we hypothesize that stimulus-specific pathways akin to cortical topographic maps may provide the structural scaffold for such signal routing. We investigate whether the feature-specific pathways within such maps, characterized by the preservation of the relative organization of cells between distinct populations, can guide and route stimulus information throughout the system while retaining representational fidelity. We demonstrate that, in a large modular circuit of spiking neurons comprising multiple sub-networks, topographic projections are not only necessary for accurate propagation of stimulus representations, but can also help the system reduce sensory and intrinsic noise. Moreover, by regulating the effective connectivity and local E/I balance, modular topographic precision enables the system to gradually improve its internal representations and increase signal-to-noise ratio as the input signal passes through the network. Such a denoising function arises beyond a critical transition point in the sharpness of the feed-forward projections, and is characterized by the emergence of inhibition-dominated regimes where population responses along stimulated maps are amplified and others are weakened. Our results indicate that this is a generalizable and robust structural effect, largely independent of the underlying model specificities. Using mean-field approximations, we gain deeper insight into the mechanisms responsible for the qualitative changes in the system’s behavior and show that these depend only on the modular topographic connectivity and stimulus intensity. The general dynamical principle revealed by the theoretical predictions suggests that such a denoising property may be a universal, system-agnostic feature of topographic maps, and may lead to a wide range of behaviorally relevant regimes observed under various experimental conditions: maintaining stable representations of multiple stimuli across cortical circuits; amplifying certain features while suppressing others (winner-take-all circuits); and endowing circuits with metastable dynamics (winnerless competition), assumed to be fundamental in a variety of tasks.
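A toy rate-model version of this idea (a stand-in for the spiking model, with a single sharpness parameter for topographic precision) shows how sharper feature-specific projections preserve a stimulated group's contrast across stages while within-group pooling averages out unit-level noise:

```python
import numpy as np

# Toy rate-model stand-in for the spiking network: a feed-forward
# hierarchy of stages, each holding M feature-specific groups. The
# `sharpness` parameter is the fraction of feed-forward weight that
# stays within a group (topographic precision). All names and values
# are illustrative, not from the paper's model.

rng = np.random.default_rng(4)
M, per_group, stages = 4, 50, 5
N = M * per_group
group = np.repeat(np.arange(M), per_group)

stimulus = (group == 0).astype(float)            # drive feature group 0
base = stimulus + 0.5 * rng.standard_normal(N)   # add sensory noise

def ff_weights(sharpness):
    same = (group[:, None] == group[None, :]).astype(float)
    W = sharpness * same + (1 - sharpness) * (1 - same) / (M - 1)
    return W / per_group                         # rows average their inputs

def contrast(sharpness):
    r = base.copy()
    for _ in range(stages):                      # relay through the hierarchy
        r = np.maximum(ff_weights(sharpness) @ r, 0)
    # Within-group pooling removes unit-level noise (denoising); the
    # contrast measures how much stimulus-specific signal survives.
    return r[group == 0].mean() - r[group != 0].mean()

for s in (0.3, 0.6, 0.9):
    print(f"sharpness={s}: contrast={contrast(s):.3f}")
```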
Self-organized formation of discrete grid cell modules from smooth gradients
Modular structures in myriad forms — genetic, structural, functional — are ubiquitous in the brain. While modularization may be shaped by genetic instruction or extensive learning, the mechanisms of module emergence are poorly understood. Here, we explore complementary mechanisms in the form of bottom-up dynamics that push systems spontaneously toward modularization. As a paradigmatic example of modularity in the brain, we focus on the grid cell system. Grid cells of the mammalian medial entorhinal cortex (mEC) exhibit periodic lattice-like tuning curves in their encoding of space as animals navigate the world. Nearby grid cells have identical lattice periods, but at larger separations along the long axis of mEC the period jumps in discrete steps so that the full set of periods cluster into 5-7 discrete modules. These modules endow the grid code with many striking properties such as an exponential capacity to represent space and unprecedented robustness to noise. However, the formation of discrete modules is puzzling given that biophysical properties of mEC stellate cells (including inhibitory inputs from PV interneurons, time constants of EPSPs, intrinsic resonance frequency and differences in gene expression) vary smoothly in continuous topographic gradients along the mEC. How does discreteness in grid modules arise from continuous gradients? We propose a novel mechanism involving two simple types of lateral interaction that leads a continuous network to robustly decompose into discrete functional modules. We show analytically that this mechanism is a generic multi-scale linear instability that converts smooth gradients into discrete modules via a topological “peak selection” process. Further, this model generates detailed predictions about the sequence of adjacent period ratios, and explains existing grid cell data better than previous models. Thus, we contribute a robust new principle for bottom-up module formation in biology, and show that it might be leveraged by grid cells in the brain.
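As a generic toy example of gradient-to-module conversion (emphatically not the paper's grid-cell model), one can simulate a 1D pattern-forming field whose preferred wavelength varies smoothly along the axis; because a coherent lattice resists continuous stretching, the realized local period tends to adjust in steps rather than track the gradient smoothly:

```python
import numpy as np

# Generic toy example of gradient-to-module conversion (NOT the paper's
# grid-cell model): a 1D Swift-Hohenberg-like pattern-forming field
# whose preferred wavenumber k0(x) increases smoothly along the axis.
# A coherent lattice resists continuous stretching, so the realized
# local period tends to change in steps. Whether clean plateaus emerge
# depends on parameters; this is a qualitative illustration only.

N, L = 200, 100.0
dt, steps = 0.002, 30000
x = np.linspace(0, L, N, endpoint=False)
dx = x[1] - x[0]
k0 = 1.0 + 0.5 * x / L          # smooth gradient in preferred wavenumber

rng = np.random.default_rng(5)
u = 0.01 * rng.standard_normal(N)

def lap(f):                     # periodic finite-difference Laplacian
    return (np.roll(f, 1) - 2 * f + np.roll(f, -1)) / dx ** 2

for _ in range(steps):
    # du/dt = [r - (1 + (1/k0^2) d^2/dx^2)^2] u - u^3, with r = 0.3
    g = u + lap(u) / k0 ** 2
    u += dt * (0.3 * u - (g + lap(g) / k0 ** 2) - u ** 3)

# Estimate the local period from zero-crossing spacing; runs of
# near-identical values correspond to module-like plateaus.
crossings = x[:-1][np.diff(np.sign(u)) != 0]
print(np.round(2 * np.diff(crossings), 2))
```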
Modularity of attractors in inhibition-dominated TLNs
Threshold-linear networks (TLNs) display a wide variety of nonlinear dynamics including multistability, limit cycles, quasiperiodic attractors, and chaos. Over the past few years, we have developed a detailed mathematical theory relating stable and unstable fixed points of TLNs to graph-theoretic properties of the underlying network. In particular, we have discovered that a special type of unstable fixed points, corresponding to "core motifs," are predictive of dynamic attractors. Recently, we have used these ideas to classify dynamic attractors in a two-parameter family of inhibition-dominated TLNs spanning all 9608 directed graphs of size n=5. Remarkably, we find a striking modularity in the dynamic attractors, with identical or near-identical attractors arising in networks that are otherwise dynamically inequivalent. This suggests that, just as one can store multiple static patterns as stable fixed points in a Hopfield model, a variety of dynamic attractors can also be embedded in a TLN in a modular fashion.
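A concrete instance of this family is the combinatorial TLN (CTLN), where the weight matrix is built from a directed graph using commonly cited parameter values; the directed 3-cycle is the classic core motif supporting a limit-cycle attractor. The simulation below is a minimal sketch under those assumptions.

```python
import numpy as np

# Sketch of a combinatorial TLN (CTLN) built from a directed graph with
# commonly used parameters (eps=0.25, delta=0.5, theta=1):
#   dx/dt = -x + [W @ x + theta]_+ .
# The 3-cycle graph supports a limit-cycle attractor (a core motif).

eps, delta, theta = 0.25, 0.5, 1.0
edges = {(0, 1), (1, 2), (2, 0)}           # directed 3-cycle: 0->1->2->0
n = 3
W = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if i != j:
            W[i, j] = -1 + eps if (j, i) in edges else -1 - delta

dt, T = 0.01, 5000
x = np.array([0.2, 0.0, 0.0])              # break symmetry
traj = np.zeros((T, n))
for t in range(T):
    x = x + dt * (-x + np.maximum(W @ x + theta, 0))
    traj[t] = x

# The "winning" unit cycles 0 -> 1 -> 2 -> 0 along the limit cycle.
print(traj[-1000:].argmax(axis=1)[::50])
```

Embedding several such motifs in a larger graph is the modular storage of dynamic attractors described above, analogous to storing static patterns as stable fixed points in a Hopfield network.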
Modularity of the human connectome enables dual attentional modes by frustrating synchronization
Bernstein Conference 2024
Hierarchical modularity in Drosophila brain reveals novel organizational principles
COSYNE 2022
Modularity emerges in neural networks trained to perform context-dependent behavior
COSYNE 2023
Computation with neuronal cultures: Effects of connectivity modularity on response separation and generalisation in simulations and experiments
FENS Forum 2024