Attractors
The Brain Prize winners' webinar
This webinar brings together three leaders in theoretical and computational neuroscience—Larry Abbott, Haim Sompolinsky, and Terry Sejnowski—to discuss how neural circuits generate fundamental aspects of the mind. Abbott illustrates mechanisms in electric fish that differentiate self-generated electric signals from external sensory cues, showing how predictive plasticity and two-stage signal cancellation mediate a sense of self. Sompolinsky explores attractor networks, revealing how discrete and continuous attractors can stabilize activity patterns, enable working memory, and incorporate chaotic dynamics underlying spontaneous behaviors. He further highlights the concept of object manifolds in high-level sensory representations and raises open questions on integrating connectomics with theoretical frameworks. Sejnowski bridges these motifs with modern artificial intelligence, demonstrating how large-scale neural networks capture language structures through distributed representations that parallel biological coding. Together, their presentations emphasize the synergy between empirical data, computational modeling, and connectomics in explaining the neural basis of cognition—offering insights into perception, memory, language, and the emergence of mind-like processes.
Associative memory of structured knowledge
A long-standing challenge in biological and artificial intelligence is to understand how new knowledge can be constructed from known building blocks in a way that is amenable to computation by neuronal circuits. Here we focus on the task of storage and recall of structured knowledge in long-term memory. Specifically, we ask how recurrent neuronal networks can store and retrieve multiple knowledge structures. We model each structure as a set of binary relations between events and attributes (attributes may represent, e.g., temporal order, spatial location, or role in a semantic structure), and map each structure to a distributed neuronal activity pattern using a vector symbolic architecture (VSA) scheme. We then use associative memory plasticity rules to store the binarized patterns as fixed points in a recurrent network. By a combination of signal-to-noise analysis and numerical simulations, we demonstrate that our model allows for efficient storage of these knowledge structures, such that the memorized structures, as well as their individual building blocks (e.g., events and attributes), can subsequently be retrieved from partial cues. We show that long-term memory of structured knowledge relies on a new principle of computation that goes beyond attraction to memory basins. Finally, we show that our model can be extended to store sequences of memories as single attractors.
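A minimal sketch of this kind of scheme, assuming dense bipolar codevectors, Hadamard-product (element-wise) binding, and a Hopfield-style outer-product storage rule; the paper's actual VSA scheme and plasticity rule may differ. A knowledge structure is built by binding attribute-event pairs, binarized, stored as a fixed point, recalled from a corrupted cue, and its building blocks recovered by unbinding.

```python
# Illustrative sketch (not the authors' exact model): Hadamard-binding VSA plus a
# Hopfield-style recurrent memory that stores the binarized structure as a fixed point.
import numpy as np

rng = np.random.default_rng(0)
N = 2000                       # vector dimension / number of neurons
n_pairs = 3                    # attribute-event pairs per knowledge structure (odd, so the sum never ties)

def rand_sign(n):              # dense bipolar (+1/-1) codevectors
    return rng.choice([-1.0, 1.0], size=n)

# A structure = sum of bound (attribute ⊙ event) pairs, then binarized.
attributes = [rand_sign(N) for _ in range(n_pairs)]
events     = [rand_sign(N) for _ in range(n_pairs)]
structure  = np.sign(sum(a * e for a, e in zip(attributes, events)))

# Store the structure as a fixed point with a Hebbian outer-product rule.
W = np.outer(structure, structure) / N
np.fill_diagonal(W, 0.0)

# Retrieval from a corrupted cue by iterating the recurrent dynamics.
cue = structure.copy()
flip = rng.choice(N, size=N // 4, replace=False)
cue[flip] *= -1                # corrupt 25% of the entries
x = cue
for _ in range(10):
    x = np.sign(W @ x)
print("overlap with stored structure:", float(x @ structure) / N)

# Unbinding an individual building block: multiplying by an attribute vector gives a
# noisy estimate of the bound event, to be cleaned up against a codebook of known events.
event_estimate = np.sign(x * attributes[0])
print("overlap with event 0:", float(event_estimate @ events[0]) / N)
```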
Internally Organized Abstract Task Maps in the Mouse Medial Frontal Cortex
New tasks are often similar in structure to old ones. Animals that take advantage of such conserved, or “abstract”, task structures can master new tasks with minimal training. To understand the neural basis of this abstraction, we developed a novel behavioural paradigm for mice, the “ABCD” task, and recorded from medial frontal neurons as the animals learned. Animals learned multiple tasks in which they had to visit four rewarded locations on a spatial maze in sequence, defining a sequence of four “task states” (ABCD). Tasks shared the same circular transition structure (…ABCDABCD…) but differed in the spatial arrangement of rewards. As well as improving across tasks, mice inferred that A followed D (i.e., completed the loop) on the very first trial of a new task. This “zero-shot inference” is only possible if animals had learned the abstract structure of the task. Across tasks, individual medial frontal cortex (mFC) neurons maintained their tuning to the phase of an animal’s trajectory between rewards but not their tuning to task states, even in the absence of spatial tuning. Intriguingly, groups of mFC neurons formed modules of coherently remapping neurons that maintained their tuning relationships across tasks. Such tuning relationships were expressed as replay/preplay during sleep, consistent with an internal organisation of activity into multiple, task-matched ring attractors. Remarkably, these modules were anchored to spatial locations: neurons were tuned to specific task-space “distances” from a particular spatial location. These newly discovered “Spatially Anchored Task clocks” (SATs) suggest a novel algorithm for solving abstraction tasks. Using computational modelling, we show that SATs can perform zero-shot inference on new tasks in the absence of plasticity and guide optimal policy in the absence of continual planning. These findings provide novel insights into the frontal mechanisms mediating abstraction and flexible behaviour.
The Secret Bayesian Life of Ring Attractor Networks
Efficient navigation requires animals to track their position, velocity and heading direction (HD). Some animals’ behavior suggests that they also track uncertainties about these navigational variables, and make strategic use of these uncertainties, in line with a Bayesian computation. Ring attractor networks have been proposed to estimate and track these navigational variables, for instance in the HD system of the fruit fly Drosophila. However, such networks are not designed to incorporate a notion of uncertainty, and therefore seem unsuited to implement dynamic Bayesian inference. Here, we close this gap by showing that specifically tuned ring attractor networks can track both an HD estimate and its associated uncertainty, thereby approximating a circular Kalman filter. We identified the network motifs required to integrate angular velocity observations, e.g., through self-initiated turns, and absolute HD observations, e.g., visual landmark inputs, according to their respective reliabilities, and showed that these network motifs are present in the connectome of the Drosophila HD system. Specifically, our network encodes uncertainty in the amplitude of a localized bump of neural activity, thereby generalizing standard ring attractor models. In contrast to such standard attractors, however, proper Bayesian inference requires the network dynamics to operate in a regime away from the attractor state. More generally, we show that near-Bayesian integration is inherent in generic ring attractor networks, and that their amplitude dynamics can account for close-to-optimal reliability weighting of external evidence for a wide range of network parameters. This only holds, however, if their connection strengths allow the network to sufficiently deviate from the attractor state. Overall, our work offers a novel interpretation of ring attractor networks as implementing dynamic Bayesian integrators. We further provide a principled theoretical foundation for the suggestion that the Drosophila HD system may implement Bayesian HD tracking via ring attractor dynamics.
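The underlying computation can be illustrated without the network: combining a heading estimate with a landmark cue on a circle, weighted by reliability, amounts to adding certainty-weighted unit vectors (the product rule for von Mises densities). In the proposed network picture, the bump's phase plays the role of the heading estimate and its amplitude the role of the certainty. A minimal sketch with illustrative numbers:

```python
# Hedged sketch: reliability-weighted cue combination on a circle, the computation a
# ring attractor with an uncertainty-encoding bump amplitude would approximate.
import numpy as np

def to_vec(theta, kappa):
    """Circular estimate as a 2D vector: direction = mean heading, length = certainty."""
    return kappa * np.array([np.cos(theta), np.sin(theta)])

def from_vec(v):
    return np.arctan2(v[1], v[0]), np.linalg.norm(v)

# Internal heading estimate (e.g., integrated self-motion) and its certainty.
theta_hat, kappa_hat = np.deg2rad(40.0), 2.0
# Landmark observation and its reliability.
theta_obs, kappa_obs = np.deg2rad(90.0), 6.0

# Bayesian-style update: add the vectors; the more reliable cue dominates the posterior
# direction, and the resultant length (bump amplitude) tracks the total certainty.
theta_post, kappa_post = from_vec(to_vec(theta_hat, kappa_hat) + to_vec(theta_obs, kappa_obs))
print(f"posterior heading ≈ {np.rad2deg(theta_post):.1f}°, certainty ≈ {kappa_post:.2f}")
```

In the full dynamic setting, angular-velocity input rotates the bump while landmark input pulls it toward the observed direction, each weighted by the corresponding certainty.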
Flexible multitask computation in recurrent networks utilizes shared dynamical motifs
Flexible computation is a hallmark of intelligent behavior. Yet, little is known about how neural networks contextually reconfigure for different computations. Humans are able to perform a new task without extensive training, presumably through the composition of elementary processes that were previously learned. Cognitive scientists have long hypothesized the possibility of a compositional neural code, where complex neural computations are made up of constituent components; however, the neural substrate underlying this structure remains elusive in biological and artificial neural networks. Here we identified an algorithmic neural substrate for compositional computation through the study of multitasking artificial recurrent neural networks. Dynamical systems analyses of networks revealed learned computational strategies that mirrored the modular subtask structure of the task-set used for training. Dynamical motifs such as attractors, decision boundaries and rotations were reused across different task computations. For example, tasks that required memory of a continuous circular variable repurposed the same ring attractor. We show that dynamical motifs are implemented by clusters of units and are reused across different contexts, allowing for flexibility and generalization of previously learned computation. Lesioning these clusters resulted in modular effects on network performance: a lesion that destroyed one dynamical motif only minimally perturbed the structure of other dynamical motifs. Finally, modular dynamical motifs could be reconfigured for fast transfer learning. After slow initial learning of dynamical motifs, a subsequent faster stage of learning reconfigured motifs to perform novel tasks. This work contributes to a more fundamental understanding of compositional computation underlying flexible general intelligence in neural systems. We present a conceptual framework that establishes dynamical motifs as a fundamental unit of computation, intermediate between the neuron and the network. As more whole brain imaging studies record neural activity from multiple specialized systems simultaneously, the framework of dynamical motifs will guide questions about specialization and generalization across brain regions.
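A minimal sketch of the standard fixed-point analysis used to expose such dynamical motifs, applied to a vanilla rate RNN with random weights standing in for a trained multitask network (the study's trained networks and task set are not reproduced here): candidate fixed or slow points are found by minimizing the squared speed of the dynamics from many initial conditions, and the local linearization classifies each one.

```python
# Illustrative fixed/slow-point search for x' = -x + tanh(Wx); W is a placeholder
# for trained weights. Tiny speeds mark genuine fixed points; eigenvalues of the
# Jacobian with positive real part count unstable directions (saddles, rotations, ...).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
N = 50
W = rng.normal(0, 1.5 / np.sqrt(N), (N, N))      # stand-in for a trained network

def speed(x):
    dx = -x + np.tanh(W @ x)
    return 0.5 * float(dx @ dx)

candidates = []
for _ in range(15):
    res = minimize(speed, rng.normal(0, 1, N), method="L-BFGS-B")
    candidates.append((res.fun, res.x))
candidates.sort(key=lambda c: c[0])              # slowest points first

for q, x_star in candidates[:5]:
    # Jacobian of the dynamics at the candidate point: -I + diag(1 - tanh^2) W
    J = -np.eye(N) + (1.0 - np.tanh(W @ x_star) ** 2)[:, None] * W
    n_unstable = int(np.sum(np.linalg.eigvals(J).real > 0))
    print(f"candidate point, speed {q:.2e}, {n_unstable} unstable direction(s)")
```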
Towards model-based control of active matter: active nematics and oscillator networks
The richness of active matter's spatiotemporal patterns continues to capture our imagination. Shaping these emergent dynamics into pre-determined forms of our choosing is a grand challenge in the field. To complicate matters, multiple dynamical attractors can coexist in such systems, leading to initial-condition-dependent dynamics. Consequently, non-trivial spatiotemporal inputs are generally needed to access these states. Optimal control theory provides a general framework for identifying such inputs and represents a promising computational tool for guiding experiments and interacting with various systems in soft active matter and biology. As an exemplar, I first consider an extensile active nematic fluid confined to a disk. In the absence of control, the system produces two topological defects that perpetually circulate. Optimal control identifies a time-varying active stress field that restructures the director field, flipping the system to its other attractor, which rotates in the opposite direction. As a second, analogous case, I examine a small network of coupled Belousov-Zhabotinsky chemical oscillators that possesses two dominant attractors: two wave states of opposing chirality. Optimal control similarly achieves the task of attractor switching. I conclude with a few forward-looking remarks on how the same model-based control approach might be brought to bear on problems in biology.
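The control strategy can be illustrated on a far simpler bistable system than an active nematic or a BZ network: a one-dimensional state x' = x - x^3 + u(t) with attractors at x = ±1, where direct shooting with an adjoint gradient finds a control that switches attractors at modest control cost. All values below are illustrative.

```python
# Toy attractor-switching problem solved by direct shooting with an adjoint gradient
# (not the talk's active-matter models). Cost = terminal miss + control energy.
import numpy as np
from scipy.optimize import minimize

T, dt = 6.0, 0.05
n = int(T / dt)
lam = 0.05                        # weight on control energy

def cost_and_grad(u):
    # forward simulation of x' = x - x^3 + u, starting in the x = -1 attractor
    x = np.empty(n + 1); x[0] = -1.0
    for k in range(n):
        x[k + 1] = x[k] + dt * (x[k] - x[k] ** 3 + u[k])
    J = (x[-1] - 1.0) ** 2 + lam * dt * np.sum(u ** 2)
    # backward (adjoint) pass: gradient of J with respect to each control knot
    g = np.empty(n)
    p = 2.0 * (x[-1] - 1.0)
    for k in range(n - 1, -1, -1):
        g[k] = dt * p + 2.0 * lam * dt * u[k]
        p *= 1.0 + dt * (1.0 - 3.0 * x[k] ** 2)
    return J, g

res = minimize(cost_and_grad, np.zeros(n), jac=True, method="L-BFGS-B")

# Re-simulate with the optimized control to check that the state ends near +1.
u, x = res.x, -1.0
for k in range(n):
    x += dt * (x - x ** 3 + u[k])
print(f"terminal state {x:.3f} (target +1), peak control {np.abs(u).max():.2f}")
```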
Nonequilibrium self-assembly and time-irreversibility in living systems
Far-from-equilibrium processes constantly dissipate energy while converting a free-energy source into another form of energy. Living systems, for example, rely on an orchestra of molecular motors that consume chemical fuel to produce mechanical work. In this talk, I will describe two features of life: time irreversibility and nonequilibrium self-assembly. Time irreversibility is the hallmark of nonequilibrium dissipative processes. Detecting dissipation is essential for our basic understanding of the underlying physical mechanism; however, it remains a challenge in the absence of observable directed motion, flows, or fluxes. An additional difficulty arises in complex systems where many internal degrees of freedom are inaccessible to an external observer. I will introduce a novel approach to detect time irreversibility and estimate the entropy production from time-series measurements, even in the absence of observable currents. This method can be implemented in scenarios where only partial information is available and thus provides a new tool for studying nonequilibrium phenomena. Further, I will explore the added benefits achieved by nonequilibrium driving for self-assembly, identify distinctive collective phenomena that emerge in a nonequilibrium self-assembly setting, and demonstrate the interplay between the assembly speed, kinetic stability, and relative population of dynamical attractors.
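A minimal sketch of one standard irreversibility estimator (not necessarily the talk's method for partially observed systems): for a discrete-state time series, the Kullback-Leibler divergence rate between forward and time-reversed transition statistics lower-bounds the entropy production.

```python
# Plug-in entropy-production estimate for a discrete-state trajectory:
# sigma >= sum_ij p(i->j) log[ p(i->j) / p(j->i) ], with p the empirical joint
# transition distribution. The chain below is driven around a cycle, so it has a
# hidden probability current and a positive entropy production.
import numpy as np

rng = np.random.default_rng(2)
P = np.array([[0.1, 0.8, 0.1],     # 1 -> 2 -> 3 -> 1 favoured
              [0.1, 0.1, 0.8],
              [0.8, 0.1, 0.1]])

T = 100_000
s = np.empty(T, dtype=int); s[0] = 0
for t in range(1, T):
    s[t] = rng.choice(3, p=P[s[t - 1]])

# Empirical joint transition frequencies p(i -> j).
C = np.zeros((3, 3))
np.add.at(C, (s[:-1], s[1:]), 1)
p = C / C.sum()

mask = (p > 0) & (p.T > 0)
sigma = np.sum(p[mask] * np.log(p[mask] / p.T[mask]))
print(f"estimated entropy production ≈ {sigma:.3f} nats per step")
```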
Modularity of attractors in inhibition-dominated TLNs
Threshold-linear networks (TLNs) display a wide variety of nonlinear dynamics including multistability, limit cycles, quasiperiodic attractors, and chaos. Over the past few years, we have developed a detailed mathematical theory relating stable and unstable fixed points of TLNs to graph-theoretic properties of the underlying network. In particular, we have discovered that a special type of unstable fixed points, corresponding to "core motifs," are predictive of dynamic attractors. Recently, we have used these ideas to classify dynamic attractors in a two-parameter family of inhibition-dominated TLNs spanning all 9608 directed graphs of size n=5. Remarkably, we find a striking modularity in the dynamic attractors, with identical or near-identical attractors arising in networks that are otherwise dynamically inequivalent. This suggests that, just as one can store multiple static patterns as stable fixed points in a Hopfield model, a variety of dynamic attractors can also be embedded in a TLN in a modular fashion.
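A minimal sketch of the combinatorial threshold-linear network (CTLN) setup, assuming the standard parameterization used in this line of work, with the directed 3-cycle as the core motif: as the theory predicts, the network falls into a limit-cycle attractor with sequential peaks rather than a stable fixed point.

```python
# Illustrative CTLN simulation: x' = -x + [Wx + b]_+, with W_ij = -1 + eps if j -> i
# and -1 - delta otherwise (eps < delta/(delta+1)). The graph is the 3-cycle 1->2->3->1.
import numpy as np

eps, delta, b = 0.25, 0.5, 1.0
A = np.array([[0, 0, 1],          # A[i, j] = 1 means an edge j -> i
              [1, 0, 0],
              [0, 1, 0]])
W = np.where(A == 1, -1 + eps, -1 - delta)
np.fill_diagonal(W, 0.0)

dt, T = 0.01, 60.0
x = np.array([0.3, 0.1, 0.0])
trace = []
for _ in range(int(T / dt)):
    x = x + dt * (-x + np.maximum(W @ x + b, 0.0))
    trace.append(x.copy())
trace = np.array(trace)

# Over the last 20 time units the most-active neuron cycles 1 -> 2 -> 3 -> 1,
# i.e. a dynamic (limit cycle) attractor predicted by the 3-cycle core motif.
winners = np.argmax(trace[-2000:], axis=1)
print("sequence of peaking neurons:", winners[np.r_[True, np.diff(winners) != 0]])
```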
Stability-Flexibility Dilemma in Cognitive Control: A Dynamical System Perspective
Constraints on control-dependent processing have become a fundamental concept in general theories of cognition that explain human behavior in terms of rational adaptations to these constraints. However, such theories lack a rationale for why these constraints would exist in the first place. Recent work suggests that constraints on the allocation of control facilitate flexible task switching at the expense of the stability needed to support goal-directed behavior in the face of distraction. We formulate this problem in a dynamical system in which control signals are represented as attractors and in which constraints on control allocation limit the depth of these attractors. We derive formal expressions of the stability-flexibility tradeoff, showing that constraints on control allocation improve cognitive flexibility but impair cognitive stability. We provide evidence that human participants adopt higher constraints on the allocation of control as the demand for flexibility increases, but that participants deviate from optimal constraints. In ongoing work, we are investigating how the collaborative performance of a group of individuals can benefit from individual differences in the balance between cognitive stability and flexibility.
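A toy illustration of the tradeoff (not the paper's model): a one-dimensional control state in a double-well potential whose depth is set by a gain parameter. Deeper attractors resist distraction (stability) but take longer to switch, or fail to switch altogether, when the task cue reverses (flexibility). Values are illustrative.

```python
# Stability-flexibility tradeoff in a double well V(x) = g (x^2 - 1)^2 / 4:
# dx/dt = g (x - x^3) + input, with task attractors at x = ±1 and depth set by g.
import numpy as np

def switch_time(g, cue=0.6, dt=0.01, t_max=200.0):
    """Time for the control state to cross 0 after the task cue starts favouring +1."""
    x = -1.0
    for k in range(int(t_max / dt)):
        x += dt * (g * (x - x ** 3) + cue)
        if x > 0.0:
            return (k + 1) * dt
    return np.inf                               # too deep: the task never switches

def tolerated_distractor(g, dt=0.01, t_dist=20.0):
    """Largest sustained distractor input that fails to dislodge the x = -1 task state."""
    for d in np.arange(0.0, 3.0, 0.05):
        x = -1.0
        for _ in range(int(t_dist / dt)):
            x += dt * (g * (x - x ** 3) + d)
        if x > 0.0:                             # the distractor flipped the task state
            return d - 0.05
    return 3.0

for g in (0.5, 1.0, 1.5, 2.5):
    print(f"attractor depth {g:3.1f}: switch time {switch_time(g):7.2f}, "
          f"tolerated distractor {tolerated_distractor(g):4.2f}")
```

Shallow attractors switch quickly but tolerate only weak distraction; deep attractors tolerate strong distraction but switch slowly or not at all.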
Human Single-Neuron recordings reveal neuronal mechanisms of Working Memory
Working memory (WM) is a fundamental human cognitive capacity that allows us to maintain and manipulate information stored for a short period of time in an active form. Thanks to the unique opportunity to record the activity of single neurons in humans during epilepsy monitoring, we could test neuronal mechanisms of this cognitive capacity. We showed that the firing rate of image-selective neurons in the medial temporal lobe persists through the maintenance period of a working memory task. This activity was behaviorally relevant and formed attractors in its state space. Furthermore, we showed that the firing of these neurons phase-locks to ongoing slow-frequency oscillations. The properties of this phase locking depend on memory content and load: during high memory loads, the phase of the oscillatory activity to which neurons phase-lock provides information about memory content that is not available in the firing rate of the neurons.
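A minimal sketch of the phase-locking analysis behind such claims, run on synthetic data (no patient data are reproduced here): spikes generated preferentially near one phase of a slow oscillation yield a mean resultant vector whose length measures locking strength and whose angle gives the preferred phase.

```python
# Spike-phase locking on simulated data: extract the instantaneous phase of a slow
# "LFP" with the Hilbert transform, then compute the mean resultant vector of the
# phases at spike times (phase-locking value and preferred phase).
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(6)
fs, dur, f_osc = 1000, 60.0, 4.0                      # Hz, s, Hz
t = np.arange(0, dur, 1 / fs)
lfp = np.sin(2 * np.pi * f_osc * t) + 0.3 * rng.normal(size=t.size)
phase = np.angle(hilbert(lfp))                        # instantaneous oscillation phase

# Neuron whose firing probability is modulated by the oscillation phase.
preferred = np.pi / 3
rate = 5.0 * np.exp(np.cos(phase - preferred))        # spikes/s, von Mises-modulated
spikes = rng.random(t.size) < rate / fs

spike_phases = phase[spikes]
resultant = np.mean(np.exp(1j * spike_phases))
print(f"{spikes.sum()} spikes, phase-locking value {np.abs(resultant):.2f}, "
      f"preferred phase {np.degrees(np.angle(resultant)):.0f}°")
```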
The emergence and modulation of time in neural circuits and behavior
Spontaneous behavior in animals and humans shows a striking amount of variability both in the spatial domain (which actions to choose) and temporal domain (when to act). Concatenating actions into sequences and behavioral plans reveals the existence of a hierarchy of timescales ranging from hundreds of milliseconds to minutes. How do multiple timescales emerge from neural circuit dynamics? How do circuits modulate temporal responses to flexibly adapt to changing demands? In this talk, we will present recent results from experiments and theory suggesting a new computational mechanism generating the temporal variability underlying naturalistic behavior and cortical activity. We will show how neural activity from premotor areas unfolds through temporal sequences of attractors, which predict the intention to act. These sequences naturally emerge from recurrent cortical networks, where correlated neural variability plays a crucial role in explaining the observed variability in action timing. We will then discuss how reaction times can be accelerated or slowed down via gain modulation, flexibly induced by neuromodulation or perturbations; and how gain modulation may control response timing in the visual cortex. Finally, we will present a new biologically plausible way to generate a reservoir of multiple timescales in cortical circuits.
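A minimal sketch of the metastable-attractor picture, assuming a toy three-population rate model with slow adaptation and noise rather than the full clustered cortical network of the talk: the circuit wanders through a sequence of transiently stable states, and the dwell times that would set action timing are variable.

```python
# Metastable attractor sequence in a toy rate model: three self-exciting, mutually
# inhibitory populations; slow adaptation destabilizes the currently active attractor
# while noise makes the dwell times variable. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(5)
J_self, J_inh, drive = 2.2, -1.5, 0.5
beta, tau_a, noise = 3.0, 20.0, 0.2
W = np.full((3, 3), J_inh)
np.fill_diagonal(W, J_self)

dt, T = 0.01, 500.0
x = np.array([1.0, 0.0, 0.0])
a = np.zeros(3)
label, switch_times = 0, [0.0]
for step in range(int(T / dt)):
    x = x + dt * (-x + np.maximum(np.tanh(W @ x - a + drive), 0.0)) \
          + np.sqrt(dt) * noise * rng.normal(size=3)
    a = a + dt / tau_a * (-a + beta * x)        # slow, activity-dependent adaptation
    j = int(np.argmax(a))                       # slow variable gives a clean epoch label
    if j != label:
        label = j
        switch_times.append(step * dt)

dwells = np.diff(switch_times)
print(f"{len(dwells)} metastable epochs; mean dwell {dwells.mean():.1f} time units, "
      f"CV {dwells.std() / dwells.mean():.2f}")
```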
Linking neural representations of space by multiple attractor networks in the entorhinal cortex and the hippocampus
In the past decade, evidence has accumulated in favor of the hypothesis that multiple sub-networks in the medial entorhinal cortex (MEC) are characterized by low-dimensional, continuous attractor dynamics. Much has been learned about the joint activity of grid cells within a module (a module consists of grid cells that share a common grid spacing), but little is known about the interactions between modules. Under typical conditions of spatial exploration, in which sensory cues are abundant, all grid cells in the MEC represent the animal’s position in space and their joint activity lies on a two-dimensional manifold. However, if each grid module mechanistically constitutes an independent attractor network, then under conditions in which salient sensory cues are absent, errors could accumulate in the different modules in an uncoordinated manner. Such uncoordinated errors would give rise to catastrophic readout errors when attempting to decode position from the joint grid-cell activity. I will discuss recent theoretical work from our group, in which we explored different mechanisms that could impose coordination between the different modules. One of these mechanisms involves coordination with the hippocampus and must be set up such that it operates across multiple spatial maps that represent different environments. The other mechanism is internal to the entorhinal cortex and independent of the hippocampus.
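Why uncoordinated module errors are catastrophic can be seen with a two-module toy decoder (spacings and drift values are illustrative): coherent phase drift displaces the decoded position only slightly, whereas independent drifts of the same size can send it far away.

```python
# Toy demonstration of catastrophic readout error from uncoordinated grid modules.
# Each module encodes position only through a phase; a decoder picks the position
# whose predicted phases best match the observed ones.
import numpy as np

lam = np.array([0.3, 0.43])           # grid spacings of the two modules (m)
L = 3.0                                # environment length (m)
xs = np.linspace(0, L, 3001)

def phases(x):
    return (x / lam) % 1.0

def decode(obs_phase):
    # sum of cosine agreements across modules, maximized over candidate positions
    score = np.cos(2 * np.pi * (xs[:, None] / lam - obs_phase)).sum(axis=1)
    return xs[np.argmax(score)]

x_true = 1.20
drift = 0.03                           # 3 cm of accumulated path-integration error

coherent = phases(x_true + drift)                                   # modules drift together
independent = (phases(x_true) + drift / lam * np.array([1, -1])) % 1.0  # opposite drifts

print("coherent drift    -> decoded error %.3f m" % abs(decode(coherent) - x_true))
print("independent drift -> decoded error %.3f m" % abs(decode(independent) - x_true))
```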
The emergence and modulation of time in neural circuits and behavior
Spontaneous behavior in animals and humans shows a striking amount of variability both in the spatial domain (which actions to choose) and temporal domain (when to act). Concatenating actions into sequences and behavioral plans reveals the existence of a hierarchy of timescales ranging from hundreds of milliseconds to minutes. How do multiple timescales emerge from neural circuit dynamics? How do circuits modulate temporal responses to flexibly adapt to changing demands? In this talk, we will present recent results from experiments and theory suggesting a new computational mechanism generating the temporal variability underlying naturalistic behavior. We will show how neural activity from premotor areas unfolds through temporal sequences of attractors, which predict the intention to act. These sequences naturally emerge from recurrent cortical networks, where correlated neural variability plays a crucial role in explaining the observed variability in action timing. We will then discuss how reaction times in these recurrent circuits can be accelerated or slowed down via gain modulation, induced by neuromodulation or perturbations. Finally, we will present a general mechanism producing a reservoir of multiple timescales in recurrent networks.
Dynamically relevant motifs in inhibition-dominated networks
Many networks in the nervous system possess an abundance of inhibition, which serves to shape and stabilize neural dynamics. The neurons in such networks exhibit intricate patterns of connectivity whose structure controls the allowed patterns of neural activity. In this work, we examine inhibitory threshold-linear networks whose dynamics are constrained by an underlying directed graph. We develop a set of parameter-independent graph rules that enable us to predict features of the dynamics, such as emergent sequences and dynamic attractors, from properties of the graph. These rules provide a direct link between the structure and function of these networks, and may provide new insights into how connectivity shapes dynamics in real neural circuits.
A robust neural integrator based on the interactions of three time scales
Neural integrators are circuits that are able to encode analog information such as spatial location or amplitude. Storing amplitude requires the network to have a large number of attractors. In classic models based on recurrent excitation, such networks require very careful tuning to behave as integrators and are not robust to small mistuning of the recurrent weights. In this talk, I introduce a circuit with recurrent connectivity that is subjected to a slow subthreshold oscillation (such as the theta rhythm in the hippocampus). I show that such a network can robustly maintain many discrete attracting states. Furthermore, the firing rates of the neurons in these attracting states are much closer to those seen in recordings from animals. I show that the underlying mechanism can be explained by the instability regions of the Mathieu equation. I then extend the model in various ways and show, for example, that in a spatially distributed network it is possible to encode location and amplitude simultaneously. Finally, I show that the resulting mean-field equations are equivalent to a certain discontinuous differential equation.
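The Mathieu-equation stability analysis referred to here can be reproduced numerically with a standard Floquet computation: integrate the fundamental matrix of x'' + (a + 2q cos 2t) x = 0 over one period and read off stability from the trace of the monodromy matrix (parameter values are illustrative).

```python
# Floquet analysis of the Mathieu equation x'' + (a + 2q cos 2t) x = 0.
# Solutions are bounded (stable) when |trace of the monodromy matrix| < 2 and grow
# without bound (instability tongue) when it exceeds 2.
import numpy as np
from scipy.integrate import solve_ivp

def mathieu_trace(a, q):
    def rhs(t, y):
        x1, v1, x2, v2 = y
        c = -(a + 2 * q * np.cos(2 * t))
        return [v1, c * x1, v2, c * x2]
    # Two independent initial conditions give the two columns of the fundamental matrix.
    sol = solve_ivp(rhs, (0, np.pi), [1, 0, 0, 1], rtol=1e-9, atol=1e-12)
    x1, v1, x2, v2 = sol.y[:, -1]
    return x1 + v2                      # trace of the monodromy matrix

q = 1.0
for a in np.arange(-1.0, 5.1, 0.5):
    tr = mathieu_trace(a, q)
    verdict = "unstable" if abs(tr) > 2 else "stable"
    print(f"a = {a:4.1f}: |tr M| = {abs(tr):6.2f}  ->  {verdict}")
```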
Theory of gating in recurrent neural networks
Recurrent neural networks (RNNs) are powerful dynamical models, widely used in machine learning (ML) for processing sequential data and in neuroscience for understanding the emergent properties of networks of real neurons. Prior theoretical work on the properties of RNNs has focused on models with additive interactions. However, real neurons can have gating, i.e., multiplicative interactions, and gating is also a central feature of the best-performing RNNs in machine learning. Here, we develop a dynamical mean-field theory (DMFT) to study the consequences of gating in RNNs. We use random matrix theory to show how gating robustly produces marginal stability and line attractors, which are important mechanisms for biologically relevant computations requiring long memory. The long-time behavior of the gated network is studied using its Lyapunov spectrum, and the DMFT is used to derive a novel analytical expression for the maximum Lyapunov exponent, demonstrating its close relation to the relaxation time of the dynamics. Gating is also shown to give rise to a novel, discontinuous transition to chaos, in which the proliferation of critical points (topological complexity) is decoupled from the appearance of chaotic dynamics (dynamical complexity), contrary to a seminal result for additive RNNs. Critical surfaces and regions of marginal stability in the parameter space are indicated in phase diagrams, thus providing a map for principled parameter choices for ML practitioners. Finally, we develop a field theory for the gradients that arise in training by incorporating the adjoint sensitivity framework from control theory into the DMFT. This paves the way for the use of powerful field-theoretic techniques to study training and gradients in large RNNs.
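A minimal sketch of the Lyapunov-exponent computation on a gated rate network; the multiplicative sigmoid gate used here is a simplified stand-in for the gated RNN family analysed with DMFT in the work, and the standard Benettin two-trajectory method estimates the maximal exponent numerically.

```python
# Maximal Lyapunov exponent of a toy gated rate network via the Benettin method:
# evolve a nearby copy of the trajectory and repeatedly renormalize the separation.
import numpy as np

rng = np.random.default_rng(4)
N, g_rec, g_gate = 200, 3.0, 2.0
J = rng.normal(0, g_rec / np.sqrt(N), (N, N))     # recurrent weights
K = rng.normal(0, g_gate / np.sqrt(N), (N, N))    # gating weights

dt = 0.05
def step(h):
    gate = 1.0 / (1.0 + np.exp(-K @ h))            # multiplicative gate in (0, 1)
    return h + dt * (-h + gate * np.tanh(J @ h))

h = rng.normal(0, 1, N)
for _ in range(2000):                              # discard the transient
    h = step(h)

d0, n_renorm, steps_per = 1e-8, 400, 20
v = rng.normal(0, 1, N)
h2 = h + d0 * v / np.linalg.norm(v)
lam_sum = 0.0
for _ in range(n_renorm):
    for _ in range(steps_per):
        h, h2 = step(h), step(h2)
    d = np.linalg.norm(h2 - h)
    lam_sum += np.log(d / d0)                      # accumulated log-expansion
    h2 = h + (h2 - h) * (d0 / d)                   # rescale the separation back to d0
print(f"estimated maximal Lyapunov exponent: {lam_sum / (n_renorm * steps_per * dt):.3f}")
```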
Who can turn faster? Comparison of the head direction circuit of two species
Ants, bees and other insects have the ability to return to their nest or hive using a navigation strategy known as path integration. Similarly, fruit flies employ path integration to return to a previously visited food source. An important component of path integration is the ability of the insect to keep track of its heading relative to salient visual cues. A highly conserved brain region known as the central complex has been identified as being of key importance for the computations required for an insect to keep track of its heading. However, the similarities and differences of the underlying heading-tracking circuit between species are not well understood. We sought to address this shortcoming by using reverse-engineering techniques to derive the effective underlying neural circuits of two evolutionarily distant species, the fruit fly and the locust. Our analysis revealed that, regardless of the anatomical differences between the two species, the essential circuit structure has not changed. Both effective neural circuits have the structural topology of a ring attractor with an eight-fold radial symmetry (Fig. 1). However, despite the strong similarities between the two ring attractors, there remain differences. Using computational modelling, we found that two apparently small anatomical differences have significant functional effects on the ability of the two circuits to track fast rotational movements and to maintain a stable heading signal. In particular, the fruit fly circuit responds faster to abrupt heading changes of the animal, while the locust circuit maintains a heading signal that is more robust to inhomogeneities in cell membrane properties and synaptic weights. We suggest that the effects of these differences are consistent with the behavioural ecology of the two species. On the one hand, the faster response of the ring attractor circuit in the fruit fly accommodates the fast body saccades that fruit flies are known to perform. On the other hand, the locust is a migratory species, so its behaviour demands maintenance of a defined heading for a long period of time. Our results highlight that even seemingly small differences in the distribution of dendritic fibres can have a significant effect on the dynamics of the effective ring attractor circuit, with consequences for the behavioural capabilities of each species. These differences, emerging from morphologically distinct single neurons, highlight the importance of a comparative approach to neuroscience.
Slow Manifold Dynamics for Working Memory are near Continuous Attractors
Bernstein Conference 2024
Only two types of attractors support representation of continuous variables, and learning over long time-spans
COSYNE 2023
The space of finite ring attractors: from theoretical principles to the fly compass system
COSYNE 2023
Coordinating control and planning for navigation on simplicial complex attractors
COSYNE 2025
Motor cortical dynamics during reaching connect posture-specific attractors
COSYNE 2025
Orthogonal line attractors in the monkey frontoparietal cortex and RNNs support hierarchical decisions
COSYNE 2025
Symmetries and continuous attractors in disordered neural circuits
COSYNE 2025
Continuous quasi-attractors thrive on irregularity, up to a limit
FENS Forum 2024