Authors & Affiliations
Ábel Ságodi, Guillermo Martin, Piotr Sokół, Il Park
Abstract
Continuous attractors offer a unique class of solutions for storing continuous-valued variables in recurrent system states for arbitrarily long time intervals.
Unfortunately, continuous attractors suffer from severe structural instability in general: they are destroyed by most infinitesimal changes of the dynamical law that defines them. This fragility limits their utility, especially in biological systems, whose recurrent dynamics are subject to constant perturbations.
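As a minimal illustration of this fragility (a toy sketch, not taken from the paper), consider a planar line attractor: with eps = 0 every point on the x1-axis is a fixed point and the stored value is held indefinitely, while any nonzero eps collapses the line to a single fixed point and the stored value drifts away.

```python
import numpy as np

def line_attractor(x, eps=0.0):
    # x = (x1, x2); x1 carries the stored value, x2 decays.
    # eps = 0: the entire x1-axis is a line of fixed points (continuous attractor).
    # eps != 0: the line is destroyed, leaving a single fixed point at the origin.
    return np.array([-eps * x[0], -x[1]])

def simulate(f, x0, dt=0.01, T=100.0):
    x = np.array(x0, dtype=float)
    for _ in range(int(T / dt)):
        x = x + dt * f(x)
    return x

print(simulate(lambda x: line_attractor(x, eps=0.0), [1.0, 1.0]))   # value held: ~[1, 0]
print(simulate(lambda x: line_attractor(x, eps=0.01), [1.0, 1.0]))  # value slowly decays toward 0
```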
We observe that bifurcations from and approximations of continuous attractors in network models display various structurally stable forms. Although their asymptotic memory behaviors are categorically distinct, their finite-time behaviors are similar. Fast-slow decomposition analysis, by identifying an invariant subspace with a slow time scale, uncovers the persistent manifold that survives the seemingly destructive bifurcation. To explain this phenomenon, we apply Fenichel's persistence theorem from dynamical systems theory to show that bounded continuous attractors are robust in this sense: under small perturbations, an attracting slow invariant manifold persists.
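A toy example of this fast-slow picture (our construction for illustration, assuming a planar system written in polar coordinates): the unit circle is a continuous ring attractor when eps = 0; for small eps > 0 the circle remains an exactly invariant, attracting manifold, but the flow restricted to it becomes slow and acquires isolated fixed points.

```python
import numpy as np

def ring_flow(state, eps):
    # Fast-slow form: r relaxes quickly to the ring r = 1 (fast, normally
    # hyperbolic direction); theta evolves slowly on the ring.
    # eps = 0: every point on the ring is a fixed point (ring attractor).
    # eps > 0: the ring persists as a slow invariant manifold, but the flow on
    # it now has isolated fixed points at theta = 0 (unstable) and theta = pi (stable).
    r, theta = state
    return np.array([r * (1.0 - r), eps * np.sin(theta)])

def simulate(state, eps, dt=0.01, T=50.0):
    x = np.array(state, dtype=float)
    for _ in range(int(T / dt)):
        x = x + dt * ring_flow(x, eps)
    return x

print(simulate([0.2, 2.0], eps=0.05))  # r ~ 1 (on the persistent manifold), theta drifting slowly toward pi
```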
Furthermore, recurrent neural networks (RNNs) trained on analog memory tasks display approximate continuous attractors with the predicted slow-manifold structure. While fixed-point topologies vary across networks, the universal structure of the continuous attractor as a slow invariant manifold allows us to connect the different topologies. Additionally, we show how this topology bounds the memory persistence of the trained networks.
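One common way to expose such structure, in the spirit of standard slow/fixed-point analysis of RNNs, is to minimize the speed of the autonomous dynamics. The sketch below assumes a vanilla discrete-time RNN x_{t+1} = tanh(W x + b); the weights here are random placeholders rather than weights trained on a memory task.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical vanilla RNN; in practice W and b would come from a network
# trained on an analog memory task (random placeholders here).
rng = np.random.default_rng(0)
n = 16
W = rng.normal(scale=1.0 / np.sqrt(n), size=(n, n))
b = np.zeros(n)

def speed(x):
    # Squared speed of the autonomous dynamics x_{t+1} = tanh(W x + b).
    dx = np.tanh(W @ x + b) - x
    return float(dx @ dx)

# Slow points: local minima of the speed. Points where the speed is tiny but
# nonzero trace out the approximate continuous attractor (slow manifold).
x0 = rng.normal(size=n)
res = minimize(speed, x0, method="L-BFGS-B")
print(res.x, res.fun)  # near-zero speed indicates an (approximate) fixed point
```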
To establish the converse, we demonstrate how to recover a continuous attractor from a slow manifold through a bounded perturbation, which makes the uniform norm of the vector field on the manifold a useful measure of distance from a continuous attractor. We verify that this bound holds for the continuous attractor approximations found in trained RNNs.
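A hedged sketch of how such a distance could be estimated numerically: given the network's state-update map (here a hypothetical `step` function) and points sampled along the identified slow manifold, the largest one-step displacement approximates the uniform norm of the vector field restricted to the manifold.

```python
import numpy as np

def flow_speed(x, step):
    # Norm of the displacement after one update of the (assumed) RNN map `step`.
    return np.linalg.norm(step(x) - x)

def uniform_norm_on_manifold(manifold_points, step):
    # Sup norm of the vector field restricted to the slow manifold: the largest
    # speed over sampled manifold points bounds how fast the stored memory can
    # drift, i.e., how far the system is from a true continuous attractor.
    return max(flow_speed(x, step) for x in manifold_points)
```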
Finally, we identify four conditions under which approximate solutions to the analog working memory problem are near a continuous attractor: (1) a sufficiently smooth approximate bijection between neural activity and the memory content, (2) a bounded drift speed of the memory content, (3) robustness against state (S-type) noise, and (4) robustness against dynamical (D-type) noise.
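To make the two noise types concrete (an illustrative sketch, not the paper's formal definitions): S-type noise perturbs the state on every update, whereas D-type noise perturbs the dynamical law itself, modeled here as a fixed perturbation of the recurrent weights.

```python
import numpy as np

rng = np.random.default_rng(1)

def step(x, W, b, s_noise=0.0, d_noise=None):
    # S-type (state) noise: added to the state at each step.
    # D-type (dynamics) noise: a perturbation of the dynamical law, here a
    # fixed offset to the recurrent weight matrix.
    W_eff = W if d_noise is None else W + d_noise
    x_next = np.tanh(W_eff @ x + b)
    return x_next + s_noise * rng.normal(size=x.shape)

n = 8
x = rng.normal(size=n)
W = rng.normal(scale=0.3, size=(n, n))
b = np.zeros(n)
print(step(x, W, b, s_noise=0.01))                                  # S-type: noisy state
print(step(x, W, b, d_noise=0.01 * rng.normal(size=W.shape)))       # D-type: perturbed law
```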
We conclude that continuous attractors are functionally robust and remain useful as a universal analogy for understanding analog memory.