Authors & Affiliations
Gayathri Ramesan, Akhilesh Nandan, Daniel Koch, Aneta Koseska
Abstract
Current theoretical frameworks in neuroscience focus on an attractor-based formalism, in which the neural dynamics asymptotically converge to one of the system’s stable states, and these states are subsequently associated with the behavioral states of the system [1,2]. Although this framework explains a wide range of phenomena, it cannot explicitly account for transiently stable, sequential neuronal activity, such as that generated during olfactory encoding [3]. Recently, we proposed a complementary theoretical framework that relies not on (un)stable attractors but on transiently stable phase-space flows generated by ghost attracting sets [4]. A ghost attracting set corresponds to a shallow-slope region of the quasi-potential landscape that transiently captures incoming trajectories, providing robustness to the system’s dynamics; because trajectories autonomously escape this region, it also implicitly allows for flexibility in responses. Furthermore, the shallow landscape of the ghost attracting set introduces a slow time scale into the system. Ghost attracting sets thus generate transiently stable slow dynamics with fast switching between states that are robust to intrinsic noise, as observed experimentally.
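The slow time scale associated with a ghost can be illustrated with the one-dimensional saddle-node normal form dx/dt = r + x²: for small r > 0 the fixed points have disappeared, yet trajectories are transiently captured in the shallow-slope region near x = 0 and escape only after a time that grows as π/√r. The sketch below is a toy illustration of this scaling, not the trained networks studied in this work.

```python
# Toy illustration of a saddle-node "ghost" (not the trained RNNs from this work):
# for dx/dt = r + x**2 with small r > 0, the fixed points have vanished, but
# trajectories are transiently trapped in the shallow-slope region near x = 0,
# and the transit time grows as ~ pi / sqrt(r) when r -> 0.
import numpy as np

def transit_time(r, x0=-2.0, x_end=2.0, dt=1e-3):
    """Euler-integrate dx/dt = r + x**2 from x0 until x exceeds x_end."""
    x, t = x0, 0.0
    while x < x_end:
        x += dt * (r + x**2)
        t += dt
    return t

for r in (1e-1, 1e-2, 1e-3):
    print(f"r = {r:.0e}: transit time = {transit_time(r):7.2f}, "
          f"pi/sqrt(r) = {np.pi/np.sqrt(r):7.2f}")
```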
To investigate the significance of such transient dynamics for neural computation, we train minimal 3-node recurrent neural networks (RNNs) to perform complex temporal tasks and use dynamical systems theory to understand how information is encoded. As an example, we consider a working-memory task in which the network is trained to reproduce, upon a given cue, the time interval between two past events. We use an objective function representing the kinetic energy of the system [5] to identify the network’s solutions, and further characterize them by the associated largest eigenvalue. Our analysis shows that RNNs encode time intervals by directing the system’s trajectories through a set of slow points, which emerge when the network is organized at criticality, in this case close to a saddle-node bifurcation. We demonstrate that the slow-point set is generated by the transiently stable ghost attracting set and enables the RNN to generalize over a range of time intervals, potentially serving as a basis for studying on-the-fly learning in neural systems.
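A minimal sketch of such a slow-point analysis is given below, with details assumed for illustration: the network form, weights, and thresholds are placeholders rather than the trained networks from this work. For a rate RNN dx/dt = F(x) = -x + W tanh(x) + b, the kinetic energy q(x) = ½‖F(x)‖² is minimized from many initial conditions, as in [5], and the resulting low-q points are characterized by the largest eigenvalue of the local Jacobian.

```python
# Sketch of the slow-point search, with assumed details: a continuous-time
# 3-node rate RNN dx/dt = F(x) = -x + W @ tanh(x) + b, where W and b are
# random placeholders standing in for trained parameters. Following the
# kinetic-energy objective of [5], q(x) = 0.5 * ||F(x)||^2 is minimized from
# many initial conditions; the candidate fixed/slow points are then
# characterized by the largest eigenvalue of the Jacobian evaluated there.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
N = 3
W = rng.normal(scale=1.5 / np.sqrt(N), size=(N, N))  # placeholder weights
b = rng.normal(scale=0.1, size=N)                    # placeholder input bias

def F(x):
    return -x + W @ np.tanh(x) + b

def q(x):
    """Kinetic energy of the flow; vanishes exactly at fixed points."""
    f = F(x)
    return 0.5 * f @ f

def jacobian(x):
    # d/dx_j [-x_i + sum_k W_ik tanh(x_k)] = -delta_ij + W_ij * sech(x_j)^2
    return -np.eye(N) + W * (1.0 / np.cosh(x) ** 2)

candidates = []
for _ in range(50):                                  # many random restarts
    x0 = rng.normal(scale=2.0, size=N)
    res = minimize(q, x0, method="BFGS")
    lam_max = np.linalg.eigvals(jacobian(res.x)).real.max()
    candidates.append((res.fun, res.x, lam_max))

# Smallest-q minima are the candidate points of interest: q ~ 0 indicates a
# true fixed point, small nonzero q a slow (ghost) region, and the sign of
# max Re(eig) indicates local stability.
for qval, x, lam_max in sorted(candidates, key=lambda c: c[0])[:5]:
    print(f"q = {qval:.2e}   x = {np.round(x, 3)}   max Re(eig) = {lam_max:+.3f}")
```

In this setup, minima of q with vanishing kinetic energy correspond to fixed points, whereas minima with small but nonzero q mark slow regions of the flow such as ghosts; the largest eigenvalue of the Jacobian then distinguishes locally stable points from saddle-like ones.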