
Phase Space

Topic spotlight · World Wide


Discover seminars, jobs, and research tagged with phase space across World Wide.
3 curated items · 2 Seminars · 1 ePoster
Updated almost 4 years ago
3 results
Seminar · Physics of Life · Recording

Exact coherent structures and transition to turbulence in a confined active nematic

Caleb Wagner
University of Nebraska-Lincoln
Feb 27, 2022

Active matter describes a class of systems that are maintained far from equilibrium by driving forces acting on the constituent particles. Here I will focus on confined active nematics, which exhibit especially rich flow behavior, ranging from structured patterns in space and time to disordered turbulent flows. To understand this behavior, I will take a deterministic dynamical systems approach, beginning with the hydrodynamic equations for the active nematic. This approach reveals that the infinite-dimensional phase space of all possible flow configurations is populated by Exact Coherent Structures (ECS), which are exact solutions of the hydrodynamic equations with distinct and regular spatiotemporal structure; examples include unstable equilibria, periodic orbits, and traveling waves. The ECS are connected by dynamical pathways called invariant manifolds. The main hypothesis in this approach is that turbulence corresponds to a trajectory meandering in the phase space, transitioning between ECS by traveling on the invariant manifolds. Similar approaches have been successful in characterizing high Reynolds number turbulence of passive fluids. Here, I will present the first systematic study of active nematic ECS and their invariant manifolds and discuss their role in characterizing the phenomenon of active turbulence.
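
The objects named in this abstract (unstable equilibria, periodic orbits, invariant manifolds) are standard dynamical-systems fare, and ECS are typically located with Newton-type searches on the governing equations. As a loose, low-dimensional analogue of that procedure (a toy sketch, not the hydrodynamic computation from the talk), the snippet below uses Newton's method to converge onto an unstable equilibrium of a pendulum and reads off the eigenvectors of the linearization, which are tangent to the stable and unstable invariant manifolds; the system and all parameters are illustrative choices.

```python
# Toy analogue of an "exact coherent structure" search: Newton's method
# converging onto an unstable equilibrium of a 2D system (a pendulum),
# followed by a linear stability analysis at the converged point.
import numpy as np

def f(x):
    """Pendulum vector field; x = (angle, angular velocity)."""
    theta, v = x
    return np.array([v, -np.sin(theta)])

def jacobian(x):
    theta, _ = x
    return np.array([[0.0, 1.0],
                     [-np.cos(theta), 0.0]])

# Newton iteration for f(x) = 0, starting from a guess near the saddle
x = np.array([3.0, 0.1])
for _ in range(20):
    step = np.linalg.solve(jacobian(x), f(x))
    x = x - step
    if np.linalg.norm(step) < 1e-12:
        break

# Eigenvectors of the Jacobian span the tangent directions of the
# stable and unstable manifolds attached to the equilibrium.
eigvals, eigvecs = np.linalg.eig(jacobian(x))
print("equilibrium:", x)          # converges to (pi, 0), the inverted pendulum
print("eigenvalues:", eigvals)    # one positive eigenvalue: unstable (a saddle)
```

In the setting of the talk, the same kind of search is carried out on discretized flow fields of the active nematic, so each "point" in phase space is an entire flow configuration rather than a pair of coordinates.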

Seminar · Neuroscience · Recording

Recurrent network dynamics lead to interference in sequential learning

Friedrich Schuessler
Barak lab, Technion, Haifa, Israel
Apr 28, 2021

Learning in real life is often sequential: a learner first learns task A, then task B. If the tasks are related, the learner may adapt the previously learned representation instead of generating a new one from scratch. Adaptation may ease learning task B but may also decrease the performance on task A. Such interference has been observed in experimental and machine learning studies. In the latter case, it is mediated by correlations between weight updates for the different tasks. In typical applications, like image classification with feed-forward networks, these correlated weight updates can be traced back to input correlations. For many neuroscience tasks, however, networks need not only to transform the input, but also to generate substantial internal dynamics.

Here we illuminate the role of internal dynamics for interference in recurrent neural networks (RNNs). We analyze RNNs trained sequentially on neuroscience tasks with gradient descent and observe forgetting even for orthogonal tasks. We find that the degree of interference changes systematically with task properties, especially with the emphasis on input-driven over autonomously generated dynamics.

To better understand our numerical observations, we thoroughly analyze a simple model of working memory: for task A, a network is presented with an input pattern and trained to generate a fixed point aligned with this pattern. For task B, the network has to memorize a second, orthogonal pattern. Adapting an existing representation corresponds to the rotation of the fixed point in phase space, as opposed to the emergence of a new one. We show that the two modes of learning – rotation vs. new formation – are directly linked to recurrent vs. input-driven dynamics. We make this notion precise in a further simplified, analytically tractable model, where learning is restricted to a 2x2 matrix.

In our analysis of trained RNNs, we also make the surprising observation that, across different tasks, larger random initial connectivity reduces interference. Analyzing the fixed point task reveals the underlying mechanism: the random connectivity strongly accelerates the learning mode of new formation, and has less effect on rotation. New formation thus wins the race to zero loss, and interference is reduced. Altogether, our work offers a new perspective on sequential learning in recurrent networks, and the emphasis on internally generated dynamics allows us to take the history of individual learners into account.
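
As a rough illustration of the simplified 2x2 setting described above (an assumed toy, not the authors' actual model or training protocol), the sketch below trains only a 2x2 recurrent matrix, first to place a fixed point aligned with one input pattern and then with an orthogonal one, and measures how much the second task degrades the first. The tanh map, the target scaling, and all hyperparameters are arbitrary choices.

```python
# Toy sequential-learning setup: learn task A (fixed point along pattern a),
# then task B (orthogonal pattern b), and measure interference on task A.
import numpy as np

rng = np.random.default_rng(0)

def run(W, u, T=50):
    """Iterate the recurrent map x <- tanh(W x + u) from rest."""
    x = np.zeros(2)
    for _ in range(T):
        x = np.tanh(W @ x + u)
    return x

def loss(W, u):
    # Task: the network state should settle onto a point aligned with u.
    return np.sum((run(W, u) - 0.8 * u) ** 2)

def grad(W, u, eps=1e-6):
    """Finite-difference gradient over the four entries of the 2x2 matrix."""
    g = np.zeros_like(W)
    for i in range(2):
        for j in range(2):
            Wp, Wm = W.copy(), W.copy()
            Wp[i, j] += eps
            Wm[i, j] -= eps
            g[i, j] = (loss(Wp, u) - loss(Wm, u)) / (2 * eps)
    return g

a = np.array([1.0, 0.0])               # task A pattern
b = np.array([0.0, 1.0])               # task B pattern, orthogonal to a
W = 0.5 * rng.standard_normal((2, 2))  # random initial connectivity

for _ in range(300):                   # learn task A first
    W -= 0.2 * grad(W, a)
loss_A = loss(W, a)

for _ in range(300):                   # then learn task B
    W -= 0.2 * grad(W, b)

print(f"task A loss before B: {loss_A:.2e}, after B: {loss(W, a):.2e}")
```

How much forgetting this toy exhibits depends on the seed and on the scale of the initial random connectivity, which is precisely the dependence (rotation vs. new formation, and the effect of initial connectivity strength) that the abstract analyzes.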

ePoster

Uncovering the implicit dynamics of the spontaneous cortical activity transition to epilepsy using phase space reconstruction (PSR)

Alexia Karantana, Kostas Andrikos, Nikos Vasilopoulos, Michael Vinos, Irini Skaliora

FENS Forum 2024
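
Phase space reconstruction is conventionally done by time-delay (Takens) embedding, which lifts a single recorded signal into a multi-dimensional phase space; assuming that is the PSR meant in the title, a minimal sketch follows. The synthetic signal, lag, and embedding dimension are placeholders, not the poster's data or parameters.

```python
# Minimal time-delay (Takens) embedding: reconstruct a phase space
# trajectory from a single scalar time series.
import numpy as np

def delay_embed(signal, dim=3, lag=25):
    """Map s(t) to vectors [s(t), s(t + lag), ..., s(t + (dim-1)*lag)]
    that trace out a reconstructed attractor in dim dimensions."""
    n = len(signal) - (dim - 1) * lag
    return np.column_stack([signal[i * lag : i * lag + n] for i in range(dim)])

# Demo on a synthetic quasi-periodic trace standing in for a recording
t = np.linspace(0, 100, 5000)
s = np.sin(t) + 0.5 * np.sin(2.2 * t)
points = delay_embed(s, dim=3, lag=25)
print(points.shape)   # (4950, 3): a trajectory in reconstructed phase space
```

Geometric changes in the reconstructed trajectory (for example, collapse onto a lower-dimensional structure) are the kind of signature such analyses look for in transitions like the one studied in the poster.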