Dynamical Systems
Ing. Mgr. Jaroslav Hlinka, Ph.D.
Postdoctoral / Junior Scientist position in Complex Networks and Information Theory. A Postdoc or Junior Scientist position is available in the Complex Networks and Brain Dynamics group for the project "Network modelling of complex systems: from correlation graphs to information hypergraphs", funded by the Czech Science Foundation. The project involves developing, optimizing and applying techniques for modelling complex dynamical systems beyond the currently available methods of complex network analysis and game theory. The project is carried out in collaboration with the Artificial Intelligence Center of the Czech Technical University. Conditions:
• Contract duration is 18 months (with the possibility of a follow-up tenure-track application).
• Starting date: the position is available immediately.
• Applications will be reviewed on a rolling basis, with a first cut-off on 30 September 2022.
• This is a full-time, fixed-term appointment; a part-time contract is negotiable.
• Monthly gross salary: 42 000 - 48 000 CZK, based on qualifications and experience.
• Bonuses depending on performance, plus travel funding for conferences and research stays.
• Contribution toward relocation costs for a successful applicant coming from abroad: 10 000 CZK, plus 10 000 CZK for family (spouse and/or children).
• No teaching duties.
Ann Kennedy
We investigate principles of computation in meso-scale biological neural networks, and the role of these networks in shaping animal behavior. We work in collaboration with experimental neuroscientists recording neural activity in freely moving animals engaged in complex behaviors, to investigate how animals' environments, actions, and internal states are represented across multiple brain areas. Our work is especially inspired by the interaction between subcortical neural populations organized into heavily recurrent neural circuits, including the basal ganglia and nuclei of the hypothalamus. Projects in the lab include 1) developing novel supervised, semi-supervised, and unsupervised approaches to studying the structure of animal behavior, 2) using behavior as a common basis with which to model the interactions between multiple brain areas, and 3) studying computation and dynamics in networks of heterogeneous neurons communicating with multiple neuromodulators and neuropeptides. The lab will also soon begin collecting behavioral data from freely interacting mice in a variety of model lines and animal conditions, to better chart the space of interactions between animal state and behavior expression. Come join us!
Tatiana Engel
The Engel lab in the Department of Neuroscience at Cold Spring Harbor Laboratory invites applications from highly motivated candidates for a postdoctoral position working on cutting-edge research in computational neuroscience. We are looking for theoretical/computational scientists to work at the exciting interface of systems neuroscience, machine learning, and statistical physics, in close collaboration with experimentalists. The postdoctoral scientist is expected to exhibit resourcefulness and independence, developing computational models of large-scale neural activity recordings with the goal of elucidating neural circuit mechanisms underlying cognitive functions. Details: https://cshl.peopleadmin.com/postings/15840
Frank
Multiple open professor positions at the Technical University of Applied Sciences Würzburg-Schweinfurt in Computer Vision, Reinforcement Learning, and Dynamical Systems
Federico Stella
The project will focus on the computational investigation of the role of neural reactivations in memory. Since their discovery, neural reactivations occurring during sleep have emerged as an exceptional tool for investigating the process of memory formation in the brain. This phenomenon has mostly been associated with the hippocampus, an area known for its role in the processing of new memories and their initial storage. Continuous advances in data acquisition techniques are giving us unprecedented access to the activity of large-scale networks during sleep, in the hippocampus and in other cortical regions. At the same time, our theoretical understanding of the computations underlying neural reactivations and, more generally, memory representations has only begun to take shape. Combining mathematical modeling of neural networks with analysis of existing datasets, we will address key aspects of this phenomenon, such as: 1) the role of different sleep phases in regulating the reactivation process and in modulating the evolution of a memory trace; 2) the relationship of hippocampal reactivations to the process of (semantic) learning and knowledge generalization; 3) the relevance of reactivation statistical properties for learning in cortico-hippocampal networks.
Joseph Lizier
The successful candidates will join a dynamic interdisciplinary collaboration between A/Prof Mac Shine (Brain and Mind Centre), A/Prof Joseph Lizier (School of Computer Science) and Dr Ben Fulcher (School of Physics), within the University's Centre for Complex Systems, focused on advancing our understanding of brain function and cognition using cutting-edge computational and neuroimaging techniques at the intersection of network neuroscience, dynamical systems and information theory. The positions are funded by a grant from the Australian Research Council 'Evaluating the Network Neuroscience of Human Cognition to Improve AI'.
Dr. Udo Ernst
In this project we want to study the organization and optimization of flexible information processing in neural networks, with a specific focus on the visual system. You will use network modelling, numerical simulation, and mathematical analysis to investigate fundamental aspects of flexible computation, such as task-dependent coordination of multiple brain areas for efficient information processing, as well as the emergence of flexible circuits from learning schemes that simultaneously optimize for function and flexibility. These studies will be complemented by biophysically realistic modelling and data analysis in collaboration with experimental work done in the lab of Prof. Dr. Andreas Kreiter, also at the University of Bremen. Here we will investigate selective attention as a central aspect of flexibility in the visual system, involving task-dependent coordination of multiple visual areas.
Gonzalo Uribarri
Our research group is looking for a Postdoc to work on a project involving Machine Learning and Dynamical Systems modeling applied to biomedical data. The project is part of a collaboration with Getinge, a leading MedTech company based in Stockholm, and is funded by a grant from Vinnova, the Swedish innovation agency.
Prof. Massimiliano Pontil
We are seeking a talented and motivated Postdoc to join the Computational Statistics and Machine Learning (CSML) Research Unit at IIT, led by Prof. Massimiliano Pontil. The successful candidate will be engaged in designing novel learning algorithms for numerical simulations of physical systems, with a focus on machine learning for dynamical systems. CSML's core focus is on ML theory and algorithms, while significant multidisciplinary interactions with other IIT groups apply our research outputs in areas ranging from Atomistic Simulations to Neuroscience and Robotics. We have also recently started an international collaboration on Climate Modelling. The group hosts applied mathematicians, computer scientists, physicists, and computer engineers, working together on theory, algorithms, and applications. ML techniques, coupled with numerical simulations of physical systems, have the potential to revolutionize the way in which science is conducted. Meeting this challenge requires a multidisciplinary approach in which experts from different disciplines work together.
Dr Margarita Zachariou
We are looking for a Post-Doctoral Fellow and/or a Laboratory Scientific Officer (research assistant) to join the Bioinformatics Department of the Cyprus Institute of Neurology and Genetics. The team focuses on computational neuroscience, particularly on (1) building biophysical models of neurons and neuronal networks to study neurological diseases and (2) developing state-of-the-art analysis pipelines for neural data across scales, focusing on disease-specific patterns and integrating diverse data modalities. The successful candidate(s) will work on multiscale models of magnetoelectric and ultrasonic effects on neuronal dynamics as part of the EU Horizon-funded META-BRAIN project (https://meta-brain.eu).
Dr. Dmitrii Todorov
Title: Understanding Neural Mechanisms of Human Motor Learning by Using Explainable AI for Time Series and Brain-Computer Interfaces. This PhD project will focus on uncovering mechanisms of human motor adaptation using advanced computational tools. By analyzing (and potentially collecting new) EEG, MEG, and behavioral data from multiple datasets, you will explore how the brain adapts movements to external perturbations. There will also be an opportunity to test the newly obtained understanding using a brain-computer interface (BCI) protocol. The project will be co-supervised by Dr. Dmitrii Todorov and Dr. Veronique Marchand-Pauvert, and will be carried out within an international interdisciplinary team.
Probing neural population dynamics with recurrent neural networks
Large-scale recordings of neural activity are providing new opportunities to study network-level dynamics in unprecedented detail. However, the sheer volume of data and its dynamical complexity are major barriers to uncovering and interpreting these dynamics. I will present latent factor analysis via dynamical systems, a sequential autoencoding approach that enables inference of dynamics from neuronal population spiking activity on single trials and at millisecond timescales. I will also discuss recent adaptations of the method to uncover dynamics from neural activity recorded via two-photon calcium imaging. Finally, time permitting, I will mention recent efforts to improve the interpretability of deep learning-based dynamical systems models.
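For readers unfamiliar with the sequential-autoencoder idea, the following is a minimal, illustrative PyTorch sketch of the general architecture: an RNN encoder compresses a trial of binned spike counts into a latent initial condition, an RNN generator unrolls latent dynamics from that condition, and a linear readout produces Poisson firing rates. All layer sizes and names are assumptions chosen for illustration; the published LFADS method additionally uses a variational formulation and an inferred-input controller, which are omitted here.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SeqAutoencoder(nn.Module):
    """Toy sequential autoencoder for binned spike counts (illustrative only)."""
    def __init__(self, n_neurons, n_latent=8, n_hidden=64):
        super().__init__()
        self.encoder = nn.GRU(n_neurons, n_hidden, batch_first=True)
        self.to_ic = nn.Linear(n_hidden, n_latent)     # latent initial condition
        self.generator = nn.GRUCell(1, n_latent)       # autonomous latent dynamics
        self.readout = nn.Linear(n_latent, n_neurons)  # factors -> log firing rates

    def forward(self, spikes):
        # spikes: (trials, time, neurons) spike counts per bin
        _, h = self.encoder(spikes)
        z = self.to_ic(h[-1])                          # one initial condition per trial
        dummy = spikes.new_zeros(spikes.shape[0], 1)   # no external input
        log_rates = []
        for _ in range(spikes.shape[1]):
            z = self.generator(dummy, z)               # unroll the latent dynamics
            log_rates.append(self.readout(z))
        return torch.stack(log_rates, dim=1)

# Usage sketch: fit rates to observed spike counts with a Poisson likelihood.
model = SeqAutoencoder(n_neurons=50)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
spikes = torch.poisson(torch.rand(16, 100, 50))        # fake data: 16 trials x 100 bins
opt.zero_grad()
loss = F.poisson_nll_loss(model(spikes), spikes, log_input=True)
loss.backward()
opt.step()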
Reimagining the neuron as a controller: A novel model for Neuroscience and AI
We build upon and expand the efficient coding and predictive information models of neurons, presenting a novel perspective that neurons not only predict but also actively influence their future inputs through their outputs. We introduce the concept of neurons as feedback controllers of their environments, a role traditionally considered computationally demanding, particularly when the dynamical system characterizing the environment is unknown. By harnessing a novel data-driven control framework, we illustrate the feasibility of biological neurons functioning as effective feedback controllers. This innovative approach enables us to coherently explain various experimental findings that previously seemed unrelated. Our research has profound implications, potentially revolutionizing the modeling of neuronal circuits and paving the way for the creation of alternative, biologically inspired artificial neural networks.
The balance hypothesis for the avian lumbosacral organ and an exploration of its morphological variation
The centrality of population-level factors to network computation is demonstrated by a versatile approach for training spiking networks
Neural activity is often described in terms of population-level factors extracted from the responses of many neurons. Factors provide a lower-dimensional description with the aim of shedding light on network computations. Yet, mechanistically, computations are performed not by continuously valued factors but by interactions among neurons that spike discretely and variably. Models provide a means of bridging these levels of description. We developed a general method for training model networks of spiking neurons by leveraging factors extracted from either data or firing-rate-based networks. In addition to providing a useful model-building framework, this formalism illustrates how reliable and continuously valued factors can arise from seemingly stochastic spiking. Our framework establishes procedures for embedding this property in network models with different levels of realism. The relationship between spikes and factors in such networks provides a foundation for interpreting (and subtly redefining) commonly used quantities such as firing rates.
Dynamic endocrine modulation of the nervous system
Sex hormones are powerful neuromodulators of learning and memory. In rodents and nonhuman primates, estrogen and progesterone influence the central nervous system across a range of spatiotemporal scales. Yet their influence on the structural and functional architecture of the human brain is largely unknown. Here, I highlight findings from a series of dense-sampling neuroimaging studies from my laboratory designed to probe the dynamic interplay between the nervous and endocrine systems. Individuals underwent brain imaging and venipuncture every 12-24 hours for 30 consecutive days. These procedures were carried out under freely cycling conditions and again under a pharmacological regimen that chronically suppresses sex hormone production. First, resting-state fMRI evidence suggests that transient increases in estrogen drive robust increases in functional connectivity across the brain. Time-lagged methods from dynamical systems analysis further reveal that these transient changes in estrogen enhance within-network integration (i.e., global efficiency) in several large-scale brain networks, particularly the Default Mode and Dorsal Attention Networks. Next, using high-resolution hippocampal subfield imaging, we found that intrinsic hormone fluctuations and exogenous hormone manipulations can rapidly and dynamically shape medial temporal lobe morphology. Together, these findings suggest that neuroendocrine factors influence the brain over short and protracted timescales.
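Global efficiency is a standard graph-theoretic index of network integration: the average inverse shortest-path length over all node pairs. As a hedged illustration of how such a measure can be computed from a functional connectivity matrix (the threshold and the toy data below are assumptions, not the study's actual pipeline):

import numpy as np
import networkx as nx

def global_efficiency_from_fc(fc, threshold=0.3):
    """Binarize a functional connectivity matrix and compute global efficiency,
    the average inverse shortest-path length over all node pairs."""
    adj = (np.abs(fc) > threshold).astype(int)
    np.fill_diagonal(adj, 0)
    return nx.global_efficiency(nx.from_numpy_array(adj))

# Toy data: 30 regions sharing a common signal so the resulting graph has edges.
rng = np.random.default_rng(0)
shared = rng.standard_normal((200, 1))
ts = 0.8 * shared + rng.standard_normal((200, 30))     # 200 time points x 30 regions
fc = np.corrcoef(ts.T)
print(global_efficiency_from_fc(fc))

In practice, weighted or proportional thresholding and comparison against null models are common refinements of this basic recipe.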
Extracting computational mechanisms from neural data using low-rank RNNs
An influential theory in systems neuroscience suggests that brain function can be understood through low-dimensional dynamics [Vyas et al 2020]. However, a challenge in this framework is that a single computational task may involve a range of dynamic processes. To understand which processes are at play in the brain, it is important to use data on neural activity to constrain models. In this study, we present a method for extracting low-dimensional dynamics from data using low-rank recurrent neural networks (lrRNNs), a highly expressive and understandable type of model [Mastrogiuseppe & Ostojic 2018, Dubreuil, Valente et al. 2022]. We first test our approach using synthetic data created from full-rank RNNs that have been trained on various brain tasks. We find that lrRNNs fitted to neural activity allow us to identify the collective computational processes and make new predictions for inactivations in the original RNNs. We then apply our method to data recorded from the prefrontal cortex of primates during a context-dependent decision-making task. Our approach enables us to assign computational roles to the different latent variables and provides a mechanistic model of the recorded dynamics, which can be used to perform in silico experiments like inactivations and provide testable predictions.
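As a minimal illustration of the low-rank RNN idea referenced above (not the fitting procedure used in the study), the sketch below simulates a rank-one network whose recurrent dynamics collapse onto a single latent variable. The parameter values and the correlation between the connectivity vectors are illustrative assumptions.

import numpy as np

# Minimal rank-one RNN: connectivity J = gain * m n^T / N, so the recurrent
# dynamics are driven by a single latent variable kappa = n . phi(x) / N.
rng = np.random.default_rng(1)
N, T = 500, 2000
dt, tau = 1e-3, 20e-3
gain = 1.5

m = rng.standard_normal(N)                    # left connectivity vector
n = m + 0.5 * rng.standard_normal(N)          # right vector, correlated with m
x = 0.1 * rng.standard_normal(N)              # neuron currents

kappa = np.zeros(T)
for t in range(T):
    r = np.tanh(x)                            # firing rates
    kappa[t] = n @ r / N                      # latent variable (overlap with n)
    x += dt / tau * (-x + gain * m * kappa[t])

# Because the m-n overlap times the gain exceeds one, kappa grows away from
# zero and settles at one of two symmetric fixed points (a bistable latent).
print(kappa[-1])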
Nonlinear computations in spiking neural networks through multiplicative synapses
The brain efficiently performs nonlinear computations through its intricate networks of spiking neurons, but how this is done remains elusive. While recurrent spiking networks implementing linear computations can be directly derived and easily understood (e.g., in the spike coding network (SCN) framework), the connectivity required for nonlinear computations can be harder to interpret, as it requires additional nonlinearities (e.g., dendritic or synaptic) weighted through supervised training. Here we extend the SCN framework to directly implement any polynomial dynamical system. This results in networks requiring multiplicative synapses, which we term multiplicative spike coding networks (mSCNs). We demonstrate how the required connectivity for several nonlinear dynamical systems can be directly derived and implemented in mSCNs, without training. We also show how to precisely carry out higher-order polynomials with coupled networks that use only pair-wise multiplicative synapses, and provide expected numbers of connections for each synapse type. Overall, our work provides an alternative method for implementing nonlinear computations in spiking neural networks, while keeping all the attractive features of standard SCNs, such as robustness, irregular and sparse firing, and interpretable connectivity. Finally, we discuss the biological plausibility of mSCNs, and how the high accuracy and robustness of the approach may be of interest for neuromorphic computing.
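A brief schematic derivation, in generic notation that may differ from the paper's conventions, of why polynomial dynamics call for multiplicative synapses: in a spike coding network the signal estimate is a linear readout of filtered spike trains, and substituting that readout into a quadratic vector field produces pairwise products of presynaptic activities.

\[
\dot{x} = A\,x + B\,(x \otimes x) + c(t), \qquad \hat{x} = D\,r ,
\]
\[
A\,\hat{x} + B\,(\hat{x} \otimes \hat{x}) = A D\, r \;+\; B\,(D \otimes D)\,(r \otimes r).
\]

The Kronecker term (r ⊗ r) contains the products r_j r_k of filtered spike trains, so while the linear part maps onto ordinary synapses, the quadratic part requires synapses whose efficacy is gated by a second presynaptic neuron, i.e. multiplicative synapses. Higher-degree polynomials generate higher-order products which, as the abstract notes, can be reduced to pairwise multiplications by coupling networks.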
Flexible multitask computation in recurrent networks utilizes shared dynamical motifs
Flexible computation is a hallmark of intelligent behavior. Yet, little is known about how neural networks contextually reconfigure for different computations. Humans are able to perform a new task without extensive training, presumably through the composition of elementary processes that were previously learned. Cognitive scientists have long hypothesized the possibility of a compositional neural code, where complex neural computations are made up of constituent components; however, the neural substrate underlying this structure remains elusive in biological and artificial neural networks. Here we identified an algorithmic neural substrate for compositional computation through the study of multitasking artificial recurrent neural networks. Dynamical systems analyses of networks revealed learned computational strategies that mirrored the modular subtask structure of the task-set used for training. Dynamical motifs such as attractors, decision boundaries and rotations were reused across different task computations. For example, tasks that required memory of a continuous circular variable repurposed the same ring attractor. We show that dynamical motifs are implemented by clusters of units and are reused across different contexts, allowing for flexibility and generalization of previously learned computation. Lesioning these clusters resulted in modular effects on network performance: a lesion that destroyed one dynamical motif only minimally perturbed the structure of other dynamical motifs. Finally, modular dynamical motifs could be reconfigured for fast transfer learning. After slow initial learning of dynamical motifs, a subsequent faster stage of learning reconfigured motifs to perform novel tasks. This work contributes to a more fundamental understanding of compositional computation underlying flexible general intelligence in neural systems. We present a conceptual framework that establishes dynamical motifs as a fundamental unit of computation, intermediate between the neuron and the network. As more whole brain imaging studies record neural activity from multiple specialized systems simultaneously, the framework of dynamical motifs will guide questions about specialization and generalization across brain regions.
Eliminativism about Neural Representation
Exact coherent structures and transition to turbulence in a confined active nematic
Active matter describes a class of systems that are maintained far from equilibrium by driving forces acting on the constituent particles. Here I will focus on confined active nematics, which exhibit especially rich flow behavior, ranging from structured patterns in space and time to disordered turbulent flows. To understand this behavior, I will take a deterministic dynamical systems approach, beginning with the hydrodynamic equations for the active nematic. This approach reveals that the infinite-dimensional phase space of all possible flow configurations is populated by Exact Coherent Structures (ECS), which are exact solutions of the hydrodynamic equations with distinct and regular spatiotemporal structure; examples include unstable equilibria, periodic orbits, and traveling waves. The ECS are connected by dynamical pathways called invariant manifolds. The main hypothesis in this approach is that turbulence corresponds to a trajectory meandering in the phase space, transitioning between ECS by traveling on the invariant manifolds. Similar approaches have been successful in characterizing high Reynolds number turbulence of passive fluids. Here, I will present the first systematic study of active nematic ECS and their invariant manifolds and discuss their role in characterizing the phenomenon of active turbulence.
Neural Population Dynamics for Skilled Motor Control
The ability to reach, grasp, and manipulate objects is a remarkable expression of motor skill, and the loss of this ability in injury, stroke, or disease can be devastating. These behaviors are controlled by the coordinated activity of tens of millions of neurons distributed across many CNS regions, including the primary motor cortex. While many studies have characterized the activity of single cortical neurons during reaching, the principles governing the dynamics of large, distributed neural populations remain largely unknown. Recent work in primates has suggested that during the execution of reaching, motor cortex may autonomously generate the neural pattern controlling the movement, much like the spinal central pattern generator for locomotion. In this seminar, I will describe recent work that tests this hypothesis using large-scale neural recording, high-resolution behavioral measurements, dynamical systems approaches to data analysis, and optogenetic perturbations in mice. We find, by contrast, that motor cortex requires strong, continuous, and time-varying thalamic input to generate the neural pattern driving reaching. In a second line of work, we demonstrate that the cortico-cerebellar loop is not critical for driving the arm towards the target, but instead fine-tunes movement parameters to enable precise and accurate behavior. Finally, I will describe my future plans to apply these experimental and analytical approaches to the adaptive control of locomotion in complex environments.
Credit Assignment in Neural Networks through Deep Feedback Control
The success of deep learning sparked interest in whether the brain learns by using similar techniques for assigning credit to each synaptic weight for its contribution to the network output. However, the majority of current attempts at biologically plausible learning methods are either non-local in time, require highly specific connectivity motifs, or have no clear link to any known mathematical optimization method. Here, we introduce Deep Feedback Control (DFC), a new learning method that uses a feedback controller to drive a deep neural network to match a desired output target and whose control signal can be used for credit assignment. The resulting learning rule is fully local in space and time and approximates Gauss-Newton optimization for a wide range of feedback connectivity patterns. To further underline its biological plausibility, we relate DFC to a multi-compartment model of cortical pyramidal neurons with a local voltage-dependent synaptic plasticity rule, consistent with recent theories of dendritic processing. By combining dynamical systems theory with mathematical optimization theory, we provide a strong theoretical foundation for DFC that we corroborate with detailed results on toy experiments and standard computer-vision benchmarks.
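The following NumPy toy is not the DFC algorithm itself, which uses a specific controller and approximates Gauss-Newton optimization; it only illustrates the underlying idea under simplified assumptions: a feedback controller nudges hidden activity until the output matches the target, and each weight is then updated locally toward the controlled activity.

import numpy as np

rng = np.random.default_rng(2)
n_in, n_hid, n_out = 4, 8, 2
W1 = 0.3 * rng.standard_normal((n_hid, n_in))   # input -> hidden weights
W2 = 0.3 * rng.standard_normal((n_out, n_hid))  # hidden -> output weights
Q = W2.T.copy()   # fixed feedback (controller) weights; DFC shows many choices work

x = rng.standard_normal(n_in)
y_target = np.array([1.0, -1.0])

for step in range(200):
    # Controlled forward pass: a feedback signal u nudges the hidden layer
    # until the output approaches the target.
    u = np.zeros(n_hid)
    for _ in range(50):
        h = np.tanh(W1 @ x + u)
        y = W2 @ h
        u += 0.1 * (Q @ (y_target - y))          # simple integral controller

    # Local, control-based updates: each layer moves its feedforward
    # prediction toward the controlled activity.
    W2 += 0.05 * np.outer(y_target - y, h)
    W1 += 0.05 * np.outer(h - np.tanh(W1 @ x), x)

print(np.round(W2 @ np.tanh(W1 @ x), 3))   # free-running output; should approach y_target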
Dynamical Neuromorphic Systems
In this talk, I aim to show that the dynamical properties of emerging nanodevices can accelerate the development of smart and environmentally friendly chips that inherently learn through their physics. The goal of neuromorphic computing is to draw inspiration from the architecture of the brain to build low-power circuits for artificial intelligence. I will first give a brief overview of the state of the art of neuromorphic computing, highlighting the opportunities offered by emerging nanodevices in this field and the associated challenges. I will then show that the intrinsic dynamical properties of these nanodevices can be exploited at the device and algorithmic levels to assemble systems that infer and learn through their physics. I will illustrate these possibilities with examples from our work on spintronic neural networks that communicate and compute through their microwave oscillations, and on an algorithm called Equilibrium Propagation that minimizes both the error and the energy of a dynamical system.
Stability-Flexibility Dilemma in Cognitive Control: A Dynamical System Perspective
Constraints on control-dependent processing have become a fundamental concept in general theories of cognition that explain human behavior in terms of rational adaptations to these constraints. However, these theories lack a rationale for why such constraints would exist in the first place. Recent work suggests that constraints on the allocation of control facilitate flexible task switching at the expense of the stability needed to support goal-directed behavior in the face of distraction. We formulate this problem in a dynamical system in which control signals are represented as attractors and in which constraints on control allocation limit the depth of these attractors. We derive formal expressions of the stability-flexibility tradeoff, showing that constraints on control allocation improve cognitive flexibility but impair cognitive stability. We provide evidence that human participants adopt higher constraints on the allocation of control as the demand for flexibility increases, but that participants deviate from optimal constraints. In ongoing work, we are investigating how the collaborative performance of a group of individuals can benefit from individual differences in the balance between cognitive stability and flexibility.
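A one-dimensional toy illustration of the dilemma under assumed dynamics (not the paper's model): the control signal lives in a double-well landscape whose depth stands in for the constraint on control allocation. Deep attractors resist distraction noise (stability) but respond slowly, or not at all, to a task-switch cue (poor flexibility).

import numpy as np

def drift(x, depth, cue):
    """Force from a double-well 'task' landscape (attractors at x = +/-1,
    barrier height set by depth) plus an external task cue."""
    return -depth * x * (x**2 - 1.0) + cue

def stability(depth, T=5000, dt=0.01, noise=0.6, seed=0):
    """Fraction of time the control state stays in task A's attractor (x > 0)
    while distraction enters as noise and no cue supports the task."""
    rng = np.random.default_rng(seed)
    x, count = 1.0, 0
    for _ in range(T):
        x += dt * drift(x, depth, cue=0.0) + np.sqrt(dt) * noise * rng.standard_normal()
        count += x > 0
    return count / T

def switch_delay(depth, cue=-1.0, dt=0.01, max_steps=5000):
    """Steps needed for a task cue to move the noise-free state from task A
    (x = 1) to task B (x < 0); returns max_steps if the cue never overcomes
    the attractor."""
    x = 1.0
    for t in range(max_steps):
        x += dt * drift(x, depth, cue)
        if x < 0:
            return t
    return max_steps

# Shallow attractors (weak constraints) switch quickly but are more easily
# disrupted by distraction; deep attractors are stable but slow to reconfigure.
for depth in (0.5, 4.0):
    print(f"depth={depth}: stability={stability(depth):.2f}, "
          f"switch delay={switch_delay(depth)} steps")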
Linking dimensionality to computation in neural networks
The link between behavior, learning and the underlying connectome is a fundamental open problem in neuroscience. In my talk I will show how it is possible to develop a theory that bridges across these three levels (animal behavior, learning and network connectivity) based on the geometrical properties of neural activity. The central tool in my approach is the dimensionality of neural activity. I will link animal complex behavior to the geometry of neural representations, specifically their dimensionality; I will then show how learning shapes changes in such geometrical properties and how local connectivity properties can further regulate them. As a result, I will explain how the complexity of neural representations emerges from both behavioral demands (top-down approach) and learning or connectivity features (bottom-up approach). I will build these results regarding neural dynamics and representations starting from the analysis of neural recordings, by means of theoretical and computational tools that blend dynamical systems, artificial intelligence and statistical physics approaches.
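One widely used way to quantify the dimensionality of neural activity, given here purely as an illustrative example and not necessarily the measure used in this work, is the participation ratio of the covariance spectrum:

import numpy as np

def participation_ratio(X):
    """Effective dimensionality of activity X (time x neurons):
    PR = (sum_i lam_i)^2 / sum_i lam_i^2, where lam_i are eigenvalues of the
    neuron-by-neuron covariance. PR = N if variance is spread evenly over N
    directions and 1 if a single direction dominates."""
    lam = np.linalg.eigvalsh(np.cov(X, rowvar=False))
    return lam.sum() ** 2 / (lam ** 2).sum()

# Toy example: 100 neurons whose activity lives in a 3-dimensional latent subspace.
rng = np.random.default_rng(0)
latent = rng.standard_normal((1000, 3))
X = latent @ rng.standard_normal((3, 100)) + 0.05 * rng.standard_normal((1000, 100))
print(participation_ratio(X))   # roughly 3, the latent dimensionality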
Theory, reimagined
Physics offers countless examples for which theoretical predictions are astonishingly powerful. But it is hard to imagine similar precision in complex systems, where the number of components and the interdependencies between them prohibit a first-principles approach; look no further than the billions of neurons and trillions of connections within our own brains. In such settings, how do we even identify the important theoretical questions? We describe a systems-scale perspective in which we integrate information theory, dynamical systems and statistical physics to extract understanding directly from measurements. We demonstrate our approach with a reconstructed state space of the behavior of the nematode C. elegans, revealing a chaotic attractor with a symmetric Lyapunov spectrum and a novel perspective on motor control. We then outline a maximally predictive coarse-graining in which nonlinear dynamics are subsumed into a linear, ensemble evolution to obtain a simple yet accurate model on multiple scales. With this coarse-graining we identify long timescales and collective states in the Langevin dynamics of a double-well potential, the Lorenz system, and in worm behavior. We suggest that such an "inverse" approach offers an emergent, quantitative framework in which to seek, rather than impose, effective organizing principles of complex systems.
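A schematic of the general recipe, in which every modeling choice below (embedding length, number of partitions, toy Langevin data) is an assumption for illustration rather than the speaker's pipeline: delay-embed the observable, partition the reconstructed state space, estimate the ensemble (transition) operator, and read slow timescales off its subdominant eigenvalues.

import numpy as np
from sklearn.cluster import KMeans

# Toy time series: Langevin dynamics in a double-well potential.
rng = np.random.default_rng(0)
dt, T = 0.01, 200_000
x = np.empty(T)
x[0] = 1.0
for t in range(T - 1):
    x[t + 1] = x[t] + dt * (x[t] - x[t] ** 3) + np.sqrt(dt) * 0.5 * rng.standard_normal()

# Delay embedding: stack K consecutive samples into a reconstructed state vector.
K = 10
emb = np.column_stack([x[i:T - K + i] for i in range(K)])

# Coarse-graining: partition the embedded states and count transitions between partitions.
n_states = 50
labels = KMeans(n_clusters=n_states, n_init=4, random_state=0).fit_predict(emb)
P = np.zeros((n_states, n_states))
for a, b in zip(labels[:-1], labels[1:]):
    P[a, b] += 1
P /= P.sum(axis=1, keepdims=True)           # row-stochastic ensemble-evolution matrix

# The subdominant eigenvalue of P sets the slowest implied relaxation timescale,
# here the rare hopping between the two wells.
eigvals = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
print(f"slowest implied timescale ~ {-dt / np.log(eigvals[1]):.1f} time units")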
Simons-Emory Workshop on Neural Dynamics: What could neural dynamics have to say about neural computation, and do we know how to listen?
Speakers will deliver focused 10-minute talks, with periods reserved for broader discussion on topics at the intersection of neural dynamics and computation.
Organizer & Moderator: Chethan Pandarinath - Emory University and Georgia Tech
Speakers & Discussants:
Adrienne Fairhall - U Washington
Mehrdad Jazayeri - MIT
John Krakauer - Johns Hopkins
Francesca Mastrogiuseppe - Gatsby / UCL
Abigail Person - U Colorado
Abigail Russo - Princeton
Krishna Shenoy - Stanford
Saurabh Vyas - Columbia
Pancreatic α and β cells are globally phase-locked
The Ca2+-modulated pulsatile secretions of glucagon and insulin by pancreatic α and β cells play a key role in glucose metabolism and homeostasis. However, how the different types of cells in an islet couple and coordinate to give rise to various Ca2+ oscillation patterns, and how these patterns are tuned by paracrine regulation, remain elusive. Here we developed a microfluidic device to facilitate long-term recording of islet Ca2+ activity at the single-cell level and found that islets show heterogeneous but intrinsic oscillation patterns. The α and β cells in an islet oscillate in antiphase and are globally phase-locked, displaying a variety of oscillation modes. A mathematical model of islet oscillation maps out the dependence of the oscillation modes on the paracrine interactions between α and β cells. Our study reveals the origin of the islet oscillation patterns and highlights the role of paracrine regulation in tuning them.
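As a toy illustration of antiphase locking between two cell populations (a generic phase-oscillator caricature, not the biophysical islet model developed in the study, and with purely illustrative parameters):

import numpy as np

# Two phase oscillators standing in for alpha- and beta-cell Ca2+ rhythms.
# Repulsive coupling (-K sin) pushes their phase difference toward pi,
# i.e. antiphase locking.
dt, T = 0.01, 20_000
w_alpha, w_beta = 1.0, 1.1       # slightly different intrinsic frequencies
K = 0.5                          # 'paracrine' coupling strength

theta_a, theta_b = 0.0, 0.2
phase_diff = np.empty(T)
for t in range(T):
    theta_a += dt * (w_alpha - K * np.sin(theta_b - theta_a))
    theta_b += dt * (w_beta - K * np.sin(theta_a - theta_b))
    phase_diff[t] = (theta_b - theta_a) % (2 * np.pi)

print(phase_diff[-1])            # settles near pi: the two rhythms lock in antiphase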
Neural manifolds for the stable control of movement
Animals perform learned actions with remarkable consistency for years after acquiring a skill. What is the neural correlate of this stability? We explore this question from the perspective of neural populations. Recent work suggests that the building blocks of neural function may be the activation of population-wide activity patterns: neural modes that capture the dominant co-variation patterns of population activity and define a task-specific, low-dimensional neural manifold. The time-dependent activation of the neural modes results in latent dynamics. We hypothesize that the latent dynamics associated with the consistent execution of a behaviour need to remain stable, and we use an alignment method to establish this stability. Once identified, stable latent dynamics allow for the prediction of various behavioural features via fixed decoder models. We conclude that latent cortical dynamics within the task manifold are the fundamental and stable building blocks underlying consistent behaviour.
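A sketch of the alignment logic under simplifying assumptions (the toy below uses an orthogonal Procrustes alignment of whitened principal-component trajectories in place of the study's actual alignment method): project each recording's population activity onto its leading components, rotate one set of latent trajectories onto the other, and check whether the aligned trajectories match; a high correlation after alignment is the signature of stable latent dynamics.

import numpy as np
from scipy.linalg import orthogonal_procrustes

def latent_trajectories(X, d=3):
    """Unit-variance scores along the top-d principal components of X (time x neurons)."""
    Xc = X - X.mean(axis=0)
    U, _, _ = np.linalg.svd(Xc, full_matrices=False)
    return U[:, :d] * np.sqrt(X.shape[0])

# Toy data: two 'days' share the same latent dynamics but mix them into
# different neurons (different weights), as if the recorded units changed.
rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 400)
latent = np.column_stack([np.sin(t), np.cos(t), np.sin(2 * t)])
day1 = latent @ rng.standard_normal((3, 80)) + 0.1 * rng.standard_normal((400, 80))
day2 = latent @ rng.standard_normal((3, 80)) + 0.1 * rng.standard_normal((400, 80))

L1, L2 = latent_trajectories(day1), latent_trajectories(day2)
R, _ = orthogonal_procrustes(L2, L1)     # rotation aligning day-2 latents to day-1
corr = np.corrcoef((L2 @ R).ravel(), L1.ravel())[0, 1]
print(f"correlation of aligned latent trajectories: {corr:.2f}")   # high when dynamics are stable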
Modeling gait dynamics with switching non-linear dynamical systems
Bernstein Conference 2024
Neural manifold discovery via dynamical systems
Bernstein Conference 2024
Using Dynamical Systems Theory to Improve Temporal Credit Assignment in Spiking Neural Networks
Bernstein Conference 2024
Data-driven dynamical systems model of epilepsy development simulates intervention strategies
COSYNE 2022
Dynamical systems analysis reveals a novel hypothalamic encoding of state in nodes controlling social behavior
COSYNE 2022
Modeling multi-region neural communication during decision making with recurrent switching dynamical systems
COSYNE 2022
Decomposed linear dynamical systems for C. elegans functional connectivity
COSYNE 2023
Parsing neural dynamics with infinite recurrent switching linear dynamical systems
COSYNE 2023
Capturing condition dependence in neural dynamics with Gaussian process linear dynamical systems
COSYNE 2025
Neural manifold discovery via dynamical systems
COSYNE 2025
Task Structures Shape Underlying Dynamical Systems That Implement Computation
COSYNE 2025
Understanding the effects of neural perturbations using cell-type dynamical systems
COSYNE 2025
Decomposed Linear Dynamical Systems (dLDS) for learning the latent components of neural dynamics
Neuromatch 5