Control Theory

Discover seminars, jobs, and research tagged with control theory across World Wide.
15 curated items · 9 Seminars · 6 Positions
Updated 1 day ago
Position

N/A

Donders Centre for Cognition, Donders Institute for Brain, Cognition and Behaviour, School of Artificial Intelligence at Radboud University Nijmegen
Radboud University Nijmegen
Dec 5, 2025

The AI Department of the Donders Centre for Cognition (DCC), embedded in the Donders Institute for Brain, Cognition and Behaviour and the School of Artificial Intelligence at Radboud University Nijmegen, is looking for a researcher in reinforcement learning with an emphasis on safety and robustness, an interest in natural computing, and an interest in applications in neurotechnology and other domains such as robotics, healthcare, and/or sustainability. You will be expected to perform top-quality research in (deep) reinforcement learning, actively contribute to the DBI2 consortium, interact and collaborate with other researchers and specialists in academia and/or industry, and be an inspiring member of our staff with excellent communication skills. You are also expected to engage with students through teaching and supervision of master's projects, not exceeding 20% of your time.

Position · Neuroscience

Prof. Jean-Pascal Pfister

University of Bern
University of Bern, 5 Bühlplatz, CH-3012 Bern, CH
Dec 5, 2025

The project aims to answer an almost 100-year-old question in neuroscience: “What are spikes good for?” Indeed, since the discovery of action potentials by Lord Adrian in 1926, it has remained largely unknown what the benefits of spiking neurons are when compared to analog neurons. Traditionally, it has been argued that spikes are good for long-distance communication or for temporally precise computation. However, there is no systematic study that quantitatively compares the communication as well as the computational benefits of spiking neurons with respect to analog neurons. The aim of the project is to systematically quantify the benefits of spiking at various levels. The PhD students and postdoc will be supervised by Prof. Jean-Pascal Pfister (Theoretical Neuroscience Group, Department of Physiology, University of Bern).

Position · Neuroscience

Jean-Pascal Pfister

Theoretical Neuroscience Group, Department of Physiology, University of Bern
University of Bern, Bühlplatz 5, 3012 Bern, Switzerland
Dec 5, 2025

The Theoretical Neuroscience Group of the University of Bern is seeking applications for a PhD position, funded by a Swiss National Science Foundation grant titled “Why Spikes?”. This project aims to answer a nearly century-old question in neuroscience: “What are spikes good for?” Indeed, since the discovery of action potentials by Lord Adrian in 1926, it has remained largely unknown what the benefits of spiking neurons are when compared to analog neurons. Traditionally, it has been argued that spikes are good for long-distance communication or for temporally precise computation. However, there is no systematic study that quantitatively compares the communication as well as the computational benefits of spiking neurons with respect to analog neurons. The aim of the project is to systematically quantify the benefits of spiking at various levels by developing and analyzing appropriate mathematical models. The PhD student will be supervised by Prof. Jean-Pascal Pfister (Theoretical Neuroscience Group, Department of Physiology, University of Bern). The project will involve close collaboration within a highly motivated team, as well as a regular exchange of ideas with the other theory groups at the institute.

Position

Prof. Angela Yu

Technical University of Darmstadt
TU Darmstadt, Germany
Dec 5, 2025

Prof. Angela Yu recently moved from UCSD to TU Darmstadt as the Alexander von Humboldt AI Professor and has a number of PhD and postdoc positions available in her growing “Computational Modeling of Intelligent Systems” research group. Applications are solicited from highly motivated and qualified candidates who are interested in interdisciplinary research at the intersection of natural and artificial intelligence. Prof. Yu’s group uses mathematically rigorous and algorithmically diverse tools to understand the nature of the representations and computations that give rise to intelligent behavior. There is a fair amount of flexibility in the actual choice of project, as long as the project excites both the candidate and Prof. Yu.

For example, Prof. Yu is currently interested in investigating scientific questions such as: How is socio-emotional intelligence similar to or different from cognitive intelligence? Is there a fundamental tradeoff, given the prevalence of autism among scientists and engineers? How can AI be taught socio-emotional intelligence? How are artificial intelligence (e.g., as demonstrated by large language models) and natural intelligence (e.g., as measured by IQ tests) similar or different in their underlying representations or computations? What roles do intrinsic motivations such as curiosity and computational efficiency play in intelligent systems? How can insights about artificial intelligence improve the understanding and augmentation of human intelligence? Are capacity limitations with respect to attention and working memory a feature or a bug in the brain? How can AI systems be enhanced by attention or working memory?

More broadly, Prof. Yu’s group employs and develops diverse machine learning and mathematical tools, e.g., Bayesian statistical modeling, control theory, reinforcement learning, artificial neural networks, and information theory, to explain various aspects of cognition important for intelligence: perception, attention, decision-making, learning, cognitive control, active sensing, economic behavior, and social interactions. Applicants who have experience with two or more of the technical areas, and/or one or more of the application areas, are highly encouraged to apply.

As part of the Centre for Cognitive Science at TU Darmstadt, the Hessian AI Center, and the Computer Science Department, Prof. Yu’s group members are encouraged and expected to collaborate extensively with preeminent researchers in cognitive science and AI, both nearby and internationally. All positions will be based at TU Darmstadt, Germany. Starting dates for the positions are flexible. Salaries are commensurate with experience and expertise, and are highly competitive with respect to U.S. and European standards. The working language in the group and within the larger academic community is English; fluency in German is not required, and the university provides free German lessons for interested scientific staff.

Position · Neuroscience

Ann Kennedy

The Scripps Research Institute
San Diego, CA
Dec 5, 2025

The Kennedy lab is recruiting for multiple funded postdoctoral positions in theoretical and computational neuroscience, following our recent lab move to Scripps Research in San Diego, CA! Ongoing projects in the lab span topics including: reservoir computing with heterogeneous cell types; reinforcement learning and control theory analyses of complex behavior; neuromechanical whole-organism modeling; diffusion models for imitation learning and forecasting of mouse social interactions; and joint analysis and modeling of the effects of internal states on neural, vocalization, and behavioral data. Additional NIH and foundation funding supports work on characterizing the progression of behavioral phenotypes in Parkinson’s disease, modeling the cellular and circuit mechanisms underlying internal-state-dependent changes in neural population dynamics, and characterizing neural correlates of social relationships across species. Projects are flexible and can be tailored to applicants’ research and training goals, and there are abundant opportunities for new collaborations with local experimental groups. San Diego has a fantastic research community and a very high quality of life. Our campus is located on the Pacific coast, at the northern edge of UCSD and not far from the Salk Institute. Postdoctoral stipends are well above NIH guidelines and include a relocation bonus, with research professorship positions available for qualified applicants.

Seminar · Neuroscience · Recording

Feedback control in the nervous system: from cells and circuits to behaviour

Timothy O'Leary
Department of Engineering, University of Cambridge
May 15, 2023

The nervous system is fundamentally a closed-loop control device: the output of actions continually influences the internal state and subsequent actions. This is true at the single-cell and even the molecular level, where “actions” take the form of signals that are fed back to achieve a variety of functions, including homeostasis, excitability, and various kinds of multistability that allow switching and the storage of memory. It is also true at the behavioural level, where an animal’s motor actions directly influence sensory input on short timescales, and higher-level information about goals and intended actions is continually updated on the basis of current and past actions. Studying the brain in a closed-loop setting requires a multidisciplinary approach, leveraging engineering and theory as well as advances in measuring and manipulating the nervous system. I will describe our recent attempts to achieve this fusion of approaches at multiple levels in the nervous system, from synaptic signalling to closed-loop brain-machine interfaces.
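
To make the closed-loop idea concrete, here is a minimal sketch of integral feedback control, one of the standard mechanisms behind the kind of homeostasis the talk mentions. Everything here (the gain `k_i`, the set point, the step disturbance) is an illustrative assumption, not the speaker's model.

```python
import numpy as np

# Integral feedback: a controller accumulates the error between a regulated
# variable x and its set point; at steady state the accumulated error must
# stop changing, which forces x back to the set point even under a
# constant disturbance.
dt, steps = 0.01, 4000
target = 1.0        # set point (hypothetical units)
k_i = 0.5           # integral gain (assumed value)
x, g = 0.0, 0.0     # regulated variable and integral controller state

trace = []
for step in range(steps):
    disturbance = 0.5 if step > steps // 2 else 0.0  # step perturbation
    g += k_i * (target - x) * dt       # integrate the error
    x += (-x + g + disturbance) * dt   # leaky dynamics driven by controller
    trace.append(x)

# x returns to `target` even after the disturbance switches on.
print(f"final x = {trace[-1]:.3f} (target {target})")
```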

Seminar · Neuroscience · Recording

Asymmetric signaling across the hierarchy of cytoarchitecture within the human connectome

Linden Parkes
Rutgers Brain Health Institute
Mar 21, 2023

Cortical variations in cytoarchitecture form a sensory-fugal axis that shapes regional profiles of extrinsic connectivity and is thought to guide signal propagation and integration across the cortical hierarchy. While neuroimaging work has shown that this axis constrains local properties of the human connectome, it remains unclear whether it also shapes the asymmetric signaling that arises from higher-order topology. Here, we used network control theory to examine the amount of energy required to propagate dynamics across the sensory-fugal axis. Our results revealed an asymmetry in this energy, indicating that bottom-up transitions were easier to complete compared to top-down. Supporting analyses demonstrated that asymmetries were underpinned by a connectome topology that is wired to support efficient bottom-up signaling. Lastly, we found that asymmetries correlated with differences in communicability and intrinsic neuronal time scales and lessened throughout youth. Our results show that cortical variation in cytoarchitecture may guide the formation of macroscopic connectome topology.
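
As a rough illustration of the energy calculation behind such results, the sketch below computes the Gramian-based minimum control energy for a toy linear system x' = Ax + Bu. The random matrix A, the identity input matrix B, and the horizon T are stand-in assumptions, not the study's connectome data or analysis pipeline.

```python
import numpy as np
from scipy.linalg import expm

# Minimum energy to drive x' = A x + B u from x0 to xT in time T:
#   E = v^T W^{-1} v,  with  v = xT - e^{AT} x0  and
#   W = integral_0^T e^{At} B B^T e^{A^T t} dt  (controllability Gramian).
rng = np.random.default_rng(0)
n = 8                                          # toy network of 8 nodes
A = rng.normal(size=(n, n)) / np.sqrt(n)
A -= (np.max(np.linalg.eigvals(A).real) + 1.0) * np.eye(n)  # stabilize A
B = np.eye(n)                                  # inputs to every node
T = 1.0

def control_energy(x0, xT, steps=200):
    dt = T / steps
    W = np.zeros((n, n))
    for k in range(steps):                     # midpoint quadrature
        eAt = expm(A * (k + 0.5) * dt)
        W += eAt @ B @ B.T @ eAt.T * dt
    v = xT - expm(A * T) @ x0                  # state not reached for free
    return v @ np.linalg.solve(W, v)

x_bottom, x_top = rng.normal(size=n), rng.normal(size=n)
print("bottom-up energy:", control_energy(x_bottom, x_top))
print("top-down energy: ", control_energy(x_top, x_bottom))
```

The asymmetry the talk reports corresponds to these two energies differing systematically when A encodes real connectome topology rather than a random matrix.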

Seminar · Physics of Life · Recording

Towards model-based control of active matter: active nematics and oscillator networks

Michael Norton
Rochester Institute of Technology
Jan 30, 2022

The richness of active matter's spatiotemporal patterns continues to capture our imagination. Shaping these emergent dynamics into pre-determined forms of our choosing is a grand challenge in the field. To complicate matters, multiple dynamical attractors can coexist in such systems, leading to initial condition-dependent dynamics. Consequently, non-trivial spatiotemporal inputs are generally needed to access these states. Optimal control theory provides a general framework for identifying such inputs and represents a promising computational tool for guiding experiments and interacting with various systems in soft active matter and biology. As an exemplar, I first consider an extensile active nematic fluid confined to a disk. In the absence of control, the system produces two topological defects that perpetually circulate. Optimal control identifies a time-varying active stress field that restructures the director field, flipping the system to its other attractor that rotates in the opposite direction. As a second, analogous case, I examine a small network of coupled Belousov-Zhabotinsky chemical oscillators that possesses two dominant attractors, two wave states of opposing chirality. Optimal control similarly achieves the task of attractor switching. I conclude with a few forward-looking remarks on how the same model-based control approach might come to bear on problems in biology.
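
The attractor-switching task has a simple analogue that fits in a few lines: drive a bistable toy system x' = x - x^3 + u(t) from the attractor at x = -1 to the one at x = +1 while penalizing control effort. This direct-shooting sketch is an illustrative stand-in; the weight `lam`, horizon, and discretization are assumed, and the talk's active nematic and oscillator-network problems are far richer.

```python
import numpy as np
from scipy.optimize import minimize

dt, K = 0.05, 60          # Euler step and number of control segments
lam = 0.05                # weight on control energy (assumed value)
x_target = 1.0            # the other attractor

def simulate(u):
    x = -1.0              # start in the left attractor
    for uk in u:
        x += (x - x**3 + uk) * dt
    return x

def cost(u):
    # terminal error plus integrated control effort
    return (simulate(u) - x_target) ** 2 + lam * np.sum(u**2) * dt

res = minimize(cost, np.zeros(K), method="BFGS")
print(f"final state: {simulate(res.x):.3f} (target {x_target})")
print(f"control energy: {np.sum(res.x ** 2) * dt:.3f}")
```

The optimizer finds a time-varying input that pushes the state over the barrier and then lets the dynamics relax into the target attractor, the same attractor-switching logic applied in the talk to much higher-dimensional fields.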

Seminar · Neuroscience · Recording

NMC4 Keynote: A network perspective on cognitive effort

Dani Bassett
University of Pennsylvania
Nov 30, 2021

Cognitive effort has long been an important explanatory factor in the study of human behavior in health and disease. Yet, the biophysical nature of cognitive effort remains far from understood. In this talk, I will offer a network perspective on cognitive effort. I will begin by canvassing a recent perspective that casts cognitive effort in the framework of network control theory, developed and frequently used in systems engineering. The theory describes how much energy is required to move the brain from one activity state to another, when activity is constrained to pass along physical pathways in a connectome. I will then turn to empirical studies that link this theoretical notion of energy with cognitive effort in a behaviorally demanding task, and with a metabolic notion of energy as accessible to FDG-PET imaging. Finally, I will ask how this structurally-constrained activity flow can provide us with insights about the brain’s non-equilibrium nature. Using a general tool for quantifying entropy production in macroscopic systems, I will provide evidence to suggest that states of marked cognitive effort are also states of greater entropy production. Collectively, the work I discuss offers a complementary view of cognitive effort as a dynamical process occurring atop a complex network.
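
The entropy-production measurement has a compact textbook analogue: for a discrete state sequence, broken detailed balance shows up as forward transitions i→j outnumbering their reverses j→i, and the entropy production rate is the KL divergence between forward and time-reversed transition statistics. The three-state chain below is a made-up example, not brain data or the exact estimator used in the talk.

```python
import numpy as np

rng = np.random.default_rng(1)
# Three-state Markov chain with a net "clockwise" probability current,
# so detailed balance is broken and entropy is produced.
P = np.array([[0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8],
              [0.8, 0.1, 0.1]])
states = [0]
for _ in range(100_000):
    states.append(rng.choice(3, p=P[states[-1]]))

counts = np.zeros((3, 3))
for a, b in zip(states[:-1], states[1:]):
    counts[a, b] += 1
joint = counts / counts.sum()          # empirical p(state_t, state_{t+1})

# Entropy production per step: sum_ij p(i,j) * ln[ p(i,j) / p(j,i) ]
rate = sum(joint[i, j] * np.log(joint[i, j] / joint[j, i])
           for i in range(3) for j in range(3)
           if joint[i, j] > 0 and joint[j, i] > 0)
print(f"estimated entropy production: {rate:.3f} nats per step")
```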

Seminar · Neuroscience

Theory of gating in recurrent neural networks

Kamesh Krishnamurthy
Princeton University
Sep 15, 2020

Recurrent neural networks (RNNs) are powerful dynamical models, widely used in machine learning (ML) for processing sequential data, and also in neuroscience to understand the emergent properties of networks of real neurons. Prior theoretical work on the properties of RNNs has focused on models with additive interactions. However, real neurons can have gating, i.e., multiplicative interactions, and gating is also a central feature of the best-performing RNNs in machine learning. Here, we develop a dynamical mean-field theory (DMFT) to study the consequences of gating in RNNs. We use random matrix theory to show how gating robustly produces marginal stability and line attractors, important mechanisms for biologically relevant computations requiring long memory. The long-time behavior of the gated network is studied using its Lyapunov spectrum, and the DMFT is used to provide a novel analytical expression for the maximum Lyapunov exponent, demonstrating its close relation to the relaxation time of the dynamics. Gating is also shown to give rise to a novel, discontinuous transition to chaos, in which the proliferation of critical points (topological complexity) is decoupled from the appearance of chaotic dynamics (dynamical complexity), contrary to a seminal result for additive RNNs. Critical surfaces and regions of marginal stability in the parameter space are indicated in phase diagrams, thus providing a map for principled parameter choices by ML practitioners. Finally, we develop a field theory for the gradients that arise in training by incorporating the adjoint sensitivity framework from control theory into the DMFT. This paves the way for the use of powerful field-theoretic techniques to study training and gradients in large RNNs.
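
A minimal way to see gating and its effect on chaos numerically is sketched below: a rate network whose recurrent input is multiplied by a sigmoidal gate, with the maximum Lyapunov exponent estimated by tracking a renormalized nearby trajectory (a Benettin-style estimate). The gating form and parameters here are simple assumptions for illustration, not necessarily the model analyzed with the DMFT in the talk.

```python
import numpy as np

rng = np.random.default_rng(2)
n, gain = 200, 3.0                      # network size and coupling gain
Wh = rng.normal(size=(n, n)) * gain / np.sqrt(n)   # recurrent weights
Wg = rng.normal(size=(n, n)) / np.sqrt(n)          # gate weights
dt, steps, eps = 0.05, 4000, 1e-8

def step(x):
    gate = 1.0 / (1.0 + np.exp(-Wg @ x))           # multiplicative gate
    return x + dt * (-x + gate * np.tanh(Wh @ x))

x = rng.normal(size=n)
for _ in range(1000):                   # discard the initial transient
    x = step(x)

v = rng.normal(size=n)
y = x + eps * v / np.linalg.norm(v)     # nearby trajectory, distance eps
log_growth = 0.0
for _ in range(steps):
    x, y = step(x), step(y)
    d = np.linalg.norm(y - x)
    log_growth += np.log(d / eps)       # accumulate expansion rate
    y = x + (y - x) * (eps / d)         # renormalize the separation
print(f"max Lyapunov exponent ~ {log_growth / (steps * dt):.3f}")
```

A positive estimate indicates chaos; sweeping the gain (or the gate weights) traces out the kind of transition the talk characterizes analytically.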

Seminar · Neuroscience

Information and Decision-Making

Daniel Polani
University of Hertfordshire
Jul 19, 2020

In recent years it has become increasingly clear that (Shannon) information is a central resource for organisms, akin in importance to energy. Any decision that an organism or a subsystem of an organism takes involves the acquisition, selection, and processing of information, and ultimately its concentration and enaction. It is the consequences of this balance that will occupy us in this talk. This perception-action loop picture of an agent's life cycle is well established and expounded especially in the context of Fuster's sensorimotor hierarchies. Nevertheless, the information-theoretic perspective drastically expands the potential and predictive power of the perception-action loop view. On the one hand, information can be treated, to a significant extent, as a resource that is sought out and utilized by an organism. On the other hand, unlike energy, information is not additive. The intrinsic structure and dynamics of information can be exceedingly complex and subtle; over the last two decades it has been discovered that Shannon information possesses a rich and nontrivial intrinsic structure that must be taken into account when informational contributions, information flow, or causal interactions of processes are investigated, whether in the brain or in other complex processes. In addition, strong parallels between information theory and control theory have emerged. This parallelism between the theories allows one to obtain unexpected insights into the nature and properties of the perception-action loop. Through the lens of information theory, one can not only come up with novel hypotheses about necessary conditions for the organization of information processing in a brain, but also with constructive conjectures and predictions about what behaviours, brain structures and dynamics, and even evolutionary pressures one can expect to operate on biological organisms, induced purely by informational considerations.
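
The non-additivity claim has a classic two-bit illustration: in the XOR relationship below, each input alone carries zero Shannon information about the output, yet the pair determines it completely. This standard example is added for concreteness and is not taken from the talk.

```python
import itertools
import numpy as np

def mutual_info(joint):
    """I(A;B) in bits from a joint probability table p(a, b)."""
    pa, pb = joint.sum(axis=1), joint.sum(axis=0)
    return sum(joint[a, b] * np.log2(joint[a, b] / (pa[a] * pb[b]))
               for a in range(joint.shape[0]) for b in range(joint.shape[1])
               if joint[a, b] > 0)

# X1, X2 are fair coin flips and Y = X1 XOR X2.
p_x1_y = np.zeros((2, 2))    # joint of (X1, Y)
p_x2_y = np.zeros((2, 2))    # joint of (X2, Y)
p_x12_y = np.zeros((4, 2))   # joint of ((X1, X2), Y)
for x1, x2 in itertools.product((0, 1), repeat=2):
    y = x1 ^ x2
    p_x1_y[x1, y] += 0.25
    p_x2_y[x2, y] += 0.25
    p_x12_y[2 * x1 + x2, y] += 0.25

print(f"I(X1;Y)    = {mutual_info(p_x1_y):.2f} bits")   # 0.00
print(f"I(X2;Y)    = {mutual_info(p_x2_y):.2f} bits")   # 0.00
print(f"I(X1,X2;Y) = {mutual_info(p_x12_y):.2f} bits")  # 1.00
```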