Control Theory
The AI Department of the Donders Centre for Cognition (DCC), embedded in the Donders Institute for Brain, Cognition and Behaviour, and the School of Artificial Intelligence at Radboud University Nijmegen are looking for a researcher in reinforcement learning with an emphasis on safety and robustness, an interest in natural computing, and an interest in applications in neurotechnology and other domains such as robotics, healthcare and/or sustainability. You will be expected to perform top-quality research in (deep) reinforcement learning, actively contribute to the DBI2 consortium, interact and collaborate with other researchers and specialists in academia and/or industry, and be an inspiring member of our staff with excellent communication skills. You are also expected to engage with students through teaching and Master's projects, not exceeding 20% of your time.
Prof. Jean-Pascal Pfister
The project aims to answer an almost century-old question in neuroscience: “What are spikes good for?” Indeed, since the discovery of action potentials by Lord Adrian in 1926, it has remained largely unknown what the benefits of spiking neurons are compared to analog neurons. Traditionally, it has been argued that spikes are good for long-distance communication or for temporally precise computation. However, there is no systematic study that quantitatively compares the communication and computational benefits of spiking neurons with respect to analog neurons. The aim of the project is to systematically quantify the benefits of spiking at various levels. The PhD students and post-doc will be supervised by Prof. Jean-Pascal Pfister (Theoretical Neuroscience Group, Department of Physiology, University of Bern).
Jean-Pascal Pfister
The Theoretical Neuroscience Group of the University of Bern is seeking applications for a PhD position, funded by a Swiss National Science Foundation grant titled “Why Spikes?”. This project aims to answer a nearly century-old question in neuroscience: “What are spikes good for?” Indeed, since the discovery of action potentials by Lord Adrian in 1926, it has remained largely unknown what the benefits of spiking neurons are compared to analog neurons. Traditionally, it has been argued that spikes are good for long-distance communication or for temporally precise computation. However, there is no systematic study that quantitatively compares the communication and computational benefits of spiking neurons with respect to analog neurons. The aim of the project is to systematically quantify the benefits of spiking at various levels by developing and analyzing appropriate mathematical models. The PhD student will be supervised by Prof. Jean-Pascal Pfister (Theoretical Neuroscience Group, Department of Physiology, University of Bern). The project will involve close collaborations within a highly motivated team as well as regular exchange of ideas with the other theory groups at the institute.
Prof. Angela Yu
Prof. Angela Yu recently moved from UCSD to TU Darmstadt as the Alexander von Humboldt AI Professor, and has a number of PhD and postdoc positions available in her growing “Computational Modeling of Intelligent Systems” research group. Applications are solicited from highly motivated and qualified candidates who are interested in interdisciplinary research at the intersection of natural and artificial intelligence. Prof. Yu’s group uses mathematically rigorous and algorithmically diverse tools to understand the nature of the representations and computations that give rise to intelligent behavior. There is a fair amount of flexibility in the actual choice of project, as long as the project excites both the candidate and Prof. Yu. For example, Prof. Yu is currently interested in investigating scientific questions such as: How is socio-emotional intelligence similar to or different from cognitive intelligence? Is there a fundamental tradeoff, given the prevalence of autism among scientists and engineers? How can AI be taught socio-emotional intelligence? How are artificial intelligence (e.g. as demonstrated by large language models) and natural intelligence (e.g. as measured by IQ tests) similar or different in their underlying representations or computations? What roles do intrinsic motivations such as curiosity and computational efficiency play in intelligent systems? How can insights about artificial intelligence improve the understanding and augmentation of human intelligence? Are capacity limitations with respect to attention and working memory a feature or a bug in the brain? How can AI systems be enhanced by attention or working memory? More broadly, Prof. Yu’s group employs and develops diverse machine learning and mathematical tools, e.g. Bayesian statistical modeling, control theory, reinforcement learning, artificial neural networks, and information theory, to explain various aspects of cognition important for intelligence: perception, attention, decision-making, learning, cognitive control, active sensing, economic behavior, and social interactions. Candidates who have experience with two or more of the technical areas, and/or one or more of the application areas, are highly encouraged to apply. As part of the Centre for Cognitive Science at TU Darmstadt, the Hessian AI Center, and the Computer Science Department, Prof. Yu’s group members are encouraged and expected to collaborate extensively with preeminent researchers in cognitive science and AI, both nearby and internationally. All positions will be based at TU Darmstadt, Germany. Starting dates are flexible. Salaries are commensurate with experience and expertise, and are highly competitive by U.S. and European standards. The working language in the group and within the larger academic community is English. Fluency in German is not required; the university provides free German lessons for interested scientific staff.
Ann Kennedy
The Kennedy lab is recruiting for multiple funded postdoctoral positions in theoretical and computational neuroscience, following our recent lab move to Scripps Research in San Diego, CA! Ongoing projects in the lab span reservoir computing with heterogeneous cell types, reinforcement learning and control-theoretic analysis of complex behavior, neuromechanical whole-organism modeling, diffusion models for imitation learning and forecasting of mouse social interactions, and joint analysis and modeling of the effects of internal states on neural, vocalization, and behavioral data. Additional NIH and foundation funding supports projects on characterizing the progression of behavioral phenotypes in Parkinson’s disease, modeling the cellular and circuit mechanisms underlying internal-state-dependent changes in neural population dynamics, and characterizing neural correlates of social relationships across species. Projects are flexible and can be tailored to applicants’ research and training goals, and there are abundant opportunities for new collaborations with local experimental groups. San Diego has a fantastic research community and a very high quality of life. Our campus is located on the Pacific coast, at the northern edge of UCSD and not far from the Salk Institute. Postdoctoral stipends are well above NIH guidelines and include a relocation bonus, with research professorship positions available for qualified applicants.
Prof. Angela Yu
Multiple PhD and postdoctoral positions are immediately available in Prof. Angela Yu's research group at TU Darmstadt. The group investigates the intersection of natural and artificial intelligence, using mathematically rigorous approaches to understand the representations and computations underlying intelligent behavior. The research particularly addresses the challenges of inferential uncertainty and the opportunities of volitional control. The group employs diverse methodological tools, including Bayesian statistical modeling, control theory, reinforcement learning, and information theory, to develop theoretical frameworks explaining key aspects of cognition: perception, attention, decision-making, learning, cognitive control, active sensing, economic behavior, and social interactions.
Feedback control in the nervous system: from cells and circuits to behaviour
The nervous system is fundamentally a closed-loop control device: the output of actions continually influences the internal state and subsequent actions. This is true at the single-cell and even the molecular level, where “actions” take the form of signals that are fed back to achieve a variety of functions, including homeostasis, excitability and various kinds of multistability that allow switching and storage of memory. It is also true at the behavioural level, where an animal’s motor actions directly influence sensory input on short timescales, and where higher-level information about goals and intended actions is continually updated on the basis of current and past actions. Studying the brain in a closed-loop setting requires a multidisciplinary approach, leveraging engineering and theory as well as advances in measuring and manipulating the nervous system. I will describe our recent attempts to achieve this fusion of approaches at multiple levels in the nervous system, from synaptic signalling to closed-loop brain-machine interfaces.
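As a toy illustration of the closed-loop principle described above (our example, not the speaker's), the sketch below compares a one-variable leaky system with and without negative feedback; closing the loop shrinks the effect of a transient disturbance roughly by the loop gain. All names and parameters are illustrative.

```python
# A minimal negative-feedback demo: a leaky state variable x is held near a
# target; a transient disturbance is applied with the loop open (k=0) or
# closed (k=10). Parameters are purely illustrative.

def peak_deviation(k_feedback, steps=600, dt=0.01, target=1.0):
    """Largest excursion from the target while a disturbance is applied."""
    x, peak = target, 0.0
    for t in range(steps):
        disturbance = 0.5 if 200 <= t < 400 else 0.0   # transient perturbation
        u = k_feedback * (target - x)                  # negative feedback signal
        x += dt * (-x + target + u + disturbance)      # leaky dynamics
        peak = max(peak, abs(x - target))
    return peak

print("open loop   (k=0):  peak deviation =", round(peak_deviation(0.0), 3))
print("closed loop (k=10): peak deviation =", round(peak_deviation(10.0), 3))
```

With these illustrative parameters the closed loop attenuates the disturbance roughly tenfold, the textbook benefit of feeding the output back to the input.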
Asymmetric signaling across the hierarchy of cytoarchitecture within the human connectome
Cortical variations in cytoarchitecture form a sensory-fugal axis that shapes regional profiles of extrinsic connectivity and is thought to guide signal propagation and integration across the cortical hierarchy. While neuroimaging work has shown that this axis constrains local properties of the human connectome, it remains unclear whether it also shapes the asymmetric signaling that arises from higher-order topology. Here, we used network control theory to examine the amount of energy required to propagate dynamics across the sensory-fugal axis. Our results revealed an asymmetry in this energy, indicating that bottom-up transitions were easier to complete compared to top-down. Supporting analyses demonstrated that asymmetries were underpinned by a connectome topology that is wired to support efficient bottom-up signaling. Lastly, we found that asymmetries correlated with differences in communicability and intrinsic neuronal time scales and lessened throughout youth. Our results show that cortical variation in cytoarchitecture may guide the formation of macroscopic connectome topology.
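For readers unfamiliar with the energy quantity above, the following is a minimal sketch of the standard minimum-control-energy computation in network control theory, assuming the usual linear model x' = Ax + Bu on a toy network (the matrices, scaling, and horizon here are ours; the study's imaging pipeline is not reproduced). The minimum energy to steer the system from x0 to xf over horizon T is vᵀW⁻¹v, with W the finite-horizon controllability Gramian and v = xf − e^{AT}x0.

```python
# Minimum control energy on a toy linear network (illustrative parameters).
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n = 8
A = rng.normal(size=(n, n)); A = (A + A.T) / 2                 # toy symmetric connectome
A = A / (np.abs(np.linalg.eigvalsh(A)).max() + 1) - np.eye(n)  # stabilise
B = np.eye(n)                                                  # control every node
T, steps = 1.0, 200

# Finite-horizon controllability Gramian W = int_0^T e^{At} B B' e^{A't} dt
ts = np.linspace(0, T, steps)
W = np.zeros((n, n))
for t in ts:
    M = expm(A * t) @ B
    W += M @ M.T * (T / steps)

x0 = np.zeros(n)
xf = rng.normal(size=n)                   # target activity state
v = xf - expm(A * T) @ x0
energy = v @ np.linalg.solve(W, v)        # minimum control energy
print("minimum control energy:", energy)
```

Swapping x0 and xf gives the energy of the reverse transition; the bottom-up versus top-down asymmetry reported above corresponds to the two directions requiring different energies.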
Towards model-based control of active matter: active nematics and oscillator networks
The richness of active matter's spatiotemporal patterns continues to capture our imagination. Shaping these emergent dynamics into pre-determined forms of our choosing is a grand challenge in the field. To complicate matters, multiple dynamical attractors can coexist in such systems, leading to initial condition-dependent dynamics. Consequently, non-trivial spatiotemporal inputs are generally needed to access these states. Optimal control theory provides a general framework for identifying such inputs and represents a promising computational tool for guiding experiments and interacting with various systems in soft active matter and biology. As an exemplar, I first consider an extensile active nematic fluid confined to a disk. In the absence of control, the system produces two topological defects that perpetually circulate. Optimal control identifies a time-varying active stress field that restructures the director field, flipping the system to its other attractor that rotates in the opposite direction. As a second, analogous case, I examine a small network of coupled Belousov-Zhabotinsky chemical oscillators that possesses two dominant attractors, two wave states of opposing chirality. Optimal control similarly achieves the task of attractor switching. I conclude with a few forward-looking remarks on how the same model-based control approach might be brought to bear on problems in biology.
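Schematically (our notation; the speaker's cost functionals and dynamics are specific to each system), the attractor-switching task can be written as a standard optimal control problem, with the optimal input characterized by Pontryagin's conditions:

```latex
% Generic statement: steer the model \dot{x} = f(x,u) from an initial state
% into the basin of the target attractor, penalising control effort via the
% running cost \ell and mismatch via the terminal cost \Phi.
\[
\min_{u(\cdot)}\; J[u] = \int_0^T \ell(x, u)\,dt + \Phi\!\left(x(T)\right),
\qquad \dot{x} = f(x, u),\; x(0) = x_0 .
\]
% With Hamiltonian H = \ell + \lambda^{\top} f, the optimal input satisfies
\[
\dot{\lambda} = -\frac{\partial H}{\partial x}, \qquad
\lambda(T) = \frac{\partial \Phi}{\partial x}\bigg|_{x(T)}, \qquad
\frac{\partial H}{\partial u} = 0 .
\]
```

Solving the state equation forward and the adjoint equation backward, then descending on u, is the generic numerical recipe for the "time-varying active stress field" type of input described above.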
NMC4 Keynote: A network perspective on cognitive effort
Cognitive effort has long been an important explanatory factor in the study of human behavior in health and disease. Yet, the biophysical nature of cognitive effort remains far from understood. In this talk, I will offer a network perspective on cognitive effort. I will begin by canvassing a recent perspective that casts cognitive effort in the framework of network control theory, developed and frequently used in systems engineering. The theory describes how much energy is required to move the brain from one activity state to another, when activity is constrained to pass along physical pathways in a connectome. I will then turn to empirical studies that link this theoretical notion of energy with cognitive effort in a behaviorally demanding task, and with a metabolic notion of energy as accessible to FDG-PET imaging. Finally, I will ask how this structurally-constrained activity flow can provide us with insights about the brain’s non-equilibrium nature. Using a general tool for quantifying entropy production in macroscopic systems, I will provide evidence to suggest that states of marked cognitive effort are also states of greater entropy production. Collectively, the work I discuss offers a complementary view of cognitive effort as a dynamical process occurring atop a complex network.
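The entropy-production measurement mentioned at the end is typically made with a Markov approximation of the observed state sequence; the estimator below is the standard one from the broken-detailed-balance literature (the talk's exact pipeline may differ):

```latex
% Estimate joint transition probabilities P(i, j) between successive
% coarse-grained brain states, and compare forward and reversed transitions:
\[
\dot{S} \;=\; \sum_{i,j} P(i, j)\,\ln\!\frac{P(i, j)}{P(j, i)} \;\ge\; 0 .
\]
% \dot{S} vanishes exactly when detailed balance holds, P(i,j) = P(j,i);
% larger values indicate stronger time-irreversibility, i.e. greater
% entropy production.
```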
Deep kernel methods
Deep neural networks (DNNs) with the flexibility to learn good top-layer representations have eclipsed shallow kernel methods, which lack that flexibility. Here, we take inspiration from deep neural networks to develop a new family of deep kernel methods. In a deep kernel method, there is a kernel at every layer, and the kernels are jointly optimized to improve performance (with strong regularisation). We establish the representational power of deep kernel methods by showing that they perform exact inference in an infinitely wide Bayesian neural network or deep Gaussian process. Next, we conjecture that the deep kernel machine objective is unimodal, and give a proof of unimodality for linear kernels. Finally, we exploit the simplicity of the deep kernel machine loss to develop a new family of optimizers, based on a matrix equation from control theory, that converges in around 10 steps.
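The abstract does not name the matrix equation; for orientation, the two canonical control-theory matrix equations, the Lyapunov and algebraic Riccati equations, both have direct solvers in SciPy. The snippet below is a sketch of those standard equations, not of the paper's optimizer:

```python
# Canonical control-theory matrix equations, solved with SciPy.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

rng = np.random.default_rng(1)
n = 4
A = -np.eye(n) + 0.1 * rng.normal(size=(n, n))  # stable system matrix
Q = np.eye(n)

# Lyapunov equation: A X + X A^T + Q = 0
X = solve_continuous_lyapunov(A, -Q)
print("Lyapunov residual:", np.abs(A @ X + X @ A.T + Q).max())

# Algebraic Riccati equation: A^T X + X A - X B R^{-1} B^T X + Q = 0
B, R = np.eye(n), np.eye(n)
X = solve_continuous_are(A, B, Q, R)
print("Riccati residual:", np.abs(A.T @ X + X @ A - X @ B @ B.T @ X + Q).max())
```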
Advances in Computational Psychiatry: Understanding (cognitive) control as a network process
The human brain is a complex organ characterized by heterogeneous patterns of interconnections. Non-invasive imaging techniques now allow for these patterns to be carefully and comprehensively mapped in individual humans, paving the way for a better understanding of how wiring supports cognitive processes. While a large body of work now focuses on descriptive statistics to characterize these wiring patterns, a critical open question lies in how the organization of these networks constrains the potential repertoire of brain dynamics. In this talk, I will describe an approach for understanding how perturbations to brain dynamics propagate through complex wiring patterns, driving the brain into new states of activity. Drawing on a range of disciplinary tools – from graph theory to network control theory and optimization – I will identify control points in brain networks and characterize trajectories of brain activity states following perturbation to those points. Finally, I will describe how these computational tools and approaches can be used to better understand the brain's intrinsic control mechanisms and their alterations in psychiatric conditions.
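One common way such control points are ranked in this literature is "average controllability": the trace of the controllability Gramian obtained when each node is controlled in isolation (cf. Gu et al., 2015). Below is a minimal sketch on a toy network; the matrices and the stabilizing rescaling are illustrative choices, not the speaker's pipeline.

```python
# Rank nodes of a toy structural network by average controllability.
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

rng = np.random.default_rng(2)
n = 10
A = np.abs(rng.normal(size=(n, n))); A = (A + A.T) / 2  # toy structural network
A = A / (1 + np.abs(np.linalg.eigvalsh(A)).max())       # rescale to be stable

scores = []
for i in range(n):
    B = np.zeros((n, 1)); B[i] = 1.0                    # control node i only
    W = solve_discrete_lyapunov(A, B @ B.T)             # W = sum_k A^k B B' (A^T)^k
    scores.append(np.trace(W))                          # average controllability

ranking = np.argsort(scores)[::-1]
print("nodes ranked by average controllability:", ranking)
```

High-scoring nodes are those from which input spreads most broadly through the wiring, one operationalization of the "control points" described above.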
Firing Homeostasis in Neural Circuits: From Basic Principles to Malfunctions
Neural circuit functions are stabilized by homeostatic mechanisms at long timescales in response to changes in experience and learning. However, we still do not know which specific physiological variables are being stabilized, nor which cellular or neural-network components comprise the homeostatic machinery. At this point, most evidence suggests that the distribution of firing rates amongst neurons in a brain circuit is the key variable that is maintained around a circuit-specific set-point value, in a process called firing rate homeostasis. Here, I will discuss our recent findings that implicate mitochondria as a central player in mediating firing rate homeostasis and its impairments. Mitochondria are known to regulate neuronal variables such as synaptic vesicle release and intracellular calcium concentration; we searched for the mitochondrial signaling pathways that are essential for homeostatic regulation of firing rates. We utilize basic concepts of control theory to build a framework for classifying possible components of the homeostatic machinery in neural networks. This framework may facilitate the identification of new homeostatic pathways whose malfunctions drive instability of neural circuits in distinct brain disorders.
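As a concrete instance of the control-theoretic framing, firing-rate homeostasis is often cast as integral feedback: a slow controller integrates the deviation of the firing rate from its set point and adjusts a gain. The toy simulation below is our schematic formulation, not the lab's model; all variables and time constants are illustrative.

```python
# Integral-feedback sketch of firing-rate homeostasis: a fast rate r tracks
# g * input, while a slow controller integrates (r_set - r) into the gain g.

dt, T = 0.01, 40.0
r_set = 5.0               # circuit-specific set point (Hz)
tau_r, tau_g = 0.1, 5.0   # fast rate dynamics, slow homeostatic controller
drive = 5.0
r, g = r_set, 1.0

for step in range(int(T / dt)):
    t = step * dt
    inp = drive * (0.5 if 10.0 < t < 25.0 else 1.0)  # sensory-deprivation epoch
    r += dt / tau_r * (-r + g * inp)                 # firing-rate dynamics
    g += dt / tau_g * (r_set - r)                    # integral feedback on gain
    if step % 500 == 0:
        print(f"t={t:5.1f}  rate={r:5.2f}  gain={g:4.2f}")
```

The integral action is what makes the set point robust: the gain stops changing only when the rate has returned exactly to r_set, regardless of the size of the perturbation.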
Theory of gating in recurrent neural networks
Recurrent neural networks (RNNs) are powerful dynamical models, widely used in machine learning (ML) for processing sequential data, and also in neuroscience to understand the emergent properties of networks of real neurons. Prior theoretical work on the properties of RNNs has focused on models with additive interactions. However, real neurons can have gating, i.e., multiplicative interactions, and gating is also a central feature of the best-performing RNNs in machine learning. Here, we develop a dynamical mean-field theory (DMFT) to study the consequences of gating in RNNs. We use random matrix theory to show how gating robustly produces marginal stability and line attractors – important mechanisms for biologically relevant computations requiring long memory. The long-time behavior of the gated network is studied using its Lyapunov spectrum, and the DMFT is used to provide a novel analytical expression for the maximum Lyapunov exponent, demonstrating its close relation to the relaxation time of the dynamics. Gating is also shown to give rise to a novel, discontinuous transition to chaos, where the proliferation of critical points (topological complexity) is decoupled from the appearance of chaotic dynamics (dynamical complexity), contrary to a seminal result for additive RNNs. Critical surfaces and regions of marginal stability in the parameter space are indicated in phase diagrams, providing a map for principled parameter choices by ML practitioners. Finally, we develop a field theory for the gradients that arise in training by incorporating the adjoint sensitivity framework from control theory into the DMFT. This paves the way for the use of powerful field-theoretic techniques to study training and gradients in large RNNs.
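The maximum Lyapunov exponent discussed above can also be estimated numerically; below is a standard two-trajectory (Benettin-style) estimate on a toy gated network. The gating form, coupling strength, and all parameters are our simplification for illustration, not the model analyzed in the talk.

```python
# Numerical estimate of the maximum Lyapunov exponent of a toy gated RNN:
# evolve two nearby trajectories, accumulate log separation growth, and
# renormalise the separation at every step.
import numpy as np

rng = np.random.default_rng(3)
N, g = 200, 5.0                                      # strong coupling: chaotic regime
Wh = rng.normal(scale=g / np.sqrt(N), size=(N, N))   # recurrent weights
Wg = rng.normal(scale=1.0 / np.sqrt(N), size=(N, N)) # gating weights

def f(x):
    gate = 1.0 / (1.0 + np.exp(-Wg @ np.tanh(x)))    # multiplicative gate
    return -x + gate * (Wh @ np.tanh(x))

dt, steps, eps = 0.05, 4000, 1e-8
x = rng.normal(size=N)
y = x + eps * rng.normal(size=N)
log_growth = 0.0
for step in range(steps):
    x += dt * f(x)
    y += dt * f(y)
    d = np.linalg.norm(y - x)
    log_growth += np.log(d / eps)
    y = x + (eps / d) * (y - x)                      # renormalise separation
print("max Lyapunov exponent ~", log_growth / (steps * dt))
```

A positive estimate indicates chaos; the DMFT result in the talk gives this quantity in closed form instead of by simulation.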
Information and Decision-Making
In recent years it has become increasingly clear that (Shannon) information is a central resource for organisms, akin in importance to energy. Any decision that an organism or a subsystem of an organism takes involves the acquisition, selection, and processing of information, and ultimately its concentration and enaction. It is the consequences of this balance that will occupy us in this talk. This perception-action loop picture of an agent's life cycle is well established and expounded especially in the context of Fuster's sensorimotor hierarchies. Nevertheless, the information-theoretic perspective drastically expands the potential and predictive power of the perception-action loop view. On the one hand, information can be treated – to a significant extent – as a resource that is sought and utilized by an organism. On the other hand, unlike energy, information is not additive. The intrinsic structure and dynamics of information can be exceedingly complex and subtle; in the last two decades it has been discovered that Shannon information possesses a rich and nontrivial intrinsic structure that must be taken into account when informational contributions, information flows or causal interactions of processes are investigated, whether in the brain or in other complex processes. In addition, strong parallels between information theory and control theory have emerged. This parallelism between the theories allows one to obtain unexpected insights into the nature and properties of the perception-action loop. Through the lens of information theory, one can come up not only with novel hypotheses about necessary conditions for the organization of information processing in a brain, but also with constructive conjectures and predictions about which behaviours, brain structures, dynamics and even evolutionary pressures one can expect to operate on biological organisms, induced purely by informational considerations.
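One concrete formalisation of the information-control parallel mentioned above is "empowerment": the Shannon capacity of the channel from an agent's actions to its future sensor states. The sketch below (toy channel of our choosing; standard Blahut-Arimoto iteration) computes this capacity for a small perception-action loop.

```python
# Channel capacity via Blahut-Arimoto, applied to a toy action -> sensor
# channel P[a, s'] = p(s' | a). The capacity (in bits) is one quantitative
# reading of "information as a resource" for an acting agent.
import numpy as np

def capacity(P, iters=200):
    """Capacity of the channel P[a, s'] in bits, via Blahut-Arimoto."""
    n_a = P.shape[0]
    q = np.full(n_a, 1.0 / n_a)                           # action distribution
    for _ in range(iters):
        r = q @ P                                         # marginal over sensor states
        D = np.sum(P * np.log2(P / r + 1e-30), axis=1)    # per-action KL (bits)
        q = q * np.exp2(D)                                # multiplicative update
        q /= q.sum()
    r = q @ P
    return float(np.sum(q * np.sum(P * np.log2(P / r + 1e-30), axis=1)))

# Noisy 3-action channel: each action mostly leads to its own sensor state.
P = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])
print("empowerment ~", round(capacity(P), 3), "bits")
```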