Statistical Mechanics
Miguel Aguilera
The postdoc position is focused on self-organized network modelling. The project aims to develop a theory of learning in liquid brains and of their adaptive potential when embodied as agents interacting with a changing external environment. The goal is to turn liquid brains from a theoretical concept into a useful tool for the machine learning community. This could lead to more open-ended, self-improving systems that exploit fluid reconfiguration of nodes as an adaptive dimension, which remains largely unexplored. It could also enable modes of learning that avoid catastrophic forgetting, since reconfigurations in the network are based on reversible movement patterns. This could have important implications for new paradigms such as edge computing.
Federico Stella
The project will focus on the computational investigation of the role of neural reactivations in memory. Since their discovery, neural reactivations occurring during sleep have emerged as an exceptional tool for investigating the process of memory formation in the brain. The phenomenon has mostly been associated with the hippocampus, an area known for its role in the processing of new memories and their initial storage. Continuous advances in data acquisition techniques are giving us unprecedented access to the activity of large-scale networks during sleep, in the hippocampus and in other cortical regions. At the same time, our theoretical understanding of the computations underlying neural reactivations and, more generally, memory representations has only begun to take shape. Combining mathematical modeling of neural networks with the analysis of existing datasets, we will address key aspects of this phenomenon, such as: 1) the role of different sleep phases in regulating the reactivation process and in modulating the evolution of a memory trace; 2) the relationship of hippocampal reactivations to (semantic) learning and knowledge generalization; 3) the relevance of the statistical properties of reactivations for learning in cortico-hippocampal networks.
Eleonora Russo
One Ph.D. position is available within the National Ph.D. Program in ‘Theoretical and Applied Neuroscience’. The Ph.D. will be held in the Brain Dynamics Lab at the BioRobotics Institute of Sant'Anna School of Advanced Studies, Pisa (Italy), in collaboration with the Kelsch Group at the University Medical Center, Johannes Gutenberg University, Mainz (Germany). Understanding the dynamical systems governing neuronal activity is crucial for unraveling how the brain performs cognitive functions. Historically, various forms of recurrent neural networks (RNNs) have been proposed as simplified models of the cortex. Recently, thanks to remarkable advances in machine learning, the ability of RNNs to capture temporal dependencies has been used to develop tools that approximate unknown dynamical systems by training on observed time-series data. This approach allows us to use time series of electrophysiological multiple single-unit recordings, as well as whole-brain ultra-high-field functional imaging (fMRI), to parametrize neuronal population dynamics and build functional models of cognitive functions. The objective of this research project is to investigate the neuronal mechanisms underlying the reinforcement and depreciation of perceived stimuli in the extended network of mouse forebrain regions. The Ph.D. student will carry out their studies primarily at the BioRobotics Institute of Sant'Anna School of Advanced Studies. The project will expose the student to a highly international and interdisciplinary context, in tight collaboration with theoretical and experimental neuroscientists in Italy and abroad. At the BioRobotics Institute, the research groups involved will be the Brain Dynamics Lab, the Computational Neuroengineering Lab, and the Bioelectronics and Bioengineering Area. The project will also be carried out in close collaboration with the experimental group of Prof. Wolfgang Kelsch, Johannes Gutenberg University, Mainz, Germany.
During the PhD, the student will have the opportunity to spend a period abroad.
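The description above mentions training RNNs on observed time-series data to approximate unknown dynamical systems. As a minimal, self-contained illustration of that idea (not the lab's actual pipeline), the sketch below uses an echo state network, one simple member of the RNN family: a fixed random recurrent reservoir is driven by the data, and only a linear readout is trained, here on a toy sine-wave "recording". The architecture and all parameter values are illustrative assumptions.

```python
import numpy as np

# Echo-state-network sketch: approximate an unknown dynamical system
# from an observed time series by driving a fixed random reservoir
# and fitting only a linear next-step readout.

rng = np.random.default_rng(0)

# "Observed" time series from an unknown dynamical system (here a sine).
T = 500
data = np.sin(0.1 * np.arange(T + 1))

# Random recurrent reservoir, rescaled for stable (contractive) dynamics.
N = 100
W = rng.normal(size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius 0.9
w_in = rng.normal(size=N)

# Drive the reservoir with the series and collect its states.
x = np.zeros(N)
states = []
for t in range(T):
    x = np.tanh(W @ x + w_in * data[t])
    states.append(x.copy())
X = np.array(states[50:])   # discard the initial transient
y = data[51:T + 1]          # next-step targets

# Ridge-regression readout: the only trained parameters.
ridge = 1e-6
w_out = np.linalg.solve(X.T @ X + ridge * np.eye(N), X.T @ y)

pred = X @ w_out
err = np.sqrt(np.mean((pred - y) ** 2))
print("next-step RMSE:", err)
```

The appeal of this reservoir variant is that training reduces to linear regression, so no backpropagation through time is needed; fully trained RNNs, as used in modern dynamical-systems reconstruction, follow the same fit-to-observed-trajectories logic.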
Mutation-induced infection waves in diseases like COVID-19
After more than 4 million deaths worldwide, the ongoing vaccination campaign against COVID-19 is now competing with the emergence of increasingly contagious mutations that repeatedly supplant earlier strains. Given the near-absence of historical examples of the long-time evolution of infectious diseases under similar circumstances, models are crucial for exemplifying possible scenarios. Accordingly, in the present work we systematically generalize the popular susceptible-infected-recovered model to account for mutations leading to repeatedly occurring new strains, which we coarse-grain using tools from statistical mechanics to derive a model predicting the most likely outcomes. The model predicts that mutations can induce super-exponential growth of infection numbers at early times, which self-amplifies into giant infection waves driven by a positive feedback loop between infection numbers and mutations, leading to the simultaneous infection of the majority of the population. At later stages, if vaccination progresses too slowly, mutations can interrupt an ongoing decrease of infection numbers and cause infection revivals, occurring as single waves or even as whole wave trains with alternating periods of decreasing and increasing infection numbers. Our results may be useful for discussions on releasing vaccine patents to reduce the risk of mutation-induced infection revivals, and for coordinating the lifting of measures once infection numbers trend downwards.
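The abstract describes a positive feedback loop between infection numbers and mutations. As a rough illustration (not the authors' coarse-grained model), the sketch below adds a crude feedback term to a standard SIR integration: the effective transmission rate grows with the cumulative number of infections, standing in for the accumulation of more contagious strains. All parameter values are illustrative assumptions.

```python
# Minimal SIR sketch with a mutation-feedback term: beta(t) grows with
# the cumulative fraction ever infected, a crude stand-in for the
# emergence of increasingly contagious strains.

def simulate(beta0=0.2, gamma=0.1, alpha=0.0, days=300, dt=0.1):
    """Euler-integrate S-I-R with beta(t) = beta0 * (1 + alpha * cum)."""
    S, I, R = 0.999, 0.001, 0.0
    cum = I  # cumulative fraction ever infected
    for _ in range(int(days / dt)):
        beta = beta0 * (1.0 + alpha * cum)   # mutation feedback
        new_inf = beta * S * I * dt
        rec = gamma * I * dt
        S -= new_inf
        I += new_inf - rec
        R += rec
        cum += new_inf
    return S, I, R

# Same baseline epidemic with and without mutation feedback: the
# feedback self-amplifies transmission and enlarges the final wave.
S_no, _, R_no = simulate(alpha=0.0)
S_mut, _, R_mut = simulate(alpha=5.0)
print("final recovered fraction:", R_no, "->", R_mut)
```

Comparing the two runs shows the qualitative effect the abstract describes: because each infection slightly raises the effective transmission rate, growth is steeper than exponential at early times and the final epidemic size increases.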
A dynamical model of the visual cortex
In the past several years, I have been involved in building a biologically realistic model of the monkey visual cortex. Work on one of the input layers (4Ca) of the primary visual cortex (V1) is now nearly complete, and I would like to share some of what I have learned with the community. After a brief overview of the model and its capabilities, I will focus on three sets of results that represent three different aspects of the modeling: (i) emergent E-I dynamics in local circuits; (ii) how visual cortical neurons acquire their ability to detect edges and directions of motion; and (iii) a view across the cortical surface: nonequilibrium steady states (in analogy with statistical mechanics) and beyond.
Light-bacteria interactions
In 1676, using candlelight and a small glass sphere as the lens, van Leeuwenhoek discovered the microscopic world of living microorganisms. Today, using lasers, spatial light modulators, digital cameras and computers, we study the statistical and fluid mechanics of microswimmers in ways that were unimaginable only 50 years ago. With light we can image swimming bacteria in 3D, apply controllable force fields or sculpt their 3D environment. In addition to shaping the physical world outside cells, we can use light to control the internal state of genetically modified bacteria. I will review our recent work on light-bacteria interactions, going from some fundamental problems in the fluid and statistical mechanics of microswimmers to the use of bacteria as propellers for micro-machines or as a "living" paint controlled by light.
Correlations, chaos, and criticality in neural networks
The remarkable information-processing properties of biological and artificial neuronal networks alike arise from the interaction of large numbers of neurons. A central quest is thus to characterize their collective states. Moreover, the directed coupling between pairs of neurons and their continuous dissipation of energy drive the dynamics of neuronal networks out of thermodynamic equilibrium. Tools from non-equilibrium statistical mechanics and field theory are thus instrumental for obtaining a quantitative understanding. We here present progress with this recent approach [1]. On the experimental side, we show how correlations between pairs of neurons are informative about the dynamics of cortical networks: they are poised near a transition to chaos [2]. Close to this transition, we find prolonged sequential memory for past signals [3]. In the chaotic regime, networks offer representations of information whose dimensionality expands with time. We show how this mechanism aids classification performance [4]. Together, these works illustrate the fruitful interplay between theoretical physics, neuronal networks, and neural information processing.
Is there universality in biology?
It is sometimes said that there are two reasons why physics is so successful as a science. One is that it deals with very simple problems. The other is that it attempts to account only for universal aspects of systems at a desired level of description, with lower-level phenomena subsumed into a small number of adjustable parameters. It is widely believed that this approach is unlikely to be useful in biology, which is intimidatingly complex, where “everything has an exception”, and where there are a huge number of undetermined parameters. I will argue, nonetheless, that there are important, experimentally testable aspects of biology that exhibit universality and should be amenable to being tackled from a physics perspective. My suggestion is that this can lead to useful new insights into the existence and universal characteristics of living systems. I will try to justify this point of view by contrasting the goals and practices of the field of condensed matter physics with materials science, and then, by extension, the goals and practices of the newly emerging field of “Physics of Living Systems” with biology. Specific biological examples that I will discuss include the following:
- Universal patterns of gene expression in cell biology
- Universal scaling laws in ecosystems, including the species-area law, Kleiber’s law, and the Paradox of the Plankton
- Universality of the genetic code
- Universality of thermodynamic utilization in microbial communities
- Universal scaling laws in the tree of life
The question of what can be learned from studying universal phenomena in biology will also be discussed. Universal phenomena, by their very nature, shed little light on detailed microscopic levels of description. Yet there is no point in seeking idiosyncratic mechanistic explanations for phenomena whose explanation is found in rather general principles, such as the central limit theorem, that every microscopic mechanism is constrained to obey.
Thus, physical perspectives may be better suited than traditional biological perspectives to answering certain questions, such as those about universality. Concomitantly, it must be recognized that the identification and understanding of universal phenomena may not be a good answer to questions that have traditionally occupied biological scientists. Lastly, I plan to talk about what is perhaps the central question of universality in biology: why does the phenomenon of life occur at all? Is it an inevitable consequence of the laws of physics or some special geochemical accident? What methodology could even begin to answer this question? I will try to explain why traditional approaches to biology do not aim to answer this question, by comparing with our understanding of superconductivity as a physical phenomenon, and with the theory of universal computation.

References
Nigel Goldenfeld, Tommaso Biancalani, and Farshid Jafarpour. Universal biology and the statistical mechanics of early life. Phil. Trans. R. Soc. A 375, 20160341 (2017).
Nigel Goldenfeld and Carl R. Woese. Life is physics: evolution as a collective phenomenon far from equilibrium. Annu. Rev. Condens. Matter Phys. 2, 375-399 (2011).