Statistical Physics
The hired postdoctoral researcher will mainly work on WP2, i.e., on the development of new formalisms and methods to apply to the higher-order interaction patterns identified in the data analyzed in WP1. The project aims to build a theoretical and data-analysis framework to demonstrate the role of higher-order interactions (HOIs) in human brain networks supporting causal learning. The Hinteract project includes three scientific work packages (WPs): WP1 focuses on developing an information-theoretic approach to infer task-related HOIs from neural time series and on characterizing HOIs supporting causal learning using MEG and SEEG data. WP2 involves developing a network science formalism to analyze the structure and dynamics of functional HOI patterns and characterizing the hierarchical organization of learning-related HOIs. WP3 focuses on compiling and sharing the neuroinformatics tools developed in the project and making them interoperable with the EBRAINS infrastructure.
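The project description does not specify which information-theoretic HOI measure WP1 will use; one commonly used choice is the O-information, which is positive for redundancy-dominated and negative for synergy-dominated interactions. Below is a minimal sketch under a Gaussian approximation; the function names and the toy data are illustrative assumptions, not part of the project.

```python
import numpy as np

def gaussian_entropy(cov):
    """Differential entropy (nats) of a Gaussian with covariance `cov` (scalar or matrix)."""
    cov = np.atleast_2d(cov)
    n = cov.shape[0]
    return 0.5 * np.log((2 * np.pi * np.e) ** n * np.linalg.det(cov))

def o_information(X):
    """O-information of the columns of X (samples x variables), Gaussian approximation.
    Positive values: redundancy-dominated HOIs; negative values: synergy-dominated."""
    n = X.shape[1]
    cov = np.cov(X, rowvar=False)
    omega = (n - 2) * gaussian_entropy(cov)
    for i in range(n):
        rest = [j for j in range(n) if j != i]
        omega += gaussian_entropy(cov[i, i]) - gaussian_entropy(cov[np.ix_(rest, rest)])
    return omega

# Toy usage: three nearly identical signals standing in for neural time series.
rng = np.random.default_rng(0)
z = rng.standard_normal(1000)
X = np.column_stack([z + 0.1 * rng.standard_normal(1000) for _ in range(3)])
print(o_information(X))  # clearly positive: a redundant triplet
```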
Virtual Brain Twins for Brain Medicine and Epilepsy
Over the past decade we have demonstrated that fusing subject-specific structural information of the human brain with mathematical dynamic models allows building biologically realistic brain network models, which have predictive value beyond the explanatory power of either approach alone. The network nodes hold neural population models, which are derived using mean-field techniques from statistical physics expressing ensemble activity via collective variables. Our hybrid approach fuses data-driven with forward-modeling-based techniques and has been successfully applied to explain healthy brain function and clinical translation, including aging, stroke and epilepsy. Here we illustrate the workflow with the example of epilepsy: we reconstruct personalized connectivity matrices of human epileptic patients using Diffusion Tensor Imaging (DTI). Subsets of brain regions generating seizures in patients with refractory partial epilepsy are referred to as the epileptogenic zone (EZ). During a seizure, paroxysmal activity is not restricted to the EZ, but may recruit other healthy brain regions and propagate through large brain networks. The identification of the EZ is crucial for the success of neurosurgery and presents one of the historically difficult questions in clinical neuroscience. The application of the latest techniques in Bayesian inference and model inversion, in particular Hamiltonian Monte Carlo, allows the estimation of the EZ, including estimates of confidence and diagnostics of the performance of the inference. The example of epilepsy nicely underscores the predictive value of personalized large-scale brain network models. The workflow of end-to-end modeling is an integral part of the European neuroinformatics platform EBRAINS and enables neuroscientists worldwide to build and estimate personalized virtual brains.
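As a rough illustration of the forward-modeling side of this workflow (not the actual Epileptor equations or The Virtual Brain code), the sketch below couples bistable toy nodes through a structural connectivity matrix and shows how a single highly excitable region can recruit others; in the real pipeline such a forward model would then be inverted with Hamiltonian Monte Carlo to estimate the EZ. The 5-node connectome and all parameter values are made up for illustration.

```python
import numpy as np

def simulate(W, eta, T=2000, dt=0.05, k=1.0, noise=0.01, seed=0):
    """Toy seizure-recruitment dynamics: each node is a noisy double-well unit,
    dx_i/dt = x_i - x_i^3 + eta_i + k * sum_j W_ij (x_j + 1), with x ~ -1 'healthy'
    and x ~ +1 'seizing'. Not the Epileptor model; illustration only."""
    rng = np.random.default_rng(seed)
    x = -np.ones(len(eta))                     # all regions start healthy
    traj = np.empty((T, len(eta)))
    for t in range(T):
        drive = eta + k * (W @ (x + 1.0))      # input from regions that left the healthy state
        x += dt * (x - x**3 + drive) + np.sqrt(dt) * noise * rng.standard_normal(len(eta))
        traj[t] = x
    return traj

# Hypothetical 5-region connectome; region 0 plays the role of the epileptogenic zone
# (excitability eta above the local instability threshold), the rest are healthy.
W = np.array([[0, .4, .1, 0, 0],
              [.4, 0, .3, .1, 0],
              [.1, .3, 0, .2, .1],
              [0, .1, .2, 0, .3],
              [0, 0, .1, .3, 0]], dtype=float)
eta = np.array([0.5, -0.2, -0.2, -0.3, -0.3])
traj = simulate(W, eta)
print((traj[-1] > 0).astype(int))              # 1 = region recruited into the seizure
```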
Understanding Machine Learning via Exactly Solvable Statistical Physics Models
The affinity between statistical physics and machine learning has a long history. I will describe the main lines of this long-lasting friendship in the context of current theoretical challenges and open questions about deep learning. Theoretical physics often proceeds in terms of solvable synthetic models; I will describe the related line of work on solvable models of simple feed-forward neural networks. I will highlight a path forward to capture the subtle interplay between the structure of the data, the architecture of the network, and the optimization algorithms commonly used for learning.
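A canonical setting in this line of work is the teacher-student scenario, in which data are labeled by a hidden "teacher" network and one asks how the student's generalization error depends on the ratio alpha = P/N of examples to parameters. The talk's analysis is analytic (e.g. via the replica method); the numerical sketch below, with a ridge-regression student, is only a hedged illustration of the setup, not the speaker's model.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200                                          # input dimension
w_teacher = rng.standard_normal(N)               # hidden teacher weights

def generalization_error(alpha, n_test=20_000):
    """Train a student on P = alpha * N teacher-labeled examples and estimate its
    test error. The student here is a ridge-regression fit to the +/-1 labels,
    an illustrative stand-in for the learning rules analyzed in the theory."""
    P = int(alpha * N)
    X = rng.standard_normal((P, N))
    y = np.sign(X @ w_teacher)
    w_student = np.linalg.solve(X.T @ X + 1e-2 * np.eye(N), X.T @ y)
    X_test = rng.standard_normal((n_test, N))
    return np.mean(np.sign(X_test @ w_student) != np.sign(X_test @ w_teacher))

for alpha in (0.5, 1.0, 2.0, 4.0, 8.0):
    print(alpha, round(generalization_error(alpha), 3))   # error falls as alpha grows
```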
Neural networks in the replica-mean field limits
In this talk, we propose to decipher the activity of neural networks via a “multiply and conquer” approach. This approach considers limit networks made of infinitely many replicas with the same basic neural structure. The key point is that these so-called replica-mean-field networks are in fact simplified, tractable versions of neural networks that retain important features of the finite network structure of interest. The finite size of neuronal populations and synaptic interactions is a core determinant of neural dynamics, being responsible for non-zero correlation in the spiking activity and for finite transition rates between metastable neural states. Theoretically, we develop our replica framework by expanding on ideas from the theory of communication networks rather than from statistical physics to establish Poissonian mean-field limits for spiking networks. Computationally, we leverage our original replica approach to characterize the stationary spiking activity of various network models via reduction to tractable functional equations. We conclude by discussing perspectives about how to use our replica framework to probe nontrivial regimes of spiking correlations and transition rates between metastable neural states.
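To make concrete the idea of reducing stationary spiking activity to a tractable functional equation, here is a minimal sketch of a linear (Hawkes-like) toy network whose stationary rates solve the fixed point r = mu + W r, i.e. r = (I - W)^{-1} mu. This stand-in is chosen for simplicity and is not the replica-mean-field construction of the talk; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, T, dt = 20, 200_000, 1e-3
mu = rng.uniform(1.0, 3.0, n)                    # baseline rates (spikes/s)
W = 0.8 * rng.uniform(0.0, 1.0, (n, n)) / n      # weak excitatory weights (stable regime)

r_theory = np.linalg.solve(np.eye(n) - W, mu)    # fixed point r = mu + W r

# Direct simulation: each spike raises postsynaptic rates for one time bin.
spikes = np.zeros(n)
counts = np.zeros(n)
for t in range(T):
    rate = mu + (W @ spikes) / dt
    spikes = rng.poisson(np.clip(rate, 0.0, None) * dt)
    counts += spikes

print(np.round(r_theory, 2))                     # rates predicted by the fixed point
print(np.round(counts / (T * dt), 2))            # empirical rates from the simulation
```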
Linking dimensionality to computation in neural networks
The link between behavior, learning and the underlying connectome is a fundamental open problem in neuroscience. In my talk I will show how it is possible to develop a theory that bridges across these three levels (animal behavior, learning and network connectivity) based on the geometrical properties of neural activity. The central tool in my approach is the dimensionality of neural activity. I will link animal complex behavior to the geometry of neural representations, specifically their dimensionality; I will then show how learning shapes changes in such geometrical properties and how local connectivity properties can further regulate them. As a result, I will explain how the complexity of neural representations emerges from both behavioral demands (top-down approach) and learning or connectivity features (bottom-up approach). I will build these results regarding neural dynamics and representations starting from the analysis of neural recordings, by means of theoretical and computational tools that blend dynamical systems, artificial intelligence and statistical physics approaches.
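The abstract does not say which notion of dimensionality is used; a common choice in this literature is the participation ratio of the neural covariance spectrum. A minimal sketch, with toy data in place of real recordings:

```python
import numpy as np

def participation_ratio(X):
    """Dimensionality of activity X (time x neurons):
    PR = (sum_i lambda_i)^2 / sum_i lambda_i^2, lambda_i = covariance eigenvalues."""
    lam = np.linalg.eigvalsh(np.cov(X, rowvar=False))
    return lam.sum() ** 2 / (lam ** 2).sum()

# Toy data: 100 neurons driven by 3 shared latent signals plus private noise.
rng = np.random.default_rng(0)
latents = rng.standard_normal((5000, 3))
loadings = rng.standard_normal((3, 100))
X = latents @ loadings + 0.3 * rng.standard_normal((5000, 100))
print(participation_ratio(X))  # close to 3, far below the 100 recorded neurons
```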
Theory, reimagined
Physics offers countless examples for which theoretical predictions are astonishingly powerful. But it is hard to imagine similar precision in complex systems, where the number of components and the interdependencies between them simply prohibit a first-principles approach; look no further than the billions of neurons and trillions of connections within our own brains. In such settings, how do we even identify the important theoretical questions? We describe a systems-scale perspective in which we integrate information theory, dynamical systems and statistical physics to extract understanding directly from measurements. We demonstrate our approach with a reconstructed state space of the behavior of the nematode C. elegans, revealing a chaotic attractor with a symmetric Lyapunov spectrum and a novel perspective on motor control. We then outline a maximally predictive coarse-graining in which nonlinear dynamics are subsumed into a linear, ensemble evolution to obtain a simple yet accurate model on multiple scales. With this coarse-graining we identify long timescales and collective states in the Langevin dynamics of a double-well potential, the Lorenz system and worm behavior. We suggest that such an "inverse" approach offers an emergent, quantitative framework in which to seek rather than impose the effective organizing principles of complex systems.
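A minimal sketch of this kind of coarse-graining, applied to the double-well Langevin example mentioned above: partition the sampled trajectory into discrete states, build a Markov (transfer-operator) approximation at a fixed lag, and read the slow well-hopping timescale off its second eigenvalue. The bin count, lag and potential parameters are illustrative assumptions, not the authors' choices.

```python
import numpy as np

# Overdamped Langevin dynamics in U(x) = (x^2 - 1)^2:  dx = -U'(x) dt + sqrt(2/beta) dW.
rng = np.random.default_rng(2)
dt, T, beta = 1e-3, 2_000_000, 3.0
noise = np.sqrt(2.0 * dt / beta) * rng.standard_normal(T)
x = np.empty(T)
x[0] = -1.0
for t in range(1, T):
    x[t] = x[t-1] - 4.0 * x[t-1] * (x[t-1]**2 - 1.0) * dt + noise[t]

# Coarse-grain into 30 bins and estimate the transfer operator at lag tau.
nbins, tau = 30, 100
states = np.digitize(x, np.linspace(-2.0, 2.0, nbins - 1))
P = np.zeros((nbins, nbins))
np.add.at(P, (states[:-tau], states[tau:]), 1.0)
P /= np.maximum(P.sum(axis=1, keepdims=True), 1.0)

evals = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
print(-tau * dt / np.log(evals[1]))   # implied timescale of the slowest mode (well hopping)
```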
Soft Capricious Matter: The collective behavior of particles with “noisy” interactions
Diversity in the natural world emerges from the collective behavior of large numbers of interacting objects. Statistical physics provides the framework relating microscopic to macroscopic properties. A fundamental assumption underlying this approach is that we have complete knowledge of the interactions between the microscopic entities. But what if that knowledge, even though possible in principle, becomes impossible to obtain in practice? Can we still construct a framework for describing the collective behavior? Dense suspensions and granular materials are two often-quoted examples where we face this challenge. These are systems where, because of the complicated surface properties of the particles, the interactions are extremely sensitive to particle positions. In this talk, I will present a perspective based on notions of constraint satisfaction that provides a way forward. I will focus on our recent work on the emergence of elasticity in the absence of any broken symmetry, and sketch out other problems that can be addressed from this perspective.
Biology is “messy”. So how can we take theory in biology seriously and plot predictions and experiments on the same axes?
Many of us came to biology from physics. There we were trained on such classic examples as the muon g-2, where experimental data and theoretical predictions agree to many significant digits. Now, working in biology, we routinely hear that it is messy, that most details matter, and that the best hope for theory in biology is to be semi-qualitative, to predict general trends, and to forgo the hope of ever making quantitative predictions with the precision that we are used to in physics. Colloquially, we should be satisfied even if data and models differ so much that plotting them on the same axes makes little sense. However, some of us won't be satisfied by this. So can we take theory in biology seriously and predict experimental outcomes within (small) error bars? Certainly, we won't be able to predict everything, but that is never required, even in traditional physics. But we should be able to choose some features of data that are nontrivial and interesting, and focus on them. We also should be able to find different classes of models, maybe even null models, that match biology better and thus allow for better agreement. It is even possible that the large-dimensional datasets of modern high-throughput experiments, and the ensuing "more is different", statistical-physics-style models, will make quantitative, precise theory easier. To explore the role of quantitative theory in biology, in this workshop eight speakers will address some of the following general questions based on their specific work in different corners of biology: Which features of biological data are predictable? Which types of models are best suited to making quantitative predictions in different fields? Should theorists interested in quantitative predictions focus on different questions, not typically asked by biologists? Do large, multidimensional datasets make theories (and which theories?) more or less likely to succeed? This will be an unapologetically theoretical physics workshop: we won't focus on a specific subfield of biology, but will explore these questions across fields, hoping that the underlying theoretical frameworks will help us find the missing connections.
Spontaneous and driven active matter flows
Understanding the individual and macroscopic transport properties of motile micro-organisms in complex environments is a timely question, relevant to many ecological, medical and technological situations. At the fundamental level, this question is also receiving a lot of attention, as fluids loaded with swimming micro-organisms have become a rich domain of applications and a conceptual playground for the statistical physics of “active matter”. The microscopic sources of energy borne by the motile character of these micro-swimmers drive self-organization processes that give rise to novel emergent phases and unconventional macroscopic properties, leading us to revisit many standard concepts in the physics of suspensions. In this presentation, I will report on a recent exploration of the spontaneous formation of large-scale collective motion in relation to the rheological response of active suspensions. I will also present new experiments showing how the motility of bacteria can be controlled so as to extract work macroscopically.
Finding Needles in Genomic Haystacks
The ability to read the DNA sequences of different organisms has transformed biology in much the same way that the telescope transformed astronomy. And yet, much of the sequence found in these genomes is as enigmatic as the Rosetta Stone was to early Egyptologists. With the aim of making steps to crack the genomic Rosetta Stone, I will describe unexpected ways of using the physics of information transfer first developed at Bell Labs for thinking about telephone communications to try to decipher the meaning of the regulatory features of genomes. Specifically, I will show how we have been able to explore genes for which we know nothing about how they are regulated by using a combination of mutagenesis, deep sequencing and the physics of information, with the result that we now have falsifiable hypotheses about how those genes work. With those results in hand, I will show how simple tools from statistical physics can be used to predict the level of expression of different genes, followed by a description of precision measurements used to test those predictions. Bringing the two threads of the talk together, I will think about next steps in reading and writing genomes at will.
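As one example of the "simple tools from statistical physics" used to predict expression levels, thermodynamic models of transcription express the regulatory response in terms of binding energies and copy numbers; the sketch below gives the standard fold-change prediction for simple repression in the weak-promoter limit. The parameter values are illustrative, not the measurements discussed in the talk.

```python
import numpy as np

def fold_change_simple_repression(R, eps_rd, N_NS=4.6e6):
    """Thermodynamic-model prediction (weak-promoter limit) for simple repression:
    fold-change = 1 / (1 + (R / N_NS) * exp(-eps_rd)),
    with R the repressor copy number, eps_rd the repressor-operator binding energy
    in units of k_B T, and N_NS the number of nonspecific genomic binding sites."""
    return 1.0 / (1.0 + (R / N_NS) * np.exp(-eps_rd))

for R in (10, 100, 1000):                         # illustrative copy numbers
    print(R, fold_change_simple_repression(R, eps_rd=-15.0))
```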
Understanding machine learning via exactly solvable statistical physics models
The affinity between statistical physics and machine learning has a long history; this is reflected even in machine learning terminology, which is in part adopted from physics. I will describe the main lines of this long-lasting friendship in the context of current theoretical challenges and open questions about deep learning. Theoretical physics often proceeds in terms of solvable synthetic models; I will describe the related line of work on solvable models of simple feed-forward neural networks. I will highlight a path forward to capture the subtle interplay between the structure of the data, the architecture of the network, and the learning algorithm.
Can machine learning learn new physics, or do we need to put it in by hand?
There has been a surge of publications on using machine learning (ML) on experimental data from physical systems: social, biological, statistical, and quantum. However, can these methods discover fundamentally new physics? It may be that their biggest impact is in better data preprocessing, while inferring new physics is unrealistic without specifically adapting the learning machine to find what we are looking for — that is, without the “intuition” — and hence without having a good a priori guess about what we will find. Is machine learning a useful tool for physics discovery? What minimal knowledge should we endow the machines with to make them useful in such tasks? How do we do this? The eight speakers below will anchor the workshop, exploring these questions in the context of diverse systems (from quantum to biological), and from general theoretical advances to specific applications. Each speaker will deliver a 10-minute talk, with another 10 minutes set aside for moderated questions and discussion. We expect the talks to be broad, bold, and provocative, discussing where the field is heading and what is needed to get us there.
Exactly-solvable statistical physics model of large neuronal populations
COSYNE 2023