Biological Systems
Katharina Wilmes
We are looking for highly motivated postdocs or PhD students interested in computational neuroscience, specifically in questions concerning the neural circuits underlying perception and learning. The ideal candidate has a strong background in mathematics, physics, or computer science (or equivalent), programming skills (Python), and a strong interest in biological and neural systems. A background in computational neuroscience is ideal but not mandatory. Our brain maintains an internal model of the world, on the basis of which it can make predictions about sensory information. These predictions are useful for perception and learning in the uncertain and changing environments in which we evolved. However, the link between high-level normative theories and cellular-level observations of prediction errors and representations under uncertainty is still missing. The lab uses computational and mathematical tools to model cortical circuits and neural networks at different scales.
Use case determines the validity of neural systems comparisons
Deep learning provides new data-driven tools to relate neural activity to perception and cognition, aiding scientists in developing theories of neural computation that increasingly resemble biological systems at the level of both behavior and neural activity. But what in a deep neural network should correspond to what in a biological system? This question is addressed implicitly in the use of comparison measures that relate specific neural or behavioral dimensions via a particular functional form. However, distinct comparison methodologies can give conflicting results in recovering even a known ground-truth model in an idealized setting, leaving open the question of what to conclude from the outcome of a systems comparison using any given methodology. Here, we develop a framework to make explicit and quantitative the effect of both hypothesis-driven aspects, such as details of the architecture of a deep neural network, and methodological choices in a systems-comparison setting. We demonstrate via the learning dynamics of deep neural networks that, while the role of the comparison methodology is often de-emphasized relative to hypothesis-driven aspects, this choice can impact and even invert the conclusions to be drawn from a comparison between neural systems. We provide evidence that the right way to adjudicate a comparison depends on the use case, that is, the scientific hypothesis under investigation, which could range from identifying single-neuron or circuit-level correspondences to capturing generalizability to new stimulus properties.
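One concrete example of a comparison measure of the kind discussed above is linear centered kernel alignment (CKA); the abstract does not name the specific measures it studies, so the following is only an illustrative sketch of how a single methodological choice fixes what counts as "the same" representation:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear centered kernel alignment between two activity matrices
    of shape (n_stimuli, n_units). Returns a similarity in [0, 1] that
    is invariant to rotations and isotropic scaling of the units."""
    X = X - X.mean(axis=0)  # center each unit's responses
    Y = Y - Y.mean(axis=0)
    cross = np.linalg.norm(Y.T @ X, "fro") ** 2
    return cross / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))          # e.g. model activations
Q, _ = np.linalg.qr(rng.normal(size=(20, 20)))  # random rotation
# A rotated copy of X scores as identical under this measure,
# whereas a per-neuron correspondence measure would not:
print(linear_cka(X, X), linear_cka(X, X @ Q))
```

A measure insensitive to rotations cannot adjudicate single-neuron correspondences, which is one way the use case determines which comparison is valid.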
Using Adversarial Collaboration to Harness Collective Intelligence
There are many mysteries in the universe. One of the most significant, often considered the final frontier in science, is understanding how our subjective experience, or consciousness, emerges from the collective action of neurons in biological systems. While substantial progress has been made over the past decades, a unified and widely accepted explanation of the neural mechanisms underpinning consciousness remains elusive. The field is rife with theories that frequently provide contradictory explanations of the phenomenon. To accelerate progress, we have adopted a new model of science: adversarial collaboration in team science. Our goal is to test theories of consciousness in an adversarial setting. Adversarial collaboration offers a unique way to bolster creativity and rigor in scientific research by merging the expertise of teams with diverse viewpoints. Ideally, we aim to harness collective intelligence, embracing various perspectives, to expedite the uncovering of scientific truths. In this talk, I will highlight the effectiveness (and challenges) of this approach using selected case studies, showcasing its potential to counter biases, challenge traditional viewpoints, and foster innovative thought. Through the joint design of experiments, teams incorporate a competitive aspect, ensuring comprehensive exploration of problems. This method underscores the importance of structured conflict and diversity in propelling scientific advancement and innovation.
Consciousness in the age of mechanical minds
We are now clearly entering a new age in our relationship with machines. The power of AI natural language processors and image generators has rapidly exceeded the expectations of even those who developed them. Serious questions are now being asked about the extent to which machines could become, or perhaps already are, sentient or conscious. Do AI machines understand the instructions they are given and the answers they provide? In this talk I will consider the prospects for conscious machines, by which I mean machines that have feelings and that know about their own existence and about ours. I will suggest that the recent focus on information processing in models of consciousness, in which the brain is treated as a kind of digital computer, has misled us about the nature of consciousness and how it is produced in biological systems. Treating the brain as an energy-processing system is more likely to yield answers to these fundamental questions and to help us understand how and when machines might become minds.
Maths, AI and Neuroscience Meeting Stockholm
To understand brain function and develop artificial general intelligence, it has become abundantly clear that there should be close interaction among neuroscience, machine learning, and mathematics. There is a general hope that understanding brain function will provide us with more powerful machine learning algorithms. On the other hand, advances in machine learning are now providing the much-needed tools not only to analyse brain activity data but also to design better experiments to expose brain function. Both neuroscience and machine learning explicitly or implicitly deal with high-dimensional data and systems. Mathematics can provide powerful new tools to understand and quantify the dynamics of biological and artificial systems as they generate behavior that may be perceived as intelligent.
Trading Off Performance and Energy in Spiking Networks
Many engineered and biological systems must trade off performance and energy use, and the brain is no exception. While there are theories of how activity levels are controlled in biological networks through feedback control (homeostasis), it is not clear what the effects on population coding are, and therefore how performance and energy can be traded off. In this talk we will consider this trade-off in auto-encoding networks, in which there is a clear definition of performance (the coding loss). We first show how spiking neural networks (SNNs) follow a characteristic trade-off curve between activity levels and coding loss, but that standard networks need to be retrained to achieve different trade-off points. We next formalize this trade-off with a joint loss function incorporating coding loss (performance) and activity loss (energy use). From this loss we derive a class of spiking networks that coordinate their spiking to minimize both the activity and coding losses, and as a result can dynamically adjust their coding precision and energy use. The network exploits several known activity-control mechanisms for this, namely threshold adaptation and feedback inhibition, and elucidates their potential function within neural circuits. Using geometric intuition, we demonstrate how these mechanisms regulate coding precision, and thereby performance. Lastly, we consider how these insights could be transferred to trained SNNs. Overall, this work addresses a key energy-coding trade-off that is often overlooked in network studies, expands our understanding of homeostasis in biological SNNs, and provides a clear framework for considering performance and energy use in artificial SNNs.
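The joint loss described above can be sketched in a few lines; the trade-off weight `lam` and the L1 activity penalty below are illustrative assumptions, not the speaker's exact formulation:

```python
import numpy as np

def joint_loss(x, D, r, lam=0.1):
    """Joint objective for an auto-encoding network.
    x: stimulus (d,), D: decoding weights (d, n), r: firing rates (n,).
    Coding loss = squared readout error; activity loss = total rate
    (an L1 penalty standing in for energy use). lam sets the trade-off."""
    coding = np.sum((x - D @ r) ** 2)
    activity = np.sum(np.abs(r))
    return coding + lam * activity

# Sweeping lam traces out a performance-energy trade-off curve:
rng = np.random.default_rng(1)
d, n = 5, 50
D = rng.normal(size=(d, n)) / np.sqrt(n)
x = rng.normal(size=d)
r = np.maximum(rng.normal(size=n), 0.0)  # nonnegative firing rates
print(joint_loss(x, D, r, lam=0.0), joint_loss(x, D, r, lam=0.1))
```

Networks derived from such a loss spike only when a spike reduces the combined objective, which is how coding precision and energy use can be adjusted on the fly rather than by retraining.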
Maths, AI and Neuroscience meeting
To understand brain function and develop artificial general intelligence, it has become abundantly clear that there should be close interaction among neuroscience, machine learning, and mathematics. There is a general hope that understanding brain function will provide us with more powerful machine learning algorithms. On the other hand, advances in machine learning are now providing the much-needed tools not only to analyse brain activity data but also to design better experiments to expose brain function. Both neuroscience and machine learning explicitly or implicitly deal with high-dimensional data and systems. Mathematics can provide powerful new tools to understand and quantify the dynamics of biological and artificial systems as they generate behavior that may be perceived as intelligent. In this meeting we bring together experts from mathematics, artificial intelligence, and neuroscience for a three-day hybrid meeting. We will have talks on mathematical tools, in particular topology, for understanding high-dimensional data; on explainable AI; on how AI can help neuroscience; and on the extent to which the brain may be using algorithms similar to those used in modern machine learning. Finally, we will wrap up with a discussion of some aspects of neural hardware that may not have been considered in machine learning.
Growing in flows: from evolutionary dynamics to microbial jets
Biological systems can self-organize into complex structures, able to evolve and adapt to widely varying environmental conditions. Despite the importance of fluid flow for transporting and organizing populations, few laboratory systems exist to systematically investigate the impact of advection on spatial evolutionary dynamics. In this talk, I will discuss how we can address this problem by studying the morphology and genetic spatial structure of microbial colonies growing on the surface of a viscous substrate. I will show that, when grown on a liquid, S. cerevisiae (baker's yeast) can behave like "active matter" and collectively generate a fluid flow many times faster than the unperturbed colony expansion speed, which in turn produces mechanical stresses and fragmentation of the initial colony. Combining laboratory experiments with numerical modeling, I will demonstrate that the coupling between metabolic activity and hydrodynamic flows can produce positive feedbacks and drive preferential growth phenomena leading to the formation of microbial jets. Our work provides rich opportunities to explore the interplay between hydrodynamics, growth, and competition within a versatile system.
The collective behavior of the clonal raider ant: computations, patterns, and naturalistic behavior
Colonies of ants and other eusocial insects are superorganisms, performing sophisticated cognitive-like functions at the level of the group. In my talk I will review our efforts to establish the clonal raider ant Ooceraea biroi as a lab model system for the systematic study of the principles underlying collective information processing in ant colonies. I will use results from two separate projects to demonstrate the potential of this model system. In the first, we analyze the foraging behavior of the species, known as group raiding: a swift offensive response of a colony to the detection of potential prey by a scout. Using automated behavioral tracking and detailed analysis, we show that this behavior is closely related to the army ant mass raid, an iconic collective behavior in which hundreds of thousands of ants spontaneously leave the nest to go hunting, and that the evolutionary transition between the two can be explained by a change in colony size alone. In the second project, we study the emergence of a collective sensory response threshold in a colony. The sensory threshold is a fundamental computational primitive, observed across many biological systems. By carefully controlling the sensory environment and the social structure of the colonies, we were able to show that it also appears in a collective context, and that it emerges out of a balance between excitatory and inhibitory interactions between ants. Furthermore, using a mathematical model, we predict that these two interactions can be mapped onto known mechanisms of communication in ants. Finally, I will discuss the opportunities for understanding collective behavior that are opened up by the development of methods for neuroimaging and neurocontrol of our ants.
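A collective threshold arising from a balance of excitation and inhibition can be illustrated with a toy recruitment model; the update rule, parameters, and Gaussian individual thresholds below are assumptions for illustration, not the speaker's actual model:

```python
import numpy as np

def colony_response(stimulus, n_ants=100, w_exc=0.8, w_inh=0.5,
                    noise=0.5, steps=50, seed=0):
    """Toy model: each ant responds when its drive exceeds a noisy
    individual threshold. Drive = external stimulus + excitatory
    recruitment by responding nestmates - inhibition from them.
    Iterated to a fixed point; returns the fraction responding."""
    rng = np.random.default_rng(seed)
    thresh = 1.0 + noise * rng.normal(size=n_ants)  # individual thresholds
    active = np.zeros(n_ants, dtype=bool)
    for _ in range(steps):
        frac = active.mean()
        drive = stimulus + (w_exc - w_inh) * frac  # net social feedback
        active = drive > thresh
    return active.mean()

# Sweeping the stimulus reveals a sigmoidal collective threshold whose
# sharpness depends on the net excitation-inhibition balance:
for s in [0.5, 1.0, 1.5]:
    print(s, colony_response(s))
```

In such models, strengthening net excitation sharpens the collective threshold toward an all-or-none response, while inhibition keeps weak stimuli from triggering the whole colony.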
The Impact of Racism-related Stress on Neurobiological Systems in Black Americans
Black Americans experience diverse racism-related stressors throughout the lifespan. Disproportionately high trauma exposure, economic disadvantage, explicit racism and inequitable treatment are stressors faced by many Black Americans. These experiences have a cumulative negative impact on psychological and physical health. However, little is understood about how experiences of racism, such as discrimination, can mediate health outcomes via their effects on neurobiology. I will present clinical, behavioral, physiological and neurobiological data from Black American participants in the Grady Trauma Project, a longstanding study of trauma conducted in inner-city Atlanta. These data will be discussed in the context of both risk and resilience/adaptation perspectives. Finally, recommendations for future clinical neuroscience research and targets for intervention in marginalized populations will be discussed.
On being the right size: Is the search for underlying physical principles a wild-goose chase?
When was the last time you ran into a giant? Chances are never. Almost 100 years ago, JBS Haldane posed an outwardly simple yet complex question: what is the optimal size for a biological system? The living world around us contains a huge diversity of organisms, each with its own characteristic size. Even the size of subcellular organelles is tightly controlled. In the absence of physical rulers, how do cells and organisms truly "know" how large is large enough? What mechanisms are in place to enforce size control? Many of these questions have motivated generations of scientists to look for physical principles underlying size control in biological systems. In the next edition of Emory's Theory and Modeling of Living Systems (TMLS) workshop series, our panel of speakers will take a close look at these questions across scales, from the molecular all the way to the ecosystem.
Tools for Analyzing and Repairing the Brain. (Simultaneous translation to Spanish)
To enable the understanding and repair of complex biological systems, such as the brain, we are creating novel optical tools that enable molecular-resolution maps of such systems, as well as technologies for observing and controlling high-speed physiological dynamics in such systems. First, we have developed a method for imaging specimens with nanoscale precision by embedding them in a swellable polymer, homogenizing their mechanical properties, and exposing them to water, which causes them to expand manyfold isotropically. This method, which we call expansion microscopy (ExM), enables ordinary microscopes to do nanoscale imaging in a multiplexed fashion, important, for example, for brain mapping. Second, we have developed a set of genetically encoded reagents, known as optogenetic tools, that, when expressed in specific neurons, enable their electrical activities to be precisely driven or silenced in response to millisecond-timescale pulses of light. Finally, we are designing, and evolving, novel reagents, such as fluorescent voltage indicators and somatically targeted calcium indicators, to enable the imaging of fast physiological processes in 3D with millisecond precision. In this way we aim to enable the systematic mapping, control, and dynamical observation of complex biological systems like the brain. The talk will be simultaneously interpreted (English-Spanish) by the interpreter Mg. Lourdes Martino.
(What) can soft matter physics teach us about biological function?
The “soft, active, and living matter” community has grown tremendously in recent years, conducting exciting research at the interface between soft matter and biological systems. But are all living systems also soft matter systems? Do the ideas of function (or purpose) in biological systems require us to introduce deep new ideas into the framework of soft matter theories? Does the (often) qualitatively different character of data in biological experiments require us to change the types of experiments we conduct and the goals of our theoretical treatments? Eight speakers will anchor the workshop, exploring these questions across a range of biological system scales. Each speaker will deliver a 10-minute talk with another 10 minutes set aside for moderated questions/discussion. We expect the talks to be broad, bold, and provocative, discussing both the nature of the theoretical tools and experimental techniques we have at present and also those we think we will ultimately need to answer deep questions at the interface of soft matter and biology.
Can machine learning learn new physics, or do we need to put it in by hand?
There has been a surge of publications on using machine learning (ML) on experimental data from physical systems: social, biological, statistical, and quantum. However, can these methods discover fundamentally new physics? It may be that their biggest impact is in better data preprocessing, while inferring new physics is unrealistic without specifically adapting the learning machine to find what we are looking for, that is, without the "intuition", and hence without having a good a priori guess about what we will find. Is machine learning a useful tool for physics discovery? What minimal knowledge should we endow the machines with to make them useful in such tasks? How do we do this? Eight speakers will anchor the workshop, exploring these questions in the context of diverse systems (from quantum to biological), and from general theoretical advances to specific applications. Each speaker will deliver a 10-minute talk with another 10 minutes set aside for moderated questions/discussion. We expect the talks to be broad, bold, and provocative, discussing where the field is heading and what is needed to get us there.
Spanning the arc between optimality theories and data
Ideas about optimization are at the core of how we approach biological complexity. Quantitative predictions about biological systems have been successfully derived from first principles in the context of efficient coding, metabolic and transport networks, evolution, reinforcement learning, and decision making, by postulating that a system has evolved to optimize some utility function under biophysical constraints. Yet as normative theories become increasingly high-dimensional and optimal solutions stop being unique, it gets progressively harder to judge whether theoretical predictions are consistent with, or "close to", data. I will illustrate these issues using efficient coding applied to simple neuronal models as well as to a complex and realistic biochemical reaction network. As a solution, we developed a statistical framework that smoothly interpolates between ab initio optimality predictions and Bayesian parameter inference from data, while also permitting statistically rigorous tests of optimality hypotheses.
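One way to picture "interpolating between ab initio optimality and Bayesian inference" is a posterior whose prior is exponentially tilted toward high-utility parameters; the tilt parameter `beta`, the quadratic utility, and the Gaussian likelihood below are illustrative assumptions, not the speaker's framework:

```python
import numpy as np

def interpolated_posterior(theta_grid, utility, log_lik, beta):
    """Normalized posterior over a parameter grid.
    beta = 0   -> pure Bayesian inference from the data;
    beta large -> mass concentrates on the optimality prediction."""
    logp = beta * utility(theta_grid) + log_lik(theta_grid)
    logp -= logp.max()          # stabilize before exponentiating
    p = np.exp(logp)
    return p / p.sum()

# Toy example: optimality predicts theta* = 2, the data favor theta = 0.
theta = np.linspace(-3, 5, 801)
utility = lambda t: -(t - 2.0) ** 2        # peaked at the optimum
log_lik = lambda t: -0.5 * t ** 2          # Gaussian likelihood at 0
for beta in [0.0, 0.5, 5.0]:
    post = interpolated_posterior(theta, utility, log_lik, beta)
    print(beta, theta[np.argmax(post)])
```

As `beta` grows, the posterior mode slides from the data-driven estimate toward the optimality prediction, and comparing fits across `beta` gives one route to a statistical test of the optimality hypothesis.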
Beyond linear summation: Three-Body RNN for modeling complex neural and biological systems
COSYNE 2025