Information Theory
Matthew Chalk
A postdoctoral position is available for a project with Matthew Chalk (https://matthewjchalk.wixsite.com/mysite) at the Vision Institute (www.institut-vision.org/en/), Sorbonne Université, Paris, France. The project will investigate principles of neural coding in the retina: specifically, how different coding objectives, such as optimising efficiency or encoding predictive information, can explain the diverse ways that neurons in the retina respond to visual stimulation. The project will extend previous work by Chalk et al. to develop a general theory of optimal neural coding (Chalk et al., PNAS, 2018; Chalk et al., bioRxiv, 2022). For this, we will use a range of computational techniques, including Gaussian processes (Goldin et al., PNAS, 2023) and information theory. The project is part of an exciting interdisciplinary collaboration between theorists and experimentalists at the Vision Institute (Olivier Marre; http://oliviermarre.free.fr), and Thomas Euler (https://eulerlab.de) and Philipp Berens (https://www.eye-tuebingen.de/berenslab/) at Tübingen University. The Vision Institute is a stimulating environment for brain research: it brings together in a single building researchers, clinicians and industrial partners to discover, test and develop treatments and technological innovations for the benefit of visually impaired patients. The candidate will have a PhD and a strong quantitative background (ideally in a field such as machine learning, theoretical neuroscience or physics). They will have a good grasp of oral and written English (French is not required). Most of all, they will enjoy tackling new problems with enthusiasm and as part of a team. The position is funded for three years. Applications should include a CV, a statement of research interests (~1 page), and two letters of recommendation.
Electronic submissions in PDF format are preferred and should be sent to Matthew Chalk (matthewjchalk@gmail.com). Informal questions about the position are welcome.
Ing. Mgr. Jaroslav Hlinka, Ph.D.
Postdoctoral / Junior Scientist position in Complex Networks and Information Theory. A Postdoc or Junior Scientist position is available in the Complex Networks and Brain Dynamics group for the project “Network modelling of complex systems: from correlation graphs to information hypergraphs”, funded by the Czech Science Foundation. The project involves developing, optimizing and applying techniques for modelling complex dynamical systems beyond the currently available methods of complex network analysis and game theory, and is carried out in collaboration with the Artificial Intelligence Center of the Czech Technical University. Conditions:
• The contract is for 18 months (with the possibility of a follow-up tenure-track application).
• Starting date: the position is available immediately.
• Applications will be reviewed on a rolling basis, with a first cut-off on 30 September 2022.
• This is a full-time, fixed-term appointment; a part-time contract is negotiable.
• Monthly gross salary: 42,000–48,000 CZK, based on qualifications and experience.
• Bonuses depending on performance, plus travel funding for conferences and research stays.
• Contribution to relocation costs for a successful applicant coming from abroad: 10,000 CZK, plus 10,000 CZK for family (spouse and/or children).
• No teaching duties.
Kerstin Bunte
We offer a postdoctoral researcher position within the Bernoulli Institute for Mathematics, Computer Science and Artificial Intelligence at the University of Groningen, The Netherlands. The position is funded by an NWO Vidi project named “mechanistic machine learning: combining the explanatory power of dynamic models with the predictive power of machine learning”. Systems of Artificial Intelligence (AI) and Machine Learning (ML) have attracted tremendous interest in recent years, demonstrating great performance on a wide variety of tasks, but typically only when trained on huge amounts of data. Moreover, such systems frequently provide no insight into their decision-making. Experts want to know how their data can inform them about the natural processes being measured. We therefore develop transparent and interpretable model- and data-driven hybrid methods and demonstrate them in applications in medicine and engineering. As a postdoc, you will work with Kerstin Bunte and her team within the Intelligent Systems group, as well as a network of interdisciplinary collaborators in the UK and Europe from fields such as Computer Science, Engineering and Applied Mathematics.
Miguel Aguilera
The postdoc position is focused on self-organized network modelling. The project aims to develop a theory of learning in liquid brains, focusing on how liquid brains learn and on their adaptive potential when embodied as an agent interacting with a changing external environment. The goal is to turn liquid brains from a theoretical concept into a useful tool for the machine learning community. This could lead to more open-ended, self-improving systems, exploiting fluid reconfiguration of nodes as an adaptive dimension that is largely unexplored. It could also allow modes of learning that avoid catastrophic forgetting, as reconfigurations in the network are based on reversible movement patterns. This could have important implications for new paradigms such as edge computing.
Silvia Lopez-Guzman
The Unit on Computational Decision Neuroscience (CDN) at the National Institute of Mental Health is seeking a full-time Data Scientist/Data Analyst. The lab is focused on understanding the neural and computational bases of adaptive and maladaptive decision-making and their relationship to mental health. Current studies investigate how internal states lead to biases in decision-making and how this is exacerbated in mental health disorders. Our approach involves a combination of computational model-based tasks, questionnaires, biosensor data, fMRI, and intracranial recordings. The main models of interest come from neuroeconomics, reinforcement learning, Bayesian inference, signal detection, and information theory. The main tasks for this position include computational modeling of behavioral data from decision-making and other cognitive tasks; statistical analysis of task-based, clinical, physiological and neuroimaging data; and data visualization for scientific presentations, public communication, and academic manuscripts. The candidate is expected to demonstrate experience with best practices for developing well-documented, reproducible data-analysis pipelines that facilitate sharing and collaboration and live up to our open-science philosophy, as well as to our data-management and sharing commitments at NIH.
Joseph Lizier
The successful candidates will join a dynamic interdisciplinary collaboration between A/Prof Mac Shine (Brain and Mind Centre), A/Prof Joseph Lizier (School of Computer Science) and Dr Ben Fulcher (School of Physics), within the University's Centre for Complex Systems, focused on advancing our understanding of brain function and cognition using cutting-edge computational and neuroimaging techniques at the intersection of network neuroscience, dynamical systems and information theory. The positions are funded by an Australian Research Council grant, 'Evaluating the Network Neuroscience of Human Cognition to Improve AI'.
Dr. Nicola Catenacci Volpi
This PhD project will push the boundaries of Continual Reinforcement Learning by investigating how agents can continuously learn and adapt over time, how they can autonomously develop and flexibly apply an ever-expanding repertoire of skills across various tasks, and what representations allow them to do this efficiently. The project aims to create AI systems that can sustain autonomous learning and adaptation in ever-changing environments with limited computational resources. The selected candidate will master and contribute to techniques in deep reinforcement learning, incorporating principles from probabilistic machine learning, such as information theory, intrinsic motivation, and open-ended learning frameworks. The project may use computer games as benchmarking tools or apply findings to robotic systems, including manipulators, intelligent autonomous vehicles, and humanoid robots.
Jean-Pascal Pfister
The Theoretical Neuroscience Group of the University of Bern is seeking applications for a PhD position, funded by a Swiss National Science Foundation grant titled “Why Spikes?”. The project aims to answer a nearly century-old question in neuroscience: “What are spikes good for?”. Indeed, since the discovery of action potentials by Lord Adrian in 1926, it has remained largely unknown what benefits spiking neurons offer compared to analog neurons. Traditionally, it has been argued that spikes are good for long-distance communication or for temporally precise computation. However, there is no systematic study that quantitatively compares the communication and computational benefits of spiking neurons with respect to analog neurons. The aim of the project is to systematically quantify the benefits of spiking at various levels by developing and analyzing appropriate mathematical models. The PhD student will be supervised by Prof. Jean-Pascal Pfister (Theoretical Neuroscience Group, Department of Physiology, University of Bern). The project will involve close collaboration within a highly motivated team as well as regular exchange of ideas with the other theory groups at the institute.
Prof. Angela Yu
Prof. Angela Yu recently moved from UCSD to TU Darmstadt as the Alexander von Humboldt AI Professor, and has a number of PhD and postdoc positions available in her growing “Computational Modeling of Intelligent Systems” research group. Applications are solicited from highly motivated and qualified candidates who are interested in interdisciplinary research at the intersection of natural and artificial intelligence. Prof. Yu’s group uses mathematically rigorous and algorithmically diverse tools to understand the nature of the representations and computations that give rise to intelligent behavior. There is a fair amount of flexibility in the actual choice of project, as long as the project excites both the candidate and Prof. Yu. For example, Prof. Yu is currently interested in investigating scientific questions such as: How is socio-emotional intelligence similar to or different from cognitive intelligence? Is there a fundamental tradeoff, given the prevalence of autism among scientists and engineers? How can AI be taught socio-emotional intelligence? How are artificial intelligence (e.g. as demonstrated by large language models) and natural intelligence (e.g. as measured by IQ tests) similar or different in their underlying representations or computations? What roles do intrinsic motivations such as curiosity and computational efficiency play in intelligent systems? How can insights about artificial intelligence improve the understanding and augmentation of human intelligence? Are capacity limitations with respect to attention and working memory a feature or a bug in the brain? How can AI systems be enhanced by attention or working memory? More broadly, Prof. Yu’s group employs and develops diverse machine learning and mathematical tools, e.g.
Bayesian statistical modeling, control theory, reinforcement learning, artificial neural networks, and information theory, to explain various aspects of cognition important for intelligence: perception, attention, decision-making, learning, cognitive control, active sensing, economic behavior, and social interactions. Applicants who have experience with two or more of the technical areas, and/or one or more of the application areas, are highly encouraged to apply. As part of the Centre for Cognitive Science at TU Darmstadt, the Hessian AI Center, and the Computer Science Department, Prof. Yu’s group members are encouraged and expected to collaborate extensively with preeminent researchers in cognitive science and AI, both nearby and internationally. All positions will be based at TU Darmstadt, Germany. Starting dates for the positions are flexible. Salaries are commensurate with experience and expertise, and highly competitive with respect to U.S. and European standards. The working language in the group and within the larger academic community is English. Fluency in German is not required; the university provides free German lessons for interested scientific staff.
Prof. Angela Yu
Multiple PhD and postdoctoral positions are immediately available in Prof. Angela Yu's research group at TU Darmstadt. The group investigates the intersection of natural and artificial intelligence using mathematically rigorous approaches to understand the representations and computations underlying intelligent behavior. The research particularly addresses challenges of inferential uncertainty and opportunities of volitional control. The group employs diverse methodological tools including Bayesian statistical modeling, control theory, reinforcement learning, and information theory to develop theoretical frameworks explaining key aspects of cognition: perception, attention, decision-making, learning, cognitive control, active sensing, economic behavior, and social interactions.
Dr. Siwei Wang
The NeuroAI group led by Dr. Siwei Wang in the Department of Neurobiology and Behavior at Stony Brook University is seeking a highly motivated Postdoctoral Research Fellow for an interdisciplinary project at the intersection of machine learning, signal processing, and neuroscience. The successful candidate will apply advanced machine learning, wavelet analysis, information theory, and topological data analysis techniques to uncover hidden neurobiological structure in complex neural and behavioral time-series data. We welcome candidates who are motivated to answer the following questions: 1) How can we discover and quantify semantic content in high-dimensional neural time series without ground truth? 2) What do these patterns reveal about fundamental principles of brain function and behavior? 3) How can the theory and methods developed for these complex time series translate into insights into the mathematical foundations of machine learning? This project tackles real-world neuroscience questions about how brain activity and behavior are organized. We are looking for a technically strong researcher who is excited to bridge cutting-edge computational methods with fundamental questions in neurobiology. Dr. Siwei Wang is an NITMB external affiliate member. The postdoc in her group will join the NITMB community through co-mentorship by leading visual neuroscientist Dr. Gregory Schwartz at Northwestern University or systems neuroscientist Dr. Jason MacLean at the University of Chicago. This dual mentorship ensures that the computational advances are tightly linked to cutting-edge experimental questions and data. The postdoctoral fellow will regularly engage with both mentors’ research groups, benefiting from their domain expertise and resources. The postdoctoral fellow will also have the opportunity to participate in and contribute to the NITMB community through workshops and research seminars.
This NITMB-enabled co-mentorship is designed to enhance the fellow’s visibility in both the computational and experimental neuroscience communities and to foster innovative, well-rounded skill development.
Integrating theory-guided and data-driven approaches for measuring consciousness
Clinical assessment of consciousness is a significant challenge: recent research suggests that some brain-damaged patients assessed as unconscious are in fact conscious. Misdiagnosis of consciousness can also be detrimental in general anaesthesia, causing numerous psychological problems, including post-traumatic stress disorder. Avoiding awareness by overdosing anaesthetics, however, can also lead to cognitive impairment. Currently available objective assessments of consciousness are limited in accuracy or require expensive equipment, posing major barriers to translation. In this talk, we will outline our recent theory-guided and data-driven approaches to developing new, optimized consciousness measures, which will be robustly evaluated on an unprecedented breadth of high-quality neural data recorded from the fly model system. We will address the subjective-choice problem in data-driven and theory-guided approaches with a comprehensive data-analytic framework, not previously applied to consciousness detection, that integrates previously disconnected streams of research to accelerate the translation of objective consciousness measures into clinical settings.
Inferring informational structures in neural recordings of drosophila with epsilon-machines
Measuring the degree of consciousness an organism possesses has remained a longstanding challenge in neuroscience. In part, this is due to the difficulty of finding appropriate mathematical tools for describing such a subjective phenomenon. Current methods relate the level of consciousness to the complexity of neural activity, i.e., using the information contained in a stream of recorded signals they can tell whether the subject might be awake, asleep, or anaesthetised. Usually, the signals stemming from a complex system are correlated in time; the behaviour of the future depends on the patterns in the neural activity of the past. However, these past-future relationships remain either hidden to, or not taken into account by, current measures of consciousness. These past-future correlations are likely to contain more information and can thus reveal a richer understanding of the behaviour of complex systems like a brain. Our work employs the “epsilon-machines” framework to account for the time correlations in neural recordings. In a nutshell, epsilon-machines reveal how much of the past neural activity is needed in order to accurately predict how the activity in the future will behave, and this is summarised in a single number called “statistical complexity”. If a lot of past neural activity is required to predict the future behaviour, can we then say that the brain was more “awake” at the time of recording? Furthermore, if we read the recordings in reverse, does the difference between forward and reverse-time statistical complexity allow us to quantify the level of time asymmetry in the brain? Neuroscience predicts that there should be a degree of time asymmetry in the brain; however, this has never been measured. To test this, we used neural recordings measured from the brains of fruit flies and inferred the epsilon-machines.
We found that the nature of the past-future correlations of neural activity in the brain drastically changes depending on whether the fly was awake or anaesthetised. Not only does our study find that wakeful and anaesthetised fly brains are distinguished by how statistically complex they are, but also that the amount of correlation in wakeful fly brains was much more sensitive to whether the neural recordings were read forwards vs. backwards in time, compared to anaesthetised brains. In other words, wakeful fly brains were more complex, and more time-asymmetric, than anaesthetised ones.
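The quantity at the heart of this abstract can be illustrated with a short sketch (my illustration, not the authors' code): group length-k pasts whose empirical predictive distributions agree, treat the groups as candidate causal states, and take the entropy of the state distribution. The function name, history length and merging tolerance below are all illustrative assumptions; full epsilon-machine reconstruction (e.g. the CSSR algorithm) is considerably more careful.

```python
import math
from collections import Counter, defaultdict

def statistical_complexity(seq, k=3, tol=0.05):
    """Entropy (in bits) over candidate causal states: length-k pasts are
    merged whenever their predictive distributions P(next | past) agree
    to within `tol`."""
    past_counts = Counter()
    next_counts = defaultdict(Counter)
    for i in range(len(seq) - k):
        past = tuple(seq[i:i + k])
        past_counts[past] += 1
        next_counts[past][seq[i + k]] += 1

    # Empirical predictive distribution for each observed past
    preds = {p: {s: c / past_counts[p] for s, c in nc.items()}
             for p, nc in next_counts.items()}

    # Greedily merge pasts whose predictions (approximately) coincide
    states = []  # list of (representative prediction, pooled count)
    for p, pred in preds.items():
        for j, (rep, cnt) in enumerate(states):
            if all(abs(pred.get(s, 0.0) - rep.get(s, 0.0)) < tol
                   for s in set(pred) | set(rep)):
                states[j] = (rep, cnt + past_counts[p])
                break
        else:
            states.append((pred, past_counts[p]))

    total = sum(cnt for _, cnt in states)
    return -sum((c / total) * math.log2(c / total) for _, c in states)

# A period-2 sequence has two causal states (C close to 1 bit),
# while a constant sequence has a single state (C = 0).
print(statistical_complexity([0, 1] * 200))
```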
Through the bottleneck: my adventures with the 'Tishby program'
One of Tali's cherished goals was to transform biology into physics. In his view, biologists were far too enamored of the details of the specific models they studied, losing sight of the big principles that may govern the behavior of these models. One such big principle that he suggested was the 'information bottleneck (IB) principle'. The IB principle is an information-theoretic approach for extracting the relevant information that one random variable carries about another. Tali applied the IB principle to numerous problems in biology, gaining important insights in the process. Here I will describe two applications of the IB principle to neurobiological data. The first is a formalization of the notion of surprise that allowed us to rigorously estimate the memory duration and content of neuronal responses in auditory cortex; the second is an application to behavior, allowing us to estimate 'optimal policies under information constraints' that shed interesting light on rat behavior.
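For readers unfamiliar with the method, a toy version of the IB self-consistent equations (Tishby, Pereira and Bialek) for discrete variables can be sketched as follows; the function name, joint distribution and parameters are illustrative assumptions, not taken from the talk.

```python
import numpy as np

def information_bottleneck(p_xy, n_t=2, beta=10.0, iters=300, seed=0):
    """Iterative IB: find a soft assignment p(t|x) that compresses X into
    n_t clusters T while preserving information about Y, via the
    self-consistent equations p(t|x) ~ p(t) * exp(-beta * KL(p(y|x) || p(y|t)))."""
    rng = np.random.default_rng(seed)
    p_x = p_xy.sum(axis=1)                        # p(x)
    p_y_x = p_xy / p_x[:, None]                   # p(y|x)
    q_t_x = rng.dirichlet(np.ones(n_t), size=len(p_x))  # random init of p(t|x)
    for _ in range(iters):
        q_t = q_t_x.T @ p_x                       # p(t)
        q_y_t = (q_t_x * p_x[:, None]).T @ p_y_x / q_t[:, None]  # p(y|t)
        # KL(p(y|x) || p(y|t)) for every (x, t) pair, in nats
        kl = np.array([[np.sum(p_y_x[x] * np.log((p_y_x[x] + 1e-12) /
                                                 (q_y_t[t] + 1e-12)))
                        for t in range(n_t)] for x in range(len(p_x))])
        q_t_x = q_t[None, :] * np.exp(-beta * kl)
        q_t_x /= q_t_x.sum(axis=1, keepdims=True)
    return q_t_x

# Illustrative joint: x0 and x1 predict y = 0, x2 and x3 predict y = 1,
# so at this beta the bottleneck variable T groups {x0, x1} and {x2, x3}.
p_xy = np.array([[0.24, 0.01], [0.24, 0.01], [0.01, 0.24], [0.01, 0.24]])
q = information_bottleneck(p_xy)
```

At large beta the assignments become nearly hard; lowering beta trades relevance I(T;Y) against compression I(X;T).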
Information Dynamics in the Hippocampus and Cortex and their alterations in epilepsy
Neurological disorders share common high-level alterations, such as cognitive deficits, anxiety, and depression. This raises the possibility of fundamental alterations in the way information conveyed by neural firing is maintained and dispatched in the diseased brain. Using experimental epilepsy as a model of neurological disorder, we tested the hypothesis of altered information processing, analyzing how neurons in the hippocampus and the entorhinal cortex store and exchange information during slow and theta oscillations. We equate the storage and sharing of information to low-level, or primitive, information processing at the algorithmic level, the theoretical intermediate level between structure and function. We find that these low-level processes are organized into substates during brain states marked by theta and slow oscillations. Their internal composition and organization through time are disrupted in epilepsy, losing brain-state specificity and shifting towards a regime of disorder in a brain-region-dependent manner. We propose that the alteration of information processing at the algorithmic level may be a mechanism behind the emergent and widespread co-morbidities associated with epilepsy, and perhaps other disorders.
In search of me: a theoretical approach to identify the neural substrate of consciousness
A major neuroscientific challenge is to identify the neural mechanisms that support consciousness. Though experimental studies have accumulated evidence about the location of the neural substrate of consciousness (NSC), we still lack a full understanding of why certain brain areas, but not others, can support consciousness. In this talk, I will give an overview of our approach, which takes advantage of the theoretical framework provided by Integrated Information Theory (IIT). First, I will introduce results showing that a maximum of integrated information within the human brain matches our best evidence concerning the location of the NSC, supporting IIT’s prediction. Furthermore, I will discuss the possibility that the NSC can change its location and even split into two depending on task demands. Finally, based on graph-theoretical analyses, I will argue that the ability of different brain regions to contribute (or not) to consciousness depends on specific properties of their anatomical connectivity, which determine their ability to support high integrated information.
Integrated Information Theory and Its Implications for Free Will
Integrated information theory (IIT) takes as its starting point phenomenology, rather than behavioral, functional, or neural correlates of consciousness. The theory characterizes the essential properties of phenomenal existence—which is immediate and indubitable. These are translated into physical properties, expressed operationally as cause-effect power, which must be satisfied by the neural substrate of consciousness. On this basis, the theory can account for clinical and experimental data about the presence and absence of consciousness. Current work aims at accounting for specific qualities of different experiences, such as spatial extendedness and the flow of time. Several implications of IIT have ethical relevance. One is that functional equivalence does not imply phenomenal equivalence—computers may one day be able to do everything we do, but they will not experience anything. Another is that we do have free will in the fundamental, metaphysical sense—we have true alternatives and we, not our neurons, are the true cause of our willed actions.
Reading out responses of large neural population with minimal information loss
Classic studies show that in many species – from leech and cricket to primate – the responses of neural populations can be quite successfully read out using a measure of neural population activity termed the population vector. However, despite its successes, detailed analyses have shown that the standard population vector discards substantial amounts of information contained in the responses of a neural population, and so is unlikely to accurately describe how signals are communicated between parts of the nervous system. I will describe recent theoretical results showing how to modify the population vector expression in order to read out neural responses with, ideally, no information loss. These results make it possible to quantify the contribution of weakly tuned neurons to perception. I will also discuss numerical methods that can be used to minimize information loss when reading out the responses of large neural populations.
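The classic construction the talk starts from can be sketched in a few lines (an illustration of the standard population vector only, not the modified lossless readout discussed in the talk); the tuning curves and parameters below are assumptions made for the example.

```python
import numpy as np

def population_vector(rates, preferred):
    """Classic population vector: sum each neuron's preferred-direction
    unit vector, weighted by its firing rate, and return the angle of
    the resulting vector."""
    x = np.sum(rates * np.cos(preferred))
    y = np.sum(rates * np.sin(preferred))
    return np.arctan2(y, x)

# 32 cosine-tuned neurons with evenly spaced preferred directions
prefs = np.linspace(0, 2 * np.pi, 32, endpoint=False)
stim = 1.0                                # true stimulus direction (radians)
rates = 10 + 8 * np.cos(prefs - stim)     # noiseless cosine tuning curves
decoded = population_vector(rates, prefs)  # recovers stim in this ideal case
```

With noisy rates or non-cosine tuning, this estimator becomes biased and information-lossy, which is exactly the gap the modified readout in the talk addresses.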
Theory, reimagined
Physics offers countless examples for which theoretical predictions are astonishingly powerful. But it’s hard to imagine similar precision in complex systems, where the number of components and the interdependencies between them simply prohibit a first-principles approach; look no further than the billions of neurons and trillions of connections within our own brains. In such settings, how do we even identify the important theoretical questions? We describe a systems-scale perspective in which we integrate information theory, dynamical systems and statistical physics to extract understanding directly from measurements. We demonstrate our approach with a reconstructed state space of the behavior of the nematode C. elegans, revealing a chaotic attractor with a symmetric Lyapunov spectrum and a novel perspective on motor control. We then outline a maximally predictive coarse-graining in which nonlinear dynamics are subsumed into a linear ensemble evolution, yielding a simple yet accurate model on multiple scales. With this coarse-graining we identify long timescales and collective states in the Langevin dynamics of a double-well potential, in the Lorenz system, and in worm behavior. We suggest that such an “inverse” approach offers an emergent, quantitative framework in which to seek, rather than impose, effective organizing principles of complex systems.
The 3 Cs: Collaborating to Crack Consciousness
Every day when we fall asleep we lose consciousness; we are not there. And then, every morning when we wake up, we regain it. What mechanisms give rise to consciousness, and how can we explain consciousness in the realm of the physical world of atoms and matter? For centuries, philosophers and scientists have aimed to crack this mystery. Much progress has been made in the past decades in understanding how consciousness is instantiated in the brain, yet critical questions remain: can we develop a consciousness meter? Are computers conscious? What about other animals and babies? We have embarked on a large-scale, multicenter project to test, in the context of an open-science, adversarial collaboration, two of the most prominent theories: Integrated Information Theory (IIT) and Global Neuronal Workspace (GNW) theory. We are collecting over 500 datasets, including invasive and non-invasive recordings of the human brain: fMRI, MEG and ECoG. We hope this project will enable theory-driven discoveries and further explorations that will help us better understand how consciousness fits inside the human brain.
Information and Decision-Making
In recent years it has become increasingly clear that (Shannon) information is a central resource for organisms, akin in importance to energy. Any decision that an organism or a subsystem of an organism takes involves the acquisition, selection, and processing of information and ultimately its concentration and enaction. It is the consequences of this balance that will occupy us in this talk. This perception-action loop picture of an agent's life cycle is well established and expounded especially in the context of Fuster's sensorimotor hierarchies. Nevertheless, the information-theoretic perspective drastically expands the potential and predictive power of the perception-action loop perspective. On the one hand information can be treated - to a significant extent - as a resource that is being sought and utilized by an organism. On the other hand, unlike energy, information is not additive. The intrinsic structure and dynamics of information can be exceedingly complex and subtle; in the last two decades one has discovered that Shannon information possesses a rich and nontrivial intrinsic structure that must be taken into account when informational contributions, information flow or causal interactions of processes are investigated, whether in the brain or in other complex processes. In addition, strong parallels between information and control theory have emerged. This parallelism between the theories allows one to obtain unexpected insights into the nature and properties of the perception-action loop. Through the lens of information theory, one can not only come up with novel hypotheses about necessary conditions for the organization of information processing in a brain, but also with constructive conjectures and predictions about what behaviours, brain structure and dynamics and even evolutionary pressures one can expect to operate on biological organisms, induced purely by informational considerations.
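The point that information, unlike energy, is not additive can be made concrete with the standard XOR example (my illustration, not from the talk): each input alone carries zero information about the output, yet the two inputs together determine it completely.

```python
import math
from collections import defaultdict

def entropy(dist):
    """Shannon entropy in bits of a probability mass function given as a dict."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def mi(joint):
    """Mutual information I(A;B) in bits from a joint dict {(a, b): p}."""
    pa, pb = defaultdict(float), defaultdict(float)
    for (a, b), p in joint.items():
        pa[a] += p
        pb[b] += p
    return entropy(pa) + entropy(pb) - entropy(joint)

# XOR with uniform inputs: build the joints (X1, Y) and ((X1, X2), Y)
joint_x1y, joint_x12y = defaultdict(float), defaultdict(float)
for x1 in (0, 1):
    for x2 in (0, 1):
        y = x1 ^ x2
        joint_x1y[(x1, y)] += 0.25
        joint_x12y[((x1, x2), y)] += 0.25

# mi(joint_x1y) is 0 bits, while mi(joint_x12y) is 1 bit: the inputs
# are individually uninformative but jointly decisive (synergy).
```

This synergy is precisely the kind of nontrivial intrinsic structure that must be accounted for when decomposing informational contributions in the brain or other complex processes.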
What should a neuron aim for? Designing local objective functions based on information theory
Bernstein Conference 2024
Deciphering the dynamics of memory encoding and recall in the hippocampus using two-photon calcium imaging and information theory
FENS Forum 2024