information theory
Jean-Pascal Pfister
The Theoretical Neuroscience Group of the University of Bern is seeking applications for a PhD position, funded by a Swiss National Science Foundation grant titled “Why Spikes?”. The project aims to answer a nearly century-old question in Neuroscience: “What are spikes good for?”. Indeed, since the discovery of action potentials by Lord Adrian in 1926, it has remained largely unknown what benefits spiking neurons offer over analog neurons. Traditionally, it has been argued that spikes are good for long-distance communication or for temporally precise computation. However, no systematic study has quantitatively compared the communication and computational benefits of spiking neurons with those of analog neurons. The aim of the project is to systematically quantify the benefits of spiking at various levels by developing and analyzing appropriate mathematical models. The PhD student will be supervised by Prof. Jean-Pascal Pfister (Theoretical Neuroscience Group, Department of Physiology, University of Bern). The project will involve close collaboration within a highly motivated team as well as regular exchange of ideas with the other theory groups at the institute.
Integrating theory-guided and data-driven approaches for measuring consciousness
Clinical assessment of consciousness is a significant issue, with recent research suggesting that some brain-damaged patients assessed as unconscious are in fact conscious. Misdiagnosis of consciousness can also be detrimental in general anaesthesia, causing numerous psychological problems, including post-traumatic stress disorder. Avoiding intraoperative awareness by overdosing anaesthetics, however, can also lead to cognitive impairment. Currently available objective assessments of consciousness are limited in accuracy or require expensive equipment, posing major barriers to translation. In this talk, we will outline our recent theory-guided and data-driven approaches to developing new, optimized consciousness measures that will be robustly evaluated on an unprecedented breadth of high-quality neural data recorded from the fly model system. We will overcome the subjective-choice problem in data-driven and theory-guided approaches with a comprehensive data-analytic framework, never before applied to consciousness detection, integrating previously disconnected streams of research to accelerate the translation of objective consciousness measures into clinical settings.
Inferring informational structures in neural recordings of Drosophila with epsilon-machines
Measuring the degree of consciousness an organism possesses has remained a longstanding challenge in Neuroscience. In part, this is due to the difficulty of finding the appropriate mathematical tools for describing such a subjective phenomenon. Current methods relate the level of consciousness to the complexity of neural activity, i.e., using the information contained in a stream of recorded signals they can tell whether the subject might be awake, asleep, or anaesthetised. Usually, the signals stemming from a complex system are correlated in time; the behaviour of the future depends on the patterns in the neural activity of the past. However, these past-future relationships remain either hidden to, or not taken into account by, current measures of consciousness. These past-future correlations are likely to contain more information and thus can reveal a richer understanding of the behaviour of complex systems like a brain. Our work employs the “epsilon-machines” framework to account for the time correlations in neural recordings. In a nutshell, epsilon-machines reveal how much of the past neural activity is needed in order to accurately predict how the activity will behave in the future, and this is summarised in a single number called “statistical complexity”. If a lot of past neural activity is required to predict the future behaviour, can we then say that the brain was more “awake” at the time of recording? Furthermore, if we read the recordings in reverse, does the difference between forward- and reverse-time statistical complexity allow us to quantify the level of time asymmetry in the brain? Neuroscience predicts that there should be a degree of time asymmetry in the brain; however, this has never been measured. To test this, we used neural recordings measured from the brains of fruit flies and inferred the corresponding epsilon-machines.
We found that the nature of the past-future correlations of neural activity in the brain drastically changes depending on whether the fly was awake or anaesthetised. Not only were wakeful and anaesthetised fly brains distinguished by how statistically complex they were, but the amount of correlation in wakeful fly brains was also far more sensitive to whether the neural recordings were read forwards or backwards in time than in anaesthetised brains. In other words, wakeful fly brains were both more complex and more time-asymmetric than anaesthetised ones.
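To make the notion of statistical complexity concrete, here is a deliberately crude sketch (not the epsilon-machine inference algorithm used in the study): histories of the recorded signal are grouped into candidate causal states whenever they predict the next symbol with approximately the same distribution, and the statistical complexity is the entropy of the resulting state distribution. The history length and merging tolerance are illustrative parameters, not values from the work described above.

```python
from collections import Counter, defaultdict
from math import log2

def statistical_complexity(seq, hist_len=3, tol=0.05):
    """Toy causal-state estimate: merge length-`hist_len` histories whose
    empirical next-symbol distributions agree within `tol`, then return
    the Shannon entropy (bits) of the merged-state distribution."""
    nxt = defaultdict(Counter)   # history -> next-symbol counts
    hist_count = Counter()       # history -> occurrences
    for i in range(len(seq) - hist_len):
        h = tuple(seq[i:i + hist_len])
        nxt[h][seq[i + hist_len]] += 1
        hist_count[h] += 1

    def dist(h):
        total = sum(nxt[h].values())
        return {s: n / total for s, n in nxt[h].items()}

    # greedily merge histories with near-identical predictive distributions
    states = []                  # each entry: [representative dist, weight]
    for h, w in hist_count.items():
        d = dist(h)
        for st in states:
            if all(abs(d.get(s, 0) - st[0].get(s, 0)) < tol
                   for s in set(d) | set(st[0])):
                st[1] += w
                break
        else:
            states.append([d, w])

    total = sum(w for _, w in states)
    return -sum((w / total) * log2(w / total) for _, w in states)
```

On a perfectly periodic binary signal the past matters (two causal states, one bit of statistical complexity), while a constant signal needs no memory at all (zero bits); real neural recordings fall in between, and the study's forward-vs-reverse comparison amounts to running such an inference on the time-reversed sequence as well.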
Through the bottleneck: my adventures with the 'Tishby program'
One of Tali's cherished goals was to transform biology into physics. In his view, biologists were far too enamored with the details of the specific models they studied, losing sight of the big principles that may govern the behavior of these models. One such big principle that he suggested was the 'information bottleneck (IB) principle'. The IB principle is an information-theoretic approach for extracting the relevant information that one random variable carries about another. Tali applied the IB principle to numerous problems in biology, gaining important insights in the process. Here I will describe two applications of the IB principle to neurobiological data. The first is a formalization of the notion of surprise that allowed us to rigorously estimate the memory duration and content of neuronal responses in auditory cortex; the second is an application to behavior, allowing us to estimate 'optimal policies under information constraints' that shed interesting light on rat behavior.
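For readers unfamiliar with the method: the IB principle compresses a variable X into a bottleneck variable T that keeps as much information as possible about a relevance variable Y, trading off I(T;X) against I(T;Y) via a parameter beta. A minimal sketch of the classic self-consistent iterations for discrete distributions follows; it assumes a fully known joint p(x, y) with no zero entries in p(y|x), and is an illustration, not the code behind either application in the talk.

```python
import numpy as np

def information_bottleneck(p_xy, n_clusters, beta, n_iter=200, seed=0):
    """Self-consistent IB iterations for a discrete joint p(x, y).
    Returns p(t|x) and p(y|t). Assumes p(y|x) has no zero entries."""
    rng = np.random.default_rng(seed)
    p_x = p_xy.sum(axis=1)                       # marginal p(x)
    p_y_given_x = p_xy / p_x[:, None]            # conditional p(y|x)

    # random soft assignment p(t|x) to break symmetry
    q_t_given_x = rng.random((p_xy.shape[0], n_clusters))
    q_t_given_x /= q_t_given_x.sum(axis=1, keepdims=True)

    for _ in range(n_iter):
        q_t = q_t_given_x.T @ p_x                # cluster marginal p(t)
        # p(y|t) = sum_x p(t|x) p(x) p(y|x) / p(t)
        q_y_given_t = (q_t_given_x * p_x[:, None]).T @ p_y_given_x
        q_y_given_t /= q_t[:, None]
        # KL( p(y|x) || p(y|t) ) for every (x, t) pair
        kl = (p_y_given_x[:, None, :] *
              np.log(p_y_given_x[:, None, :] / q_y_given_t[None, :, :])
              ).sum(axis=2)
        # IB update: p(t|x) proportional to p(t) exp(-beta * KL)
        q_t_given_x = q_t[None, :] * np.exp(-beta * kl)
        q_t_given_x /= q_t_given_x.sum(axis=1, keepdims=True)
    return q_t_given_x, q_y_given_t
```

With a large beta the bottleneck keeps nearly all the relevant information: inputs with the same p(y|x) are mapped to the same cluster, which is exactly the sense in which IB extracts "the relevant information one variable carries about another".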
Information Dynamics in the Hippocampus and Cortex and their alterations in epilepsy
Neurological disorders share common high-level alterations, such as cognitive deficits, anxiety, and depression. This raises the possibility of fundamental alterations in the way information conveyed by neural firing is maintained and dispatched in the diseased brain. Using experimental epilepsy as a model of neurological disorder, we tested the hypothesis of altered information processing, analyzing how neurons in the hippocampus and the entorhinal cortex store and exchange information during slow and theta oscillations. We equate the storage and sharing of information to low-level, or primitive, information processing at the algorithmic level, the theoretical intermediate level between structure and function. We find that these low-level processes are organized into substates during brain states marked by theta and slow oscillations. Their internal composition and organization through time are disrupted in epilepsy, losing brain-state specificity and shifting towards a regime of disorder in a brain-region-dependent manner. We propose that the alteration of information processing at the algorithmic level may be a mechanism behind the emergent and widespread co-morbidities associated with epilepsy, and perhaps other disorders.
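The "storage" and "sharing" primitives mentioned here are commonly formalized in the information-dynamics literature as active information storage (how much a signal's own past predicts its present) and mutual information between two signals. The abstract does not give its exact estimators, so the following is a minimal plug-in sketch under that assumption, for discretized activity sequences:

```python
from collections import Counter
from math import log2

def entropy(counts):
    """Shannon entropy (bits) of an empirical distribution given as counts."""
    total = sum(counts.values())
    return -sum(n / total * log2(n / total) for n in counts.values())

def active_info_storage(x, k=2):
    """Storage primitive: I(past_k ; present), i.e. how much a signal's
    own length-k history predicts its next value."""
    pairs = [(tuple(x[i - k:i]), x[i]) for i in range(k, len(x))]
    past = Counter(p for p, _ in pairs)
    present = Counter(s for _, s in pairs)
    joint = Counter(pairs)
    return entropy(past) + entropy(present) - entropy(joint)

def shared_info(x, y):
    """Sharing primitive: instantaneous mutual information I(X ; Y)."""
    joint = Counter(zip(x, y))
    return entropy(Counter(x)) + entropy(Counter(y)) - entropy(joint)
```

A perfectly periodic signal stores one bit (its past fully determines its present), and a signal shares its full entropy with itself but nothing with a constant signal; substate analyses of the kind described above track how such quantities cluster and evolve across neurons and brain states.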
In search of me: a theoretical approach to identify the neural substrate of consciousness
A major neuroscientific challenge is to identify the neural mechanisms that support consciousness. Though experimental studies have accumulated evidence about the location of the neural substrate of consciousness (NSC), we still lack a full understanding of why certain brain areas, but not others, can support consciousness. In this talk, I will give an overview of our approach, which takes advantage of the theoretical framework provided by Integrated Information Theory (IIT). First, I will present results showing that a maximum of integrated information within the human brain matches our best evidence concerning the location of the NSC, supporting IIT's prediction. Furthermore, I will discuss the possibility that the NSC can change its location, and even split into two, depending on task demands. Finally, based on graph-theoretical analyses, I will argue that whether different brain regions can contribute to consciousness depends on specific properties of their anatomical connectivity, which determine their ability to support high integrated information.
Integrated Information Theory and Its Implications for Free Will
Integrated information theory (IIT) takes as its starting point phenomenology, rather than behavioral, functional, or neural correlates of consciousness. The theory characterizes the essential properties of phenomenal existence—which is immediate and indubitable. These are translated into physical properties, expressed operationally as cause-effect power, which must be satisfied by the neural substrate of consciousness. On this basis, the theory can account for clinical and experimental data about the presence and absence of consciousness. Current work aims at accounting for specific qualities of different experiences, such as spatial extendedness and the flow of time. Several implications of IIT have ethical relevance. One is that functional equivalence does not imply phenomenal equivalence—computers may one day be able to do everything we do, but they will not experience anything. Another is that we do have free will in the fundamental, metaphysical sense—we have true alternatives and we, not our neurons, are the true cause of our willed actions.
Reading out responses of large neural populations with minimal information loss
Classic studies show that in many species – from leech and cricket to primate – responses of neural populations can be quite successfully read out using a measure of neural population activity termed the population vector. However, despite its successes, detailed analyses have shown that the standard population vector discards substantial amounts of information contained in the responses of a neural population, and so is unlikely to accurately describe signal communication between parts of the nervous system. I will describe recent theoretical results showing how to modify the population vector expression so as to read out neural responses, ideally without information loss. These results make it possible to quantify the contribution of weakly tuned neurons to perception. I will also discuss numerical methods that can be used to minimize information loss when reading out responses of large neural populations.
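For reference, the standard (unmodified) population vector that the talk takes as its starting point sums each neuron's preferred direction weighted by its firing rate; the modified, information-preserving readout discussed in the talk reweights this expression, in ways not reproduced here. A minimal sketch of the classic readout:

```python
import numpy as np

def population_vector(rates, preferred_angles):
    """Classic population-vector readout: each neuron votes for its
    preferred direction in proportion to its firing rate; the decoded
    stimulus is the angle of the resultant vector."""
    vx = np.sum(rates * np.cos(preferred_angles))
    vy = np.sum(rates * np.sin(preferred_angles))
    return np.arctan2(vy, vx)
```

For idealized cosine-tuned neurons with uniformly spaced preferred angles this readout recovers the stimulus angle exactly; the information loss the talk addresses arises for realistic tuning curves and correlated noise, where uniform rate-weighting is no longer optimal.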
The 3 Cs: Collaborating to Crack Consciousness
Every day when we fall asleep we lose consciousness; we are not there. And then, every morning when we wake up, we regain it. What mechanisms give rise to consciousness, and how can we explain consciousness in the realm of the physical world of atoms and matter? For centuries, philosophers and scientists have aimed to crack this mystery. Much progress has been made in the past decades in understanding how consciousness is instantiated in the brain, yet critical questions remain: can we develop a consciousness meter? Are computers conscious? What about other animals and babies? We have embarked on a large-scale, multicenter project to test, in the context of an open-science, adversarial collaboration, two of the most prominent theories: Integrated Information Theory (IIT) and Global Neuronal Workspace (GNW) theory. We are collecting over 500 datasets, including invasive and non-invasive recordings of the human brain: fMRI, MEG, and ECoG. We hope this project will enable theory-driven discoveries and further explorations that will help us better understand how consciousness fits inside the human brain.
Information and Decision-Making
In recent years it has become increasingly clear that (Shannon) information is a central resource for organisms, akin in importance to energy. Any decision that an organism or a subsystem of an organism takes involves the acquisition, selection, and processing of information, and ultimately its concentration and enaction. It is the consequences of this perception-action loop - the cycle of sensing, processing, and acting - that will occupy us in this talk. This picture of an agent's life cycle is well established and expounded especially in the context of Fuster's sensorimotor hierarchies. Nevertheless, the information-theoretic perspective drastically expands the potential and predictive power of the perception-action loop perspective. On the one hand, information can be treated - to a significant extent - as a resource that is sought and utilized by an organism. On the other hand, unlike energy, information is not additive. The intrinsic structure and dynamics of information can be exceedingly complex and subtle; in the last two decades it has been discovered that Shannon information possesses a rich and nontrivial intrinsic structure that must be taken into account when informational contributions, information flow, or causal interactions of processes are investigated, whether in the brain or in other complex processes. In addition, strong parallels between information theory and control theory have emerged. This parallelism allows one to obtain unexpected insights into the nature and properties of the perception-action loop. Through the lens of information theory, one can not only come up with novel hypotheses about necessary conditions for the organization of information processing in a brain, but also with constructive conjectures and predictions about which behaviours, brain structures, dynamics, and even evolutionary pressures one can expect to operate on biological organisms, induced purely by informational considerations.
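The claim that information, unlike energy, is not additive has a classic concrete illustration (our example, not one from the talk): in an XOR relationship, each input alone carries zero information about the output, yet together they determine it completely, so informational contributions cannot simply be summed.

```python
from collections import Counter
from math import log2
from itertools import product

def mutual_info(pairs):
    """Plug-in estimate of I(A ; B) in bits from a list of (a, b) samples."""
    n = len(pairs)
    pa = Counter(a for a, _ in pairs)
    pb = Counter(b for _, b in pairs)
    pab = Counter(pairs)
    return sum(c / n * log2((c / n) / ((pa[a] / n) * (pb[b] / n)))
               for (a, b), c in pab.items())

# XOR: y is fully determined by (x1, x2) jointly, but independent of each alone
samples = [(x1, x2, x1 ^ x2) for x1, x2 in product([0, 1], repeat=2)]
i1 = mutual_info([(x1, y) for x1, _, y in samples])        # 0 bits
i2 = mutual_info([(x2, y) for _, x2, y in samples])        # 0 bits
i_joint = mutual_info([((x1, x2), y) for x1, x2, y in samples])  # 1 bit
```

This purely synergistic case is the extreme; decomposing such joint information into redundant, unique, and synergistic parts is one facet of the "rich and nontrivial intrinsic structure" of Shannon information referred to above.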
What should a neuron aim for? Designing local objective functions based on information theory
Bernstein Conference 2024
Deciphering the dynamics of memory encoding and recall in the hippocampus using two-photon calcium imaging and information theory
FENS Forum 2024