Theory
Dr. Gabriele Scheler
We are offering a research stipend to investigate theories of memorization in neural plasticity. The focus is a critical evaluation of the role of LTP/LTD and synaptic plasticity in memory. This position is virtual and could be done part-time, or full-time for three months. The ideal candidate should have solid knowledge of neurobiology, especially plasticity mechanisms; excellent communication skills; interest and enthusiasm for next-generation neural theories; and a good understanding of computation and mathematics. Programming skills are not required for this position. Detailed knowledge of one area of neural plasticity, such as synapses, intracellular pathways or genetics, is expected. Further information available on request.
The Systems Vision Science Summer School & Symposium, August 11 – 22, 2025, Tuebingen, Germany
Applications are invited for the third edition of the Systems Vision Science (SVS) summer school, held since 2023 and designed for everyone interested in gaining a systems-level understanding of biological vision. We plan a coherent, graduate-level syllabus on the integration of experimental data with theory and models, featuring lectures, guided exercises and discussion sessions. The summer school will end with a Systems Vision Science symposium on frontier topics on August 20-22, with additional invited and contributed presentations and posters. A call for contributions and participation in the symposium will be sent out in spring 2025. All summer school participants are invited to attend, and welcome to submit contributions to the symposium.
“Brain theory, what is it or what should it be?”
In the neurosciences the need for some 'overarching' theory is sometimes expressed, but it is not always obvious what is meant by this. One can perhaps agree that in modern science observation and experimentation are normally complemented by 'theory', i.e. the development of theoretical concepts that help guide and evaluate experiments and measurements. A deeper discussion of 'brain theory' will require the clarification of some further distinctions, in particular: theory vs. model, and brain research (and its theory) vs. neuroscience. Other questions are: Does a theory require mathematics? Or even differential equations? Today it is often taken for granted that the whole universe, including everything in it, for example humans, animals, and plants, can be adequately treated by physics, and that theoretical physics is therefore the overarching theory. Even if this is the case, it has turned out that in some particular parts of physics (the historical example is thermodynamics) it may be useful to simplify the theory by introducing additional theoretical concepts that can in principle be 'reduced' to more complex descriptions on the 'microscopic' level of basic physical particles and forces. In this sense, brain theory may be regarded as part of theoretical neuroscience, which sits inside biophysics and therefore inside physics, or theoretical physics. Still, in neuroscience and brain research, additional concepts are typically used to describe results and help guide experimentation that are 'outside' physics, beginning with neurons and synapses, names of brain parts and areas, up to concepts like 'learning', 'motivation', 'attention'. Certainly, we do not yet have one theory that includes all these concepts. So 'brain theory' is still in a 'pre-Newtonian' state. However, it may still be useful to understand in general the relations between a larger theory and its 'parts', or between microscopic and macroscopic theories, or between theories at different 'levels' of description. This is what I plan to do.
Neural mechanisms of optimal performance
When we attend to a demanding task, our performance is poor at low arousal (when drowsy) or high arousal (when anxious), but optimal at intermediate arousal, a state colloquially referred to as being "in the zone." This is the celebrated Yerkes-Dodson inverted-U law relating performance and arousal. In this talk, I will elucidate the behavioral and neural mechanisms linking arousal and performance under the Yerkes-Dodson law in a mouse model. During decision-making tasks, mice express an array of discrete strategies, whereby the optimal strategy occurs at intermediate arousal, measured by pupil size, consistent with the inverted-U law. Population recordings from the auditory cortex (A1) further revealed that sound encoding is optimal at intermediate arousal. To explain the computational principle underlying this inverted-U law, we modeled the A1 circuit as a spiking network with excitatory/inhibitory clusters, based on the observed functional clusters in A1. Arousal induced a transition from a multi-attractor phase (low arousal) to a single-attractor phase (high arousal), and performance was optimized at the transition point. The model also predicts stimulus- and arousal-induced modulations of neural variability, which we confirmed in the data. Our theory suggests that a single unifying dynamical principle, phase transitions in metastable dynamics, underlies both the inverted-U law of optimal performance and state-dependent modulations of neural variability.
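The core idea, that performance peaks at the boundary between a multi-attractor and a single-attractor regime, can be illustrated with a one-dimensional toy model rather than the full clustered spiking network: a noisy particle in a double-well potential whose barrier shrinks as "arousal" increases. All parameters below are illustrative; this is a sketch in the spirit of the abstract, not the speakers' model:

```python
import numpy as np

rng = np.random.default_rng(0)

def trial(a, s=0.2, dt=0.01, noise=0.3):
    """One decision trial: Langevin dynamics in V(x) = x**4/4 - a*x**2/2.
    a > 0 gives two attractors (low arousal); a <= 0 a single one (high
    arousal). A weak stimulus s tilts the potential during the stimulus
    phase; after a delay, the 'choice' is read out as the sign of x."""
    x = 0.0
    for duration, tilt in ((2.0, 0.0), (1.0, s), (2.0, 0.0)):  # pre / stim / delay
        for _ in range(int(duration / dt)):
            x += (-x**3 + a * x + tilt) * dt + noise * np.sqrt(dt) * rng.standard_normal()
    return x > 0  # correct if the particle ends in the stimulus-preferred state

for a in (1.0, 0.3, -1.0):  # decreasing a ~ increasing arousal
    acc = np.mean([trial(a) for _ in range(300)])
    print(f"a = {a:+.1f}  accuracy = {acc:.2f}")
```

In deep wells (low arousal) the weak stimulus cannot dislodge the pre-stimulus state; in a single stiff well (high arousal) the stimulus-evoked state does not persist through the delay; in this toy, accuracy tends to peak near the transition, mirroring the inverted-U.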
A Novel Neurophysiological Approach to Assessing Distractibility within the General Population
Vulnerability to distraction varies across the general population and significantly affects one’s capacity to stay focused on and successfully complete the task at hand, whether at school, on the road, or at work. In this talk, I will begin by discussing how distractibility is typically assessed in the literature and introduce our innovative ERP approach to measuring it. Since distractibility is a cardinal symptom of ADHD, I will introduce its most widely used paper-and-pencil screening tool for the general population as external validation. Following that, I will present the Load Theory of Attention and explain how we used perceptual load to test the reliability of our neural marker of distractibility. Finally, I will highlight potential future applications of this marker in clinical and educational settings.
On finding what you’re (not) looking for: prospects and challenges for AI-driven discovery
Recent high-profile scientific achievements by machine learning (ML) and especially deep learning (DL) systems have reinvigorated interest in ML for automated scientific discovery (e.g., Wang et al. 2023). Much of this work is motivated by the thought that DL methods might facilitate the discovery of phenomena, hypotheses, or even models or theories more efficiently than traditional, theory-driven approaches to discovery. This talk considers some of the more specific obstacles to automated, DL-driven discovery in frontier science, focusing on gravitational-wave astrophysics (GWA) as a representative case study. In the first part of the talk, we argue that despite recent efforts, prospects for DL-driven discovery in GWA remain uncertain. In the second part, we advocate a shift in focus towards the ways DL can be used to augment or enhance existing discovery methods, and the epistemic virtues and vices associated with these uses. We argue that the primary epistemic virtue of many such uses is to decrease the opportunity costs associated with investigating puzzling or anomalous signals, and that the right framework for evaluating these uses comes from philosophical work on pursuitworthiness.
Gender, trait anxiety and attentional processing in healthy young adults: is a moderated moderation theory possible?
Three studies conducted in the context of PhD work (UNIL) aimed at providing evidence to address the question of potential gender differences in how trait anxiety and executive control bias behavioral efficacy. In scope were non-clinical samples of young adult males and females who performed non-emotional tasks assessing basic attentional functioning (Attention Network Test – Interactions, ANT-I), sustained attention (Test of Variables of Attention, TOVA), and visual recognition abilities (Object in Location Recognition Task, OLRT). Results confirmed the intricate nature of the relationship between gender and trait anxiety in healthy individuals through the lens of their impact on processing efficacy in males and females. The possibility of a gendered theory of trait anxiety biases is discussed.
How to tell if someone is hiding something from you? An overview of the scientific basis of deception and concealed information detection
In my talk I will give an overview of recent research on deception and concealed information detection. I will start with a short introduction on the problems and shortcomings of traditional deception detection tools and why those still prevail in many recent approaches (e.g., in AI-based deception detection). I want to argue for the importance of more fundamental deception research and give some examples of insights gained from it. In the second part of the talk, I will introduce the Concealed Information Test (CIT), a promising paradigm for research and applied contexts to investigate whether someone actually recognizes information that they do not want to reveal. The CIT is based on solid scientific theory and produces large effect sizes in laboratory studies with a number of different measures (e.g., behavioral, psychophysiological, and neural measures). I will highlight some challenges a forensic application of the CIT still faces and how scientific research could assist in overcoming those.
Consciousness: From theory to practice
Enabling witnesses to actively explore faces and reinstate study-test pose during a lineup increases discrimination accuracy
In 2014, the US National Research Council called for the development of new lineup technologies to increase eyewitness identification accuracy (National Research Council, 2014). In a police lineup, a suspect is presented alongside multiple individuals, known as fillers, who are known to be innocent and resemble the suspect in physical appearance. A correct identification decision by an eyewitness can lead to a guilty suspect being convicted or an innocent suspect being exonerated from suspicion. An incorrect decision can result in the perpetrator remaining at large, or even a wrongful conviction of a mistakenly identified person. Incorrect decisions carry considerable human and financial costs, so it is essential to develop and enact lineup procedures that maximise discrimination accuracy: the witness's ability to distinguish guilty from innocent suspects. This talk focuses on new technology and innovation in the field of eyewitness identification. We will focus on the interactive lineup, a procedure that we developed based on research and theory from the basic science literature on face perception and recognition. The interactive lineup enables witnesses to actively explore and dynamically view the lineup members, and has been shown to maximize discrimination accuracy. The talk will conclude by reflecting on emerging technological frontiers and research opportunities.
Unifying the mechanisms of hippocampal episodic memory and prefrontal working memory
Remembering events in the past is crucial to intelligent behaviour. Flexible memory retrieval, beyond simple recall, requires a model of how events relate to one another. Two key brain systems are implicated in this process: the hippocampal episodic memory (EM) system and the prefrontal working memory (WM) system. While an understanding of the hippocampal system, from computation to algorithm and representation, is emerging, less is understood about how the prefrontal WM system can give rise to flexible computations beyond simple memory retrieval, and even less is understood about how the two systems relate to each other. Here we develop a mathematical theory relating the algorithms and representations of EM and WM by showing a duality between storing memories in synapses versus neural activity. In doing so, we develop a formal theory of the algorithm and representation of prefrontal WM as structured, and controllable, neural subspaces (termed activity slots). By building models using this formalism, we elucidate the differences, similarities, and trade-offs between the hippocampal and prefrontal algorithms. Lastly, we show that several prefrontal representations in tasks ranging from list learning to cue-dependent recall are unified as controllable activity slots. Our results unify frontal and temporal representations of memory, and offer a new basis for understanding the prefrontal representation of WM.
Degrees of Consciousness
In the science of consciousness, it’s often assumed that some creatures (or mental states) are more conscious than others. But a number of philosophers have argued that the notion of degrees of consciousness is conceptually confused. I'll (1) argue that the most prominent objections to degrees of consciousness are unsustainable, and (2) develop an analysis of degrees of consciousness. On my view, whether consciousness comes in degrees ultimately depends on which theory of consciousness turns out to be correct. But I'll also argue that most theories of consciousness entail that consciousness comes in degrees.
Piecing together the puzzle of emotional consciousness
Conscious emotional experiences are very rich in nature, and can encompass anything from the most intense panic when facing immediate threat to the overwhelming love felt when meeting your newborn. It is then no surprise that capturing all aspects of emotional consciousness, such as intensity, valence, and bodily responses, in one theory has become the topic of much debate. Key questions in the field concern how we can actually measure emotions and which types of experiments can help us distill the neural correlates of emotional consciousness. In this talk I will give a brief overview of theories of emotional consciousness and where they disagree, after which I will dive into the evidence proposed to support these theories. Along the way I will discuss to what extent studying emotional consciousness is 'special' and will suggest several tools and experimental contrasts we have at our disposal to further our understanding of this intriguing topic.
Connectome-based models of neurodegenerative disease
Neurodegenerative diseases involve accumulation of aberrant proteins in the brain, leading to brain damage and progressive cognitive and behavioral dysfunction. Many gaps exist in our understanding of how these diseases initiate and how they progress through the brain. However, evidence has accumulated supporting the hypothesis that aberrant proteins can be transported using the brain’s intrinsic network architecture — in other words, using the brain’s natural communication pathways. This theory forms the basis of connectome-based computational models, which combine real human data and theoretical disease mechanisms to simulate the progression of neurodegenerative diseases through the brain. In this talk, I will first review work leading to the development of connectome-based models, and work from my lab and others that have used these models to test hypothetical modes of disease progression. Second, I will discuss the future and potential of connectome-based models to achieve clinically useful individual-level predictions, as well as to generate novel biological insights into disease progression. Along the way, I will highlight recent work by my lab and others that is already moving the needle toward these lofty goals.
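For readers unfamiliar with this model class: the simplest connectome-based model treats pathology as diffusing along white-matter connections via the graph Laplacian, in the spirit of the network diffusion model of Raj et al. (2012). A minimal sketch with a random toy connectome, not real tractography data:

```python
import numpy as np

# Toy connectome: symmetric weighted adjacency over N regions
rng = np.random.default_rng(1)
N = 20
A = rng.random((N, N)); A = (A + A.T) / 2; np.fill_diagonal(A, 0)

# Graph Laplacian L = D - A governs diffusive spread: dp/dt = -beta * L @ p
L = np.diag(A.sum(axis=1)) - A
beta, dt, steps = 0.05, 0.1, 200

p = np.zeros(N); p[0] = 1.0   # seed pathology in one "epicenter" region
for _ in range(steps):
    p = p - beta * dt * (L @ p)  # forward-Euler integration of the diffusion

print("total pathology conserved:", p.sum())       # pure diffusion conserves mass
print("top 3 affected regions:", np.argsort(p)[::-1][:3])
```

Published connectome-based models typically add terms for local misfolding, production and clearance on top of this diffusive core, but the Laplacian transport mechanism is the shared backbone.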
Mathematical and computational modelling of ocular hemodynamics: from theory to applications
Changes in ocular hemodynamics may be indicative of pathological conditions in the eye (e.g. glaucoma, age-related macular degeneration), but also elsewhere in the body (e.g. systemic hypertension, diabetes, neurodegenerative disorders). Thanks to its transparent fluids and structures that allow the light to go through, the eye offers a unique window on the circulation from large to small vessels, and from arteries to veins. Deciphering the causes that lead to changes in ocular hemodynamics in a specific individual could help prevent vision loss as well as aid in the diagnosis and management of diseases beyond the eye. In this talk, we will discuss how mathematical and computational modelling can help in this regard. We will focus on two main factors, namely blood pressure (BP), which drives the blood flow through the vessels, and intraocular pressure (IOP), which compresses the vessels and may impede the flow. Mechanism-driven models translate fundamental principles of physics and physiology into computable equations that allow for identification of cause-and-effect relationships among interplaying factors (e.g. BP, IOP, blood flow). While invaluable for causality, mechanism-driven models are often based on simplifying assumptions to make them tractable for analysis and simulation; however, this often brings into question their relevance beyond theoretical explorations. Data-driven models offer a natural remedy to address these shortcomings. Data-driven methods may be supervised (based on labelled training data) or unsupervised (clustering and other data analytics) and they include models based on statistics, machine learning, deep learning and neural networks. Data-driven models naturally thrive on large datasets, making them scalable to a plethora of applications. While invaluable for scalability, data-driven models are often perceived as black boxes, as their outcomes are difficult to explain in terms of fundamental principles of physics and physiology, and this limits the delivery of actionable insights. The combination of mechanism-driven and data-driven models allows us to harness the advantages of both, as mechanism-driven models excel at interpretability but suffer from a lack of scalability, while data-driven models are excellent at scale but suffer in terms of generalizability and insights for hypothesis generation. This combined, integrative approach represents the pillar of the interdisciplinary approach to data science that will be discussed in this talk, with application to ocular hemodynamics and specific examples in glaucoma research.
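As a toy example of the mechanism-driven side, retinal blood flow is often reduced to an Ohm's-law analogy in which IOP sets the effective downstream pressure once it exceeds venous pressure (a Starling-resistor effect). The numbers and the two-thirds-of-MAP approximation for ophthalmic artery pressure are common clinical rules of thumb, not the speaker's model:

```python
def ocular_flow(map_mmHg, iop_mmHg, p_venous=15.0, R=0.4):
    """Toy Ohm's-law model of retinal blood flow.
    Inflow: ophthalmic artery pressure, approximated as 2/3 of systemic MAP.
    Outflow: veins collapse when IOP exceeds venous pressure (Starling
    resistor), so the effective downstream pressure is max(p_venous, iop)."""
    p_art = (2.0 / 3.0) * map_mmHg
    p_out = max(p_venous, iop_mmHg)
    return max(p_art - p_out, 0.0) / R  # flow in arbitrary units

for iop in (10, 15, 21, 30, 40):
    print(f"IOP={iop:2d} mmHg -> flow={ocular_flow(93, iop):5.1f} a.u.")
```

Even this two-parameter caricature reproduces the qualitative cause-and-effect claim in the abstract: raising IOP at fixed BP progressively throttles flow, which is the kind of relationship mechanism-driven models make explicit.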
Identifying mechanisms of cognitive computations from spikes
Higher cortical areas carry a wide range of sensory, cognitive, and motor signals supporting complex goal-directed behavior. These signals mix in heterogeneous responses of single neurons, making it difficult to untangle underlying mechanisms. I will present two approaches for revealing interpretable circuit mechanisms from heterogeneous neural responses during cognitive tasks. First, I will show a flexible nonparametric framework for simultaneously inferring population dynamics on single trials and tuning functions of individual neurons to the latent population state. When applied to recordings from the premotor cortex during decision-making, our approach revealed that populations of neurons encoded the same dynamic variable predicting choices, and heterogeneous firing rates resulted from the diverse tuning of single neurons to this decision variable. The inferred dynamics indicated an attractor mechanism for decision computation. Second, I will show an approach for inferring an interpretable network model of a cognitive task—the latent circuit—from neural response data. We developed a theory to causally validate latent circuit mechanisms via patterned perturbations of activity and connectivity in the high-dimensional network. This work opens new possibilities for deriving testable mechanistic hypotheses from complex neural response data.
A recurrent network model of planning predicts hippocampal replay and human behavior
When interacting with complex environments, humans can rapidly adapt their behavior to changes in task or context. To facilitate this adaptation, we often spend substantial periods of time contemplating possible futures before acting. For such planning to be rational, the benefits of planning to future behavior must at least compensate for the time spent thinking. Here we capture these features of human behavior by developing a neural network model where not only actions, but also planning, are controlled by prefrontal cortex. This model consists of a meta-reinforcement learning agent augmented with the ability to plan by sampling imagined action sequences drawn from its own policy, which we refer to as 'rollouts'. Our results demonstrate that this agent learns to plan when planning is beneficial, explaining the empirical variability in human thinking times. Additionally, the patterns of policy rollouts employed by the artificial agent closely resemble patterns of rodent hippocampal replays recently recorded in a spatial navigation task, in terms of both their spatial statistics and their relationship to subsequent behavior. Our work provides a new theory of how the brain could implement planning through prefrontal-hippocampal interactions, where hippocampal replays are triggered by -- and in turn adaptively affect -- prefrontal dynamics.
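The abstract's rationality condition, plan only while the expected benefit of another rollout exceeds the time cost of thinking, can be sketched independently of the full meta-reinforcement-learning agent. Below, a toy agent refines its value estimates with imagined samples until a crude value-of-information proxy drops below the cost per rollout; every name and parameter here is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 2-option task: the agent holds noisy value estimates and can either act
# or "think" (draw imagined rollouts from its world model to refine estimates).
true_q = np.array([1.0, 1.2])
q_hat = np.zeros(2)
n_samples = np.ones(2)
THINK_COST = 0.15               # opportunity cost of one rollout (time spent thinking)

def rollout(arm):
    """Imagined outcome drawn from the agent's (here: exact) world model."""
    return true_q[arm] + 0.5 * rng.standard_normal()

for step in range(100):
    # Crude value-of-information proxy: residual uncertainty about each option
    sem = 0.5 / np.sqrt(n_samples)            # standard error per option
    if sem.max() > THINK_COST:                # plan only while it pays off
        arm = int(np.argmax(sem))             # refine the most uncertain estimate
        q_hat[arm] += (rollout(arm) - q_hat[arm]) / (n_samples[arm] + 1)
        n_samples[arm] += 1
    else:
        break

print(f"thought for {int(n_samples.sum() - 2)} rollouts, then acted on option {np.argmax(q_hat)}")
```

Raising THINK_COST makes the agent act sooner with noisier estimates; lowering it yields longer deliberation, the same trade-off the paper uses to explain variability in human thinking times.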
Brain network communication: concepts, models and applications
Understanding communication and information processing in nervous systems is a central goal of neuroscience. Over the past two decades, advances in connectomics and network neuroscience have opened new avenues for investigating polysynaptic communication in complex brain networks. Recent work has brought into question the mainstay assumption that connectome signalling occurs exclusively via shortest paths, resulting in a sprawling constellation of alternative network communication models. This Review surveys the latest developments in models of brain network communication. We begin by drawing a conceptual link between the mathematics of graph theory and biological aspects of neural signalling such as transmission delays and metabolic cost. We organize key network communication models and measures into a taxonomy, aimed at helping researchers navigate the growing number of concepts and methods in the literature. The taxonomy highlights the pros, cons and interpretations of different conceptualizations of connectome signalling. We showcase the utility of network communication models as a flexible, interpretable and tractable framework to study brain function by reviewing prominent applications in basic, cognitive and clinical neurosciences. Finally, we provide recommendations to guide the future development, application and validation of network communication models.
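Two of the communication models surveyed in this literature bracket the routing-versus-broadcasting spectrum and are easy to compute from an adjacency matrix: shortest-path routing, which presumes knowledge of global topology, and communicability, which sums over all walks. A sketch on a random toy connectome (with a fixed seed; an occasional disconnected pair would show up as an infinite path length):

```python
import numpy as np
from scipy.linalg import expm
from scipy.sparse.csgraph import shortest_path

# Toy binary connectome
rng = np.random.default_rng(3)
N = 30
A = (rng.random((N, N)) < 0.15).astype(float)
A = np.maximum(A, A.T); np.fill_diagonal(A, 0)

# Shortest-path length: routing under full knowledge of global topology
# (lengths here are hop counts, since the graph is binary)
spl = shortest_path(A, method="D", unweighted=True)

# Communicability: weighted sum over ALL walks, exp(A)_ij = sum_k (A^k)_ij / k!
# A diffusion-like ("broadcast") alternative requiring no global knowledge.
G = expm(A)

i, j = 0, 5
print(f"shortest path {i}->{j}: {spl[i, j]:.0f} hops; communicability: {G[i, j]:.2f}")
```

The taxonomy discussed in the talk places measures like these along axes of biological plausibility and informational cost, since shortest-path routing is efficient but demands implausible global knowledge, while diffusion-based measures trade efficiency for decentralization.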
Decoding mental conflict between reward and curiosity in decision-making
Humans and animals are not always rational. They not only rationally exploit rewards but also explore an environment owing to their curiosity. However, the mechanism of such curiosity-driven irrational behavior is largely unknown. Here, we developed a decision-making model for a two-choice task based on the free energy principle, which is a theory integrating recognition and action selection. The model describes irrational behaviors depending on the curiosity level. We also proposed a machine learning method to decode temporal curiosity from behavioral data. By applying it to rat behavioral data, we found that the rat had negative curiosity, reflecting conservative selection sticking to more certain options and that the level of curiosity was upregulated by the expected future information obtained from an uncertain environment. Our decoding approach can be a fundamental tool for identifying the neural basis for reward–curiosity conflicts. Furthermore, it could be effective in diagnosing mental disorders.
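The reward-curiosity trade-off at the heart of this model can be caricatured as a choice rule that adds a signed information bonus to expected reward; a negative curiosity weight reproduces the conservative, uncertainty-avoiding choices described for the rats. This bandit sketch is not the free-energy-principle model itself, and every parameter is illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
p_true = np.array([0.7, 0.5])      # Bernoulli reward rates of the two options

def run(curiosity, n_trials=500, temp=5.0):
    """Softmax choice over (expected reward + curiosity * uncertainty bonus).
    curiosity > 0 seeks uncertain options; curiosity < 0 avoids them (the
    'conservative' regime inferred for the rats in the abstract)."""
    a, b = np.ones(2), np.ones(2)              # Beta posterior over each rate
    picks = np.zeros(2)
    for _ in range(n_trials):
        mean = a / (a + b)
        sd = np.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))
        u = mean + curiosity * sd              # utility with information bonus
        p = np.exp(temp * u); p /= p.sum()
        c = rng.choice(2, p=p)
        r = rng.random() < p_true[c]
        a[c] += r; b[c] += 1 - r               # Bayesian update of the chosen arm
        picks[c] += 1
    return picks / n_trials

for curiosity in (-1.0, 0.0, 1.0):
    print(f"curiosity={curiosity:+.1f}  choice fractions: {run(curiosity)}")
```

Decoding, in the spirit of the talk, would run this generative scheme in reverse: fit the curiosity weight (possibly time-varying) that best explains an observed choice sequence.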
From pecking order to ketamine - neural mechanism of social and emotional behavior
Emotions and social interactions color our lives and shape our behaviors. Using animal models and engineered manipulations, we aim to understand how social and emotional behaviors are encoded in the brain, focusing on the neural circuits underlying dominance hierarchy and depression. This lecture will highlight our recent discoveries on how downward social mobility leads to depression; how ketamine tames depression by blocking burst firing in the brain's antireward center; and how glia-neuron interaction plays a surprising role in this process. I will also present our recent work on the mechanism underlying the sustained antidepressant activity of ketamine and its brain region specificity. With these results, we hope to shed light on a more unified theory of ketamine's mode of action and inspire new treatment strategies for depression.
Studies on the role of relevance appraisal in affect elicitation
A fundamental question in the affective sciences is how the human mind decides whether, and at what intensity, to elicit an affective response. Appraisal theories assume that preceding the affective response, there is an evaluation stage in which dimensions of an event are appraised. Common to most appraisal theories is the assumption that the evaluation phase involves the assessment of the stimulus' relevance to the perceiver's well-being. In this talk, I first discuss conceptual and methodological challenges in investigating relevance appraisal. Next, I present two lines of experiments that ask how the human mind uses information about objective and subjective probabilities in deciding on the intensity of the emotional response, and how these decisions are affected by the valence of the event. The potential contribution of the results to appraisal theory is discussed.
Learning to Express Reward Prediction Error-like Dopaminergic Activity Requires Plastic Representations of Time
The dominant theoretical framework to account for reinforcement learning in the brain is temporal difference (TD) reinforcement learning. The TD framework predicts that some neuronal elements should represent the reward prediction error (RPE), which means they signal the difference between the expected future rewards and the actual rewards. The prominence of the TD theory arises from the observation that firing properties of dopaminergic neurons in the ventral tegmental area appear similar to those of RPE model-neurons in TD learning. Previous implementations of TD learning assume a fixed temporal basis for each stimulus that might eventually predict a reward. Here we show that such a fixed temporal basis is implausible and that certain predictions of TD learning are inconsistent with experiments. We propose instead an alternative theoretical framework, coined FLEX (Flexibly Learned Errors in Expected Reward). In FLEX, feature specific representations of time are learned, allowing for neural representations of stimuli to adjust their timing and relation to rewards in an online manner. In FLEX dopamine acts as an instructive signal which helps build temporal models of the environment. FLEX is a general theoretical framework that has many possible biophysical implementations. In order to show that FLEX is a feasible approach, we present a specific biophysically plausible model which implements the principles of FLEX. We show that this implementation can account for various reinforcement learning paradigms, and that its results and predictions are consistent with a preponderance of both existing and reanalyzed experimental data.
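The fixed temporal basis that FLEX argues against can be seen in the textbook TD(0) setup with a "complete serial compound" representation, one basis unit per post-cue time step. In this standard sketch (the conventional model, not FLEX), the reward prediction error migrates from reward time to cue onset over training precisely because the basis is clamped to the cue:

```python
import numpy as np

T = 25                  # time steps within a trial
cue_t, rew_t = 5, 20
gamma, alpha = 0.98, 0.1

# "Complete serial compound": one basis unit per time step after the cue --
# exactly the fixed temporal basis whose plausibility FLEX challenges.
def features(t):
    x = np.zeros(T)
    if t >= cue_t:
        x[t - cue_t] = 1.0
    return x

w = np.zeros(T)
for trial in range(500):
    for t in range(T - 1):
        r = 1.0 if t + 1 == rew_t else 0.0
        delta = r + gamma * (w @ features(t + 1)) - w @ features(t)  # RPE
        w += alpha * delta * features(t)

# After learning, replay one trial and report where large RPEs remain:
# the error at reward time has vanished, and a persistent positive error
# now sits at cue onset (it cannot be learned away: pre-cue features are zero).
for t in range(T - 1):
    r = 1.0 if t + 1 == rew_t else 0.0
    delta = r + gamma * (w @ features(t + 1)) - w @ features(t)
    if abs(delta) > 0.05:
        print(f"t={t:2d}  delta={delta:+.2f}")
```

FLEX's departure is that the temporal basis itself is learned online, so stimulus representations can adjust their timing relative to reward rather than being pinned to cue onset as above.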
Immunosuppression for Parkinson's disease - a new therapeutic strategy?
Caroline Williams-Gray is a Principal Research Associate in the Department of Clinical Neurosciences, University of Cambridge, and an honorary consultant neurologist specializing in Parkinson's disease and movement disorders. She leads a translational research group investigating the clinical and biological heterogeneity of PD, with the ultimate goal of developing more targeted therapies for different Parkinson's subtypes. Her recent work has focused on the theory that the immune system plays a significant role in mediating the heterogeneity of PD and its progression. Her lab is investigating this using blood- and CSF-based immune markers, PET neuroimaging and neuropathology in stratified PD cohorts; and she is leading the first randomized controlled trial repurposing a peripheral immunosuppressive drug (azathioprine) to slow the progression of PD.
Feedback control in the nervous system: from cells and circuits to behaviour
The nervous system is fundamentally a closed loop control device: the output of actions continually influences the internal state and subsequent actions. This is true at the single cell and even the molecular level, where “actions” take the form of signals that are fed back to achieve a variety of functions, including homeostasis, excitability and various kinds of multistability that allow switching and storage of memory. It is also true at the behavioural level, where an animal’s motor actions directly influence sensory input on short timescales, and higher level information about goals and intended actions are continually updated on the basis of current and past actions. Studying the brain in a closed loop setting requires a multidisciplinary approach, leveraging engineering and theory as well as advances in measuring and manipulating the nervous system. I will describe our recent attempts to achieve this fusion of approaches at multiple levels in the nervous system, from synaptic signalling to closed loop brain machine interfaces.
Quasicriticality and the quest for a framework of neuronal dynamics
Critical phenomena abound in nature, from forest fires and earthquakes to avalanches in sand and neuronal activity. Since the 2003 publication by Beggs & Plenz on neuronal avalanches, a growing body of work suggests that the brain homeostatically regulates itself to operate near a critical point where information processing is optimal. At this critical point, incoming activity is neither amplified (supercritical) nor damped (subcritical), but approximately preserved as it passes through neural networks. Departures from the critical point have been associated with conditions of poor neurological health like epilepsy, Alzheimer's disease, and depression. One complication that arises from this picture is that the critical point assumes no external input. But biological neural networks are constantly bombarded by external input. How, then, is the brain able to homeostatically adapt near the critical point? We'll see that the theory of quasicriticality, an organizing principle for brain dynamics, can account for this paradoxical situation. As external stimuli drive the cortex, quasicriticality predicts a departure from criticality while maintaining optimal properties for information transmission. We'll see that simulations and experimental data confirm these predictions, and describe new ones that could be tested soon. More importantly, we will see how this organizing principle could help in the search for biomarkers that could soon be tested in clinical studies.
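The driven-departure-from-criticality effect can be caricatured with a resource-limited branching process: tuned to branching ratio m = 1 in isolation, the effective (measured) branching ratio falls below 1 as external drive h grows. A toy sketch in this spirit, not the speakers' cortical model:

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate(h, m=1.0, N=10_000, steps=5_000):
    """Branching dynamics with a resource constraint: each of the a active
    units activates on average m * (1 - a/N) others (refractoriness or
    synaptic resources), plus h externally driven activations per step."""
    A = np.empty(steps); a = 100
    for t in range(steps):
        a = rng.poisson(m * a * max(1 - a / N, 0)) + rng.poisson(h)
        A[t] = a
    return A

for h in (1.0, 10.0, 100.0):
    A = simulate(h)[1_000:]                      # discard the transient
    sigma = np.polyfit(A[:-1], A[1:], 1)[0]      # empirical branching ratio
    print(f"h={h:6.1f}  mean activity={A.mean():8.1f}  sigma={sigma:.3f}")
```

Stronger drive pushes activity up against the resource limit, so the fitted branching ratio sinks further below 1, the qualitative signature quasicriticality predicts for a driven cortex.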
Spatial matching tasks for insect minds: relational similarity in bumblebees
Understanding what makes humans unique is a fundamental research drive for comparative psychologists. Cognitive abilities such as theory of mind, cooperation or mental time travel have been considered uniquely human. Despite empirical evidence showing that animals other than humans are capable (to some extent) of these cognitive achievements, findings are still heavily contested. In this context, being able to abstract relations of similarity has also been considered one of the hallmarks of human cognition. While previous research has shown that other animals (e.g., primates) can attend to relational similarity, less is known about what invertebrates can do. In this talk, I will present a series of spatial matching tasks that were previously used with children and great apes and that I adapted for use with wild-caught bumblebees. The findings from these studies suggest striking similarities between vertebrates and invertebrates in their abilities to attend to relational similarity.
The strongly recurrent regime of cortical networks
Modern electrophysiological recordings simultaneously capture single-unit spiking activities of hundreds of neurons. These neurons exhibit highly complex coordination patterns. Where does this complexity stem from? One candidate is the ubiquitous heterogeneity in connectivity of local neural circuits. Studying neural network dynamics in the linearized regime and using tools from statistical field theory of disordered systems, we derive relations between structure and dynamics that are readily applicable to subsampled recordings of neural circuits: Measuring the statistics of pairwise covariances allows us to infer statistical properties of the underlying connectivity. Applying our results to spontaneous activity of macaque motor cortex, we find that the underlying network operates in a strongly recurrent regime. In this regime, network connectivity is highly heterogeneous, as quantified by a large radius of bulk connectivity eigenvalues. Being close to the point of linear instability, this dynamical regime predicts a rich correlation structure, a large dynamical repertoire, long-range interaction patterns, relatively low dimensionality and a sensitive control of neuronal coordination. These predictions are verified in analyses of spontaneous activity of macaque motor cortex and mouse visual cortex. Finally, we show that even microscopic features of connectivity, such as connection motifs, systematically scale up to determine the global organization of activity in neural circuits.
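The inference step described here, from the statistics of pairwise covariances back to the heterogeneity of connectivity, rests on a standard result for linear rate networks: the stationary covariance solves a Lyapunov equation, and its dispersion grows sharply as the spectral radius of the connectivity approaches the instability point. A minimal sketch with Gaussian random connectivity (illustrative parameters only):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(6)
N = 200

for g in (0.3, 0.6, 0.9):          # g ~ radius of the bulk connectivity spectrum
    J = g * rng.standard_normal((N, N)) / np.sqrt(N)
    # Linearized rate dynamics dx/dt = (J - I) x + xi; the stationary
    # covariance C solves the Lyapunov equation (J - I) C + C (J - I)^T = -I
    C = solve_continuous_lyapunov(J - np.eye(N), -np.eye(N))
    off = C[~np.eye(N, dtype=bool)]
    # The dispersion of pairwise covariances blows up as g -> 1: measuring it
    # in subsampled recordings estimates how recurrent the circuit is.
    print(f"g={g:.1f}  sd(cov)/mean(var) = {off.std() / np.diag(C).mean():.3f}")
```

Because this covariance statistic is well defined on any subsample of neuron pairs, it can be applied directly to recordings that see only hundreds out of millions of neurons, which is what makes the structure-dynamics relation practically usable.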
Asymmetric signaling across the hierarchy of cytoarchitecture within the human connectome
Cortical variations in cytoarchitecture form a sensory-fugal axis that shapes regional profiles of extrinsic connectivity and is thought to guide signal propagation and integration across the cortical hierarchy. While neuroimaging work has shown that this axis constrains local properties of the human connectome, it remains unclear whether it also shapes the asymmetric signaling that arises from higher-order topology. Here, we used network control theory to examine the amount of energy required to propagate dynamics across the sensory-fugal axis. Our results revealed an asymmetry in this energy, indicating that bottom-up transitions were easier to complete compared to top-down. Supporting analyses demonstrated that asymmetries were underpinned by a connectome topology that is wired to support efficient bottom-up signaling. Lastly, we found that asymmetries correlated with differences in communicability and intrinsic neuronal time scales and lessened throughout youth. Our results show that cortical variation in cytoarchitecture may guide the formation of macroscopic connectome topology.
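Network control theory's central quantity here, the minimum energy needed to steer a linear system between two activity states, has a closed form via the finite-horizon controllability Gramian. The sketch below uses a random symmetric toy matrix and random target states; in the actual study, the matrix would come from diffusion tractography and the states from the sensory-fugal axis:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(7)
N = 20
A = rng.random((N, N)); A = (A + A.T) / 2; np.fill_diagonal(A, 0)
A = A / (1.01 * np.abs(np.linalg.eigvals(A)).max()) - np.eye(N)  # stabilized

B = np.eye(N)              # control input available at every region
T, K = 1.0, 200
ts = np.linspace(0, T, K)

# Finite-horizon controllability Gramian W = int_0^T e^{At} B B^T e^{A^T t} dt
W = sum(expm(A * t) @ B @ B.T @ expm(A.T * t) for t in ts) * (T / K)

def min_energy(x0, xT):
    """Minimum control energy to steer x0 -> xT in time T (standard LTI result:
    E = v^T W^{-1} v with v = xT - e^{AT} x0)."""
    v = xT - expm(A * T) @ x0
    return float(v @ np.linalg.solve(W, v))

lo, hi = rng.random(N), rng.random(N)   # stand-ins for sensory / association states
print("bottom-up :", min_energy(lo, hi))
print("top-down  :", min_energy(hi, lo))
```

Note the asymmetry: even with symmetric connectivity, steering from state x0 to xT generally costs a different amount than the reverse transition, which is the quantity the study compares across the cytoarchitectural hierarchy.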
Are place cells just memory cells? Probably yes
Neurons in the rodent hippocampus appear to encode the position of the animal in physical space during movement. Individual "place cells" fire in restricted sub-regions of an environment, a feature often taken as evidence that the hippocampus encodes a map of space that subserves navigation. But these same neurons exhibit complex responses to many other variables that defy explanation by position alone, and the hippocampus is known to be more broadly critical for memory formation. Here we elaborate and test a theory of hippocampal coding which produces place cells as a general consequence of efficient memory coding. We constructed neural networks that actively exploit the correlations between memories in order to learn compressed representations of experience. Place cells readily emerged in the trained model, due to the correlations in sensory input between experiences at nearby locations. Notably, these properties were highly sensitive to the compressibility of the sensory environment, with place field size and population coding level in dynamic opposition to optimally encode the correlations between experiences. The effects of learning were also strongly biphasic: nearby locations are represented more similarly following training, while locations with intermediate similarity become increasingly decorrelated, both distance-dependent effects that scaled with the compressibility of the input features. Using virtual reality and 2-photon functional calcium imaging in head-fixed mice, we recorded the simultaneous activity of thousands of hippocampal neurons during virtual exploration to test these predictions. Varying the compressibility of sensory information in the environment produced systematic changes in place cell properties that reflected the changing input statistics, consistent with the theory. We similarly identified representational plasticity during learning, which produced a distance-dependent exchange between compression and pattern separation. These results motivate a more domain-general interpretation of hippocampal computation, one that is naturally compatible with earlier theories on the circuit's importance for episodic memory formation. Work done in collaboration with James Priestley, Lorenzo Posani, Marcus Benna, Attila Losonczy.
Autopoiesis and Enaction in the Game of Life
Enaction plays a central role in the broader fabric of so-called 4E (embodied, embedded, extended, enactive) cognition. Although the origin of the enactive approach is widely dated to the 1991 publication of the book "The Embodied Mind" by Varela, Thompson and Rosch, many of the central ideas trace to much earlier work. Over 40 years ago, the Chilean biologists Humberto Maturana and Francisco Varela put forward the notion of autopoiesis as a way to understand living systems and the phenomena that they generate, including cognition. Varela and others subsequently extended this framework to an enactive approach that places biological autonomy at the foundation of situated and embodied behavior and cognition. I will describe an attempt to place Maturana and Varela's original ideas on a firmer foundation by studying them within the context of a toy model universe, John Conway's Game of Life (GoL) cellular automata. This work has both pedagogical and theoretical goals. Simple concrete models provide an excellent vehicle for introducing some of the core concepts of autopoiesis and enaction and explaining how these concepts fit together into a broader whole. In addition, a careful analysis of such toy models can hone our intuitions about these concepts, probe their strengths and weaknesses, and move the entire enterprise in the direction of a more mathematically rigorous theory. In particular, I will identify the primitive processes that can occur in GoL, show how these can be linked together into mutually-supporting networks that underlie persistent bounded entities, map the responses of such entities to environmental perturbations, and investigate the paths of mutual perturbation that these entities and their environments can undergo.
Integrative Neuromodulation: from biomarker identification to optimizing neuromodulation
Why do we make impulsive decisions, blinded in an emotionally rash moment? Or stay caught in the same repetitive suboptimal loop, avoiding fears or rushing headlong towards illusory rewards? These cognitive constructs underlying self-control and compulsive behaviours, and their modulation by emotion or incentives, are relevant dimensionally across healthy individuals and hijacked across disorders of addiction, compulsivity and mood. My lab focuses on identifying theory-driven modifiable biomarkers of these cognitive constructs, with the ultimate goal of optimizing and developing novel means of neuromodulation. Here I will provide a few examples of my group's recent work to illustrate this approach. I describe a series of recent studies on intracranial physiology and acute stimulation focusing on risk taking and emotional processing. This talk highlights the subthalamic nucleus, a common target for deep brain stimulation for Parkinson's disease and obsessive-compulsive disorder. I further describe recent translational work in non-invasive neuromodulation. Together these examples illustrate the approach of the lab, highlighting modifiable biomarkers and optimizing neuromodulation.
A Better Method to Quantify Perceptual Thresholds: Parameter-free, Model-free, Adaptive procedures
The 'quantification' of perception is arguably both one of the most important and most difficult aspects of the study of perception. This is particularly true in visual perception, in which the evaluation of the perceptual threshold is a pillar of the experimental process. The choice of the correct adaptive psychometric procedure, as well as the selection of the proper parameters, is a difficult but key aspect of the experimental protocol. For instance, Bayesian methods such as QUEST require the a priori choice of a family of functions (e.g. Gaussian), which is rarely known before the experiment, as well as the specification of multiple parameters. Importantly, the choice of an ill-fitted function or parameters will induce costly mistakes and errors in the experimental process. In this talk we discuss the existing methods and introduce a new adaptive procedure to solve this problem, named ZOOM (Zooming Optimistic Optimization of Models), based on recent advances in optimization and statistical learning. Compared to existing approaches, ZOOM is completely parameter-free and model-free, i.e. it can be applied to any arbitrary psychometric problem. Moreover, ZOOM's parameters are self-tuned, and thus do not need to be chosen manually using heuristics (e.g., the step size in the staircase method), preventing further errors. Finally, ZOOM is based on state-of-the-art optimization theory, providing strong mathematical guarantees that are missing from many of its alternatives, while being the most accurate and robust in real-life conditions. In our experiments and simulations, ZOOM was found to be significantly better than its alternatives, in particular for difficult psychometric functions or when the parameters were not properly chosen. ZOOM is open source, and its implementation is freely available on the web. Given these advantages and its ease of use, we argue that ZOOM can improve the process of many psychophysics experiments.
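ZOOM itself is not reproduced here, but the kind of hand-tuned procedure it aims to replace is easy to show: a classic 1-up/2-down staircase, whose step size must be chosen by the experimenter and which converges to the 70.7%-correct level rather than an arbitrary target. A sketch with an invented logistic observer:

```python
import numpy as np

rng = np.random.default_rng(8)

def psychometric(x, threshold=0.5, slope=10.0):
    """Unknown-to-the-experimenter probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-slope * (x - threshold)))

# Classic 1-up/2-down staircase: two consecutive correct responses make the
# task harder, one error makes it easier. The hand-chosen step size below is
# exactly the kind of heuristic parameter the talk argues ZOOM eliminates.
x, step, correct_streak = 0.9, 0.1, 0
levels = []
for trial in range(200):
    levels.append(x)
    if rng.random() < psychometric(x):          # observer answers correctly
        correct_streak += 1
        if correct_streak == 2:
            x, correct_streak = x - step, 0     # harder
    else:
        x, correct_streak = x + step, 0         # easier
    x = float(np.clip(x, 0.0, 1.0))

# 1-up/2-down converges near the 70.7%-correct level (about 0.588 for this
# observer), not the 50% threshold of 0.5 -- another implicit design choice.
print(f"staircase estimate: {np.mean(levels[-50:]):.3f}")
```

A poorly chosen step size (too large or too small) makes this estimate oscillate or converge slowly, which is the failure mode the abstract describes for manually parameterized procedures.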
Orientation selectivity in rodent V1: theory vs experiments
Neurons in the primary visual cortex (V1) of rodents are selective to the orientation of the stimulus, as in other mammals such as cats and monkeys. However, in contrast with those species, their neurons display a very different type of spatial organization. Instead of orientation maps, they are organized in a "salt and pepper" pattern, where adjacent neurons have completely different preferred orientations. This structure has motivated both experimental and theoretical research with the objective of determining which aspects of the connectivity patterns and intrinsic neuronal responses can explain the observed behavior. These analyses must also take into account that the neurons of the thalamus that send their outputs to the cortex have more complex responses in rodents than in higher mammals, displaying, for instance, a significant degree of orientation selectivity. In this talk we present work showing that a random feed-forward connectivity pattern, in which the probability of a connection between a cortical neuron and a thalamic neuron depends only on the relative distance between them, is enough to explain several aspects of the complex phenomenology found in these systems. Moreover, this approach allows us to evaluate analytically the statistical structure of the thalamic input to the cortex. We find that V1 neurons are orientation selective but the preferred orientation depends on the spatial frequency of the stimulus. We disentangle the effect of the non-circular thalamic receptive fields, finding that they control the selectivity of the time-averaged thalamic input, but not the selectivity of the time-locked component. We also compare with experiments that use reverse correlation techniques, showing that ON and OFF components of the aggregate thalamic input are spatially segregated in the cortex.
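The key intuition, that purely distance-dependent random pooling of unoriented thalamic receptive fields already yields orientation-biased cortical cells, can be checked numerically. The sketch below sums difference-of-Gaussians fields at randomly drawn centers and probes the aggregate field with gratings; all sizes and spatial frequencies are arbitrary choices, not the analytical model of the talk:

```python
import numpy as np

rng = np.random.default_rng(9)
L, n_thal = 40, 60                      # grid size, number of pooled thalamic cells
yy, xx = np.mgrid[0:L, 0:L] - L / 2

def dog(cx, cy, sc=1.5, ss=3.0):
    """Center-surround (difference-of-Gaussians) thalamic receptive field."""
    r2 = (xx - cx) ** 2 + (yy - cy) ** 2
    return np.exp(-r2 / (2 * sc**2)) / sc**2 - np.exp(-r2 / (2 * ss**2)) / ss**2

# Random feedforward pooling: thalamic RF centers drawn from a Gaussian around
# the cortical cell's retinotopic position -- connection probability depends
# only on distance, with no orientation bias built in.
centers = 4.0 * rng.standard_normal((n_thal, 2))
rf = sum(dog(cx, cy) for cx, cy in centers)

# Orientation tuning of the aggregate RF, probed with gratings: response
# amplitude = best projection over phases, for each orientation (crude |F1|).
thetas = np.linspace(0, np.pi, 12, endpoint=False)
amps = []
for th in thetas:
    proj = xx * np.cos(th) + yy * np.sin(th)
    phases = [np.sum(rf * np.sin(0.4 * proj + ph)) for ph in np.linspace(0, 2 * np.pi, 8)]
    amps.append(max(np.abs(phases)))
osi = (max(amps) - min(amps)) / (max(amps) + min(amps))
print(f"orientation selectivity index of a random-pooling cell: {osi:.2f}")
```

Because the pooled field is a random anisotropic blob, a nonzero selectivity index emerges generically, and rerunning with a different seed changes the preferred orientation, a salt-and-pepper layout in miniature.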
Applying Structural Alignment theory to Early Verb Learning
Learning verbs is difficult and critical to learning one's native language. Children appear to benefit from seeing multiple events and comparing them to each other, and structural alignment (SA) theory provides a good theoretical framework to guide research into how preschool children may be comparing events as they learn new verbs. The talk will include 6 studies of early verb learning that make use of eye-tracking procedures as well as other behavioral (pointing) procedures, and that test key predictions from SA theory, including the prediction that seeing similar examples before more varied examples helps observers learn how to compare (progressive alignment), and the prediction that when events have very low alignability with other events, that is one cue that the events should be ignored. Whether or how statistical learning may also be at work will be considered.
Mechanisms of relational structure mapping across analogy tasks
Following the seminal structure mapping theory by Dedre Gentner, the process of mapping the corresponding structures of relations defining two analogs has been understood as a key component of analogy making. However, not without merit, in recent years some semantic, pragmatic, and perceptual aspects of analogy mapping have attracted the primary attention of analogy researchers. For almost a decade, our team has been re-focusing on relational structure mapping, investigating its potential mechanisms across various analogy tasks, both abstract (semantically lean) and more concrete (semantically rich), using diverse methods (behavioral, correlational, eye-tracking, EEG). I will present an overview of our main findings. They suggest that structure mapping (1) consists of an incremental construction of the ultimate mental representation, (2) strongly depends on working memory resources and reasoning ability, (3) even if as little as a single trivial relation needs to be represented mentally. Effective mapping (4) is related to the slowest brain rhythm, the delta band (around 2-3 Hz), suggesting its highly integrative nature. Finally, we have developed a new task, Graph Mapping, which involves pure mapping of two explicit relational structures. This task allows for precise investigation and manipulation of the mapping process in experiments, and is one of the best proxies of individual differences in reasoning ability. Structure mapping is as crucial to analogy as Gentner advocated, and perhaps it is crucial to cognition in general.
Extracting computational mechanisms from neural data using low-rank RNNs
An influential theory in systems neuroscience suggests that brain function can be understood through low-dimensional dynamics [Vyas et al 2020]. However, a challenge in this framework is that a single computational task may involve a range of dynamic processes. To understand which processes are at play in the brain, it is important to use data on neural activity to constrain models. In this study, we present a method for extracting low-dimensional dynamics from data using low-rank recurrent neural networks (lrRNNs), a highly expressive and understandable type of model [Mastrogiuseppe & Ostojic 2018, Dubreuil, Valente et al. 2022]. We first test our approach using synthetic data created from full-rank RNNs that have been trained on various brain tasks. We find that lrRNNs fitted to neural activity allow us to identify the collective computational processes and make new predictions for inactivations in the original RNNs. We then apply our method to data recorded from the prefrontal cortex of primates during a context-dependent decision-making task. Our approach enables us to assign computational roles to the different latent variables and provides a mechanistic model of the recorded dynamics, which can be used to perform in silico experiments like inactivations and provide testable predictions.
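A low-rank RNN in the sense of Mastrogiuseppe & Ostojic constrains the connectivity to J = m n^T / N, so that after a transient the recurrent dynamics are confined to the R-dimensional subspace spanned by the columns of m and can be read out as R latent variables. A bare-bones sketch of the model class (not the fitting pipeline of the talk; sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(10)
N, R = 500, 2                       # neurons, rank

# Low-rank connectivity J = m @ n.T / N: recurrent input always lies in the
# subspace spanned by the columns of m, giving R latent variables kappa.
m = rng.standard_normal((N, R))
n = rng.standard_normal((N, R))
J = m @ n.T / N

def simulate(x0, T=200, dt=0.1):
    x = x0.copy()
    kappas = []
    for _ in range(T):
        x += dt * (-x + J @ np.tanh(x))
        kappas.append(n.T @ np.tanh(x) / N)   # latent coordinates
    return np.array(kappas)

traj = simulate(rng.standard_normal(N))
print("latent trajectory shape:", traj.shape)   # (T, R): low-dimensional dynamics
```

Fitting such a model to recordings, as in the talk, amounts to inferring m and n (and input vectors) so that the latent variables reproduce the population activity, after which each latent dimension can be assigned a computational role and perturbed in silico.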
Geometry of concept learning
Understanding the human ability to learn novel concepts from just a few sensory experiences is a fundamental problem in cognitive neuroscience. I will describe recent work with Ben Sorscher and Surya Ganguli (PNAS, October 2022) in which we propose a simple, biologically plausible, and mathematically tractable neural mechanism for few-shot learning of naturalistic concepts. We posit that the concepts that can be learned from few examples are defined by tightly circumscribed manifolds in the neural firing-rate space of higher-order sensory areas. Discrimination between novel concepts is performed by downstream neurons implementing a ‘prototype’ decision rule, in which a test example is classified according to the nearest prototype constructed from the few training examples. We show that prototype few-shot learning achieves high accuracy on natural visual concepts using both macaque inferotemporal cortex representations and deep neural network (DNN) models of these representations. We develop a mathematical theory that links few-shot learning performance to the geometric properties of the neural concept manifolds and demonstrate its agreement with our numerical simulations across different DNNs as well as different layers. Intriguingly, we observe striking mismatches between the geometry of manifolds in intermediate stages of the primate visual pathway and in trained DNNs. Finally, we show that linguistic descriptors of visual concepts can be used to discriminate images belonging to novel concepts without any prior visual experience of those concepts (a task known as ‘zero-shot’ learning), indicating a remarkable alignment of the manifold representations of concepts in the visual and language modalities. I will discuss ongoing efforts to extend this work to other high-level cognitive tasks.
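The prototype decision rule itself fits in a few lines of code. Below is a minimal, self-contained sketch (illustrative only: random Gaussian clusters stand in for IT or DNN feature representations, and all dimensions and offsets are placeholder values):

```python
import numpy as np

# Minimal sketch of the prototype decision rule: build one prototype per
# concept from a few training examples, then classify a test example by
# its nearest prototype.
rng = np.random.default_rng(1)
D, k = 128, 5                                 # feature dimension, shots per concept

train_A = rng.standard_normal((k, D)) + 1.0   # few examples of novel concept A
train_B = rng.standard_normal((k, D)) - 1.0   # few examples of novel concept B

proto_A, proto_B = train_A.mean(axis=0), train_B.mean(axis=0)   # prototypes

def classify(x):
    """Assign a test example to the concept with the nearest prototype."""
    dA = np.linalg.norm(x - proto_A)
    dB = np.linalg.norm(x - proto_B)
    return "A" if dA < dB else "B"

test = rng.standard_normal(D) + 1.0           # a new example drawn from concept A
print(classify(test))                         # expected output: A
```

The theory described in the abstract then predicts the accuracy of exactly this rule from geometric properties of the concept manifolds (their radius, dimension, and separation).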
Motor contribution to auditory temporal predictions
Temporal predictions are fundamental instruments for facilitating sensory selection, allowing humans to exploit regularities in the world. Recent evidence indicates that the motor system instantiates predictive timing mechanisms, helping to synchronize temporal fluctuations of attention with the timing of events in a task-relevant stream and thus facilitating sensory selection. Accordingly, in the auditory domain, auditory-motor interactions are observed during the perception of speech and music, two temporally structured sensory streams. I will present a behavioral and neurophysiological account of this theory and will detail the parameters governing the emergence of this auditory-motor coupling, drawing on a set of behavioral and magnetoencephalography (MEG) experiments.
Dynamical System Theory and Mean Field Approximation
Talk & Tutorial
Modelling metaphor comprehension as a form of analogizing
What do people do when they comprehend language in discourse? According to many psychologists, they build and maintain cognitive representations of utterances in four complementary, interacting mental models of the discourse: the surface text, the text base, the situation model, and the context model. When people encounter metaphors in these utterances, they need to incorporate them into each of these mental representations. Since influential metaphor theories define metaphor as a form of (figurative) analogy, involving cross-domain mapping to a greater or lesser extent, the general expectation has been that metaphor comprehension is also based on analogizing. This expectation, however, has been only partly borne out by the data. There is no one-to-one relationship between metaphor as (conceptual) structure (analogy) and metaphor as (psychological) process (analogizing). According to Deliberate Metaphor Theory (DMT), only some metaphors are handled by analogy; most are presumably handled by lexical disambiguation instead. This hypothesis brings together most metaphor research in a provocatively new way: it means that most metaphors are not processed metaphorically, which produces a paradox of metaphor. In this talk I will sketch out how this paradox arises and how it can be resolved by a new version of DMT, which I have described in my forthcoming book Slowing metaphor down: Updating Deliberate Metaphor Theory (currently under review). In this theory, both the distinction and the relation between analogy in metaphorical structure and analogy in metaphorical process are of central importance.
Neural networks in the replica-mean field limits
In this talk, we propose to decipher the activity of neural networks via a “multiply and conquer” approach. This approach considers limit networks made of infinitely many replicas with the same basic neural structure. The key point is that these so-called replica-mean-field networks are simplified, tractable versions of neural networks that retain important features of the finite network structure of interest. The finite size of neuronal populations and synaptic interactions is a core determinant of neural dynamics, being responsible for non-zero correlations in the spiking activity and for finite transition rates between metastable neural states. Theoretically, we develop our replica framework by expanding on ideas from the theory of communication networks, rather than statistical physics, to establish Poissonian mean-field limits for spiking networks. Computationally, we leverage our replica approach to characterize the stationary spiking activity of various network models via reduction to tractable functional equations. We conclude by discussing how our replica framework might be used to probe nontrivial regimes of spiking correlations and of transition rates between metastable neural states.
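As a toy illustration of the replica idea (a discrete-time caricature under strong simplifying assumptions, not the authors' continuous-time Poissonian framework), the sketch below replicates a two-neuron excitatory loop M times and routes each interaction through a uniformly chosen replica. The lag-one spike correlation within a replica is diluted as M grows, which is the sense in which replica limits tame correlations:

```python
import numpy as np

# Toy replica construction (discrete time): M replicas of a two-neuron
# excitatory loop; each neuron reads its partner's previous spike from a
# uniformly chosen replica, so shared history is diluted as M grows.
rng = np.random.default_rng(4)

def lag_one_corr(M, steps=20000, w=0.8, base=0.05):
    s = np.zeros((M, 2))                      # previous spikes, per replica
    counts = np.zeros((steps, 2))             # spike trains of replica 0
    for t in range(steps):
        src = rng.integers(0, M, size=(M, 2))               # replica routing
        drive = np.stack((s[src[:, 0], 1], s[src[:, 1], 0]), axis=1)
        p = np.clip(base + w * drive, 0.0, 1.0)             # spike probabilities
        s = (rng.random((M, 2)) < p).astype(float)
        counts[t] = s[0]
    # correlation between neuron 1's spike and neuron 0's next spike
    return np.corrcoef(counts[1:, 0], counts[:-1, 1])[0, 1]

for M in (1, 2, 10, 50):
    print(M, round(lag_one_corr(M), 3))       # correlation shrinks toward 0 with M
```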
Mapping learning and decision-making algorithms onto brain circuitry
In the first half of my talk, I will discuss our recent work on the midbrain dopamine system. The hypothesis that midbrain dopamine neurons broadcast an error signal for the prediction of reward is among the great successes of computational neuroscience. However, our recent results contradict a core aspect of this theory: that the neurons uniformly convey a scalar, global signal. I will review this work, as well as our new efforts to update models of the neural basis of reinforcement learning with our data. In the second half of my talk, I will discuss our recent findings of state-dependent decision-making mechanisms in the striatum.
On the link between conscious function and general intelligence in humans and machines
In popular media, there is often a connection drawn between the advent of awareness in artificial agents and those same agents simultaneously achieving human- or superhuman-level intelligence. In this talk, I will examine the validity and potential application of this seemingly intuitive link between consciousness and intelligence. I will do so by examining the cognitive abilities associated with three contemporary theories of conscious function: Global Workspace Theory (GWT), Information Generation Theory (IGT), and Attention Schema Theory (AST), and demonstrating that all three theories specifically relate conscious function to some aspect of domain-general intelligence in humans. With this insight, we will turn to the field of Artificial Intelligence (AI) and find that, while still far from demonstrating general intelligence, many state-of-the-art deep learning methods have begun to incorporate key aspects of each of the three functional theories. Given this apparent trend, I will use the motivating example of mental time travel in humans to propose ways in which insights from each of the three theories may be combined into a unified model. I believe that doing so can enable the development of artificial agents that are not only more generally intelligent but also consistent with multiple current theories of conscious function.
Universal function approximation in balanced spiking networks through convex-concave boundary composition
The spike-threshold nonlinearity is a fundamental yet enigmatic component of biological computation; despite its role in many theories, it has evaded definitive characterisation. Indeed, much classic work has sidestepped spiking by smoothing over the spike threshold or by approximating spiking dynamics with firing-rate dynamics. Here, we take a novel perspective that captures the full potential of spike-based computation. Building on previous studies of the geometry of efficient spike-coding networks, we consider a population of neurons with low-rank connectivity, allowing us to cast each neuron's threshold as a boundary in a space of population modes, or latent variables. Each neuron divides this latent space into subthreshold and suprathreshold areas. We then demonstrate how a network of inhibitory (I) neurons forms a convex, attracting boundary in the latent coding space, and a network of excitatory (E) neurons forms a concave, repellent boundary. Finally, we show how the combination of the two yields stable dynamics at the crossing of the E and I boundaries and can be mapped onto a constrained optimization problem. The resulting EI networks are balanced, inhibition-stabilized, and exhibit asynchronous irregular activity, thereby closely resembling cortical networks of the brain. Moreover, we demonstrate how such networks can be tuned to either suppress or amplify noise, and how the composition of convex inhibitory and concave excitatory boundaries can yield universal function approximation. Our work puts forth a new theory of biologically plausible computation in balanced spiking networks and could serve as a novel framework for scalable and interpretable computation with spikes.
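The boundary picture can be illustrated with a textbook greedy spike-coding network (a sketch under simplifying assumptions, not the authors' full E/I construction; all parameter values are placeholders). Each neuron's threshold defines a plane in a two-dimensional latent space, the planes tile a convex boundary around the target, and spikes keep the leaky readout inside it, mimicking the inhibitory (convex, attracting) case described above:

```python
import numpy as np

# Greedy spike-coding sketch: N neurons, each with decoding vector D_i and
# threshold ||D_i||^2 / 2, tile a convex boundary around the 2-D target x.
# The leaky readout y drifts until the coding error crosses some neuron's
# threshold plane; that neuron spikes and jumps y back inside the boundary.
rng = np.random.default_rng(2)
N, T, dt, lam = 20, 5000, 1e-3, 10.0

ang = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
D = 0.1 * np.stack([np.cos(ang), np.sin(ang)])     # 2 x N decoding vectors
thresh = 0.5 * np.sum(D**2, axis=0)                # standard spike thresholds

x = np.array([1.0, 0.5])                           # constant latent target
y = np.zeros(2)                                    # decoded estimate (readout)
spikes = np.zeros(N)
for _ in range(T):
    y -= dt * lam * y                              # readout leak
    V = D.T @ (x - y) - thresh                     # margin past each boundary plane
    i = int(np.argmax(V))
    if V[i] > 0:                                   # error crossed neuron i's plane
        y += D[:, i]                               # its spike resets y inside
        spikes[i] += 1

print(np.round(y, 2), "total spikes:", int(spikes.sum()))
```

Despite the all-or-none spikes, the readout tracks the target to within roughly the size of a single spike's effect.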
No Free Lunch from Deep Learning in Neuroscience: A Case Study through Models of the Entorhinal-Hippocampal Circuit
Research in Neuroscience, as in many scientific disciplines, is undergoing a renaissance based on deep learning. Uniquely in Neuroscience, deep learning models can be used not only as tools but also interpreted as models of the brain. The central claims of recent deep learning-based models of brain circuits are that they shed light on fundamental functions being optimized or make novel predictions about neural phenomena. We show, through the case study of grid cells in the entorhinal-hippocampal circuit, that one may get neither. We rigorously examine the claims of deep learning models of grid cells using large-scale hyperparameter sweeps and theory-driven experimentation, and we demonstrate that the results of such models are more strongly driven by particular, non-fundamental, post-hoc implementation choices than by fundamental truths about neural circuits or the loss function(s) they might optimize. We discuss why these models cannot be expected to produce accurate models of the brain without the addition of substantial amounts of inductive bias, an informal No Free Lunch result for Neuroscience.
Social Curiosity
In this lecture, I would like to share with a broad audience the empirical results gathered and the theoretical advances made within the framework of the Lendület project ’The cognitive basis of human sociality’. The main objective of this project was to understand, from the angle of cognitive science, the mechanisms that enable the unique sociality of humans. In my talk, I will focus on recent empirical evidence on three fundamental social cognitive functions (social categorization, theory of mind, and social learning, mainly through the empirical lens of developmental psychology) in order to outline a theory that emphasizes the need to consider their interconnectedness. The proposal is that the ability to represent the social world along categories and the capacity to read others’ minds are used in an integrated way to efficiently assess the epistemic states of fellow humans by creating a shared representational space. The emergence of this shared representational space is both the result of and a prerequisite to efficient learning about the physical and social environment.
Hidden nature of seizures
How seizures emerge from the abnormal dynamics of neural networks within epileptogenic tissue remains an enigma. Are seizures random events, or do detectable changes in brain dynamics precede them? Are the mechanisms of seizure emergence identical at the onset and later stages of epilepsy? Is the risk of seizure occurrence stable, or does it change over time? A myriad of questions remains to be answered before we understand the core principles governing seizure genesis. The last decade has brought unprecedented insights into the complex nature of seizure emergence. It is now believed that seizure onset represents the product of interactions between the process of transition to seizure, long-term fluctuations in seizure susceptibility, epileptogenesis, and disease progression. During the lecture, we will review the latest observations on mechanisms of ictogenesis operating at multiple temporal scales. We will show how these observations contribute to the formation of a comprehensive theory of seizure genesis and challenge traditional perspectives on ictogenesis. Finally, we will discuss how combining conventional approaches with computational modeling, modern techniques of in vivo imaging, and genetic manipulation opens prospects for the exploration of as-yet hidden mechanisms of seizure genesis.
Is Theory of Mind Analogical? Evidence from the Analogical Theory of Mind cognitive model
Theory of mind, which consists of reasoning about the knowledge, beliefs, desires, and similar mental states of others, is a key component of social reasoning and social interaction. While it has been studied by cognitive scientists for decades, none of the prevailing theories of the processes underlying theory of mind reasoning and development explains the breadth of experimental findings. I propose that this is because theory of mind is, like much of human reasoning, inherently analogical. In this talk, I will discuss several theory of mind findings from the psychology literature and the challenges they pose for our understanding of theory of mind, and I will bring in evidence from the Analogical Theory of Mind (AToM) cognitive model demonstrating how these findings fit into an analogical understanding of theory of mind reasoning.
Multi-level theory of neural representations in the era of large-scale neural recordings: Task-efficiency, representation geometry, and single neuron properties
A central goal in neuroscience is to understand how orchestrated computations in the brain arise from the properties of single neurons and networks of such neurons. Answering this question requires theoretical advances that shine light into the ‘black box’ of representations in neural circuits. In this talk, we will demonstrate theoretical approaches that help describe how cognitive and behavioral task implementations emerge from the structure in neural populations and from biologically plausible neural networks. First, we will introduce an analytic theory that connects the geometric structures arising from neural responses (i.e., neural manifolds) to the efficiency of a neural population in implementing a task. In particular, this theory describes a perceptron's capacity for linearly classifying object categories based on the structural properties of the underlying neural manifolds. Next, we will describe how such methods can open the ‘black box’ of distributed neuronal circuits in a range of experimental neural datasets. In particular, our method overcomes the limitations of traditional dimensionality reduction techniques, as it operates directly on the high-dimensional representations rather than relying on low-dimensionality assumptions for visualization. Furthermore, this method allows for simultaneous multi-level analysis, measuring geometric properties in neural population data while estimating the amount of task information embedded in the same population. These geometric frameworks are general and can be used across different brain areas and task modalities, as demonstrated in our work and that of others, ranging from visual cortex to parietal cortex to hippocampus, and from calcium imaging to electrophysiology to fMRI datasets. Finally, we will discuss our recent efforts to extend this multi-level description of neural populations by (1) investigating how single-neuron properties shape representation geometry in early sensory areas, and (2) understanding how task-efficient neural manifolds emerge in biologically constrained neural networks. By extending our mathematical toolkit for analyzing the representations underlying complex neuronal networks, we hope to contribute to the long-term challenge of understanding the neuronal basis of tasks and behaviors.
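For intuition about the capacity theory, the classical point baseline is easy to check numerically: Cover's result that random dichotomies of P points in N dimensions stop being linearly separable around a load of P/N = 2, which manifold capacity theory generalizes from points to manifolds. Below is a minimal sketch (illustrative only; parameter values are arbitrary) that poses separability as a linear-programming feasibility problem:

```python
import numpy as np
from scipy.optimize import linprog

# Cover's point-capacity baseline: the fraction of random dichotomies of
# P points in N dimensions that are linearly separable drops sharply
# around the load alpha = P/N = 2. Separability of (X, y) is posed as an
# LP feasibility problem: find w with y_i * (x_i . w) >= 1 for all i.
rng = np.random.default_rng(3)
N = 30                                        # ambient dimension

def frac_separable(P, trials=100):
    ok = 0
    for _ in range(trials):
        X = rng.standard_normal((P, N))
        y = rng.choice([-1.0, 1.0], size=P)
        res = linprog(c=np.zeros(N),
                      A_ub=-(y[:, None] * X),  # encodes y_i * (x_i . w) >= 1
                      b_ub=-np.ones(P),
                      bounds=[(None, None)] * N,
                      method="highs")
        ok += res.success                     # feasible <=> separable
    return ok / trials

for alpha in (1.5, 2.0, 2.5):
    print(alpha, frac_separable(int(alpha * N)))
```

The manifold theory replaces each point with an entire manifold of responses (all views of one object, say) and predicts how the critical load depends on manifold radius, dimension, and correlations.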
Theories of consciousness: beyond the first/higher-order distinction
Theories of consciousness are commonly grouped into "first-order" and "higher-order" families. As conventional wisdom has it, many more animals are likely to be conscious if a first-order theory is correct. But two recent developments have put pressure on the first/higher-order distinction. One is the argument (from Shea and Frith) that an effective global workspace mechanism must involve a form of metacognition. The second is Lau's "perceptual reality monitoring" (PRM) theory, a member of the "higher-order" family in which conscious sensory content is not re-represented, only tagged with a temporal index and marked as reliable. I argue that the first/higher-order distinction has become so blurred that it is no longer particularly useful. Moreover, the conventional wisdom about animals should not be trusted. It could be, for example, that the distribution of PRM in the animal kingdom is wider than the distribution of global broadcasting.
Threshold-Linear Networks as a Ground-Zero Theory for Spiking Models
Bernstein Conference 2024
Using Dynamical Systems Theory to Improve Temporal Credit Assignment in Spiking Neural Networks
Bernstein Conference 2024
What should a neuron aim for? Designing local objective functions based on information theory
Bernstein Conference 2024
An Analytical Theory of Curriculum Learning
COSYNE 2022
A distributional Bayesian learning theory for visual perceptual learning
COSYNE 2022
Identifying key structural connections from functional response data: theory & applications
COSYNE 2022
Latent Equilibrium: A unified learning theory for arbitrarily fast computation with arbitrarily slow neurons
COSYNE 2022
A Theory of Coupled Neuronal-Synaptic Dynamics
COSYNE 2022
AI-driven cholinergic theory enables rapid and robust cortex-wide learning
COSYNE 2023
A complementary systems theory of meta-learning
COSYNE 2023
A normative theory of aggression
COSYNE 2023
A time-resolved theory of information encoding in recurrent neural networks
COSYNE 2023
An Analytical Theory of Cognitive Control of Learning
COSYNE 2025
Approximate continuous attractor theory
COSYNE 2025
Robust unsupervised learning of spike patterns with optimal transport theory
COSYNE 2025
A Statistical Theory of Sequence Compression in Human Memory
COSYNE 2025
A theory of multi-task computation and task selection
COSYNE 2025
A theory of rapid behavioral inference under the pressure of time
COSYNE 2025
Unifying reward and error-driven learning: a theory of cerebello-basal ganglia interactions
COSYNE 2025
A unifying theory of hippocampal remapping through the lens of contextual inference
COSYNE 2025
Cortical reinstatement: A direct approach to testing hippocampal indexing theory
FENS Forum 2024
Deciphering the dynamics of memory encoding and recall in the hippocampus using two-photon calcium imaging and information theory
FENS Forum 2024
Decoding of fMRI resting-state using task-based MVPA supports the Incentive-Sensitization Theory in smokers
FENS Forum 2024
Development of the whole-brain functional connectome explored via graph theory analysis
FENS Forum 2024
GPT-4 can recognize Theory of Mind in natural conversations: fMRI evidence
FENS Forum 2024
Towards a general brain theory: How does the physically active neuronal network “paradoxically” process information in a biologically or socially appropriate way?
FENS Forum 2024
A general theory of Hebbian sequence learning
Neuromatch 5
Review of applications of graph theory and network neuroscience in the development of artificial neural networks
Neuromatch 5
Theory of phase coding in recurrent neural networks
Neuromatch 5