Topic: Neuro

predictions

50 Seminars · 12 ePosters

Latest

Seminar · Neuroscience

Sensory cognition

SueYeon Chung, Srini Turaga
New York University; Janelia Research Campus
Nov 29, 2024

This webinar featured presentations from SueYeon Chung (New York University) and Srinivas Turaga (HHMI Janelia Research Campus) on theoretical and computational approaches to sensory cognition. Chung introduced a “neural manifold” framework to capture how high-dimensional neural activity is structured into meaningful manifolds reflecting object representations. She demonstrated that manifold geometry—shaped by radius, dimensionality, and correlations—directly governs a population’s capacity for classifying or separating stimuli under nuisance variations. Applying these ideas as a data analysis tool, she showed how measuring object-manifold geometry can explain transformations along the ventral visual stream and suggested that manifold principles also yield better self-supervised neural network models resembling mammalian visual cortex. Turaga described simulating the entire fruit fly visual pathway using its connectome, modeling 64 key cell types in the optic lobe. His team’s systematic approach—combining sparse connectivity from electron microscopy with simple dynamical parameters—recapitulated known motion-selective responses and produced novel testable predictions. Together, these studies underscore the power of combining connectomic detail, task objectives, and geometric theories to unravel neural computations bridging from stimuli to cognitive functions.

Seminar · Neuroscience · Recording

Predictive processing: a circuit approach to psychosis

Georg Keller
Friedrich Miescher Institute for Biomedical Research, Basel
Mar 14, 2024

Predictive processing is a computational framework that aims to explain how the brain processes sensory information by making predictions about the environment and minimizing prediction errors. It can also be used to explain some of the key symptoms of psychotic disorders such as schizophrenia. In my talk, I will provide an overview of our progress in this endeavor.

Seminar · Neuroscience · Recording

Bayesian expectation in the perception of the timing of stimulus sequences

Max Di Luca
University of Birmingham
Dec 13, 2023

In this virtual journal club, Dr Di Luca will present findings from a series of psychophysical investigations in which he measured sensitivity and bias in the perception of the timing of stimuli. He will show how improved detection with longer sequences and biases in reporting isochrony can be accounted for by optimal statistical predictions. He also found that the timing of stimuli that occasionally deviate from a regularly paced sequence is perceptually distorted to appear more regular. This distortion depends on whether the context in which these sequences are presented is itself regular. Dr Di Luca will present a Bayesian model for the combination of dynamically updated expectations, in the form of a priori probability, with incoming sensory information. These findings contribute to our understanding of how the brain processes temporal information to shape perceptual experiences.
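The core computation described here can be illustrated with a minimal sketch: if both the expected interval and the sensory measurement are Gaussian, the posterior estimate is a precision-weighted average, which pulls an occasionally deviant stimulus toward the regular beat. All numbers below (a 500 ms sequence, the noise variances) are illustrative assumptions, not values from the talk.

```python
def posterior_interval(prior_mean, prior_var, sensed, sense_var):
    """Fuse a Gaussian prior over an interval with a Gaussian sensory
    measurement; the posterior mean is a precision-weighted average."""
    w = prior_var / (prior_var + sense_var)   # weight on the sensed value
    mean = (1.0 - w) * prior_mean + w * sensed
    var = prior_var * sense_var / (prior_var + sense_var)
    return mean, var

# A tone arriving 60 ms late in a (hypothetical) 500 ms isochronous
# sequence is perceived closer to the expected beat: regularization.
mean, var = posterior_interval(prior_mean=500.0, prior_var=400.0,
                               sensed=560.0, sense_var=900.0)
```

The noisier the sensory measurement relative to the prior, the stronger the perceptual pull toward isochrony, which is the qualitative effect the abstract describes.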

Seminar · Neuroscience

Learning to Express Reward Prediction Error-like Dopaminergic Activity Requires Plastic Representations of Time

Harel Shouval
The University of Texas at Houston
Jun 14, 2023

The dominant theoretical framework to account for reinforcement learning in the brain is temporal difference (TD) reinforcement learning. The TD framework predicts that some neuronal elements should represent the reward prediction error (RPE), which means they signal the difference between the expected future rewards and the actual rewards. The prominence of the TD theory arises from the observation that firing properties of dopaminergic neurons in the ventral tegmental area appear similar to those of RPE model-neurons in TD learning. Previous implementations of TD learning assume a fixed temporal basis for each stimulus that might eventually predict a reward. Here we show that such a fixed temporal basis is implausible and that certain predictions of TD learning are inconsistent with experiments. We propose instead an alternative theoretical framework, coined FLEX (Flexibly Learned Errors in Expected Reward). In FLEX, feature specific representations of time are learned, allowing for neural representations of stimuli to adjust their timing and relation to rewards in an online manner. In FLEX dopamine acts as an instructive signal which helps build temporal models of the environment. FLEX is a general theoretical framework that has many possible biophysical implementations. In order to show that FLEX is a feasible approach, we present a specific biophysically plausible model which implements the principles of FLEX. We show that this implementation can account for various reinforcement learning paradigms, and that its results and predictions are consistent with a preponderance of both existing and reanalyzed experimental data.
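For readers unfamiliar with the baseline being critiqued, here is a minimal sketch of tabular TD(0) with a fixed temporal basis (one state per time step between cue and reward); it reproduces the textbook shift of the reward prediction error from reward time to cue time. Parameters are illustrative, and this is the standard fixed-basis model, not FLEX itself.

```python
import numpy as np

# Tabular TD(0) with a fixed temporal basis (the "complete serial
# compound"): one state per time step after the cue, with reward
# delivered on the transition into the terminal state.
T, reward, alpha, gamma = 10, 1.0, 0.1, 1.0
V = np.zeros(T + 1)          # V[T] is the terminal state (value 0)

for trial in range(500):
    for t in range(T):
        r = reward if t == T - 1 else 0.0
        delta = r + gamma * V[t + 1] - V[t]   # reward prediction error
        V[t] += alpha * delta

# After learning, the RPE has migrated from reward time to cue time:
delta_at_cue = gamma * V[0]                          # unpredicted cue
delta_at_reward = reward + gamma * V[T] - V[T - 1]   # fully predicted reward
```

The fixed basis is exactly what the abstract calls implausible: every post-cue time step must have its own pre-allocated state, whereas FLEX instead learns feature-specific representations of time.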

Seminar · Neuroscience

Richly structured reward predictions in dopaminergic learning circuits

Angela J. Langdon
National Institute of Mental Health at National Institutes of Health (NIH)
May 17, 2023

Theories from reinforcement learning have been highly influential for interpreting neural activity in the biological circuits critical for animal and human learning. Central among these is the identification of phasic activity in dopamine neurons as a reward prediction error signal that drives learning in basal ganglia and prefrontal circuits. However, recent findings suggest that dopaminergic prediction error signals have access to complex, structured reward predictions and are sensitive to more properties of outcomes than learning theories with simple scalar value predictions might suggest. Here, I will present recent work in which we probed the identity-specific structure of reward prediction errors in an odor-guided choice task and found evidence for multiple predictive “threads” that segregate reward predictions, and reward prediction errors, according to the specific sensory features of anticipated outcomes. Our results point to an expanded class of neural reinforcement learning algorithms in which biological agents learn rich associative structure from their environment and leverage it to build reward predictions that include information about the specific, and perhaps idiosyncratic, features of available outcomes, using these to guide behavior in even quite simple reward learning tasks.

Seminar · Neuroscience

Quasicriticality and the quest for a framework of neuronal dynamics

Leandro Jonathan Fosque
Beggs lab, IU Bloomington
May 3, 2023

Critical phenomena abound in nature, from forest fires and earthquakes to avalanches in sand and neuronal activity. Since the 2003 publication by Beggs & Plenz on neuronal avalanches, a growing body of work suggests that the brain homeostatically regulates itself to operate near a critical point where information processing is optimal. At this critical point, incoming activity is neither amplified (supercritical) nor damped (subcritical), but approximately preserved as it passes through neural networks. Departures from the critical point have been associated with conditions of poor neurological health like epilepsy, Alzheimer's disease, and depression. One complication that arises from this picture is that the critical point assumes no external input, yet biological neural networks are constantly bombarded by external input. How, then, is the brain able to adapt homeostatically near the critical point? We’ll see that the theory of quasicriticality, an organizing principle for brain dynamics, can account for this paradoxical situation. As external stimuli drive the cortex, quasicriticality predicts a departure from criticality while maintaining optimal properties for information transmission. Simulations and experimental data confirm these predictions and suggest new ones that could be tested soon. More importantly, we will see how this organizing principle could help in the search for biomarkers that could soon be tested in clinical studies.
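The branching picture of criticality can be sketched in a few lines: each active unit triggers a Poisson-distributed number of descendants with mean σ (the branching ratio), and σ = 1 marks the critical point where activity is on average preserved. This is a generic illustration of the framework, not the quasicriticality model itself; the values of σ are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def avalanche_size(sigma, max_steps=10_000):
    """One avalanche of a branching process: every active unit
    activates Poisson(sigma) units at the next time step."""
    active, size = 1, 1
    for _ in range(max_steps):
        active = rng.poisson(sigma * active)
        size += active
        if active == 0:
            break
    return size

# Mean avalanche size grows as 1 / (1 - sigma) as the critical
# point sigma = 1 is approached from below.
sub  = np.mean([avalanche_size(0.50) for _ in range(2000)])
near = np.mean([avalanche_size(0.95) for _ in range(2000)])
```

Subcritical dynamics (σ = 0.5) produce small avalanches, while near-critical dynamics (σ = 0.95) produce much larger ones; quasicriticality concerns how external drive shifts the operating point away from σ = 1.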

Seminar · Neuroscience · Recording

The sense of agency as an explorative role in our perception and action

Wen Wen
The University of Tokyo
Apr 18, 2023

The sense of agency refers to the subjective feeling of controlling one's own actions and, through them, external events. Why is this subjective feeling important for humans? Is it just a by-product of our actions? Previous studies have shown that the sense of agency can affect the intensity of sensory input because we predict the input from our motor intention. However, my research has found that the sense of agency plays more roles than just predictions. It enhances perceptual processes of sensory input and potentially helps to harvest more information about the link between the external world and the self. Furthermore, our recent research found both indirect and direct evidence that the sense of agency is important for people's exploratory behaviors, and this may be linked to proximal exploitations of one's control in the environment. In this talk, I will also introduce the paradigms we use to study the sense of agency as a result of perceptual processes, and our findings on individual differences in this sense and their implications.

Seminar · Neuroscience

Relations and Predictions in Brains and Machines

Kim Stachenfeld
DeepMind
Apr 7, 2023

Humans and animals learn and plan with flexibility and efficiency well beyond that of modern Machine Learning methods. This is hypothesized to be due in part to the ability of animals to build structured representations of their environments, and modulate these representations to rapidly adapt to new settings. In the first part of this talk, I will discuss theoretical work describing how learned representations in hippocampus enable rapid adaptation to new goals by learning predictive representations, while entorhinal cortex compresses these predictive representations with spectral methods that support smooth generalization among related states. I will also cover recent work extending this account, in which we show how the predictive model can be adapted to the probabilistic setting to describe a broader array of generalization results in humans and animals, and how entorhinal representations can be modulated to support sample generation optimized for different behavioral states. In the second part of the talk, I will overview some of the ways in which we have combined many of the same mathematical concepts with state-of-the-art deep learning methods to improve efficiency and performance in machine learning applications like physical simulation, relational reasoning, and design.

Seminar · Neuroscience · Recording

The strongly recurrent regime of cortical networks

David Dahmen
Jülich Research Centre, Germany
Mar 29, 2023

Modern electrophysiological recordings simultaneously capture single-unit spiking activities of hundreds of neurons. These neurons exhibit highly complex coordination patterns. Where does this complexity stem from? One candidate is the ubiquitous heterogeneity in connectivity of local neural circuits. Studying neural network dynamics in the linearized regime and using tools from statistical field theory of disordered systems, we derive relations between structure and dynamics that are readily applicable to subsampled recordings of neural circuits: Measuring the statistics of pairwise covariances allows us to infer statistical properties of the underlying connectivity. Applying our results to spontaneous activity of macaque motor cortex, we find that the underlying network operates in a strongly recurrent regime. In this regime, network connectivity is highly heterogeneous, as quantified by a large radius of bulk connectivity eigenvalues. Being close to the point of linear instability, this dynamical regime predicts a rich correlation structure, a large dynamical repertoire, long-range interaction patterns, relatively low dimensionality and a sensitive control of neuronal coordination. These predictions are verified in analyses of spontaneous activity of macaque motor cortex and mouse visual cortex. Finally, we show that even microscopic features of connectivity, such as connection motifs, systematically scale up to determine the global organization of activity in neural circuits.
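The "radius of bulk connectivity eigenvalues" has a simple illustration under the common assumption of i.i.d. Gaussian weights: by the circular law, the eigenvalues of an N×N matrix with entries of standard deviation g/√N fill a disk of radius ≈ g, and linear stability requires that radius to stay below 1. The network size and coupling strength below are arbitrary choices, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
N, g = 500, 0.8    # network size and effective coupling (assumed values)

# I.i.d. Gaussian weights with std g/sqrt(N): by the circular law the
# bulk of eigenvalues fills a disk of radius ~ g in the complex plane.
J = rng.normal(0.0, g / np.sqrt(N), size=(N, N))
radius = np.abs(np.linalg.eigvals(J)).max()

# Linear stability of x' = -x + J x requires radius < 1; a radius close
# to 1 corresponds to the "strongly recurrent" regime described above.
```

Heterogeneity of connectivity sets g, so in this picture measuring how close the bulk radius sits to 1 is what quantifies how strongly recurrent the circuit is.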

Seminar · Neuroscience · Recording

Are place cells just memory cells? Probably yes

Stefano Fusi
Columbia University, New York
Mar 22, 2023

Neurons in the rodent hippocampus appear to encode the position of the animal in physical space during movement. Individual “place cells” fire in restricted sub-regions of an environment, a feature often taken as evidence that the hippocampus encodes a map of space that subserves navigation. But these same neurons exhibit complex responses to many other variables that defy explanation by position alone, and the hippocampus is known to be more broadly critical for memory formation. Here we elaborate and test a theory of hippocampal coding which produces place cells as a general consequence of efficient memory coding. We constructed neural networks that actively exploit the correlations between memories in order to learn compressed representations of experience. Place cells readily emerged in the trained model, due to the correlations in sensory input between experiences at nearby locations. Notably, these properties were highly sensitive to the compressibility of the sensory environment, with place field size and population coding level in dynamic opposition to optimally encode the correlations between experiences. The effects of learning were also strongly biphasic: nearby locations are represented more similarly following training, while locations with intermediate similarity become increasingly decorrelated, both distance-dependent effects that scaled with the compressibility of the input features. Using virtual reality and 2-photon functional calcium imaging in head-fixed mice, we recorded the simultaneous activity of thousands of hippocampal neurons during virtual exploration to test these predictions. Varying the compressibility of sensory information in the environment produced systematic changes in place cell properties that reflected the changing input statistics, consistent with the theory. We similarly identified representational plasticity during learning, which produced a distance-dependent exchange between compression and pattern separation.
These results motivate a more domain-general interpretation of hippocampal computation, one that is naturally compatible with earlier theories on the circuit's importance for episodic memory formation. Work done in collaboration with James Priestley, Lorenzo Posani, Marcus Benna, Attila Losonczy.

Seminar · Neuroscience · Recording

Neuronal oscillations and prediction in perception

Concetta Morrone
University of Pisa
Mar 14, 2023

Seminar · Neuroscience

Central-peripheral dichotomy in vision: its motivation and predictions (such as in visual illusions)

Zhaoping Li
Mar 11, 2023

Seminar · Neuroscience · Recording

Sampling the environment with body-brain rhythms

Antonio Criscuolo
Maastricht University
Jan 25, 2023

Since Darwin, comparative research has shown that most animals share basic timing capacities, such as the ability to process temporal regularities and produce rhythmic behaviors. What seems to be more exclusive, however, are the capacities to generate temporal predictions and to display anticipatory behavior at salient time points. These abilities are associated with subcortical structures like basal ganglia (BG) and cerebellum (CE), which are more developed in humans as compared to nonhuman animals. In the first research line, we investigated the basic capacities to extract temporal regularities from the acoustic environment and produce temporal predictions. We did so by adopting a comparative and translational approach, thus making use of a unique EEG dataset including 2 macaque monkeys, 20 healthy young participants, 11 healthy old participants, and 22 stroke patients (11 with focal lesions in the BG and 11 in the CE). In the second research line, we holistically explore the functional relevance of body-brain physiological interactions in human behavior. A series of planned studies investigates the functional mechanisms by which body signals (e.g., respiratory and cardiac rhythms) interact with and modulate neurocognitive functions from rest and sleep states to action and perception. This project supports the effort towards individual profiling: are individuals’ timing capacities (e.g., rhythm perception and production) and general behavior (e.g., individual walking and speaking rates) influenced or shaped by body-brain interactions?

Seminar · Neuroscience · Recording

Motor contribution to auditory temporal predictions

Benjamin Morillon
Aix Marseille Univ, Inserm, INS, Institut de Neurosciences des Systèmes
Dec 14, 2022

Temporal predictions are fundamental instruments for facilitating sensory selection, allowing humans to exploit regularities in the world. Recent evidence indicates that the motor system instantiates predictive timing mechanisms, helping to synchronize temporal fluctuations of attention with the timing of events in a task-relevant stream, thus facilitating sensory selection. Accordingly, in the auditory domain auditory-motor interactions are observed during perception of speech and music, two temporally structured sensory streams. I will present a behavioral and neurophysiological account for this theory and will detail the parameters governing the emergence of this auditory-motor coupling, through a set of behavioral and magnetoencephalography (MEG) experiments.

Seminar · Neuroscience · Recording

No Free Lunch from Deep Learning in Neuroscience: A Case Study through Models of the Entorhinal-Hippocampal Circuit

Rylan Schaeffer
Fiete lab, MIT
Nov 2, 2022

Research in Neuroscience, as in many scientific disciplines, is undergoing a renaissance based on deep learning. Unique to Neuroscience, deep learning models can be used not only as a tool but interpreted as models of the brain. The central claims of recent deep learning-based models of brain circuits are that they shed light on fundamental functions being optimized or make novel predictions about neural phenomena. We show, through the case-study of grid cells in the entorhinal-hippocampal circuit, that one may get neither. We rigorously examine the claims of deep learning models of grid cells using large-scale hyperparameter sweeps and theory-driven experimentation, and demonstrate that the results of such models are more strongly driven by particular, non-fundamental, and post-hoc implementation choices than fundamental truths about neural circuits or the loss function(s) they might optimize. We discuss why these models cannot be expected to produce accurate models of the brain without the addition of substantial amounts of inductive bias, an informal No Free Lunch result for Neuroscience.

Seminar · Neuroscience · Recording

Trial by trial predictions of subjective time from human brain activity

Maxine Sherman
University of Sussex, UK
Oct 26, 2022

Our perception of time isn’t like a clock; it varies depending on other aspects of experience, such as what we see and hear in that moment. However, in everyday life, the properties of these simple features can change frequently, presenting a challenge to understanding real-world time perception based on simple lab experiments. We developed a computational model of human time perception based on tracking changes in neural activity across brain regions involved in sensory processing, using fMRI. By measuring changes in brain activity patterns across these regions, our approach accommodates the different and changing feature combinations present in natural scenarios, such as walking on a busy street. Our model reproduces people’s duration reports for natural videos (up to almost half a minute long) and, most importantly, predicts whether a person reports a scene as relatively shorter or longer, capturing the biases through which natural experience of time deviates from clock time.

Seminar · Neuroscience · Recording

Spatial uncertainty provides a unifying account of navigation behavior and grid field deformations

Yul Kang
Lengyel lab, Cambridge University
Apr 6, 2022

To localize ourselves in an environment for spatial navigation, we rely on vision and self-motion inputs, which only provide noisy and partial information. It is unknown how the resulting uncertainty affects navigation behavior and neural representations. Here we show that spatial uncertainty underlies key effects of environmental geometry on navigation behavior and grid field deformations. We develop an ideal observer model, which continually updates probabilistic beliefs about its allocentric location by optimally combining noisy egocentric visual and self-motion inputs via Bayesian filtering. This model directly yields predictions for navigation behavior and also predicts neural responses under population coding of location uncertainty. We simulate this model numerically under manipulations of a major source of uncertainty, environmental geometry, and support our simulations by analytic derivations for its most salient qualitative features. We show that our model correctly predicts a wide range of experimentally observed effects of the environmental geometry and its change on homing response distribution and grid field deformation. Thus, our model provides a unifying, normative account for the dependence of homing behavior and grid fields on environmental geometry, and identifies the unavoidable uncertainty in navigation as a key factor underlying these diverse phenomena.
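The ideal-observer idea of optimally fusing noisy self-motion and visual inputs via Bayesian filtering reduces, in one dimension with Gaussian noise, to a Kalman filter. The sketch below is a deliberately simplified stand-in for the paper's allocentric 2-D model; the noise variances and step size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# 1-D Bayesian (Kalman) filter: a Gaussian belief over location is
# predicted forward with noisy self-motion and corrected with a noisy
# visual observation. Noise variances and step size are assumed.
q_var, r_var = 0.04, 0.25    # self-motion and visual noise variances
mu, var = 0.0, 0.0           # belief: mean and variance (known start)
true_pos = 0.0

for step in range(100):
    v = 0.1                                          # intended step
    true_pos += v + rng.normal(0.0, np.sqrt(q_var))
    mu, var = mu + v, var + q_var                    # predict (self-motion)
    z = true_pos + rng.normal(0.0, np.sqrt(r_var))   # visual observation
    k = var / (var + r_var)                          # Kalman gain
    mu, var = mu + k * (z - mu), (1.0 - k) * var     # correct (vision)

# Uncertainty settles at a steady state well below the visual noise alone.
```

In the paper's setting, the analogue of `var` is the location uncertainty whose population coding yields the predicted grid field deformations under changes of environmental geometry.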

Seminar · Neuroscience · Recording

Probabilistic computation in natural vision

Ruben Coen-Cagli
Albert Einstein College of Medicine
Mar 30, 2022

A central goal of vision science is to understand the principles underlying the perception and neural coding of the complex visual environment of our everyday experience. In the visual cortex, foundational work with artificial stimuli, and more recent work combining natural images and deep convolutional neural networks, have revealed much about the tuning of cortical neurons to specific image features. However, a major limitation of this existing work is its focus on single-neuron response strength to isolated images. First, during natural vision, the inputs to cortical neurons are not isolated but rather embedded in a rich spatial and temporal context. Second, the full structure of population activity—including the substantial trial-to-trial variability that is shared among neurons—determines encoded information and, ultimately, perception. In the first part of this talk, I will argue for a normative approach to study encoding of natural images in primary visual cortex (V1), which combines a detailed understanding of the sensory inputs with a theory of how those inputs should be represented. Specifically, we hypothesize that V1 response structure serves to approximate a probabilistic representation optimized to the statistics of natural visual inputs, and that contextual modulation is an integral aspect of achieving this goal. I will present a concrete computational framework that instantiates this hypothesis, and data recorded using multielectrode arrays in macaque V1 to test its predictions. In the second part, I will discuss how we are leveraging this framework to develop deep probabilistic algorithms for natural image and video segmentation.

Seminar · Neuroscience

Predictions, Perception, and Psychosis

Philipp Sterzer
Charite
Mar 29, 2022

Seminar · Neuroscience · Recording

Do Capuchin Monkeys, Chimpanzees and Children form Overhypotheses from Minimal Input? A Hierarchical Bayesian Modelling Approach

Elisa Felsche
Max Planck Institute for Evolutionary Anthropology
Mar 10, 2022

Abstract concepts are a powerful tool to store information efficiently and to make wide-ranging predictions in new situations based on sparse data. Whereas looking-time studies point towards an early emergence of this ability in human infancy, other paradigms like the relational match-to-sample task often show a failure to detect abstract concepts like same and different until the late preschool years. Similarly, non-human animals have difficulties solving those tasks and often succeed only after long training regimes. Given the huge influence of small task modifications, there is an ongoing debate about the conclusiveness of these findings for the development and phylogenetic distribution of abstract reasoning abilities. Here, we applied the concept of “overhypotheses”, which is well known in the infant and cognitive modeling literature, to study the capabilities of 3- to 5-year-old children, chimpanzees, and capuchin monkeys in a unified and more ecologically valid task design. In a series of studies, participants themselves sampled reward items from multiple containers or witnessed the sampling process. Only when they detected the abstract pattern governing the reward distributions within and across containers could they optimally guide their behavior and maximize the reward outcome in a novel test situation. We compared each species’ performance to the predictions of a probabilistic hierarchical Bayesian model capable of forming overhypotheses at a first and second level of abstraction and adapted to their species-specific reward preferences.

Seminar · Neuroscience · Recording

Parametric control of flexible timing through low-dimensional neural manifolds

Manuel Beiran
Center for Theoretical Neuroscience, Columbia University & Rajan lab, Icahn School of Medicine at Mount Sinai
Mar 9, 2022

Biological brains possess an exceptional ability to infer relevant behavioral responses to a wide range of stimuli from only a few examples. This capacity to generalize beyond the training set has been proven particularly challenging to realize in artificial systems. How neural processes enable this capacity to extrapolate to novel stimuli is a fundamental open question. A prominent but underexplored hypothesis suggests that generalization is facilitated by a low-dimensional organization of collective neural activity, yet evidence for the underlying neural mechanisms remains wanting. Combining network modeling, theory and neural data analysis, we tested this hypothesis in the framework of flexible timing tasks, which rely on the interplay between inputs and recurrent dynamics. We first trained recurrent neural networks on a set of timing tasks while minimizing the dimensionality of neural activity by imposing low-rank constraints on the connectivity, and compared the performance and generalization capabilities with networks trained without any constraint. We then examined the trained networks, characterized the dynamical mechanisms underlying the computations, and verified their predictions in neural recordings. Our key finding is that low-dimensional dynamics strongly increases the ability to extrapolate to inputs outside of the range used in training. Critically, this capacity to generalize relies on controlling the low-dimensional dynamics by a parametric contextual input. We found that this parametric control of extrapolation was based on a mechanism where tonic inputs modulate the dynamics along non-linear manifolds in activity space while preserving their geometry. Comparisons with neural recordings in the dorsomedial frontal cortex of macaque monkeys performing flexible timing tasks confirmed the geometric and dynamical signatures of this mechanism. 
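The link between low-rank connectivity and low-dimensional activity can be seen in a linearized toy model: with rank-one connectivity J = m nᵀ/N and a single (contextual) input direction, the trajectory is confined to a two-dimensional subspace, which the participation ratio of the activity covariance makes explicit. All parameters below are illustrative, and this sketch omits the trained, nonlinear networks of the study.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 300

# Rank-one connectivity J = m n^T / N; with a single input direction,
# the linearized dynamics stay in a 2-D subspace of activity space.
m, n = rng.normal(size=N), rng.normal(size=N)
J = np.outer(m, n) / N
I_dir = rng.normal(size=N)

x = np.zeros(N)
traj = []
for t in range(600):
    u = np.sin(0.05 * t)                    # slowly varying scalar input
    x = x + 0.1 * (-x + J @ x + I_dir * u)  # Euler step of x' = -x + Jx + Iu
    if t >= 100:                            # drop the initial transient
        traj.append(x.copy())
traj = np.asarray(traj)

# Participation ratio: an effective dimensionality of the trajectory.
ev = np.linalg.eigvalsh(np.cov(traj.T))
pr = ev.sum() ** 2 / (ev ** 2).sum()        # stays near 2, not near N
```

Here the tonic scalar input `u` only moves activity within the subspace spanned by `m` and the input direction, a linear shadow of the mechanism in which contextual inputs modulate dynamics along manifolds while preserving their geometry.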
Altogether, our results tie together a number of previous experimental findings and suggest that the low-dimensional organization of neural dynamics plays a central role in generalizable behaviors.

Seminar · Neuroscience · Recording

Neural correlates of temporal processing in humans

Andre M. Cravo
Center for Mathematics, Computing and Cognition, Federal University of ABC
Jan 26, 2022

Estimating intervals is essential for adaptive behavior and decision-making. Although several theoretical models have been proposed to explain how the brain keeps track of time, there is still no conclusive evidence favoring any single one. It is often hard to compare different models due to their overlap in behavioral predictions. For this reason, several studies have looked for neural signatures of temporal processing using methods such as electroencephalography (EEG). However, for this strategy to work, it is essential to have consistent EEG markers of temporal processing. In this talk, I'll present results from several studies investigating how temporal information is encoded in the EEG signal. Specifically, across different experiments, we have investigated whether different neural signatures of temporal processing (such as the CNV, the LPC, and early ERPs): 1. Depend on the task to be executed (whether or not it is a temporal task or different types of temporal tasks); 2. Are encoding the physical duration of an interval or how much longer/shorter an interval is relative to a reference. Lastly, I will discuss how these results are consistent with recent proposals that approximate temporal processing with decisional models.

Seminar · Neuroscience · Recording

NMC4 Short Talk: Predictive coding is a consequence of energy efficiency in recurrent neural networks

Abdullahi Ali
Donders Institute for Brain, Cognition and Behaviour
Dec 2, 2021

Predictive coding represents a promising framework for understanding brain function, postulating that the brain continuously inhibits predictable sensory input, ensuring a preferential processing of surprising elements. A central aspect of this view on cortical computation is its hierarchical connectivity, involving recurrent message passing between excitatory bottom-up signals and inhibitory top-down feedback. Here we use computational modelling to demonstrate that such architectural hard-wiring is not necessary. Rather, predictive coding is shown to emerge as a consequence of energy efficiency, a fundamental requirement of neural processing. When training recurrent neural networks to minimise their energy consumption while operating in predictive environments, the networks self-organise into prediction and error units with appropriate inhibitory and excitatory interconnections and learn to inhibit predictable sensory input. We demonstrate that prediction units can reliably be identified through biases in their median preactivation, pointing towards a fundamental property of prediction units in the predictive coding framework. Moving beyond the view of purely top-down driven predictions, we demonstrate via virtual lesioning experiments that networks perform predictions on two timescales: fast lateral predictions among sensory units and slower prediction cycles that integrate evidence over time. Our results, which replicate across two separate data sets, suggest that predictive coding can be interpreted as a natural consequence of energy efficiency. More generally, they raise the question which other computational principles of brain function can be understood as a result of physical constraints posed by the brain, opening up a new area of bio-inspired, machine learning-powered neuroscience research.

Seminar · Neuroscience · Recording

NMC4 Short Talk: Image embeddings informed by natural language improve predictions and understanding of human higher-level visual cortex

Aria Wang
Carnegie Mellon University
Dec 1, 2021

To better understand human scene understanding, we extracted features from images using CLIP, a neural network model of visual concepts trained with supervision from natural language. We then constructed voxelwise encoding models to explain whole brain responses arising from viewing natural images from the Natural Scenes Dataset (NSD) - a large-scale fMRI dataset collected at 7T. Our results reveal that CLIP, as compared to convolution-based image classification models such as ResNet or AlexNet, as well as language models such as BERT, gives rise to representations that enable better prediction performance - up to a 0.86 correlation with test data and an r-squared of 0.75 - in higher-level visual cortex in humans. Moreover, CLIP representations explain distinctly unique variance in these higher-level visual areas as compared to models trained with only images or text. Control experiments show that the improvement in prediction observed with CLIP is not due to architectural differences (transformer vs. convolution) or to the encoding of image captions per se (vs. single object labels). Together our results indicate that CLIP and, more generally, multimodal models trained jointly on images and text, may serve as better candidate models of representation in human higher-level visual cortex. The bridge between language and vision provided by jointly trained models such as CLIP also opens up new and more semantically rich ways of interpreting the visual brain.
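A voxelwise encoding model of the kind described is, at its core, a regularized linear regression from stimulus features to voxel responses, evaluated by held-out prediction accuracy per voxel. The sketch below uses synthetic data in place of CLIP features and NSD responses; the shapes, ridge penalty, and noise level are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def fit_ridge(X, Y, lam=1.0):
    """Ridge regression: one weight vector per voxel, solved jointly."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

# Synthetic stand-ins: 400 "images" x 64 "CLIP-like" features, 10 voxels.
X = rng.normal(size=(400, 64))
W_true = rng.normal(size=(64, 10))
Y = X @ W_true + 0.5 * rng.normal(size=(400, 10))

W = fit_ridge(X[:300], Y[:300])             # fit on held-in images
pred = X[300:] @ W                          # predict held-out responses
r = np.array([np.corrcoef(pred[:, v], Y[300:, v])[0, 1]
              for v in range(10)])          # per-voxel test correlation
```

Swapping the feature matrix `X` between model families (e.g., CLIP vs. ResNet embeddings) while holding this pipeline fixed is what allows the per-voxel correlations to be compared across candidate models.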

SeminarNeuroscienceRecording

NMC4 Short Talk: Synchronization in the Connectome: Metastable oscillatory modes emerge from interactions in the brain spacetime network

Francesca Castaldo
University College London
Dec 1, 2021

The brain exhibits a rich repertoire of oscillatory patterns organized in space, time and frequency. However, despite increasingly detailed characterizations of spectrally resolved network patterns, the principles governing oscillatory activity at the system level remain unclear. Here, we propose that the transient emergence of spatially organized brain rhythms is a signature of weakly stable synchronization between subsets of brain areas, naturally occurring at reduced collective frequencies due to the presence of time delays. To test this mechanism, we build a reduced network model representing interactions between local neuronal populations (with damped oscillatory responses at 40 Hz) coupled in the human neuroanatomical network. Following theoretical predictions, weakly stable cluster synchronization drives a rich repertoire of short-lived (or metastable) oscillatory modes, whose frequency depends inversely on the number of units, the strength of coupling and the propagation times. Despite the significant degree of reduction, we find a range of model parameters where the frequencies of collective oscillations fall in the range of typical brain rhythms, leading to an optimal fit of the power spectra of magnetoencephalographic signals from 89 healthy individuals. These findings provide a mechanistic scenario for the spontaneous emergence of frequency-specific long-range phase-coupling observed in magneto- and electroencephalographic signals, as signatures of resonant modes emerging in the space-time structure of the Connectome, reinforcing the importance of incorporating realistic time delays in network models of oscillatory brain activity.
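Why longer propagation times lower the collective frequency can be seen from a one-cluster reduction (a sketch under strong assumptions, not the connectome model itself): a fully synchronised set of damped oscillators with delayed coupling, dz/dt = (-gamma + i*omega) z + K z(t - tau), oscillates at a collective frequency Omega solving Omega = omega - K*sin(Omega*tau). The coupling strength K and the delays below are illustrative values.

```python
import numpy as np

omega = 2 * np.pi * 40.0    # intrinsic angular frequency (rad/s), 40 Hz units
K = 50.0                    # coupling strength (1/s), illustrative value

def collective_frequency(tau, n_iter=200):
    """Solve Omega = omega - K*sin(Omega*tau) by fixed-point iteration
    (convergent here because |K*tau*cos(Omega*tau)| < 1 for these values)."""
    Omega = omega
    for _ in range(n_iter):
        Omega = omega - K * np.sin(Omega * tau)
    return Omega / (2 * np.pi)  # convert to Hz

freqs = [collective_frequency(tau) for tau in (0.0, 0.002, 0.004)]
```

With zero delay the cluster oscillates at the units' intrinsic 40 Hz; as the delay grows, the collective rhythm is pulled to progressively lower frequencies.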

SeminarNeuroscience

The bounded rationality of probability distortion

Laurence T Maloney
NYU
Nov 10, 2021

In decision-making under risk (DMR), participants' choices are based on probability values systematically different from those that are objectively correct. Similar systematic distortions are found in tasks involving relative frequency judgments (JRF). These distortions limit performance in a wide variety of tasks, and an evident question is why we systematically fail in our use of probability and relative frequency information. We propose a Bounded Log-Odds Model (BLO) of probability and relative frequency distortion based on three assumptions: (1) log-odds: probability and relative frequency are mapped to an internal log-odds scale; (2) boundedness: the range of representations of probability and relative frequency is bounded, and the bounds change dynamically with task; and (3) variance compensation: the mapping compensates in part for uncertainty in probability and relative frequency values. We compared human performance in both DMR and JRF tasks to the predictions of the BLO model as well as eleven alternative models, each missing one or more of the underlying BLO assumptions (factorial model comparison). The BLO model and its assumptions proved superior to any of the alternatives. In a separate analysis, we found that BLO accounts for individual participants’ data better than any previous model in the DMR literature. We also found that, subject to the boundedness limitation, participants’ choice of distortion approximately maximized the mutual information between objective task-relevant values and internal values, a form of bounded rationality.
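The log-odds assumption can be sketched on its own (the slope gamma and crossover probability p0 below are illustrative values; the dynamic bounds and variance compensation of the full BLO model are left out): probabilities map to log-odds, the internal log-odds scale is a compressed linear transform of the objective one, and mapping back yields the familiar inverted-S distortion.

```python
import numpy as np

def log_odds(p):
    return np.log(p / (1 - p))

def distort(p, gamma=0.6, p0=0.4):
    # Internal log-odds are a compressed (slope gamma < 1) version of the
    # objective log-odds, anchored at the crossover probability p0.
    lo = gamma * log_odds(p) + (1 - gamma) * log_odds(p0)
    return 1 / (1 + np.exp(-lo))    # map back to the probability scale

p = np.array([0.05, 0.4, 0.95])
w = distort(p)
```

Small probabilities come out overweighted, large ones underweighted, and the crossover probability maps to itself, reproducing the qualitative shape of empirical probability distortion.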

SeminarNeuroscience

Representation transfer and signal denoising through topographic modularity

Barna Zajzon
Morrison lab, Forschungszentrum Jülich, Germany
Nov 4, 2021

To prevail in a dynamic and noisy environment, the brain must create reliable and meaningful representations from sensory inputs that are often ambiguous or corrupt. Since only information that permeates the cortical hierarchy can influence sensory perception and decision-making, it is critical that noisy external stimuli are encoded and propagated through different processing stages with minimal signal degradation. Here we hypothesize that stimulus-specific pathways akin to cortical topographic maps may provide the structural scaffold for such signal routing. We investigate whether the feature-specific pathways within such maps, characterized by the preservation of the relative organization of cells between distinct populations, can guide and route stimulus information throughout the system while retaining representational fidelity. We demonstrate that, in a large modular circuit of spiking neurons comprising multiple sub-networks, topographic projections are not only necessary for accurate propagation of stimulus representations, but can also help the system reduce sensory and intrinsic noise. Moreover, by regulating the effective connectivity and local E/I balance, modular topographic precision enables the system to gradually improve its internal representations and increase signal-to-noise ratio as the input signal passes through the network. Such a denoising function arises beyond a critical transition point in the sharpness of the feed-forward projections, and is characterized by the emergence of inhibition-dominated regimes where population responses along stimulated maps are amplified and others are weakened. Our results indicate that this is a generalizable and robust structural effect, largely independent of the underlying model specifics.
Using mean-field approximations, we gain deeper insight into the mechanisms responsible for the qualitative changes in the system’s behavior and show that these depend only on the modular topographic connectivity and the stimulus intensity. The general dynamical principle revealed by the theoretical predictions suggests that such a denoising property may be a universal, system-agnostic feature of topographic maps, and may underlie a wide range of behaviorally relevant regimes observed under various experimental conditions: maintaining stable representations of multiple stimuli across cortical circuits; amplifying certain features while suppressing others (winner-take-all circuits); and endowing circuits with metastable dynamics (winnerless competition), assumed to be fundamental in a variety of tasks.
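A toy linear-rate reduction (nothing like the spiking model in the talk; all sizes, weights, and noise levels here are invented) shows the basic routing effect: with sharp topographic projections, a stimulated map stays distinguishable after several processing stages, while unstructured projections wash it out.

```python
import numpy as np

rng = np.random.default_rng(2)
n_maps, n_per, n_layers = 4, 50, 3
N = n_maps * n_per

def ff_weights(p_within):
    # Row-normalised feed-forward weights: a fraction p_within of each
    # unit's input comes from its own map, the rest is spread uniformly.
    W = np.full((N, N), (1 - p_within) / (N - n_per))
    for m in range(n_maps):
        sl = slice(m * n_per, (m + 1) * n_per)
        W[sl, sl] = p_within / n_per
    return W

def selectivity(p_within):
    stim = np.zeros(N)
    stim[:n_per] = 1.0                        # stimulate map 0
    r = stim + 0.5 * rng.standard_normal(N)   # noisy input layer
    for _ in range(n_layers):
        r = ff_weights(p_within) @ r          # propagate across sub-networks
    return r[:n_per].mean() - r[n_per:].mean()

sel_topo = selectivity(0.9)          # sharp topographic projections
sel_flat = selectivity(n_per / N)    # unstructured (uniform) projections
```

The topographic network keeps a large gap between the stimulated map and the rest, whereas uniform projections collapse the representation to a constant after a single stage.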

SeminarNeuroscienceRecording

Self-organized formation of discrete grid cell modules from smooth gradients

Sarthak Chandra
Fiete lab, MIT
Nov 3, 2021

Modular structures in myriad forms (genetic, structural, functional) are ubiquitous in the brain. While modularization may be shaped by genetic instruction or extensive learning, the mechanisms of module emergence are poorly understood. Here, we explore complementary mechanisms in the form of bottom-up dynamics that push systems spontaneously toward modularization. As a paradigmatic example of modularity in the brain, we focus on the grid cell system. Grid cells of the mammalian medial entorhinal cortex (mEC) exhibit periodic lattice-like tuning curves in their encoding of space as animals navigate the world. Nearby grid cells have identical lattice periods, but at larger separations along the long axis of mEC the period jumps in discrete steps, so that the full set of periods clusters into 5-7 discrete modules. These modules endow the grid code with many striking properties, such as an exponential capacity to represent space and unprecedented robustness to noise. However, the formation of discrete modules is puzzling given that biophysical properties of mEC stellate cells (including inhibitory inputs from PV interneurons, time constants of EPSPs, intrinsic resonance frequency and differences in gene expression) vary smoothly in continuous topographic gradients along the mEC. How does discreteness in grid modules arise from continuous gradients? We propose a novel mechanism involving two simple types of lateral interaction that leads a continuous network to robustly decompose into discrete functional modules. We show analytically that this mechanism is a generic multi-scale linear instability that converts smooth gradients into discrete modules via a topological “peak selection” process. Further, this model generates detailed predictions about the sequence of adjacent period ratios, and explains existing grid cell data better than previous models. Thus, we contribute a robust new principle for bottom-up module formation in biology, and show that it might be leveraged by grid cells in the brain.

SeminarNeuroscience

Speak your mind: cortical predictions of speech sensory feedback

Caroline Niziolek
University of Wisconsin, USA
Oct 21, 2021
SeminarNeuroscience

- CANCELLED -

Selina Solomon
Kohn lab, Albert Einstein College of Medicine; Growth Intelligence, UK
Oct 20, 2021

A recent formulation of predictive coding theory proposes that a subset of neurons in each cortical area encodes sensory prediction errors: the difference between predictions relayed from higher cortex and the sensory input. Here, we test for evidence of prediction-error responses in spiking activity and local field potentials (LFPs) recorded in primary visual cortex and area V4 of macaque monkeys, and in complementary electroencephalographic (EEG) scalp recordings in human participants. We presented a fixed sequence of visual stimuli on most trials, and violated the expected ordering on a small subset of trials. Under predictive coding theory, pattern-violating stimuli should trigger robust prediction errors, but we found that spiking, LFP and EEG responses to expected and pattern-violating stimuli were nearly identical. Our results challenge the assertion that a fundamental computational motif in sensory cortex is to signal prediction errors, at least those based on predictions derived from temporal patterns of visual stimulation.

SeminarNeuroscience

Demystifying the richness of visual perception

Ruth Rosenholtz
MIT
Oct 20, 2021

Human vision is full of puzzles. Observers can grasp the essence of a scene in an instant, yet when probed for details they are at a loss. People have trouble finding their keys, yet the keys may be quite visible once found. How does one explain this combination of marvelous successes and quirky failures? I will describe our attempts to develop a unifying theory that brings a satisfying order to multiple phenomena. One key is to understand peripheral vision. A visual system cannot process everything with full fidelity, and therefore must lose some information. Peripheral vision must condense a mass of information into a succinct representation that nonetheless carries the information needed for vision at a glance. We have proposed that the visual system deals with limited capacity in part by representing its input in terms of a rich set of local image statistics, where the local regions grow, and the representation becomes less precise, with distance from fixation. This scheme gains the computation of sophisticated image features at the expense of the spatial localization of those features. What are the implications of such an encoding scheme? Critical to our understanding has been the use of methodologies for visualizing the equivalence classes of the model. These visualizations allow one to see quickly that many of the puzzles of human vision may arise from a single encoding mechanism. They have suggested new experiments and predicted unexpected phenomena. Furthermore, visualization of the equivalence classes has facilitated the generation of testable model predictions, allowing us to study the effects of this relatively low-level encoding on a wide range of higher-level tasks. Peripheral vision helps explain many of the puzzles of vision, but some remain. By examining the phenomena that cannot be explained by peripheral vision, we gain insight into the nature of additional capacity limits in vision. In particular, I will suggest that decision processes face general-purpose limits on the complexity of the tasks they can perform at a given time.
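A one-dimensional cartoon of the proposed encoding (the pooling sizes and growth factor are hypothetical, and the model's real statistics are far richer, texture-like ones): keep only local summary statistics, in regions whose size grows with distance from fixation, so the periphery is represented by far fewer numbers than there are pixels.

```python
import numpy as np

rng = np.random.default_rng(3)
image = rng.random(512)   # 1-D stand-in for one row of an image,
                          # with fixation at index 0

stats, start, width = [], 0, 8
while start < image.size:
    patch = image[start:start + width]
    stats.append((patch.mean(), patch.std()))  # local summary statistics
    start += width
    width = int(width * 1.5)   # pooling regions grow with eccentricity
```

Here 512 pixels compress into 9 (mean, std) pairs, coarsest in the far periphery, which is the capacity trade-off the theory builds on.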

ePosterNeuroscience

The influence of the membrane potential on inhibitory regulation of plasticity predictions and learned representations

Patricia Rubisch, Melanie Stefan, Matthias Hennig

Bernstein Conference 2024

ePosterNeuroscience

Task choice influences single-neuron tuning predictions in connectome-constrained modeling

Felix Pei, Janne Lappalainen, Srinivas Turaga, Jakob Macke

Bernstein Conference 2024

ePosterNeuroscience

Linking tonic dopamine and biased value predictions in a biologically inspired reinforcement learning model

Sandra Romero Pinto, Naoshige Uchida

COSYNE 2022

ePosterNeuroscience

Theories of surprise: definitions and predictions

Alireza Modirshanechi, Johanni Brea, Wulfram Gerstner

COSYNE 2022

ePosterNeuroscience

Blazed oblique plane microscopy reveals scale-invariant predictions of brain-wide activity

Maximilian Hoffmann, Jörg Henninger, Johannes Veith, Lars Richter, Benjamin Judkewitz

COSYNE 2023

ePosterNeuroscience

Sensory predictions are embedded in cortical motor activity

Jonathan A Michaels, Mehrdad Kashefi, Jack Zheng, Olivier Codol, Jeff Weiler, Andrew Pruszynski

COSYNE 2023

ePosterNeuroscience

Semi-blind machine learning for fMRI-based predictions of intelligence

Gabriele Lohmann, Samuel Heczko, Lucas Mahler, Qi Wang, Vinod J. Kumar, Michelle Roost, Juergen Jost, Klaus Scheffler

FENS Forum 2024

ePosterNeuroscience

Unmet emotional predictions linger in the lateral orbitofrontal cortex during rest

Gargi Majumdar, Fahd Yazin, Arpan Banerjee, Dipanjan Roy

FENS Forum 2024

ePosterNeuroscience

Visuo-motor sequential predictions guiding the representation of visual objects

Anastasia Dimakou, Ilaria Mazzonetto, Andrea Zangrossi, Miriam Celli, Giovanni Pezzulo, Maurizio Corbetta

FENS Forum 2024

ePosterNeuroscience

‘What a Mistake!’: Prediction error modulates explicit and visuomotor predictions in virtual reality

Yonatan Stern

Neuromatch 5
