Prediction

Discover seminars, jobs, and research tagged with prediction across World Wide.
101 curated items: 60 seminars · 40 ePosters · 1 position

SeminarNeuroscience

sensorimotor control, movement, touch, EEG

Marieva Vlachou
Institut des Sciences du Mouvement Etienne Jules Marey, Aix-Marseille Université/CNRS, France
Dec 18, 2025

Traditionally, touch is associated with exteroception and is rarely considered a relevant sensory cue for controlling movements in space, unlike vision. We developed a technique to isolate and measure tactile involvement in controlling sliding finger movements over a surface. Young adults traced a 2D shape with their index finger under direct or mirror-reversed visual feedback to create a conflict between visual and somatosensory inputs. In this context, increased reliance on somatosensory input compromises movement accuracy. Based on the hypothesis that tactile cues contribute to guiding hand movements when in contact with a surface, we predicted poorer performance when the participants traced with their bare finger compared to when their tactile sensation was dampened by a smooth, rigid finger splint. The results supported this prediction. EEG source analyses revealed smaller current in the source-localized somatosensory cortex during sensory conflict when the finger directly touched the surface. This finding supports the hypothesis that, in response to mirror-reversed visual feedback, the central nervous system selectively gated task-irrelevant somatosensory inputs, thereby mitigating, though not entirely resolving, the visuo-somatosensory conflict. Together, our results emphasize touch’s involvement in movement control over a surface, challenging the notion that vision predominantly governs goal-directed hand or finger movements.

SeminarNeuroscience

Computational Mechanisms of Predictive Processing in Brains and Machines

Dr. Antonino Greco
Hertie Institute for Clinical Brain Research, Germany
Dec 9, 2025

Predictive processing offers a unifying view of neural computation, proposing that brains continuously anticipate sensory input and update internal models based on prediction errors. In this talk, I will present converging evidence for the computational mechanisms underlying this framework across human neuroscience and deep neural networks. I will begin with recent work showing that large-scale distributed prediction-error encoding in the human brain directly predicts how sensory representations reorganize through predictive learning. I will then turn to PredNet, a popular predictive-coding-inspired deep network that has been widely used to model real-world biological vision systems. Using dynamic stimuli generated with our Spatiotemporal Style Transfer algorithm, we demonstrate that PredNet relies primarily on low-level spatiotemporal structure and remains insensitive to high-level content, revealing limits in its generalization capacity. Finally, I will discuss new recurrent vision models that integrate top-down feedback connections with intrinsic neural variability, uncovering a dual mechanism for robust sensory coding in which neural variability decorrelates unit responses, while top-down feedback stabilizes network dynamics. Together, these results outline how prediction-error signaling and top-down feedback pathways shape adaptive sensory processing in biological and artificial systems.
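
A minimal sketch of the core predictive-coding computation discussed here, with illustrative sizes and learning rates (not the speaker's models): a latent estimate predicts the input, and the prediction error drives both fast inference and slow learning.

```python
import numpy as np

# Toy predictive-coding loop: latent estimate z predicts input x through
# generative weights W; the prediction error updates z quickly (inference)
# and W slowly (learning). All parameters are illustrative.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(10, 4))   # generative weights: latent -> input
x = rng.normal(size=10)                   # one sensory sample

z = np.zeros(4)                           # latent estimate
for _ in range(50):                       # fast inference dynamics
    error = x - W @ z                     # prediction error
    z += 0.1 * (W.T @ error)              # move the latent to reduce the error

W += 0.01 * np.outer(x - W @ z, z)        # one slow Hebbian-like learning step
print(np.round(x - W @ z, 3))             # residual prediction error
```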

SeminarNeuroscience

Understanding reward-guided learning using large-scale datasets

Kim Stachenfeld
DeepMind, Columbia U
Jul 8, 2025

Understanding the neural mechanisms of reward-guided learning is a long-standing goal of computational neuroscience. Recent methodological innovations enable us to collect ever larger neural and behavioral datasets. This presents opportunities to achieve greater understanding of learning in the brain at scale, as well as methodological challenges. In the first part of the talk, I will discuss our recent insights into the mechanisms by which zebra finch songbirds learn to sing. Dopamine has been long thought to guide reward-based trial-and-error learning by encoding reward prediction errors. However, it is unknown whether the learning of natural behaviours, such as developmental vocal learning, occurs through dopamine-based reinforcement. Longitudinal recordings of dopamine and bird songs reveal that dopamine activity is indeed consistent with encoding a reward prediction error during naturalistic learning. In the second part of the talk, I will talk about recent work we are doing at DeepMind to develop tools for automatically discovering interpretable models of behavior directly from animal choice data. Our method, dubbed CogFunSearch, uses LLMs within an evolutionary search process in order to "discover" novel models in the form of Python programs that excel at accurately predicting animal behavior during reward-guided learning. The discovered programs reveal novel patterns of learning and choice behavior that update our understanding of how the brain solves reinforcement learning problems.
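
For orientation, the reward prediction error invoked here has the standard temporal-difference form (a textbook definition, not a claim about this talk's specific models):

$$\delta_t = r_t + \gamma\,V(s_{t+1}) - V(s_t), \qquad V(s_t) \leftarrow V(s_t) + \alpha\,\delta_t$$

Phasic dopamine activity that tracks $\delta_t$ is the kind of signature the songbird recordings are compared against.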

SeminarNeuroscience

From Spiking Predictive Coding to Learning Abstract Object Representation

Prof. Jochen Triesch
Frankfurt Institute for Advanced Studies
Jun 11, 2025

In the first part of the talk, I will present Predictive Coding Light (PCL), a novel unsupervised learning architecture for spiking neural networks. In contrast to conventional predictive coding approaches, which only transmit prediction errors to higher processing stages, PCL learns inhibitory lateral and top-down connectivity to suppress the most predictable spikes and passes a compressed representation of the input to higher processing stages. We show that PCL reproduces a range of biological findings and exhibits a favorable tradeoff between energy consumption and downstream classification performance on challenging benchmarks. The second part of the talk will feature our lab's efforts to explain how infants and toddlers might learn abstract object representations without supervision. I will present deep learning models that exploit the temporal and multimodal structure of their sensory inputs to learn representations of individual objects, object categories, or abstract super-categories such as "kitchen object" in a fully unsupervised fashion. These models offer a parsimonious account of how abstract semantic knowledge may be rooted in children's embodied first-person experiences.

SeminarNeuroscience

Understanding reward-guided learning using large-scale datasets

Kim Stachenfeld
DeepMind, Columbia U
May 13, 2025

Understanding the neural mechanisms of reward-guided learning is a long-standing goal of computational neuroscience. Recent methodological innovations enable us to collect ever larger neural and behavioral datasets. This presents opportunities to achieve greater understanding of learning in the brain at scale, as well as methodological challenges. In the first part of the talk, I will discuss our recent insights into the mechanisms by which zebra finch songbirds learn to sing. Dopamine has been long thought to guide reward-based trial-and-error learning by encoding reward prediction errors. However, it is unknown whether the learning of natural behaviours, such as developmental vocal learning, occurs through dopamine-based reinforcement. Longitudinal recordings of dopamine and bird songs reveal that dopamine activity is indeed consistent with encoding a reward prediction error during naturalistic learning. In the second part of the talk, I will talk about recent work we are doing at DeepMind to develop tools for automatically discovering interpretable models of behavior directly from animal choice data. Our method, dubbed CogFunSearch, uses LLMs within an evolutionary search process in order to "discover" novel models in the form of Python programs that excel at accurately predicting animal behavior during reward-guided learning. The discovered programs reveal novel patterns of learning and choice behavior that update our understanding of how the brain solves reinforcement learning problems.

SeminarNeuroscience

Cognitive maps as expectations learned across episodes – a model of the two dentate gyrus blades

Andrej Bicanski
Max Planck Institute for Human Cognitive and Brain Sciences
Mar 11, 2025

How can the hippocampal system transition from episodic one-shot learning to a multi-shot learning regime, and what is the utility of the resultant neural representations? This talk will explore the role of dentate gyrus (DG) anatomy in this context. The canonical DG model suggests it performs pattern separation. More recent experimental results challenge this standard model, suggesting that DG function is more complex and also supports the precise binding of objects and events to space and the integration of information across episodes. Very recent studies attribute pattern separation and pattern integration to anatomically distinct parts of the DG (the suprapyramidal blade vs. the infrapyramidal blade). We propose a computational model that investigates this distinction. In the model, the two processing streams (potentially localized in separate blades) contribute to the storage of distinct episodic memories and the integration of information across episodes, respectively. The latter forms generalized expectations across episodes, eventually forming a cognitive map. We train the model with two data sets: MNIST and plausible entorhinal cortex inputs. The comparison between the two streams allows for the calculation of a prediction error, which can drive the storage of poorly predicted memories and the forgetting of well-predicted memories. We suggest that differential processing across the DG aids in the iterative construction of spatial cognitive maps to serve the generation of location-dependent expectations, while at the same time preserving episodic memory traces of idiosyncratic events.
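
A toy sketch of the storage rule described above, with hypothetical thresholds and learning rates (not the published model): episodes that the slowly integrated "map" stream predicts poorly are stored, and well-predicted traces are forgotten.

```python
import numpy as np

# Prediction-error-gated episodic storage (illustrative only).
rng = np.random.default_rng(1)
stored = []                                   # episodic store
prototype = np.zeros(8)                       # expectation integrated across episodes

for _ in range(100):
    episode = prototype + rng.normal(size=8)  # new experience
    error = np.linalg.norm(episode - prototype)      # prediction error
    if error > 2.0:                                  # poorly predicted -> store
        stored.append(episode)
    prototype += 0.05 * (episode - prototype)        # slow cross-episode integration
    stored = [m for m in stored                      # forget well-predicted traces
              if np.linalg.norm(m - prototype) > 1.0]

print(len(stored), "idiosyncratic episodes retained")
```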

SeminarNeuroscience

Brain circuits for spatial navigation

Ann Hermundstad, Ila Fiete, Barbara Webb
Janelia Research Campus; MIT; University of Edinburgh
Nov 28, 2024

In this webinar on spatial navigation circuits, three researchers—Ann Hermundstad, Ila Fiete, and Barbara Webb—discussed how diverse species solve navigation problems using specialized yet evolutionarily conserved brain structures. Hermundstad illustrated the fruit fly’s central complex, focusing on how hardwired circuit motifs (e.g., sinusoidal steering curves) enable rapid, flexible learning of goal-directed navigation. This framework combines internal heading representations with modifiable goal signals, leveraging activity-dependent plasticity to adapt to new environments. Fiete explored the mammalian head-direction system, demonstrating how population recordings reveal a one-dimensional ring attractor underlying continuous integration of angular velocity. She showed that key theoretical predictions—low-dimensional manifold structure, isometry, uniform stability—are experimentally validated, underscoring parallels to insect circuits. Finally, Webb described honeybee navigation, featuring path integration, vector memories, route optimization, and the famous waggle dance. She proposed that allocentric velocity signals and vector manipulation within the central complex can encode and transmit distances and directions, enabling both sophisticated foraging and inter-bee communication via dance-based cues.
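
A minimal sketch of the one-dimensional ring-attractor computation Fiete described (illustrative parameters, not any lab's code): a bump of activity over heading-tuned units is rotated in proportion to an angular-velocity input, integrating turns into a heading estimate.

```python
import numpy as np

# Ring of N heading-tuned units carrying an activity bump; an angular-velocity
# input shifts the bump, implementing continuous integration of turning.
N = 64
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)
bump = np.exp(3 * np.cos(theta))                 # bump centered at 0 rad
bump /= bump.sum()

def shift(bump, omega, dt=0.01):
    """Rotate the bump by omega*dt radians via circular interpolation."""
    k = omega * dt / (2 * np.pi / N)             # shift in units of neurons
    i = int(np.floor(k))
    f = k - i
    return (1 - f) * np.roll(bump, i) + f * np.roll(bump, i + 1)

omega = 1.5                                      # rad/s angular-velocity input
for _ in range(200):                             # integrate for 2 s
    bump = shift(bump, omega)
print("decoded heading:", theta[np.argmax(bump)])  # ~ omega * 2 s = 3 rad
```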

SeminarNeuroscience

Sensory cognition

SueYeon Chung, Srini Turaga
New York University; Janelia Research Campus
Nov 28, 2024

This webinar features presentations from SueYeon Chung (New York University) and Srinivas Turaga (HHMI Janelia Research Campus) on theoretical and computational approaches to sensory cognition. Chung introduced a “neural manifold” framework to capture how high-dimensional neural activity is structured into meaningful manifolds reflecting object representations. She demonstrated that manifold geometry—shaped by radius, dimensionality, and correlations—directly governs a population’s capacity for classifying or separating stimuli under nuisance variations. Applying these ideas as a data analysis tool, she showed how measuring object-manifold geometry can explain transformations along the ventral visual stream and suggested that manifold principles also yield better self-supervised neural network models resembling mammalian visual cortex. Turaga described simulating the entire fruit fly visual pathway using its connectome, modeling 64 key cell types in the optic lobe. His team’s systematic approach—combining sparse connectivity from electron microscopy with simple dynamical parameters—recapitulated known motion-selective responses and produced novel testable predictions. Together, these studies underscore the power of combining connectomic detail, task objectives, and geometric theories to unravel neural computations bridging from stimuli to cognitive functions.

SeminarNeuroscience

Decomposing motivation into value and salience

Philippe Tobler
University of Zurich
Oct 31, 2024

Humans and other animals approach reward and avoid punishment and pay attention to cues predicting these events. Such motivated behavior thus appears to be guided by value, which directs behavior towards or away from positively or negatively valenced outcomes. Moreover, it is facilitated by (top-down) salience, which enhances attention to behaviorally relevant learned cues predicting the occurrence of valenced outcomes. Using human neuroimaging, we recently separated value (ventral striatum, posterior ventromedial prefrontal cortex) from salience (anterior ventromedial cortex, occipital cortex) in the domain of liquid reward and punishment. Moreover, we investigated potential drivers of learned salience: the probability and uncertainty with which valenced and non-valenced outcomes occur. We find that the brain dissociates valenced from non-valenced probability and uncertainty, which indicates that reinforcement matters for the brain, in addition to information provided by probability and uncertainty alone, regardless of valence. Finally, we assessed learning signals (unsigned prediction errors) that may underpin the acquisition of salience. Particularly the insula appears to be central for this function, encoding a subjective salience prediction error, similarly at the time of positively and negatively valenced outcomes. However, it appears to employ domain-specific time constants, leading to stronger salience signals in the aversive than the appetitive domain at the time of cues. These findings explain why previous research associated the insula with both valence-independent salience processing and with preferential encoding of the aversive domain. More generally, the distinction of value and salience appears to provide a useful framework for capturing the neural basis of motivated behavior.
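
In the standard formalism (stated here for orientation; the study's models are richer), value learning tracks the signed prediction error while salience learning tracks its unsigned magnitude:

$$\delta_t = r_t - V_t \;\; \text{(signed, drives value)}, \qquad |\delta_t| \;\; \text{(unsigned, drives salience)}$$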

SeminarNeuroscience

Visuomotor learning of location, action, and prediction

Markus Lappe
University of Muenster
Jun 17, 2024

SeminarNeuroscience

Modelling the fruit fly brain and body

Srinivas Turaga
HHMI | Janelia
May 14, 2024

Through recent advances in microscopy, we now have an unprecedented view of the brain and body of the fruit fly Drosophila melanogaster. We now know the connectivity at single neuron resolution across the whole brain. How do we translate these new measurements into a deeper understanding of how the brain processes sensory information and produces behavior? I will describe two computational efforts to model the brain and the body of the fruit fly. First, I will describe a new modeling method which makes highly accurate predictions of neural activity in the fly visual system as measured in the living brain, using only measurements of its connectivity from a dead brain [1], joint work with Jakob Macke. Second, I will describe a whole body physics simulation of the fruit fly which can accurately reproduce its locomotion behaviors, both flight and walking [2], joint work with Google DeepMind.

SeminarNeuroscience

Learning representations of specifics and generalities over time

Anna Schapiro
University of Pennsylvania
Apr 11, 2024

There is a fundamental tension between storing discrete traces of individual experiences, which allows recall of particular moments in our past without interference, and extracting regularities across these experiences, which supports generalization and prediction in similar situations in the future. One influential proposal for how the brain resolves this tension is that it separates the processes anatomically into Complementary Learning Systems, with the hippocampus rapidly encoding individual episodes and the neocortex slowly extracting regularities over days, months, and years. But this does not explain our ability to learn and generalize from new regularities in our environment quickly, often within minutes. We have put forward a neural network model of the hippocampus that suggests that the hippocampus itself may contain complementary learning systems, with one pathway specializing in the rapid learning of regularities and a separate pathway handling the region’s classic episodic memory functions. This proposal has broad implications for how we learn and represent novel information of specific and generalized types, which we test across statistical learning, inference, and category learning paradigms. We also explore how this system interacts with slower-learning neocortical memory systems, with empirical and modeling investigations into how the hippocampus shapes neocortical representations during sleep. Together, the work helps us understand how structured information in our environment is initially encoded and how it then transforms over time.

SeminarNeuroscienceRecording

Bayesian expectation in the perception of the timing of stimulus sequences

Max Di Luca
University of Birmingham
Dec 12, 2023

In this virtual journal club, Dr Di Luca will present findings from a series of psychophysical investigations in which he measured sensitivity and bias in the perception of the timing of stimuli. He will show how improved detection with longer sequences, and biases in reporting isochrony, can be accounted for by optimal statistical predictions. He also found that the timing of stimuli that occasionally deviate from a regularly paced sequence is perceptually distorted to appear more regular; this distortion depends on whether the context in which these sequences are presented is itself regular. Dr Di Luca will present a Bayesian model for the combination of dynamically updated expectations, in the form of a priori probabilities, with incoming sensory information. These findings contribute to our understanding of how the brain processes temporal information to shape perceptual experience.
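
A generic Gaussian version of this combination, for orientation (the talk's model is richer): with prior expectation $\mu_p$ of variance $\sigma_p^2$ and a sensory timing measurement $m$ of variance $\sigma_m^2$, the posterior estimate is the precision-weighted average

$$\hat{t} = \frac{\sigma_m^2\,\mu_p + \sigma_p^2\,m}{\sigma_p^2 + \sigma_m^2}$$

which pulls perceived timing toward the expected regularity, producing exactly the kind of perceptual regularization described above.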

SeminarNeuroscience

Connectome-based models of neurodegenerative disease

Jacob Vogel
Lund University
Dec 4, 2023

Neurodegenerative diseases involve accumulation of aberrant proteins in the brain, leading to brain damage and progressive cognitive and behavioral dysfunction. Many gaps exist in our understanding of how these diseases initiate and how they progress through the brain. However, evidence has accumulated supporting the hypothesis that aberrant proteins can be transported using the brain’s intrinsic network architecture — in other words, using the brain’s natural communication pathways. This theory forms the basis of connectome-based computational models, which combine real human data and theoretical disease mechanisms to simulate the progression of neurodegenerative diseases through the brain. In this talk, I will first review work leading to the development of connectome-based models, and work from my lab and others that have used these models to test hypothetical modes of disease progression. Second, I will discuss the future and potential of connectome-based models to achieve clinically useful individual-level predictions, as well as to generate novel biological insights into disease progression. Along the way, I will highlight recent work by my lab and others that is already moving the needle toward these lofty goals.
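
A common minimal form of such models is diffusion of pathology along connectome edges; a toy sketch under that assumption (illustrative, not the speaker's code):

```python
import numpy as np

# Network-diffusion model sketch: pathology x spreads along the connectome,
# dx/dt = -beta * L @ x, with L the graph Laplacian of the connectivity matrix.
rng = np.random.default_rng(2)
A = rng.random((90, 90))
A = (A + A.T) / 2                         # toy symmetric "connectome"
L = np.diag(A.sum(axis=1)) - A            # graph Laplacian
x = np.zeros(90)
x[0] = 1.0                                # seed pathology in one region

beta, dt = 0.5, 0.01
for _ in range(100):
    x -= dt * beta * (L @ x)              # Euler step of the diffusion equation
print("regions above threshold:", int((x > 0.005).sum()))
```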

SeminarNeuroscience

Prefrontal mechanisms involved in learning distractor-resistant working memory in a dual task

Albert Compte
IDIBAPS
Nov 16, 2023

Working memory (WM) is a cognitive function that allows the short-term maintenance and manipulation of information that is no longer accessible to the senses. It relies on temporarily storing stimulus features in the activity of neuronal populations. To protect these dynamics from distraction, it has been proposed that pre- and post-distraction population activity decomposes into orthogonal subspaces. If orthogonalization is necessary to avoid WM distraction, it should emerge as performance in the task improves. We sought evidence of WM orthogonalization learning, and of the underlying mechanisms, by analyzing calcium imaging data from the prelimbic (PrL) and anterior cingulate (ACC) cortices of mice as they learned to perform an olfactory dual task. The dual task combines an outer Delayed Paired-Association task (DPA) with an inner Go-NoGo task. We examined how neuronal activity reflected the process of protecting the DPA sample information against Go/NoGo distractors. As mice learned the task, we measured the overlap of the neural activity with the low-dimensional subspaces that encode sample or distractor odors. Early in training, pre-distraction activity overlapped with both sample and distractor subspaces. Later in training, pre-distraction activity was strictly confined to the sample subspace, resulting in a more robust sample code. To gain mechanistic insight into how these low-dimensional WM representations evolve with learning, we built a recurrent spiking network model of excitatory and inhibitory neurons with low-rank connections. The model links learning to (1) the orthogonalization of sample and distractor WM subspaces and (2) the orthogonalization of each subspace with respect to irrelevant inputs. We validated (1) by measuring the angular distance between the sample and distractor subspaces through learning in the data. Prediction (2) was validated in PrL through photoinhibition of ACC-to-PrL inputs, which induced early-training neural dynamics in well-trained animals. In the model, learning drives the network from a double-well attractor toward a more continuous ring-attractor regime. We tested signatures of this dynamical evolution in the experimental data by estimating the energy landscape of the dynamics on a one-dimensional ring. In sum, our study defines the network dynamics underlying the process of learning to shield WM representations from distracting tasks.
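
One way to quantify the orthogonalization described here (an illustrative sketch, not the study's analysis code) is the principal angle between estimated coding subspaces:

```python
import numpy as np

# Smallest principal angle between two coding subspaces (columns = basis
# directions over neurons); ~90 degrees indicates orthogonalized codes.
def subspace_angle(U, V):
    Qu, _ = np.linalg.qr(U)
    Qv, _ = np.linalg.qr(V)
    s = np.linalg.svd(Qu.T @ Qv, compute_uv=False)
    return np.degrees(np.arccos(np.clip(s.max(), -1.0, 1.0)))

rng = np.random.default_rng(3)
sample = rng.normal(size=(100, 2))        # e.g., sample-odor subspace basis
distractor = rng.normal(size=(100, 2))    # e.g., distractor subspace basis
print(f"{subspace_angle(sample, distractor):.1f} deg")  # near 90 if orthogonal
```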

SeminarNeuroscience

NII Methods (journal club): NeuroQuery, comprehensive meta-analysis of human brain mapping

Andy Jahn
fMRI Lab, University of Michigan
Aug 31, 2023

We will discuss this paper on NeuroQuery, a relatively new web-based meta-analysis tool: https://elifesciences.org/articles/53385.pdf. It differs from Neurosynth in that it generates meta-analysis maps using predictive modeling from the text string provided at the prompt, instead of performing inferential statistics on the overlap of activation across studies. This allows the user to generate predictive maps for more nuanced cognitive processes, especially for clinical populations that may be underrepresented in the literature compared to controls, and can be useful for generating predictions about where activity will be for one's own study, and for creating ROIs.
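
The flavor of that predictive modeling can be conveyed with a drastically reduced stand-in (toy data and generic ridge regression, not the actual NeuroQuery pipeline, data, or API): regress activation maps on text features so that a novel query yields a predicted map.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge

# Toy text -> brain-map regression; corpus and maps are fabricated stand-ins.
corpus = ["spatial working memory task", "reward prediction error learning",
          "visual motion perception", "auditory speech comprehension"]
maps = np.random.default_rng(4).random((4, 500))   # fake per-study voxel maps

vectorizer = TfidfVectorizer().fit(corpus)
model = Ridge(alpha=1.0).fit(vectorizer.transform(corpus), maps)

predicted_map = model.predict(vectorizer.transform(["reward learning"]))
print(predicted_map.shape)                         # (1, 500): one predicted map
```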

SeminarNeuroscience

Algonauts 2023 winning paper journal club (fMRI encoding models)

Huzheng Yang, Paul Scotti
Aug 17, 2023

Algonauts 2023 was a challenge to create the best model for predicting fMRI brain activity given a seen image. The Huze team dominated the competition and released a preprint detailing their process. This journal club meeting will involve open discussion of the paper, with Q&A with Huze. Paper: https://arxiv.org/pdf/2308.01175.pdf A related paper, also from Huze, that we can discuss: https://arxiv.org/pdf/2307.14021.pdf

SeminarNeuroscience

1.8 billion regressions to predict fMRI (journal club)

Mihir Tripathy
Jul 27, 2023

Public journal club. This week Mihir will present the 1.8 billion regressions paper (https://www.biorxiv.org/content/10.1101/2022.03.28.485868v2), in which the authors use hundreds of pretrained model embeddings to find which best predict fMRI activity.
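
The basic recipe, sketched with toy data (illustrative of the approach at a vastly smaller scale than the paper's): regress voxel responses on pretrained-model embeddings of the stimuli and score generalization on held-out stimuli.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

# Toy fMRI encoding model: embeddings stand in for a pretrained model's
# features; voxel responses are synthesized with a known linear mapping.
rng = np.random.default_rng(5)
embeddings = rng.normal(size=(200, 64))            # one embedding per stimulus
voxels = embeddings @ rng.normal(size=(64, 10)) + rng.normal(size=(200, 10))

X_tr, X_te, y_tr, y_te = train_test_split(embeddings, voxels, random_state=0)
enc = RidgeCV(alphas=np.logspace(-2, 4, 13)).fit(X_tr, y_tr)
print("held-out R^2:", round(enc.score(X_te, y_te), 3))
```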

SeminarNeuroscience

Learning to Express Reward Prediction Error-like Dopaminergic Activity Requires Plastic Representations of Time

Harel Shouval
The University of Texas at Houston
Jun 13, 2023

The dominant theoretical framework to account for reinforcement learning in the brain is temporal difference (TD) reinforcement learning. The TD framework predicts that some neuronal elements should represent the reward prediction error (RPE), meaning that they signal the difference between the expected future rewards and the actual rewards. The prominence of the TD theory arises from the observation that the firing properties of dopaminergic neurons in the ventral tegmental area appear similar to those of RPE model-neurons in TD learning. Previous implementations of TD learning assume a fixed temporal basis for each stimulus that might eventually predict a reward. Here we show that such a fixed temporal basis is implausible and that certain predictions of TD learning are inconsistent with experiments. We propose instead an alternative theoretical framework, termed FLEX (Flexibly Learned Errors in Expected Reward). In FLEX, feature-specific representations of time are learned, allowing neural representations of stimuli to adjust their timing and relation to rewards in an online manner. In FLEX, dopamine acts as an instructive signal that helps build temporal models of the environment. FLEX is a general theoretical framework that has many possible biophysical implementations. To show that FLEX is a feasible approach, we present a specific biophysically plausible model that implements its principles. We show that this implementation can account for various reinforcement learning paradigms, and that its results and predictions are consistent with a preponderance of both existing and reanalyzed experimental data.
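
For context, the fixed temporal basis under critique is the textbook assumption that each post-cue time bin is its own state; a minimal TD(0) baseline of that kind (a standard sketch, not FLEX itself) looks like this:

```python
import numpy as np

# TD(0) with a fixed temporal basis: one value per time bin after cue onset.
# Over trials, value (and hence the RPE) propagates back from the reward time
# toward the cue -- the behavior whose assumptions FLEX challenges.
n_steps, gamma, alpha = 25, 0.98, 0.1
V = np.zeros(n_steps + 1)                      # value per time bin; V[-1] = 0

for trial in range(1000):
    for t in range(n_steps):
        r = 1.0 if t == n_steps - 1 else 0.0   # reward at a fixed delay
        delta = r + gamma * V[t + 1] - V[t]    # reward prediction error
        V[t] += alpha * delta
print(np.round(V[:5], 2))                      # value has propagated to the cue
```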

SeminarNeuroscience

Computational models of spinal locomotor circuitry

Simon Danner
Drexel University, Philadelphia, USA
Jun 13, 2023

To effectively move in complex and changing environments, animals must control locomotor speed and gait, while precisely coordinating and adapting limb movements to the terrain. The underlying neuronal control is facilitated by circuits in the spinal cord, which integrate supraspinal commands and afferent feedback signals to produce coordinated rhythmic muscle activations necessary for stable locomotion. I will present a series of computational models investigating dynamics of central neuronal interactions as well as a neuromechanical model that integrates neuronal circuits with a model of the musculoskeletal system. These models closely reproduce speed-dependent gait expression and experimentally observed changes following manipulation of multiple classes of genetically-identified neuronal populations. I will discuss the utility of these models in providing experimentally testable predictions for future studies.

SeminarNeuroscience

How curiosity affects learning and information seeking via the dopaminergic circuit

Matthias J. Gruber
Cardiff University, UK
Jun 12, 2023

Over the last decade, research on curiosity – the desire to seek new information – has been rapidly growing. Several studies have shown that curiosity elicits activity within the dopaminergic circuit and thereby enhances hippocampus-dependent learning. However, given this new field of research, we do not have a good understanding yet of (i) how curiosity-based learning changes across the lifespan, (ii) why some people show better learning improvements due to curiosity than others, and (iii) whether lab-based research on curiosity translates to how curiosity affects information seeking in real life. In this talk, I will present a series of behavioural and neuroimaging studies that address these three questions about curiosity. First, I will present findings on how curiosity and interest affect learning differently in childhood and adolescence. Second, I will show data on how inter-individual differences in the magnitude of curiosity-based learning depend on the strength of resting-state functional connectivity within the cortico-mesolimbic dopaminergic circuit. Third, I will present findings on how the level of resting-state functional connectivity within this circuit is also associated with the frequency of real-life information seeking (i.e., about Covid-19-related news). Together, our findings help to refine our recently proposed framework – the Prediction, Appraisal, Curiosity, and Exploration (PACE) framework – that attempts to integrate theoretical ideas on the neurocognitive mechanisms of how curiosity is elicited, and how curiosity enhances learning and information seeking. Furthermore, our findings highlight the importance of curiosity research to better understand how curiosity can be harnessed to improve learning and information seeking in real life.

SeminarArtificial IntelligenceRecording

Diverse applications of artificial intelligence and mathematical approaches in ophthalmology

Tiarnán Keenan
National Eye Institute (NEI)
Jun 5, 2023

Ophthalmology is ideally placed to benefit from recent advances in artificial intelligence. It is a highly image-based specialty and provides unique access to the microvascular circulation and the central nervous system. This talk will demonstrate diverse applications of machine learning and deep learning techniques in ophthalmology, including in age-related macular degeneration (AMD), the leading cause of blindness in industrialized countries, and cataract, the leading cause of blindness worldwide. This will include deep learning approaches to automated diagnosis, quantitative severity classification, and prognostic prediction of disease progression, both from images alone and accompanied by demographic and genetic information. The approaches discussed will include deep feature extraction, label transfer, and multi-modal, multi-task training. Cluster analysis, an unsupervised machine learning approach to data classification, will be demonstrated by its application to geographic atrophy in AMD, including exploration of genotype-phenotype relationships. Finally, mediation analysis will be discussed, with the aim of dissecting complex relationships between AMD disease features, genotype, and progression.

SeminarNeuroscience

Richly structured reward predictions in dopaminergic learning circuits

Angela J. Langdon
National Institute of Mental Health at National Institutes of Health (NIH)
May 16, 2023

Theories from reinforcement learning have been highly influential for interpreting neural activity in the biological circuits critical for animal and human learning. Central among these is the identification of phasic activity in dopamine neurons as a reward prediction error signal that drives learning in basal ganglia and prefrontal circuits. However, recent findings suggest that dopaminergic prediction error signals have access to complex, structured reward predictions and are sensitive to more properties of outcomes than learning theories with simple scalar value predictions might suggest. Here, I will present recent work in which we probed the identity-specific structure of reward prediction errors in an odor-guided choice task and found evidence for multiple predictive “threads” that segregate reward predictions, and reward prediction errors, according to the specific sensory features of anticipated outcomes. Our results point to an expanded class of neural reinforcement learning algorithms in which biological agents learn rich associative structure from their environment and leverage it to build reward predictions that include information about the specific, and perhaps idiosyncratic, features of available outcomes, using these to guide behavior in even quite simple reward learning tasks.

SeminarNeuroscience

Quasicriticality and the quest for a framework of neuronal dynamics

Leandro Jonathan Fosque
Beggs lab, IU Bloomington
May 2, 2023

Critical phenomena abound in nature, from forest fires and earthquakes to avalanches in sand and neuronal activity. Since the 2003 publication by Beggs & Plenz on neuronal avalanches, a growing body of work has suggested that the brain homeostatically regulates itself to operate near a critical point where information processing is optimal. At this critical point, incoming activity is neither amplified (supercritical) nor damped (subcritical), but approximately preserved as it passes through neural networks. Departures from the critical point have been associated with conditions of poor neurological health such as epilepsy, Alzheimer's disease, and depression. One complication with this picture is that the critical point assumes no external input, yet biological neural networks are constantly bombarded by external input. How, then, is the brain able to adapt homeostatically near the critical point? We will see that the theory of quasicriticality, an organizing principle for brain dynamics, can account for this paradoxical situation. As external stimuli drive the cortex, quasicriticality predicts a departure from criticality that nonetheless maintains optimal properties for information transmission. Simulations and experimental data confirm these predictions and suggest new ones that could be tested soon. Most importantly, we will see how this organizing principle could help in the search for biomarkers that could soon be tested in clinical studies.
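
The control parameter in this picture is the branching ratio; a minimal branching-process sketch (illustrative, with made-up parameters) shows why sigma = 1 is special, and the external-drive term h is the knob that quasicriticality is concerned with:

```python
import numpy as np

# Avalanches in a branching process: each active unit triggers Poisson(sigma)
# descendants; h models external drive (set to 0 here for pure avalanches).
rng = np.random.default_rng(6)

def avalanche_size(sigma, h=0.0, max_size=10_000):
    active, size = 1, 0
    while active and size < max_size:
        size += active
        active = rng.poisson(sigma * active) + rng.poisson(h)
    return size

for sigma in (0.8, 1.0, 1.2):      # subcritical, critical, supercritical
    sizes = [avalanche_size(sigma) for _ in range(2000)]
    print(f"sigma={sigma}: mean avalanche size {np.mean(sizes):.1f}")
```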

SeminarNeuroscienceRecording

The sense of agency as an explorative role in our perception and action

Wen Wen
The University of Tokyo
Apr 17, 2023

The sense of agency refers to the subjective feeling of controlling one's own actions and, through them, external events. Why is this subjective feeling important for humans? Is it just a by-product of our actions? Previous studies have shown that the sense of agency can affect the intensity of sensory input, because we predict the input from our motor intention. However, my research has found that the sense of agency plays more roles than just prediction. It enhances perceptual processing of sensory input and potentially helps to harvest more information about the link between the external world and the self. Furthermore, our recent research found both indirect and direct evidence that the sense of agency is important for people's exploratory behaviors, and this may be linked to proximal exploitation of one's control in the environment. In this talk, I will also introduce the paradigms we use to study the sense of agency as a result of perceptual processes, and our findings on individual differences in this sense and their implications.

SeminarNeuroscience

Relations and Predictions in Brains and Machines

Kim Stachenfeld
DeepMind
Apr 6, 2023

Humans and animals learn and plan with flexibility and efficiency well beyond that of modern Machine Learning methods. This is hypothesized to owe in part to the ability of animals to build structured representations of their environments, and modulate these representations to rapidly adapt to new settings. In the first part of this talk, I will discuss theoretical work describing how learned representations in hippocampus enable rapid adaptation to new goals by learning predictive representations, while entorhinal cortex compresses these predictive representations with spectral methods that support smooth generalization among related states. I will also cover recent work extending this account, in which we show how the predictive model can be adapted to the probabilistic setting to describe a broader array of generalization results in humans and animals, and how entorhinal representations can be modulated to support sample generation optimized for different behavioral states. In the second part of the talk, I will overview some of the ways in which we have combined many of the same mathematical concepts with state-of-the-art deep learning methods to improve efficiency and performance in machine learning applications like physical simulation, relational reasoning, and design.
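
The predictive representation referenced here is the successor representation; a minimal sketch of it and of its spectral compression (standard definitions, not DeepMind code):

```python
import numpy as np

# Successor representation M = (I - gamma*T)^-1: expected discounted future
# state occupancies under transition matrix T. Its low-frequency eigenvectors
# give smooth, compressed state representations of the kind discussed here.
n = 20
T = np.zeros((n, n))
for s in range(n):                            # random walk on a ring of states
    T[s, (s - 1) % n] = T[s, (s + 1) % n] = 0.5

gamma = 0.95
M = np.linalg.inv(np.eye(n) - gamma * T)      # successor representation
vals, vecs = np.linalg.eigh((M + M.T) / 2)    # spectral decomposition
print(np.round(vecs[:, -3], 2))               # a smooth, periodic mode over states
```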

SeminarNeuroscienceRecording

The strongly recurrent regime of cortical networks

David Dahmen
Jülich Research Centre, Germany
Mar 28, 2023

Modern electrophysiological recordings simultaneously capture single-unit spiking activities of hundreds of neurons. These neurons exhibit highly complex coordination patterns. Where does this complexity stem from? One candidate is the ubiquitous heterogeneity in connectivity of local neural circuits. Studying neural network dynamics in the linearized regime and using tools from statistical field theory of disordered systems, we derive relations between structure and dynamics that are readily applicable to subsampled recordings of neural circuits: Measuring the statistics of pairwise covariances allows us to infer statistical properties of the underlying connectivity. Applying our results to spontaneous activity of macaque motor cortex, we find that the underlying network operates in a strongly recurrent regime. In this regime, network connectivity is highly heterogeneous, as quantified by a large radius of bulk connectivity eigenvalues. Being close to the point of linear instability, this dynamical regime predicts a rich correlation structure, a large dynamical repertoire, long-range interaction patterns, relatively low dimensionality and a sensitive control of neuronal coordination. These predictions are verified in analyses of spontaneous activity of macaque motor cortex and mouse visual cortex. Finally, we show that even microscopic features of connectivity, such as connection motifs, systematically scale up to determine the global organization of activity in neural circuits.
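
For orientation, the structure-dynamics relation exploited here is the standard linear-response result: for a linearized rate network with connectivity $W$ and input noise covariance $D$, the time-integrated pairwise covariances obey

$$C = (\mathbb{1} - W)^{-1}\, D\, (\mathbb{1} - W)^{-\top}$$

so the statistics of measured covariances constrain the statistics of $W$, in particular the spectral radius of its bulk eigenvalues, which indexes how strongly recurrent the circuit is.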

SeminarNeuroscienceRecording

Are place cells just memory cells? Probably yes

Stefano Fusi
Columbia University, New York
Mar 21, 2023

Neurons in the rodent hippocampus appear to encode the position of the animal in physical space during movement. Individual "place cells" fire in restricted sub-regions of an environment, a feature often taken as evidence that the hippocampus encodes a map of space that subserves navigation. But these same neurons exhibit complex responses to many other variables that defy explanation by position alone, and the hippocampus is known to be more broadly critical for memory formation. Here we elaborate and test a theory of hippocampal coding which produces place cells as a general consequence of efficient memory coding. We constructed neural networks that actively exploit the correlations between memories in order to learn compressed representations of experience. Place cells readily emerged in the trained model, due to the correlations in sensory input between experiences at nearby locations. Notably, these properties were highly sensitive to the compressibility of the sensory environment, with place field size and population coding level in dynamic opposition to optimally encode the correlations between experiences. The effects of learning were also strongly biphasic: nearby locations are represented more similarly following training, while locations with intermediate similarity become increasingly decorrelated, both distance-dependent effects that scaled with the compressibility of the input features. Using virtual reality and 2-photon functional calcium imaging in head-fixed mice, we recorded the simultaneous activity of thousands of hippocampal neurons during virtual exploration to test these predictions. Varying the compressibility of sensory information in the environment produced systematic changes in place cell properties that reflected the changing input statistics, consistent with the theory. We similarly identified representational plasticity during learning, which produced a distance-dependent exchange between compression and pattern separation. These results motivate a more domain-general interpretation of hippocampal computation, one that is naturally compatible with earlier theories on the circuit's importance for episodic memory formation. Work done in collaboration with James Priestley, Lorenzo Posani, Marcus Benna, Attila Losonczy.

SeminarNeuroscienceRecording

Neuronal oscillations and prediction in perception

Concetta Morrone
University of Pisa
Mar 13, 2023

SeminarNeuroscience

Central-peripheral dichotomy in vision: its motivation and predictions (such as in visual illusions)

Zhaoping Li
Mar 10, 2023

SeminarNeuroscienceRecording

Verb metaphors are processed as analogies

Daniel King
Northwestern University
Mar 8, 2023

Metaphor is a pervasive phenomenon in language and cognition. To date, the vast majority of psycholinguistic research on metaphor has focused on noun-noun metaphors of the form An X is a Y (e.g., My job is a jail). Yet there is evidence that verb metaphor (e.g., I sailed through my exams) is more common. Despite this, comparatively little work has examined how verb metaphors are processed. In this talk, I will propose a novel account for verb metaphor comprehension: verb metaphors are understood in the same way that analogies are—as comparisons processed via structure-mapping. I will discuss the predictions that arise from applying the analogical framework to verb metaphor and present a series of experiments showing that verb metaphoric extension is consistent with those predictions.

SeminarNeuroscienceRecording

Implications of Vector-space models of Relational Concepts

Priya Kalra
Western University
Jan 25, 2023

Vector-space models are used frequently to compare similarity and dimensionality among entity concepts. What happens when we apply these models to relational concepts? What is the evidence that such models do apply to relational concepts? If we use such a model, then one implication is that maximizing surface-feature variation should improve relational concept learning. For example, in STEM instruction, the effectiveness of teaching by analogy is often limited by students' focus on superficial features of the source and target exemplars. However, in contrast to the prediction of the vector-space computational model, the strategy of progressive alignment (moving from perceptually similar to perceptually different targets) has been suggested to address this issue (Gentner & Hoyos, 2017), and human behavioral evidence has shown benefits from progressive alignment. Here I will present some preliminary data that support the computational approach. Participants were explicitly instructed to match stimuli based on relations while the perceptual similarity of the stimuli varied parametrically. We found that lower perceptual similarity reduced accurate relational matching. This finding demonstrates that perceptual similarity may interfere with relational judgements, but also hints at why progressive alignment may be effective. These are preliminary, exploratory data, and I hope to receive feedback on the framework and to start a discussion on the utility of vector-space models for relational concepts in general.

SeminarNeuroscienceRecording

Sampling the environment with body-brain rhythms

Antonio Criscuolo
Maastricht University
Jan 24, 2023

Since Darwin, comparative research has shown that most animals share basic timing capacities, such as the ability to process temporal regularities and produce rhythmic behaviors. What seems to be more exclusive, however, are the capacities to generate temporal predictions and to display anticipatory behavior at salient time points. These abilities are associated with subcortical structures like the basal ganglia (BG) and cerebellum (CE), which are more developed in humans as compared to nonhuman animals. In the first research line, we investigated the basic capacities to extract temporal regularities from the acoustic environment and produce temporal predictions. We did so by adopting a comparative and translational approach, making use of a unique EEG dataset including 2 macaque monkeys, 20 healthy young participants, 11 healthy old participants, and 22 stroke patients, 11 with focal lesions in the BG and 11 in the CE. In the second research line, we holistically explore the functional relevance of body-brain physiological interactions in human behavior. A series of planned studies investigates the functional mechanisms by which body signals (e.g., respiratory and cardiac rhythms) interact with and modulate neurocognitive functions from rest and sleep states to action and perception. This project supports the effort towards individual profiling: are individuals' timing capacities (e.g., rhythm perception and production) and general behavior (e.g., individual walking and speaking rates) influenced or shaped by body-brain interactions?

SeminarNeuroscience

Extracting computational mechanisms from neural data using low-rank RNNs

Adrian Valente
Ecole Normale Supérieure
Jan 10, 2023

An influential theory in systems neuroscience suggests that brain function can be understood through low-dimensional dynamics [Vyas et al 2020]. However, a challenge in this framework is that a single computational task may involve a range of dynamic processes. To understand which processes are at play in the brain, it is important to use data on neural activity to constrain models. In this study, we present a method for extracting low-dimensional dynamics from data using low-rank recurrent neural networks (lrRNNs), a highly expressive and understandable type of model [Mastrogiuseppe & Ostojic 2018, Dubreuil, Valente et al. 2022]. We first test our approach using synthetic data created from full-rank RNNs that have been trained on various brain tasks. We find that lrRNNs fitted to neural activity allow us to identify the collective computational processes and make new predictions for inactivations in the original RNNs. We then apply our method to data recorded from the prefrontal cortex of primates during a context-dependent decision-making task. Our approach enables us to assign computational roles to the different latent variables and provides a mechanistic model of the recorded dynamics, which can be used to perform in silico experiments like inactivations and provide testable predictions.
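
A minimal sketch of the model class (rank-one and hand-set for brevity; fitted lrRNNs use higher ranks and weights learned from data):

```python
import numpy as np

# Rank-one RNN: connectivity J = m n^T / N confines activity to the direction
# m, so a single latent variable kappa summarizes the collective dynamics.
N, dt, tau = 500, 0.1, 1.0
rng = np.random.default_rng(8)
m = rng.normal(size=N)
n = 2 * m + rng.normal(size=N)             # overlap with m gives positive feedback
J = np.outer(m, n) / N                     # rank-one connectivity

x = 0.1 * m                                # small initial state along m
for _ in range(500):
    x += dt / tau * (-x + J @ np.tanh(x))  # standard rate dynamics
kappa = n @ np.tanh(x) / N                 # latent variable along the rank
print(round(float(kappa), 3))              # settles at a nonzero fixed point
```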

SeminarNeuroscience

A possible role of the posterior alpha as a railroad switcher between dorsal and ventral pathways

Liad Mudrik/Walter Sinnott-Armstrong/Ivano Triggiani/Nick Byrd
Jan 9, 2023

Suppose you are on your favorite touchscreen device consciously and deliberately deciding which emails to read or delete. In other words, you are consciously and intentionally looking, tapping, and swiping. Now suppose that you are doing this while neuroscientists are recording your brain activity. Eventually, the neuroscientists are familiar enough with your brain activity and behavior that they run an experiment with subliminal cues which reveals that your looking, tapping, and swiping seem to be determined by a random switch in your brain. You are not aware of it, or of its impact on your decisions or movements. Would these predictions undermine your sense of free will? Some have argued that they should. Although this inference from unreflective and/or random intention mechanisms to free-will skepticism may seem intuitive at first, there are already objections to it. So, even if this thought experiment is plausible, it may not actually undermine our sense of free will.

SeminarNeuroscienceRecording

Motor contribution to auditory temporal predictions

Benjamin Morillon
Aix Marseille Univ, Inserm, INS, Institut de Neurosciences des Systèmes
Dec 13, 2022

Temporal predictions are fundamental instruments for facilitating sensory selection, allowing humans to exploit regularities in the world. Recent evidence indicates that the motor system instantiates predictive timing mechanisms, helping to synchronize temporal fluctuations of attention with the timing of events in a task-relevant stream, thus facilitating sensory selection. Accordingly, in the auditory domain auditory-motor interactions are observed during perception of speech and music, two temporally structured sensory streams. I will present a behavioral and neurophysiological account for this theory and will detail the parameters governing the emergence of this auditory-motor coupling, through a set of behavioral and magnetoencephalography (MEG) experiments.

SeminarNeuroscience

Mapping learning and decision-making algorithms onto brain circuitry

Ilana Witten
Princeton
Nov 17, 2022

In the first half of my talk, I will discuss our recent work on the midbrain dopamine system. The hypothesis that midbrain dopamine neurons broadcast an error signal for the prediction of reward is among the great successes of computational neuroscience. However, our recent results contradict a core aspect of this theory: that the neurons uniformly convey a scalar, global signal. I will review this work, as well as our new efforts to update models of the neural basis of reinforcement learning with our data. In the second half of my talk, I will discuss our recent findings of state-dependent decision-making mechanisms in the striatum.

SeminarNeuroscienceRecording

Shallow networks run deep: How peripheral preprocessing facilitates odor classification

Yonatan Aljadeff
University of California, San Diego (UCSD)
Nov 8, 2022

Drosophila olfactory sensory hairs ("sensilla") typically house two olfactory receptor neurons (ORNs) which can laterally inhibit each other via electrical ("ephaptic") coupling. ORN pairing is highly stereotyped and genetically determined. Thus, olfactory signals arriving in the Antennal Lobe (AL) have been pre-processed by a fixed and shallow network at the periphery. To uncover the functional significance of this organization, we developed a nonlinear phenomenological model of asymmetrically coupled ORNs responding to odor mixture stimuli. We derived an analytical solution to the ORNs’ dynamics, which shows that the peripheral network can extract the valence of specific odor mixtures via transient amplification. Our model predicts that for efficient read-out of the amplified valence signal there must exist specific patterns of downstream connectivity that reflect the organization at the periphery. Analysis of AL→Lateral Horn (LH) fly connectomic data reveals evidence directly supporting this prediction. We further studied the effect of ephaptic coupling on olfactory processing in the AL→Mushroom Body (MB) pathway. We show that stereotyped ephaptic interactions between ORNs lead to a clustered odor representation of glomerular responses. Such clustering in the AL is an essential assumption of theoretical studies on odor recognition in the MB. Together our work shows that preprocessing of olfactory stimuli by a fixed and shallow network increases sensitivity to specific odor mixtures, and aids in the learning of novel olfactory stimuli. Work led by Palka Puri, in collaboration with Chih-Ying Su and Shiuan-Tze Wu.

SeminarNeuroscienceRecording

No Free Lunch from Deep Learning in Neuroscience: A Case Study through Models of the Entorhinal-Hippocampal Circuit

Rylan Schaeffer
Fiete lab, MIT
Nov 1, 2022

Research in Neuroscience, as in many scientific disciplines, is undergoing a renaissance based on deep learning. Unique to Neuroscience, deep learning models can be used not only as a tool but interpreted as models of the brain. The central claims of recent deep learning-based models of brain circuits are that they shed light on fundamental functions being optimized or make novel predictions about neural phenomena. We show, through the case-study of grid cells in the entorhinal-hippocampal circuit, that one may get neither. We rigorously examine the claims of deep learning models of grid cells using large-scale hyperparameter sweeps and theory-driven experimentation, and demonstrate that the results of such models are more strongly driven by particular, non-fundamental, and post-hoc implementation choices than fundamental truths about neural circuits or the loss function(s) they might optimize. We discuss why these models cannot be expected to produce accurate models of the brain without the addition of substantial amounts of inductive bias, an informal No Free Lunch result for Neuroscience.

SeminarNeuroscienceRecording

The role of population structure in computations through neural dynamics

Alexis Dubreuil
French National Centre for Scientific Research (CNRS), Bordeaux
Nov 1, 2022

Neural computations are currently investigated using two separate approaches: sorting neurons into functional subpopulations or examining the low-dimensional dynamics of collective activity. Whether and how these two aspects interact to shape computations is currently unclear. Using a novel approach to extract computational mechanisms from networks trained on neuroscience tasks, here we show that the dimensionality of the dynamics and subpopulation structure play fundamentally complementary roles. Although various tasks can be implemented by increasing the dimensionality in networks with fully random population structure, flexible input–output mappings instead require a non-random population structure that can be described in terms of multiple subpopulations. Our analyses revealed that such a subpopulation structure enables flexible computations through a mechanism based on gain-controlled modulations that flexibly shape the collective dynamics. Our results lead to task-specific predictions for the structure of neural selectivity, for inactivation experiments and for the implication of different neurons in multi-tasking.

SeminarNeuroscienceRecording

Trial by trial predictions of subjective time from human brain activity

Maxine Sherman
University of Sussex, UK
Oct 25, 2022

Our perception of time isn’t like a clock; it varies depending on other aspects of experience, such as what we see and hear in that moment. However, in everyday life, the properties of these simple features can change frequently, presenting a challenge to understanding real-world time perception based on simple lab experiments. We developed a computational model of human time perception based on tracking changes in neural activity across brain regions involved in sensory processing, using fMRI. By measuring changes in brain activity patterns across these regions, our approach accommodates the different and changing feature combinations present in natural scenarios, such as walking on a busy street. Our model reproduces people’s duration reports for natural videos (up to almost half a minute long) and, most importantly, predicts whether a person reports a scene as relatively shorter or longer: the biases in time perception that reflect how natural experience of time deviates from clock time.

SeminarNeuroscienceRecording

How People Form Beliefs

Tali Sharot
University College London
Oct 13, 2022

In this talk I will present our recent behavioural and neuroscience research on how the brain motivates itself to form particular beliefs and why it does so. I will propose that the utility of a belief is derived from the potential outcomes associated with holding it. Outcomes can be internal (e.g., positive/negative feelings) or external (e.g., material gain/loss), and only some are dependent on belief accuracy. We show that belief change occurs when the potential outcomes of holding a belief alter, for example when moving from a safe environment to a threatening environment. Our findings yield predictions about how belief formation alters as a function of mental health. We test these predictions using a linguistic analysis of participants’ web searches ‘in the wild’ to quantify the affective properties of information they consume and relate those to reported psychiatric symptoms. Finally, I will present a study in which we used our framework to alter the incentive structure of social media platforms to reduce the spread of misinformation and improve belief accuracy.

SeminarNeuroscienceRecording

Learning static and dynamic mappings with local self-supervised plasticity

Pantelis Vafeidis
California Institute of Technology
Sep 6, 2022

Animals exhibit remarkable learning capabilities with little direct supervision. Likewise, self-supervised learning is an emergent paradigm in artificial intelligence, closing the performance gap to supervised learning. In the context of biology, self-supervised learning corresponds to a setting where one sense or specific stimulus may serve as a supervisory signal for another. After learning, the latter can be used to predict the former. On the implementation level, it has been demonstrated that such predictive learning can occur at the single-neuron level, in compartmentalized neurons that separate and associate information from different streams. We demonstrate the power of such self-supervised learning over unsupervised (Hebb-like) learning rules, which depend heavily on stimulus statistics, in two examples. First, in the context of animal navigation, predictive learning can associate internal self-motion information, which is always available to the animal, with external visual landmark information, leading to accurate path integration in the dark. We focus on the well-characterized fly head-direction system and show that our setting learns a connectivity strikingly similar to the one reported in experiments. The mature network is a quasi-continuous attractor and reproduces key experiments in which optogenetic stimulation controls the internal representation of heading, and in which the network remaps to integrate with different gains. Second, we show that incorporating global gating by reward prediction errors allows the same setting to learn conditioning at the neuronal level with mixed selectivity. At its core, conditioning entails associating a neural activity pattern induced by an unconditioned stimulus (US) with the pattern arising in response to a conditioned stimulus (CS). Solving the generic problem of pattern-to-pattern associations naturally leads to emergent cognitive phenomena such as blocking, overshadowing, saliency effects, extinction, and interstimulus-interval effects. Surprisingly, we find that the same network offers a reductionist mechanism for causal inference by resolving the post hoc, ergo propter hoc fallacy.

SeminarNeuroscienceRecording

A Framework for a Conscious AI: Viewing Consciousness through a Theoretical Computer Science Lens

Lenore and Manuel Blum
Carnegie Mellon University
Aug 4, 2022

We examine consciousness from the perspective of theoretical computer science (TCS), a branch of mathematics concerned with understanding the underlying principles of computation and complexity, including the implications and surprising consequences of resource limitations. We propose a formal TCS model, the Conscious Turing Machine (CTM). The CTM is influenced by Alan Turing's simple yet powerful model of computation, the Turing machine (TM), and by the global workspace theory (GWT) of consciousness originated by cognitive neuroscientist Bernard Baars and further developed by him, Stanislas Dehaene, Jean-Pierre Changeux, George Mashour, and others. However, the CTM is not a standard Turing Machine. It’s not the input-output map that gives the CTM its feeling of consciousness, but what’s under the hood. Nor is the CTM a standard GW model. In addition to its architecture, what gives the CTM its feeling of consciousness is its predictive dynamics (cycles of prediction, feedback and learning), its internal multi-modal language Brainish, and certain special Long Term Memory (LTM) processors, including its Inner Speech and Model of the World processors. Phenomena generally associated with consciousness, such as blindsight, inattentional blindness, change blindness, dream creation, and free will, are considered. Explanations derived from the model draw confirmation from consistencies at a high level, well above the level of neurons, with the cognitive neuroscience literature. Reference. L. Blum and M. Blum, "A theory of consciousness from a theoretical computer science perspective: Insights from the Conscious Turing Machine," PNAS, vol. 119, no. 21, 24 May 2022. https://www.pnas.org/doi/epdf/10.1073/pnas.2115934119
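
As a rough intuition for the architecture (not the formal CTM definition), here is a toy competition in which LTM processors submit weighted chunks and the winner is broadcast to all processors. The processor names and weights are purely illustrative, and the paper's CTM resolves the competition probabilistically up a binary tree rather than by a deterministic max:

```python
# Toy global-workspace broadcast loosely in the spirit of the CTM's
# competition for Short Term Memory (names and weights are invented).
chunks = {"inner_speech": 0.7, "model_of_world": 0.9, "vision": 0.4}

def broadcast(chunks):
    winner = max(chunks, key=chunks.get)  # deterministic stand-in for the tournament
    return winner, chunks[winner]

print(broadcast(chunks))  # ('model_of_world', 0.9) reaches the stage and is broadcast
```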

SeminarPhysics of LifeRecording

Active mechanics of sea star oocytes

Peter Foster
Brandeis University
Jul 17, 2022

The cytoskeleton has the remarkable ability to self-organize into active materials that underlie diverse cellular processes ranging from motility to cell division. Actomyosin is a canonical example of an active material, which generates cellular-scale contractility in part through the forces exerted by myosin motors on actin filaments. While the molecular players underlying actomyosin contractility have been well characterized, how cellular-scale deformation in disordered actomyosin networks emerges from filament-scale interactions is not well understood. In this talk, I’ll present work done in collaboration with Sebastian Fürthauer and Nikta Fakhri addressing this question in vivo, using the meiotic surface contraction wave seen in oocytes of the bat star Patiria miniata as a model system. By perturbing actin polymerization, we find that the cellular deformation rate is a nonmonotonic function of cortical actin density, peaked near the wild-type density. To understand this, we develop an active fluid model coarse-grained from filament-scale interactions and find quantitative agreement with the measured data. The model makes further predictions, including the surprising prediction that deformation rate decreases with increasing motor concentration. We test these predictions through protein overexpression and find quantitative agreement. Taken together, this work is an important step toward bridging the molecular and cellular length scales for cytoskeletal networks in vivo.
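
The headline result, a deformation rate that is nonmonotonic in actin density, can be pictured with a toy curve. The functional form and constants below are invented for illustration; they are not the coarse-grained active-fluid model from the talk:

```python
import numpy as np

# Illustrative only: a peaked dependence of deformation rate on filament
# density, qualitatively matching the reported nonmonotonicity.
def deformation_rate(density, motors=1.0, d0=1.0):
    # at low density the network is too sparsely connected to transmit force;
    # at high density added filaments stiffen the network, so the rate peaks
    return motors * density / (1.0 + (density / d0) ** 3)

densities = np.linspace(0.1, 3.0, 60)
print(densities[np.argmax(deformation_rate(densities))])  # peak near d0
```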

SeminarNeuroscienceRecording

From the Didactic to the Heuristic Use of Analogies in Science Teaching

Nikolaos Fotou
University of Lincoln
Jun 15, 2022

Extensive research on science teaching has shown the effectiveness of analogies as a didactic tool which, when appropriately and effectively used, facilitates the learning of abstract concepts. This seminar does not contradict the efficacy of such didactic use but shifts attention to the heuristic use of analogies in approaching and understanding the previously unknown. This use derives from research with 10- to 17-year-olds who, when asked to make predictions in novel situations and then to explain those predictions, self-generated analogies and reasoned on their basis. Such heuristic analogies can be used in science teaching to reveal how students approach situations they have not considered before, as well as the sources they draw upon in doing so.

SeminarNeuroscience

Feedback controls what we see

Andreas Keller
Institute of Molecular and Clinical Ophthalmology Basel
May 29, 2022

We hardly notice when there is a speck on our glasses; the obstructed visual information seems to be magically filled in. The visual system uses visual context to predict the content of the stimulus. What enables neurons in the visual system to respond to context when the stimulus is not available? In cortex, sensory processing is based on a combination of feedforward information arriving from sensory organs and feedback information that originates in higher-order areas. Whereas feedforward information drives the activity in cortex, feedback information is thought to provide contextual signals that are merely modulatory. We have made the exciting discovery that mouse primary visual cortical neurons are strongly driven by feedback projections from higher visual areas, in particular when their feedforward sensory input from the retina is missing. This drive is so strong that it makes visual cortical neurons fire as much as if they were receiving direct sensory input.

SeminarNeuroscience

In pursuit of a universal, biomimetic iBCI decoder: Exploring the manifold representations of action in the motor cortex

Lee Miller
Northwestern University
May 19, 2022

My group pioneered the development of a novel intracortical brain computer interface (iBCI) that decodes muscle activity (EMG) from signals recorded in the motor cortex of animals. We use these synthetic EMG signals to control Functional Electrical Stimulation (FES), which causes the muscles to contract and thereby restores rudimentary voluntary control of the paralyzed limb. In the past few years, there has been much interest in the fact that information from the millions of neurons active during movement can be reduced to a small number of “latent” signals in a low-dimensional manifold computed from the multi-neuron recordings. These signals can be used to provide a stable prediction of the animal’s behavior over periods of many months, and they may also provide the means to implement transfer learning across individuals, an application that could be of particular importance for paralyzed human users. We have begun to examine the representation within this latent space of a broad range of behaviors, including well-learned, stereotyped movements in the lab and more natural movements in the animal’s home cage, meant to better represent a person’s daily activities. We intend to develop an FES-based iBCI that will restore voluntary movement across a broad range of motor tasks without the need for intermittent recalibration. However, the nonlinearities and context dependence within this low-dimensional manifold present significant challenges.
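
A common way to cash out latent signals in a low-dimensional manifold is PCA on the population rates followed by a linear decoder. The sketch below uses synthetic data and PCA as one plausible choice; the group's actual pipeline may differ:

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic stand-in: population activity that is secretly low-dimensional,
# with EMG driven by the same latents (sizes are arbitrary assumptions).
z = rng.normal(size=(1000, 10))                    # 10 latent signals over time
rates = z @ rng.normal(size=(10, 200)) + 0.1 * rng.normal(size=(1000, 200))
emg = z @ rng.normal(size=(10, 8))                 # 8 muscles driven by the latents

# Extract the manifold with PCA, then decode EMG linearly from the latents.
rates_c = rates - rates.mean(axis=0)
_, _, Vt = np.linalg.svd(rates_c, full_matrices=False)
latents = rates_c @ Vt[:10].T                      # project onto top 10 PCs

W, *_ = np.linalg.lstsq(latents, emg, rcond=None)  # least-squares decoder
pred = latents @ W
print(np.corrcoef(pred.ravel(), emg.ravel())[0, 1])  # close to 1
```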

SeminarNeuroscienceRecording

Hebbian Plasticity Supports Predictive Self-Supervised Learning of Disentangled Representations​

Manu Halvagal​
Friedrich Miescher Institute for Biomedical Research
May 3, 2022

Discriminating distinct objects and concepts from sensory stimuli is essential for survival. Our brains accomplish this feat by forming meaningful internal representations in deep sensory networks with plastic synaptic connections. Experience-dependent plasticity presumably exploits temporal contingencies between sensory inputs to build these internal representations. However, the precise mechanisms underlying plasticity remain elusive. We derive a local synaptic plasticity model inspired by self-supervised machine learning techniques that shares a deep conceptual connection to Bienenstock-Cooper-Munro (BCM) theory and is consistent with experimentally observed plasticity rules. We show that our plasticity model yields disentangled object representations in deep neural networks without supervision or implausible negative examples. In response to altered visual experience, our model qualitatively captures neuronal selectivity changes observed in the monkey inferotemporal cortex in vivo. Our work suggests a plausible learning rule to drive learning in sensory networks while making concrete testable predictions.
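
Since the talk relates the derived rule to BCM theory, a minimal BCM-style rule gives the flavor: potentiate when postsynaptic activity exceeds a sliding threshold, depress below it. This is textbook BCM with invented constants, not the seminar's derived plasticity model:

```python
import numpy as np

rng = np.random.default_rng(3)
w = rng.normal(size=50) * 0.1         # synaptic weights (arbitrary init)
theta = 1.0                           # sliding plasticity threshold
eta, tau = 1e-4, 50.0                 # learning rate and threshold timescale
for _ in range(5000):
    x = np.abs(rng.normal(size=50))   # presynaptic input
    y = max(w @ x, 0.0)               # postsynaptic rate
    w += eta * y * (y - theta) * x    # BCM-like update: sign set by y vs theta
    theta += (y ** 2 - theta) / tau   # threshold tracks <y^2>, keeping y bounded
print(theta)                          # settles near the running average of y^2
```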

SeminarNeuroscienceRecording

The balance of excitation and inhibition and a canonical cortical computation

Yashar Ahmadian
Cambridge, UK
Apr 26, 2022

Excitatory and inhibitory (E & I) inputs to cortical neurons remain balanced across different conditions. The balanced network model provides a self-consistent account of this observation: population rates dynamically adjust to yield a state in which all neurons are active at biological levels, with their E & I inputs tightly balanced. But global tight E/I balance predicts population responses with linear stimulus dependence and does not account for systematic cortical response nonlinearities such as divisive normalization, a canonical brain computation. However, when the connectivity conditions necessary for global balance fail, states arise in which only a localized subset of neurons is active and has balanced inputs. We show analytically that in networks of neurons with different stimulus selectivities, the emergence of such localized balance states robustly leads to normalization, including sublinear integration and winner-take-all behavior. An alternative model that exhibits normalization is the Stabilized Supralinear Network (SSN), which predicts a regime of loose, rather than tight, E/I balance. However, an understanding of the causal relationship between E/I balance and normalization in the SSN, and of the conditions under which the SSN yields significant sublinear integration, has been lacking. For weak inputs, the SSN integrates inputs supralinearly, while for very strong inputs it approaches a regime of tight balance. We show that when this latter regime is globally balanced, the SSN cannot exhibit strong normalization for any input strength; thus, in the SSN too, significant normalization requires localized balance. In summary, we causally and quantitatively connect a fundamental feature of cortical dynamics with a canonical brain computation. Time allowing, I will also cover our work extending a normative theoretical account of normalization, which explains it as an example of efficient coding of natural stimuli. We show that when biological noise is accounted for, this theory makes the same prediction as the SSN: a transition to supralinear integration for weak stimuli.
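
For readers unfamiliar with the canonical computation at stake, divisive normalization itself is easy to state in code. This is the standard equation in the style of Carandini and Heeger with made-up parameters, not the balanced-network or SSN models analyzed in the talk:

```python
import numpy as np

# Divisive normalization: each unit's drive is divided by the pooled drive
# of the population (gamma, sigma, and n are illustrative constants).
def normalized_response(drives, gamma=10.0, sigma=1.0, n=2.0):
    d = np.asarray(drives, dtype=float) ** n
    return gamma * d / (sigma ** n + d.sum())

alone = normalized_response([4.0, 0.0])[0]
both = normalized_response([4.0, 4.0])
print(both[0] < alone)          # True: a second stimulus suppresses the first
print(both.sum() < 2 * alone)   # True: sublinear integration (normalization)
```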

SeminarNeuroscienceRecording

Spatial uncertainty provides a unifying account of navigation behavior and grid field deformations

Yul Kang
Lengyel lab, Cambridge University
Apr 5, 2022

To localize ourselves in an environment for spatial navigation, we rely on vision and self-motion inputs, which only provide noisy and partial information. It is unknown how the resulting uncertainty affects navigation behavior and neural representations. Here we show that spatial uncertainty underlies key effects of environmental geometry on navigation behavior and grid field deformations. We develop an ideal observer model, which continually updates probabilistic beliefs about its allocentric location by optimally combining noisy egocentric visual and self-motion inputs via Bayesian filtering. This model directly yields predictions for navigation behavior and also predicts neural responses under population coding of location uncertainty. We simulate this model numerically under manipulations of a major source of uncertainty, environmental geometry, and support our simulations by analytic derivations for its most salient qualitative features. We show that our model correctly predicts a wide range of experimentally observed effects of the environmental geometry and its change on homing response distribution and grid field deformation. Thus, our model provides a unifying, normative account for the dependence of homing behavior and grid fields on environmental geometry, and identifies the unavoidable uncertainty in navigation as a key factor underlying these diverse phenomena.
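
The core of the ideal observer, optimally combining noisy self-motion and visual inputs by Bayesian filtering, reduces in one dimension to a Kalman filter. A minimal sketch under assumed noise variances (the published model is 2D and handles environmental geometry, which this toy omits):

```python
import numpy as np

rng = np.random.default_rng(4)
# Location belief carries explicit uncertainty: it grows during the
# self-motion (prediction) step and shrinks when vision is available.
x, v = 0.0, 1.0                  # true position and velocity
mu, var = 0.0, 1e-4              # belief mean and variance
q, r = 0.05, 0.5                 # self-motion and visual noise variances (assumed)
for t in range(50):
    x += v
    motion = v + rng.normal(0.0, q ** 0.5)             # noisy self-motion cue
    mu, var = mu + motion, var + q                     # predict: uncertainty grows
    z = x + rng.normal(0.0, r ** 0.5)                  # noisy visual observation
    gain = var / (var + r)                             # weight vision by reliability
    mu, var = mu + gain * (z - mu), (1 - gain) * var   # update: uncertainty shrinks
print(abs(mu - x), var)           # small error, small steady-state uncertainty
```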

SeminarNeuroscienceRecording

Population coding in the cerebellum: a machine learning perspective

Reza Shadmehr
Johns Hopkins School of Medicine
Apr 5, 2022

The cerebellum resembles a feedforward, three-layer network of neurons in which the “hidden layer” consists of Purkinje cells (P-cells) and the output layer consists of deep cerebellar nucleus (DCN) neurons. In this analogy, the output of each DCN neuron is a prediction that is compared with the actual observation, resulting in an error signal that originates in the inferior olive. Efficient learning requires that the error signal reach the DCN neurons as well as the P-cells that project onto them. However, this basic rule of learning is violated in the cerebellum: the olivary projections to the DCN are weak, particularly in adulthood. Instead, an extraordinarily strong signal is sent from the olive to the P-cells, producing complex spikes. Curiously, P-cells are grouped into small populations that converge onto single DCN neurons. Why are the P-cells organized in this way, and what is the membership criterion of each population? Here, I apply elementary mathematics from machine learning and consider the fact that P-cells that form a population exhibit a special property: they can synchronize their complex spikes, which in turn suppress the activity of the DCN neuron they project to. Thus, complex spikes not only act as a teaching signal for individual P-cells; through complex-spike synchrony, a P-cell population may also act as a surrogate teacher for the DCN neuron that produced the erroneous output. It appears that grouping P-cells into small populations that share a preference for error satisfies a critical requirement of efficient learning: providing error information to the output-layer neuron (DCN) that was responsible for the error, as well as to the hidden-layer neurons (P-cells) that contributed to it. This population coding may account for several remarkable features of behavior during learning, including multiple timescales, protection from erasure, and spontaneous recovery of memory.
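
The argument hinges on complex-spike synchrony: individual complex spikes are rare, so only a synchronized population can pause its DCN target. A toy simulation with invented numbers makes the point:

```python
import numpy as np

rng = np.random.default_rng(5)
# Only synchronized complex spikes (CS) across a P-cell population reliably
# silence their shared DCN target; independent CSs at the same rate do not.
def pause_probability(sync, cs_prob=0.1, n_pcells=40, thresh=20, n_trials=5000):
    pauses = 0
    for _ in range(n_trials):
        if rng.random() < sync:   # population fires complex spikes together
            n_cs = n_pcells if rng.random() < cs_prob else 0
        else:                     # complex spikes occur independently
            n_cs = int((rng.random(n_pcells) < cs_prob).sum())
        pauses += n_cs >= thresh  # enough simultaneous CSs to pause the DCN
    return pauses / n_trials

print(pause_probability(sync=0.0))  # ~0: independent CSs rarely align
print(pause_probability(sync=1.0))  # ~0.1: synchrony makes DCN pauses reliable
```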

SeminarNeuroscienceRecording

Probabilistic computation in natural vision

Ruben Coen-Cagli
Albert Einstein College of Medicine
Mar 29, 2022

A central goal of vision science is to understand the principles underlying the perception and neural coding of the complex visual environment of our everyday experience. In the visual cortex, foundational work with artificial stimuli, and more recent work combining natural images and deep convolutional neural networks, have revealed much about the tuning of cortical neurons to specific image features. However, a major limitation of this existing work is its focus on single-neuron response strength to isolated images. First, during natural vision, the inputs to cortical neurons are not isolated but rather embedded in a rich spatial and temporal context. Second, the full structure of population activity—including the substantial trial-to-trial variability that is shared among neurons—determines encoded information and, ultimately, perception. In the first part of this talk, I will argue for a normative approach to study encoding of natural images in primary visual cortex (V1), which combines a detailed understanding of the sensory inputs with a theory of how those inputs should be represented. Specifically, we hypothesize that V1 response structure serves to approximate a probabilistic representation optimized to the statistics of natural visual inputs, and that contextual modulation is an integral aspect of achieving this goal. I will present a concrete computational framework that instantiates this hypothesis, and data recorded using multielectrode arrays in macaque V1 to test its predictions. In the second part, I will discuss how we are leveraging this framework to develop deep probabilistic algorithms for natural image and video segmentation.
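
One concrete reading of a population that approximates a probabilistic representation is the sampling hypothesis, under which trial-to-trial variability reflects samples from a posterior over image features. The Gaussian toy below, with invented variances, shows one qualitative signature such a code would have: contextual evidence sharpens the posterior, so across-trial variability shrinks:

```python
import numpy as np

rng = np.random.default_rng(6)
# Each "trial" draws one sample from a Gaussian posterior over a feature.
def posterior_samples(likelihood_var, prior_var=4.0, obs=1.0, n_trials=1000):
    var = 1.0 / (1.0 / prior_var + 1.0 / likelihood_var)   # posterior variance
    mean = var * obs / likelihood_var                      # posterior mean (zero-mean prior)
    return rng.normal(mean, var ** 0.5, size=n_trials)

no_context = posterior_samples(likelihood_var=2.0)
with_context = posterior_samples(likelihood_var=0.5)       # context adds evidence
print(no_context.std(), with_context.std())                # variability shrinks
```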

SeminarNeuroscience

Predictions, Perception, and Psychosis

Philipp Sterzer
Charite
Mar 28, 2022
ePoster

Adaptive probabilistic regression for real-time motor excitability state prediction from human EEG

Lisa Haxel, Jaivardhan Kapoor, Ulf Ziemann, Jakob Macke

Bernstein Conference 2024

ePoster

The influence of the membrane potential on inhibitory regulation of plasticity predictions and learned representations

Patricia Rubisch, Melanie Stefan, Matthias Hennig

Bernstein Conference 2024

ePoster

Task choice influences single-neuron tuning predictions in connectome-constrained modeling

Felix Pei, Janne Lappalainen, Srinivas Turaga, Jakob Macke

Bernstein Conference 2024

ePoster

Anterior cingulate cortex enables rapid set-shifting behaviour via prediction mismatch signalling

COSYNE 2022

ePoster

Counterfactual outcomes affect reward expectation and prediction errors in macaque frontal cortex

COSYNE 2022

ePoster

VTA dopamine neurons signal phasic and ramping reward prediction error in goal-directed navigation

COSYNE 2022

ePoster

Heterogeneous prediction-error circuits formed and shaped by homeostatic inhibitory plasticity

COSYNE 2022

ePoster

High-level prediction signals cascade through the macaque face-processing hierarchy

COSYNE 2022

ePoster

The interplay between prediction and integration processes in human perception

COSYNE 2022

ePoster

Learning and expression of dopaminergic reward prediction error via plastic representations of time

COSYNE 2022

ePoster

Linking tonic dopamine and biased value predictions in a biologically inspired reinforcement learning model

COSYNE 2022

ePoster

Neurons in dlPFC signal unsigned reward prediction error independently from value

COSYNE 2022

ePoster

State Prediction in Primary Olfactory Cortex

COSYNE 2022

ePoster

Time uncertainty in threat prediction explains prefrontal norepinephrine release

Aakash Basu, Jen-Hau Yang, Abigail Yu, Samira Glaeser-Khan, Jiesi Feng, Yulong Li, Alfred Kaye

COSYNE 2023

ePoster

Theories of surprise: definitions and predictions

COSYNE 2022

ePoster

Uncertainty-weighted prediction errors (UPEs) in cortical microcircuits

COSYNE 2022

ePoster

Behavioral and brainwide correlates of dynamic reward prediction

Anna Bowen, David Ottenheimer, Garret Stuber, Nicholas Steinmetz

COSYNE 2023

ePoster

Blazed oblique plane microscopy reveals scale-invariant predictions of brain-wide activity

Maximilian Hoffmann, Jörg Henninger, Johannes Veith, Lars Richter, Benjamin Judkewitz

COSYNE 2023

ePoster

Cerebellar involvement in supra-second time prediction

Ellen Boven, Jasmine Pickford, Joseph Pemberton, Nadia Cerminara, Rui Ponte Costa, Richard Apps

COSYNE 2023

ePoster

Controlling human cortical and striatal reinforcement learning with meta prediction error

Jae Hoon Shin, Jee Hang Lee, Sang Wan Lee

COSYNE 2023

ePoster

A cortical microcircuit for reinforcement prediction error

Quentin Chevy, Rui Ponte Costa, Zoltan Szadai, Rozsa Balazs, Adam Kepecs

COSYNE 2023

ePoster

Human Spinal Epidural Neuromodulation Modeling and Stimulation Effect Prediction

Hongda Li & Yanan Sui

COSYNE 2023

ePoster

Mechanisms of prediction in linear networks

Jared Salisbury & Stephanie Palmer

COSYNE 2023

ePoster

Multi-object memory and prediction in the primate brain

Nicholas Watters, John Gabel, Joshua Tenenbaum, Mehrdad Jazayeri

COSYNE 2023

ePoster

Pre-training artificial neural networks with spontaneous retinal activity improves image prediction

Lilly May, Alice Dauphin, Julijana Gjorgjieva

COSYNE 2023

ePoster

Recurrent circuits improve neural response prediction and provide insight into cortical circuits

Harold Rockwell, Sicheng Dai, Yimeng Zhang, Stephen Tsou, Ge Huang, Yuanyuan Wei, Tai Sing Lee

COSYNE 2023

ePoster

Reward prediction error neurons implement an efficient code for value

Wei Ji Ma, Dongjae Kim, Heiko Schuett

COSYNE 2023

ePoster

Sensorimotor prediction errors in the mouse olfactory cortex

Priyanka Gupta, Marie Dussauze, Uri Livneh, Dinu Albeanu

COSYNE 2023

ePoster

Sensory predictions are embedded in cortical motor activity

Jonathan A Michaels, Mehrdad Kashefi, Jack Zheng, Olivier Codol, Jeff Weiler, Andrew Pruszynski

COSYNE 2023

ePoster

Altered sensory prediction error signaling and dopamine function drive speech hallucinations in schizophrenia

Justin Buck, Mark Slifstein, Jodi Weinstein, Roberto Gil, Jared Van Snellenberg, Christoph Juchem, Anissa Abi-Dargham, Guillermo Horga

COSYNE 2025

ePoster

Biologically plausible coarse-grainings for prediction

Sylvia Durian, Olivier Marre, Stephanie Palmer

COSYNE 2025