Topic: inference (World Wide)

60 Seminars · 40 ePosters · 3 Positions


Position

Marco Miozzo

Centre Tecnològic de Telecomunicacions de Catalunya (CTTC)
Barcelona, Spain
Jan 4, 2026

We are offering a full-time postdoctoral position investigating pervasive intelligence and the Artificial Intelligence of Things (AIoT). The research will focus on solutions for advancing towards truly pervasive and liquid AI, enabling edge devices to carry out training and inference with the same accuracy as cloud AI, without harming our environment. To this end, highly efficient learning methods will be investigated, including model compression, reservoir and neuromorphic computing, and distributed/decentralized and collaborative data/client selection algorithms. Realistic use cases drawn from the sustainable development goals will be considered to validate the selected solutions.

Position

I-Chun Lin, PhD

Gatsby Computational Neuroscience Unit, UCL
Jan 4, 2026

The Gatsby Computational Neuroscience Unit is a leading research centre focused on theoretical neuroscience and machine learning. We study (un)supervised and reinforcement learning in brains and machines; inference, coding, and neural dynamics; and Bayesian, kernel, and deep learning methods; with applications to the analysis of perceptual processing and cognition, neural data, signal and image processing, machine vision, network data, and nonparametric hypothesis testing. The Unit provides a unique opportunity for a critical mass of theoreticians to interact closely with one another, with researchers at the Sainsbury Wellcome Centre for Neural Circuits and Behaviour (SWC) and the Centre for Computational Statistics and Machine Learning (CSML), with related UCL departments and centres including Computer Science, Statistical Science, Artificial Intelligence, Neuroscience, and the ELLIS Unit at UCL, and with the nearby Alan Turing and Francis Crick Institutes. Our PhD programme provides rigorous preparation for a research career. Students complete a 4-year PhD in either machine learning or theoretical/computational neuroscience, with a minor emphasis in the complementary field. Courses in the first year provide a comprehensive introduction to both fields and to systems neuroscience. Students are encouraged to work and interact closely with SWC/CSML researchers to take advantage of this uniquely multidisciplinary research environment.

Seminar · Neuroscience

Decision and Behavior

Sam Gershman, Jonathan Pillow, Kenji Doya
Harvard University; Princeton University; Okinawa Institute of Science and Technology
Nov 29, 2024

This webinar addressed computational perspectives on how animals and humans make decisions, spanning normative, descriptive, and mechanistic models. Sam Gershman (Harvard) presented a capacity-limited reinforcement learning framework in which policies are compressed under an information bottleneck constraint. This approach predicts pervasive perseveration, stimulus-independent "default" actions, and trade-offs between complexity and reward. Such policy compression reconciles observed action stochasticity and response-time patterns with an optimal balance between learning capacity and performance. Jonathan Pillow (Princeton) discussed flexible descriptive models for tracking time-varying policies in animals. He introduced dynamic generalized linear models (PsyTrack) and hidden Markov models (GLM-HMMs) that capture day-to-day and trial-to-trial fluctuations in choice behavior, including abrupt switches between "engaged" and "disengaged" states. These models provide new insights into how animals' strategies evolve under learning. Finally, Kenji Doya (OIST) highlighted the importance of unifying reinforcement learning with Bayesian inference, exploring how cortico-basal ganglia networks might implement model-based and model-free strategies. He also described Japan's Brain/MINDS 2.0 and Digital Brain initiatives, which aim to integrate multimodal data and computational principles into cohesive "digital brains."
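
The policy-compression idea can be sketched numerically. Under an information bottleneck constraint, the optimal capacity-limited policy takes the form pi(a|s) ∝ p(a) exp(beta * Q(s,a)), which can be found by a Blahut-Arimoto style fixed-point iteration. The toy Q values and the value of beta below are illustrative assumptions, not taken from the talk.

```python
import numpy as np

# Toy state-action values and capacity trade-off (illustrative assumptions).
Q = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.9, 1.0]])        # 3 states x 2 actions
beta = 2.0                        # low beta -> heavily compressed policy

p_a = np.full(Q.shape[1], 0.5)    # marginal over actions ("default" actions)
for _ in range(200):              # Blahut-Arimoto style fixed point
    pi = p_a * np.exp(beta * Q)   # pi(a|s) proportional to p(a) exp(beta Q)
    pi /= pi.sum(axis=1, keepdims=True)
    p_a = pi.mean(axis=0)         # marginal under uniform state visitation

# policy complexity I(S;A) in nats, the quantity capped by the bottleneck
policy_complexity = np.mean(np.sum(pi * np.log(pi / p_a), axis=1))
```

Sweeping beta traces out the complexity-reward trade-off; as beta approaches zero the policy collapses onto the marginal p(a), producing the stimulus-independent default actions mentioned above.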

Seminar · Neuroscience

Probing neural population dynamics with recurrent neural networks

Chethan Pandarinath
Emory University and Georgia Tech
Jun 12, 2024

Large-scale recordings of neural activity are providing new opportunities to study network-level dynamics in unprecedented detail. However, the sheer volume of data and its dynamical complexity are major barriers to uncovering and interpreting these dynamics. I will present latent factor analysis via dynamical systems (LFADS), a sequential autoencoding approach that enables inference of dynamics from neuronal population spiking activity on single trials and at millisecond timescales. I will also discuss recent adaptations of the method to uncover dynamics from neural activity recorded via two-photon calcium imaging. Finally, time permitting, I will mention recent efforts to improve the interpretability of deep learning-based dynamical systems models.

Seminar · Neuroscience

Perception in Autism: Testing Recent Bayesian Inference Accounts

Amit Yashar
Haifa University
Apr 16, 2024

Seminar · Neuroscience

Learning representations of specifics and generalities over time

Anna Schapiro
University of Pennsylvania
Apr 12, 2024

There is a fundamental tension between storing discrete traces of individual experiences, which allows recall of particular moments in our past without interference, and extracting regularities across these experiences, which supports generalization and prediction in similar situations in the future. One influential proposal for how the brain resolves this tension is that it separates the processes anatomically into Complementary Learning Systems, with the hippocampus rapidly encoding individual episodes and the neocortex slowly extracting regularities over days, months, and years. But this does not explain our ability to learn and generalize from new regularities in our environment quickly, often within minutes. We have put forward a neural network model of the hippocampus that suggests that the hippocampus itself may contain complementary learning systems, with one pathway specializing in the rapid learning of regularities and a separate pathway handling the region’s classic episodic memory functions. This proposal has broad implications for how we learn and represent novel information of specific and generalized types, which we test across statistical learning, inference, and category learning paradigms. We also explore how this system interacts with slower-learning neocortical memory systems, with empirical and modeling investigations into how the hippocampus shapes neocortical representations during sleep. Together, the work helps us understand how structured information in our environment is initially encoded and how it then transforms over time.

Seminar · Neuroscience

Visual mechanisms for flexible behavior

Marlene Cohen
University of Chicago
Jan 26, 2024

Perhaps the most impressive aspect of the way the brain enables us to act on the sensory world is its flexibility. We can make a general inference from many sensory features (rating the ripeness of mangoes or avocados) and map a single stimulus onto many choices (slicing or blending mangoes). These can be thought of as flexible many-to-one (features to inference) and one-to-many (feature to choices) mappings from sensory inputs to actions. Both theoretical and experimental investigations of this sort of flexible sensorimotor mapping tend to treat sensory areas as relatively static. Models typically instantiate flexibility through changing interactions (or weights) between units that encode sensory features and those that plan actions. Experimental investigations often focus on association areas involved in decision-making that show pronounced modulation by cognitive processes. I will present evidence that the flexible formatting of visual information in visual cortex can support both generalized inference and choice mapping. Our results suggest that visual cortex mediates many forms of cognitive flexibility that have traditionally been ascribed to other areas or mechanisms. Further, we find that a primary difference between visual and putative decision areas is not what information they encode, but how that information is formatted in the responses of neural populations, which is related to differences in the impact of causally manipulating each area on behavior. This scenario allows for flexibility in the mapping between stimuli and behavior while maintaining stability in the information encoded in each area and in the mappings between groups of neurons.

Seminar · Neuroscience

Trends in NeuroAI - Meta's MEG-to-image reconstruction

Reese Kneeland
Jan 5, 2024

Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri).

Title: Brain-optimized inference improves reconstructions of fMRI brain activity

Abstract: The release of large datasets and developments in AI have led to dramatic improvements in decoding methods that reconstruct seen images from human brain activity. We evaluate the prospect of further improving recent decoding methods by optimizing for consistency between reconstructions and brain activity during inference. We sample seed reconstructions from a base decoding method, then iteratively refine these reconstructions using a brain-optimized encoding model that maps images to brain activity. At each iteration, we sample a small library of images from an image distribution (a diffusion model) conditioned on a seed reconstruction from the previous iteration. We select those that best approximate the measured brain activity when passed through our encoding model, and use these images for structural guidance during the generation of the small library in the next iteration. We reduce the stochasticity of the image distribution at each iteration, and stop when a criterion on the "width" of the image distribution is met. We show that when this process is applied to recent decoding methods, it outperforms the base decoding method as measured by human raters, a variety of image feature metrics, and alignment to brain activity. These results demonstrate that reconstruction quality can be significantly improved by explicitly aligning decoding distributions to brain activity distributions, even when the seed reconstruction is output from a state-of-the-art decoding algorithm. Interestingly, the rate of refinement varies systematically across visual cortex, with earlier visual areas generally converging more slowly and preferring narrower image distributions, relative to higher-level brain areas. Brain-optimized inference thus offers a succinct and novel method for improving reconstructions and exploring the diversity of representations across visual brain areas.

Speaker: Reese Kneeland is a Ph.D. student at the University of Minnesota working in the Naselaris lab.

Paper link: https://arxiv.org/abs/2312.07705
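
The iterative refinement loop described above can be sketched with toy stand-ins. Here "images" are plain vectors, the brain-optimized encoding model is a fixed linear map, and the conditioned diffusion model is replaced by Gaussian jitter around the current seed; all names and parameters are illustrative assumptions, not the paper's actual models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (assumptions): "images" are 8-d vectors, the encoding model
# is a fixed linear map, and conditional diffusion sampling is Gaussian jitter.
W = rng.normal(size=(20, 8))              # encoding model: image -> brain activity
encode = lambda img: W @ img

target = rng.normal(size=8)               # the image the subject actually saw
brain_activity = encode(target)           # measured activity we must match

seed0 = rng.normal(size=8)                # seed reconstruction from a base decoder
seed, noise = seed0, 1.0                  # noise plays the distribution's "width"
for _ in range(30):
    library = seed + noise * rng.normal(size=(64, 8))   # sample a small library
    errs = np.linalg.norm(library @ W.T - brain_activity, axis=1)
    seed = library[np.argmin(errs)]       # keep the best brain-aligned image
    noise *= 0.85                         # shrink the distribution each iteration
```

With each iteration, the surviving reconstruction's predicted activity moves closer to the measurement, mirroring the refinement loop in the abstract.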

Seminar · Neuroscience · Recording

Multisensory perception, learning, and memory

Ladan Shams
UCLA
Dec 7, 2023


Seminar · Neuroscience · Recording

Virtual Brain Twins for Brain Medicine and Epilepsy

Viktor Jirsa
Aix Marseille Université - Inserm
Nov 8, 2023

Over the past decade we have demonstrated that fusing subject-specific structural information about the human brain with mathematical dynamic models allows building biologically realistic brain network models, which have a predictive value beyond the explanatory power of either approach on its own. The network nodes hold neural population models, derived using mean-field techniques from statistical physics that express ensemble activity via collective variables. Our hybrid approach fuses data-driven with forward-modeling-based techniques and has been successfully applied to explaining healthy brain function and to clinical translation, including aging, stroke, and epilepsy. Here we illustrate the workflow with the example of epilepsy: we reconstruct personalized connectivity matrices of epileptic patients using diffusion tensor imaging (DTI). Subsets of brain regions generating seizures in patients with refractory partial epilepsy are referred to as the epileptogenic zone (EZ). During a seizure, paroxysmal activity is not restricted to the EZ, but may recruit other healthy brain regions and propagate activity through large brain networks. The identification of the EZ is crucial for the success of neurosurgery and presents one of the historically difficult questions in clinical neuroscience. Applying the latest techniques in Bayesian inference and model inversion, in particular Hamiltonian Monte Carlo, allows the estimation of the EZ, including estimates of confidence and diagnostics of the performance of the inference. The example of epilepsy nicely underscores the predictive value of personalized large-scale brain network models. The end-to-end modeling workflow is an integral part of the European neuroinformatics platform EBRAINS and enables neuroscientists worldwide to build and estimate personalized virtual brains.

Seminar · Neuroscience · Recording

Visual-vestibular cue comparison for perception of environmental stationarity

Paul MacNeilage
University of Nevada, Reno
Oct 26, 2023


Seminar · Neuroscience · Recording

Analogical inference in mathematics: from epistemology to the classroom (and back)

Dr Francesco Nappo & Dr Nicolò Cangiotti
Politecnico di Milano
Feb 23, 2023

In this presentation, we will discuss adaptations of historical examples of mathematical research to bring out some of the intuitive judgments that accompany the working practice of mathematicians when reasoning by analogy. The main epistemological claim that we will aim to illustrate is that a central part of mathematical training consists in developing a quasi-perceptual capacity to distinguish superficial from deep analogies. We think of this capacity as an instance of Hadamard’s (1954) discriminating faculty of the mathematical mind, whereby one is led to distinguish between mere “hookings” (77) and “relay-results” (80): on the one hand, suggestions or ‘hints’, useful to raise questions but not to back up conjectures; on the other, more significant discoveries, which can be used as an evidentiary source in further mathematical inquiry. In the second part of the presentation, we will discuss some recent applications of this epistemological framework to mathematics education projects for middle and high schools in Italy.

Seminar · Neuroscience

Spatially-embedded recurrent neural networks reveal widespread links between structural and functional neuroscience findings

Jascha Achterberg
University of Cambridge
Feb 1, 2023

Brain networks exist within the confines of resource limitations. As a result, a brain network must overcome metabolic costs of growing and sustaining the network within its physical space, while simultaneously implementing its required information processing. To observe the effect of these processes, we introduce the spatially-embedded recurrent neural network (seRNN). seRNNs learn basic task-related inferences while existing within a 3D Euclidean space, where the communication of constituent neurons is constrained by a sparse connectome. We find that seRNNs, similar to primate cerebral cortices, naturally converge on solving inferences using modular small-world networks, in which functionally similar units spatially configure themselves to utilize an energetically-efficient mixed-selective code. As all these features emerge in unison, seRNNs reveal how many common structural and functional brain motifs are strongly intertwined and can be attributed to basic biological optimization processes. seRNNs can serve as model systems to bridge between structural and functional research communities to move neuroscientific understanding forward.
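
The resource constraint can be illustrated with a minimal wiring-cost term. The sketch below is an assumption about the general flavour of spatial regularization, not the seRNN work's exact objective: recurrent weights are penalized in proportion to the Euclidean distance they span in 3D, so that training favours a sparse, locally wired connectome.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
coords = rng.uniform(size=(n, 3))        # neurons embedded in a 3D Euclidean space
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
W = rng.normal(scale=0.1, size=(n, n))   # recurrent weight matrix

# Spatial penalty added to the task loss: long-range connections cost more,
# pushing learning towards short, sparse wiring within the physical space.
wiring_cost = np.sum(np.abs(W) * dist)
```

During training, minimizing the task loss plus this penalty trades off information processing against metabolic and wiring costs, which is the tension the abstract describes.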

Seminar · Psychology

The future of neuropsychology will be open, transdiagnostic, and FAIR - why it matters and how we can get there

Valentina Borghesani
University of Geneva
Nov 30, 2022

Cognitive neuroscience has witnessed great progress since modern neuroimaging embraced an open-science framework, with the adoption of shared principles (Wilkinson et al., 2016), standards (Gorgolewski et al., 2016), and ontologies (Poldrack et al., 2011), as well as practices of meta-analysis (Yarkoni et al., 2011; Dockès et al., 2020) and data sharing (Gorgolewski et al., 2015). However, while functional neuroimaging data provide correlational maps between cognitive functions and activated brain regions, their usefulness in determining causal links between specific brain regions and given behaviors or functions is disputed (Weber et al., 2010; Siddiqi et al., 2022). By contrast, neuropsychological data enable causal inference, highlighting critical neural substrates and opening a unique window into the inner workings of the brain (Price, 2018). Unfortunately, the adoption of open-science practices in clinical settings is hampered by several ethical, technical, economic, and political barriers, and as a result, open platforms enabling access to and sharing of clinical (meta)data are scarce (e.g., Larivière et al., 2021). We are working with clinicians, neuroimagers, and software developers to build an open-source platform for the storage, sharing, synthesis, and meta-analysis of human clinical data, in the service of the clinical and cognitive neuroscience community, so that the future of neuropsychology can be transdiagnostic, open, and FAIR. We call it NeuroCausal (https://neurocausal.github.io).

Seminar · Neuroscience · Recording

Network inference via process motifs for lagged correlation in linear stochastic processes

Alice Schwarze
Dartmouth College
Nov 18, 2022

A major challenge for causal inference from time-series data is the trade-off between computational feasibility and accuracy. Motivated by process motifs for lagged covariance in an autoregressive model with slow mean-reversion, we propose to infer networks of causal relations via pairwise edge measures (PEMs) that one can easily compute from lagged correlation matrices. Based on the contributions of process motifs to covariance and lagged variance, we formulate two PEMs that correct for confounding factors and for reverse causation. To demonstrate the performance of our PEMs, we consider network inference from simulations of linear stochastic processes, and we show that our proposed PEMs can infer networks accurately and efficiently. Specifically, for slightly autocorrelated time-series data, our approach achieves accuracies higher than or similar to Granger causality, transfer entropy, and convergent cross-mapping, but with much shorter computation time than is possible with any of these methods. Our fast and accurate PEMs are easy-to-implement methods for network inference with a clear theoretical underpinning. They provide promising alternatives to current paradigms for the inference of linear models from time-series data, including Granger causality, vector autoregression, and sparse inverse covariance estimation.
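
The basic ingredients can be sketched as follows: estimate a lagged correlation matrix from multivariate time series and form a naive directed edge score by subtracting its transpose, a crude correction for reverse causation. The actual PEMs in this work include principled corrections derived from process motifs, so the score below is only an illustration of the style of computation.

```python
import numpy as np

def lagged_correlation(X, lag=1):
    """X: (T, n) array. Entry (i, j) correlates x_i(t) with x_j(t + lag)."""
    Xc = (X - X.mean(axis=0)) / X.std(axis=0)
    T = X.shape[0]
    return Xc[:-lag].T @ Xc[lag:] / (T - lag)

# Simulate a linear stochastic (AR(1)) process with one directed edge 0 -> 1.
rng = np.random.default_rng(0)
A = np.array([[0.7, 0.0, 0.0],
              [0.5, 0.7, 0.0],
              [0.0, 0.0, 0.7]])     # moderate mean-reversion on the diagonal
X = np.zeros((5000, 3))
for t in range(1, 5000):
    X[t] = A @ X[t - 1] + rng.normal(size=3)

C1 = lagged_correlation(X, lag=1)
edge_score = C1 - C1.T              # naive reverse-causation correction
```

On this toy system the score is clearly largest for the true edge 0 -> 1 and near zero for the unconnected pair, though unlike real PEMs it makes no correction for confounding.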

Seminar · Neuroscience · Recording

A biologically plausible inhibitory plasticity rule for world-model learning in SNNs

Z. Liao
Columbia
Nov 10, 2022

Memory consolidation is the process by which recent experiences are assimilated into long-term memory. In animals, this process requires the offline replay, in the hippocampus, of sequences observed during online exploration. Recent experimental work has found that salient but task-irrelevant stimuli are systematically excluded from these replay epochs, suggesting that replay samples from an abstracted model of the world rather than from verbatim previous experiences. We find that this phenomenon can be explained parsimoniously and biologically plausibly by a Hebbian spike-timing-dependent plasticity rule at inhibitory synapses. Using spiking networks at three levels of abstraction (leaky integrate-and-fire, biophysically detailed, and abstract binary), we show that this rule enables efficient inference of a model of the structure of the world. While plasticity has previously mainly been studied at excitatory synapses, we find that plasticity at excitatory synapses alone is insufficient to accomplish this type of structural learning. We present theoretical results in a simplified model showing that in the presence of Hebbian excitatory and inhibitory plasticity, the replayed sequences form a statistical estimator of a latent sequence, which converges asymptotically to the ground truth. Our work outlines a direct link between the synaptic and cognitive levels of memory consolidation, and highlights a potentially distinct conceptual role for inhibition in computing with SNNs.
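
A minimal inhibitory plasticity sketch of this general flavour is shown below, using the well-known symmetric Hebbian rule with a target rate (in the spirit of Vogels et al., 2011), not necessarily the exact rule of the talk: inhibitory weights potentiate on pre/post coincidence and depress on presynaptic spikes alone, steering the postsynaptic neuron towards a target rate. All rates and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
eta, rho0 = 0.05, 0.1      # learning rate, target postsynaptic rate
w = np.zeros(10)           # inhibitory weights onto one excitatory neuron
rates = []
for _ in range(5000):
    pre = (rng.random(10) < 0.2).astype(float)             # inhibitory input spikes
    drive = 2.0 - w @ pre                                  # excitation minus inhibition
    post = float(rng.random() < 1 / (1 + np.exp(-drive)))  # postsynaptic spike
    w = np.maximum(w + eta * pre * (post - rho0), 0.0)     # coincidence potentiates;
    rates.append(post)                                     # lone pre spikes depress
```

As inhibition grows, the initially high firing rate is pulled down towards the target, illustrating how inhibitory plasticity can shape which activity patterns the network expresses.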

Seminar · Neuroscience

Amortized inference in mind and brain

Sam Gershman
Harvard
Nov 9, 2022

Seminar · Neuroscience

A predictive-processing account of psychosis

Philipp Sterzer
University of Basel, Switzerland
Nov 1, 2022

There has been increasing interest in the neurocomputational mechanisms underlying psychotic disorders in recent years. One promising approach is based on the theoretical framework of predictive processing, which proposes that inferences regarding the state of the world are made by combining prior beliefs with sensory signals. Delusions and hallucinations are the core symptoms of psychosis and often co-occur. Yet, different predictive-processing alterations have been proposed for these two symptom dimensions, according to which the relative weighting of prior beliefs in perceptual inference is decreased or increased, respectively. I will present recent behavioural, neuroimaging, and computational work that investigated perceptual decision-making under uncertainty and ambiguity to elucidate the changes in predictive processing that may give rise to psychotic experiences. Based on the empirical findings presented, I will provide a more nuanced predictive-processing account that suggests a common mechanism for delusions and hallucinations at low levels of the predictive-processing hierarchy, but still has the potential to reconcile apparently contradictory findings in the literature. This account may help to understand the heterogeneity of psychotic phenomenology and explain changes in symptomatology over time.

Seminar · Neuroscience

Internally Organized Abstract Task Maps in the Mouse Medial Frontal Cortex

Mohamady El-Gaby
University of Oxford
Sep 28, 2022

New tasks are often similar in structure to old ones. Animals that take advantage of such conserved or “abstract” task structures can master new tasks with minimal training. To understand the neural basis of this abstraction, we developed a novel behavioural paradigm for mice, the “ABCD” task, and recorded from their medial frontal neurons as they learned. Animals learned multiple tasks in which they had to visit 4 rewarded locations on a spatial maze in sequence, defining a sequence of four “task states” (ABCD). Tasks shared the same circular transition structure (… ABCDABCD …) but differed in the spatial arrangement of rewards. As well as improving across tasks, mice inferred that A followed D (i.e. completed the loop) on the very first trial of a new task. This “zero-shot inference” is only possible if animals had learned the abstract structure of the task. Across tasks, individual medial frontal cortex (mFC) neurons maintained their tuning to the phase of an animal’s trajectory between rewards, but not their tuning to task states, even in the absence of spatial tuning. Intriguingly, groups of mFC neurons formed modules of coherently remapping neurons that maintained their tuning relationships across tasks. Such tuning relationships were expressed as replay/preplay during sleep, consistent with an internal organisation of activity into multiple, task-matched ring attractors. Remarkably, these modules were anchored to spatial locations: neurons were tuned to specific task-space “distances” from a particular spatial location. These newly discovered “spatially anchored task clocks” (SATs) suggest a novel algorithm for solving abstraction tasks. Using computational modelling, we show that SATs can perform zero-shot inference on new tasks in the absence of plasticity and guide optimal policy in the absence of continual planning. These findings provide novel insights into the frontal mechanisms mediating abstraction and flexible behaviour.

Seminar · Neuroscience · Recording

The Secret Bayesian Life of Ring Attractor Networks

Anna Kutschireiter
Spiden AG, Pfäffikon, Switzerland
Sep 7, 2022

Efficient navigation requires animals to track their position, velocity and heading direction (HD). Some animals’ behavior suggests that they also track uncertainties about these navigational variables, and make strategic use of these uncertainties, in line with a Bayesian computation. Ring-attractor networks have been proposed to estimate and track these navigational variables, for instance in the HD system of the fruit fly Drosophila. However, such networks are not designed to incorporate a notion of uncertainty, and therefore seem unsuited to implement dynamic Bayesian inference. Here, we close this gap by showing that specifically tuned ring-attractor networks can track both a HD estimate and its associated uncertainty, thereby approximating a circular Kalman filter. We identified the network motifs required to integrate angular velocity observations, e.g., through self-initiated turns, and absolute HD observations, e.g., visual landmark inputs, according to their respective reliabilities, and show that these network motifs are present in the connectome of the Drosophila HD system. Specifically, our network encodes uncertainty in the amplitude of a localized bump of neural activity, thereby generalizing standard ring attractor models. In contrast to such standard attractors, however, proper Bayesian inference requires the network dynamics to operate in a regime away from the attractor state. More generally, we show that near-Bayesian integration is inherent in generic ring attractor networks, and that their amplitude dynamics can account for close-to-optimal reliability weighting of external evidence for a wide range of network parameters. This only holds, however, if their connection strengths allow the network to sufficiently deviate from the attractor state. Overall, our work offers a novel interpretation of ring attractor networks as implementing dynamic Bayesian integrators. We further provide a principled theoretical foundation for the suggestion that the Drosophila HD system may implement Bayesian HD tracking via ring attractor dynamics.
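
The bump-amplitude idea can be sketched with a minimal circular filter. Assuming von Mises beliefs, prediction rotates the mean by angular velocity while certainty (the concentration kappa, played by the bump amplitude) decays; an update fuses prior and observation by adding 2D vectors whose angle is the mean and whose length is the certainty. The decay form and parameter names are illustrative assumptions, not the talk's exact equations.

```python
import numpy as np

def predict(mu, kappa, omega, dt, diffusion):
    """Rotate heading by angular velocity; certainty decays with diffusion."""
    mu = (mu + omega * dt) % (2 * np.pi)
    kappa = kappa / (1.0 + kappa * diffusion * dt)   # assumed form: 1/kappa grows linearly
    return mu, kappa

def update(mu, kappa, mu_obs, kappa_obs):
    """Fuse prior and observation as vectors (angle = mean, length = certainty)."""
    z = kappa * np.exp(1j * mu) + kappa_obs * np.exp(1j * mu_obs)
    return np.angle(z) % (2 * np.pi), np.abs(z)
```

Agreeing cues add certainty, so the bump amplitude grows; conflicting cues partially cancel and the bump shrinks, which is exactly the reliability weighting described above.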

Seminar · Neuroscience · Recording

Learning static and dynamic mappings with local self-supervised plasticity

Pantelis Vafeidis
California Institute of Technology
Sep 7, 2022

Animals exhibit remarkable learning capabilities with little direct supervision. Likewise, self-supervised learning is an emergent paradigm in artificial intelligence, closing the performance gap to supervised learning. In the context of biology, self-supervised learning corresponds to a setting where one sense or specific stimulus may serve as a supervisory signal for another. After learning, the latter can be used to predict the former. On the implementation level, it has been demonstrated that such predictive learning can occur at the single-neuron level, in compartmentalized neurons that separate and associate information from different streams. We demonstrate the power of such self-supervised learning over unsupervised (Hebb-like) learning rules, which depend heavily on stimulus statistics, in two examples. First, in the context of animal navigation, predictive learning can associate internal self-motion information, which is always available to the animal, with external visual landmark information, leading to accurate path integration in the dark. We focus on the well-characterized fly head direction system and show that our setting learns a connectivity strikingly similar to the one reported in experiments. The mature network is a quasi-continuous attractor and reproduces key experiments in which optogenetic stimulation controls the internal representation of heading, and in which the network remaps to integrate with different gains. Second, we show that incorporating global gating by reward prediction errors allows the same setting to learn conditioning at the neuronal level with mixed selectivity. At its core, conditioning entails associating a neural activity pattern induced by an unconditioned stimulus (US) with the pattern arising in response to a conditioned stimulus (CS). Solving the generic problem of pattern-to-pattern associations naturally leads to emergent cognitive phenomena such as blocking, overshadowing, saliency effects, extinction, and interstimulus interval effects. Surprisingly, we find that the same network offers a reductionist mechanism for causal inference by resolving the post hoc, ergo propter hoc fallacy.

Seminar · Neuroscience · Recording

Canonical neural networks perform active inference

Takuya Isomura
RIKEN CBS
Jun 10, 2022

The free-energy principle and active inference have received significant attention in the fields of neuroscience and machine learning. However, it remains to be established whether active inference is an apt explanation for any given neural network that actively interacts with its environment. To address this issue, we show that a class of canonical neural networks of rate-coding models implicitly performs variational Bayesian inference under a well-known form of partially observed Markov decision process model (Isomura, Shimazaki, Friston, Commun Biol, 2022). Based on the proposed theory, we demonstrate that canonical neural networks featuring delayed modulation of Hebbian plasticity can perform planning and adaptive behavioural control in a Bayes-optimal manner, through postdiction of their previous decisions. This scheme enables us to estimate the implicit priors under which an agent’s neural network operates and to identify a specific form of the generative model. The proposed equivalence is crucial for rendering brain activity explainable, to better understand basic neuropsychology and psychiatric disorders. Moreover, this notion can dramatically reduce the complexity of designing self-learning neuromorphic hardware to perform various types of tasks.

Seminar · Neuroscience · Recording

Children’s inference of verb meanings: Inductive, analogical and abductive inference

Mutsumi Imai
Keio University
May 18, 2022

Children need inference in order to learn the meanings of words. They must infer the referent from the situation in which a target word is said. Furthermore, to be able to use the word in other situations, they also need to infer what other referents the word can be generalized to. As verbs refer to relations between arguments, verb learning requires relational analogical inference, something which is challenging to young children. To overcome this difficulty, young children recruit a diverse range of cues in their inference of verb meanings, including, but not limited to, syntactic cues and social and pragmatic cues as well as statistical cues. They also utilize perceptual similarity (object similarity) in progressive alignment to extract relational verb meanings and further to gain insights about relational verb meanings. However, just having a list of these cues is not useful: the cues must be selected, combined, and coordinated to produce the optimal interpretation in a particular context. This process involves abductive reasoning, similar to what scientists do to form hypotheses from a range of facts or evidence. In this talk, I discuss how children use a chain of inferences to learn meanings of verbs. I consider not only the process of analogical mapping and progressive alignment, but also how children use abductive inference to find the source of analogy and gain insights into the general principles underlying verb learning. I also present recent findings from my laboratory that show that prelinguistic human infants use a rudimentary form of abductive reasoning, which enables the first step of word learning.

SeminarNeuroscienceRecording

Timescales of neural activity: their inference, control, and relevance

Anna Levina
Universität Tübingen
May 4, 2022

Timescales characterize how fast observables change in time. In neuroscience, they can be estimated from measured activity and used, for example, as a signature of the memory trace in a network. I will first discuss the inference of timescales from neuroscience data consisting of short trials and introduce a new unbiased method. Then, I will apply the method to data recorded from a local population of cortical neurons in visual area V4. I will demonstrate that ongoing spiking activity unfolds across at least two distinct timescales, fast and slow, and that the slow timescale increases when monkeys attend to the location of the receptive field. Which models can give rise to such behavior? Random balanced networks are known for their fast timescales; thus, a change in neuron or network properties is required to mimic the data. I will propose a set of models that can control effective timescales and demonstrate that only the model with strong recurrent interactions fits the neural data. Finally, I will discuss the relevance of timescales for behavior and cortical computations.
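As a loose illustration of the estimation problem (not the unbiased method from the talk), the sketch below fits an exponential decay to the empirical autocorrelation of a synthetic AR(1) process with a known timescale. This direct fit is exactly the kind of estimator that becomes biased on short trials; all parameter values here are invented.

```python
import numpy as np

def naive_timescale(x, max_lag=50, dt=1.0):
    """Estimate a decay timescale by a log-linear fit of autocorrelation vs lag.

    Naive illustration only: this direct fit is known to be biased when
    applied to short trials, which is the problem the talk's method addresses.
    """
    x = x - x.mean()
    ac = np.array([np.dot(x[:-k], x[k:]) / np.dot(x, x) for k in range(1, max_lag)])
    lags = dt * np.arange(1, max_lag)
    pos = ac > 0                                  # keep lags with positive autocorrelation
    slope = np.polyfit(lags[pos], np.log(ac[pos]), 1)[0]
    return -1.0 / slope

# Synthetic AR(1) activity with a known timescale of 10 time steps
rng = np.random.default_rng(0)
tau_true = 10.0
alpha = np.exp(-1.0 / tau_true)                   # AR(1) coefficient for timescale tau
x = np.zeros(20000)
for t in range(1, x.size):
    x[t] = alpha * x[t - 1] + rng.standard_normal()
tau_hat = naive_timescale(x)
```

With a long recording the fit recovers roughly the true timescale; shortening the trials is what exposes the bias the abstract refers to.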

SeminarNeuroscienceRecording

Neuromodulation of inference and control in the cortical circuits

Kenji Doya
OIST
Mar 11, 2022
SeminarNeuroscienceRecording

Theory of recurrent neural networks – from parameter inference to intrinsic timescales in spiking networks

Alexander van Meegen
Forschungszentrum Jülich
Jan 13, 2022
SeminarNeuroscienceRecording

Does human perception rely on probabilistic message passing?

Alex Hyafil
CRM, Barcelona
Dec 22, 2021

The idea that perception in humans relies on some form of probabilistic computation has become very popular over the last decades. However, it has been extremely difficult to characterize the extent and the nature of the probabilistic representations and operations manipulated by neural populations in the human cortex. Several theoretical works suggest that probabilistic representations are present from low-level sensory areas to high-level areas. According to this view, the neural dynamics implements some form of probabilistic message passing (e.g. neural sampling or probabilistic population coding) that solves the problem of perceptual inference. Here I will present recent experimental evidence that human and non-human primate perception implements some form of message passing. I will first review findings showing probabilistic integration of sensory evidence across space and time in primate visual cortex. Second, I will show that confidence reports in a hierarchical task reveal that uncertainty is represented at both lower and higher levels, in a way that is consistent with probabilistic message passing from lower to higher representations and from higher to lower. Finally, I will present behavioral and neural evidence that human perception takes into account pairwise correlations in sequences of sensory samples, in agreement with the message-passing hypothesis and against standard accounts such as accumulation of sensory evidence or predictive coding.

SeminarNeuroscienceRecording

Deep Internal learning -- Deep Visual Inference without prior examples

Michal Irani
Weizmann Inst.
Dec 21, 2021
SeminarNeuroscienceRecording

NMC4 Short Talk: Neurocomputational mechanisms of causal inference during multisensory processing in the macaque brain

Guangyao Qi
Institute of Neuroscience, Chinese Academy of Sciences
Dec 2, 2021

Natural perception relies inherently on inferring causal structure in the environment. However, the neural mechanisms and functional circuits that are essential for representing and updating the hidden causal structure during multisensory processing are unknown. To address this, monkeys were trained to infer the probability of a potential common source from visual and proprioceptive signals on the basis of their spatial disparity in a virtual reality system. The proprioceptive drift reported by monkeys demonstrated that they combined historical information and current multisensory signals to estimate the hidden common source and subsequently updated both the causal structure and sensory representation. Single-unit recordings in premotor and parietal cortices revealed that neural activity in premotor cortex represents the core computation of causal inference, characterizing the estimation and update of the likelihood of integrating multiple sensory inputs at a trial-by-trial level. In response to signals from premotor cortex, neural activity in parietal cortex also represents the causal structure and further dynamically updates the sensory representation to maintain consistency with the causal inference structure. Thus, our results indicate how premotor cortex integrates historical information and sensory inputs to infer hidden variables and selectively updates sensory representations in parietal cortex to support behavior. This dynamic loop of frontal-parietal interactions in the causal inference framework may provide the neural mechanism to answer long-standing questions regarding how neural circuits represent hidden structures for body-awareness and agency.
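The core computation the abstract describes, weighing a common-cause against a separate-cause explanation of two noisy cues, can be sketched with the standard Gaussian causal-inference model (Körding et al., 2007). The noise and prior parameters below are illustrative, not values fitted in the study.

```python
import numpy as np

# Hypothetical parameter values for illustration
sigma_v, sigma_p = 1.0, 2.0      # visual / proprioceptive noise (deg)
sigma_s, p_common = 5.0, 0.5     # prior width over positions, prior P(C = 1)

def p_common_cause(x_v, x_p):
    """Posterior probability that visual and proprioceptive cues share one source,
    following the standard Gaussian causal-inference model."""
    # Likelihood under a common cause: integrate out the shared source position
    var_c1 = sigma_v**2 * sigma_p**2 + sigma_s**2 * (sigma_v**2 + sigma_p**2)
    like_c1 = np.exp(-((x_v - x_p)**2 * sigma_s**2
                       + x_v**2 * sigma_p**2 + x_p**2 * sigma_v**2)
                     / (2 * var_c1)) / (2 * np.pi * np.sqrt(var_c1))
    # Likelihood under independent causes: two separate marginals
    var_v = sigma_v**2 + sigma_s**2
    var_p = sigma_p**2 + sigma_s**2
    like_c2 = (np.exp(-x_v**2 / (2 * var_v)) / np.sqrt(2 * np.pi * var_v)
               * np.exp(-x_p**2 / (2 * var_p)) / np.sqrt(2 * np.pi * var_p))
    return p_common * like_c1 / (p_common * like_c1 + (1 - p_common) * like_c2)
```

As the spatial disparity between the cues grows, the posterior probability of a common source falls, which is the quantity the monkeys were trained to track.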

SeminarNeuroscienceRecording

NMC4 Keynote: Latent variable modeling of neural population dynamics - where do we go from here?

Chethan Pandarinath
Georgia Tech & Emory University
Dec 1, 2021

Large-scale recordings of neural activity are providing new opportunities to study network-level dynamics with unprecedented detail. However, the sheer volume of data and its dynamical complexity are major barriers to uncovering and interpreting these dynamics. I will present machine learning frameworks that enable inference of dynamics from neuronal population spiking activity on single trials and millisecond timescales, from diverse brain areas, and without regard to behavior. I will then demonstrate extensions that allow recovery of dynamics from two-photon calcium imaging data with surprising precision. Finally, I will discuss our efforts to facilitate comparisons within our field by curating datasets and standardizing model evaluation, including a currently active modeling challenge, the 2021 Neural Latents Benchmark [neurallatents.github.io].

SeminarNeuroscienceRecording

Suboptimal human inference inverts the bias-variance trade-off for decisions with asymmetric evidence

Tahra Eissa
University of Colorado Boulder
Dec 1, 2021

Solutions to challenging inference problems are often subject to a fundamental trade-off between bias (being systematically wrong), which is minimized by complex inference strategies, and variance (being oversensitive to uncertain observations), which is minimized by simple inference strategies. However, this trade-off assumes that the strategies under consideration are optimal for their given complexity, so its relevance to the frequently suboptimal inference strategies used by humans is unclear. We examined inference problems involving rare, asymmetrically available evidence, which a large population of human subjects solved using a diverse set of strategies that were suboptimal relative to the Bayesian ideal observer. These suboptimal strategies reflected an inversion of the classic bias-variance trade-off: subjects who used more complex, but imperfect, Bayesian-like strategies tended to have lower variance but high bias because of incorrect tuning to latent task features, whereas subjects who used simpler heuristic strategies tended to have higher variance, because they operated more directly on the observed samples, but displayed weaker, near-normative bias. Our results yield new insights into the principles that govern individual differences in behavior that depends on rare-event inference and, more generally, into information-processing trade-offs that are sensitive not just to the complexity but also to the optimality of the inference process.
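To make the strategy comparison concrete, here is a hypothetical rare-event task loosely in the same spirit: decide which of two sources produced n binary samples when the rare symbol is far more likely under one of them. The sketch pits an ideal Bayesian observer against a simple counting heuristic; the rates and sample counts are invented for illustration and are not the study's task parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
q_h, q_l, n, trials = 0.2, 0.05, 10, 20000   # rare-event rates, samples/trial, trials

def ideal_choice(k):
    # Bayesian log-likelihood ratio from k rare events out of n samples (flat prior)
    llr = k * np.log(q_h / q_l) + (n - k) * np.log((1 - q_h) / (1 - q_l))
    return llr > 0

def heuristic_choice(k):
    # Simple counting heuristic: report the high-rate source whenever any rare event is seen
    return k > 0

source_h = rng.random(trials) < 0.5                       # true source per trial
k = rng.binomial(n, np.where(source_h, q_h, q_l))         # observed rare-event counts
acc_ideal = np.mean(ideal_choice(k) == source_h)
acc_heur = np.mean(heuristic_choice(k) == source_h)
```

The ideal observer outperforms the heuristic here, but the two differ in how their errors decompose into bias and variance, which is the distinction the abstract turns on.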

SeminarNeuroscienceRecording

Generative models of brain function: Inference, networks, and mechanisms

Adeel Razi
Monash University
Nov 26, 2021

This talk will focus on the generative modelling of resting state time series or endogenous neuronal activity. I will survey developments in modelling distributed neuronal fluctuations – spectral dynamic causal modelling (DCM) for functional MRI – and how this modelling rests upon functional connectivity. The dynamics of brain connectivity has recently attracted a lot of attention among brain mappers. I will also show a novel method to identify dynamic effective connectivity using spectral DCM. Further, I will summarise the development of the next generation of DCMs towards large-scale, whole-brain schemes which are computationally inexpensive, to the other extreme of the development using more sophisticated and biophysically detailed generative models based on the canonical microcircuits.

SeminarNeuroscienceRecording

Design principles of adaptable neural codes

Ann Hermundstad
Janelia
Nov 19, 2021

Behavior relies on the ability of sensory systems to infer changing properties of the environment from incoming sensory stimuli. However, the demands that detecting and adjusting to changes in the environment place on a sensory system often differ from the demands associated with performing a specific behavioral task. This necessitates neural coding strategies that can dynamically balance these conflicting needs. I will discuss our ongoing theoretical work to understand how this balance can best be achieved. We connect ideas from efficient coding and Bayesian inference to ask how sensory systems should dynamically allocate limited resources when the goal is to optimally infer changing latent states of the environment, rather than reconstruct incoming stimuli. We use these ideas to explore dynamic tradeoffs between the efficiency and speed of sensory adaptation schemes, and the downstream computations that these schemes might support. Finally, we derive families of codes that balance these competing objectives, and we demonstrate their close match to experimentally-observed neural dynamics during sensory adaptation. These results provide a unifying perspective on adaptive neural dynamics across a range of sensory systems, environments, and sensory tasks.

SeminarNeuroscienceRecording

Conflict in Multisensory Perception

Salvador Soto-Faraco
Universitat Pompeu Fabra
Nov 11, 2021

Multisensory perception is often studied through the effects of inter-sensory conflict, such as in the McGurk effect, the Ventriloquist illusion, and the Rubber Hand Illusion. Moreover, Bayesian approaches to cue fusion and causal inference overwhelmingly draw on cross-modal conflict to measure and to model multisensory perception. Given the prevalence of conflict, it is remarkable that accounts of multisensory perception have so far neglected the theory of conflict monitoring and cognitive control, established about twenty years ago. I hope to make a case for the role of conflict monitoring and resolution during multisensory perception. To this end, I will present EEG and fMRI data showing that cross-modal conflict in speech, resulting in either integration or segregation, triggers neural mechanisms of conflict detection and resolution. I will also present data supporting a role of these mechanisms during perceptual conflict in general, using Binocular Rivalry, surrealistic imagery, and cinema. Based on this preliminary evidence, I will argue that it is worth considering the potential role of conflict in multisensory perception and its incorporation in a causal inference framework. Finally, I will raise some potential problems associated with this proposal.

SeminarNeuroscienceRecording

Efficient GPU training of SNNs using approximate RTRL

James Knight
University of Sussex
Nov 3, 2021

Last year’s SNUFA workshop report concluded “Moving toward neuron numbers comparable with biology and applying these networks to real-world data-sets will require the development of novel algorithms, software libraries, and dedicated hardware accelerators that perform well with the specifics of spiking neural networks” [1]. Taking inspiration from machine learning libraries, where techniques such as parallel batch training minimise latency and maximise GPU occupancy, as well as from our previous research on efficiently simulating SNNs on GPUs for computational neuroscience [2,3], we are extending our GeNN SNN simulator to pursue this vision. To explore GeNN’s potential, we use the eProp learning rule [4], which approximates RTRL, to train SNN classifiers on the Spiking Heidelberg Digits and Spiking Sequential MNIST datasets. We find that the performance of these classifiers is comparable to those trained using BPTT [5] and verify that the theoretical advantages of neuron models with adaptation dynamics [5] translate to improved classification performance. We then measured execution times and found that training an SNN classifier using GeNN and eProp becomes faster than SpyTorch and BPTT after less than 685 timesteps, and that much larger models can be trained on the same GPU when using GeNN. Furthermore, we demonstrate that our implementation of parallel batch training improves training performance by over 4× and enables near-perfect scaling across multiple GPUs. Finally, we show that performing inference with a recurrent SNN in GeNN uses less energy and has lower latency than a comparable LSTM simulated with TensorFlow [6].

SeminarNeuroscienceRecording

Machine Learning with SNNs for low-power inference on neuromorphic hardware

Dylan Muir
SynSense
Nov 3, 2021
SeminarNeuroscienceRecording

Beyond the binding problem: From basic affordances to symbolic thought

John E. Hummel
University of Illinois
Sep 30, 2021

Human cognitive abilities seem qualitatively different from the cognitive abilities of other primates, a difference Penn, Holyoak, and Povinelli (2008) attribute to role-based relational reasoning—inferences and generalizations based on the relational roles to which objects (and other relations) are bound, rather than just the features of the objects themselves. Role-based relational reasoning depends on the ability to dynamically bind arguments to relational roles. But dynamic binding cannot be sufficient for relational thinking: Some non-human animals solve the dynamic binding problem, at least in some domains; and many non-human species generalize affordances to completely novel objects and scenes, a kind of universal generalization that likely depends on dynamic binding. If they can solve the dynamic binding problem, then why can they not reason about relations? What are they missing? I will present simulations with the LISA model of analogical reasoning (Hummel & Holyoak, 1997, 2003) suggesting that the missing pieces are multi-role integration (the capacity to combine multiple role bindings into complete relations) and structure mapping (the capacity to map different systems of role bindings onto one another). When LISA is deprived of either of these capacities, it can still generalize affordances universally, but it cannot reason symbolically; granted both abilities, LISA enjoys the full power of relational (symbolic) thought. I speculate that one reason it may have taken relational reasoning so long to evolve is that it required evolution to solve both problems simultaneously, since neither multi-role integration nor structure mapping appears to confer any adaptive advantage over simple role binding on its own.

SeminarNeuroscienceRecording

The role of the primate prefrontal cortex in inferring the state of the world and predicting change

Ramon Bartolo
Averbeck lab, National Institute of Mental Health
Sep 8, 2021

In an ever-changing environment, uncertainty is omnipresent. To deal with this, organisms have evolved mechanisms that allow them to take advantage of environmental regularities in order to make decisions robustly and adjust their behavior efficiently, thus maximizing their chances of survival. In this talk, I will present behavioral evidence that animals perform model-based state inference to predict environmental state changes and adjust their behavior rapidly, rather than slowly updating choice values. This model-based inference process can be described using Bayesian change-point models. Furthermore, I will show that neural populations in the prefrontal cortex accurately predict behavioral switches, and that the activity of these populations is associated with Bayesian estimates. In addition, we will see that learning leads to the emergence of a high-dimensional representational subspace that can be reused when the animals re-learn a previously learned set of action-value associations. Altogether, these findings highlight the role of the PFC in representing a belief about the current state of the world.

SeminarNeuroscience

From real problems to beast machines: the somatic basis of selfhood

Anil Seth
University of Sussex
Jun 30, 2021

At the foundation of human conscious experience lie basic embodied experiences of selfhood – experiences of simply ‘being alive’. In this talk, I will make the case that this central feature of human existence is underpinned by predictive regulation of the interior of the body, using the framework of predictive processing, or active inference. I start by showing how conscious experiences of the world around us can be understood in terms of perceptual predictions, drawing on examples from psychophysics and virtual reality. Then, turning the lens inwards, we will see how the experience of being an ‘embodied self’ rests on control-oriented predictive (allostatic) regulation of the body’s physiological condition. This approach implies a deep connection between mind and life, and provides a new way to understand the subjective nature of consciousness as emerging from systems that care intrinsically about their own existence. Contrary to the old doctrine of Descartes, we are conscious because we are beast machines.

SeminarPhysics of Life

Tutorial: inference in biological physics

Phil Nelson
University of Pennsylvania
Jun 25, 2021
SeminarNeuroscienceRecording

The neural dynamics of causal inference across the cortical hierarchy

Uta Noppeney
Donders Institute for Brain, Cognition and Behaviour
May 27, 2021
SeminarOpen SourceRecording

Suite2p: a multipurpose functional segmentation pipeline for cellular imaging

Carsen Stringer
HHMI Janelia Research Campus
May 21, 2021

The combination of two-photon microscopy recordings and powerful calcium-dependent fluorescent sensors enables simultaneous recording of unprecedentedly large populations of neurons. While these sensors have matured over several generations of development, computational methods to process their fluorescence are often inefficient and the results hard to interpret. Here we introduce Suite2p: a fast, accurate, parameter-free and complete pipeline that registers raw movies, detects active and/or inactive cells (using Cellpose), extracts their calcium traces and infers their spike times. Suite2p runs faster than real time on standard workstations and outperforms state-of-the-art methods on newly developed ground-truth benchmarks for motion correction and cell detection.

SeminarNeuroscienceRecording

Design principles of adaptable neural codes

Ann Hermundstad
Janelia Research Campus
May 5, 2021

Behavior relies on the ability of sensory systems to infer changing properties of the environment from incoming sensory stimuli. However, the demands that detecting and adjusting to changes in the environment place on a sensory system often differ from the demands associated with performing a specific behavioral task. This necessitates neural coding strategies that can dynamically balance these conflicting needs. I will discuss our ongoing theoretical work to understand how this balance can best be achieved. We connect ideas from efficient coding and Bayesian inference to ask how sensory systems should dynamically allocate limited resources when the goal is to optimally infer changing latent states of the environment, rather than reconstruct incoming stimuli. We use these ideas to explore dynamic tradeoffs between the efficiency and speed of sensory adaptation schemes, and the downstream computations that these schemes might support. Finally, we derive families of codes that balance these competing objectives, and we demonstrate their close match to experimentally-observed neural dynamics during sensory adaptation. These results provide a unifying perspective on adaptive neural dynamics across a range of sensory systems, environments, and sensory tasks.

SeminarNeuroscience

Abstraction and inference in the prefrontal hippocampal circuitry

Tim Behrens
Oxford University, UK
May 3, 2021
SeminarNeuroscience

Understanding "why": The role of causality in cognition

Tobias Gerstenberg
Stanford University
Apr 28, 2021

Humans have a remarkable ability to figure out what happened and why. In this talk, I will shed light on this ability from multiple angles. I will present a computational framework for modeling causal explanations in terms of counterfactual simulations, and several lines of experiments testing this framework in the domain of intuitive physics. The model predicts people's causal judgments about a variety of physical scenes, including dynamic collision events, complex situations that involve multiple causes, omissions as causes, and causal responsibility for a system's stability. It also captures the cognitive processes underlying these judgments as revealed by spontaneous eye-movements. More recently, we have applied our computational framework to explain multisensory integration. I will show how people's inferences about what happened are well-accounted for by a model that integrates visual and auditory evidence through approximate physical simulations.

SeminarNeuroscienceRecording

Neural dynamics underlying temporal inference

Devika Narain
Erasmus Medical Centre
Apr 27, 2021

Animals possess the ability to effortlessly and precisely time their actions even though information received from the world is often ambiguous and is inadvertently transformed as it passes through the nervous system. With such uncertainty pervading through our nervous systems, we could expect that much of human and animal behavior relies on inference that incorporates an important additional source of information, prior knowledge of the environment. These concepts have long been studied under the framework of Bayesian inference with substantial corroboration over the last decade that human time perception is consistent with such models. We, however, know little about the neural mechanisms that enable Bayesian signatures to emerge in temporal perception. I will present our work on three facets of this problem, how Bayesian estimates are encoded in neural populations, how these estimates are used to generate time intervals, and how prior knowledge for these tasks is acquired and optimized by neural circuits. We trained monkeys to perform an interval reproduction task and found their behavior to be consistent with Bayesian inference. Using insights from electrophysiology and in silico models, we propose a mechanism by which cortical populations encode Bayesian estimates and utilize them to generate time intervals. Thereafter, I will present a circuit model for how temporal priors can be acquired by cerebellar machinery leading to estimates consistent with Bayesian theory. Based on electrophysiology and anatomy experiments in rodents, I will provide some support for this model. Overall, these findings attempt to bridge insights from normative frameworks of Bayesian inference with potential neural implementations for the acquisition, estimation, and production of timing behaviors.
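A minimal sketch of the kind of Bayesian interval estimate the talk builds on (cf. Jazayeri & Shadlen, 2010): a Bayes least-squares estimator combining scalar (Weber-like) measurement noise with a uniform prior over intervals. The Weber fraction and prior range below are assumed values for illustration, not fitted ones.

```python
import numpy as np

def bls_estimate(t_m, w=0.15, prior=(0.6, 1.0), n_grid=2001):
    """Bayes least-squares estimate of an interval from a noisy measurement t_m.

    Measurement noise is Gaussian with standard deviation w*t (scalar
    variability); the prior over intervals is uniform on `prior` (seconds).
    """
    t = np.linspace(*prior, n_grid)                          # uniform prior support
    like = np.exp(-(t_m - t)**2 / (2 * (w * t)**2)) / (w * t)  # p(t_m | t)
    return np.sum(t * like) / np.sum(like)                   # posterior mean

# Estimates regress toward the middle of the prior, the classic Bayesian signature
estimates = [bls_estimate(t_m) for t_m in (0.6, 0.8, 1.0)]
```

Measurements at the edges of the prior range are pulled inward, reproducing the central-tendency bias that makes interval-reproduction behavior consistent with Bayesian inference.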

SeminarNeuroscienceRecording

How multisensory perception is shaped by causal inference and serial effects

Christoph Kayser
Bielefeld University
Apr 22, 2021
SeminarNeuroscienceRecording

Learning in pain: probabilistic inference and (mal)adaptive control

Flavia Mancini
Department of Engineering
Apr 20, 2021

Pain is a major clinical problem affecting 1 in 5 people in the world. There are unresolved questions that urgently require answers if pain is to be treated effectively, a crucial one being how the feeling of pain arises from brain activity. Computational models of pain consider how the brain processes noxious information and allow neural circuits and networks to be mapped to cognition and behaviour. To date, they have generally assumed two largely independent processes: perceptual and/or predictive inference, typically modelled as an approximate Bayesian process, and action control, typically modelled as a reinforcement learning process. However, inference and control are intertwined in complex ways, challenging the clarity of this distinction. I will discuss how they may comprise a parallel hierarchical architecture that combines pain inference, information-seeking, and adaptive value-based control. Finally, I will discuss whether and how these learning processes might contribute to chronic pain.

SeminarNeuroscienceRecording

Structure-mapping in Human Learning

Dedre Gentner
Northwestern University
Apr 2, 2021

Across species, humans are uniquely able to acquire deep relational systems of the kind needed for mathematics, science, and human language. Analogical comparison processes are a major contributor to this ability. Analogical comparison engages a structure-mapping process (Gentner, 1983) that fosters learning in at least three ways: first, it highlights common relational systems and thereby promotes abstraction; second, it promotes inferences from known situations to less familiar situations; and, third, it reveals potentially important differences between examples. In short, structure-mapping is a domain-general learning process by which abstract, portable knowledge can arise from experience. It is operative from early infancy on, and is critical to the rapid learning we see in human children. Although structure-mapping processes are present pre-linguistically, their scope is greatly amplified by language. Analogical processes are instrumental in learning relational language, and the reverse is also true: relational language acts to preserve relational abstractions and render them accessible for future learning and reasoning.

SeminarNeuroscience

Precision and Temporal Stability of Directionality Inferences from Group Iterative Multiple Model Estimation (GIMME) Brain Network Models

Alexander Weigard
University of Michigan
Mar 30, 2021

The Group Iterative Multiple Model Estimation (GIMME) framework has emerged as a promising method for characterizing connections between brain regions in functional neuroimaging data. Two of the most appealing features of this framework are its ability to estimate the directionality of connections between network nodes and its ability to determine whether those connections apply to everyone in a sample (group-level) or just to one person (individual-level). However, there are outstanding questions about the validity and stability of these estimates, including: 1) how recovery of connection directionality is affected by features of data sets such as scan length and autoregressive effects, which may be strong in some imaging modalities (resting state fMRI, fNIRS) but weaker in others (task fMRI); and 2) whether inferences about directionality at the group and individual levels are stable across time. This talk will provide an overview of the GIMME framework and describe relevant results from a large-scale simulation study that assesses directionality recovery under various conditions and a separate project that investigates the temporal stability of GIMME’s inferences in the Human Connectome Project data set. Analyses from these projects demonstrate that estimates of directionality are most precise when autoregressive and cross-lagged relations in the data are relatively strong, and that inferences about the directionality of group-level connections, specifically, appear to be stable across time. Implications of these findings for the interpretation of directional connectivity estimates in different types of neuroimaging data will be discussed.

SeminarNeuroscienceRecording

Inferring brain-wide interactions using data-constrained recurrent neural network models

Matthew Perich
Rajan lab, Icahn School of Medicine at Mount Sinai
Mar 24, 2021

Behavior arises from the coordinated activity of numerous distinct brain regions. Modern experimental tools allow access to neural populations brain-wide, yet understanding such large-scale datasets necessitates scalable computational models to extract meaningful features of inter-region communication. In this talk, I will introduce Current-Based Decomposition (CURBD), an approach for inferring multi-region interactions using data-constrained recurrent neural network models. I will first show that CURBD accurately isolates inter-region currents in simulated networks with known dynamics. I will then apply CURBD to understand the brain-wide flow of information leading to behavioral state transitions in larval zebrafish. These examples will establish CURBD as a flexible, scalable framework to infer brain-wide interactions that are inaccessible from experimental measurements alone.
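The decomposition at the heart of CURBD can be stated in a few lines: once a recurrent weight matrix J has been fitted to multi-region activity r(t), the total current into one region splits exactly into source-region contributions J[A, B] @ r_B(t). The sketch below uses a random J and random activity purely to show this bookkeeping; it is not the data-constrained model-fitting step.

```python
import numpy as np

rng = np.random.default_rng(3)
n_a, n_b, T = 4, 6, 100                       # neurons in regions A and B, time steps
N = n_a + n_b
J = rng.standard_normal((N, N)) / np.sqrt(N)  # stand-in for a fitted weight matrix
r = np.tanh(rng.standard_normal((N, T)))      # stand-in for recorded/fitted activity

regions = {"A": slice(0, n_a), "B": slice(n_a, N)}

# Current into region A, decomposed by source region
curr_from_A = J[regions["A"], regions["A"]] @ r[regions["A"]]   # within-region current
curr_from_B = J[regions["A"], regions["B"]] @ r[regions["B"]]   # inter-region current
total = J[regions["A"], :] @ r                                  # full current into A
```

By linearity the source-wise currents sum exactly to the total input current, which is what lets the method isolate inter-region communication.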

SeminarNeuroscienceRecording

Hebbian learning, its inference, and brain oscillation

Sukbin Lim
NYU Shanghai
Mar 24, 2021

Despite the recent success of deep learning in artificial intelligence, the lack of biological plausibility and of labeled data in natural learning still poses a challenge for understanding biological learning. At the other extreme lies Hebbian learning, the simplest local and unsupervised scheme, yet one considered computationally less efficient. In this talk, I will introduce a novel method to infer the form of Hebbian learning from in vivo data. Applying the method to data obtained from the monkey inferior temporal cortex during a recognition task indicates how Hebbian learning changes the dynamic properties of the circuits and may promote brain oscillation. Notably, recent electrophysiological data observed in rodent V1 showed that the effect of visual experience on direction selectivity was similar to that observed in the monkey data, providing strong validation of the asymmetric changes of feedforward and recurrent synaptic strengths inferred from the monkey data. This may suggest a general learning principle underlying the same computation, such as familiarity detection, across different features represented in different brain regions.
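As a toy version of the inference problem (not the talk's method): if weight changes follow an assumed Hebbian-style rule dW = a·pre·post + b·pre + c·post, the rule's coefficients can be recovered from observed activity and weight changes by least squares. All numbers below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
a, b, c = 0.8, -0.3, 0.1                      # ground-truth rule coefficients (assumed)
pre, post = rng.random(500), rng.random(500)  # pre-/postsynaptic firing rates
# Synthetic weight changes generated by the rule, plus small observation noise
dw = a * pre * post + b * pre + c * post + 0.01 * rng.standard_normal(500)

# Regress the observed changes onto the rule's candidate terms
X = np.column_stack([pre * post, pre, post])
coef, *_ = np.linalg.lstsq(X, dw, rcond=None)
```

When the candidate terms span the true rule, the regression recovers its coefficients; the interesting scientific work is in choosing the terms and constraining them with in vivo data.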

ePoster

Artifact identification in transfer entropy connectivity inference of neuronal cultures

Mikel Ocio-Moliner, Angelo Piga, Jordi Soriano

FENS Forum 2024

ePoster

Bayesian inference and arousal modulation in spatial perception to mitigate stochasticity and volatility

David Meijer, Fabian Dorok, Roberto Barumerli, Burcu Bayram, Michelle Spierings, Ulrich Pomper, Robert Baumgartner

Bernstein Conference 2024

ePoster

Parameter specification in spiking neural networks using simulation-based inference

Daniel Todt, Sandra Diaz, Abigail Morrison

Bernstein Conference 2024

ePoster

The role of feedback in dynamic inference for spatial navigation under uncertainty

Albert Chen, Jan Drugowitsch

Bernstein Conference 2024

ePoster

Super-Oscillators: Simulation-based inference for estimating alpha rhythm model parameters for high SNR recordings

Natalie Schaworonkow, Richard Gao

Bernstein Conference 2024

ePoster

Auxiliary neurons in optimized recurrent neural circuit speed up sampling-based probabilistic inference

Wah Ming Wayne Soo, Máté Lengyel

COSYNE 2022

ePoster

Bayesian Inference in High-Dimensional Time-Series with the Orthogonal Stochastic Linear Mixing Model

Rui Meng, Kristofer Bouchard

COSYNE 2022

ePoster

Causal inference can explain hierarchical motion perception and is reflected in neural responses in MT

Sabyasachi Shivkumar, Zhexin Xu, Gábor Lengyel, Gregory DeAngelis, Ralf Haefner

COSYNE 2022

ePoster

Efficient inference of synaptic learning rule with Conditional Gaussian Method

Shirui Chen, Sukbin Lim, Qixin Yang

COSYNE 2022

ePoster

A GABAergic plasticity mechanism for world structure inference by CA3

Zhenrui Liao, Darian Hadjiabadi, Satoshi Terada, Ivan Soltesz, Attila Losonczy

COSYNE 2022

ePoster

Inference of the time-varying relationship between spike trains and a latent decision variable

Thomas Luo, Brian DePasquale, Carlos D. Brody, Timothy Kim

COSYNE 2022

ePoster

Occam’s razor guides intuitive human inference

Eugenio Piasini, Shuze Liu, Vijay Balasubramanian, Joshua Gold

COSYNE 2022

ePoster

A neural circuit model of hidden state inference for navigation and contextual memory

Isabel Low, Scott Linderman, Lisa Giocomo, Alex Williams

COSYNE 2022

ePoster

The neural code controls the geometry of probabilistic inference in early olfactory processing

Paul Masset, Jacob Zavatone-Veth, Venkatesh N. Murthy, Cengiz Pehlevan

COSYNE 2022

ePoster

Optimists and realists: heterogeneous priors in rats performing hidden state inference

Andrew Mah, Christine Constantinople

COSYNE 2022

ePoster

Sensory specific modulation of neural variability facilitates perceptual inference

Hyeyoung Shin, Hillel Adesnik

COSYNE 2022

ePoster

Structure in motion: visual motion perception as online hierarchical inference

Johannes Bill, Samuel J. Gershman, Jan Drugowitsch

COSYNE 2022

ePoster

Unsupervised inference of brain-wide functional motifs underlying behavioral state transitions

Matthew Perich, Tyler Benster, Aaron Andalman, Daphne Cornelisse, Eugene Carter, Karl Deisseroth, Kanaka Rajan

COSYNE 2022

ePoster

Alternating inference and learning: a thalamocortical model for continual and transfer learning

Ali Hummos & Guangyu Robert Yang

COSYNE 2023

ePoster

Approximate inference through active computation accounts for human categorization behavior

Xiang Li, Luigi Acerbi, Wei Ji Ma

COSYNE 2023

ePoster

Brain-Rhythm-based Inference (BRyBI) for time-scale invariant speech processing

Olesia Dogonasheva, Denis Zakharov, Anne-Lise Giraud, Boris Gutkin

COSYNE 2023

ePoster

Brain wide distribution of prior belief constrains neural models of probabilistic inference

Felix Hubert, Charles Findling, Berk Gerçek, Brandon Benson, Matthew Whiteway, Christopher Krasniak, Anthony Zador, The International Brain Lab, Peter Dayan, Alexandre Pouget

COSYNE 2023

ePoster

A causal inference model of spike train interactions in fast response regimes

Zachary Saccomano & Asohan Amarasingham

COSYNE 2023

ePoster

Divisive normalization as a mechanism for hierarchical causal inference in motion perception

Boris Penaloza, Sabyasachi Shivkumar, Gabor Lengyel, Linghao Xu, Gregory DeAngelis, Ralf Haefner

COSYNE 2023

ePoster

Mice alternate between inference- and stimulus-bound strategies during probabilistic foraging

Daniel Burnham, Zachary Mainen, Fanny Cazettes, Luca Mazzucato

COSYNE 2023

ePoster

Network dynamics implement optimal inference in a flexible timing task

John Schwarcz, Eran Lottem, Jonathan Kadmon

COSYNE 2023

ePoster

Statistical learning yields generalization and naturalistic behaviors in transitive inference

Samuel Lippl, Larry Abbott, Kenneth Kay, Greg Jensen, Vincent Ferrera

COSYNE 2023

ePoster

Violations of transitivity disrupt relational inference in humans and reinforcement learning models

Thomas Graham & Bernhard Spitzer

COSYNE 2023

ePoster

Bayesian causal inference predicts center-surround interactions in MT

Gabor Lengyel, Sabyasachi Shivkumar, Gregory DeAngelis, Ralf Haefner

COSYNE 2025

ePoster

Compositional inference in the continual learning mouse playground

Aneesh Bal, Andrea Santi, Cecelia Shuai, Samantha Soto, Joshua Vogelstein, Patricia Janak, Kishore V. Kuchibhotla

COSYNE 2025

ePoster

Contextual inference accounts for differences in motor learning under distinct curricula

Sabyasachi Shivkumar, James Ingram, Mate Lengyel, Daniel Wolpert

COSYNE 2025