
Bayesian Inference

Discover seminars, jobs, and research tagged with bayesian inference across World Wide.
31 curated items: 13 Positions · 11 Seminars · 7 ePosters
Position

Dr Flavia Mancini

Computational and Biological Learning, Department of Engineering, University of Cambridge
Cambridge, UK
Dec 5, 2025

This is an opportunity for a highly creative and skilled pre-doctoral Research Assistant to join the dynamic and multidisciplinary research environment of the Computational and Biological Learning research group (https://www.cbl-cambridge.org/), Department of Engineering, University of Cambridge. We are looking for a Research Assistant to work on projects related to statistical learning and contextual inference in the human brain. We have a particular focus on the learning of aversive states, as this has strong clinical significance for chronic pain and mental health disorders. The RA will be supervised by Dr Flavia Mancini (MRC Career Development Fellow and Head of the Nox Lab, www.noxlab.org), and is expected to collaborate with theoretical and experimental colleagues in Cambridge, Oxford and abroad. The post holder will be located in central Cambridge, Cambridgeshire, UK. As a general approach, we combine statistical learning tasks in humans and computational modelling (using Bayesian inference, reinforcement learning, deep learning and neural networks) with neuroimaging methods (including 7T fMRI). The successful candidate will strengthen this approach and be responsible for designing experiments, and for collecting and analysing behavioural and fMRI data using computational modelling techniques. The key responsibilities and duties are: ideating and conducting research studies on statistical/aversive learning, combining behavioural tasks and computational modelling (using Bayesian inference, reinforcement learning, deep learning and/or neural networks) with fMRI in healthy volunteers and chronic pain patients; disseminating research findings; and maintaining and developing technical skills to expand their scientific potential. More info and to apply: https://www.jobs.cam.ac.uk/job/35905/

Position · Computational Neuroscience

Prof. Wenhao Zhang

UT Southwestern Medical Center
Dallas, Texas, USA
Dec 5, 2025

The Computational Neuroscience lab directed by Dr. Wenhao Zhang at the University of Texas Southwestern Medical Center (www.zhang-cnl.org) is currently seeking up to two postdoctoral fellows to study cutting-edge problems in computational neuroscience. Research topics include: (1) the neural circuit implementation of normative computation, e.g., Bayesian (causal) inference; (2) dynamical analysis of recurrent neural circuit models; (3) modern deep learning methods to solve neuroscience problems. Successful candidates are expected to play an active and independent role in one of our research topics. Collaboration with experimental neuroscientists, both within UT Southwestern and abroad, is strongly encouraged for all projects. The initial appointment is for one year, with the expectation of extension given satisfactory performance. UT Southwestern provides competitive salary and benefits packages.

Position

Prof Iain Couzin

University of Konstanz
Konstanz, Germany
Dec 5, 2025

Despite the fact that social transmission of information is vital to many group-living animals, the organizing principles governing the networks of interaction that give rise to collective properties of animal groups remain poorly understood. The student will employ an integrated empirical and theoretical approach to investigate the relationship between individual computation (cognition at the level of the ‘nodes’ within the social network) and collective computation (computation arising from the structure of the social network). The challenge for individuals in groups is to be both robust to noise, and yet sensitive to meaningful (often small) changes in the physical or social environment, such as when a predator is present. There exist two non-mutually-exclusive hypotheses for how individuals in groups could modulate the degree to which sensory input to the network is amplified: 1) individuals may adjust internal state variables (e.g. response thresholds), effectively adjusting the sensitivity of the “nodes” within the network to sensory input, and/or 2) individuals may change their spatial relationships with neighbors (such as by modulating density), so that changes in the structure and strength of connections in the network modulate the information-transfer capabilities, and thus collective responsiveness, of groups. Using schooling fish as a model system we will investigate these hypotheses under a range of highly controlled, ecologically relevant scenarios that vary in terms of timescale and type of response, including during predator avoidance as well as the search for, and exploitation of, resources.
We will employ techniques such as Bayesian inference and unsupervised learning, developed in computational neuroscience and machine learning, to identify, reconstruct, and analyze the directed and time-varying sensory networks within groups, and to relate these to the functional networks of social influence. As in neuroscience, we care about stimulus-dependent, history-dependent discrete stochastic events, including burstiness, refractoriness and habituation, and throughout we will seek to isolate principles that extend beyond the specificities of our system. For more information see: https://www.smartnets-etn.eu/collective-computation-in-large-animal-groups/

Position

Luigi Acerbi

Department of Computer Science, University of Helsinki
Helsinki, Finland
Dec 5, 2025

The main goal of the project is to extend and improve on our VBMC framework for efficient probabilistic inference with moderately-to-very expensive models, published in multiple papers, available in MATLAB and recently released for Python. We aim to perform Bayesian inference for parameters of complex, expensive state-of-the-art models in fields such as cognitive science and AI. An example is the AI-inspired model of human gameplay from Wei Ji Ma's group (van Opheusden et al., Nature 2023). The project includes funding for research visits to international collaborators such as Wei Ji Ma at New York University and Michael Osborne at the University of Oxford. We also have many local collaborators, such as Antti Honkela for applications of sample-efficient inference to privacy, and our team is highly involved in the thriving & highly collaborative community of probabilistic ML/AI researchers — PhDs, postdocs, PIs — in the Finnish Center for Artificial Intelligence FCAI, on top of many ongoing national and international collaborations in cognitive science and computational neuroscience.
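The inference target that VBMC approximates can be illustrated with a deliberately cheap toy problem: a posterior over one model parameter computed by brute force on a dense grid. The Gaussian model, prior, and data below are illustrative stand-ins, not part of the VBMC framework itself, which becomes necessary precisely when each likelihood evaluation is expensive.

```python
import numpy as np

# Toy version of the inference target: posterior over one model parameter.
# Here the likelihood is cheap on purpose, so a dense grid suffices; VBMC
# exists to approximate this object with far fewer likelihood evaluations.

rng = np.random.default_rng(0)
true_theta = 1.5
data = rng.normal(true_theta, 1.0, size=50)     # observations, noise sd = 1

theta_grid = np.linspace(-5.0, 5.0, 2001)
log_prior = -0.5 * theta_grid**2                # standard normal prior, up to a constant
# log-likelihood of the full dataset at each candidate theta
log_lik = np.array([-0.5 * np.sum((data - t) ** 2) for t in theta_grid])

log_post = log_prior + log_lik
log_post -= log_post.max()                      # stabilise before exponentiating
post = np.exp(log_post)
post /= post.sum()                              # normalise (uniform grid, so sums suffice)

post_mean = (theta_grid * post).sum()           # posterior mean of theta
```

For this conjugate toy case the grid result matches the analytic posterior mean, which is the sample mean shrunk slightly toward the prior mean of zero.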

Position

Samuel Kaski

University of Manchester and Aalto University
Manchester, UK
Dec 5, 2025

The University of Manchester is making a strategic investment in the fundamentals of AI, to complement its existing strengths in AI applications across several prominent research fields in the University, which give high-profile application and collaboration opportunities for the outcomes of fundamental AI research. The university is one of the most active partners of the national Alan Turing Institute, and hosts 33 Turing Fellows as well as Fellows of the European Laboratory for Learning and Intelligent Systems (ELLIS), in the new ELLIS Unit Manchester. The university’s ambition is to establish a leading AI centre at the intersection of these opportunities. The university has recently launched a Centre for AI Fundamentals and has already recruited four new academics to it. These two lectureships continue this series of positions in establishing the new Centre.

Position

N/A

University of Neuchatel
Neuchatel, Switzerland
Dec 5, 2025

This project is about developing reinforcement-learning-based AI systems that directly interact with some segment of society. The applications include matching and other allocation problems. The research will be performed at the interface between reinforcement learning, social choice theory, Bayesian inference, mechanism design, differential privacy and algorithmic fairness. The research will have both a theoretical and a practical component, which will include some experiments with humans. However, a good theoretical background in probability, machine learning or game theory is necessary for all students. The positions are available from January 2024. The PhD lasts for 4 years and includes a small teaching component.

Position

Silvia Lopez-Guzman

National Institute of Mental Health
Bethesda, Maryland, USA
Dec 5, 2025

The Unit on Computational Decision Neuroscience (CDN) at the National Institute of Mental Health is seeking a full-time Data Scientist/Data Analyst. The lab is focused on understanding the neural and computational bases of adaptive and maladaptive decision-making and their relationship to mental health. Current studies investigate how internal states lead to biases in decision-making and how this is exacerbated in mental health disorders. Our approach involves a combination of computational model-based tasks, questionnaires, biosensor data, fMRI, and intracranial recordings. The main models of interest come from neuroeconomics, reinforcement learning, Bayesian inference, signal detection, and information theory. The main tasks for this position include computational modeling of behavioral data from decision-making and other cognitive tasks, statistical analysis of task-based, clinical, physiological and neuroimaging data, as well as data visualization for scientific presentations, public communication, and academic manuscripts. The candidate is expected to demonstrate experience with best practices for the development of well-documented, reproducible programming pipelines for data analysis that facilitate sharing and collaboration and live up to our open-science philosophy, as well as to our data management and sharing commitments at NIH.

Position

Kenji Doya

Neural Computation Unit, Okinawa Institute of Science and Technology Graduate University
1919-1 Tancha, Onna, Okinawa 904-0495, Japan
Dec 5, 2025

Multiple open research positions at the Neural Computation Unit at OIST, including:
1. Theory and experimental investigation of Bayesian inference and reinforcement learning by the cortex, basal ganglia, and neuromodulator systems.
2. Data-driven construction of neural network models and large-scale simulation.
3. Application of wearable devices for monitoring mind and body state to support healthy life.
4. Flexible and robust reinforcement learning and meta-learning algorithms.
5. Development of smartphone robots for multi-agent learning and embodied evolution.

Position · Machine Learning

Samuel Kaski

Aalto University and University of Manchester
Helsinki, Finland and Manchester, UK
Dec 5, 2025

Thinking about the next position for your research career? I am hiring postdocs in my machine learning research group both in Helsinki, Finland and Manchester, UK. We develop new machine learning methods and study machine learning principles. Keywords include: probabilistic modelling, Bayesian inference, simulation-based inference, multi-agent RL and collaborative AI, sequential decision making and experimental design, active learning, human-in-the-loop learning and user modelling, privacy-preserving learning, Bayesian deep learning, generative models. We also solve problems of other fields with the methods – and use those problems as test benches when developing the methods. We have excellent collaborators in drug design, synthetic biology and biodesign, personalized medicine, cognitive science and human-computer interaction.

Seminar · Neuroscience

Decision and Behavior

Sam Gershman, Jonathan Pillow, Kenji Doya
Harvard University; Princeton University; Okinawa Institute of Science and Technology
Nov 28, 2024

This webinar addressed computational perspectives on how animals and humans make decisions, spanning normative, descriptive, and mechanistic models. Sam Gershman (Harvard) presented a capacity-limited reinforcement learning framework in which policies are compressed under an information bottleneck constraint. This approach predicts pervasive perseveration, stimulus‐independent “default” actions, and trade-offs between complexity and reward. Such policy compression reconciles observed action stochasticity and response time patterns with an optimal balance between learning capacity and performance. Jonathan Pillow (Princeton) discussed flexible descriptive models for tracking time-varying policies in animals. He introduced dynamic Generalized Linear Models (Sidetrack) and hidden Markov models (GLM-HMMs) that capture day-to-day and trial-to-trial fluctuations in choice behavior, including abrupt switches between “engaged” and “disengaged” states. These models provide new insights into how animals’ strategies evolve under learning. Finally, Kenji Doya (OIST) highlighted the importance of unifying reinforcement learning with Bayesian inference, exploring how cortical-basal ganglia networks might implement model-based and model-free strategies. He also described Japan’s Brain/MINDS 2.0 and Digital Brain initiatives, aiming to integrate multimodal data and computational principles into cohesive “digital brains.”

Seminar · Neuroscience

Perception in Autism: Testing Recent Bayesian Inference Accounts

Amit Yashar
Haifa University
Apr 15, 2024

Seminar · Neuroscience · Recording

Virtual Brain Twins for Brain Medicine and Epilepsy

Viktor Jirsa
Aix Marseille Université - Inserm
Nov 7, 2023

Over the past decade we have demonstrated that the fusion of subject-specific structural information of the human brain with mathematical dynamic models allows building biologically realistic brain network models, which have predictive value beyond the explanatory power of each approach independently. The network nodes hold neural population models, which are derived using mean-field techniques from statistical physics expressing ensemble activity via collective variables. Our hybrid approach fuses data-driven with forward-modeling-based techniques and has been successfully applied to explain healthy brain function and clinical translation, including aging, stroke and epilepsy. Here we illustrate the workflow along the example of epilepsy: we reconstruct personalized connectivity matrices of human epileptic patients using diffusion tensor imaging (DTI). Subsets of brain regions generating seizures in patients with refractory partial epilepsy are referred to as the epileptogenic zone (EZ). During a seizure, paroxysmal activity is not restricted to the EZ, but may recruit other healthy brain regions and propagate activity through large brain networks. The identification of the EZ is crucial for the success of neurosurgery and presents one of the historically difficult questions in clinical neuroscience. The application of the latest techniques in Bayesian inference and model inversion, in particular Hamiltonian Monte Carlo, allows the estimation of the EZ, including estimates of confidence and diagnostics of performance of the inference. The example of epilepsy nicely underscores the predictive value of personalized large-scale brain network models. The workflow of end-to-end modeling is an integral part of the European neuroinformatics platform EBRAINS and enables neuroscientists worldwide to build and estimate personalized virtual brains.
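The Hamiltonian Monte Carlo machinery mentioned above can be sketched at a toy scale: a minimal HMC sampler targeting a two-dimensional Gaussian. This is a generic illustration of the sampler's mechanics under illustrative settings (step size, path length, target density), not the virtual-brain model inversion itself.

```python
import numpy as np

# Minimal Hamiltonian Monte Carlo: leapfrog integration of Hamiltonian
# dynamics followed by a Metropolis accept/reject step. Target: a 2-D
# Gaussian with variances 1 and 4 (illustrative).

rng = np.random.default_rng(1)

def log_p(q):                     # log-density of N(0, diag(1, 4)), up to a constant
    return -0.5 * (q[0] ** 2 + q[1] ** 2 / 4.0)

def grad_log_p(q):
    return np.array([-q[0], -q[1] / 4.0])

def hmc_step(q, eps=0.15, n_leap=20):
    p = rng.normal(size=q.shape)             # resample auxiliary momentum
    q_new, p_new = q.copy(), p.copy()
    p_new += 0.5 * eps * grad_log_p(q_new)   # leapfrog: initial half step
    for _ in range(n_leap - 1):
        q_new += eps * p_new
        p_new += eps * grad_log_p(q_new)
    q_new += eps * p_new
    p_new += 0.5 * eps * grad_log_p(q_new)   # final half step
    # Metropolis correction on the Hamiltonian (negative log joint)
    h_old = -log_p(q) + 0.5 * p @ p
    h_new = -log_p(q_new) + 0.5 * p_new @ p_new
    return q_new if rng.random() < np.exp(h_old - h_new) else q

q = np.zeros(2)
samples = []
for i in range(4000):
    q = hmc_step(q)
    if i >= 1000:                 # discard burn-in
        samples.append(q)
samples = np.array(samples)
```

The retained samples approximate the target: their per-dimension means are near zero and their variances near 1 and 4.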

Seminar · Neuroscience · Recording

The Secret Bayesian Life of Ring Attractor Networks

Anna Kutschireiter
Spiden AG, Pfäffikon, Switzerland
Sep 6, 2022

Efficient navigation requires animals to track their position, velocity and heading direction (HD). Some animals’ behavior suggests that they also track uncertainties about these navigational variables, and make strategic use of these uncertainties, in line with a Bayesian computation. Ring-attractor networks have been proposed to estimate and track these navigational variables, for instance in the HD system of the fruit fly Drosophila. However, such networks are not designed to incorporate a notion of uncertainty, and therefore seem unsuited to implement dynamic Bayesian inference. Here, we close this gap by showing that specifically tuned ring-attractor networks can track both a HD estimate and its associated uncertainty, thereby approximating a circular Kalman filter. We identified the network motifs required to integrate angular velocity observations, e.g., through self-initiated turns, and absolute HD observations, e.g., visual landmark inputs, according to their respective reliabilities, and show that these network motifs are present in the connectome of the Drosophila HD system. Specifically, our network encodes uncertainty in the amplitude of a localized bump of neural activity, thereby generalizing standard ring attractor models. In contrast to such standard attractors, however, proper Bayesian inference requires the network dynamics to operate in a regime away from the attractor state. More generally, we show that near-Bayesian integration is inherent in generic ring attractor networks, and that their amplitude dynamics can account for close-to-optimal reliability weighting of external evidence for a wide range of network parameters. This only holds, however, if their connection strengths allow the network to sufficiently deviate from the attractor state. Overall, our work offers a novel interpretation of ring attractor networks as implementing dynamic Bayesian integrators. We further provide a principled theoretical foundation for the suggestion that the Drosophila HD system may implement Bayesian HD tracking via ring attractor dynamics.
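The reliability weighting at the heart of this account has a compact mathematical core: the product of two von Mises beliefs about heading is again von Mises, and the update reduces to adding two vectors whose directions are the means and whose lengths are the concentrations. That is the sense in which a bump's amplitude can encode certainty. The sketch below is this generic computation; the `fuse` helper and all numbers are illustrative, not taken from the paper.

```python
import numpy as np

# Reliability-weighted fusion of two circular heading estimates.
# Each belief is a von Mises density with mean direction mu and
# concentration kappa; their product corresponds to 2-D vector addition.

def fuse(mu1, kappa1, mu2, kappa2):
    """Combine two von Mises beliefs about heading (radians)."""
    x = kappa1 * np.cos(mu1) + kappa2 * np.cos(mu2)
    y = kappa1 * np.sin(mu1) + kappa2 * np.sin(mu2)
    return np.arctan2(y, x), np.hypot(x, y)   # fused mean, fused concentration

# A confident internal estimate at 0 rad meets a weaker landmark cue at pi/2:
mu, kappa = fuse(0.0, kappa1=8.0, mu2=np.pi / 2, kappa2=2.0)
```

The fused mean is pulled only slightly toward the less reliable cue, and the fused concentration exceeds either input: combining evidence increases certainty.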

Seminar · Neuroscience · Recording

Canonical neural networks perform active inference

Takuya Isomura
RIKEN CBS
Jun 9, 2022

The free-energy principle and active inference have received significant attention in the fields of neuroscience and machine learning. However, it remains to be established whether active inference is an apt explanation for any given neural network that actively exchanges with its environment. To address this issue, we show that a class of canonical neural networks of rate coding models implicitly performs variational Bayesian inference under a well-known form of partially observed Markov decision process model (Isomura, Shimazaki, Friston, Commun Biol, 2022). Based on the proposed theory, we demonstrate that canonical neural networks, featuring delayed modulation of Hebbian plasticity, can perform planning and adaptive behavioural control in the Bayes-optimal manner, through postdiction of their previous decisions. This scheme enables us to estimate implicit priors under which the agent’s neural network operates and identify a specific form of the generative model. The proposed equivalence is crucial for rendering brain activity explainable to better understand basic neuropsychology and psychiatric disorders. Moreover, this notion can dramatically reduce the complexity of designing self-learning neuromorphic hardware to perform various types of tasks.

Seminar · Neuroscience · Recording

Design principles of adaptable neural codes

Ann Hermundstad
Janelia
Nov 18, 2021

Behavior relies on the ability of sensory systems to infer changing properties of the environment from incoming sensory stimuli. However, the demands that detecting and adjusting to changes in the environment place on a sensory system often differ from the demands associated with performing a specific behavioral task. This necessitates neural coding strategies that can dynamically balance these conflicting needs. I will discuss our ongoing theoretical work to understand how this balance can best be achieved. We connect ideas from efficient coding and Bayesian inference to ask how sensory systems should dynamically allocate limited resources when the goal is to optimally infer changing latent states of the environment, rather than reconstruct incoming stimuli. We use these ideas to explore dynamic tradeoffs between the efficiency and speed of sensory adaptation schemes, and the downstream computations that these schemes might support. Finally, we derive families of codes that balance these competing objectives, and we demonstrate their close match to experimentally-observed neural dynamics during sensory adaptation. These results provide a unifying perspective on adaptive neural dynamics across a range of sensory systems, environments, and sensory tasks.

Seminar · Neuroscience · Recording

Design principles of adaptable neural codes

Ann Hermundstad
Janelia Research Campus
May 4, 2021

Behavior relies on the ability of sensory systems to infer changing properties of the environment from incoming sensory stimuli. However, the demands that detecting and adjusting to changes in the environment place on a sensory system often differ from the demands associated with performing a specific behavioral task. This necessitates neural coding strategies that can dynamically balance these conflicting needs. I will discuss our ongoing theoretical work to understand how this balance can best be achieved. We connect ideas from efficient coding and Bayesian inference to ask how sensory systems should dynamically allocate limited resources when the goal is to optimally infer changing latent states of the environment, rather than reconstruct incoming stimuli. We use these ideas to explore dynamic tradeoffs between the efficiency and speed of sensory adaptation schemes, and the downstream computations that these schemes might support. Finally, we derive families of codes that balance these competing objectives, and we demonstrate their close match to experimentally-observed neural dynamics during sensory adaptation. These results provide a unifying perspective on adaptive neural dynamics across a range of sensory systems, environments, and sensory tasks.

Seminar · Neuroscience · Recording

Neural dynamics underlying temporal inference

Devika Narain
Erasmus Medical Centre
Apr 26, 2021

Animals possess the ability to effortlessly and precisely time their actions even though information received from the world is often ambiguous and is inadvertently transformed as it passes through the nervous system. With such uncertainty pervading through our nervous systems, we could expect that much of human and animal behavior relies on inference that incorporates an important additional source of information, prior knowledge of the environment. These concepts have long been studied under the framework of Bayesian inference with substantial corroboration over the last decade that human time perception is consistent with such models. We, however, know little about the neural mechanisms that enable Bayesian signatures to emerge in temporal perception. I will present our work on three facets of this problem, how Bayesian estimates are encoded in neural populations, how these estimates are used to generate time intervals, and how prior knowledge for these tasks is acquired and optimized by neural circuits. We trained monkeys to perform an interval reproduction task and found their behavior to be consistent with Bayesian inference. Using insights from electrophysiology and in silico models, we propose a mechanism by which cortical populations encode Bayesian estimates and utilize them to generate time intervals. Thereafter, I will present a circuit model for how temporal priors can be acquired by cerebellar machinery leading to estimates consistent with Bayesian theory. Based on electrophysiology and anatomy experiments in rodents, I will provide some support for this model. Overall, these findings attempt to bridge insights from normative frameworks of Bayesian inference with potential neural implementations for the acquisition, estimation, and production of timing behaviors.
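The Bayesian account of interval timing described here is commonly formalized as a Bayes least-squares (BLS) estimator: a noisy measurement of a sample interval is combined with a prior over experienced intervals, and the posterior mean biases responses toward the middle of the prior (the classic "regression to the mean" signature). The sketch below is that standard textbook construction, not the speaker's exact model; all numbers are illustrative.

```python
import numpy as np

# Bayes least-squares estimation of a time interval from a noisy
# measurement m, with a uniform prior over experienced intervals.
# The estimate is the posterior mean, computed on a grid.

t_grid = np.linspace(0.4, 1.2, 801)        # seconds; support of the uniform prior
prior = np.ones_like(t_grid)
sigma = 0.1                                # measurement noise sd (simplified to a constant)

def bls_estimate(m):
    lik = np.exp(-0.5 * ((m - t_grid) / sigma) ** 2)
    post = prior * lik
    post /= post.sum()                     # normalise (uniform grid)
    return (t_grid * post).sum()           # posterior mean = BLS estimate

est_short = bls_estimate(0.45)             # near the short edge of the prior
est_long = bls_estimate(1.15)              # near the long edge of the prior
```

Estimates near the edges of the prior range are pulled toward its center, reproducing the central-tendency bias seen in interval reproduction.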

Seminar · Neuroscience · Recording

Learning in pain: probabilistic inference and (mal)adaptive control

Flavia Mancini
Department of Engineering
Apr 19, 2021

Pain is a major clinical problem affecting 1 in 5 people in the world. There are unresolved questions that urgently require answers to treat pain effectively, a crucial one being how the feeling of pain arises from brain activity. Computational models of pain consider how the brain processes noxious information and allow mapping neural circuits and networks to cognition and behaviour. To date, they have generally assumed two largely independent processes: perceptual and/or predictive inference, typically modelled as an approximate Bayesian process, and action control, typically modelled as a reinforcement learning process. However, inference and control are intertwined in complex ways, challenging the clarity of this distinction. I will discuss how they may comprise a parallel hierarchical architecture that combines pain inference, information-seeking, and adaptive value-based control. Finally, I will discuss whether and how these learning processes might contribute to chronic pain.

Seminar · Neuroscience

Top-down Modulation in Human Visual Cortex

Mohamed Abdelhack
Washington University in St. Louis
Dec 16, 2020

Human vision displays a remarkable ability to recognize objects in the surrounding environment even in the absence of a complete visual representation of these objects. This process happens almost intuitively, and it was not until scientists had to tackle this problem in computer vision that they noticed its complexity. While current advances in artificial vision systems have made great strides, even exceeding human level in normal vision tasks, they have yet to achieve a similar level of robustness. One source of this robustness is the brain's extensive connectivity, which is not limited to a feedforward hierarchical pathway similar to current state-of-the-art deep convolutional neural networks, but also comprises recurrent and top-down connections. These connections allow the human brain to enhance the neural representations of degraded images in concordance with meaningful representations stored in memory. The mechanisms by which these different pathways interact are still not understood. In this seminar, studies concerning the effect of recurrent and top-down modulation on the neural representations resulting from viewing blurred images will be presented. These studies attempted to uncover the role of recurrent and top-down connections in human vision. The results presented challenge the notion of predictive coding as a mechanism for top-down modulation of visual information during natural vision. They show that neural representation enhancement (sharpening) appears to be a more dominant process at different levels of the visual hierarchy. They also show that inference in visual recognition is achieved through a Bayesian process combining incoming visual information with priors from deeper processing regions in the brain.

Seminar · Neuroscience · Recording

Inferring Brain Rhythm Circuitry and Burstiness

Andre Longtin
University of Ottawa
Apr 14, 2020

Bursts in gamma and other frequency ranges are thought to contribute to the efficiency of working memory or communication tasks. Abnormalities in bursts have also been associated with motor and psychiatric disorders. The determinants of burst generation are not known, specifically how single-cell and connectivity parameters influence burst statistics and the corresponding brain states. We first present a generic mathematical model for burst generation in an excitatory-inhibitory (EI) network with self-couplings. The resulting equations for the stochastic phase and envelope of the rhythm’s fluctuations are shown to depend on only two meta-parameters that combine all the network parameters. They allow us to identify different regimes of amplitude excursions, and to highlight the supportive role that network finite-size effects and noisy inputs to the EI network can have. We discuss how burst attributes, such as their durations and peak frequency content, depend on the network parameters. In practice, the problem above presupposes the prior challenge of fitting such E-I spiking networks to single-neuron or population data. Thus, the second part of the talk will discuss a novel method to fit mesoscale dynamics using single-neuron data along with a low-dimensional, and hence statistically tractable, single-neuron model. The mesoscopic representation is obtained by approximating a population of neurons as multiple homogeneous ‘pools’ of neurons, and modelling the dynamics of the aggregate population activity within each pool. We derive the likelihood of both single-neuron and connectivity parameters given this activity, which can then be used either to optimize parameters by gradient ascent on the log-likelihood, or to perform Bayesian inference using Markov Chain Monte Carlo (MCMC) sampling. We illustrate this approach using an E-I network of generalized integrate-and-fire neurons for which mesoscopic dynamics have been previously derived. We show that both single-neuron and connectivity parameters can be adequately recovered from simulated data.
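The fitting options named at the end (gradient ascent on the log-likelihood, or MCMC) can be illustrated at a toy scale: below, gradient ascent recovers a single Poisson firing rate from simulated spike counts. This shows only the mechanics of likelihood-based fitting, not the mesoscopic model itself; parameter names and values are illustrative.

```python
import numpy as np

# Gradient ascent on the log-likelihood of spike-count data. The "model"
# is one Poisson rate, parameterized by its log w so the rate stays positive.

rng = np.random.default_rng(2)
counts = rng.poisson(5.0, size=200)        # simulated spike counts, true rate 5

w = 0.0                                    # initial log-rate
for _ in range(500):
    rate = np.exp(w)
    # d/dw of sum_i [counts_i * w - exp(w)] = sum(counts) - n * rate
    grad = counts.sum() - len(counts) * rate
    w += 1e-4 * grad                       # small fixed step size

fitted_rate = np.exp(w)                    # converges to the ML estimate
```

For a Poisson model the maximum-likelihood rate is the sample mean, so the fitted rate should match `counts.mean()` closely after convergence.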

ePoster

Bayesian inference and arousal modulation in spatial perception to mitigate stochasticity and volatility

David Meijer, Fabian Dorok, Roberto Barumerli, Burcu Bayram, Michelle Spierings, Ulrich Pomper, Robert Baumgartner

Bernstein Conference 2024

ePoster

Bayesian Inference in High-Dimensional Time-Series with the Orthogonal Stochastic Linear Mixing Model

COSYNE 2022

ePoster

Bayesian inference of cortico-cortical effective connectivity in networks of neural mass models

Matthieu Gilson, Cyprien Dautrevaux, Olivier David, Meysam Hashemi

FENS Forum 2024

ePoster

Bayesian inference during implicit perceptual belief updating in dynamic auditory perception

David Meijer, Fabian Dorok, Roberto Barumerli, Burcu Bayram, Michelle Spierings, Ulrich Pomper, Robert Baumgartner

FENS Forum 2024

ePoster

Bayesian inference on virtual brain models of disorders

Meysam Hashemi, Marmaduke Woodman, Viktor Jirsa

FENS Forum 2024

ePoster

EEG correlates of Bayesian inference in auditory spatial localization in changing environments

Burcu Bayram, David Meijer, Roberto Barumerli, Michelle Spierings, Robert Baumgartner, Ulrich Pomper

FENS Forum 2024

ePoster

EEG patterns reflecting Bayesian inference during auditory temporal discrimination

Ulrich Pomper, Burcu Bayram, Valentin Pellegrini, David Meijer, Michelle Spierings, Robert Baumgartner

FENS Forum 2024