Authors & Affiliations
David Meijer, Roberto Barumerli, Robert Baumgartner
Abstract
A key challenge in perception is to differentiate random noisy observations from structurally wrong beliefs when confronted with prediction errors. One should integrate out noise to improve precision, but simultaneously recognize when predictions are irrelevant due to environmental change. Bayesian inference prescribes a statistically optimal solution to this problem, but it is memory intensive and computationally complex, and thus unlikely to be used continuously by humans.

Here, we systematically investigated which hallmarks of Bayesian inference are present in previously published human response data from an audiovisual spatial prediction task with noisy sequences and occasional changepoints (Krishnamurthy, Nassar, Sarode, & Gold, 2017, Nat Hum Behav). Importantly, participants were unexpectedly prompted to make their prediction at the end of a longer sequence, thus necessitating online updating of beliefs. Prediction responses were biased towards the mean of preceding stimuli, yet the last stimulus was consistently overweighted. In line with Bayesian inference, biases increased with prior reliability and decreased gradually with larger prediction errors. Models with deterministic selection of changepoints failed to qualitatively describe the data; iteratively incorporating causal uncertainty into the priors was crucial. Model comparison favoured a simplified Bayesian model with a single Gaussian prior node over more complex models with larger memory capacity.

We conclude that continuous perceptual belief updating in stochastic and volatile environments is best described by near-Bayesian inference with limited memory resources. The smaller-than-ideal biases towards the priors can be attributed to additional out-of-model uncertainty that led participants to reduce their a priori belief about prior relevance.
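To illustrate the kind of model favoured here, the following is a minimal sketch of a single-node, reduced Bayesian changepoint update in the style of such models (not the authors' exact implementation): the belief mean and an effective sample count are updated on each observation, with the learning rate modulated by the inferred changepoint probability. The noise level, hazard rate, and stimulus range are hypothetical placeholder values.

```python
import math

def reduced_bayes_update(mu, n_eff, x, sigma=10.0, hazard=0.1, space=(0.0, 180.0)):
    """One update of a single-node (reduced) Bayesian changepoint model.

    mu:    current belief mean (prior location)
    n_eff: effective number of samples supporting mu (prior reliability)
    x:     new noisy observation
    All parameter values are illustrative assumptions, not fitted values.
    """
    width = space[1] - space[0]
    # Predictive sd of a new sample under "no changepoint"
    pred_sd = sigma * math.sqrt(1.0 + 1.0 / n_eff)
    delta = x - mu  # prediction error
    # Likelihood of the sample under "changepoint" (uniform over the range)
    # vs. "no changepoint" (Gaussian around the current belief)
    like_cp = 1.0 / width
    like_same = math.exp(-0.5 * (delta / pred_sd) ** 2) / (pred_sd * math.sqrt(2 * math.pi))
    cpp = hazard * like_cp / (hazard * like_cp + (1 - hazard) * like_same)
    # Effective learning rate: full reset after a changepoint,
    # gradual averaging (1 / (n_eff + 1)) otherwise
    alpha = cpp * 1.0 + (1 - cpp) / (n_eff + 1)
    mu_new = mu + alpha * delta
    # Carry causal uncertainty forward into the prior's reliability
    n_eff_new = cpp * 1.0 + (1 - cpp) * (n_eff + 1)
    return mu_new, n_eff_new, cpp
```

The sketch reproduces the qualitative hallmarks described above: small prediction errors yield a low changepoint probability and a partial bias towards the prior, while large errors drive the changepoint probability towards one, so the belief resets near the new observation rather than averaging.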