Psychophysical Experiments
Bei Xiao
The RA will pursue research projects of their own as well as provide support for research carried out in the Xiao lab. Possible duties include building VR/AR experimental interfaces with Unity3D, Python coding for behavioral data analysis, collecting data for psychophysical experiments, and training machine learning models.
Error Consistency between Humans and Machines as a function of presentation duration
Within the last decade, Deep Artificial Neural Networks (DNNs) have emerged as powerful computer vision systems that match or exceed human performance on many benchmark tasks such as image classification. But whether current DNNs are suitable computational models of the human visual system remains an open question: while DNNs have proven capable of predicting neural activations in primate visual cortex, psychophysical experiments have revealed behavioral differences between DNNs and human subjects, as quantified by error consistency. Error consistency is typically measured by briefly presenting natural or corrupted images to human subjects and asking them to perform an n-way classification task under time pressure. But for how long should stimuli ideally be presented to guarantee a fair comparison with DNNs? Here we investigate the influence of presentation time on error consistency, to test the hypothesis that higher-level processing drives the behavioral differences. We systematically vary the presentation time of backward-masked stimuli from 8.3 ms to 266 ms and measure human performance and reaction times on natural, low-pass-filtered and noisy images. Our experiment constitutes a fine-grained analysis of human image classification under both image corruptions and time pressure, showing that even drastically time-constrained humans who are exposed to the stimuli for only two frames, i.e. 16.6 ms, can still solve our 8-way classification task with success rates well above chance. We also find that human-to-human error consistency is already stable at 16.6 ms.
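For readers unfamiliar with the metric, error consistency is commonly computed as a Cohen's-kappa-style statistic on trial-by-trial correctness: the observed agreement between two observers' correct/incorrect responses, corrected for the agreement expected from their accuracies alone. The Python sketch below illustrates that computation on made-up data; the function name and toy arrays are ours, not material from the abstract.

```python
import numpy as np

def error_consistency(correct_a, correct_b):
    """Cohen's-kappa-style error consistency between two observers.

    correct_a, correct_b: boolean arrays with one entry per shared trial,
    True where that observer classified the trial correctly.
    """
    correct_a = np.asarray(correct_a, dtype=bool)
    correct_b = np.asarray(correct_b, dtype=bool)

    # Observed consistency: fraction of trials where both are right or both are wrong.
    c_obs = np.mean(correct_a == correct_b)

    # Expected consistency if errors were independent, given each observer's accuracy.
    p_a, p_b = correct_a.mean(), correct_b.mean()
    c_exp = p_a * p_b + (1 - p_a) * (1 - p_b)

    # Kappa: agreement beyond what the accuracies alone would predict.
    return (c_obs - c_exp) / (1 - c_exp)

# Toy example: trial-by-trial correctness for two observers on eight shared trials.
human = [True, True, False, True, False, True, True, True]
dnn   = [True, False, False, True, True, True, True, False]
print(error_consistency(human, dnn))
```

A value near 0 means the two observers' errors overlap no more than their accuracies predict by chance, while values near 1 indicate they fail on largely the same trials.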
From natural scene statistics to multisensory integration: experiments, models and applications
To efficiently process sensory information, the brain relies on statistical regularities in the input. While this strategy generally improves the reliability of sensory estimates, it also induces perceptual illusions that help reveal the underlying computational principles. Focusing on auditory and visual perception, in my talk I will describe how the brain exploits statistical regularities within and across the senses for the perception of space and time and for multisensory integration. In particular, I will show how results from a series of psychophysical experiments can be interpreted in the light of Bayesian Decision Theory, and I will demonstrate how such canonical computations can be implemented in simple and biologically plausible neural circuits. Finally, I will show how such principles of sensory information processing can be leveraged in virtual and augmented reality to overcome display limitations and expand human perception.
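One canonical computation in this Bayesian framework is reliability-weighted cue fusion: two noisy single-cue estimates are averaged with weights proportional to their inverse variances, which yields a combined estimate that is never less reliable than either cue alone. The sketch below is an illustrative toy implementation under that standard Gaussian assumption, not code from the talk; the cue names and numbers are hypothetical.

```python
def fuse_cues(est_a, var_a, est_v, var_v):
    """Maximum-likelihood (reliability-weighted) fusion of two Gaussian cues.

    est_*: single-cue estimates (e.g. auditory and visual location, in degrees)
    var_*: their variances; the weights are proportional to inverse variance.
    """
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)
    w_v = 1 - w_a
    fused = w_a * est_a + w_v * est_v
    fused_var = 1 / (1 / var_a + 1 / var_v)  # never larger than either single-cue variance
    return fused, fused_var

# Toy example: a reliable visual cue dominates a noisy auditory one
# (a ventriloquist-like bias toward the visual location).
print(fuse_cues(est_a=10.0, var_a=16.0, est_v=0.0, var_v=1.0))
```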
The self-consistent nature of visual perception
Vision provides us with a holistic interpretation of the world that is, with very few exceptions, coherent and consistent across multiple levels of abstraction, from scenes to objects to features. In this talk I will present results from past and ongoing work in my laboratory that investigates the role top-down signals play in establishing such a coherent perceptual experience. Based on the results of several psychophysical experiments I will introduce a theory of “self-consistent inference” and show how it can account for human perceptual behavior. The talk will close with a discussion of how the theory can help us understand higher-level cognitive processes.
The attentional requirement of unconscious processing
The tight relationship between attention and conscious perception has been extensively researched over the past decades. However, whether attentional modulation extends to unconscious processes remains largely unknown, particularly when it comes to abstract, high-level processing. I will talk about a recent study in which we used the Stroop paradigm to show that task load gates unconscious semantic processing. In a series of psychophysical experiments, unconscious word semantics influenced conscious task performance only under the low task-load condition, not under the high task-load condition. Intriguingly, with enough practice in the high task-load condition, the unconscious effect reemerged. These findings suggest a competition for attentional resources between unconscious and conscious processes, challenging the automaticity account of unconscious processing.
Computational psychophysics at the intersection of theory, data and models
Behavioural measurements are often overlooked by computational neuroscientists, who prefer to focus on electrophysiological recordings or neuroimaging data. This attitude is largely due to a perceived lack of depth and richness in behavioural datasets. I will show how contemporary psychophysics can deliver extremely rich and highly constraining datasets that naturally interface with computational modelling. More specifically, I will demonstrate how psychophysics can be used to guide, constrain and refine computational models, and how models can be exploited to design, motivate and interpret psychophysical experiments. Examples will span a wide range of topics (from feature detection to natural scene understanding) and methodologies (from cascade models to deep learning architectures).
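As one concrete illustration of how a psychophysical dataset can constrain a model, a psychometric function can be fit to trial counts by maximum likelihood, and its fitted threshold and slope then serve as quantitative targets for a computational model. The sketch below uses a cumulative-Gaussian observer and made-up detection data; it is an assumption-laden example of the general idea, not material from the talk.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_log_likelihood(params, intensity, n_correct, n_trials):
    """Bernoulli negative log-likelihood of a cumulative-Gaussian psychometric function."""
    mu, sigma = params
    p = norm.cdf(intensity, loc=mu, scale=np.abs(sigma) + 1e-9)
    p = np.clip(p, 1e-6, 1 - 1e-6)  # avoid log(0)
    return -np.sum(n_correct * np.log(p) + (n_trials - n_correct) * np.log(1 - p))

# Hypothetical detection data: stimulus intensity, correct responses, trials per level.
intensity = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
n_correct = np.array([  2,   5,  12,  17,  19,  20])
n_trials  = np.full_like(n_correct, 20)

fit = minimize(neg_log_likelihood, x0=[1.5, 0.5],
               args=(intensity, n_correct, n_trials), method="Nelder-Mead")
print("threshold (mu), slope (sigma):", fit.x)
```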