Perceptual representations
Multisensory influences on vision: Sounds enhance and alter visual-perceptual processing
Visual perception has traditionally been studied in isolation from the other senses. While this approach has been exceptionally successful, in the real world visual objects are often accompanied by sounds, smells, tactile sensations, or tastes. How is visual processing influenced by these other sensory inputs? In this talk, I will review studies from our lab showing that a sound can influence the perception of a visual object in multiple ways. In the first part, I will focus on spatial interactions between sound and sight, demonstrating that co-localized sounds enhance visual perception. I will then show that these cross-modal interactions also occur at a higher contextual and semantic level, where naturalistic sounds facilitate the processing of real-world objects that match those sounds. Throughout the talk, I will explore the extent to which sounds not only improve visual processing but also alter the perceptual representations of the objects we see. Most broadly, I will argue that considering multisensory influences on visual perception is essential for a more complete understanding of our visual experience.
Understanding Perceptual Priors with Massive Online Experiments
One of the most important questions in psychology and neuroscience is how the outside world maps onto internal representations. Classical psychophysics approaches to this problem have a number of limitations: they mostly study low-dimensional perceptual spaces, and they are constrained in the number and diversity of participants and experiments. Because ecologically valid perception is rich, high-dimensional, contextual, and culturally dependent, these impediments severely bias our understanding of perceptual representations. Recent technological advances, notably the emergence of so-called “Virtual Labs”, can contribute significantly toward overcoming these barriers. Here I present specific strategies that my group has developed to probe representations along several dimensions. 1) Massive online experiments can significantly increase the number of participants and experiments that can be carried out in a single study, while also greatly diversifying the participant pool. We have developed a platform, PsyNet, that enables “experiments as code,” whereby server orchestration, participant recruitment and compensation, and data management are fully automated, and every experiment can be replicated with a single command. I will demonstrate how PsyNet allows us to recruit thousands of participants per study, with a large number of control conditions, significantly advancing our understanding of auditory perception. 2) Virtual-lab methods also enable experiments that are nearly impossible to run in a traditional lab setting. I will present our development of adaptive sampling, a set of behavioural methods that combine machine-learning sampling techniques (Markov chain Monte Carlo) with human responses, allowing us to create high-dimensional maps of perceptual representations with unprecedented resolution. 3) Finally, I will demonstrate how these methods can be applied to the study of perceptual priors in both audition and vision, with a focus on our cross-cultural research, which examines how such priors are shaped by experience and culture in diverse samples of participants from around the world.
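The adaptive-sampling idea in point 2 can be illustrated with a minimal sketch of a “Markov chain Monte Carlo with people” loop, in which a participant's two-alternative choice plays the role of the acceptance step so that the chain's visited states approximate samples from the participant's perceptual prior. This is an assumption-laden illustration, not the actual PsyNet implementation: all function names and parameters below are hypothetical, and a simulated observer stands in for a real participant.

```python
# Illustrative sketch of MCMC-with-people adaptive sampling.
# Hypothetical names; a simulated observer replaces a real participant.
import math
import random


def propose_stimulus(current, step=0.5):
    """Propose a nearby point in a (here, one-dimensional) stimulus space."""
    return current + random.gauss(0.0, step)


def simulated_participant(option_a, option_b, prior_mean=2.0, prior_sd=1.0):
    """Stand-in for a participant's two-alternative forced choice.

    In a real experiment the two stimuli would be presented and the participant
    would pick the better example of the target category. Choosing in proportion
    to the (assumed Gaussian) prior implements a Barker-style acceptance rule,
    so the chain's stationary distribution approximates that prior.
    """
    def prior(x):
        return math.exp(-0.5 * ((x - prior_mean) / prior_sd) ** 2)

    p_choose_b = prior(option_b) / (prior(option_a) + prior(option_b))
    return option_b if random.random() < p_choose_b else option_a


def run_chain(n_trials=2000, start=0.0):
    """Run one chain; each trial is one proposal plus one participant choice."""
    state, samples = start, []
    for _ in range(n_trials):
        proposal = propose_stimulus(state)
        state = simulated_participant(state, proposal)  # choice = acceptance step
        samples.append(state)
    return samples


if __name__ == "__main__":
    samples = run_chain()
    burned = samples[500:]  # discard burn-in trials
    print(f"estimated prior mean ~ {sum(burned) / len(burned):.2f}")
```

In practice many such chains would be run in parallel over a high-dimensional stimulus space, with each acceptance decision made by a different online participant; the aggregated states then form the high-resolution map of the perceptual prior described above.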