Sound Perception
Encoding and perceiving the texture of sounds: auditory midbrain codes for recognizing and categorizing auditory texture and for listening in noise
Natural soundscapes, such as those of a forest, a busy restaurant, or a busy intersection, are generally composed of a cacophony of sounds that the brain needs to interpret either independently or collectively. In certain instances, sounds such as those from moving cars, sirens, and people talking are perceived in unison and recognized collectively as a single sound (e.g., city noise). In other instances, such as the cocktail party problem, multiple sounds compete for attention, so that the surrounding background noise (e.g., speech babble) interferes with the perception of a single sound source (e.g., a single talker). I will describe results from my lab on the perception and neural representation of auditory textures. Textures, such as those of a babbling brook, restaurant noise, or speech babble, are stationary sounds consisting of multiple independent sound sources that can be quantitatively defined by the summary statistics of an auditory model (McDermott & Simoncelli 2011). How and where summary statistics are represented in the auditory system, and which neural codes potentially contribute to their perception, however, remain largely unknown. Using high-density multi-channel recordings from the auditory midbrain of unanesthetized rabbits and complementary perceptual studies on human listeners, I will first describe neural and perceptual strategies for encoding and perceiving auditory textures. I will demonstrate how distinct sound statistics, including the sound spectrum and high-order statistics related to the temporal and spectral correlation structure of sounds, contribute to texture perception and are reflected in neural activity. Using decoding methods, I will then demonstrate how various low- and high-order neural response statistics differentially contribute to a variety of auditory tasks, including texture recognition, discrimination, and categorization. Finally, I will show examples from our recent studies of how high-order sound statistics and the accompanying neural activity underlie difficulties in recognizing speech in background noise.
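For readers unfamiliar with texture summary statistics, the following is a minimal illustrative sketch of the kind of measurements involved: band-limited envelopes of the sound, their marginal moments, and cross-band envelope correlations. It is not the authors' code or the full McDermott & Simoncelli (2011) model (which uses a cochlear filterbank, envelope compression, and modulation statistics); the filter design, band count, and chosen moments here are simplifying assumptions.

```python
# Illustrative sketch of auditory texture summary statistics (assumptions noted
# above): bandpass envelopes, per-band marginal moments, cross-band correlations.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def texture_summary_stats(x, fs, n_bands=8, f_lo=100.0, f_hi=8000.0):
    """Compute simple texture statistics for a mono signal x sampled at fs Hz."""
    # Log-spaced band edges, loosely approximating cochlear frequency spacing.
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)
    envs = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, x)
        envs.append(np.abs(hilbert(band)))  # Hilbert envelope of each band
    envs = np.array(envs)  # shape: (n_bands, n_samples)

    # Marginal moments per band: mean, coefficient of variation, skewness.
    mean = envs.mean(axis=1)
    var = envs.var(axis=1)
    cv = np.sqrt(var) / (mean + 1e-12)
    skew = ((envs - mean[:, None]) ** 3).mean(axis=1) / (var ** 1.5 + 1e-12)
    # Cross-band envelope correlations capture spectral correlation structure.
    corr = np.corrcoef(envs)
    return {"mean": mean, "cv": cv, "skew": skew, "corr": corr}
```

In the texture framework, two sounds whose statistics of this kind match are perceived as the same texture even when their waveforms differ, which is what makes such statistics a candidate code for texture recognition and categorization.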
The neural mechanisms for song evaluation in fruit flies
How does the brain decode the meaning of sound signals, such as music and courtship songs? We believe that the fruit fly Drosophila melanogaster is an ideal model for answering this question, as it offers a comprehensive range of tools and assays that allow us to dissect the mechanisms underlying sound perception and evaluation in the brain. During courtship, male fruit flies emit “courtship songs” by vibrating their wings. Interestingly, the fly song has a species-specific rhythm, which increases both the female’s receptivity to copulation and the male’s own courtship behavior. How are song signals, especially this species-specific rhythm, evaluated in the fly brain? To tackle this question, we are systematically exploring the fly auditory system. In this lecture, I will present our recent findings on the neural basis of song evaluation in fruit flies.
fMRI mapping of brain circuits during simple sound perception by awake rats