Authors & Affiliations
Guoyang Liao, Samuel Norman-Haignere, Dana Boebinger, Christopher Garcia, Kirill Nourski, Matthew Howard, Thomas Wychowski, Webster Pilcher
Abstract
Neural responses throughout the auditory pathway show tuning for modulations in a time-frequency representation of sound, but how these spectrotemporal modulations are encoded in the human auditory cortex remains poorly understood. Classical linear spectrotemporal receptive field (STRF) models are simple to understand and fit, but they have limited predictive power, particularly for complex natural sounds in non-primary regions of the human auditory cortex. We measured responses to a diverse set of natural sounds using spatiotemporally precise intracranial recordings from human neurosurgical patients. We then attempted to predict the response of each electrode using either a linear STRF model or a two-stage model that first computed spectrotemporal envelopes from a bank of STRFs and then linearly mapped these envelopes to the neural response. We find that the two-stage envelope model nearly doubles the predictive power of STRF models in non-primary auditory cortex. These findings reveal how spectrotemporal modulations are represented in the human auditory cortex and demonstrate how to substantially enhance the predictive power of a workhorse auditory model.
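To make the two-stage architecture concrete, below is a minimal sketch of how such an envelope model could be fit, assuming a precomputed cochleagram, a toy Gabor-like bank of STRF filters, and ridge regression for the final linear mapping. The function names, filter parameters, and frequency pooling are illustrative assumptions and are not the implementation used in this work.

```python
# Hypothetical sketch of a two-stage envelope model (not the authors' code).
# Stage 1: filter a cochleagram with a bank of STRF-like filters and take envelopes.
# Stage 2: linearly map those envelopes to one electrode's response.
import numpy as np
from scipy.signal import fftconvolve, hilbert
from sklearn.linear_model import RidgeCV

def strf_filter_bank(n_freq=32, n_lag=20, rates=(2, 4, 8), scales=(0.5, 1, 2)):
    """Toy bank of windowed sinusoidal (Gabor-like) spectrotemporal filters."""
    lags = np.arange(n_lag)[None, :]           # time lags (frames)
    freqs = np.arange(n_freq)[:, None]         # frequency channels
    window = np.outer(np.hanning(n_freq), np.hanning(n_lag))
    bank = []
    for rate in rates:                         # temporal modulation rate
        for scale in scales:                   # spectral modulation scale
            carrier = np.cos(2 * np.pi * (rate * lags / n_lag + scale * freqs / n_freq))
            bank.append(carrier * window)
    return bank

def modulation_envelopes(cochleagram, bank):
    """Stage 1: convolve with each STRF, take the temporal envelope, pool over frequency."""
    feats = []
    for strf in bank:
        filtered = fftconvolve(cochleagram, strf, mode="same")   # 2D spectrotemporal filtering
        envelope = np.abs(hilbert(filtered, axis=1))             # envelope along time
        feats.append(envelope.mean(axis=0))                      # simple frequency pooling
    return np.stack(feats, axis=1)                               # time x n_filters

def fit_two_stage(cochleagram, neural_response, bank):
    """Stage 2: linear (ridge) mapping from envelope features to the electrode response."""
    X = modulation_envelopes(cochleagram, bank)
    return RidgeCV(alphas=np.logspace(-2, 4, 13)).fit(X, neural_response)

# Example with random data standing in for a real cochleagram and recording.
rng = np.random.default_rng(0)
coch = rng.standard_normal((32, 1000))         # 32 frequency channels x 1000 time frames
resp = rng.standard_normal(1000)               # response of one electrode over time
bank = strf_filter_bank()
model = fit_two_stage(coch, resp, bank)
print("in-sample R^2:", model.score(modulation_envelopes(coch, bank), resp))
```

In this sketch the only nonlinearity is the envelope extraction between the two linear stages; dropping that step and regressing directly on lagged cochleagram features would recover an ordinary linear STRF, which is the baseline the abstract compares against.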