ePoster

Time-yoked integration throughout human auditory cortex

Samuel Norman-Haignere and 7 co-authors

Presenting Author

Conference
COSYNE 2025
Montreal, Canada

Authors & Affiliations

Samuel Norman-Haignere, Menoua Keshishian, Orrin Devinsky, Werner Doyle, Guy McKhann, Catherine Schevon, Adeen Flinker, Nima Mesgarani

Abstract

The sound structures that convey meaning in speech, such as phonemes and words, vary widely in duration. As a consequence, integrating across absolute time (e.g., 100 ms) and integrating across sound structure (e.g., phonemes) reflect fundamentally distinct neural computations. Auditory and cognitive models have often cast neural integration in terms of time and structure, respectively, but whether neural computations in the auditory cortex are yoked to time or to structure remains unknown. To answer this question, we rescaled the duration of all speech structures using time stretching/compression and measured integration windows using a new paradigm that is effective in nonlinear systems. Our approach revealed a clear transition from time- to structure-yoked computation across the layers of a popular deep neural network model trained to recognize structure from natural speech. When applied to spatiotemporally precise intracranial recordings from the human auditory cortex, we observed significantly longer integration windows for stretched vs. compressed speech, but this lengthening was very small (~5%) relative to the change in structure durations, even in non-primary regions strongly implicated in speech-specific processing. These findings demonstrate that time-yoked computations dominate throughout the human auditory cortex, placing strong constraints on neurocomputational models of structure processing.
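The key manipulation above is rescaling the duration of every speech structure by a fixed factor. A minimal sketch of that idea (not the authors' actual pipeline, which would use pitch-preserving stretching such as a phase vocoder) is naive interpolation-based resampling with NumPy; the function name and toy signal below are illustrative assumptions:

```python
import numpy as np

def time_stretch(signal: np.ndarray, factor: float) -> np.ndarray:
    """Rescale a signal so every temporal structure lasts `factor` times longer.

    Naive linear-interpolation resampling: a factor of 2.0 doubles all
    durations (stretching); 0.5 halves them (compression). Note this also
    shifts pitch, unlike phase-vocoder methods that preserve it.
    """
    n_out = int(round(len(signal) * factor))
    old_t = np.arange(len(signal))
    new_t = np.linspace(0, len(signal) - 1, n_out)
    return np.interp(new_t, old_t, signal)

# Toy 1-second "speech-like" signal at 16 kHz (illustrative, not real speech).
sr = 16000
t = np.arange(sr) / sr
toy = np.sin(2 * np.pi * 150 * t)

stretched = time_stretch(toy, 2.0)   # all structures now last twice as long
compressed = time_stretch(toy, 0.5)  # all structures now last half as long
```

Comparing neural integration windows for `stretched` vs. `compressed` renditions of the same utterance is what distinguishes the two hypotheses: a purely time-yoked window keeps the same absolute duration under both, while a structure-yoked window would scale with the stretch factor.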

Unique ID: cosyne-25/time-yoked-integration-throughout-fd28111f