Authors & Affiliations
Sophie Bagur, Jacques Bourg, Alexandre Kempf, Thibault Tarpin, Khalil Bergaoui, Yin Guo, Etienne Gosselin, Alain Muller, Jean Luc Puel, Jérôme Bourien, Brice Bathellier
Abstract
The computational principles driving the transformation of sound information throughout the auditory system remain a subject of intense research. Here, we used large-scale samples of neuronal activity from the auditory cortex, thalamus and inferior colliculus in mice, and from a detailed cochlea model, to identify key transformations of population representations of simple, short (~500 ms) time-varying sounds in the auditory pathway. Using noise-corrected metrics, we measured the similarity of evoked population patterns across sounds. In subcortical regions, this measure shows that the full temporal structure of population activity (sequence code) separates sounds better than time-averaged firing rates (cell identity code). However, in the cortex these two codes converge. This result is not due to a reduced temporal resolution but instead reflects a hybrid coding scheme in which the activity sequences and the identity of active neurons carry redundant information in the cortex. We also observed that deep networks trained to identify sounds or sound attributes show the same convergence of sequence and identity codes in their deeper layers. This suggests that the emergence of a cell identity code is key to assigning time-varying sounds to particular labels or decisions. In line with this, we found that the cell identity code, and not the sequence code, determines the speed of associative learning in a reinforcement-learning model with plausible synaptic learning rules. The emergence of the cell identity code in the cortex is associated with a sparsening of single-neuron responses, which follows a dense and correlated code in the thalamus. Because the introduction of a reduced-size layer in deep artificial networks leads to denser representations, the dense code in the thalamus may result from its position as an anatomical bottleneck. Overall, our results reveal a cortical reformatting of information to generate a cell identity code, which seems crucial for associating sounds with behavior and meaning.
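To make the sequence-code versus identity-code comparison concrete, the sketch below shows one common way to compute a noise-corrected pattern similarity between two sounds, applied either to flattened neuron-by-time-bin patterns (sequence code) or to time-averaged firing rates (identity code). The abstract does not specify the exact metric used in the study, so the split-half noise-corrected correlation, the array shapes and all variable names here are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: a split-half noise-corrected correlation is used
# as a stand-in for the paper's (unspecified) "noise-corrected metrics".
import numpy as np

def noise_corrected_similarity(resp_a, resp_b, seed=0):
    """Noise-corrected correlation between mean population patterns.

    resp_a, resp_b: (n_trials, n_features) single-trial response vectors to
    sounds A and B. Features are either time-averaged rates (identity code)
    or flattened neuron x time-bin patterns (sequence code).
    """
    rng = np.random.default_rng(seed)

    def split_means(resp):
        # Average each random half of the trials to get two noisy estimates
        # of the mean population pattern.
        idx = rng.permutation(resp.shape[0])
        half = resp.shape[0] // 2
        return resp[idx[:half]].mean(0), resp[idx[half:]].mean(0)

    a1, a2 = split_means(resp_a)
    b1, b2 = split_means(resp_b)
    corr = lambda x, y: np.corrcoef(x, y)[0, 1]

    # Across-sound correlation, averaged over the two split assignments,
    # divided by the geometric mean of the within-sound reliabilities.
    cross = 0.5 * (corr(a1, b2) + corr(a2, b1))
    return cross / np.sqrt(corr(a1, a2) * corr(b1, b2))

# Toy example: two sounds sharing time-averaged rates but differing in their
# temporal sequences, so the identity code sees them as similar while the
# sequence code separates them (as reported for subcortical regions).
n_trials, n_neurons, n_bins = 20, 50, 25
rng = np.random.default_rng(0)
mu = rng.normal(size=(n_neurons, 1))                    # shared rate profile
sig_a = mu + rng.normal(size=(n_neurons, n_bins))       # sound-specific dynamics
sig_b = mu + rng.normal(size=(n_neurons, n_bins))
resp_a = sig_a + rng.normal(scale=2.0, size=(n_trials, n_neurons, n_bins))
resp_b = sig_b + rng.normal(scale=2.0, size=(n_trials, n_neurons, n_bins))

seq_sim = noise_corrected_similarity(resp_a.reshape(n_trials, -1),
                                     resp_b.reshape(n_trials, -1))
id_sim = noise_corrected_similarity(resp_a.mean(-1), resp_b.mean(-1))
print(f"sequence-code similarity: {seq_sim:.2f}")
print(f"identity-code similarity: {id_sim:.2f}")
```

In this toy setting the identity-code similarity is close to 1 while the sequence-code similarity is markedly lower, illustrating how the full temporal structure can separate sounds that time-averaged rates cannot; the convergence of the two measures would correspond to the cortical regime described above.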