Auditory

Topic spotlight · World Wide

Discover seminars, jobs, and research tagged with auditory across World Wide.
107 curated items · 60 Seminars · 40 ePosters · 7 Positions
Updated 1 day ago
107 results
Position

Professors Yale Cohen and Jennifer Groh

University of Pennsylvania
Philadelphia, USA
Dec 5, 2025

Yale Cohen (U. Penn; https://auditoryresearchlaboratory.weebly.com/) and Jennifer Groh (Duke U.; www.duke.edu/~jmgroh) seek a full-time post-doctoral scholar. Our labs study visual, auditory, and multisensory processing in the brain using neurophysiological and computational techniques. We have a newly funded NIH grant to study the contribution of corticofugal connectivity in non-human primate models of auditory perception. The work will take place at the Penn site. This will be a full-time, 12-month renewable appointment. Salary will be commensurate with experience and consistent with NIH NRSA stipends. To apply, send your CV along with contact information for 2 referees to: compneuro@sas.upenn.edu. For questions, please contact Yale Cohen (ycohen@pennmedicine.upenn.edu). Applications will be considered on a rolling basis, and we anticipate a summer 2022 start date. Penn is an Affirmative Action / Equal Opportunity Employer committed to providing employment opportunity without regard to an individual’s age, color, disability, gender, gender expression, gender identity, genetic information, national origin, race, religion, sex, sexual orientation, or veteran status.

Position

Melissa Caras

University of Maryland College Park
College Park, Maryland, USA
Dec 5, 2025

We are seeking a highly motivated applicant to join our team as a full-time research technician studying the neural basis of auditory perceptual learning. The successful candidate will be responsible for managing daily laboratory activities, including maintaining the animal colony, ordering supplies, preparing common use solutions, and overseeing lab safety compliance. In addition, the hired applicant will support ongoing projects in the lab by training and testing Mongolian gerbils on auditory detection and discrimination tasks, assisting with or performing survival surgeries, performing perfusions, and processing and imaging histological tissue. The candidate will have the opportunity to gain experience with a number of techniques, including in vivo electrophysiology, pharmacology, fiber photometry, operant conditioning, chemogenetics, and/or optogenetics. This position is an ideal fit for an individual looking to gain independent research experience before applying to graduate or medical school. This is a one-year position, with the option to renew for a second year.

Position

Dr. Melissa Caras

University of Maryland
College Park, Maryland, USA
Dec 5, 2025

We are looking for a postdoctoral fellow to study neuromodulatory mechanisms supporting auditory perceptual learning in Mongolian gerbils. The successful applicant will measure and manipulate neuromodulatory release, and assess its impact on cortical activity in freely-moving animals engaged in auditory detection tasks. A variety of techniques will be used, including in vivo multichannel electrophysiology and pharmacology, fiber photometry, novel genetically-encoded fluorescent biosensors, chemogenetics and/or optogenetics. The candidate will be highly involved in all aspects of the research, from design to publication, and will additionally have the opportunity to mentor graduate and undergraduate students.

Position · Neuroscience

Santiago Jaramillo

University of Oregon
Eugene, OR, USA
Dec 5, 2025

The Jaramillo lab investigates the neural basis of expectation, attention, decision-making and learning in the context of sound-driven behaviors in mice. Projects during the postdoctoral fellowship will study these cognitive processes by monitoring and manipulating neuronal activity during adaptive behaviors with cell-type and pathway specificity using techniques such as two-photon microscopy (including mesoscope imaging), high-density electrophysiology (using Neuropixels probes), and optogenetic manipulation of neural activity.

Position · Neuroscience

Prof. Ross Williamson

University of Pittsburgh
Pittsburgh, PA, USA
Dec 5, 2025

The Williamson Laboratory investigates the organization and function of auditory cortical projection systems in behaving mice. We use a variety of state-of-the-art tools to probe the neural circuits of awake mice – these include two-photon calcium imaging and high-channel count electrophysiology (both with single-cell optogenetic perturbations), head-fixed behaviors (including virtual reality), and statistical approaches for neural characterization. Details on the research focus and approaches of the laboratory can be found here: https://www.williamsonlaboratory.com/research/

Seminar · Neuroscience

Top-down control of neocortical threat memory

Prof. Dr. Johannes Letzkus
Universität Freiburg, Germany
Nov 11, 2025

Accurate perception of the environment is a constructive process that requires integration of external bottom-up sensory signals with internally-generated top-down information reflecting past experiences and current aims. Decades of work have elucidated how sensory neocortex processes physical stimulus features. In contrast, examining how memory-related top-down information is encoded and integrated with bottom-up signals has long been challenging. Here, I will discuss our recent work pinpointing the outermost layer 1 of neocortex as a central hotspot for the processing of experience-dependent top-down information during threat perception, one of the most fundamentally important forms of sensation.

Seminar · Neuroscience

Competing Rhythms: Understanding and Modulating Auditory Neural Entrainment

Dr. Yuranny Cabral-Calderin
Freie Universität Berlin, Germany
Oct 7, 2025

Seminar · Neuroscience

Neural mechanisms of optimal performance

Luca Mazzucato
University of Oregon
May 22, 2025

When we attend a demanding task, our performance is poor at low arousal (when drowsy) or high arousal (when anxious), but we achieve optimal performance at intermediate arousal. This celebrated Yerkes-Dodson inverted-U law relating performance and arousal is colloquially referred to as being "in the zone." In this talk, I will elucidate the behavioral and neural mechanisms linking arousal and performance under the Yerkes-Dodson law in a mouse model. During decision-making tasks, mice express an array of discrete strategies, whereby the optimal strategy occurs at intermediate arousal, measured by pupil, consistent with the inverted-U law. Population recordings from the auditory cortex (A1) further revealed that sound encoding is optimal at intermediate arousal. To explain the computational principle underlying this inverted-U law, we modeled the A1 circuit as a spiking network with excitatory/inhibitory clusters, based on the observed functional clusters in A1. Arousal induced a transition from a multi-attractor (low arousal) to a single attractor phase (high arousal), and performance is optimized at the transition point. The model also predicts stimulus- and arousal-induced modulations of neural variability, which we confirmed in the data. Our theory suggests that a single unifying dynamical principle, phase transitions in metastable dynamics, underlies both the inverted-U law of optimal performance and state-dependent modulations of neural variability.

Seminar · Neuroscience

The representation of speech conversations in the human auditory cortex

Etienne Abassi
McGill University
Apr 2, 2025

Seminar · Neuroscience

Making Sense of Sounds: Cortical Mechanisms for Dynamic Auditory Perception

Maria Geffen
University of Pennsylvania
Mar 23, 2025

Seminar · Neuroscience

LLMs and Human Language Processing

Mariya Toneva, Ariel Goldstein, Jean-Remi King
Max Planck Institute for Software Systems; Hebrew University; École Normale Supérieure
Nov 28, 2024

This webinar convened researchers at the intersection of Artificial Intelligence and Neuroscience to investigate how large language models (LLMs) can serve as valuable “model organisms” for understanding human language processing. Presenters showcased evidence that brain recordings (fMRI, MEG, ECoG) acquired while participants read or listened to unconstrained speech can be predicted by representations extracted from state-of-the-art text- and speech-based LLMs. In particular, text-based LLMs tend to align better with higher-level language regions, capturing more semantic aspects, while speech-based LLMs excel at explaining early auditory cortical responses. However, purely low-level features can drive part of these alignments, complicating interpretations. New methods, including perturbation analyses, highlight which linguistic variables matter for each cortical area and time scale. Further, “brain tuning” of LLMs—fine-tuning on measured neural signals—can improve semantic representations and downstream language tasks. Despite open questions about interpretability and exact neural mechanisms, these results demonstrate that LLMs provide a promising framework for probing the computations underlying human language comprehension and production at multiple spatiotemporal scales.
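
The encoding-model recipe summarized here is standard enough to sketch. Below is a minimal illustration with random arrays standing in for both the LLM features and the brain data; in a real study, X would hold layer activations aligned to the stimulus (and convolved with a hemodynamic response for fMRI), and Y the measured voxel time courses:

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-ins for real data: X = stimulus features (e.g., LLM layer
# activations per fMRI volume), Y = per-voxel responses.
n_trs, n_features, n_voxels = 600, 128, 50
X = rng.normal(size=(n_trs, n_features))
true_W = rng.normal(size=(n_features, n_voxels)) * (rng.random((n_features, n_voxels)) < 0.1)
Y = X @ true_W + rng.normal(scale=2.0, size=(n_trs, n_voxels))

# Keep temporal order when splitting a time series.
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, shuffle=False)

# Ridge regression per voxel, with the penalty chosen by cross-validation.
model = RidgeCV(alphas=np.logspace(-2, 4, 13)).fit(X_tr, Y_tr)
pred = model.predict(X_te)

# Standard evaluation: per-voxel correlation on held-out data.
r = [np.corrcoef(pred[:, v], Y_te[:, v])[0, 1] for v in range(n_voxels)]
print("median prediction r:", np.round(np.median(r), 3))
```

Voxels whose held-out correlation is reliably above zero count as explained by the model; comparing such maps across text-based and speech-based LLMs is what underlies the regional claims above.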

Seminar · Neuroscience

Exploring the cerebral mechanisms of acoustically-challenging speech comprehension - successes, failures and hope

Alexis Hervais-Adelman
University of Geneva
May 20, 2024

Comprehending speech under acoustically challenging conditions is an everyday task that we can often execute with ease. However, accomplishing this requires the engagement of cognitive resources, such as auditory attention and working memory. The mechanisms that contribute to the robustness of speech comprehension are of substantial interest in the context of mild to moderate hearing impairment, in which affected individuals typically report specific difficulties in understanding speech in background noise. Although hearing aids can help to mitigate this, they do not represent a universal solution; thus, finding alternative interventions is necessary. Given that age-related hearing loss (“presbycusis”) is inevitable, developing new approaches is all the more important in the context of aging populations. Moreover, untreated hearing loss in middle age has been identified as the most significant potentially modifiable predictor of dementia in later life. I will present research that has used a multi-methodological approach (fMRI, EEG, MEG and non-invasive brain stimulation) to try to elucidate the mechanisms that comprise the cognitive “last mile” of acoustically challenging speech comprehension and to find ways to enhance them.

Seminar · Neuroscience · Recording

Executive functions in the brain of deaf individuals – sensory and language effects

Velia Cardin
UCL
Mar 20, 2024

Executive functions are cognitive processes that allow us to plan, monitor and execute our goals. Using fMRI, we investigated how early deafness influences crossmodal plasticity and the organisation of executive functions in the adult human brain. Results from a range of visual executive function tasks (working memory, task switching, planning, inhibition) show that deaf individuals specifically recruit superior temporal “auditory” regions during task switching. Neural activity in auditory regions predicts behavioural performance during task switching in deaf individuals, highlighting the functional relevance of the observed cortical reorganisation. Furthermore, language grammatical skills were correlated with the level of activation and functional connectivity of fronto-parietal networks. Together, these findings show the interplay between sensory and language experience in the organisation of executive processing in the brain.

Seminar · Neuroscience

Dyslexia, Rhythm, Language and the Developing Brain

Usha Goswami CBE
University of Cambridge
Feb 21, 2024

Recent insights from auditory neuroscience provide a new perspective on how the brain encodes speech. Using these recent insights, I will provide an overview of key factors underpinning individual differences in children’s development of language and phonology, providing a context for exploring atypical reading development (dyslexia). Children with dyslexia are relatively insensitive to acoustic cues related to speech rhythm patterns. This lack of rhythmic sensitivity is related to the atypical neural encoding of rhythm patterns in speech by the brain. I will describe our recent data from infants as well as children, demonstrating developmental continuity in the key neural variables.

Seminar · Neuroscience · Recording

Event-related frequency adjustment (ERFA): A methodology for investigating neural entrainment

Mattia Rosso
Ghent University, IPEM Institute for Systematic Musicology
Nov 28, 2023

Neural entrainment has become a phenomenon of exceptional interest to neuroscience, given its involvement in rhythm perception, production, and overt synchronized behavior. Yet, traditional methods fail to quantify neural entrainment due to a misalignment with its fundamental definition (e.g., see Novembre and Iannetti, 2018; Rajendran and Schnupp, 2019). The definition of entrainment assumes that endogenous oscillatory brain activity undergoes dynamic frequency adjustments to synchronize with environmental rhythms (Lakatos et al., 2019). Following this definition, we recently developed a method sensitive to this process. Our aim was to isolate from the electroencephalographic (EEG) signal an oscillatory component that is attuned to the frequency of a rhythmic stimulation, hypothesizing that the oscillation would adaptively speed up and slow down to achieve stable synchronization over time. To induce and measure these adaptive changes in a controlled fashion, we developed the event-related frequency adjustment (ERFA) paradigm (Rosso et al., 2023). A total of twenty healthy participants took part in our study. They were instructed to tap their finger synchronously with an isochronous auditory metronome, which was unpredictably perturbed by phase-shifts and tempo-changes in both positive and negative directions across different experimental conditions. EEG was recorded during the task, and ERFA responses were quantified as changes in instantaneous frequency of the entrained component. Our results indicate that ERFAs track the stimulus dynamics in accordance with the perturbation type and direction, preferentially for a sensorimotor component. The clear and consistent patterns confirm that our method is sensitive to the process of frequency adjustment that defines neural entrainment. In this Virtual Journal Club, the discussion of our findings will be complemented by methodological insights beneficial to researchers in the fields of rhythm perception and production, as well as timing in general. We discuss the dos and don’ts of using instantaneous frequency to quantify oscillatory dynamics, the advantages of adopting a multivariate approach to source separation, the robustness against the confounder of responses evoked by periodic stimulation, and provide an overview of domains and concrete examples where the methodological framework can be applied.
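
For readers unfamiliar with the key quantity, instantaneous frequency is conventionally read off the analytic signal of a narrow-band component. Below is a minimal sketch of that step on a synthetic signal; it is not the authors' pipeline (which additionally uses multivariate source separation to extract the entrained component), and the filter settings are illustrative:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def instantaneous_frequency(component, fs, f_stim, half_bw=1.0):
    """Instantaneous frequency (Hz) of a component entrained near f_stim.

    component : 1-D array (e.g., one source-separated EEG component)
    fs        : sampling rate (Hz); f_stim : stimulation frequency (Hz)
    """
    # Isolate a narrow band around the stimulation frequency.
    sos = butter(2, [f_stim - half_bw, f_stim + half_bw], btype="band",
                 fs=fs, output="sos")
    narrow = sosfiltfilt(sos, component)
    # Analytic signal -> unwrapped phase; its time derivative is frequency.
    phase = np.unwrap(np.angle(hilbert(narrow)))
    return np.diff(phase) * fs / (2 * np.pi)

# Toy check: a 2 Hz "entrained" oscillation whose tempo steps up at t = 10 s.
fs = 500
t = np.arange(0, 20, 1 / fs)
f = 2.0 + 0.1 * (t > 10)                    # tempo change at t = 10 s
x = np.sin(2 * np.pi * np.cumsum(f) / fs)

ifr = instantaneous_frequency(x, fs, f_stim=2.0)
print(ifr[2000:4000].mean(), ifr[6000:8000].mean())   # ~2.0 Hz, then ~2.1 Hz
```

In the ERFA logic, such instantaneous-frequency traces would then be epoched around each phase-shift or tempo-change event and averaged, analogous to an event-related potential.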

Seminar · Neuroscience

Vocal emotion perception at millisecond speed

Ana Pinheiro
University of Lisbon
Oct 16, 2023

The human voice is possibly the most important sound category in the social landscape. Compared to other non-verbal emotion signals, the voice is particularly effective in communicating emotions: it can carry information over large distances and independently of sight. However, the study of vocal emotion expression and perception is surprisingly far less developed than the study of emotion in faces. As a result, its neural and functional correlates remain elusive. As the voice represents a dynamically changing auditory stimulus, temporally sensitive techniques such as EEG are particularly informative. In this talk, the dynamic neurocognitive operations that take place when we listen to vocal emotions will be specified, with a focus on the effects of stimulus type, task demands, and speaker and listener characteristics (e.g., age). These studies suggest that emotional voice perception is not only a matter of how one speaks but also of who speaks and who listens. Implications of these findings for the understanding of psychiatric disorders such as schizophrenia will be discussed.

Seminar · Neuroscience · Recording

Learning with multimodal enrichment

Katharina von Kriegstein
Technical University Dresden
Oct 4, 2023

Seminar · Neuroscience · Recording

Rodents to Investigate the Neural Basis of Audiovisual Temporal Processing and Perception

Ashley Schormans
BrainsCAN, Western University, Canada.
Sep 26, 2023

To form a coherent perception of the world around us, we are constantly processing and integrating sensory information from multiple modalities. In fact, when auditory and visual stimuli occur within ~100 ms of each other, individuals tend to perceive the stimuli as a single event, even though they occurred separately. In recent years, our lab, and others, have developed rat models of audiovisual temporal perception using behavioural tasks such as temporal order judgments (TOJs) and synchrony judgments (SJs). While these rodent models demonstrate metrics that are consistent with humans (e.g., perceived simultaneity, temporal acuity), we have sought to confirm whether rodents demonstrate the hallmarks of audiovisual temporal perception, such as predictable shifts in their perception based on experience and sensitivity to alterations in neurochemistry. Ultimately, our findings indicate that rats serve as an excellent model to study the neural mechanisms underlying audiovisual temporal perception, which to date remain relatively unknown. Using our validated translational audiovisual behavioural tasks, in combination with optogenetics, neuropharmacology and in vivo electrophysiology, we aim to uncover the mechanisms by which inhibitory neurotransmission and top-down circuits finely control one’s perception. This research will significantly advance our understanding of the neuronal circuitry underlying audiovisual temporal perception, and will be the first to establish the role of interneurons in regulating the synchronized neural activity that is thought to contribute to the precise binding of audiovisual stimuli.

Seminar · Neuroscience · Recording

Internal representation of musical rhythm: transformation from sound to periodic beat

Tomas Lenc
Institute of Neuroscience, UCLouvain, Belgium
May 30, 2023

When listening to music, humans readily perceive and move along with a periodic beat. Critically, perception of a periodic beat is commonly elicited by rhythmic stimuli with physical features arranged in a way that is not strictly periodic. Hence, beat perception must capitalize on mechanisms that transform stimulus features into a temporally recurrent format with emphasized beat periodicity. Here, I will present a line of work that aims to clarify the nature and neural basis of this transformation. In these studies, electrophysiological activity was recorded as participants listened to rhythms known to induce perception of a consistent beat across healthy Western adults. The results show that the human brain selectively emphasizes beat representation when it is not acoustically prominent in the stimulus, and this transformation (i) can be captured non-invasively using surface EEG in adult participants, (ii) is already in place in 5- to 6-month-old infants, and (iii) cannot be fully explained by subcortical auditory nonlinearities. Moreover, as revealed by human intracerebral recordings, a prominent beat representation emerges already in the primary auditory cortex. Finally, electrophysiological recordings from the auditory cortex of a rhesus monkey show a significant enhancement of beat periodicities in this area, similar to humans. Taken together, these findings indicate an early, general auditory cortical stage of processing by which rhythmic inputs are rendered more temporally recurrent than they are in reality. Already present in non-human primates and human infants, this "periodized" default format could then be shaped by higher-level associative sensory-motor areas and guide movement in individuals with strongly coupled auditory and motor systems. Together, this highlights the multiplicity of neural processes supporting coordinated musical behaviors widely observed across human cultures.

Seminar · Neuroscience · Recording

The Effects of Movement Parameters on Time Perception

Keri Anne Gladhill
Florida State University, Tallahassee, Florida.
May 30, 2023

Mobile organisms must be capable of deciding both where and when to move in order to keep up with a changing environment; therefore, a strong sense of time is necessary, otherwise, we would fail in many of our movement goals. Despite this intrinsic link between movement and timing, only recently has research begun to investigate the interaction. Two primary effects that have been observed include: movements biasing time estimates (i.e., affecting accuracy) as well as making time estimates more precise. The goal of this presentation is to review this literature, discuss a Bayesian cue combination framework to explain these effects, and discuss the experiments I have conducted to test the framework. The experiments herein include: a motor timing task comparing the effects of movement vs non-movement with and without feedback (Exp. 1A & 1B), a transcranial magnetic stimulation (TMS) study on the role of the supplementary motor area (SMA) in transforming temporal information (Exp. 2), and a perceptual timing task investigating the effect of noisy movement on time perception with both visual and auditory modalities (Exp. 3A & 3B). Together, the results of these studies support the Bayesian cue combination framework, in that: movement improves the precision of time perception not only in perceptual timing tasks but also motor timing tasks (Exp. 1A & 1B), stimulating the SMA appears to disrupt the transformation of temporal information (Exp. 2), and when movement becomes unreliable or noisy there is no longer an improvement in precision of time perception (Exp. 3A & 3B). Although there is support for the proposed framework, more studies (i.e., fMRI, TMS, EEG, etc.) need to be conducted in order to better understand where and how this may be instantiated in the brain; however, this work provides a starting point for better understanding the intrinsic connection between time and movement.
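
For reference, the Bayesian cue-combination framework invoked here has a compact closed form for two independent Gaussian cues; a minimal sketch with made-up numbers:

```python
import numpy as np

def fuse(mu_sens, sig_sens, mu_move, sig_move):
    """Precision-weighted fusion of a sensory and a movement-based duration
    estimate: the optimal combined estimate for independent Gaussian cues."""
    w = sig_move**2 / (sig_sens**2 + sig_move**2)      # weight on sensory cue
    mu = w * mu_sens + (1 - w) * mu_move
    sig = np.sqrt(sig_sens**2 * sig_move**2 / (sig_sens**2 + sig_move**2))
    return mu, sig

# Reliable movement cue: fused precision beats either cue alone.
print(fuse(1.0, 0.20, 1.1, 0.20))   # sigma ~0.14 < 0.20
# Noisy movement: the fused estimate falls back on the sensory cue.
print(fuse(1.0, 0.20, 1.1, 2.00))   # mu ~1.0, sigma ~0.199
```

The second call mirrors the Exp. 3 logic: as the movement-based cue becomes noisy, its weight collapses and the fused estimate reverts to the sensory cue alone.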

Seminar · Psychology

The speaker identification ability of blind and sighted listeners

Almut Braun
Bundeskriminalamt, Wiesbaden
Feb 21, 2023

Previous studies have shown that blind individuals outperform sighted controls in a variety of auditory tasks; however, only a few studies have investigated blind listeners’ speaker identification abilities. In addition, existing studies in the area show conflicting results. The presented empirical investigation with 153 blind (74 of them congenitally blind) and 153 sighted listeners is the first of its kind and scale in which long-term memory effects on blind listeners’ speaker identification abilities are examined. For the empirical investigation, all listeners were evenly assigned to one of nine subgroups (3 x 3 design) in order to investigate the influence of two parameters, each with three levels, on blind and sighted listeners’ speaker identification performance. The parameters were a) time interval, i.e. a time interval of 1, 3 or 6 weeks between the first exposure to the voice to be recognised (familiarisation) and the speaker identification task (voice lineup); and b) signal quality, i.e. voice recordings were presented in either studio quality, mobile-phone quality or as recordings of whispered speech. Half of the presented voice lineups were target-present lineups in which the previously heard target voice was included. The other half consisted of target-absent lineups which contained solely distractor voices. Blind individuals outperformed sighted listeners only under studio-quality conditions. Furthermore, for blind and sighted listeners no significant performance differences were found with regard to the three investigated time intervals of 1, 3 and 6 weeks. Blind as well as sighted listeners were significantly better at picking the target voice from target-present lineups than at indicating that the target voice was absent in target-absent lineups. Within the blind group, no significant correlations were found between identification performance and onset or duration of blindness. Implications for the field of forensic phonetics are discussed.

Seminar · Neuroscience · Recording

Sampling the environment with body-brain rhythms

Antonio Criscuolo
Maastricht University
Jan 24, 2023

Since Darwin, comparative research has shown that most animals share basic timing capacities, such as the ability to process temporal regularities and produce rhythmic behaviors. What seems to be more exclusive, however, are the capacities to generate temporal predictions and to display anticipatory behavior at salient time points. These abilities are associated with subcortical structures like the basal ganglia (BG) and cerebellum (CE), which are more developed in humans as compared to nonhuman animals. In the first research line, we investigated the basic capacities to extract temporal regularities from the acoustic environment and produce temporal predictions. We did so by adopting a comparative and translational approach, making use of a unique EEG dataset including 2 macaque monkeys, 20 healthy young participants, 11 healthy older participants, and 22 stroke patients (11 with focal lesions in the BG and 11 in the CE). In the second research line, we holistically explore the functional relevance of body-brain physiological interactions in human behavior. A series of planned studies investigate the functional mechanisms by which body signals (e.g., respiratory and cardiac rhythms) interact with and modulate neurocognitive functions from rest and sleep states to action and perception. This project supports the effort towards individual profiling: are individuals’ timing capacities (e.g., rhythm perception and production) and general behavior (e.g., individual walking and speaking rates) influenced and shaped by body-brain interactions?

Seminar · Neuroscience · Recording

Flexible selection of task-relevant features through population gating

Joao Barbosa
Ostojic lab, Ecole Normale Superieure
Dec 6, 2022

Brains can gracefully weed out irrelevant stimuli to guide behavior. This feat is believed to rely on a progressive selection of task-relevant stimuli across the cortical hierarchy, but the specific across-area interactions enabling stimulus selection are still unclear. Here, we propose that population gating, occurring within A1 but controlled by top-down inputs from mPFC, can support across-area stimulus selection. Examining single-unit activity recorded while rats performed an auditory context-dependent task, we found that A1 encoded relevant and irrelevant stimuli along a common dimension of its neural space. Yet, the relevant stimulus encoding was enhanced along an extra dimension. In turn, mPFC encoded only the stimulus relevant to the ongoing context. To identify candidate mechanisms for stimulus selection within A1, we reverse-engineered low-rank RNNs trained on a similar task. Our analyses predicted that two context-modulated neural populations gated their preferred stimulus in opposite contexts, which we confirmed in further analyses of A1. Finally, we show in a two-region RNN how population gating within A1 could be controlled by top-down inputs from PFC, enabling flexible across-area communication despite fixed inter-areal connectivity.
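
For readers new to the model class: a low-rank RNN restricts the recurrent connectivity to a few outer products, so the network's recurrent dynamics collapse onto a small set of latent variables. A generic rank-one sketch (an illustration of the architecture, not the trained networks from the study):

```python
import numpy as np

rng = np.random.default_rng(0)
N, dt, T = 500, 0.1, 3000

m = rng.normal(size=N)          # "output" loading vector
n = rng.normal(size=N)          # "input-selection" loading vector
J = np.outer(m, n) / N          # rank-1 recurrent connectivity
w_in = rng.normal(size=N)       # feedforward stimulus weights

x = np.zeros(N)
kappa = np.zeros(T)             # latent recurrent variable
for t in range(T):
    u = np.sin(2 * np.pi * t * dt / 50)          # toy stimulus
    x += dt * (-x + J @ np.tanh(x) + w_in * u)   # rate dynamics (Euler step)
    kappa[t] = n @ np.tanh(x) / N                # activity along the rank-1 mode

# All recurrent feedback is summarized by kappa: J @ tanh(x) == m * kappa[t].
print(kappa[-5:].round(4))
```

In this picture, a context input that shifts one subpopulation's operating point changes that population's contribution to the latent variable, which is one way to implement the population gating described above.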

Seminar · Neuroscience · Recording

Multisensory influences on vision: Sounds enhance and alter visual-perceptual processing

Viola Störmer
Dartmouth College
Nov 30, 2022

Visual perception is traditionally studied in isolation from other sensory systems, and while this approach has been exceptionally successful, in the real world, visual objects are often accompanied by sounds, smells, tactile information, or taste. How is visual processing influenced by these other sensory inputs? In this talk, I will review studies from our lab showing that a sound can influence the perception of a visual object in multiple ways. In the first part, I will focus on spatial interactions between sound and sight, demonstrating that co-localized sounds enhance visual perception. Then, I will show that these cross-modal interactions also occur at a higher contextual and semantic level, where naturalistic sounds facilitate the processing of real-world objects that match these sounds. Throughout my talk I will explore to what extent sounds not only improve visual processing but also alter perceptual representations of the objects we see. Most broadly, I will argue for the importance of considering multisensory influences on visual perception for a more complete understanding of our visual experience.

Seminar · Neuroscience · Recording

A premotor amodal clock for rhythmic tapping

Hugo Merchant
National Autonomous University of Mexico
Nov 22, 2022

We recorded and analyzed the population activity of hundreds of neurons in the medial premotor areas (MPC) of rhesus monkeys performing an isochronous tapping task guided by brief flashing stimuli or auditory tones. The animals showed a strong bias towards visual metronomes, with rhythmic tapping that was more precise and accurate than for auditory metronomes. The population dynamics in state space, as well as the corresponding neural sequences, shared the following properties across modalities: the circular dynamics of the neural trajectories and the neural sequences formed a regenerating loop for every produced interval, producing a relative time representation; the trajectories converged to a similar region of state space at tapping times while the moving bumps restarted at this point, resetting the beat-based clock; and the tempo of the synchronized tapping was encoded by a combination of amplitude modulation and temporal scaling in the neural trajectories. In addition, modality induced a displacement of the neural trajectories into auditory and visual subspaces without greatly altering the timekeeping mechanism. These results suggest that the interaction between the amodal internal representation of pulse within MPC and a modality-specific external input generates a neural rhythmic clock whose dynamics define the temporal execution of tapping to auditory and visual metronomes.

Seminar · Neuroscience · Recording

Representations of people in the brain

Lucia Garrido
City, University of London
Nov 21, 2022

Faces and voices convey much of the non-verbal information that we use when communicating with other people. We look at faces and listen to voices to recognize others, understand how they are feeling, and decide how to act. Recent research in my lab aims to investigate whether there are similar coding mechanisms to represent faces and voices, and whether there are brain regions that integrate information across the visual and auditory modalities. In the first part of my talk, I will focus on an fMRI study in which we found that a region of the posterior STS exhibits modality-general representations of familiar people that can be similarly driven by someone’s face and their voice (Tsantani et al. 2019). In the second part of the talk, I will describe our recent attempts to shed light on the type of information that is represented in different face-responsive brain regions (Tsantani et al., 2021).

Seminar · Neuroscience

Intrinsic Geometry of a Combinatorial Sensory Neural Code for Birdsong

Tim Gentner
University of California, San Diego, USA
Nov 8, 2022

Understanding the nature of neural representation is a central challenge of neuroscience. One common approach to this challenge is to compute receptive fields by correlating neural activity with external variables drawn from sensory signals. But these receptive fields are only meaningful to the experimenter, not the organism, because only the experimenter has access to both the neural activity and knowledge of the external variables. To understand neural representation more directly, recent methodological advances have sought to capture the intrinsic geometry of sensory driven neural responses without external reference. To date, this approach has largely been restricted to low-dimensional stimuli as in spatial navigation. In this talk, I will discuss recent work from my lab examining the intrinsic geometry of sensory representations in a model vocal communication system, songbirds. From the assumption that sensory systems capture invariant relationships among stimulus features, we conceptualized the space of natural birdsongs to lie on the surface of an n-dimensional hypersphere. We computed composite receptive field models for large populations of simultaneously recorded single neurons in the auditory forebrain and show that solutions to these models define convex regions of response probability in the spherical stimulus space. We then define a combinatorial code over the set of receptive fields, realized in the moment-to-moment spiking and non-spiking patterns across the population, and show that this code can be used to reconstruct high-fidelity spectrographic representations of natural songs from evoked neural responses. Notably, we find that topological relationships among combinatorial codewords directly mirror acoustic relationships among songs in the spherical stimulus space. That is, the time-varying pattern of co-activity across the neural population expresses an intrinsic representational geometry that mirrors the natural, extrinsic stimulus space.  Combinatorial patterns across this intrinsic space directly represent complex vocal communication signals, do not require computation of receptive fields, and are in a form, spike time coincidences, amenable to biophysical mechanisms of neural information propagation.

Seminar · Neuroscience · Recording

Pitch and Time Interact in Auditory Perception

Jesse Pazdera
McMaster University, Canada
Oct 25, 2022

Research into pitch perception and time perception has typically treated the two as independent processes. However, previous studies of music and speech perception have suggested that pitch and timing information may be processed in an integrated manner, such that the pitch of an auditory stimulus can influence a person’s perception, expectation, and memory of its duration and tempo. Typically, higher-pitched sounds are perceived as faster and longer in duration than lower-pitched sounds with identical timing. We conducted a series of experiments to better understand the limits of this pitch-time integrality. Across several experiments, we tested whether the higher-equals-faster illusion generalizes across the broader frequency range of human hearing by asking participants to compare the tempo of a repeating tone played in one of six octaves to a metronomic standard. When participants heard tones from all six octaves, we consistently found an inverted U-shaped effect of the tone’s pitch height, such that perceived tempo peaked between A4 (440 Hz) and A5 (880 Hz) and decreased at lower and higher octaves. However, we found that the decrease in perceived tempo at extremely high octaves could be abolished by exposing participants to high-pitched tones only, suggesting that pitch-induced timing biases are context sensitive. We additionally tested how the timing of an auditory stimulus influences the perception of its pitch, using a pitch discrimination task in which probe tones occurred early, late, or on the beat within a rhythmic context. Probe timing strongly biased participants to rate later tones as lower in pitch than earlier tones. Together, these results suggest that pitch and time exert a bidirectional influence on one another, providing evidence for integrated processing of pitch and timing information in auditory perception. Identifying the mechanisms behind this pitch-time interaction will be critical for integrating current models of pitch and tempo processing.

Seminar · Neuroscience

Language Representations in the Human Brain: A naturalistic approach

Fatma Deniz
TU Berlin & Berkeley
Apr 26, 2022

Natural language is strongly context-dependent and can be perceived through different sensory modalities. For example, humans can easily comprehend the meaning of complex narratives presented through auditory speech, written text, or visual images. To understand how complex language-related information is represented in the human brain there is a necessity to map the different linguistic and non-linguistic information perceived under different modalities across the cerebral cortex. To map this information to the brain, I suggest following a naturalistic approach and observing the human brain performing tasks in its naturalistic setting, designing quantitative models that transform real-world stimuli into specific hypothesis-related features, and building predictive models that can relate these features to brain responses. In my talk, I will present models of brain responses collected using functional magnetic resonance imaging while human participants listened to or read natural narrative stories. Using natural text and vector representations derived from natural language processing tools I will present how we can study language processing in the human brain across modalities, in different levels of temporal granularity, and across different languages.

Seminar · Neuroscience

Unravelling bistable perception from human intracranial recordings

Rodica Curtu
University of Iowa
Apr 5, 2022

Discovering dynamical patterns from high fidelity timeseries is typically a challenging task. In this talk, the timeseries data consist of neural recordings taken from the auditory cortex of human subjects who listened to sequences of repeated triplets of tones and reported their perception by pressing a button. Subjects reported spontaneous alternations between two auditory perceptual states (1-stream and 2-streams). We discuss a data-driven method, which leverages time-delayed coordinates, diffusion maps, and dynamic mode decomposition, to identify neural features that correlated with subject-reported switching between perceptual states.
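
Two of the ingredients named here, time-delayed coordinates and dynamic mode decomposition, chain together in a few lines; the sketch below runs them on a synthetic two-channel recording (the diffusion-maps step is omitted, and all parameters are illustrative):

```python
import numpy as np

def delay_embed(x, n_delays):
    """Stack time-shifted copies of a (channels x time) array
    (Hankel-style time-delayed coordinates)."""
    _, T = x.shape
    return np.vstack([x[:, i:T - n_delays + 1 + i] for i in range(n_delays)])

def dmd_eigs(X, rank, dt):
    """Leading DMD eigenvalues of a snapshot matrix; returns the
    frequencies (Hz) and growth rates of the identified modes."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
    A_tilde = U.conj().T @ X2 @ (Vh.conj().T / s)   # projected linear map
    evals = np.linalg.eigvals(A_tilde)
    return np.angle(evals) / (2 * np.pi * dt), np.log(np.abs(evals)) / dt

# Toy "recording": two channels mixing 3 Hz and 7 Hz rhythms plus noise.
fs = 200
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
x = np.vstack([np.sin(2 * np.pi * 3 * t), np.sin(2 * np.pi * 7 * t)])
x = rng.normal(size=(2, 2)) @ x + 0.05 * rng.normal(size=(2, t.size))

H = delay_embed(x, n_delays=40)
freqs, growth = dmd_eigs(H, rank=4, dt=1 / fs)
print(np.sort(np.abs(freqs)))   # ~ [3, 3, 7, 7]
```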

Seminar · Neuroscience

Rhythms in sounds and rhythms in brains: the temporal structure of auditory comprehension

David Poeppel
Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, Frankfurt
Feb 9, 2022

Seminar · Neuroscience

From natural scene statistics to multisensory integration: experiments, models and applications

Cesare Parise
Oculus VR
Feb 8, 2022

To efficiently process sensory information, the brain relies on statistical regularities in the input. While generally improving the reliability of sensory estimates, this strategy also induces perceptual illusions that help reveal the underlying computational principles. Focusing on auditory and visual perception, in my talk I will describe how the brain exploits statistical regularities within and across the senses for the perception of space, time, and multisensory integration. In particular, I will show how results from a series of psychophysical experiments can be interpreted in the light of Bayesian Decision Theory, and I will demonstrate how such canonical computations can be implemented into simple and biologically plausible neural circuits. Finally, I will show how such principles of sensory information processing can be leveraged in virtual and augmented reality to overcome display limitations and expand human perception.

Seminar · Neuroscience

Heartbeat-based auditory regularities induce prediction in human wakefulness and sleep

Marzia de Lucia
Laboratoire de Recherche en Neuroimagerie (LREN), University Hospital (CHUV) and University of Lausanne (UNIL)
Feb 7, 2022

Exposure to sensory regularities in the environment induces the human brain to form expectations about incoming stimuli, a capacity that remains partially preserved in the absence of consciousness (i.e. coma and sleep). While regularity often refers to stimuli presented at a fixed pace, we recently explored whether auditory prediction extends to pseudo-regular sequences in which sensory prediction is induced by locking sound onsets to heartbeat signals, and whether it can occur across vigilance states. In a series of experiments in healthy volunteers, we found neural and cardiac evidence of auditory prediction during heartbeat-based auditory regularities in wakefulness and N2 sleep. This process could represent an important mechanism for detecting unexpected stimuli in the environment even in states of limited conscious and attentional resources.

Seminar · Neuroscience · Recording

Interpersonal synchrony of body/brain, Solo & Team Flow

Shinsuke Shimojo
California Institute of Technology
Jan 27, 2022

Flow is defined as an altered state of consciousness with excessive attention and an enormous sense of pleasure when engaged in a challenging task, first postulated by the late psychologist M. Csikszentmihalyi. The main focus of this talk will be “Team Flow,” but there were two lines of previous studies in our laboratory as its background. First is inter-body and inter-brain coordination/synchrony between individuals. Considering various rhythmic echoing/synchronization phenomena in animal behavior, it could be regarded as the biological, sub-symbolic and implicit origin of social interactions. The second line of precursor research is on the state of Solo Flow in game playing. We employed attenuation of the AEP (Auditory Evoked Potential) to task-irrelevant sound probes as an objective neural indicator of such a Flow status, and found that: 1) a mutual link between the ACC & the TP is critical, and 2) overall, top-down influence is enhanced while bottom-up causality is attenuated. With these as the background, I will present our latest study of Team Flow in game playing. We found that: 3) the neural correlates of Team Flow are distinctly different from those of Solo Flow and of the non-flow social state, 4) the left medial temporal cortex seems to form an integrative node for Team Flow, receiving input related to the Solo Flow state from the right PFC and input related to the social state from the right IFC, and 5) intra-brain (dis)similarity of brain activity predicts (dis)similarity of skills/cognition as well as affinity for inter-brain coherence.

Seminar · Neuroscience

Hearing in an acoustically varied world

Kerry Walker
University of Oxford
Jan 24, 2022

In order for animals to thrive in their complex environments, their sensory systems must form representations of objects that are invariant to changes in some dimensions of their physical cues. For example, we can recognize a friend’s speech in a forest, a small office, and a cathedral, even though the sound reaching our ears will be very different in these three environments. I will discuss our recent experiments into how neurons in auditory cortex can form stable representations of sounds in this acoustically varied world. We began by using a normative computational model of hearing to examine how the brain may recognize a sound source across rooms with different levels of reverberation. The model predicted that reverberations can be removed from the original sound by delaying the inhibitory component of spectrotemporal receptive fields in the presence of stronger reverberation. Our electrophysiological recordings then confirmed that neurons in ferret auditory cortex apply this algorithm to adapt to different room sizes. Our results demonstrate that this neural process is dynamic and adaptive. These studies provide new insights into how we can recognize auditory objects even in highly reverberant environments, and direct further research questions about how reverb adaptation is implemented in the cortical circuit.
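
The model's key prediction can be caricatured in a few lines: treat a channel's temporal receptive field as an excitatory tap plus a delayed inhibitory tap, with the inhibitory delay growing with the room's reverberation time. A toy sketch (made-up kernel and numbers, not the published normative model):

```python
import numpy as np

def rf_kernel(delay, alpha=0.7):
    """Temporal receptive field: excitation now, inhibition `delay` frames
    later. Longer delays suit more reverberant rooms."""
    k = np.zeros(delay + 1)
    k[0], k[-1] = 1.0, -alpha
    return k

# One cochleagram channel: a click at frame 20 smeared by a reverberant tail.
frames = 200
dry = np.zeros(frames); dry[20] = 1.0
room = np.exp(-np.arange(frames) / 30.0)          # envelope of the room's tail
wet = np.convolve(dry, room)[:frames]

cleaned = np.convolve(wet, rf_kernel(delay=10))[:frames]
cleaned = np.clip(cleaned, 0.0, None)             # keep a firing-rate-like output

print(wet[30:50].round(2))       # slowly decaying reverberant tail
print(cleaned[30:50].round(2))   # tail largely cancelled, onset preserved
```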

Seminar · Neuroscience

Neural oscillatory models of auditory-motor interactions

Johanna Rimmele
Max Planck Institute for Empirical Aesthetics, Frankfurt am Main
Jan 16, 2022

Seminar · Neuroscience · Recording

Decoding sounds in early visual cortex of sighted and blind individuals

Petra Vetter
University of Fribourg, Switzerland
Dec 8, 2021

Seminar · Neuroscience · Recording

How does seeing help listening? Audiovisual integration in Auditory Cortex

Jennifer Bizley
University College London
Dec 1, 2021

Multisensory responses are ubiquitous in so-called unisensory cortex. However, despite their prevalence, we have very little understanding of what, if anything, they contribute to perception. In this talk I will focus on audio-visual integration in auditory cortex. Anatomical tracing studies highlight visual cortex as one source of visual input to auditory cortex. Using cortical cooling we test the hypothesis that these inputs support audiovisual integration in ferret auditory cortex. Behavioural studies in humans support the idea that visual stimuli can help listeners to parse an auditory scene. This effect is paralleled in single units in auditory cortex, where responses to a sound mixture can be determined by the timing of a visual stimulus such that sounds that are temporally coherent with a visual stimulus are preferentially represented. Our recent data therefore support the idea that one role for the early integration of auditory and visual signals in auditory cortex is to support auditory scene analysis, and that visual cortex plays a key role in this process.

Seminar · Neuroscience · Recording

Space and its computational challenges

Jennifer Groh
Duke University
Nov 17, 2021

How our senses work both separately and together involves rich computational problems. I will discuss the spatial and representational problems faced by the visual and auditory system, focusing on two issues. 1. How does the brain correct for discrepancies in the visual and auditory spatial reference frames? I will describe our recent discovery of a novel type of otoacoustic emission, the eye movement related eardrum oscillation, or EMREO (Gruters et al, PNAS 2018). 2. How does the brain encode more than one stimulus at a time? I will discuss evidence for neural time-division multiplexing, in which neural activity fluctuates across time to allow representations to encode more than one simultaneous stimulus (Caruso et al, Nat Comm 2018). These findings all emerged from experimentally testing computational models regarding spatial representations and their transformations within and across sensory pathways. Further, they speak to several general problems confronting modern neuroscience such as the hierarchical organization of brain pathways and limits on perceptual/cognitive processing.

Seminar · Neuroscience

Looking and listening while moving

Tom Freeman
Cardiff University
Nov 16, 2021

In this talk I’ll discuss our recent work on how visual and auditory cues to space are integrated as we move. There are at least 3 reasons why this turns out to be a difficult problem for the brain to solve (and us to understand!). First, vision and hearing start off in different coordinates (eye-centred vs head-centred), so they need a common reference frame in which to communicate. By preventing eye and head movements, this problem has been neatly sidestepped in the literature, yet self-movement is the norm. Second, self-movement creates visual and auditory image motion. Correct interpretation therefore requires some form of compensation. Third, vision and hearing encode motion in very different ways: vision contains dedicated motion detectors sensitive to speed, whereas hearing does not. We propose that some (all?) of these problems could be solved by considering the perception of audiovisual space as the integration of separate body-centred visual and auditory cues, the latter formed by integrating image motion with motor system signals and vestibular information. To test this claim, we use a classic cue integration framework, modified to account for cues that are biased and partially correlated. We find good evidence for the model based on simple judgements of audiovisual motion within a circular array of speakers and LEDs that surround the participant while they execute self-controlled head movement.

Seminar · Neuroscience · Recording

What is the function of auditory cortex when it develops in the absence of acoustic input?

Steve Lomber
McGill University
Oct 13, 2021

Cortical plasticity is the neural mechanism by which the cerebrum adapts itself to its environment, while at the same time making it vulnerable to impoverished sensory or developmental experiences. Like the visual system, auditory development passes through a series of sensitive periods in which circuits and connections are established and then refined by experience. Current research is expanding our understanding of cerebral processing and organization in the deaf. In the congenitally deaf, higher-order areas of "deaf" auditory cortex demonstrate significant crossmodal plasticity with neurons responding to visual and somatosensory stimuli. This crucial cerebral function results in compensatory plasticity. Not only can the remaining inputs reorganize to substitute for those lost, but this additional circuitry also confers enhanced abilities to the remaining systems. In this presentation we will review our present understanding of the structure and function of “deaf” auditory cortex using psychophysical, electrophysiological, and connectional anatomy approaches and consider how this knowledge informs our expectations of the capabilities of cochlear implants in the developing brain.

Seminar · Neuroscience · Recording

Do you hear what I see: Auditory motion processing in blind individuals

Ione Fine
University of Washington
Oct 6, 2021

Perception of object motion is fundamentally multisensory, yet little is known about similarities and differences in the computations that give rise to our experience across senses. Insight can be provided by examining auditory motion processing in early blind individuals. In those who become blind early in life, the ‘visual’ motion area hMT+ responds to auditory motion. Meanwhile, the planum temporale, associated with auditory motion in sighted individuals, shows reduced selectivity for auditory motion, suggesting competition between cortical areas for functional role. According to the metamodal hypothesis of cross-modal plasticity developed by Pascual-Leone, the recruitment of hMT+ is driven by it being a metamodal structure containing “operators that execute a given function or computation regardless of sensory input modality”. Thus, the metamodal hypothesis predicts that the computations underlying auditory motion processing in early blind individuals should be analogous to visual motion processing in sighted individuals - relying on non-separable spatiotemporal filters. Inconsistent with the metamodal hypothesis, evidence suggests that the computational algorithms underlying auditory motion processing in early blind individuals fail to undergo a qualitative shift as a result of cross-modal plasticity. Auditory motion filters, in both blind and sighted subjects, are separable in space and time, suggesting that the recruitment of hMT+ to extract motion information from auditory input includes a significant modification of its normal computational operations.
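
The separable/non-separable distinction that this test turns on is concrete: a separable spatiotemporal filter factors into a spatial profile times a temporal profile, while a direction-selective filter is oriented in space-time and does not factor. A toy sketch using SVD rank as the diagnostic (hypothetical filters, not the ones fitted to listeners):

```python
import numpy as np

s = np.linspace(-1, 1, 64)[None, :]     # space (e.g., azimuth)
t = np.linspace(0, 1, 64)[:, None]      # time
env = np.exp(-s**2 / 0.2) * np.exp(-(t - 0.5)**2 / 0.05)

# Separable filter: (temporal profile) x (spatial profile).
f_sep = env * np.cos(6 * np.pi * s) * np.cos(8 * np.pi * t)

# Non-separable filter: oriented in space-time, hence direction selective.
f_dir = env * np.cos(6 * np.pi * s - 8 * np.pi * t)

for name, f in [("separable", f_sep), ("oriented", f_dir)]:
    sv = np.linalg.svd(f, compute_uv=False)
    print(f"{name}: fraction of energy in first SVD component = "
          f"{sv[0]**2 / np.sum(sv**2):.3f}")
```

The oriented filter spreads its energy over two components, whereas the separable one is exactly rank one; finding near-rank-one auditory motion filters in both groups is what argues against a qualitative, vision-like change in the computation.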

Seminar · Neuroscience · Recording

Encoding and perceiving the texture of sounds: auditory midbrain codes for recognizing and categorizing auditory texture and for listening in noise

Monty Escabi
University of Connecticut
Sep 30, 2021

Natural soundscapes, such as those of a forest, a busy restaurant, or a busy intersection, are generally composed of a cacophony of sounds that the brain needs to interpret either independently or collectively. In certain instances, sounds such as moving cars, sirens, and people talking are perceived in unison and recognized collectively as a single sound (e.g., city noise). In other instances, such as the cocktail party problem, multiple sounds compete for attention so that the surrounding background noise (e.g., speech babble) interferes with the perception of a single sound source (e.g., a single talker). I will describe results from my lab on the perception and neural representation of auditory textures. Textures, such as a babbling brook, restaurant noise, or speech babble, are stationary sounds consisting of multiple independent sound sources that can be quantitatively defined by summary statistics of an auditory model (McDermott & Simoncelli 2011). How and where in the auditory system summary statistics are represented, and which neural codes potentially contribute towards their perception, however, are largely unknown. Using high-density multi-channel recordings from the auditory midbrain of unanesthetized rabbits and complementary perceptual studies on human listeners, I will first describe neural and perceptual strategies for encoding and perceiving auditory textures. I will demonstrate how distinct statistics of sounds, including the sound spectrum and high-order statistics related to the temporal and spectral correlation structure of sounds, contribute to texture perception and are reflected in neural activity. Using decoding methods I will then demonstrate how various low- and high-order neural response statistics can differentially contribute towards a variety of auditory tasks including texture recognition, discrimination, and categorization. Finally, I will show examples from our recent studies of how high-order sound statistics and the accompanying neural activity underlie difficulties in recognizing speech in background noise.
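
As a pointer for the McDermott & Simoncelli (2011) reference: texture statistics are summary statistics of band envelopes. The sketch below computes a heavily reduced subset (per-band envelope moments and cross-band envelope correlations), assuming a plain Butterworth filter bank in place of a cochlear model:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def texture_stats(x, fs, n_bands=8, fmin=100.0, fmax=8000.0):
    """Tiny subset of texture summary statistics: per-band envelope mean,
    contrast (CV), skewness, plus cross-band envelope correlations."""
    edges = np.geomspace(fmin, fmax, n_bands + 1)
    envs = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        envs.append(np.abs(hilbert(sosfiltfilt(sos, x))))   # band envelope
    E = np.array(envs)
    mean = E.mean(axis=1)
    cv = E.std(axis=1) / mean
    skew = ((E - mean[:, None])**3).mean(axis=1) / E.std(axis=1)**3
    corr = np.corrcoef(E)[np.triu_indices(n_bands, k=1)]
    return np.concatenate([mean, cv, skew, corr])

# Toy usage: a fixed-length statistic vector for 2 s of noise "texture".
fs = 32000
x = np.random.default_rng(1).normal(size=2 * fs)
print(texture_stats(x, fs).shape)   # (52,) summary-statistic vector
```

Two textures are then "the same" to such a model whenever their statistic vectors match, regardless of the exact waveforms.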

Seminar · Neuroscience · Recording

Expectation of self-generated sounds drives predictive processing in mouse auditory cortex

Nick Audette
Schneider lab, New York University
Sep 21, 2021

Sensory stimuli are often predictable consequences of one’s actions, and behavior exerts a correspondingly strong influence over sensory responses in the brain. Closed-loop experiments with the ability to control the sensory outcomes of specific animal behaviors have revealed that neural responses to self-generated sounds are suppressed in the auditory cortex, suggesting a role for prediction in local sensory processing. However, it is unclear whether this phenomenon derives from a precise movement-based prediction or how it affects the neural representation of incoming stimuli. We address these questions by designing a behavioral paradigm where mice learn to expect the predictable acoustic consequences of a simple forelimb movement. Neuronal recordings from auditory cortex revealed suppression of neural responses that was strongest for the expected tone and specific to the time of the sound-associated movement. Predictive suppression in the auditory cortex was layer-specific, preceded by the arrival of movement information, and unaffected by behavioral relevance or reward association. These findings illustrate that expectation, learned through motor-sensory experience, drives layer-specific predictive processing in the mouse auditory cortex.

Seminar · Neuroscience

Understanding Perceptual Priors with Massive Online Experiments

Nori Jacoby
Max Planck Institute for Empirical Aesthetics
Jul 13, 2021

One of the most important questions in psychology and neuroscience is understanding how the outside world maps to internal representations. Classical psychophysics approaches to this problem have a number of limitations: they mostly study low-dimensional perceptual spaces, and are constrained in the number and diversity of participants and experiments. As ecologically valid perception is rich, high dimensional, contextual, and culturally dependent, these impediments severely bias our understanding of perceptual representations. Recent technological advances (the emergence of so-called “Virtual Labs”) can significantly contribute toward overcoming these barriers. Here I present a number of specific strategies that my group has developed in order to probe representations across a number of dimensions. 1) Massive online experiments can significantly increase the number of participants and experiments that can be carried out in a single study, while also significantly diversifying the participant pool. We have developed a platform, PsyNet, that enables “experiments as code,” whereby the orchestration of computer servers, recruiting, compensation of participants, and data management is fully automated and every experiment can be fully replicated with one command line. I will demonstrate how PsyNet allows us to recruit thousands of participants for each study with a large number of control experimental conditions, significantly increasing our understanding of auditory perception. 2) Virtual lab methods also enable us to run experiments that are nearly impossible in a traditional lab setting. I will demonstrate our development of adaptive sampling, a set of behavioural methods that combine machine-learning sampling techniques (Markov chain Monte Carlo) with human interactions and allow us to create high-dimensional maps of perceptual representations with unprecedented resolution. 3) Finally, I will demonstrate how the aforementioned methods can be applied to the study of perceptual priors in both audition and vision, with a focus on our work in cross-cultural research, which studies how perceptual priors are influenced by experience and culture in diverse samples of participants from around the world.
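
A flavor of the adaptive-sampling idea (in the spirit of Gibbs Sampling with People) can be given in a few lines if the human trial is simulated. In the real paradigm, the simulated_participant step below is an actual participant adjusting one stimulus dimension until it best matches their internal representation; the function name and all numbers here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulated_participant(stimulus, dim, hidden_prior_mean, noise=0.3):
    """Stand-in for one human trial: the participant sets one dimension
    of the stimulus to whatever best matches their internal prior."""
    s = stimulus.copy()
    s[dim] = hidden_prior_mean[dim] + noise * rng.normal()
    return s

# Gibbs-style chain over a 3-D stimulus space.
prior_mean = np.array([0.7, -0.2, 1.5])   # the perceptual prior we try to map
x = rng.normal(size=3)                    # arbitrary starting stimulus
samples = []
for trial in range(3000):
    dim = trial % 3                       # cycle through stimulus dimensions
    x = simulated_participant(x, dim, prior_mean)
    samples.append(x.copy())

print(np.mean(samples[500:], axis=0).round(2))   # ~ prior_mean after burn-in
```

After burn-in, the chain's samples concentrate around the hidden prior, which is how high-dimensional perceptual priors can be mapped trial by trial.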

SeminarOpen SourceRecording

Feeding Experimentation Device version 3 (FED3)

Lex Kravitz
Washington University
Jun 3, 2021

FED3 is a device for behavioral training of mice in vivarium home-cages. Mice interact with FED3 through two nose-pokes, and FED3 responds with visual stimuli, auditory stimuli, and by dispensing pellets. Because it sits in the home-cage, FED3 can be used for around-the-clock training of mice over several weeks. FED3 is open-source and can be built by users for ~10-20x less than commercial solutions for training mice. The control code is also open-source and was designed to be easily modified by users.
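
As a sense of what easily modifiable control code looks like, here is a minimal Python sketch of a fixed-ratio-1 schedule of the kind such a device might run. This is illustrative pseudologic only: the actual FED3 firmware is an Arduino library, and the class and method names below are hypothetical.

```python
import time
import random

class HomeCageTrainer:
    """Toy model of a FED3-style device: two nose-pokes, one active.
    A poke on the active side dispenses a pellet (fixed-ratio 1);
    every event is time-stamped for later analysis."""

    def __init__(self, active_side="left"):
        self.active_side = active_side
        self.events = []

    def on_poke(self, side):
        rewarded = (side == self.active_side)
        self.events.append({"t": time.time(), "side": side,
                            "rewarded": rewarded})
        if rewarded:
            self.dispense_pellet()

    def dispense_pellet(self):
        # On real hardware this would step the pellet-disk motor and
        # present the feedback stimuli; here we just log the event.
        print("pellet dispensed")

# Simulate a stretch of unsupervised home-cage behavior.
trainer = HomeCageTrainer()
for _ in range(8):
    trainer.on_poke(random.choice(["left", "right"]))
```

Swapping the reward rule inside on_poke (a progressive ratio, say, or reversal of the active side) is the kind of one-function change an open-source design invites.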

SeminarNeuroscience

Temporal processing in the auditory thalamocortical system

Tania R. Barkat
University of Basel, Switzerland
May 30, 2021
SeminarNeuroscience

Learning to perceive with new sensory signals

Marko Nardini
Durham University
May 18, 2021

I will begin by describing recent research taking a new, model-based approach to perceptual development. This approach uncovers fundamental changes in information processing underlying the protracted development of perception, action, and decision-making in childhood. For example, integration of multiple sensory estimates via reliability-weighted averaging – widely used by adults to improve perception – is often not seen until surprisingly late into childhood, as assessed by both behaviour and neural representations. This approach forms the basis for a newer question: the scope for the nervous system to deploy useful computations (e.g. reliability-weighted averaging) to optimise perception and action using newly-learned sensory signals provided by technology. Our initial model system is augmenting visual depth perception with devices translating distance into auditory or vibro-tactile signals. This problem has immediate applications to people with partial vision loss, but the broader question concerns our scope to use technology to tune in to any signal not available to our native biological receptors. I will describe initial progress on this problem, and our approach to operationalising what it might mean to adopt a new signal comparably to a native sense. This will include testing for its integration (weighted averaging) alongside the native senses, assessing the level at which this integration happens in the brain, and measuring the degree of ‘automaticity’ with which new signals are used, compared with native perception.
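
For readers unfamiliar with reliability-weighted averaging, the ideal-observer computation the talk refers to is short enough to state in code. The sketch below assumes independent Gaussian cues; the distance values in the example are invented for illustration.

```python
import numpy as np

def reliability_weighted_average(estimates, variances):
    """Maximum-likelihood fusion of independent Gaussian cues: each
    estimate is weighted by its reliability (inverse variance). The
    fused variance is never larger than the best single cue's, which
    is the behavioural signature tested for in cue-integration studies."""
    estimates = np.asarray(estimates, dtype=float)
    reliabilities = 1.0 / np.asarray(variances, dtype=float)
    fused_var = 1.0 / reliabilities.sum()
    fused_estimate = fused_var * (reliabilities * estimates).sum()
    return fused_estimate, fused_var

# Hypothetical numbers: vision estimates an obstacle at 2.0 m (variance
# 0.04); an auditory distance-substitution device says 2.3 m (variance 0.16).
est, var = reliability_weighted_average([2.0, 2.3], [0.04, 0.16])
print(f"fused: {est:.2f} m, variance {var:.3f} (vision alone: 0.040)")
```

Testing whether people's judgments with a new device show this variance reduction alongside the native senses is one way to operationalise "adopting a new signal comparably to a native sense".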

SeminarNeuroscienceRecording

Direction selectivity in hearing: monaural phase sensitivity in octopus neurons

Philip Joris
KU Leuven
May 16, 2021

The processing of temporal sound features is fundamental to hearing, and the auditory system displays a plethora of specializations, at many levels, to enable such processing. Octopus neurons are the most extreme temporally specialized cells in the auditory system (and perhaps the entire brain), which makes them intriguing but also difficult to study. Notwithstanding the scant physiological data, these neurons have been a favorite cell type of modeling studies, which have proposed that octopus cells play critical roles in pitch and speech perception. We used a range of in vivo recording and labeling methods to examine the hypothesis that tonotopic ordering of cochlear afferents combines with dendritic delays to compensate for cochlear delay, which would explain the highly entrained responses of octopus cells to sound transients. Unexpectedly, the experiments revealed that these neurons show marked selectivity for the direction of fast frequency glides, which is tied in a surprising way to intrinsic membrane properties and subthreshold events. The data suggest that octopus cells perform temporal comparisons across frequency and may play a role in auditory scene analysis.
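
The compensation hypothesis tested here has a simple arithmetic core, sketched below under toy assumptions: the cochlear traveling wave reaches low-frequency (apical) regions later, and if the earlier-arriving high-frequency afferents contact the distal dendrite, their extra dendritic conduction time can cancel their head start, so a click drives near-simultaneous arrival at the soma. All numbers are invented; this is not the authors' model.

```python
import numpy as np

# Characteristic frequencies of a bank of auditory-nerve afferents (kHz).
cf = np.logspace(np.log10(0.5), np.log10(8.0), 10)

# Toy cochlear delay: the traveling wave reaches low-CF (apical)
# regions later, so delay falls with CF (values invented).
cochlear_delay_ms = 2.0 / cf

# Toy dendritic delay: assume high-CF afferents synapse distally and
# therefore incur a longer dendritic conduction time, chosen here to
# mirror the cochlear delay profile exactly.
dendritic_delay_ms = cochlear_delay_ms.max() - cochlear_delay_ms

total = cochlear_delay_ms + dendritic_delay_ms
print(f"arrival spread without dendritic delays: {np.ptp(cochlear_delay_ms):.2f} ms")
print(f"arrival spread with compensation:        {np.ptp(total):.2f} ms")
```

When the two delay profiles mirror each other, the spread of arrival times collapses to zero, which is what would make a broadband transient an ideal trigger for a coincidence-detecting onset cell.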

SeminarNeuroscienceRecording

Networks for multi-sensory attention and working memory

Barbara Shinn-Cunningham
Carnegie Mellon University
May 12, 2021

Converging evidence from fMRI and EEG shows that auditory spatial attention engages the same fronto-parietal network associated with visuo-spatial attention. This network is distinct from an auditory-biased processing network that includes other frontal regions; this second network can be recruited when observers extract rhythmic information from visual inputs. We recently used a dual-task paradigm to examine whether this "division of labor" between a visuo-spatial network and an auditory-rhythmic network can be observed in a working memory paradigm. We varied the sensory modality (visual vs. auditory) and information domain (spatial vs. rhythmic) that observers had to store in working memory, while they also performed an intervening task. Behavior, pupillometry, and EEG results show a complex interaction across the working memory and intervening tasks, consistent with two cognitive control networks managing auditory and visual inputs based on the kind of information being processed.

SeminarNeuroscience

Dynamics of the mouse auditory cortex and the perception of sound

Simon Rumpel
Johannes Gutenberg University Mainz
May 9, 2021
SeminarNeuroscienceRecording

Error correction and reliability timescale in converging cortical networks

Eran Stark
Tel Aviv University
Apr 28, 2021

Rapidly changing inputs such as visual scenes and auditory landscapes are transmitted over several synaptic interfaces and perceived with little loss of detail, yet individual neurons are typically "noisy" and cortico-cortical connections are typically "weak". To understand how information embodied in spike trains is transmitted in a near-lossless manner, we focus on a single synaptic interface: between pyramidal cells and putative interneurons. Injecting arbitrary white-noise patterns intra-cortically as photocurrents in freely-moving mice, we find that directly-activated cells exhibit spike-timing precision of several milliseconds, whereas post-synaptic, indirectly-activated cells exhibit higher precision. Across multiple presentations of identical messages, the reliability of directly-activated cells peaks at a timescale of dozens of milliseconds, whereas indirectly-activated cells exhibit an order-of-magnitude faster timescale. Using data-driven modelling, we find that this error correction is consistent with non-linear amplification of coincident spikes.
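
A toy simulation conveys the flavor of coincidence-based error correction in general (not the authors' specific model). Below, many noisy inputs each relay the same message spike with independent timing jitter, and a downstream unit fires only once several of them coincide within a short window; with these invented parameters, the readout's timing spread comes out well below any single input's.

```python
import numpy as np

rng = np.random.default_rng(1)

def coincidence_readout(n_trials=500, n_inputs=20, jitter_ms=5.0,
                        window_ms=5.0, threshold=5):
    """Each trial, a message spike at t=0 is relayed by n_inputs noisy
    cells (independent Gaussian jitter). The readout fires at the moment
    `threshold` input spikes have landed within one `window_ms` window,
    emulating non-linear amplification of coincident spikes."""
    fire_times = []
    for _ in range(n_trials):
        arrivals = np.sort(rng.normal(0.0, jitter_ms, n_inputs))
        for i in range(n_inputs - threshold + 1):
            if arrivals[i + threshold - 1] - arrivals[i] <= window_ms:
                fire_times.append(arrivals[i + threshold - 1])
                break
    return np.array(fire_times)

fire_times = coincidence_readout()
print("single-input jitter: 5.00 ms")
print(f"readout jitter:      {fire_times.std():.2f} ms "
      f"(fired on {len(fire_times)}/500 trials)")
```

The readout carries a small systematic latency, but its trial-to-trial spread is much tighter than the inputs', which is the sense in which thresholded coincidence detection corrects timing errors.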

SeminarNeuroscience

Reflections of action, expectation, and experience in mouse auditory cortex

David Schneider
New York University
Apr 11, 2021
SeminarNeuroscience

Cortical and sub-cortical circuits for auditory perception and learning

Maria Geffen
University of Pennsylvania
Mar 21, 2021
SeminarNeuroscienceRecording

A Cortical Circuit for Audio-Visual Predictions

Aleena Garner
Keller lab, FMI
Mar 9, 2021

Teamwork makes sensory streams work: our senses work together, learn from each other, and stand in for one another, the result of which is perception and understanding. Learned associations between stimuli in different sensory modalities can shape the way we perceive these stimuli (McGurk and MacDonald, 1976). During audio-visual associative learning, auditory cortex is thought to underlie multi-modal plasticity in visual cortex (McIntosh et al., 1998; Mishra et al., 2007; Zangenehpour and Zatorre, 2010). However, it is not well understood how processing in visual cortex is altered by an auditory stimulus that predicts a visual stimulus, nor what mechanisms mediate such experience-dependent audio-visual associations in sensory cortex. Here we describe a neural mechanism by which an auditory input can shape visual representations of behaviorally relevant stimuli through direct interactions between auditory and visual cortices. We show that associating an auditory stimulus with a visual stimulus in a behaviorally relevant context leads to an experience-dependent suppression of visual responses in primary visual cortex (V1). Auditory cortex axons carry a mixture of auditory and retinotopically-matched visual input to V1, and optogenetic stimulation of these axons selectively suppresses V1 neurons responsive to the associated visual stimulus after, but not before, learning. Our results suggest that cross-modal associations can be stored in long-range cortical connections and that, with learning, these cross-modal connections function to suppress the responses to predictable input.

SeminarNeuroscienceRecording

Distinct forms of cortical plasticity underlie difficulties to reliably detect sounds in noisy environments; Acoustic context modulates natural sound discrimination in auditory cortex through frequency-specific adaptation

Dr. Jennifer Resnik; Dr. Julio Hechavarria
Ben-Gurion University; Goethe University
Feb 22, 2021
ePoster

Exploring the neuroprotective effect of auditory enhanced slow-wave sleep in a mouse model of Alzheimer’s disease

Inês Dias, Irena Barbaric, Vera Gysin, Christian Baumann, Sedef Kollarik, Daniela Noain

FENS Forum 2024

ePoster

Auditory cortex represents an abstract sensorimotor rule

COSYNE 2022

ePoster

Clear evidence in favor of adaptation and against temporally specific predictive suppression in monkey primary auditory cortex

COSYNE 2022

ePoster

Experience early in auditory conditioning impacts across-animal variability in neural tuning

COSYNE 2022

ePoster

Many, but not all, deep neural network audio models predict auditory cortex responses and exhibit hierarchical layer-region correspondence

COSYNE 2022

ePoster

Mechanisms of plasticity for pup call sounds in the maternal auditory cortex

COSYNE 2022

ePoster

Metastable circuit dynamics explains optimal coding of auditory stimuli at moderate arousals

COSYNE 2022

ePoster

Predictive coding of global sequence violation in the mouse auditory cortex

COSYNE 2022

ePoster

Simultaneous mnemonic and predictive representations in the auditory cortex

COSYNE 2022

ePoster

Synaptic and mesoscale plasticity in auditory cortex of rats with cochlear implants

COSYNE 2022

ePoster

Transformation of population representations of sounds throughout the auditory system

COSYNE 2022

ePoster

Clustered representation of vocalizations in the auditory midbrain of the echolocating bat

Jennifer Lawlor, Melville Wohlgemuth, Cynthia F. Moss, Kishore Kuchibhotla

COSYNE 2023

ePoster

Flexible boolean computation by auditory neurons

Grace Ang & Andriy Kozlov

COSYNE 2023

ePoster

Neural mechanisms of stream formation during active listening in the ferret auditory cortex

Jules Lebert, Carla Griffiths, Joseph Sollini, Jennifer Bizley

COSYNE 2023

ePoster

Normative modeling of auditory memory for natural sounds

Bryan Medina & Josh McDermott

COSYNE 2023

ePoster

Predictive dynamics improve noise robustness in a deep network model of the human auditory system

Ching Fang, Erica Shook, Justin Buck, Guillermo Horga

COSYNE 2023

ePoster

Towards encoding models for auditory cortical implants

Antonin Verdier & Brice Bathellier

COSYNE 2023

ePoster

Understanding Auditory Cortex with Deep Neural Networks

Bilal Ahmed, Brian Malone, Joseph Makin

COSYNE 2023

ePoster

Auditory cortical manifold for natural soundscapes enables neurally aligned category decoding

Satyabrata Parida, Jereme Wingert, Jonah Stickney, Samuel Norman-Haignere, Stephen David

COSYNE 2025

ePoster

Convolutional neural networks describe encoding subspaces of local circuits in auditory cortex

Stephen David, Samuel Norman-Haignere, Jereme Wingert, Satyabrata Parida

COSYNE 2025

ePoster

Envelope representations substantially enhance the predictive power of spectrotemporal receptive models in the human auditory cortex

Guoyang Liao, Samuel Norman-Haignere, Dana Boebinger, Christopher Garcia, Kirill Nourski, Matthew Howard, Thomas Wychowski, Webster Pilcher

COSYNE 2025

ePoster

Frontal and sensory cortical dynamics during flexible auditory behavior

Tiange Hou, Blake Sidleck, Danyall Saeed, Guoning Yu, Michele Insanally

COSYNE 2025

ePoster

Instinct vs Insight: Neural Competition Between Prefrontal and Auditory Cortex Constrains Sound Strategy Learning

Robert Liu, Kelvin Wong, Chengcheng Yang, Lin Zhou, Yike Shi, Maya Costello, Kai Lu

COSYNE 2025

ePoster

Place and reward coding of a remote auditory signal in the hippocampus

Shiladitya Laskar, Shai Abramson, Ohad Rechnitz, Genela Morris, Dori Derdikman

COSYNE 2025

ePoster

Time-yoked integration throughout human auditory cortex

Samuel Norman-Haignere, Menoua Keshishian, Orrin Devinsky, Werner Doyle, Guy McKhann, Catherine Schevon, Adeen Flinker, Nima Mesgarani

COSYNE 2025

ePoster

Activity patterns of corticocollicular neurons during auditory learning

Kira Andrea, Jan J. Hirtz

FENS Forum 2024

ePoster

The alanine-serine-cysteine-1 transporter (Asc1) provides glycine at fast inhibitory auditory brainstem synapses

Lina Hofmann, Eckhard Friauf

FENS Forum 2024

ePoster

Ambient sound stimulation regulates radial growth of myelin and tunes axonal conduction velocity in the auditory pathway

Mihai Stancu, Hilde Wohlfrom, Martin Heß, Benedikt Grothe, Christian Leibold, Conny Kopp-Scheinpflug

FENS Forum 2024

ePoster

Auditory stimuli reduce fear responses in a safety learning protocol independent of a possible learning process

Elena Mombelli, Denys Osypenko, Shriya Palchaudhuri, Christos Sourmpis, Johanni Brea, Olexiy Kochubey, Ralf Schneggenburger

FENS Forum 2024

ePoster

How the auditory brainstem of bats detects regularity deviations in a naturalistic stimulation paradigm

Johannes Wetekam, Julio Hechavarría, Luciana López-Jury, Eugenia González-Palomares, Manfred Kössl

FENS Forum 2024

ePoster

Auditory cortex activity during sound memory retention in an auditory working memory task

Elena Kudryavitskaya, Brice Bathellier

FENS Forum 2024

ePoster

Auditory cortex control of vocalization

Wei Tang, Miguel Alejandro Concha-Miranda, Michael Brecht

FENS Forum 2024

ePoster

Bootstrapping the auditory space map via an innate circuit

Yang Chu, Wayne Luk, Dan Goodman

Bernstein Conference 2024