Hearing

Topic spotlight · World Wide

Discover seminars, jobs, and research tagged with hearing across World Wide.
32 curated items: 16 Seminars, 14 ePosters, 2 Positions
Updated 1 day ago
Position

Professors Yale Cohen and Jennifer Groh

University of Pennsylvania
Philadelphia, USA
Dec 5, 2025

Yale Cohen (U. Penn; https://auditoryresearchlaboratory.weebly.com/) and Jennifer Groh (Duke U.; www.duke.edu/~jmgroh) seek a full-time post-doctoral scholar. Our labs study visual, auditory, and multisensory processing in the brain using neurophysiological and computational techniques. We have a newly funded NIH grant to study the contribution of corticofugal connectivity in non-human primate models of auditory perception. The work will take place at the Penn site. This will be a full-time, 12-month renewable appointment. Salary will be commensurate with experience and consistent with NIH NRSA stipends. To apply, send your CV along with contact information for two referees to: compneuro@sas.upenn.edu. For questions, please contact Yale Cohen (ycohen@pennmedicine.upenn.edu). Applications will be considered on a rolling basis, and we anticipate a summer 2022 start date. Penn is an Affirmative Action / Equal Opportunity Employer committed to providing employment opportunity without regard to an individual’s age, color, disability, gender, gender expression, gender identity, genetic information, national origin, race, religion, sex, sexual orientation, or veteran status.

Position

Dr. Stéphane Maison

Massachusetts Eye & Ear – Harvard Medical School
Dec 5, 2025

An NIH-funded postdoctoral position is immediately available in Dr. Stéphane Maison’s laboratory in the Department of Otolaryngology – Head & Neck Surgery at the Massachusetts Eye & Ear – Harvard Medical School. Our research focuses on identifying biomarkers of a wide range of etiologies and their associated disorders, including difficulty hearing and communicating in noisy environments, reduced sound-level tolerance, and tinnitus, using a test battery based on behavioral, electrophysiologic, and psychophysical measures. Salary and benefits are consistent with NIH guidelines and institutional policies, based on the applicant’s experience. Highly motivated candidates who recently graduated with a PhD in biomedical engineering, computational biology, hearing science, neuroscience, or other related fields are welcome to apply. The applicant should have strong programming skills (e.g., MATLAB, Python) and be independent and productive. Experience with human testing is preferred but not required. The fellow will receive an appointment at Massachusetts Eye and Ear and Harvard Medical School. Interested applicants should apply using the following link: https://partners.taleo.net/careersection/mee/jobdetail.ftl?job=3299643&tz=GMT-04%3A00&tzname=America%2FNew_York

Seminar · Neuroscience

SSFN Webinar - Hearing Research

Anders Fridberger
Linköping University
Mar 27, 2025
Seminar · Neuroscience

Exploring the cerebral mechanisms of acoustically-challenging speech comprehension - successes, failures and hope

Alexis Hervais-Adelman
University of Geneva
May 20, 2024

Comprehending speech under acoustically challenging conditions is an everyday task that we can often execute with ease. However, accomplishing this requires the engagement of cognitive resources, such as auditory attention and working memory. The mechanisms that contribute to the robustness of speech comprehension are of substantial interest in the context of mild to moderate hearing impairment, in which affected individuals typically report specific difficulties in understanding speech in background noise. Although hearing aids can help to mitigate this, they do not represent a universal solution; thus, finding alternative interventions is necessary. Given that age-related hearing loss (“presbycusis”) is inevitable, developing new approaches is all the more important in the context of aging populations. Moreover, untreated hearing loss in middle age has been identified as the most significant potentially modifiable predictor of dementia in later life. I will present research that has used a multi-methodological approach (fMRI, EEG, MEG and non-invasive brain stimulation) to try to elucidate the mechanisms that comprise the cognitive “last mile” of acoustically challenging speech comprehension and to find ways to enhance them.

Seminar · Neuroscience

Gene therapy for hearing loss: where do we go from ear?

Christopher Cederroth
HNO at University Hospital Tübingen
Nov 1, 2023
Seminar · Neuroscience · Recording

Rodents to Investigate the Neural Basis of Audiovisual Temporal Processing and Perception

Ashley Schormans
BrainsCAN, Western University, Canada.
Sep 26, 2023

To form a coherent perception of the world around us, we are constantly processing and integrating sensory information from multiple modalities. In fact, when auditory and visual stimuli occur within ~100 ms of each other, individuals tend to perceive the stimuli as a single event, even though they occurred separately. In recent years, our lab, and others, have developed rat models of audiovisual temporal perception using behavioural tasks such as temporal order judgments (TOJs) and synchrony judgments (SJs). While these rodent models demonstrate metrics that are consistent with humans (e.g., perceived simultaneity, temporal acuity), we have sought to confirm whether rodents demonstrate the hallmarks of audiovisual temporal perception, such as predictable shifts in their perception based on experience and sensitivity to alterations in neurochemistry. Ultimately, our findings indicate that rats serve as an excellent model to study the neural mechanisms underlying audiovisual temporal perception, which to date remain relatively unknown. Using our validated translational audiovisual behavioural tasks, in combination with optogenetics, neuropharmacology and in vivo electrophysiology, we aim to uncover the mechanisms by which inhibitory neurotransmission and top-down circuits finely control one’s perception. This research will significantly advance our understanding of the neuronal circuitry underlying audiovisual temporal perception, and will be the first to establish the role of interneurons in regulating the synchronized neural activity that is thought to contribute to the precise binding of audiovisual stimuli.

Seminar · Neuroscience

How the brain uses experience to construct its multisensory capabilities

Barry E. Stein
Wake Forest School of Medicine
Apr 19, 2023

This talk will not be recorded

Seminar · Neuroscience · Recording

Pitch and Time Interact in Auditory Perception

Jesse Pazdera
McMaster University, Canada
Oct 25, 2022

Research into pitch perception and time perception has typically treated the two as independent processes. However, previous studies of music and speech perception have suggested that pitch and timing information may be processed in an integrated manner, such that the pitch of an auditory stimulus can influence a person’s perception, expectation, and memory of its duration and tempo. Typically, higher-pitched sounds are perceived as faster and longer in duration than lower-pitched sounds with identical timing. We conducted a series of experiments to better understand the limits of this pitch-time integrality. Across several experiments, we tested whether the higher-equals-faster illusion generalizes across the broader frequency range of human hearing by asking participants to compare the tempo of a repeating tone played in one of six octaves to a metronomic standard. When participants heard tones from all six octaves, we consistently found an inverted U-shaped effect of the tone’s pitch height, such that perceived tempo peaked between A4 (440 Hz) and A5 (880 Hz) and decreased at lower and higher octaves. However, we found that the decrease in perceived tempo at extremely high octaves could be abolished by exposing participants to high-pitched tones only, suggesting that pitch-induced timing biases are context sensitive. We additionally tested how the timing of an auditory stimulus influences the perception of its pitch, using a pitch discrimination task in which probe tones occurred early, late, or on the beat within a rhythmic context. Probe timing strongly biased participants to rate later tones as lower in pitch than earlier tones. Together, these results suggest that pitch and time exert a bidirectional influence on one another, providing evidence for integrated processing of pitch and timing information in auditory perception. Identifying the mechanisms behind this pitch-time interaction will be critical for integrating current models of pitch and tempo processing.

Seminar · Neuroscience · Recording

Designing the BEARS (Both Ears) Virtual Reality Training Package to Improve Spatial Hearing in Young People with Bilateral Cochlear Implant

Deborah Vickers
Clinical Neurosciences
Oct 10, 2022

Results: The main areas modified based on participatory feedback were the variety of immersive scenarios, to cover a range of ages and interests; the number of levels of complexity, to ensure small improvements were measured; the feedback and reward schemes, to ensure positive reinforcement; and specific provision for participants with balance issues, who had difficulties when using head-mounted displays. We have also added login options for other members of the family and, based on patient feedback, improved the accompanying reward schemes. The effectiveness of the finalised BEARS suite will be evaluated in a large-scale clinical trial.

Conclusions: Through participatory design we have developed a training package (BEARS) for young people with bilateral cochlear implants. The training games are appropriate for use by the study population and ultimately should lead to patients taking control of their own management, reducing the reliance upon outpatient-based rehabilitation programmes. Virtual reality training provides a more relevant and engaging approach to rehabilitation for young people.

Seminar · Neuroscience

Hearing in an acoustically varied world

Kerry Walker
University of Oxford
Jan 24, 2022

In order for animals to thrive in their complex environments, their sensory systems must form representations of objects that are invariant to changes in some dimensions of their physical cues. For example, we can recognize a friend’s speech in a forest, a small office, and a cathedral, even though the sound reaching our ears will be very different in these three environments. I will discuss our recent experiments into how neurons in auditory cortex can form stable representations of sounds in this acoustically varied world. We began by using a normative computational model of hearing to examine how the brain may recognize a sound source across rooms with different levels of reverberation. The model predicted that reverberations can be removed from the original sound by delaying the inhibitory component of spectrotemporal receptive fields in the presence of stronger reverberation. Our electrophysiological recordings then confirmed that neurons in ferret auditory cortex apply this algorithm to adapt to different room sizes. Our results demonstrate that this neural process is dynamic and adaptive. These studies provide new insights into how we can recognize auditory objects even in highly reverberant environments, and direct further research questions about how reverb adaptation is implemented in the cortical circuit.

Seminar · Neuroscience

Looking and listening while moving

Tom Freeman
Cardiff University
Nov 16, 2021

In this talk I’ll discuss our recent work on how visual and auditory cues to space are integrated as we move. There are at least 3 reasons why this turns out to be a difficult problem for the brain to solve (and us to understand!). First, vision and hearing start off in different coordinates (eye-centred vs head-centred), so they need a common reference frame in which to communicate. By preventing eye and head movements, this problem has been neatly sidestepped in the literature, yet self-movement is the norm. Second, self-movement creates visual and auditory image motion. Correct interpretation therefore requires some form of compensation. Third, vision and hearing encode motion in very different ways: vision contains dedicated motion detectors sensitive to speed, whereas hearing does not. We propose that some (all?) of these problems could be solved by considering the perception of audiovisual space as the integration of separate body-centred visual and auditory cues, the latter formed by integrating image motion with motor system signals and vestibular information. To test this claim, we use a classic cue integration framework, modified to account for cues that are biased and partially correlated. We find good evidence for the model based on simple judgements of audiovisual motion within a circular array of speakers and LEDs that surround the participant while they execute self-controlled head movement.

Seminar · Psychology

Enhanced perception and cognition in deaf sign language users: EEG and behavioral evidence

Lorna Quandt
Gallaudet University
Aug 18, 2021

In this talk, Dr. Quandt will share results from behavioral and cognitive neuroscience studies from the past few years of her work in the Action & Brain Lab, an EEG lab at Gallaudet University, the world's premier university for deaf and hard-of-hearing students. These results will center upon the question of how extensive knowledge of signed language changes, and in some cases enhances, people's perception and cognition. Evidence for this effect comes from studies of human biological motion using point-light displays, self-report, and studies of action perception. Dr. Quandt will also discuss some of the lab's efforts in designing and testing a virtual reality environment in which users can learn American Sign Language from signing avatars (virtual humans).

Seminar · Neuroscience · Recording

Direction selectivity in hearing: monaural phase sensitivity in octopus neurons

Philip Joris
KU Leuven
May 16, 2021

The processing of temporal sound features is fundamental to hearing, and the auditory system displays a plethora of specializations, at many levels, to enable such processing. Octopus neurons are the most extreme temporally-specialized cells in the auditory (and perhaps entire) brain, which make them intriguing but also difficult to study. Notwithstanding the scant physiological data, these neurons have been a favorite cell type of modeling studies which have proposed that octopus cells have critical roles in pitch and speech perception. We used a range of in vivo recording and labeling methods to examine the hypothesis that tonotopic ordering of cochlear afferents combines with dendritic delays to compensate for cochlear delay - which would explain the highly entrained responses of octopus cells to sound transients. Unexpectedly, the experiments revealed that these neurons have marked selectivity to the direction of fast frequency glides, which is tied in a surprising way to intrinsic membrane properties and subthreshold events. The data suggest that octopus cells have a role in temporal comparisons across frequency and may play a role in auditory scene analysis.

Seminar · Neuroscience · Recording

Untitled Seminar

Marta Andres Miguel
UCL
Sep 22, 2020
Seminar · Neuroscience · Recording

The active modulation of sound and vibration perception

Natasha Mhatre
University of Western Ontario
Jun 16, 2020

The dominant view of perception right now is that information travels from the environment to the sensory system, then to the nervous system, which processes it to generate a percept and behaviour. Ongoing behaviour is thought to occur largely through simple iterations of this process. However, this linear view, where information flows only in one direction and the properties of the environment and the sensory system remain static and unaffected by behaviour, is slowly fading. Many of us are beginning to appreciate that perception is largely active, i.e. that information flows back and forth between the three systems, modulating their respective properties. In other words, in the real world, the environment and sensorimotor loop is pretty much always closed. I study this loop; in particular, I study how the reverse arm of the loop affects sound and vibration perception. I will present two examples of motor modulation of perception at two very different temporal and spatial scales. First, in crickets, I will present data on how high-speed molecular motor activity enhances hearing via the well-studied phenomenon of active amplification. Second, in spiders, I will present data on how body posture, a slow macroscopic feature which can barely be called ‘active’, can nonetheless modulate vibration perception. I hope these results will motivate a conversation about whether ‘active’ perception is an optional feature observed in some sensory systems, or something that is ultimately necessitated by both evolution and physics.

ePoster

The role of temporal coding in everyday hearing: evidence from deep neural networks

COSYNE 2022

ePoster

Advancing optogenetic hearing restoration through cross-modal optimization

Anna Vavakou, Bettina Wolf, Kathrin Kusch, Thomas Mager, Patrick Ruther, Alexander Ecker, Tobias Moser

FENS Forum 2024

ePoster

Age-related hearing loss in older adults and cognition in older adults: Preliminary findings

Yi Ran Wang, Elodie Berthelier, Simon Cormier, Daniel Paromov, Karina Annita, Sven Joubert, François Champoux, Hugo Théoret

FENS Forum 2024

ePoster

Availability of information on artificial intelligence-enhanced hearing aids: A social media analysis

Joanie Ferland, Ariane Blouin, Matthieu J. Guitton, Andréanne Sharp

FENS Forum 2024

ePoster

Cognitive disturbances after hearing loss in adult rats are not accompanied by altered NeuN-, GABA-, and dopamine-expression in the central auditory pathway and prefrontal cortex

Marla Sofie Witte, Mariele Stenzel, Mesbah Alam, Jonas Jelinek, Joachim K. Krauss, Kerstin Schwabe, Marie Johne

FENS Forum 2024

ePoster

Development of the cochlear nucleus depending on the hearing experience of rats

Nicole Rosskothen-Kuhl, Malee Jarmila Zoe Sprenger, Heika Hildebrandt, Susan Arndt, Till Fabian Jakob

FENS Forum 2024

ePoster

Does spatial hearing with bionic ears change with jittered binaural stimuli?

Tim Fleiner, Emily Becker, Susan Arndt, Jan W. Schnupp, Nicole Rosskothen-Kuhl

FENS Forum 2024

ePoster

Evaluation of optogenetic gene therapy for hearing restoration in in vivo rodent models of sensorineural hearing loss

Victoria Hunniford, Maria Zerche, Bettina Wolf, Kathrin Kusch, Thomas Mager, Tobias Moser

FENS Forum 2024

ePoster

A loss of spiral ganglion neurons with an active ATOH1 enhancer alters hearing function

Kateryna Pysanenko, Mitra Tavakoli, Romana Bohuslavova, Josef Syka, Bernd Fritzsch, Gabriela Pavlinkova

FENS Forum 2024

ePoster

Multisession electric stimulation of the auditory cortex prevents cortical aging in an age-related hearing loss Wistar rat model

Inés S. Fernández del Campo, Antonio Fuente Juan, Iván Díaz, Ignacio Plaza, Miguel A. Merchán

FENS Forum 2024

ePoster

Sensitivity to envelope and pulse timing interaural time differences in prosthetic hearing

Shiyi Fang, Fei Peng, Bruno Castellaro, Muhammad Zeeshan, Nicole Rosskothen-Kuhl, Jan Schnupp

FENS Forum 2024

ePoster

Somatosensory cross-modal plasticity in hearing impaired subjects before and after cochlear implantation

Fatima Sofia Avila Cascajares, Boris Suchan, Christiane Völter

FENS Forum 2024

ePoster

Spatial hearing with bionic ears

Sarah Buchholz, Jan W Schnupp, Nicole Rosskothen-Kuhl

FENS Forum 2024