Topic: Neuroscience
Content Overview
41 total items
26 ePosters
15 Seminars

Latest

Seminar · Neuroscience

SSFN Webinar - Hearing Research

Anders Fridberger
Linköping University
Mar 28, 2025
Seminar · Neuroscience

Exploring the cerebral mechanisms of acoustically-challenging speech comprehension - successes, failures and hope

Alexis Hervais-Adelman
University of Geneva
May 21, 2024

Comprehending speech under acoustically challenging conditions is an everyday task that we can often execute with ease. However, accomplishing this requires the engagement of cognitive resources, such as auditory attention and working memory. The mechanisms that contribute to the robustness of speech comprehension are of substantial interest in the context of mild to moderate hearing impairment, in which affected individuals typically report specific difficulties in understanding speech in background noise. Although hearing aids can help to mitigate this, they do not represent a universal solution; finding alternative interventions is therefore necessary. Given that age-related hearing loss (“presbycusis”) is inevitable, developing new approaches is all the more important in the context of aging populations. Moreover, untreated hearing loss in middle age has been identified as the most significant potentially modifiable predictor of dementia in later life. I will present research that has used a multi-methodological approach (fMRI, EEG, MEG and non-invasive brain stimulation) to try to elucidate the mechanisms that comprise the cognitive “last mile” of acoustically challenging speech comprehension, and to find ways to enhance them.

Seminar · Neuroscience

Gene therapy for hearing loss: where do we go from ear?

Christopher Cederroth
Department of Otorhinolaryngology (HNO), University Hospital Tübingen
Nov 2, 2023
Seminar · Neuroscience · Recording

Rodents to Investigate the Neural Basis of Audiovisual Temporal Processing and Perception

Ashley Schormans
BrainsCAN, Western University, Canada.
Sep 27, 2023

To form a coherent perception of the world around us, we are constantly processing and integrating sensory information from multiple modalities. In fact, when auditory and visual stimuli occur within ~100 ms of each other, individuals tend to perceive the stimuli as a single event, even though they occurred separately. In recent years, our lab, and others, have developed rat models of audiovisual temporal perception using behavioural tasks such as temporal order judgments (TOJs) and synchrony judgments (SJs). While these rodent models demonstrate metrics that are consistent with humans (e.g., perceived simultaneity, temporal acuity), we have sought to confirm whether rodents demonstrate the hallmarks of audiovisual temporal perception, such as predictable shifts in their perception based on experience and sensitivity to alterations in neurochemistry. Ultimately, our findings indicate that rats serve as an excellent model to study the neural mechanisms underlying audiovisual temporal perception, which to date remain relatively unknown. Using our validated translational audiovisual behavioural tasks, in combination with optogenetics, neuropharmacology and in vivo electrophysiology, we aim to uncover the mechanisms by which inhibitory neurotransmission and top-down circuits finely control one’s perception. This research will significantly advance our understanding of the neuronal circuitry underlying audiovisual temporal perception, and will be the first to establish the role of interneurons in regulating the synchronized neural activity that is thought to contribute to the precise binding of audiovisual stimuli.

Seminar · Neuroscience

How the brain uses experience to construct its multisensory capabilities

Barry E. Stein
Wake Forest School of Medicine
Apr 20, 2023

This talk will not be recorded

Seminar · Neuroscience · Recording

Pitch and Time Interact in Auditory Perception

Jesse Pazdera
McMaster University, Canada
Oct 26, 2022

Research into pitch perception and time perception has typically treated the two as independent processes. However, previous studies of music and speech perception have suggested that pitch and timing information may be processed in an integrated manner, such that the pitch of an auditory stimulus can influence a person’s perception, expectation, and memory of its duration and tempo. Typically, higher-pitched sounds are perceived as faster and longer in duration than lower-pitched sounds with identical timing. We conducted a series of experiments to better understand the limits of this pitch-time integrality. Across several experiments, we tested whether the higher-equals-faster illusion generalizes across the broader frequency range of human hearing by asking participants to compare the tempo of a repeating tone played in one of six octaves to a metronomic standard. When participants heard tones from all six octaves, we consistently found an inverted U-shaped effect of the tone’s pitch height, such that perceived tempo peaked between A4 (440 Hz) and A5 (880 Hz) and decreased at lower and higher octaves. However, we found that the decrease in perceived tempo at extremely high octaves could be abolished by exposing participants to high-pitched tones only, suggesting that pitch-induced timing biases are context sensitive. We additionally tested how the timing of an auditory stimulus influences the perception of its pitch, using a pitch discrimination task in which probe tones occurred early, late, or on the beat within a rhythmic context. Probe timing strongly biased participants to rate later tones as lower in pitch than earlier tones. Together, these results suggest that pitch and time exert a bidirectional influence on one another, providing evidence for integrated processing of pitch and timing information in auditory perception. 
Identifying the mechanisms behind this pitch-time interaction will be critical for integrating current models of pitch and tempo processing.
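In studies of this kind, a shift in perceived tempo is typically quantified as a point of subjective equality (PSE): the tempo difference at which the comparison sounds equal to the standard. As a minimal illustration (not the authors' actual analysis; the stimulus levels and response proportions below are invented for the example), the PSE can be read off a monotonic psychometric curve by linear interpolation:

```python
import numpy as np

def pse_by_interpolation(levels, p_faster):
    """Point of subjective equality: the stimulus level at which
    'faster' responses cross 50%, by linear interpolation.
    Assumes p_faster is monotonically increasing over levels."""
    idx = np.searchsorted(p_faster, 0.5)      # first level with p >= 0.5
    x0, x1 = levels[idx - 1], levels[idx]
    y0, y1 = p_faster[idx - 1], p_faster[idx]
    return x0 + (0.5 - y0) * (x1 - x0) / (y1 - y0)

# Hypothetical data: tempo difference (%) vs proportion "faster" responses
levels = np.array([-20.0, -10.0, 0.0, 10.0, 20.0])
p_faster = np.array([0.05, 0.20, 0.45, 0.70, 0.95])
pse = pse_by_interpolation(levels, p_faster)
print(pse)  # 2.0: the comparison must be 2% faster to sound equal
```

A positive PSE here would indicate the standard was perceived as faster than its nominal tempo; in practice a fitted psychometric function (e.g. cumulative Gaussian) would be used rather than raw interpolation.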

Seminar · Neuroscience · Recording

Designing the BEARS (Both Ears) Virtual Reality Training Package to Improve Spatial Hearing in Young People with Bilateral Cochlear Implant

Deborah Vickers
Clinical Neurosciences
Oct 11, 2022

Results: The main areas modified based on participatory feedback were the variety of immersive scenarios, to cover a range of ages and interests; the number of levels of complexity, to ensure small improvements could be measured; the feedback and reward schemes, to ensure positive reinforcement; and specific provision for participants with balance issues, who had difficulties when using head-mounted displays. The effectiveness of the finalised BEARS suite will be evaluated in a large-scale clinical trial. We have added additional login options for other members of the family and, based on patient feedback, have improved the accompanying reward schemes. Conclusions: Through participatory design we have developed a training package (BEARS) for young people with bilateral cochlear implants. The training games are appropriate for use by the study population and should ultimately lead to patients taking control of their own management, reducing the reliance upon outpatient-based rehabilitation programmes. Virtual reality training provides a more relevant and engaging approach to rehabilitation for young people.

Seminar · Neuroscience

Hearing in an acoustically varied world

Kerry Walker
University of Oxford
Jan 25, 2022

In order for animals to thrive in their complex environments, their sensory systems must form representations of objects that are invariant to changes in some dimensions of their physical cues. For example, we can recognize a friend’s speech in a forest, a small office, and a cathedral, even though the sound reaching our ears will be very different in these three environments. I will discuss our recent experiments into how neurons in auditory cortex can form stable representations of sounds in this acoustically varied world. We began by using a normative computational model of hearing to examine how the brain may recognize a sound source across rooms with different levels of reverberation. The model predicted that reverberations can be removed from the original sound by delaying the inhibitory component of spectrotemporal receptive fields in the presence of stronger reverberation. Our electrophysiological recordings then confirmed that neurons in ferret auditory cortex apply this algorithm to adapt to different room sizes. Our results demonstrate that this neural process is dynamic and adaptive. These studies provide new insights into how we can recognize auditory objects even in highly reverberant environments, and direct further research questions about how reverb adaptation is implemented in the cortical circuit.
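The model's key prediction, that appropriately delayed inhibition can cancel a reverberant tail, can be caricatured in one dimension. The sketch below is illustrative only and is not the normative model from the talk, which operates on full spectrotemporal receptive fields; here an exponentially decaying envelope stands in for reverberation:

```python
import numpy as np

def delayed_inhibition_filter(signal, delay, gain):
    """Excitation now, inhibition `delay` samples later:
    y[t] = x[t] - gain * x[t - delay].
    For a reverberant tail decaying like r**t, gain = r**delay
    cancels everything after the delay."""
    out = signal.copy()
    out[delay:] -= gain * signal[:-delay]
    return out

# Reverberant envelope: an impulse followed by an exponential tail
r = 0.9                                # per-sample decay of the room's tail
tail = r ** np.arange(50)
cleaned = delayed_inhibition_filter(tail, delay=5, gain=r ** 5)
print(np.max(np.abs(cleaned[5:])))     # ~0: the tail is cancelled
```

Stronger reverberation (slower decay r) calls for a longer delay or larger gain, matching the qualitative prediction that inhibition should be delayed more in more reverberant rooms.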

Seminar · Neuroscience

Looking and listening while moving

Tom Freeman
Cardiff University
Nov 17, 2021

In this talk I’ll discuss our recent work on how visual and auditory cues to space are integrated as we move. There are at least 3 reasons why this turns out to be a difficult problem for the brain to solve (and us to understand!). First, vision and hearing start off in different coordinates (eye-centred vs head-centred), so they need a common reference frame in which to communicate. By preventing eye and head movements, this problem has been neatly sidestepped in the literature, yet self-movement is the norm. Second, self-movement creates visual and auditory image motion. Correct interpretation therefore requires some form of compensation. Third, vision and hearing encode motion in very different ways: vision contains dedicated motion detectors sensitive to speed, whereas hearing does not. We propose that some (all?) of these problems could be solved by considering the perception of audiovisual space as the integration of separate body-centred visual and auditory cues, the latter formed by integrating image motion with motor system signals and vestibular information. To test this claim, we use a classic cue integration framework, modified to account for cues that are biased and partially correlated. We find good evidence for the model based on simple judgements of audiovisual motion within a circular array of speakers and LEDs that surround the participant while they execute self-controlled head movement.
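The classic cue-integration framework the authors start from is maximum-likelihood (reliability-weighted) averaging of independent, unbiased cues; their modification additionally handles biased, partially correlated cues. A minimal sketch of the unmodified baseline model, with invented example numbers:

```python
import numpy as np

def integrate_cues(estimates, variances):
    """Combine independent cue estimates by inverse-variance weighting.

    Under the standard maximum-likelihood model, each cue's weight is
    proportional to its reliability (1/variance), and the combined
    estimate has lower variance than either cue alone."""
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    weights = (1.0 / variances) / np.sum(1.0 / variances)
    combined = float(np.sum(weights * estimates))
    combined_var = 1.0 / np.sum(1.0 / variances)
    return combined, combined_var

# Hypothetical example: visual location 10 deg (variance 4),
# auditory location 20 deg (variance 16)
loc, var = integrate_cues([10.0, 20.0], [4.0, 16.0])
print(loc, var)  # 12.0 3.2 — pulled toward the more reliable visual cue
```

The modified framework in the talk relaxes the independence and unbiasedness assumptions, which changes both the weights and the achievable variance reduction.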

Seminar · Neuroscience · Recording

Direction selectivity in hearing: monaural phase sensitivity in octopus neurons

Philip Joris
KU Leuven
May 17, 2021

The processing of temporal sound features is fundamental to hearing, and the auditory system displays a plethora of specializations, at many levels, to enable such processing. Octopus neurons are the most extreme temporally-specialized cells in the auditory (and perhaps entire) brain, which makes them intriguing but also difficult to study. Notwithstanding the scant physiological data, these neurons have been a favorite cell type of modeling studies, which have proposed that octopus cells have critical roles in pitch and speech perception. We used a range of in vivo recording and labeling methods to examine the hypothesis that tonotopic ordering of cochlear afferents combines with dendritic delays to compensate for cochlear delay, which would explain the highly entrained responses of octopus cells to sound transients. Unexpectedly, the experiments revealed that these neurons have marked selectivity to the direction of fast frequency glides, which is tied in a surprising way to intrinsic membrane properties and subthreshold events. The data suggest that octopus cells have a role in temporal comparisons across frequency and may play a role in auditory scene analysis.

Seminar · Neuroscience · Recording

Applications of Multisensory Facilitation of Learning

Aaron Seitz
University of California, Riverside
Apr 15, 2021

In this talk I’ll discuss translation of findings of multisensory facilitation of learning to cognitive training. I’ll first review some early findings of multisensory facilitation of learning and then discuss how we have been translating these basic science approaches into gamified training interventions to improve cognitive functions. I’ll touch on approaches to training vision, hearing and working memory that we are developing at the UCR Brain Game Center for Mental Fitness and Well-being. I look forward to discussing both the basic science but also the complexities of how to translate approaches from basic science into the more complex frameworks often used in interventions.

Seminar · Neuroscience · Recording

Decoding the neural processing of speech

Tobias Reichenbach
Friedrich-Alexander-University
Mar 23, 2021

Understanding speech in noisy backgrounds requires selective attention to a particular speaker. Humans excel at this challenging task, while current speech recognition technology still struggles when background noise is loud. The neural mechanisms by which we process speech remain, however, poorly understood, not least due to the complexity of natural speech. Here we describe recent progress obtained through applying machine learning to neuroimaging data of humans listening to speech in different types of background noise. In particular, we develop statistical models to relate characteristic features of speech, such as pitch, amplitude fluctuations and linguistic surprisal, to neural measurements. We find neural correlates of speech processing both at the subcortical level, related to the pitch, as well as at the cortical level, related to amplitude fluctuations and linguistic structures. We also show that some of these measures can be used to diagnose disorders of consciousness. Our findings may be applied in smart hearing aids that automatically adjust speech processing to assist a user, as well as in the diagnosis of brain disorders.
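A standard way to relate speech features such as the amplitude envelope to neural measurements is a temporal response function (TRF) estimated by ridge regression on time-lagged copies of the stimulus feature. The sketch below shows the general method on synthetic data (this is the textbook approach, not necessarily the authors' exact pipeline):

```python
import numpy as np

def fit_trf(stimulus, response, n_lags, alpha=1.0):
    """Estimate a temporal response function by ridge regression.

    Builds a design matrix of lagged stimulus values and solves
    (X'X + alpha*I) w = X'y, so w[k] is the response weight at lag k."""
    X = np.zeros((len(stimulus), n_lags))
    for lag in range(n_lags):
        X[lag:, lag] = stimulus[:len(stimulus) - lag]
    w = np.linalg.solve(X.T @ X + alpha * np.eye(n_lags), X.T @ response)
    return w

# Synthetic check: the "neural response" is the stimulus delayed by 3 samples,
# so the recovered TRF should peak at lag 3
rng = np.random.default_rng(0)
stim = rng.standard_normal(2000)
resp = np.roll(stim, 3)
resp[:3] = 0.0
w = fit_trf(stim, resp, n_lags=8, alpha=1e-3)
print(int(np.argmax(np.abs(w))))  # 3
```

With real EEG/MEG data, the regularisation strength alpha is chosen by cross-validation and the response is predicted at many sensors simultaneously.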

Seminar · Neuroscience · Recording

Untitled Seminar

Marta Andres Miguel
UCL
Sep 23, 2020
Seminar · Neuroscience · Recording

Growing up in Science

Andre Marques-Smith
CoMind
Jul 31, 2020

Have you ever wondered what your advisor struggled with as a graduate student? What they struggle with now? Growing up in Science is a conversation series featuring personal narratives of becoming and being a scientist, with a focus on the unspoken challenges of a life in science. Growing up in Science was started in 2014 at New York University and is now worldwide. At a typical Growing up in Science event, one faculty member shares their life story, with a focus on struggles, failures, doubts, detours, and weaknesses. Common topics include dealing with expectations, impostor syndrome, procrastination, luck, rejection, conflicts with advisors, work-life balance, and life outside academia, but these topics are always embedded in the speaker’s broader narrative. Cortex Club is hosting its first Growing up in Science event! Join us on Friday the 31st of July at 4pm to hear the unofficial story of Dr André Marques-Smith, computational neuroscientist at CoMind (read his official and unofficial story at https://cortexclub.com/event/growing-up-in-science-oxford/). Details to join the talk will be circulated via the mailing list (to join our mailing list, follow the instructions at https://cortexclub.com/join-us/).

Seminar · Neuroscience · Recording

The active modulation of sound and vibration perception

Natasha Mhatre
University of Western Ontario
Jun 17, 2020

The dominant view of perception right now is that information travels from the environment to the sensory system, then to the nervous system, which processes it to generate a percept and behaviour. Ongoing behaviour is thought to occur largely through simple iterations of this process. However, this linear view, where information flows only in one direction and the properties of the environment and the sensory system remain static and unaffected by behaviour, is slowly fading. Many of us are beginning to appreciate that perception is largely active, i.e. that information flows back and forth between the three systems, modulating their respective properties. In other words, in the real world, the environment and sensorimotor loop is pretty much always closed. I study this loop; in particular, I study how the reverse arm of the loop affects sound and vibration perception. I will present two examples of motor modulation of perception at two very different temporal and spatial scales. First, in crickets, I will present data on how high-speed molecular motor activity enhances hearing via the well-studied phenomenon of active amplification. Second, in spiders, I will present data on how body posture, a slow macroscopic feature, which can barely be called ‘active’, can nonetheless modulate vibration perception. I hope these results will motivate a conversation about whether ‘active’ perception is an optional feature observed in some sensory systems, or something that is ultimately necessitated by both evolution and physics.

ePoster · Neuroscience

The role of temporal coding in everyday hearing: evidence from deep neural networks

Mark Saddler, Josh McDermott

COSYNE 2022

ePoster · Neuroscience

Consequences of early-onset mild hearing loss on brain and behavior in rats

Joelle Jagersma, Sonja Pyott, Jocelien Olivier
ePoster · Neuroscience

Distinct neurophysiological response mechanisms for non-verbal and verbal stimuli in Age Related Hearing Loss: a P300 study

Tatiana Marques, João Castelhano, Catarina Duarte, Inês Batista, João Rodrigues, Carla Moura, António Miguéis, Miguel Castelo-Branco
ePoster · Neuroscience

Hidden hearing-loss and information transmission in the auditory midbrain

Juan A. Fuentes, Roland Schaette, David Mcalpine
ePoster · Neuroscience

Human cortical auditory processing of naturalistic speech with simulated hearing loss: A data-driven fMRI approach

Arkan Al-Zubaidi, Jochem W. Rieger
ePoster · Neuroscience

How do Interaural Time and Interaural Level Differences Interact in Spatial Hearing with Cochlear Implants?

Sarah Buchholz, Felix Kleinschroth, Heika Hildebrandt-Schönfeld, Theresa A. Preyer, Jan W. Schnupp, Nicole Rosskothen-Kuhl
ePoster · Neuroscience

The Latency of Auditory Event Related Potential P300 prolonged in Unilateral Hearing Loss School Age Pupils in Mandarin Learning Environment

Hiu Che Foo, Chenwei Tang, Yuting Kao, Hsingmei Wu, Chelun Chang, Meiyao Wu, Yuchun Lo, Shih-Ming Weng
ePoster · Neuroscience

Mismatch Negativity, a Neural marker of plasticity in Unilateral Hearing Loss patients

Mariam Alzaher, Strelnikov Kuzma, Pascal Barone, Mathieu Marx
ePoster · Neuroscience

Novel MPDZ/MUPP1 transgenic models confirm Mpdz's role in social-psychological disorders associated with hearing and vestibular dysfunction

Maïté M. Moreau, Shri Vidhya Seshadri, Stephanie Mauriac, Morgane S. Audrain, Aline Mariguetto, Nathalie Sans, Mireille Montcouquiol
ePoster · Neuroscience

Prospective and retrospective influences of hearing one’s voice in the sense of agency over speech

Ryu Ohata, Yuhei Uehigashi, Tomohisa Asai, Shu Imaizumi, Hiroshi Imamizu
ePoster · Neuroscience

Purinergic Receptor Agonists Activated Ca2+ Signaling in the Deiters' cells in the Organ of Corti in Different Postnatal Developmental Stages from Prehearing to Matured

Eszter Berekméri, Louise Moysan, Ann-Kathrin Lutz, János Farkas, Ádám Fekete, László Köles, Beata Sperlagh, Tibor Zelles
ePoster · Neuroscience

Silent movies synchronize secondary auditory cortices more in early deaf than hearing individuals

Maria Zimmermann, Rhodri Cusack, Marina Bedny, Marcin Szwed
ePoster · Neuroscience

Sound Localization Tuning of the Medial Superior Olive in Mongolian Gerbils After Hearing Onset

Martijn C. Sierksma, Gerard Borst
ePoster · Neuroscience

Advancing optogenetic hearing restoration through cross-modal optimization

Anna Vavakou, Bettina Wollf, Kathrin Kusch, Thomas Mager, Patrick Ruther, Alexander Ecker, Tobias Moser

FENS Forum 2024

ePoster · Neuroscience

Age-related hearing loss and cognition in older adults: Preliminary findings

Yi Ran Wang, Elodie Berthelier, Simon Cormier, Daniel Paromov, Karina Annita, Sven Joubert, François Champoux, Hugo Théoret

FENS Forum 2024

ePoster · Neuroscience

Availability of information on artificial intelligence-enhanced hearing aids: A social media analysis

Joanie Ferland, Ariane Blouin, Matthieu J. Guitton, Andréanne Sharp

FENS Forum 2024

ePoster · Neuroscience

Cognitive disturbances after hearing loss in adult rats are not accompanied by altered NeuN-, GABA-, and dopamine-expression in the central auditory pathway and prefrontal cortex

Marla Sofie Witte, Mariele Stenzel, Mesbah Alam, Jonas Jelinek, Joachim K. Krauss, Kerstin Schwabe, Marie Johne

FENS Forum 2024

ePoster · Neuroscience

Development of the cochlear nucleus depending on the hearing experience of rats

Nicole Rosskothen-Kuhl, Malee Jarmila Zoe Sprenger, Heika Hildebrandt, Susan Arndt, Till Fabian Jakob

FENS Forum 2024

ePoster · Neuroscience

Does spatial hearing with bionic ears change with jittered binaural stimuli?

Tim Fleiner, Emily Becker, Susan Arndt, Jan W. Schnupp, Nicole Rosskothen-Kuhl

FENS Forum 2024

ePoster · Neuroscience

Evaluation of optogenetic gene therapy for hearing restoration in in vivo rodent models of sensorineural hearing loss

Victoria Hunniford, Maria Zerche, Bettina Wolf, Kathrin Kusch, Thomas Mager, Tobias Moser

FENS Forum 2024

ePoster · Neuroscience

A loss of spiral ganglion neurons with an active ATOH1 enhancer alters hearing function

Kateryna Pysanenko, Mitra Tavakoli, Romana Bohuslavova, Josef Syka, Bernd Fritzsch, Gabriela Pavlinkova

FENS Forum 2024

ePoster · Neuroscience

Multisession electric stimulation of the auditory cortex prevents cortical aging in an age-related hearing loss Wistar rat model

Inés S. Fernández del Campo, Antonio Fuente Juan, Iván Díaz, Ignacio Plaza, Miguel A. Merchán

FENS Forum 2024

ePoster · Neuroscience

Sensitivity to envelope and pulse timing interaural time differences in prosthetic hearing

Shiyi Fang, Fei Peng, Bruno Castellaro, Muhammad Zeeshan, Nicole Rosskothen-Kuhl, Jan Schnupp

FENS Forum 2024

ePoster · Neuroscience

Somatosensory cross-modal plasticity in hearing impaired subjects before and after cochlear implantation

Fatima Sofia Avila Cascajares, Boris Suchan, Christiane Völter

FENS Forum 2024

ePoster · Neuroscience

Spatial hearing with bionic ears

Sarah Buchholz, Jan W Schnupp, Nicole Rosskothen-Kuhl

FENS Forum 2024


