Voices

Discover seminars, jobs, and research tagged with voices across World Wide.

7 seminars
Seminar · Psychology

Face and voice perception as a tool for characterizing perceptual decisions and metacognitive abilities across the general population and psychosis spectrum

Léon Franzen
University of Luebeck
Apr 25, 2023

Humans constantly make perceptual decisions about human faces and voices. These decisions regularly come with the challenge of receiving only uncertain sensory evidence, resulting from noisy input and noisy neural processes. Efficiently adapting one’s internal decision system, including prior expectations and subsequent metacognitive assessments, to these challenges is crucial in everyday life. However, the exact decision mechanisms, and whether these represent modifiable states, remain unknown in the general population and in clinical patients with psychosis. Using data from a laboratory-based sample of healthy controls and patients with psychosis, as well as a complementary, large online sample of healthy controls, I will demonstrate how a combination of perceptual face and voice recognition decision fidelity, metacognitive ratings, and Bayesian computational modelling may be used as indicators to differentiate between non-clinical and clinical states in the future.

Seminar · Psychology

The speaker identification ability of blind and sighted listeners

Almut Braun
Bundeskriminalamt, Wiesbaden
Feb 21, 2023

Previous studies have shown that blind individuals outperform sighted controls in a variety of auditory tasks; however, only a few studies have investigated blind listeners’ speaker identification abilities, and those studies show conflicting results. The presented empirical investigation with 153 blind (74 of them congenitally blind) and 153 sighted listeners is the first of its kind and scale in which long-term memory effects on blind listeners’ speaker identification abilities are examined. All listeners were evenly assigned to one of nine subgroups (3 × 3 design) in order to investigate the influence of two parameters, each with three levels, on blind and sighted listeners’ speaker identification performance. The parameters were (a) time interval, i.e. an interval of 1, 3 or 6 weeks between the first exposure to the voice to be recognised (familiarisation) and the speaker identification task (voice lineup); and (b) signal quality, i.e. voice recordings presented in studio quality, mobile-phone quality or as recordings of whispered speech. Half of the presented voice lineups were target-present lineups, which included the previously heard target voice; the other half were target-absent lineups containing solely distractor voices. Blind individuals outperformed sighted listeners only under studio-quality conditions. Furthermore, no significant performance differences were found for blind or sighted listeners across the three investigated time intervals of 1, 3 and 6 weeks. Blind as well as sighted listeners were significantly better at picking the target voice from target-present lineups than at indicating that the target voice was absent in target-absent lineups. Within the blind group, no significant correlations were found between identification performance and onset or duration of blindness. Implications for the field of forensic phonetics are discussed.

Seminar · Neuroscience · Recording

Representations of people in the brain

Lucia Garrido
City, University of London
Nov 21, 2022

Faces and voices convey much of the non-verbal information that we use when communicating with other people. We look at faces and listen to voices to recognize others, understand how they are feeling, and decide how to act. Recent research in my lab aims to investigate whether there are similar coding mechanisms to represent faces and voices, and whether there are brain regions that integrate information across the visual and auditory modalities. In the first part of my talk, I will focus on an fMRI study in which we found that a region of the posterior STS exhibits modality-general representations of familiar people that can be similarly driven by someone’s face and their voice (Tsantani et al. 2019). In the second part of the talk, I will describe our recent attempts to shed light on the type of information that is represented in different face-responsive brain regions (Tsantani et al., 2021).

Seminar · Psychology

Developing a test to assess the ability of Zurich’s police cadets to discriminate, learn and recognize voices

Andrea Fröhlich
Zurich Forensic Science Institute
Feb 2, 2022

The goal of this pilot study is to develop a test through which people with extraordinary voice recognition and discrimination skills can be identified (for forensic purposes). Since interest in this field emerged, three studies have been published with the goal of finding people with potential super-recognition skills in voice processing. One of them is a discrimination test and two are recognition tests, but none combines the two test scenarios, and their test designs cannot be directly compared to a casework scenario in forensic phonetics. The pilot study at hand attempts to bridge this gap and analyses whether the skills of voice discrimination and recognition correlate. The study is guided by a practical, forensic application, which further complicates the process of creating a viable test. The participants for the pilot consist of different classes of police cadets, which means the test can be repeated and adjusted over time.

Seminar · Neuroscience · Recording

Multisensory speech perception

Michael Beauchamp
University of Pennsylvania
Sep 15, 2021

Seminar · Psychology

The Jena Voice Learning and Memory Test (JVLMT)

Romi Zäske
University of Jena
May 26, 2021

The ability to recognize someone’s voice spans a broad spectrum, with phonagnosia at the low end and super recognition at the high end. Yet there is no standardized test to measure the individual ability to learn and recognize newly learnt voices with samples of speech-like phonetic variability. We have developed the Jena Voice Learning and Memory Test (JVLMT), a 20-minute test based on item response theory and applicable across different languages. The JVLMT consists of three phases in which participants are familiarized with eight speakers in two stages and then perform a three-alternative forced-choice recognition task, using pseudo-sentences devoid of semantic content. Acoustic (dis)similarity analyses were used to create items with different levels of difficulty. Test scores are based on 22 Rasch-conform items. Items were selected and validated in online studies based on 232 and 454 participants, respectively. Mean accuracy is 0.51 with an SD of 0.18. The JVLMT showed high and moderate correlations with convergent validation tests (Bangor Voice Matching Test; Glasgow Voice Memory Test) and a weak correlation with a discriminant validation test (Digit Span). Empirical (marginal) reliability is 0.66. Four participants with super recognition (at least 2 SDs above the mean) and seven participants with phonagnosia (at least 2 SDs below the mean) were identified. The JVLMT is a promising screening tool for voice recognition abilities in scientific and neuropsychological contexts.

Seminar · Neuroscience

From oscillations to laminar responses - characterising the neural circuitry of autobiographical memories

Eleanor Maguire
Wellcome Centre for Human Neuroimaging at UCL
Nov 30, 2020

Autobiographical memories are the ghosts of our past. Through them we visit places long departed, see faces once familiar, and hear voices now silent. These often decades-old personal experiences can be recalled on a whim or come unbidden into our everyday consciousness. Autobiographical memories are crucial to cognition because they facilitate almost everything we do, endow us with a sense of self and underwrite our capacity for autonomy. They are often compromised by common neurological and psychiatric pathologies, with devastating effects. Despite autobiographical memories being central to everyday mental life, there is no agreed model of autobiographical memory retrieval, and we lack an understanding of the neural mechanisms involved. This precludes principled interventions to manage or alleviate memory deficits, and to test the efficacy of treatment regimens. This knowledge gap exists because autobiographical memories are challenging to study – they are immersive, multi-faceted, multi-modal, can stretch over long timescales and are grounded in the real world. One missing piece of the puzzle concerns the millisecond neural dynamics of autobiographical memory retrieval. Surprisingly, there are very few magnetoencephalography (MEG) studies examining such recall, despite the important insights this could offer into the activity and interactions of key brain regions such as the hippocampus and ventromedial prefrontal cortex. In this talk I will describe a series of MEG studies aimed at uncovering the neural circuitry underpinning the recollection of autobiographical memories, and how this changes as memories age. I will end by describing our progress in leveraging an exciting new technology, optically pumped MEG (OP-MEG), which, when combined with virtual reality, offers the opportunity to examine millisecond neural responses from the whole brain, including deep structures, while participants move within a virtual environment, with the attendant head motion and vestibular inputs.