Auditory Modalities
The Effects of Movement Parameters on Time Perception
Mobile organisms must decide both where and when to move in order to keep up with a changing environment; without a robust sense of time, many of our movement goals would fail. Despite this intrinsic link between movement and timing, research has only recently begun to investigate their interaction. Two primary effects have been observed: movements can bias time estimates (i.e., affect accuracy) and can also make time estimates more precise. The goal of this presentation is to review this literature, present a Bayesian cue combination framework that explains these effects, and describe the experiments I have conducted to test the framework. The experiments include: a motor timing task comparing the effects of movement vs. non-movement with and without feedback (Exp. 1A & 1B), a transcranial magnetic stimulation (TMS) study on the role of the supplementary motor area (SMA) in transforming temporal information (Exp. 2), and a perceptual timing task investigating the effect of noisy movement on time perception in both visual and auditory modalities (Exp. 3A & 3B). Together, the results of these studies support the Bayesian cue combination framework: movement improves the precision of time perception not only in perceptual timing tasks but also in motor timing tasks (Exp. 1A & 1B), stimulating the SMA appears to disrupt the transformation of temporal information (Exp. 2), and when movement becomes unreliable or noisy, the improvement in the precision of time perception disappears (Exp. 3A & 3B). Although these results support the proposed framework, further studies (e.g., fMRI, TMS, EEG) are needed to understand where and how the framework may be instantiated in the brain; nevertheless, this work provides a starting point for understanding the intrinsic connection between time and movement.
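To make the framework concrete, below is a minimal sketch of the reliability-weighted averaging at the core of Bayesian cue combination, assuming independent Gaussian noise on a sensory and a motor estimate of the same interval; the function name and numerical values are illustrative, not taken from the experiments. When both cues are reliable, the combined estimate is more precise than either alone; when the motor cue becomes noisy, it is down-weighted and the precision benefit disappears, mirroring the pattern reported for Exp. 3A & 3B.

```python
import numpy as np

def combine_cues(est_sensory, sigma_sensory, est_motor, sigma_motor):
    """Precision-weighted (Bayesian) combination of two interval estimates.

    Each cue is weighted by its relative reliability (inverse variance);
    the combined variance is lower than either cue's variance alone.
    """
    w_sensory = sigma_motor**2 / (sigma_sensory**2 + sigma_motor**2)
    combined = w_sensory * est_sensory + (1 - w_sensory) * est_motor
    combined_sigma = np.sqrt(sigma_sensory**2 * sigma_motor**2 /
                             (sigma_sensory**2 + sigma_motor**2))
    return combined, combined_sigma

# Reliable movement: combining improves precision (~0.071 < 0.10)
print(combine_cues(1.00, 0.10, 1.05, 0.10))
# Noisy movement: the motor cue is down-weighted, benefit vanishes (~0.098)
print(combine_cues(1.00, 0.10, 1.05, 0.50))
```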
Internal representation of musical rhythm: transformation from sound to periodic beat
When listening to music, humans readily perceive and move along with a periodic beat. Critically, perception of a periodic beat is commonly elicited by rhythmic stimuli whose physical features are not arranged in a strictly periodic way. Hence, beat perception must capitalize on mechanisms that transform stimulus features into a temporally recurrent format with emphasized beat periodicity. Here, I will present a line of work that aims to clarify the nature and neural basis of this transformation. In these studies, electrophysiological activity was recorded as participants listened to rhythms known to induce perception of a consistent beat across healthy Western adults. The results show that the human brain selectively emphasizes the beat representation when it is not acoustically prominent in the stimulus, and that this transformation (i) can be captured non-invasively using surface EEG in adult participants, (ii) is already in place in 5- to 6-month-old infants, and (iii) cannot be fully explained by subcortical auditory nonlinearities. Moreover, as revealed by human intracerebral recordings, a prominent beat representation emerges as early as the primary auditory cortex. Finally, electrophysiological recordings from the auditory cortex of a rhesus monkey show a significant enhancement of beat periodicities in this area, similar to that observed in humans. Taken together, these findings indicate an early, general auditory cortical stage of processing by which rhythmic inputs are rendered more temporally recurrent than they are in reality. Already present in non-human primates and human infants, this "periodized" default format could then be shaped by higher-level associative sensory-motor areas and guide movement in individuals with strongly coupled auditory and motor systems. Together, these results highlight the multiplicity of neural processes supporting the coordinated musical behaviors widely observed across human cultures.
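The comparison underlying these findings follows frequency-tagging logic: quantify how prominent the beat frequency is in the amplitude spectrum of the stimulus envelope, then compute the same measure on the neural response. Below is a minimal sketch of the stimulus-side measurement; the 12-slot pattern, slot duration, and beat rate are illustrative assumptions, not values from the studies above. Applying amp_at to an EEG amplitude spectrum instead would give the neural side of the comparison: a larger beat-to-pattern ratio in the brain than in the sound indicates selective emphasis of the beat.

```python
import numpy as np

fs = 500                      # sampling rate in Hz (illustrative)
pattern = [1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0]  # one rhythmic cycle
slot_dur = 0.2                # each pattern slot lasts 200 ms (illustrative)

# Build the stimulus amplitude envelope by repeating the pattern 20 times
envelope = np.repeat(np.tile(pattern, 20), int(slot_dur * fs)).astype(float)

# Amplitude spectrum via FFT
freqs = np.fft.rfftfreq(envelope.size, d=1 / fs)
amp = np.abs(np.fft.rfft(envelope)) / envelope.size

def amp_at(f_target):
    """Spectral amplitude at the frequency bin closest to f_target (Hz)."""
    return amp[np.argmin(np.abs(freqs - f_target))]

beat_amp = amp_at(1.25)                         # assumed beat rate (1 beat / 0.8 s)
cycle_amp = amp_at(1 / (len(pattern) * slot_dur))  # pattern repetition rate
print("beat prominence in the stimulus:", beat_amp / cycle_amp)
```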
Representations of people in the brain
Faces and voices convey much of the non-verbal information that we use when communicating with other people. We look at faces and listen to voices to recognize others, understand how they are feeling, and decide how to act. Recent research in my lab investigates whether similar coding mechanisms represent faces and voices, and whether there are brain regions that integrate information across the visual and auditory modalities. In the first part of my talk, I will focus on an fMRI study in which we found that a region of the posterior STS exhibits modality-general representations of familiar people that can be similarly driven by someone's face and their voice (Tsantani et al., 2019). In the second part of the talk, I will describe our recent attempts to shed light on the type of information represented in different face-responsive brain regions (Tsantani et al., 2021).
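As an illustration of what "modality-general representations" means analytically, below is a minimal sketch of cross-modal identity decoding on simulated response patterns. This is a generic analysis strategy assumed here for exposition, not the specific method of Tsantani et al. (2019), and every name and value in it is hypothetical. The logic: if a classifier trained to tell two people apart from face-evoked patterns also succeeds on voice-evoked patterns, the region carries a code for identity that is shared across modalities.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_voxels, n_trials, n_people = 50, 40, 2

# Simulated response patterns: each familiar person evokes a characteristic
# pattern shared across faces and voices (the modality-general hypothesis),
# plus independent trial-by-trial noise.
identity_patterns = rng.normal(size=(n_people, n_voxels))
labels = rng.integers(0, n_people, size=n_trials)
faces = identity_patterns[labels] + rng.normal(scale=1.0, size=(n_trials, n_voxels))
voices = identity_patterns[labels] + rng.normal(scale=1.0, size=(n_trials, n_voxels))

# Cross-modal decoding: train on face trials, test on voice trials.
# Above-chance accuracy implies a shared code rather than separate
# face-specific and voice-specific codes.
clf = LogisticRegression(max_iter=1000).fit(faces, labels)
print("cross-modal decoding accuracy:", clf.score(voices, labels))
```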