Cross Modal


Discover seminars, jobs, and research tagged with Cross Modal across World Wide.
5 Seminars
Updated about 3 years ago
Seminar · Neuroscience · Recording

A premotor amodal clock for rhythmic tapping

Hugo Merchant
National Autonomous University of Mexico
Nov 22, 2022

We recorded and analyzed the population activity of hundreds of neurons in the medial premotor cortex (MPC) of rhesus monkeys performing an isochronous tapping task guided by brief flashing stimuli or auditory tones. The animals showed a strong bias towards visual metronomes, with rhythmic tapping that was more precise and accurate than for auditory metronomes. The population dynamics in state space, as well as the corresponding neural sequences, shared the following properties across modalities: the circular neural trajectories and the neural sequences formed a regenerating loop for every produced interval, producing a relative representation of time; the trajectories converged to a similar region of state space at tapping times, where the moving bumps restarted, resetting the beat-based clock; and the tempo of the synchronized tapping was encoded by a combination of amplitude modulation and temporal scaling of the neural trajectories. In addition, the stimulus modality displaced the neural trajectories into auditory and visual subspaces without greatly altering the timekeeping mechanism. These results suggest that the interaction between an amodal internal representation of pulse within MPC and a modality-specific external input generates a neural rhythmic clock whose dynamics define the temporal execution of tapping to auditory and visual metronomes.

Seminar · Neuroscience · Recording

Linking GWAS to pharmacological treatments for psychiatric disorders

Aurina Arnatkeviciute
Monash University
Aug 18, 2022

Genome-wide association studies (GWAS) have identified multiple disease-associated genetic variants across different psychiatric disorders, raising the question of how these variants relate to the corresponding pharmacological treatments. In this talk, I will outline our work investigating whether functional information from a range of open bioinformatics datasets, such as the protein-protein interaction (PPI) network, brain eQTLs, and gene expression patterns across the brain, can uncover the relationship between GWAS-identified genetic variation and the genes targeted by current drugs for psychiatric disorders. Focusing on four psychiatric disorders (ADHD, bipolar disorder, schizophrenia, and major depressive disorder), we assess relationships between the gene targets of drug treatments and GWAS hits. We show that while incorporating functional bioinformatics data, such as the PPI network and spatial gene expression, can reveal links for bipolar disorder, the overall correspondence between treatment targets and GWAS-implicated genes in psychiatric disorders rarely exceeds null expectations. This relatively low degree of correspondence suggests that the genetic mechanisms driving risk for psychiatric disorders may be distinct from the pathophysiological mechanisms through which pharmacological treatments target symptom manifestations, and that novel approaches for understanding and treating psychiatric disorders may be required.

Seminar · Neuroscience

Language Representations in the Human Brain: A naturalistic approach

Fatma Deniz
TU Berlin & Berkeley
Apr 26, 2022

Natural language is strongly context-dependent and can be perceived through different sensory modalities. For example, humans can easily comprehend the meaning of complex narratives presented as auditory speech, written text, or visual images. To understand how complex language-related information is represented in the human brain, we need to map the linguistic and non-linguistic information perceived under different modalities across the cerebral cortex. To do so, I suggest following a naturalistic approach: observing the human brain performing tasks in a naturalistic setting, designing quantitative models that transform real-world stimuli into specific hypothesis-related features, and building predictive models that relate these features to brain responses. In my talk, I will present models of brain responses collected using functional magnetic resonance imaging while human participants listened to or read natural narrative stories. Using natural text and vector representations derived from natural language processing tools, I will show how we can study language processing in the human brain across modalities, at different levels of temporal granularity, and across different languages.

Seminar · Neuroscience · Recording

Reorganisation of the human visual system in the absence of light input

Holly Bridge
University of Oxford, UK
Mar 23, 2022
Seminar · Neuroscience · Recording

Do you hear what I see: Auditory motion processing in blind individuals

Ione Fine
University of Washington
Oct 6, 2021

Perception of object motion is fundamentally multisensory, yet little is known about the similarities and differences in the computations that give rise to our experience across the senses. Insight can be gained by examining auditory motion processing in early blind individuals. In those who become blind early in life, the ‘visual’ motion area hMT+ responds to auditory motion. Meanwhile, the planum temporale, associated with auditory motion in sighted individuals, shows reduced selectivity for auditory motion, suggesting competition between cortical areas for functional roles. According to the metamodal hypothesis of cross-modal plasticity developed by Pascual-Leone, the recruitment of hMT+ is driven by its being a metamodal structure containing “operators that execute a given function or computation regardless of sensory input modality”. The metamodal hypothesis therefore predicts that the computations underlying auditory motion processing in early blind individuals should be analogous to visual motion processing in sighted individuals, relying on non-separable spatiotemporal filters. Inconsistent with this prediction, our evidence suggests that the computational algorithms underlying auditory motion processing in early blind individuals do not undergo a qualitative shift as a result of cross-modal plasticity. Auditory motion filters, in both blind and sighted subjects, are separable in space and time, suggesting that the recruitment of hMT+ to extract motion information from auditory input involves a significant modification of its normal computational operations.