
language comprehension

Discover seminars, jobs, and research tagged with language comprehension across World Wide.
6 curated items · 2 Positions · 2 Seminars · 2 ePosters
Position

Dr. John D. Griffiths, Dr. Mariya Toneva

University of Toronto, CAMH KCNI, Max Planck Institute for Software Systems
University of Toronto, Canada and Max Planck Institute for Software Systems, Saarbrücken, Germany
Dec 5, 2025

The PhD research topic will focus on understanding key mechanisms that enable specific cognitive functions in the brain, such as language comprehension, using a combination of computational neuroscience, machine learning, and experimental cognitive neuroscience techniques. The student will develop novel integrations of mechanistic physiological and generative AI-based theories of brain organization, and test these by designing, conducting, and analyzing experiments using advanced neuroimaging and neurostimulation technologies (EEG, fNIRS, TMS, MEG, and fMRI, including mobile setups with VR/AR integration).

Seminar · Neuroscience

LLMs and Human Language Processing

Mariya Toneva, Ariel Goldstein, Jean-Rémi King
Max Planck Institute for Software Systems; Hebrew University; École Normale Supérieure
Nov 28, 2024

This webinar convened researchers at the intersection of Artificial Intelligence and Neuroscience to investigate how large language models (LLMs) can serve as valuable “model organisms” for understanding human language processing. Presenters showcased evidence that brain recordings (fMRI, MEG, ECoG) acquired while participants read or listened to unconstrained speech can be predicted by representations extracted from state-of-the-art text- and speech-based LLMs. In particular, text-based LLMs tend to align better with higher-level language regions, capturing more semantic aspects, while speech-based LLMs excel at explaining early auditory cortical responses. However, purely low-level features can drive part of these alignments, complicating interpretations. New methods, including perturbation analyses, highlight which linguistic variables matter for each cortical area and time scale. Further, “brain tuning” of LLMs—fine-tuning on measured neural signals—can improve semantic representations and downstream language tasks. Despite open questions about interpretability and exact neural mechanisms, these results demonstrate that LLMs provide a promising framework for probing the computations underlying human language comprehension and production at multiple spatiotemporal scales.

Seminar · Neuroscience

Towards an inclusive neurobiology of language

Esti Blanco-Elorrieta
Department of Psychology, Harvard University, Cambridge, USA
Jan 27, 2022

Understanding how our brains process language is one of the fundamental issues in cognitive science. To reach such an understanding, it is critical to cover the full range of ways in which humans acquire and experience language. However, due to a myriad of socioeconomic factors, research has disproportionately focused on monolingual English speakers. In this talk, I present a series of studies that systematically target fundamental questions about bilingual language use across a range of conversational contexts, in both production and comprehension. The results lay the groundwork for a more inclusive theory of the neurobiology of language, with an architecture that assumes a common selection principle at each linguistic level and can account for attested features of both bilingual and monolingual speech in, but crucially also out of, experimental settings.

ePoster

Hierarchical and distributed systems of language comprehension and learning in the human brain

Megha Ghosh, Miles Mahon, Sophia Lowe-Hines, Adam Crandall, Qi Cheng, Andrew Ko, Kurt Weaver, Jeffrey Ojemann, Benjamin Grannan

COSYNE 2025

ePoster

Canine white matter pathways potentially related to human language comprehension

Mélina Cordeau, Isabel Levin, Mira Sinha, Erin Hecht

FENS Forum 2024