Language
language comprehension
Fatma Deniz
We are looking for a highly motivated researcher to join our group on interdisciplinary projects focused on developing computational models of how linguistic information is represented in the human brain during language comprehension. The researcher will develop, compare, and apply computational encoding models, combined with deep learning techniques, to identify linguistic representations in the brain across languages.
Dr. John D. Griffiths, Dr. Mariya Toneva
The PhD research topic will focus on understanding key mechanisms that enable specific cognitive functions in the brain, such as language comprehension, using a combination of computational neuroscience, machine learning, and experimental cognitive neuroscience techniques. The student will develop novel integrations of mechanistic physiological and generative AI-based theories of brain organization, and test these by designing, conducting, and analyzing experiments using advanced neuroimaging and neurostimulation technologies (EEG, fNIRS, TMS, MEG, fMRI, including mobile setups with VR/AR integration).
LLMs and Human Language Processing
This webinar convened researchers at the intersection of Artificial Intelligence and Neuroscience to investigate how large language models (LLMs) can serve as valuable “model organisms” for understanding human language processing. Presenters showcased evidence that brain recordings (fMRI, MEG, ECoG) acquired while participants read or listened to unconstrained speech can be predicted by representations extracted from state-of-the-art text- and speech-based LLMs. In particular, text-based LLMs tend to align better with higher-level language regions, capturing more semantic aspects, while speech-based LLMs excel at explaining early auditory cortical responses. However, purely low-level features can drive part of these alignments, complicating interpretations. New methods, including perturbation analyses, highlight which linguistic variables matter for each cortical area and time scale. Further, “brain tuning” of LLMs—fine-tuning on measured neural signals—can improve semantic representations and downstream language tasks. Despite open questions about interpretability and exact neural mechanisms, these results demonstrate that LLMs provide a promising framework for probing the computations underlying human language comprehension and production at multiple spatiotemporal scales.
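The alignment results described above are typically obtained with voxelwise encoding models: stimulus features extracted from an LLM are mapped to recorded brain activity with regularized linear regression, and prediction accuracy on held-out data measures how well the model's representations explain each brain response. The sketch below illustrates this pipeline with ridge regression on synthetic data; the array shapes, the use of scikit-learn's RidgeCV, and the correlation metric are illustrative assumptions, not the exact analyses reported by the presenters.

```python
# Minimal sketch of a voxelwise encoding model: predict fMRI responses from
# LLM-derived stimulus features with ridge regression. The synthetic data and
# dimensions below are placeholders; in practice `features` would hold hidden
# states from a text- or speech-based LLM, downsampled to the fMRI acquisition
# times, and `bold` would hold the measured voxel time series.

import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_trs, n_features, n_voxels = 1000, 768, 500   # time points, LLM dims, voxels

features = rng.standard_normal((n_trs, n_features))            # LLM embeddings per TR
true_weights = rng.standard_normal((n_features, n_voxels)) * 0.1
bold = features @ true_weights + rng.standard_normal((n_trs, n_voxels))  # synthetic BOLD

# Hold out the final 20% of time points; keep temporal order (no shuffling).
X_train, X_test, y_train, y_test = train_test_split(
    features, bold, test_size=0.2, shuffle=False)

# One linear model per voxel, fit jointly; the ridge penalty is chosen by
# cross-validation over a log-spaced grid.
model = RidgeCV(alphas=np.logspace(0, 4, 10))
model.fit(X_train, y_train)
pred = model.predict(X_test)

# Encoding performance: correlation between predicted and held-out responses,
# computed separately for each voxel.
pred_c = pred - pred.mean(axis=0)
y_c = y_test - y_test.mean(axis=0)
voxel_corr = (pred_c * y_c).sum(axis=0) / (
    np.linalg.norm(pred_c, axis=0) * np.linalg.norm(y_c, axis=0))
print(f"median voxelwise correlation: {np.median(voxel_corr):.3f}")
```

In a real analysis, the held-out correlations would be mapped back onto the cortical surface to ask which regions are better explained by text-based versus speech-based model features.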
Towards an inclusive neurobiology of language
Understanding how our brains process language is one of the fundamental issues in cognitive science. To reach such an understanding, it is critical to cover the full spectrum of ways in which humans acquire and experience language. However, due to a myriad of socioeconomic factors, research has disproportionately focused on monolingual English speakers. In this talk, I present a series of studies that systematically target fundamental questions about bilingual language use across a range of conversational contexts, in both production and comprehension. The results lay the groundwork for a more inclusive theory of the neurobiology of language, with an architecture that assumes a common selection principle at each linguistic level and can account for attested features of both bilingual and monolingual speech in, but crucially also out of, experimental settings.
Hierarchical and distributed systems of language comprehension and learning in the human brain
COSYNE 2025
Canine white matter pathways potentially related to human language comprehension
FENS Forum 2024