Seminar · Past Event · Neuroscience

LLMs and Human Language Processing

Maryia Toneva, Ariel Goldstein, Jean-Remi King

Prof

Max Planck Institute of Software Systems; Hebrew University; École Normale Supérieure

Schedule

Thursday, November 28, 2024
2:00 PM Europe/Vienna

Host: Brain Prize Webinar Series 2024


Access Seminar

Event Information

Format: Past Seminar

Recording: Not available

Host: Brain Prize Webinar Series 2024

Seminar location: Not provided (no geocoded details are available for this content yet)

Abstract

This webinar convened researchers at the intersection of Artificial Intelligence and Neuroscience to investigate how large language models (LLMs) can serve as valuable “model organisms” for understanding human language processing. Presenters showcased evidence that brain recordings (fMRI, MEG, ECoG) acquired while participants read or listened to unconstrained speech can be predicted by representations extracted from state-of-the-art text- and speech-based LLMs. In particular, text-based LLMs tend to align better with higher-level language regions, capturing more semantic aspects, while speech-based LLMs excel at explaining early auditory cortical responses. However, purely low-level features can drive part of these alignments, complicating interpretations. New methods, including perturbation analyses, highlight which linguistic variables matter for each cortical area and time scale. Further, “brain tuning” of LLMs—fine-tuning on measured neural signals—can improve semantic representations and downstream language tasks. Despite open questions about interpretability and exact neural mechanisms, these results demonstrate that LLMs provide a promising framework for probing the computations underlying human language comprehension and production at multiple spatiotemporal scales.
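The brain alignment described above is typically obtained with a linear encoding model: fit a regularized regression from LLM-derived stimulus features to measured brain responses, then score predictions on held-out data. The sketch below illustrates that general recipe under assumed choices (GPT-2 layer-6 hidden states averaged over tokens, a synthetic voxel response matrix standing in for fMRI data, ridge regression); it is an illustration of the approach, not the presenters' actual pipeline.

```python
# Minimal encoding-model sketch: predict brain responses from LLM hidden states.
# Model choice (GPT-2), layer index, stimulus sentences, and the synthetic
# "fMRI" matrix are illustrative assumptions, not details from the seminar.
import numpy as np
import torch
from transformers import GPT2Tokenizer, GPT2Model
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

# Hypothetical stimulus: one sentence per acquired brain sample (e.g., one fMRI TR).
sentences = [f"Sentence number {i} of a hypothetical naturalistic story." for i in range(200)]

# Extract a mid-layer hidden state, averaged over tokens, as the LLM feature vector.
features = []
with torch.no_grad():
    for s in sentences:
        ids = tokenizer(s, return_tensors="pt")
        hidden = model(**ids).hidden_states[6]          # layer 6, shape (1, T, 768)
        features.append(hidden.mean(dim=1).squeeze(0).numpy())
X = np.stack(features)                                   # (n_samples, 768)

# Hypothetical brain data: n_samples x n_voxels response matrix (random placeholder).
rng = np.random.default_rng(0)
Y = rng.standard_normal((len(sentences), 500))

# Ridge regression from LLM features to voxel responses, one weight map per voxel.
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)
encoder = RidgeCV(alphas=np.logspace(-2, 4, 7)).fit(X_tr, Y_tr)
Y_hat = encoder.predict(X_te)

# Alignment is usually scored as the per-voxel correlation between predicted
# and measured responses on held-out data.
scores = [np.corrcoef(Y_te[:, v], Y_hat[:, v])[0, 1] for v in range(Y.shape[1])]
print(f"mean held-out voxel correlation: {np.mean(scores):.3f}")
```

In a real analysis the features would come from the same naturalistic story or speech the participants heard, time-aligned to the recordings and split into contiguous (rather than shuffled) train and test segments.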

Topics

ECoG, MEG, auditory cortical responses, brain tuning, fMRI, human language processing, large language models, perturbation analyses, semantic representations


Contact & Resources

No additional contact information available

Related Seminars

Continuous guidance of human goal-directed movements
Seminar (neuro) · 64% match
Dec 9, 2024 · VU University Amsterdam

Rett syndrome, MECP2 and therapeutic strategies
Seminar (neuro) · 64% match
The development of the iPS cell technology has revolutionized our ability to study development and diseases in defined in vitro cell culture systems. The talk will focus on Rett Syndrome and discuss t…
Dec 10, 2024 · Whitehead Institute for Biomedical Research and Department of Biology, MIT, Cambridge, USA

Genetic and epigenetic underpinnings of neurodegenerative disorders
Seminar (neuro) · 64% match
Pluripotent cells, including embryonic stem (ES) and induced pluripotent stem (iPS) cells, are used to investigate the genetic and epigenetic underpinnings of human diseases such as Parkinson’s, Alzhe…
Dec 10, 2024 · MIT Department of Biology