Topic spotlight: language

Discover seminars, jobs, and research tagged with language across World Wide.
88 curated items · 60 Seminars · 14 Positions · 14 ePosters
Updated about 15 hours ago

Position

SISSA cognitive neuroscience PhD

International School for Advanced Studies (SISSA)
Trieste
Dec 5, 2025

Up to 2 PhD positions in Cognitive Neuroscience are available at SISSA, Trieste, starting October 2024. SISSA is an elite postgraduate research institution for Maths, Physics and Neuroscience, located in Trieste, Italy. SISSA operates in English, and its faculty and student community is diverse and strongly international. The Cognitive Neuroscience group (https://phdcns.sissa.it/) hosts 6 research labs that study the neuronal bases of time and magnitude processing, visual perception, motivation and intelligence, language, tactile perception and learning, and neural computation. Our research is highly interdisciplinary; our approaches include behavioural, psychophysics, and neurophysiological experiments with humans and animals, as well as computational, statistical and mathematical models. Students from a broad range of backgrounds (physics, maths, medicine, psychology, biology) are encouraged to apply. The selection procedure is now open. The application deadline is 27 August 2024. Please apply here (https://www.sissa.it/bandi/ammissione-ai-corsi-di-philosophiae-doctor-posizioni-cofinanziate-dal-fondo-sociale-europeo), and see the admission procedure page (https://phdcns.sissa.it/admission-procedure) for more information. Note that the positions available for the Fall admission round are those funded by the "Fondo Sociale Europeo Plus", accessible through the first link above. Please contact the PhD Coordinator Mathew Diamond (diamond@sissa.it) and/or your prospective supervisor for more information and informal inquiries.

Position

N/A

Saarland University, the Max Planck Institute for Informatics, the Max Planck Institute for Software Systems, the CISPA Helmholtz Center for Information Security, and the German Research Center for Artificial Intelligence (DFKI)
Saarbrücken, Germany
Dec 5, 2025

The Research Training Group 2853 “Neuroexplicit Models of Language, Vision, and Action” is looking for 3 PhD students and 1 postdoc. Neuroexplicit models combine neural and human-interpretable (“explicit”) models in order to overcome the limitations that each model class has separately. They include neurosymbolic models, which combine neural and symbolic models, but also e.g. combinations of neural and physics-based models. In the RTG, we will improve the state of the art in natural language processing (“Language”), computer vision (“Vision”), and planning and reinforcement learning (“Action”) through the use of neuroexplicit models and investigate the cross-cutting design principles of effective neuroexplicit models (“Foundations”).

Position · Psychology

N/A

Complex Human Data Hub, University of Melbourne
University of Melbourne
Dec 5, 2025

We are seeking an outstanding researcher with expertise in computational or mathematical psychology to join the Complex Human Data Hub and contribute to the school’s research and teaching program. The CHDH has areas of strength in memory, perception, categorization, decision-making, language, cultural evolution, and social network analysis. We welcome applicants from all areas of mathematical psychology, computational cognitive science, computational behavioural science and computational social science and are especially interested in applicants who can build upon or complement our existing strengths. We particularly encourage applicants whose theoretical approaches and methodologies connect with social network processes and/or culture and cognition, or whose work links individual psychological processes to broader societal processes. We especially encourage women and other minorities to apply.

Position

Maxime Carrière

Freie Universität Berlin
Berlin, Germany
Dec 5, 2025

The ERC Advanced Grant “Material Constraints Enabling Human Cognition (MatCo)” at the Freie Universität Berlin aims to build network models of the human brain that mimic neurocognitive processes involved in language, communication and cognition. A main strategy is to use neural network models constrained by neuroanatomical and neurophysiological features of the human brain in order to explain aspects of human cognition. To this end, neural network simulations are performed and evaluated in neurophysiological and neurometabolic experiments. This neurocomputational and experimental research targets novel explanations of human language and cognition on the basis of neurobiological principles. In the MatCo project, 3 positions are currently available: 1 full-time position for a Scientific Researcher at the postdoctoral level, fixed-term (until 30.9.2025), Salary Scale 13 TV-L FU, ID: WiMi_MatCo100_08-2022; and 2 part-time positions (65%) for Scientific Researchers at the predoctoral level, fixed-term (until 30.9.2025), Salary Scale 13 TV-L FU, ID: WiMi_MatCo65_08-2022.

Position

N/A

Saarland University, the Max Planck Institute for Informatics, the Max Planck Institute for Software Systems, the CISPA Helmholtz Center for Information Security, and the German Research Center for Artificial Intelligence (DFKI)
Saarbrücken, Germany
Dec 5, 2025

The Research Training Group 2853 “Neuroexplicit Models of Language, Vision, and Action” is looking for 6 PhD students and 1 Postdoc. Neuroexplicit models combine neural and human-interpretable (“explicit”) models in order to overcome the limitations that each model class has separately. They include neurosymbolic models, which combine neural and symbolic models, but also e.g. combinations of neural and physics-based models. In the RTG, we will improve the state of the art in natural language processing (“Language”), computer vision (“Vision”), and planning and reinforcement learning (“Action”) through the use of neuroexplicit models and investigate the cross-cutting design principles of effective neuroexplicit models (“Foundations”).

Position

Alona Fyshe

Department of Psychology, University of Alberta, Alberta Machine Intelligence Institute (Amii)
Edmonton, University of Alberta
Dec 5, 2025

The Department of Psychology, University of Alberta, invites applications for a tenure-track position at the rank of Assistant Professor in Artificial Intelligence and Biological Cognition to commence with a start date as early as July 1, 2024. Exceptional candidates might be considered for hiring at the rank of Associate Professor. The position is part of a cluster hire in the intersection of AI/ML and other areas of research excellence within the University of Alberta that include Health, Energy, and Indigenous Initiatives in health and humanities, among others. The successful candidate will become an Amii Fellow, joining a highly collegial institute of world-class Artificial Intelligence and Machine Learning researchers, and will have access to Amii internal funding resources, administrative support, and a highly collaborative environment. The successful candidate will be nominated for a Canada CIFAR Artificial Intelligence (CCAI) Chair, by the Amii, which includes research funding for at least five years.

Position

Eugenio Piasini

International School for Advanced Studies (SISSA)
Trieste, Italy
Dec 5, 2025

Up to 6 PhD positions in Cognitive Neuroscience are available at SISSA, Trieste, starting October 2024. SISSA is an elite postgraduate research institution for Maths, Physics and Neuroscience, located in Trieste, Italy. SISSA operates in English, and its faculty and student community is diverse and strongly international. The Cognitive Neuroscience group hosts 7 research labs that study the neuronal bases of time and magnitude processing, visual perception, motivation and intelligence, language and reading, tactile perception and learning, and neural computation. Our research is highly interdisciplinary; our approaches include behavioural, psychophysics, and neurophysiological experiments with humans and animals, as well as computational, statistical and mathematical models. Students from a broad range of backgrounds (physics, maths, medicine, psychology, biology) are encouraged to apply. This year, one of the PhD scholarships is set aside for joint PhD projects across PhD programs within the Neuroscience department.

Position

Max Garagnani

Goldsmiths, University of London
Goldsmiths, University of London, Lewisham Way, New Cross, London SE14 6NW, UK
Dec 5, 2025

The MSc in Computational Cognitive Neuroscience at Goldsmiths, University of London is designed for students with a good degree in the biological/life sciences (psychology, neuroscience, biology, medicine, etc.) or physical sciences (computer science, mathematics, physics, engineering). The course provides a solid theoretical basis and experimental techniques in computational cognitive neuroscience. It includes the opportunity to apply knowledge in a practical research project, potentially in collaboration with industry partners. The programme covers fundamentals of cognitive neuroscience, computational modelling of biological neurons, neuronal circuits, higher brain functions, and includes the study of biologically constrained models of cognitive processes.

Position

Mathew Diamond

SISSA
Trieste, Italy
Dec 5, 2025

Up to 2 PhD positions in Cognitive Neuroscience are available at SISSA, Trieste, starting October 2024. SISSA is an elite postgraduate research institution for Maths, Physics and Neuroscience, located in Trieste, Italy. SISSA operates in English, and its faculty and student community is diverse and strongly international. The Cognitive Neuroscience group (https://phdcns.sissa.it/) hosts 6 research labs that study the neuronal bases of time and magnitude processing, visual perception, motivation and intelligence, language, tactile perception and learning, and neural computation. Our research is highly interdisciplinary; our approaches include behavioural, psychophysics, and neurophysiological experiments with humans and animals, as well as computational, statistical and mathematical models. Students from a broad range of backgrounds (physics, maths, medicine, psychology, biology) are encouraged to apply. The selection procedure is now open. The application deadline is 27 August 2024. Please apply here (https://www.sissa.it/bandi/ammissione-ai-corsi-di-philosophiae-doctor-posizioni-cofinanziate-dal-fondo-sociale-europeo), and see the admission procedure page (https://phdcns.sissa.it/admission-procedure) for more information. Note that the positions available for the current admission round are those funded by the “Fondo Sociale Europeo Plus”, accessible through the first link above.

Position

Eugenio Piasini

International School for Advanced Studies (SISSA)
Trieste
Dec 5, 2025

Up to 6 PhD positions in Cognitive Neuroscience are available at SISSA, Trieste, starting October 2025. SISSA is an elite postgraduate research institution for Maths, Physics and Neuroscience, located in Trieste, Italy. SISSA operates in English, and its faculty and student community is diverse and strongly international. The Cognitive Neuroscience group (https://phdcns.sissa.it/) hosts 6 research labs that study the neuronal bases of time and magnitude processing, neuronal foundations of perceptual experience and learning in various sensory modalities, motivation and intelligence, language, and neural computation. Our research is highly interdisciplinary; our approaches include behavioral, psychophysics, and neurophysiological experiments with humans and animals, as well as computational, statistical and mathematical models. Students from a broad range of backgrounds (physics, maths, medicine, psychology, biology) are encouraged to apply. The selection procedure is now open. The application deadline for the spring admission round is 20 March 2025 at 1pm CET. Please apply here, and see the admission procedure page for more information. Please contact the PhD Coordinator Mathew Diamond (diamond@sissa.it) and/or your prospective supervisor for more information and informal inquiries.

Seminar · Neuroscience · Recording

Functional Plasticity in the Language Network – evidence from Neuroimaging and Neurostimulation

Gesa Hartwigsen
University of Leipzig, Germany
May 19, 2025

Efficient cognition requires flexible interactions between distributed neural networks in the human brain. These networks adapt to challenges by flexibly recruiting different regions and connections. In this talk, I will discuss how we study functional network plasticity and reorganization with combined neurostimulation and neuroimaging across the adult life span. I will argue that short-term plasticity enables flexible adaptation to challenges, via functional reorganization. My key hypothesis is that disruption of higher-level cognitive functions such as language can be compensated for by the recruitment of domain-general networks in our brain. Examples from healthy young brains illustrate how neurostimulation can be used to temporarily interfere with efficient processing, probing short-term network plasticity at the systems level. Examples from people with dyslexia help to better understand network disorders in the language domain and outline the potential of facilitatory neurostimulation for treatment. I will also discuss examples from aging brains where plasticity helps to compensate for loss of function. Finally, examples from lesioned brains after stroke provide insight into the brain’s potential for long-term reorganization and recovery of function. Collectively, these results challenge the view of a modular organization of the human brain and argue for a flexible redistribution of function via systems plasticity.

Seminar · Neuroscience

Simulating Thought Disorder: Fine-Tuning Llama-2 for Synthetic Speech in Schizophrenia

Alban Elias Voppel
McGill University
Apr 30, 2025

Seminar · Open Source · Recording

Towards open meta-research in neuroimaging

Kendra Oudyk
ORIGAMI - Neural data science - https://neurodatascience.github.io/
Dec 8, 2024

When meta-research (research on research) makes an observation or points out a problem (such as a flaw in methodology), the project should be repeated later to determine whether the problem remains. For this we need meta-research that is reproducible and updatable, or living meta-research. In this talk, we introduce the concept of living meta-research, examine prequels to this idea, and point towards standards and technologies that could assist researchers in doing living meta-research. We introduce technologies like natural language processing, which can help automate meta-research and, in turn, make it easier to reproduce and update. Further, we showcase our open-source litmining ecosystem, which includes pubget (for downloading full-text journal articles), labelbuddy (for manually extracting information), and pubextract (for automatically extracting information). With these tools, you can simplify the tedious data collection and information extraction steps in meta-research, and then focus on analyzing the text. We will then describe some living meta-research projects to illustrate the use of these tools. For example, we’ll show how we used GPT along with our tools to extract information about study participants. Essentially, this talk will introduce you to the concept of meta-research, some tools for doing meta-research, and some examples. In particular, we want you to take away that there are many interesting open questions in meta-research, and you can easily learn the tools to answer them. Check out our tools at https://litmining.github.io/
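
To make the automated-extraction step concrete, here is a toy sketch of the kind of information extraction that a tool like pubextract performs; the regex, function, and example text are illustrative assumptions, not the litmining tools' actual code or API.

```python
import re

# Toy illustration (assumption): pull candidate sample sizes out of
# article text, the sort of step pubextract automates at scale.
PARTICIPANT_PATTERN = re.compile(
    r"\b[Nn]\s*=\s*(\d+)|\b(\d+)\s+(?:participants|subjects|patients)\b"
)

def extract_sample_sizes(text: str) -> list[int]:
    """Return candidate sample sizes mentioned in an article's text."""
    sizes = []
    for match in PARTICIPANT_PATTERN.finditer(text):
        group = match.group(1) or match.group(2)
        sizes.append(int(group))
    return sizes

abstract = "We recruited 24 participants (12 female); n = 24 completed fMRI."
print(extract_sample_sizes(abstract))  # [24, 24]
```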

Seminar · Neuroscience

LLMs and Human Language Processing

Maryia Toneva, Ariel Goldstein, Jean-Remi King
Max Planck Institute for Software Systems; Hebrew University; École Normale Supérieure
Nov 28, 2024

This webinar convened researchers at the intersection of Artificial Intelligence and Neuroscience to investigate how large language models (LLMs) can serve as valuable “model organisms” for understanding human language processing. Presenters showcased evidence that brain recordings (fMRI, MEG, ECoG) acquired while participants read or listened to unconstrained speech can be predicted by representations extracted from state-of-the-art text- and speech-based LLMs. In particular, text-based LLMs tend to align better with higher-level language regions, capturing more semantic aspects, while speech-based LLMs excel at explaining early auditory cortical responses. However, purely low-level features can drive part of these alignments, complicating interpretations. New methods, including perturbation analyses, highlight which linguistic variables matter for each cortical area and time scale. Further, “brain tuning” of LLMs—fine-tuning on measured neural signals—can improve semantic representations and downstream language tasks. Despite open questions about interpretability and exact neural mechanisms, these results demonstrate that LLMs provide a promising framework for probing the computations underlying human language comprehension and production at multiple spatiotemporal scales.
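
As a rough illustration of the alignment analyses described above, the sketch below fits a per-voxel ridge encoding model from LLM hidden states to brain responses and scores it with cross-validated R². All shapes and the synthetic data are assumptions; the speakers' actual pipelines (fMRI/MEG/ECoG preprocessing, hemodynamic delays, nested cross-validation) are considerably more involved.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

# Synthetic stand-ins: one LLM hidden state per word, one response per voxel.
rng = np.random.default_rng(0)
n_words, n_features, n_voxels = 500, 768, 100

llm_states = rng.standard_normal((n_words, n_features))
true_map = rng.standard_normal((n_features, n_voxels)) * 0.1
brain = llm_states @ true_map + rng.standard_normal((n_words, n_voxels))  # fake "fMRI"

# Fit a ridge model per voxel; alignment = mean cross-validated R^2.
scores = []
for v in range(n_voxels):
    r2 = cross_val_score(Ridge(alpha=10.0), llm_states, brain[:, v], cv=5, scoring="r2")
    scores.append(r2.mean())
print(f"mean cross-validated R^2 across voxels: {np.mean(scores):.3f}")
```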

Seminar · Psychology

How Generative AI is Revolutionizing the Software Developer Industry

Luca Di Grazia
Università della Svizzera Italiana
Sep 30, 2024

Generative AI is fundamentally transforming the software development industry by improving processes such as software testing, bug detection, bug fixes, and developer productivity. This talk explores how AI-driven techniques, particularly large language models (LLMs), are being utilized to generate realistic test scenarios, automate bug detection and repair, and streamline development workflows. As these technologies evolve, they promise to improve software quality and efficiency significantly. The discussion will cover key methodologies, challenges, and the future impact of generative AI on the software development lifecycle, offering a comprehensive overview of its revolutionary potential in the industry.

Seminar · Artificial Intelligence · Recording

Llama 3.1 Paper: The Llama Family of Models

Vibhu Sapra
Jul 28, 2024

Modern artificial intelligence (AI) systems are powered by foundation models. This paper presents a new set of foundation models, called Llama 3. It is a herd of language models that natively support multilinguality, coding, reasoning, and tool usage. Our largest model is a dense Transformer with 405B parameters and a context window of up to 128K tokens. This paper presents an extensive empirical evaluation of Llama 3. We find that Llama 3 delivers comparable quality to leading language models such as GPT-4 on a plethora of tasks. We publicly release Llama 3, including pre-trained and post-trained versions of the 405B parameter language model and our Llama Guard 3 model for input and output safety. The paper also presents the results of experiments in which we integrate image, video, and speech capabilities into Llama 3 via a compositional approach. We observe this approach performs competitively with the state-of-the-art on image, video, and speech recognition tasks. The resulting models are not yet being broadly released as they are still under development.
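
For readers who want to try the released weights, a minimal sketch using Hugging Face transformers follows. It targets a small 8B variant (the 405B model needs multi-GPU serving) and assumes access to the gated meta-llama repositories plus the accelerate package for device placement; the exact repo id is an assumption.

```python
# Hedged sketch: load a small Llama 3.1 checkpoint and generate text.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.1-8B-Instruct"  # assumed repo id; requires gated access
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # needs accelerate

inputs = tokenizer("Briefly explain what a foundation model is.",
                   return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```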

Seminar · Open Source

Open source FPGA tools for building research devices

Edmund Humenberger
CEO @ Symbiotic EDA
Jun 24, 2024

Edmund will present why to use FPGAs when building scientific instruments, when and why to use open source FPGA tools, the history of their development, their development status, currently supported FPGA families and functions, current developments in design languages and tools, the community, freely available design blocks, and possible future developments.

Seminar · Neuroscience

Trends in NeuroAI - Brain-like topography in transformers (Topoformer)

Nicholas Blauch
Jun 6, 2024

Dr. Nicholas Blauch will present on his work "Topoformer: Brain-like topographic organization in transformer language models through spatial querying and reweighting". Dr. Blauch is a postdoctoral fellow in the Harvard Vision Lab advised by Talia Konkle and George Alvarez. Paper link: https://openreview.net/pdf?id=3pLMzgoZSA Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri | https://groups.google.com/g/medarc-fmri).

Seminar · Artificial Intelligence · Recording

Improving Language Understanding by Generative Pre Training

Amgad Hasan
Apr 22, 2024

Natural language understanding comprises a wide range of diverse tasks such as textual entailment, question answering, semantic similarity assessment, and document classification. Although large unlabeled text corpora are abundant, labeled data for learning these specific tasks is scarce, making it challenging for discriminatively trained models to perform adequately. We demonstrate that large gains on these tasks can be realized by generative pre-training of a language model on a diverse corpus of unlabeled text, followed by discriminative fine-tuning on each specific task. In contrast to previous approaches, we make use of task-aware input transformations during fine-tuning to achieve effective transfer while requiring minimal changes to the model architecture. We demonstrate the effectiveness of our approach on a wide range of benchmarks for natural language understanding. Our general task-agnostic model outperforms discriminatively trained models that use architectures specifically crafted for each task, significantly improving upon the state of the art in 9 out of the 12 tasks studied. For instance, we achieve absolute improvements of 8.9% on commonsense reasoning (Stories Cloze Test), 5.7% on question answering (RACE), and 1.5% on textual entailment (MultiNLI).
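
A compact sketch of the paper's two-stage recipe: stage 1 minimizes next-token cross-entropy on unlabeled text; stage 2 adds a task head and keeps the language-modeling loss as an auxiliary objective. Dimensions are toy values, and an LSTM stands in for the paper's 12-layer Transformer decoder purely for brevity; both substitutions are assumptions.

```python
import torch
import torch.nn as nn

vocab, d_model, n_classes = 1000, 64, 2

embed = nn.Embedding(vocab, d_model)
encoder = nn.LSTM(d_model, d_model, batch_first=True)  # stand-in for the Transformer
lm_head = nn.Linear(d_model, vocab)
clf_head = nn.Linear(d_model, n_classes)

# Stage 1: generative pre-training (predict the next token).
tokens = torch.randint(0, vocab, (8, 20))  # a batch of unlabeled sequences
h, _ = encoder(embed(tokens[:, :-1]))
lm_loss = nn.functional.cross_entropy(
    lm_head(h).reshape(-1, vocab), tokens[:, 1:].reshape(-1)
)

# Stage 2: discriminative fine-tuning with an auxiliary LM objective.
labels = torch.randint(0, n_classes, (8,))  # labeled task batch
h, _ = encoder(embed(tokens))
clf_loss = nn.functional.cross_entropy(clf_head(h[:, -1]), labels)
fine_tune_loss = clf_loss + 0.5 * lm_loss
print(lm_loss.item(), fine_tune_loss.item())
```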

Seminar · Artificial Intelligence · Recording

A Comprehensive Overview of Large Language Models

Ivan Leo
Mar 14, 2024

Large Language Models (LLMs) have recently demonstrated remarkable capabilities in natural language processing tasks and beyond. This success of LLMs has led to a large influx of research contributions in this direction. These works encompass diverse topics such as architectural innovations, better training strategies, context length improvements, fine-tuning, multi-modal LLMs, robotics, datasets, benchmarking, efficiency, and more. With the rapid development of techniques and regular breakthroughs in LLM research, it has become considerably challenging to perceive the bigger picture of the advances in this direction. Considering the rapidly emerging plethora of literature on LLMs, it is imperative that the research community is able to benefit from a concise yet comprehensive overview of the recent developments in this field. This article provides an overview of the existing literature on a broad range of LLM-related concepts. Our self-contained comprehensive overview of LLMs discusses relevant background concepts along with covering the advanced topics at the frontier of research in LLMs. This review article is intended to not only provide a systematic survey but also a quick comprehensive reference for the researchers and practitioners to draw insights from extensive informative summaries of the existing works to advance the LLM research.

Seminar · Neuroscience

Dyslexia, Rhythm, Language and the Developing Brain

Usha Goswami
University of Cambridge, UK
Mar 4, 2024

Seminar · Neuroscience

Dyslexia, Rhythm, Language and the Developing Brain

Usha Goswami CBE
University of Cambridge
Feb 21, 2024

Recent insights from auditory neuroscience provide a new perspective on how the brain encodes speech. Using these recent insights, I will provide an overview of key factors underpinning individual differences in children’s development of language and phonology, providing a context for exploring atypical reading development (dyslexia). Children with dyslexia are relatively insensitive to acoustic cues related to speech rhythm patterns. This lack of rhythmic sensitivity is related to the atypical neural encoding of rhythm patterns in speech by the brain. I will describe our recent data from infants as well as children, demonstrating developmental continuity in the key neural variables.

Seminar · Neuroscience

Deep language models as a cognitive model for natural language processing in the human brain

Uri Hasson
Princeton University
Dec 6, 2023

Seminar · Neuroscience

Modeling Primate Vision (and Language)

Martin Schrimpf
NeuroX, EPFL
Dec 5, 2023

Seminar · Cognition

Great ape interaction: Ladyginian but not Gricean

Thom Scott-Phillips
Institute for Logic, Cognition, Language and Information
Nov 20, 2023

Non-human great apes inform one another in ways that can seem very humanlike. Especially in the gestural domain, their behavior exhibits many similarities with human communication, meeting widely used empirical criteria for intentionality. At the same time, there remain some manifest differences. How to account for these similarities and differences in a unified way remains a major challenge. This presentation will summarise the arguments developed in a recent paper with Christophe Heintz. We make a key distinction between the expression of intentions (Ladyginian) and the expression of specifically informative intentions (Gricean), and we situate this distinction within a ‘special case of’ framework for classifying different modes of attention manipulation. The paper also argues that the attested tendencies of great ape interaction—for instance, to be dyadic rather than triadic, to be about the here-and-now rather than ‘displaced’—are products of its Ladyginian but not Gricean character. I will reinterpret video footage of great ape gesture as Ladyginian but not Gricean, and distinguish several varieties of meaning that are continuous with one another. We conclude that the evolutionary origins of linguistic meaning lie in gradual changes not in communication systems as such, but rather in social cognition, and specifically in what modes of attention manipulation are enabled by a species’ cognitive phenotype: first Ladyginian and in turn Gricean. The second of these shifts rendered humans, and only humans, ‘language ready’.

Seminar · Neuroscience

Dyslexias in words and numbers

Naama Friedmann
Tel Aviv University
Nov 13, 2023

Seminar · Psychology

Enhancing Qualitative Coding with Large Language Models: Potential and Challenges

Kim Uittenhove & Olivier Mucchiut
AFC Lab / University of Lausanne
Oct 15, 2023

Qualitative coding is the process of categorizing and labeling raw data to identify themes, patterns, and concepts within qualitative research. This process requires significant time, reflection, and discussion, often characterized by inherent subjectivity and uncertainty. Here, we explore the possibility of leveraging large language models (LLMs) to enhance the process and assist researchers with qualitative coding. LLMs, trained on extensive human-generated text, possess an architecture that renders them capable of understanding the broader context of a conversation or text. This allows them to extract patterns and meaning effectively, making them particularly useful for the accurate extraction and coding of relevant themes. In our current approach, we employed the ChatGPT 3.5 Turbo API, integrating it into the qualitative coding process for data from the SWISS100 study, specifically focusing on data derived from centenarians' experiences during the Covid-19 pandemic, as well as a systematic centenarian literature review. We provide several instances illustrating how our approach can assist researchers with extracting and coding relevant themes. With data from human coders on hand, we highlight points of convergence and divergence between AI and human thematic coding in the context of these data. Moving forward, our goal is to enhance the prototype and integrate it within an LLM designed for local storage and operation (LLaMA). Our initial findings highlight the potential of AI-enhanced qualitative coding, yet they also pinpoint areas requiring attention. Based on these observations, we formulate tentative recommendations for the optimal integration of LLMs in qualitative coding research. Further evaluations using varied datasets and comparisons among different LLMs will shed more light on the question of whether and how to integrate these models into this domain.
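
A hedged sketch of this kind of LLM-assisted coding is below, written against the current OpenAI Python client rather than the original ChatGPT 3.5 Turbo integration; the theme list and prompt are invented placeholders, not the SWISS100 coding scheme.

```python
# Illustrative sketch only: ask an LLM to assign one theme per excerpt.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
THEMES = ["social isolation", "health worries", "resilience", "daily routines"]  # placeholder themes

def code_excerpt(excerpt: str) -> str:
    """Return the single best-fitting theme for an interview excerpt."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": f"Assign the single best-fitting theme from {THEMES} "
                        "to the interview excerpt. Answer with the theme only."},
            {"role": "user", "content": excerpt},
        ],
        temperature=0,
    )
    return response.choices[0].message.content.strip()

print(code_excerpt("I could not see my grandchildren for months, "
                   "but I kept busy in the garden."))
```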

Seminar · Neuroscience · Recording

Consciousness in the age of mechanical minds

Robert Pepperell
Cardiff Metropolitan University
May 30, 2023

We are now clearly entering a new age in our relationship with machines. The power of AI natural language processors and image generators has rapidly exceeded the expectations of even those who developed them. Serious questions are now being asked about the extent to which machines could become — or perhaps already are — sentient or conscious. Do AI machines understand the instructions they are given and the answers they provide? In this talk I will consider the prospects for conscious machines, by which I mean machines that have feelings, know about their own existence, and about ours. I will suggest that the recent focus on information processing in models of consciousness, in which the brain is treated as a kind of digital computer, has misled us about the nature of consciousness and how it is produced in biological systems. Treating the brain as an energy processing system is more likely to yield answers to these fundamental questions and help us understand how and when machines might become minds.

Seminar · Psychology

Diagnosing dementia using Fastball neurocognitive assessment

George Stothart
University of Bath
Apr 18, 2023

Fastball is a novel, fast, passive biomarker of cognitive function that uses cheap, scalable electroencephalography (EEG) technology. It is sensitive to early dementia; independent of language, education, effort, and anxiety; and can be used in any setting, including patients’ homes. It can capture a range of cognitive functions including semantic memory, recognition memory, attention and visual function. We have shown that Fastball is sensitive to cognitive dysfunction in Alzheimer’s disease and Mild Cognitive Impairment, with data collected in patients’ homes using low-cost portable EEG. We are now preparing for significant scale-up and the validation of Fastball in primary and secondary care.

Seminar · Neuroscience

Investigating semantics above and beyond language: a clinical and cognitive neuroscience approach

Valentina Borghesani
University of Geneva, Switzerland & NCCR Evolving Language
Mar 15, 2023

The ability to build, store, and manipulate semantic representations lies at the core of all our (inter)actions. Combining evidence from cognitive neuroimaging and experimental neuropsychology, I study the neurocognitive correlates of semantic knowledge in relation to other cognitive functions, chiefly language. In this talk, I will start by reviewing neuroimaging findings supporting the idea that semantic representations are encoded in distributed yet specialized cortical areas (1), and rapidly recovered (2) according to the requirement of the task at hand (3). I will then focus on studies conducted in neurodegenerative patients, offering a unique window on the key role played by a structurally and functionally heterogeneous piece of cortex: the anterior temporal lobe (4,5). I will present pathological, neuroimaging, cognitive, and behavioral data illustrating how damages to language-related networks can affect or spare semantic knowledge as well as possible paths to functional compensation (6,7). Time permitting, we will discuss the neurocognitive dissociation between nouns and verbs (8) and how verb production is differentially impacted by specific language impairments (9).

Seminar · Neuroscience · Recording

Verb metaphors are processed as analogies

Daniel King
Northwestern University
Mar 8, 2023

Metaphor is a pervasive phenomenon in language and cognition. To date, the vast majority of psycholinguistic research on metaphor has focused on noun-noun metaphors of the form An X is a Y (e.g., My job is a jail). Yet there is evidence that verb metaphor (e.g., I sailed through my exams) is more common. Despite this, comparatively little work has examined how verb metaphors are processed. In this talk, I will propose a novel account for verb metaphor comprehension: verb metaphors are understood in the same way that analogies are—as comparisons processed via structure-mapping. I will discuss the predictions that arise from applying the analogical framework to verb metaphor and present a series of experiments showing that verb metaphoric extension is consistent with those predictions.

Seminar · Neuroscience · Recording

Children-Agent Interaction For Assessment and Rehabilitation: From Linguistic Skills To Mental Well-being

Micole Spitale
Department of Computer Science and Technology, University of Cambridge
Feb 6, 2023

Socially Assistive Robots (SARs) have shown great potential to help children in therapeutic and healthcare contexts. SARs have been used for companionship, learning enhancement, social and communication skills rehabilitation for children with special needs (e.g., autism), and mood improvement. Robots can be used as novel tools to assess and rehabilitate children’s communication skills and mental well-being by providing affordable and accessible therapeutic and mental health services. In this talk, I will present the various studies I have conducted during my PhD and at the Cambridge Affective Intelligence and Robotics Lab to explore how robots can help assess and rehabilitate children’s communication skills and mental well-being. More specifically, I will provide both quantitative and qualitative results and findings from (i) an exploratory study with children with autism and global developmental disorders to investigate the use of intelligent personal assistants in therapy; (ii) an empirical study involving children with and without language disorders interacting with a physical robot, a virtual agent, and a human counterpart to assess their linguistic skills; (iii) an 8-week longitudinal study involving children with autism and language disorders who interacted either with a physical or a virtual robot to rehabilitate their linguistic skills; and (iv) an empirical study to aid the assessment of mental well-being in children. These findings can inform and help the child-robot interaction community design and develop new adaptive robots to help assess and rehabilitate linguistic skills and mental well-being in children.

Seminar · Neuroscience · Recording

Applying Structural Alignment theory to Early Verb Learning

Jane Childers
Trinity University
Feb 2, 2023

Learning verbs is difficult and critical to learning one's native language. Children appear to benefit from seeing multiple events and comparing them to each other, and structural alignment theory provides a good theoretical framework to guide research into how preschool children may be comparing events as they learn new verbs. The talk will include 6 studies of early verb learning that make use of eye-tracking procedures as well as other behavioral (pointing) procedures, and that test key predictions from SA theory including the prediction that seeing similar examples before more varied examples helps observers learn how to compare (progressive alignment) and the prediction that when events have very low alignability with other events, that is one cue that the events should be ignored. Whether or how statistical learning may also be at work will be considered.

Seminar · Neuroscience

Bridging clinical and cognitive neuroscience together to investigate semantics, above and beyond language

Valentina Borghesani
University of Geneva, Switzerland & NCCR Evolving Language
Jan 19, 2023

We will explore how neuropsychology can be leveraged to directly test cognitive neuroscience theories using the case of frontotemporal dementias affecting the language network. Specifically, we will focus on pathological, neuroimaging, and cognitive data from primary progressive aphasia. We will see how they can help us investigate the reading network, semantic knowledge organisation, and grammatical categories processing. Time permitting, the end of the talk will cover the temporal dynamics of semantic dimensions recovery and the role played by the task.

Seminar · Neuroscience · Recording

Geometry of concept learning

Haim Sompolinsky
The Hebrew University of Jerusalem and Harvard University
Jan 3, 2023

Understanding the human ability to learn novel concepts from just a few sensory experiences is a fundamental problem in cognitive neuroscience. I will describe recent work with Ben Sorscher and Surya Ganguli (PNAS, October 2022) in which we propose a simple, biologically plausible, and mathematically tractable neural mechanism for few-shot learning of naturalistic concepts. We posit that the concepts that can be learned from few examples are defined by tightly circumscribed manifolds in the neural firing-rate space of higher-order sensory areas. Discrimination between novel concepts is performed by downstream neurons implementing a ‘prototype’ decision rule, in which a test example is classified according to the nearest prototype constructed from the few training examples. We show that prototype few-shot learning achieves high few-shot learning accuracy on natural visual concepts using both macaque inferotemporal cortex representations and deep neural network (DNN) models of these representations. We develop a mathematical theory that links few-shot learning to the geometric properties of the neural concept manifolds and demonstrate its agreement with our numerical simulations across different DNNs as well as different layers. Intriguingly, we observe striking mismatches between the geometry of manifolds in intermediate stages of the primate visual pathway and in trained DNNs. Finally, we show that linguistic descriptors of visual concepts can be used to discriminate images belonging to novel concepts, without any prior visual experience of these concepts (a task known as ‘zero-shot’ learning), indicating a remarkable alignment of manifold representations of concepts in visual and language modalities. I will discuss ongoing efforts to extend this work to other high-level cognitive tasks.
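
The prototype decision rule itself is simple enough to sketch in a few lines. The code below uses synthetic representations (an assumption for illustration), whereas the paper derives them from macaque IT recordings and DNN layers: each novel concept is summarized by the mean of a few example vectors, and a test item is assigned to the nearest prototype.

```python
import numpy as np

# Sketch of prototype few-shot classification with synthetic features.
rng = np.random.default_rng(1)
n_dim, k_shot = 50, 5

centers = rng.standard_normal((2, n_dim)) * 2.0                 # two novel concepts
train = centers[:, None, :] + rng.standard_normal((2, k_shot, n_dim))
prototypes = train.mean(axis=1)                                 # one prototype per concept

test = centers[0] + rng.standard_normal(n_dim)                  # a test example of concept 0
dists = np.linalg.norm(prototypes - test, axis=1)               # distance to each prototype
print("predicted concept:", np.argmin(dists))                   # expected: 0
```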

Seminar · Neuroscience · Recording

Modelling metaphor comprehension as a form of analogizing

Gerard Steen
University of Amsterdam
Nov 30, 2022

What do people do when they comprehend language in discourse? According to many psychologists, they build and maintain cognitive representations of utterances in four complementary mental models for discourse that interact with each other: the surface text, the text base, the situation model, and the context model. When people encounter metaphors in these utterances, they need to incorporate them into each of these mental representations for the discourse. Since influential metaphor theories define metaphor as a form of (figurative) analogy, involving cross-domain mapping of a smaller or greater extent, the general expectation has been that metaphor comprehension is also based on analogizing. This expectation, however, has been partly borne out by the data, but not completely. There is no one-to-one relationship between metaphor as (conceptual) structure (analogy) and metaphor as (psychological) process (analogizing). According to Deliberate Metaphor Theory (DMT), only some metaphors are handled by analogy. Instead, most metaphors are presumably handled by lexical disambiguation. This is a hypothesis that brings together most metaphor research in a provocatively new way: it means that most metaphors are not processed metaphorically, which produces a paradox of metaphor. In this talk I will sketch out how this paradox arises and how it can be resolved by a new version of DMT, which I have described in my forthcoming book Slowing metaphor down: Updating Deliberate Metaphor Theory (currently under review). In this theory, the distinction between, but also the relation between, analogy in metaphorical structure versus analogy in metaphorical process is of central importance.

Seminar · Neuroscience · Recording

Do large language models solve verbal analogies like children do?

Claire Stevenson
University of Amsterdam
Nov 16, 2022

Analogical reasoning – learning about new things by relating them to previous knowledge – lies at the heart of human intelligence and creativity and forms the core of educational practice. Children start creating and using analogies early on, making incredible progress moving from associative processes to successful analogical reasoning. For example, if we ask a four-year-old “Horse belongs to stable like chicken belongs to …?” they may use association and reply “egg”, whereas older children will likely give the intended relational response “chicken coop” (or another term to refer to a chicken’s home). Interestingly, despite state-of-the-art AI language models having superhuman encyclopedic knowledge and superior memory and computational power, our pilot studies show that these large language models often make mistakes, providing associative rather than relational responses to verbal analogies. For example, when we asked four- to eight-year-olds to solve the analogy “body is to feet as tree is to …?” they responded “roots” without hesitation, but large language models tend to provide more associative responses such as “leaves”. In this study we examine the similarities and differences between children's and six large language models' (Dutch/multilingual models: RobBERT, BERT-je, M-BERT, GPT-2, M-GPT, Word2Vec and Fasttext) responses to verbal analogies extracted from an online adaptive learning environment, where >14,000 7-12 year-olds from the Netherlands solved 20 or more items from a database of 900 Dutch language verbal analogies.
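
A common baseline for such verbal analogies is vector arithmetic over word embeddings. The sketch below uses an English GloVe space via gensim as a stand-in (an assumption for illustration; the study itself used Dutch and multilingual models on Dutch items).

```python
# Sketch of the classic "a : b :: c : ?" embedding baseline,
# approximated by the vector b - a + c.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")  # downloads ~130 MB on first use

for a, b, c in [("horse", "stable", "chicken"), ("body", "feet", "tree")]:
    candidates = vectors.most_similar(positive=[b, c], negative=[a], topn=3)
    print(f"{a} : {b} :: {c} : {[word for word, _ in candidates]}")
```

Note that the nearest neighbours returned this way are often associates of c (the "leaves"-style errors described above) rather than true relational answers, which is exactly the contrast the study probes.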

Seminar · Neuroscience

Exploring emotion in the expression of ape gesture

Cat Hobaiter
University of St Andrews
Nov 7, 2022

Language appears to be the most complex system of animal communication described to date. However, its precursors were present in the communication of our evolutionary ancestors and are likely shared by our modern ape cousins.  All great apes, including humans, employ a rich repertoire of vocalizations, facial expressions, and gestures. Great ape gestural repertoires are particularly elaborate, with ape species employing over 80 different gesture types intentionally: that is towards a recipient with a specific goal in mind. Intentional usage allows us to ask not only what information is encoded in ape gestures, but what do apes mean when they use them. I will discuss recent research on ape gesture, on how we approach the question of decoding meaning, and how with new methods we are starting to integrate long overlooked aspects of ape gesture such as group and individual variation, and expression and emotion into our study of these signals.

Seminar · Neuroscience · Recording

AI-assisted language learning: Assessing learners who memorize and reason by analogy

Pierre-Alexandre Murena
University of Helsinki
Oct 5, 2022

Vocabulary learning applications like Duolingo have millions of users around the world, but yet are based on very simple heuristics to choose teaching material to provide to their users. In this presentation, we will discuss the possibility to develop more advanced artificial teachers, which would be based on modeling of the learner’s inner characteristics. In the case of teaching vocabulary, understanding how the learner memorizes is enough. When it comes to picking grammar exercises, it becomes essential to assess how the learner reasons, in particular by analogy. This second application will illustrate how analogical and case-based reasoning can be employed in an alternative way in education: not as the teaching algorithm, but as a part of the learner’s model.

Seminar · Neuroscience · Recording

A Framework for a Conscious AI: Viewing Consciousness through a Theoretical Computer Science Lens

Lenore and Manuel Blum
Carnegie Mellon University
Aug 4, 2022

We examine consciousness from the perspective of theoretical computer science (TCS), a branch of mathematics concerned with understanding the underlying principles of computation and complexity, including the implications and surprising consequences of resource limitations. We propose a formal TCS model, the Conscious Turing Machine (CTM). The CTM is influenced by Alan Turing's simple yet powerful model of computation, the Turing machine (TM), and by the global workspace theory (GWT) of consciousness originated by cognitive neuroscientist Bernard Baars and further developed by him, Stanislas Dehaene, Jean-Pierre Changeux, George Mashour, and others. However, the CTM is not a standard Turing Machine. It’s not the input-output map that gives the CTM its feeling of consciousness, but what’s under the hood. Nor is the CTM a standard GW model. In addition to its architecture, what gives the CTM its feeling of consciousness is its predictive dynamics (cycles of prediction, feedback and learning), its internal multi-modal language Brainish, and certain special Long Term Memory (LTM) processors, including its Inner Speech and Model of the World processors. Phenomena generally associated with consciousness, such as blindsight, inattentional blindness, change blindness, dream creation, and free will, are considered. Explanations derived from the model draw confirmation from consistencies at a high level, well above the level of neurons, with the cognitive neuroscience literature. Reference. L. Blum and M. Blum, "A theory of consciousness from a theoretical computer science perspective: Insights from the Conscious Turing Machine," PNAS, vol. 119, no. 21, 24 May 2022. https://www.pnas.org/doi/epdf/10.1073/pnas.2115934119

Seminar · Neuroscience

Studying genetic overlap between ASD risk and related traits: From polygenic pleiotropy to disorder-specific profiles

Beate St Pourcain
Max Planck Institute for Psycholinguistics
Jun 14, 2022

Seminar · Neuroscience · Recording

The neural basis of flexible semantic cognition (BACN Mid-career Prize Lecture 2022)

Elizabeth Jefferies
Department of Psychology, University of York, UK
May 24, 2022

Semantic cognition brings meaning to our world – it allows us to make sense of what we see and hear, and to produce adaptive thoughts and behaviour. Since we have a wealth of information about any given concept, our store of knowledge is not sufficient for successful semantic cognition; we also need mechanisms that can steer the information that we retrieve so it suits the context or our current goals. This talk traces the neural networks that underpin this flexibility in semantic cognition. It draws on evidence from multiple methods (neuropsychology, neuroimaging, neural stimulation) to show that two interacting heteromodal networks underpin different aspects of flexibility. Regions including anterior temporal cortex and left angular gyrus respond more strongly when semantic retrieval follows highly-related concepts or multiple convergent cues; the multivariate responses in these regions correspond to context-dependent aspects of meaning. A second network centred on left inferior frontal gyrus and left posterior middle temporal gyrus is associated with controlled semantic retrieval, responding more strongly when weak associations are required or there is more competition between concepts. This semantic control network is linked to creativity and also captures context-dependent aspects of meaning; however, this network specifically shows more similar multivariate responses across trials when association strength is weak, reflecting a common controlled retrieval state when more unusual associations are the focus. Evidence from neuropsychology, fMRI and TMS suggests that this semantic control network is distinct from multiple-demand cortex which supports executive control across domains, although challenging semantic tasks recruit both networks. The semantic control network is juxtaposed between regions of default mode network that might be sufficient for the retrieval of strong semantic relationships and multiple-demand regions in the left hemisphere, suggesting that the large-scale organisation of flexible semantic cognition can be understood in terms of cortical gradients that capture systematic functional transitions that are repeated in temporal, parietal and frontal cortex.

Seminar · Neuroscience · Recording

Children’s inference of verb meanings: Inductive, analogical and abductive inference

Mutsumi Imai
Keio University
May 18, 2022

Children need inference in order to learn the meanings of words. They must infer the referent from the situation in which a target word is said. Furthermore, to be able to use the word in other situations, they also need to infer what other referents the word can be generalized to. As verbs refer to relations between arguments, verb learning requires relational analogical inference, something which is challenging to young children. To overcome this difficulty, young children recruit a diverse range of cues in their inference of verb meanings, including, but not limited to, syntactic cues and social and pragmatic cues as well as statistical cues. They also utilize perceptual similarity (object similarity) in progressive alignment to extract relational verb meanings and further to gain insights about relational verb meanings. However, just having a list of these cues is not useful: the cues must be selected, combined, and coordinated to produce the optimal interpretation in a particular context. This process involves abductive reasoning, similar to what scientists do to form hypotheses from a range of facts or evidence. In this talk, I discuss how children use a chain of inferences to learn meanings of verbs. I consider not only the process of analogical mapping and progressive alignment, but also how children use abductive inference to find the source of analogy and gain insights into the general principles underlying verb learning. I also present recent findings from my laboratory that show that prelinguistic human infants use a rudimentary form of abductive reasoning, which enables the first step of word learning.

Seminar · Neuroscience

It’s not over our heads: Why human language needs a body

Michał B. Paradowski
Institute of Applied Linguistics, University of Warsaw
May 8, 2022

In the ‘orthodox’ view, cognition has been seen as manipulation of symbolic, mental representations, separate from the body. This dualist Cartesian approach characterised much of twentieth-century thought and is still taken for granted by many people today. Language, too, has for a long time been treated across scientific domains as a system operating largely independently from perception, action, and the body (articulatory-perceptual organs notwithstanding). This could lead one into believing that to emulate linguistic behaviour, it would suffice to develop ‘software’ operating on abstract representations that would work on any computational machine. Yet the brain is not the sole problem-solving resource we have at our disposal. The disembodied picture is inaccurate for numerous reasons, which will be presented addressing the issue of the indissoluble link between cognition, language, body, and environment in understanding and learning. The talk will conclude with implications and suggestions for pedagogy, relevant for disciplines as diverse as instruction in language, mathematics, and sports.

Seminar · Neuroscience

Language Representations in the Human Brain: A naturalistic approach

Fatma Deniz
TU Berlin & Berkeley
Apr 26, 2022

Natural language is strongly context-dependent and can be perceived through different sensory modalities. For example, humans can easily comprehend the meaning of complex narratives presented through auditory speech, written text, or visual images. To understand how complex language-related information is represented in the human brain there is a necessity to map the different linguistic and non-linguistic information perceived under different modalities across the cerebral cortex. To map this information to the brain, I suggest following a naturalistic approach and observing the human brain performing tasks in its naturalistic setting, designing quantitative models that transform real-world stimuli into specific hypothesis-related features, and building predictive models that can relate these features to brain responses. In my talk, I will present models of brain responses collected using functional magnetic resonance imaging while human participants listened to or read natural narrative stories. Using natural text and vector representations derived from natural language processing tools I will present how we can study language processing in the human brain across modalities, in different levels of temporal granularity, and across different languages.
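
The voxelwise encoding approach described above can be sketched as ridge regression on stimulus features duplicated at several time delays, so the model can absorb the hemodynamic lag. All shapes, the delay set, and the synthetic data below are assumptions standing in for real story stimuli and fMRI; real pipelines use banded ridge and held-out stories.

```python
import numpy as np
from sklearn.linear_model import RidgeCV

rng = np.random.default_rng(2)
n_trs, n_feat, delays = 300, 20, (1, 2, 3, 4)

features = rng.standard_normal((n_trs, n_feat))  # e.g. word-embedding features per TR

def add_delays(X: np.ndarray, delays: tuple) -> np.ndarray:
    """Stack time-shifted copies of X so the model can learn a lagged response."""
    return np.hstack([np.roll(X, d, axis=0) for d in delays])

X = add_delays(features, delays)
# Synthetic voxel responding to the features at a 3-TR lag, plus noise.
voxel = np.roll(features, 3, axis=0) @ rng.standard_normal(n_feat) + rng.standard_normal(n_trs)

model = RidgeCV(alphas=np.logspace(0, 4, 10)).fit(X[:250], voxel[:250])
pred = model.predict(X[250:])
print("test-set correlation:", np.corrcoef(pred, voxel[250:])[0, 1].round(3))
```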

Seminar · Neuroscience · Recording

The logopenic variant of primary progressive aphasia (lvPPA): language, cognitive, neuroradiological issues

Robert Rusina and Zsolt Cséfalvay
Thomayer University Hospital Videnska, Prague, Czech Republic; Comenius University, Bratislava, Slovakia
Apr 4, 2022

Seminar · Neuroscience

An executive control approach to language production

Etienne Koechlin
École Normale Supérieure and INSERM, Paris, France
Apr 4, 2022

Language production is a form of behavior and as such involves executive control and the prefrontal function. The cognitive architecture of prefrontal executive function thus certainly plays an important role in shaping language production. In this talk, I will review the main features of the prefrontal executive function we have uncovered during the last two decades and I will discuss how these features may help us understand language production.

Seminar · Cognition · Recording

Understanding Natural Language: Insights From Cognitive Science, Cognitive Neuroscience, and Artificial Intelligence

James McClelland
Stanford University
Mar 16, 2022

Seminar · Neuroscience · Recording

Cross-modality imaging of the neural systems that support executive functions

Yaara Erez
Affiliate MRC Cognition and Brain Sciences Unit, University of Cambridge
Feb 28, 2022

Executive functions refer to a collection of mental processes such as attention, planning and problem solving, supported by a frontoparietal distributed brain network. These functions are essential for everyday life. Specifically in the context of patients with brain tumours there is a need to preserve them in order to enable good quality of life for patients. During surgeries for the removal of a brain tumour, the aim is to remove as much as possible of the tumour and at the same time prevent damage to the areas around it to preserve function and enable good quality of life for patients. In many cases, functional mapping is conducted during an awake surgery in order to identify areas critical for certain functions and avoid their surgical resection. While mapping is routinely done for functions such as movement and language, mapping executive functions is more challenging. Despite growing recognition of the importance of these functions for patient well-being in recent years, only a handful of studies addressed their intraoperative mapping. In the talk, I will present our new approach for mapping executive function areas using electrocorticography during awake brain surgery. These results will be complemented by neuroimaging data from healthy volunteers, directed at reliably localizing executive function regions in individuals using fMRI. I will also discuss more broadly challenges of using neuroimaging for neurosurgical applications. We aim to advance cross-modality neuroimaging of cognitive function which is pivotal to patient-tailored surgical interventions, and will ultimately lead to improved clinical outcomes.

Seminar · Neuroscience

Dynamic structural neuroplasticity in the bilingual brain

Christos Pliatsikas
University of Reading, UK
Feb 28, 2022

Research on the effects of bilingualism on the structure of the brain has so far yielded variable patterns. Although it cannot be disputed that learning and using additional languages restructures the brain, the reported effects vary considerably, including both increases and reductions in grey matter volume and white matter diffusivity. This presentation reviews the available evidence and compares it to patterns from other domains of skill acquisition, culminating in the Dynamic Restructuring Model, a theory which synthesises the available evidence from the perspective of experience-based neuroplasticity. New corroborating evidence is also presented from healthy young and older bilinguals, and the presentation concludes with the implications of these effects for the ageing brain.

SeminarNeuroscienceRecording

What is Cognitive Neuropsychology Good For? An Unauthorized Biography

Alfonso Caramazza
Cognitive Neuropsychology Laboratory, Harvard University, USA; Center for Mind/Brain Sciences (CIMeC), University of Trento, Italy
Feb 22, 2022

Abstract: There is no doubt that the study of brain-damaged individuals has contributed greatly to our understanding of the mind/brain. Within this broad approach, cognitive neuropsychology accentuates the cognitive dimension: it investigates the structure and organization of perceptual, motor, cognitive, and language systems – prerequisites for understanding the functional organization of the brain – through the analysis of their dysfunction following brain damage. Significant insights have come specifically from this paradigm. But progress has been slow, enthusiasm for the approach has waned somewhat in recent years, and existing findings are less often used to constrain new theories. What explains the current diminished status of cognitive neuropsychology? One reason may be a failure to calibrate expectations about the effective contribution of different subfields of the study of the mind/brain, as these are determined by their natural peculiarities – such factors as the types of available observations and their complexity, ease of access to such observations, the possibility of controlled experimentation, and the like. Here, I also explore the merits and limitations of cognitive neuropsychology, with particular focus on the role of intellectual, pragmatic, and societal factors that determine scientific practice within the broader domains of cognitive science/neuroscience. I conclude on an optimistic note about the continuing unique importance of cognitive neuropsychology: although limited to the study of experiments of nature, it offers a privileged window into significant aspects of the mind/brain that are not easily accessible through other approaches.

Biography: Alfonso Caramazza's research has focussed extensively on how words and their meanings are represented in the brain. His early pioneering studies helped to reformulate our thinking about Broca's aphasia (showing it is not limited to production) and formalised the logic of patient-based neuropsychology. More recently he has been instrumental in reconsidering popular claims about embodied cognition.

SeminarNeuroscience

Electrophysiological investigations of natural speech and language processing

Edmund Lalor
University of Rochester, USA
Feb 13, 2022
SeminarNeuroscienceRecording

How bilingualism modulates the neural mechanisms of selective attention

Mirjana Bozic
Department of Psychology, University of Cambridge
Jan 31, 2022

Learning and using multiple languages places considerable demands on our cognitive system, and has been shown to modulate the mechanisms of selective attention in both children and adults. Yet the nature of these adaptive changes is still not entirely clear. One possibility is that bilingualism boosts the capacity for selective attention; another is that it leads to a different distribution of this finite resource, aimed at supporting optimal performance under the increased processing demands. I will present a series of studies investigating the nature of modifications of selective attention in bilingualism. Using behavioural and neuroimaging techniques, our data confirm that bilingualism modifies the neural mechanisms of selective attention even in the absence of behavioural differences between monolinguals and bilinguals. They further suggest that, instead of enhanced attentional capacity, these neuroadaptive modifications appear to reflect its redistribution, arguably aimed at economising the available resources to support optimal behavioural performance.

SeminarNeuroscience

The pervasive role of visuospatial coding

Edward Silson
School of Philosophy, Psychology & Language Sciences, University of Edinburgh, UK
Jan 31, 2022

Historically, retinotopic organisation (the spatial mapping of the retina across the cortical surface) was considered the purview of early visual cortex (V1-V4) alone, with anterior, more cognitively involved regions thought to abstract this information away. The contemporary view is quite different. With advancing technologies and analysis methods, we now see that retinotopic information is not simply discarded by these regions but is maintained, to the potential benefit of our broader cognition. This visuospatial coding extends not only through visual cortex but is also present in parietal, frontal, medial and subcortical structures involved in coordinating movements, mind-wandering and even memory. In this talk, I will outline some of the key empirical findings from my own work and the work of others that have shaped this contemporary perspective.

SeminarNeuroscience

Towards an inclusive neurobiology of language

Esti Blanco-Elorrieta
Department of Psychology, Harvard University, Cambridge, USA
Jan 27, 2022

Understanding how our brains process language is one of the fundamental questions in cognitive science. To reach such an understanding, it is critical to cover the full spectrum of ways in which humans acquire and experience language. However, due to a myriad of socioeconomic factors, research has disproportionately focused on monolingual English speakers. In this talk, I present a series of studies that systematically target fundamental questions about bilingual language use across a range of conversational contexts, in both production and comprehension. The results lay the groundwork for a more inclusive theory of the neurobiology of language, with an architecture that assumes a common selection principle at each linguistic level and can account for attested features of both bilingual and monolingual speech in, but crucially also out of, experimental settings.

SeminarNeuroscienceRecording

NMC4 Keynote: Formation and update of sensory priors in working memory and perceptual decision making tasks

Athena Akrami
University College London
Dec 1, 2021

The world around us is complex, but at the same time full of meaningful regularities. We can detect, learn and exploit these regularities automatically, in an unsupervised manner, i.e. without any direct instruction or explicit reward. For example, we effortlessly estimate the average height of people in a room, or the boundaries between words in a language. These regularities and this prior knowledge, once learned, can affect the way we acquire and interpret new information to build and update our internal model of the world for future decision-making. Despite the ubiquity of passive learning from structured information in the environment, the mechanisms that support learning from real-world experience are largely unknown. By combining sophisticated cognitive tasks in humans and rats, neuronal measurements and perturbations in rats, and network modelling, we aim to build a multi-level description of how sensory history is used to infer regularities in temporally extended tasks. In this talk, I will focus specifically on a comparative rat and human model, in combination with neural network models, to study how past sensory experiences impact working memory and decision-making behaviours.

SeminarNeuroscienceRecording

NMC4 Short Talk: Image embeddings informed by natural language improve predictions and understanding of human higher-level visual cortex

Aria Wang
Carnegie Mellon University
Nov 30, 2021

To better understand human scene understanding, we extracted features from images using CLIP, a neural network model of visual concepts trained with supervision from natural language. We then constructed voxelwise encoding models to explain whole-brain responses to natural images from the Natural Scenes Dataset (NSD) - a large-scale fMRI dataset collected at 7T. Our results reveal that CLIP, as compared to convolution-based image classification models such as ResNet or AlexNet, as well as language models such as BERT, gives rise to representations that enable better prediction performance in higher-level visual cortex in humans - up to a 0.86 correlation with test data and an r-squared of 0.75. Moreover, CLIP representations explain distinctly unique variance in these higher-level visual areas as compared to models trained only on images or only on text. Control experiments show that the improvement in prediction observed with CLIP is not due to architectural differences (transformer vs. convolution) or to the encoding of image captions per se (vs. single object labels). Together, our results indicate that CLIP and, more generally, multimodal models trained jointly on images and text may serve as better candidate models of representation in human higher-level visual cortex. The bridge between language and vision provided by jointly trained models such as CLIP also opens up new and more semantically rich ways of interpreting the visual brain.
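For context, a voxelwise encoding model of the kind described in this abstract typically has two steps: embed each stimulus image with a pretrained network, then fit a regularised linear mapping from embeddings to each voxel's response and score the correlation on held-out images. The sketch below illustrates that general recipe with CLIP features and ridge regression; the file paths, array shapes, and ridge penalty are hypothetical placeholders, not details of the authors' NSD analysis.

```python
# Minimal sketch of a CLIP-based voxelwise encoding model (illustrative only).
import glob
import numpy as np
import torch
import clip  # OpenAI CLIP: pip install git+https://github.com/openai/CLIP.git
from PIL import Image
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def clip_features(image_paths):
    """Embed each stimulus image with CLIP's image encoder."""
    feats = []
    with torch.no_grad():
        for path in image_paths:
            img = preprocess(Image.open(path)).unsqueeze(0).to(device)
            feats.append(model.encode_image(img).squeeze(0).float().cpu().numpy())
    return np.stack(feats)  # shape: (n_images, 512) for ViT-B/32

# Hypothetical inputs: stimulus images plus an (n_images, n_voxels) matrix
# of fMRI responses (e.g. GLM betas), one row per image.
image_paths = sorted(glob.glob("stimuli/*.png"))
voxel_responses = np.load("betas.npy")

X = clip_features(image_paths)
X_tr, X_te, y_tr, y_te = train_test_split(X, voxel_responses,
                                          test_size=0.2, random_state=0)

# One ridge fit handles all voxels at once (sklearn broadcasts over output
# columns); performance is scored per voxel as the correlation between
# predicted and held-out responses.
enc = Ridge(alpha=1.0).fit(X_tr, y_tr)
pred = enc.predict(X_te)
r = np.array([np.corrcoef(pred[:, v], y_te[:, v])[0, 1]
              for v in range(y_te.shape[1])])
print("median voxel correlation:", np.median(r))
```

Comparing different feature extractors (e.g. a ResNet or BERT in place of CLIP) within this same pipeline is what allows the per-voxel prediction scores of different candidate models to be contrasted, as the abstract describes.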

SeminarCognitionRecording

Language, Cognition, Biology

Cedric Boeckx
Catalan Institute for Advanced Studies (ICREA)
Nov 15, 2021
SeminarNeuroscienceRecording

Perceptual and neural basis of sound-symbolic crossmodal correspondences

Krish Sathian
Penn State Health Milton S. Hershey Medical Center, Pennsylvania State University
Oct 27, 2021
ePoster

Probing right-hemispheric neuronal representations in the language network of an individual with aphasia

Felix Waitzmann, Laura Schiffl, Lisa Held, Arthur Wagner, Bernhard Meyer, Jens Gempt, Simon Jacob, Julijana Gjorgjieva

Bernstein Conference 2024

ePoster

Alignment of ANN Language Models with Humans After a Developmentally Realistic Amount of Training

Eghbal Hosseini, Martin Schrimpf, Yian Zhang, Samuel Bowman, Noga Zaslavsky, Evelina Fedorenko

COSYNE 2023

ePoster

“Attentional fingerprints” in conceptual space: Reliable, individuating patterns of visual attention revealed using natural language modeling

Caroline Robertson, Katherine Packard, Amanda Haskins

COSYNE 2023

ePoster

Language emergence in reinforcement learning agents performing navigational tasks

Tobias Wieczorek, Maximilian Eggl, Tatjana Tchumatchenko, Carlos Wert Carvajal

COSYNE 2023

ePoster

Hierarchical and distributed systems of language comprehension and learning in the human brain

Megha Ghosh, Miles Mahon, Sophia Lowe-Hines, Adam Crandall, Qi Cheng, Andrew Ko, Kurt Weaver, Jeffrey Ojemann, Benjamin Grannan

COSYNE 2025

ePoster

Simulated Language Acquisition in a Biologically Realistic Model of the Brain

Daniel Mitropolsky, Christos H. Papadimitriou

COSYNE 2025

ePoster

Analyzing animal behavior with domain-adapted vision-language models

Valentin Gabeff, Sepideh Mamooler, Andy Bonnetto, Devis Tuia, Alexander Mathis

FENS Forum 2024

ePoster

Canine white matter pathways potentially related to human language comprehension

Mélina Cordeau, Isabel Levin, Mira Sinha, Erin Hecht

FENS Forum 2024

ePoster

Insights into semantic language development in children with and without autism using neurophysiological and neuroimaging methods

Kathryn Toffolo, Edward Freedman, John Foxe

FENS Forum 2024

ePoster

Language laterality indices in epilepsy patients: A comparative analysis of four pipelines

Andrea Ellsay, Karla Batista Garcia-Ramo, Lysa Boisse Lomax, Garima Shukla, Donald Brien, Ada Mullett, Madeline Hopkins, Ron Levy, Gavin Winston

FENS Forum 2024

ePoster

The language of space: Where is this and where is that?

Umberto Quartetti, Giuditta Gambino, Filippo Brighina, Danila Di Majo, Giulio Musotto, Giuseppe Ferraro, Pierangelo Sardo, Giuseppe Giglia

FENS Forum 2024

ePoster

Resting-state brain networks in relation to declarative/procedural memory and multilingual language experience

Sevil Maghsadhagh, Olga Kepinska, Irene Balboni, Alessandra Rampinini, Sayako F. Earle, Michael T. Ullman, Raphael Berthelé, Narly Golestani

FENS Forum 2024

ePoster

Genetic correlates of intra- and interhemispheric resting state functional language connectivity

Jitse Amelink

Neuromatch 5
