Topic spotlight: language (Neuro)

Discover seminars, jobs, and research tagged with language across Neuro.
50 seminars · Updated 7 months ago

Latest (50 results)
Seminar · Neuroscience · Recording

Functional Plasticity in the Language Network – evidence from Neuroimaging and Neurostimulation

Gesa Hartwigsen
University of Leipzig, Germany
May 20, 2025

Efficient cognition requires flexible interactions between distributed neural networks in the human brain. These networks adapt to challenges by flexibly recruiting different regions and connections. In this talk, I will discuss how we study functional network plasticity and reorganization with combined neurostimulation and neuroimaging across the adult life span. I will argue that short-term plasticity enables flexible adaptation to challenges, via functional reorganization. My key hypothesis is that disruption of higher-level cognitive functions such as language can be compensated for by the recruitment of domain-general networks in our brain. Examples from healthy young brains illustrate how neurostimulation can be used to temporarily interfere with efficient processing, probing short-term network plasticity at the systems level. Examples from people with dyslexia help to better understand network disorders in the language domain and outline the potential of facilitatory neurostimulation for treatment. I will also discuss examples from aging brains where plasticity helps to compensate for loss of function. Finally, examples from lesioned brains after stroke provide insight into the brain’s potential for long-term reorganization and recovery of function. Collectively, these results challenge the view of a modular organization of the human brain and argue for a flexible redistribution of function via systems plasticity.

Seminar · Neuroscience

Simulating Thought Disorder: Fine-Tuning Llama-2 for Synthetic Speech in Schizophrenia

Alban Elias Voppel
McGill University
May 1, 2025
Seminar · Neuroscience

LLMs and Human Language Processing

Maryia Toneva, Ariel Goldstein, Jean-Remi King
Max Planck Institute for Software Systems; Hebrew University; École Normale Supérieure
Nov 29, 2024

This webinar convened researchers at the intersection of Artificial Intelligence and Neuroscience to investigate how large language models (LLMs) can serve as valuable “model organisms” for understanding human language processing. Presenters showcased evidence that brain recordings (fMRI, MEG, ECoG) acquired while participants read or listened to unconstrained speech can be predicted by representations extracted from state-of-the-art text- and speech-based LLMs. In particular, text-based LLMs tend to align better with higher-level language regions, capturing more semantic aspects, while speech-based LLMs excel at explaining early auditory cortical responses. However, purely low-level features can drive part of these alignments, complicating interpretations. New methods, including perturbation analyses, highlight which linguistic variables matter for each cortical area and time scale. Further, “brain tuning” of LLMs—fine-tuning on measured neural signals—can improve semantic representations and downstream language tasks. Despite open questions about interpretability and exact neural mechanisms, these results demonstrate that LLMs provide a promising framework for probing the computations underlying human language comprehension and production at multiple spatiotemporal scales.

Seminar · Neuroscience

Trends in NeuroAI - Brain-like topography in transformers (Topoformer)

Nicholas Blauch
Jun 7, 2024

Dr. Nicholas Blauch will present his work "Topoformer: Brain-like topographic organization in transformer language models through spatial querying and reweighting". Dr. Blauch is a postdoctoral fellow in the Harvard Vision Lab, advised by Talia Konkle and George Alvarez. Paper link: https://openreview.net/pdf?id=3pLMzgoZSA
Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri | https://groups.google.com/g/medarc-fmri).

Seminar · Neuroscience

Dyslexia, Rhythm, Language and the Developing Brain

Usha Goswami
University of Cambridge, UK
Mar 5, 2024
Seminar · Neuroscience

Dyslexia, Rhythm, Language and the Developing Brain

Usha Goswami CBE
University of Cambridge
Feb 22, 2024

Recent insights from auditory neuroscience provide a new perspective on how the brain encodes speech. Using these recent insights, I will provide an overview of key factors underpinning individual differences in children’s development of language and phonology, providing a context for exploring atypical reading development (dyslexia). Children with dyslexia are relatively insensitive to acoustic cues related to speech rhythm patterns. This lack of rhythmic sensitivity is related to the atypical neural encoding of rhythm patterns in speech by the brain. I will describe our recent data from infants as well as children, demonstrating developmental continuity in the key neural variables.

Seminar · Neuroscience

Deep language models as a cognitive model for natural language processing in the human brain

Uri Hasson
Princeton University
Dec 7, 2023
Seminar · Neuroscience

Modeling Primate Vision (and Language)

Martin Schrimpf
NeuroX, EPFL
Dec 6, 2023
Seminar · Neuroscience

Dyslexias in words and numbers

Naama Friedmann
Tel Aviv University
Nov 14, 2023
Seminar · Neuroscience · Recording

Consciousness in the age of mechanical minds

Robert Pepperell
Cardiff Metropolitan University
Jun 1, 2023

We are now clearly entering a new age in our relationship with machines. The power of AI natural language processors and image generators has rapidly exceeded the expectations of even those who developed them. Serious questions are now being asked about the extent to which machines could become — or perhaps already are — sentient or conscious. Do AI machines understand the instructions they are given and the answers they provide? In this talk I will consider the prospects for conscious machines, by which I mean machines that have feelings, know about their own existence, and about ours. I will suggest that the recent focus on information processing in models of consciousness, in which the brain is treated as a kind of digital computer, has misled us about the nature of consciousness and how it is produced in biological systems. Treating the brain as an energy processing system is more likely to yield answers to these fundamental questions and help us understand how and when machines might become minds.

Seminar · Neuroscience

Investigating semantics above and beyond language: a clinical and cognitive neuroscience approach

Valentina Borghesani
University of Geneva, Switzerland & NCCR Evolving Language
Mar 16, 2023

The ability to build, store, and manipulate semantic representations lies at the core of all our (inter)actions. Combining evidence from cognitive neuroimaging and experimental neuropsychology, I study the neurocognitive correlates of semantic knowledge in relation to other cognitive functions, chiefly language. In this talk, I will start by reviewing neuroimaging findings supporting the idea that semantic representations are encoded in distributed yet specialized cortical areas (1), and rapidly recovered (2) according to the requirements of the task at hand (3). I will then focus on studies conducted in neurodegenerative patients, offering a unique window on the key role played by a structurally and functionally heterogeneous piece of cortex: the anterior temporal lobe (4,5). I will present pathological, neuroimaging, cognitive, and behavioral data illustrating how damage to language-related networks can affect or spare semantic knowledge, as well as possible paths to functional compensation (6,7). Time permitting, we will discuss the neurocognitive dissociation between nouns and verbs (8) and how verb production is differentially impacted by specific language impairments (9).

Seminar · Neuroscience · Recording

Verb metaphors are processed as analogies

Daniel King
Northwestern University
Mar 9, 2023

Metaphor is a pervasive phenomenon in language and cognition. To date, the vast majority of psycholinguistic research on metaphor has focused on noun-noun metaphors of the form An X is a Y (e.g., My job is a jail). Yet there is evidence that verb metaphor (e.g., I sailed through my exams) is more common. Despite this, comparatively little work has examined how verb metaphors are processed. In this talk, I will propose a novel account for verb metaphor comprehension: verb metaphors are understood in the same way that analogies are—as comparisons processed via structure-mapping. I will discuss the predictions that arise from applying the analogical framework to verb metaphor and present a series of experiments showing that verb metaphoric extension is consistent with those predictions.

Seminar · Neuroscience · Recording

Children-Agent Interaction For Assessment and Rehabilitation: From Linguistic Skills To Mental Well-being

Micole Spitale
Department of Computer Science and Technology, University of Cambridge
Feb 7, 2023

Socially Assistive Robots (SARs) have shown great potential to help children in therapeutic and healthcare contexts. SARs have been used for companionship, learning enhancement, social and communication skills rehabilitation for children with special needs (e.g., autism), and mood improvement. Robots can be used as novel tools to assess and rehabilitate children’s communication skills and mental well-being by providing affordable and accessible therapeutic and mental health services. In this talk, I will present the various studies I have conducted during my PhD and at the Cambridge Affective Intelligence and Robotics Lab to explore how robots can help assess and rehabilitate children’s communication skills and mental well-being. More specifically, I will provide both quantitative and qualitative results and findings from (i) an exploratory study with children with autism and global developmental disorders to investigate the use of intelligent personal assistants in therapy; (ii) an empirical study involving children with and without language disorders interacting with a physical robot, a virtual agent, and a human counterpart to assess their linguistic skills; (iii) an 8-week longitudinal study involving children with autism and language disorders who interacted either with a physical or a virtual robot to rehabilitate their linguistic skills; and (iv) an empirical study to aid the assessment of mental well-being in children. These findings can inform and help the child-robot interaction community design and develop new adaptive robots to help assess and rehabilitate linguistic skills and mental well-being in children.

Seminar · Neuroscience · Recording

Applying Structural Alignment theory to Early Verb Learning

Jane Childers
Trinity University
Feb 2, 2023

Learning verbs is difficult and critical to learning one's native language. Children appear to benefit from seeing multiple events and comparing them to each other, and structural alignment (SA) theory provides a good theoretical framework to guide research into how preschool children may be comparing events as they learn new verbs. The talk will include 6 studies of early verb learning that make use of eye-tracking procedures as well as other behavioral (pointing) procedures, and that test key predictions from SA theory, including the prediction that seeing similar examples before more varied examples helps observers learn how to compare (progressive alignment), and the prediction that very low alignability between events is a cue that those events should be ignored. Whether or how statistical learning may also be at work will be considered.

Seminar · Neuroscience

Bridging clinical and cognitive neuroscience together to investigate semantics, above and beyond language

Valentina Borghesani
University of Geneva, Switzerland & NCCR Evolving Language
Jan 20, 2023

We will explore how neuropsychology can be leveraged to directly test cognitive neuroscience theories using the case of frontotemporal dementias affecting the language network. Specifically, we will focus on pathological, neuroimaging, and cognitive data from primary progressive aphasia. We will see how they can help us investigate the reading network, semantic knowledge organisation, and grammatical categories processing. Time permitting, the end of the talk will cover the temporal dynamics of semantic dimensions recovery and the role played by the task.

Seminar · Neuroscience · Recording

Geometry of concept learning

Haim Sompolinsky
The Hebrew University of Jerusalem and Harvard University
Jan 4, 2023

Understanding the human ability to learn novel concepts from just a few sensory experiences is a fundamental problem in cognitive neuroscience. I will describe recent work with Ben Sorscher and Surya Ganguli (PNAS, October 2022) in which we propose a simple, biologically plausible, and mathematically tractable neural mechanism for few-shot learning of naturalistic concepts. We posit that the concepts that can be learned from few examples are defined by tightly circumscribed manifolds in the neural firing-rate space of higher-order sensory areas. Discrimination between novel concepts is performed by downstream neurons implementing a ‘prototype’ decision rule, in which a test example is classified according to the nearest prototype constructed from the few training examples. We show that prototype few-shot learning achieves high accuracy on natural visual concepts using both macaque inferotemporal cortex representations and deep neural network (DNN) models of these representations. We develop a mathematical theory that links few-shot learning to the geometric properties of the neural concept manifolds and demonstrate its agreement with our numerical simulations across different DNNs as well as different layers. Intriguingly, we observe striking mismatches between the geometry of manifolds in intermediate stages of the primate visual pathway and in trained DNNs. Finally, we show that linguistic descriptors of visual concepts can be used to discriminate images belonging to novel concepts, without any prior visual experience of these concepts (a task known as ‘zero-shot’ learning), indicating a remarkable alignment of manifold representations of concepts in the visual and language modalities. I will discuss ongoing efforts to extend this work to other high-level cognitive tasks.
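The prototype decision rule described in this abstract is simple enough to sketch directly. The toy example below (made-up numbers in a 2-D "firing-rate space", not the authors' code or data) builds each concept's prototype as the mean of a few training examples and classifies a test point by nearest prototype:

```python
import math

def prototype(examples):
    """The 'prototype' of a concept: the mean of its few training examples."""
    dim = len(examples[0])
    return [sum(v[i] for v in examples) / len(examples) for i in range(dim)]

def classify(test_point, prototypes):
    """Assign a test example to the concept with the nearest prototype."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(prototypes, key=lambda label: dist(test_point, prototypes[label]))

# Two toy concepts, two training examples ('shots') each (hypothetical values).
protos = {
    "concept_A": prototype([[1.0, 0.9], [0.8, 1.1]]),
    "concept_B": prototype([[-1.0, -0.8], [-0.9, -1.2]]),
}
print(classify([0.7, 1.0], protos))  # nearest prototype is concept_A
```

The paper's theory relates the accuracy of exactly this kind of classifier to the geometry (radius, dimensionality, separation) of the underlying concept manifolds.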

Seminar · Neuroscience · Recording

Modelling metaphor comprehension as a form of analogizing

Gerard Steen
University of Amsterdam
Nov 30, 2022

What do people do when they comprehend language in discourse? According to many psychologists, they build and maintain cognitive representations of utterances in four complementary mental models for discourse that interact with each other: the surface text, the text base, the situation model, and the context model. When people encounter metaphors in these utterances, they need to incorporate them into each of these mental representations for the discourse. Since influential metaphor theories define metaphor as a form of (figurative) analogy, involving cross-domain mapping of a smaller or greater extent, the general expectation has been that metaphor comprehension is also based on analogizing. This expectation, however, has been partly borne out by the data, but not completely. There is no one-to-one relationship between metaphor as (conceptual) structure (analogy) and metaphor as (psychological) process (analogizing). According to Deliberate Metaphor Theory (DMT), only some metaphors are handled by analogy. Instead, most metaphors are presumably handled by lexical disambiguation. This is a hypothesis that brings together most metaphor research in a provocatively new way: it means that most metaphors are not processed metaphorically, which produces a paradox of metaphor. In this talk I will sketch out how this paradox arises and how it can be resolved by a new version of DMT, which I have described in my forthcoming book Slowing metaphor down: Updating Deliberate Metaphor Theory (currently under review). In this theory, the distinction between, but also the relation between, analogy in metaphorical structure versus analogy in metaphorical process is of central importance.

Seminar · Neuroscience · Recording

Do large language models solve verbal analogies like children do?

Claire Stevenson
University of Amsterdam
Nov 17, 2022

Analogical reasoning – learning about new things by relating them to previous knowledge – lies at the heart of human intelligence and creativity and forms the core of educational practice. Children start creating and using analogies early on, making incredible progress moving from associative processes to successful analogical reasoning. For example, if we ask a four-year-old “Horse belongs to stable like chicken belongs to …?” they may use association and reply “egg”, whereas older children will likely give the intended relational response “chicken coop” (or another term for a chicken’s home). Interestingly, despite state-of-the-art AI language models having superhuman encyclopedic knowledge and superior memory and computational power, our pilot studies show that these large language models often make mistakes, providing associative rather than relational responses to verbal analogies. For example, when we asked four- to eight-year-olds to solve the analogy “body is to feet as tree is to …?” they responded “roots” without hesitation, but large language models tend to provide more associative responses such as “leaves”. In this study we examine the similarities and differences between children's responses and those of six large language models (Dutch/multilingual models: RobBERT, BERT-je, M-BERT, GPT-2, M-GPT, Word2Vec and Fasttext) on verbal analogies extracted from an online adaptive learning environment, in which >14,000 7-12-year-olds from the Netherlands solved 20 or more items from a database of 900 Dutch-language verbal analogies.
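The associative-versus-relational contrast the abstract describes can be illustrated with the classic vector-offset method for solving a:b::c:? analogies in embedding spaces (the usual technique for Word2Vec-style models; the 3-D vectors below are made-up toy values, not real embeddings from the study):

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    return dot / (nu * nv)

def solve_analogy(a, b, c, vocab):
    """a : b :: c : ?  — pick the word whose vector is closest to b - a + c."""
    target = [bi - ai + ci for ai, bi, ci in zip(vocab[a], vocab[b], vocab[c])]
    candidates = {w: v for w, v in vocab.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: cosine(target, candidates[w]))

# Toy embedding; dimensions loosely encode (animal-ness, dwelling-ness, other).
vocab = {
    "horse":   [1.0, 0.1, 0.0],
    "stable":  [0.2, 1.0, 0.0],
    "chicken": [0.9, 0.1, 0.1],
    "coop":    [0.2, 0.9, 0.1],  # the relational answer
    "egg":     [0.8, 0.2, 0.1],  # an associative distractor
}
print(solve_analogy("horse", "stable", "chicken", vocab))  # → coop
```

With real embeddings, the associative distractor can win precisely when it sits closer to the cue words than the relational answer does, which is the error pattern the study compares against children's responses.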

Seminar · Neuroscience

Exploring emotion in the expression of ape gesture

Cat Hobaiter
University of St Andrews
Nov 8, 2022

Language appears to be the most complex system of animal communication described to date. However, its precursors were present in the communication of our evolutionary ancestors and are likely shared by our modern ape cousins. All great apes, including humans, employ a rich repertoire of vocalizations, facial expressions, and gestures. Great ape gestural repertoires are particularly elaborate, with ape species employing over 80 different gesture types intentionally: that is, towards a recipient, with a specific goal in mind. Intentional usage allows us to ask not only what information is encoded in ape gestures, but also what apes mean when they use them. I will discuss recent research on ape gesture, how we approach the question of decoding meaning, and how, with new methods, we are starting to integrate long-overlooked aspects of ape gesture, such as group and individual variation, and expression and emotion, into our study of these signals.

Seminar · Neuroscience · Recording

AI-assisted language learning: Assessing learners who memorize and reason by analogy

Pierre-Alexandre Murena
University of Helsinki
Oct 5, 2022

Vocabulary learning applications like Duolingo have millions of users around the world, yet are based on very simple heuristics for choosing the teaching material they present to their users. In this presentation, we will discuss the possibility of developing more advanced artificial teachers based on modeling the learner’s inner characteristics. For teaching vocabulary, understanding how the learner memorizes is enough. When it comes to picking grammar exercises, it becomes essential to assess how the learner reasons, in particular by analogy. This second application will illustrate how analogical and case-based reasoning can be employed in an alternative way in education: not as the teaching algorithm, but as part of the learner’s model.

Seminar · Neuroscience · Recording

A Framework for a Conscious AI: Viewing Consciousness through a Theoretical Computer Science Lens

Lenore and Manuel Blum
Carnegie Mellon University
Aug 5, 2022

We examine consciousness from the perspective of theoretical computer science (TCS), a branch of mathematics concerned with understanding the underlying principles of computation and complexity, including the implications and surprising consequences of resource limitations. We propose a formal TCS model, the Conscious Turing Machine (CTM). The CTM is influenced by Alan Turing's simple yet powerful model of computation, the Turing machine (TM), and by the global workspace theory (GWT) of consciousness originated by cognitive neuroscientist Bernard Baars and further developed by him, Stanislas Dehaene, Jean-Pierre Changeux, George Mashour, and others. However, the CTM is not a standard Turing Machine. It’s not the input-output map that gives the CTM its feeling of consciousness, but what’s under the hood. Nor is the CTM a standard GW model. In addition to its architecture, what gives the CTM its feeling of consciousness is its predictive dynamics (cycles of prediction, feedback and learning), its internal multi-modal language Brainish, and certain special Long Term Memory (LTM) processors, including its Inner Speech and Model of the World processors. Phenomena generally associated with consciousness, such as blindsight, inattentional blindness, change blindness, dream creation, and free will, are considered. Explanations derived from the model draw confirmation from consistencies at a high level, well above the level of neurons, with the cognitive neuroscience literature. Reference. L. Blum and M. Blum, "A theory of consciousness from a theoretical computer science perspective: Insights from the Conscious Turing Machine," PNAS, vol. 119, no. 21, 24 May 2022. https://www.pnas.org/doi/epdf/10.1073/pnas.2115934119

Seminar · Neuroscience

Studying genetic overlap between ASD risk and related traits: From polygenic pleiotropy to disorder-specific profiles

Beate St Pourcain
Max Planck Institute for Psycholinguistics
Jun 15, 2022
Seminar · Neuroscience · Recording

The neural basis of flexible semantic cognition (BACN Mid-career Prize Lecture 2022)

Elizabeth Jefferies
Department of Psychology, University of York, UK
May 25, 2022

Semantic cognition brings meaning to our world – it allows us to make sense of what we see and hear, and to produce adaptive thoughts and behaviour. Since we have a wealth of information about any given concept, our store of knowledge is not sufficient for successful semantic cognition; we also need mechanisms that can steer the information that we retrieve so it suits the context or our current goals. This talk traces the neural networks that underpin this flexibility in semantic cognition. It draws on evidence from multiple methods (neuropsychology, neuroimaging, neural stimulation) to show that two interacting heteromodal networks underpin different aspects of flexibility. Regions including anterior temporal cortex and left angular gyrus respond more strongly when semantic retrieval follows highly-related concepts or multiple convergent cues; the multivariate responses in these regions correspond to context-dependent aspects of meaning. A second network centred on left inferior frontal gyrus and left posterior middle temporal gyrus is associated with controlled semantic retrieval, responding more strongly when weak associations are required or there is more competition between concepts. This semantic control network is linked to creativity and also captures context-dependent aspects of meaning; however, this network specifically shows more similar multivariate responses across trials when association strength is weak, reflecting a common controlled retrieval state when more unusual associations are the focus. Evidence from neuropsychology, fMRI and TMS suggests that this semantic control network is distinct from multiple-demand cortex which supports executive control across domains, although challenging semantic tasks recruit both networks. 
The semantic control network is juxtaposed between regions of default mode network that might be sufficient for the retrieval of strong semantic relationships and multiple-demand regions in the left hemisphere, suggesting that the large-scale organisation of flexible semantic cognition can be understood in terms of cortical gradients that capture systematic functional transitions that are repeated in temporal, parietal and frontal cortex.

Seminar · Neuroscience · Recording

Children’s inference of verb meanings: Inductive, analogical and abductive inference

Mutsumi Imai
Keio University
May 18, 2022

Children need inference in order to learn the meanings of words. They must infer the referent from the situation in which a target word is said. Furthermore, to be able to use the word in other situations, they also need to infer what other referents the word can be generalized to. As verbs refer to relations between arguments, verb learning requires relational analogical inference, something which is challenging to young children. To overcome this difficulty, young children recruit a diverse range of cues in their inference of verb meanings, including, but not limited to, syntactic cues and social and pragmatic cues as well as statistical cues. They also utilize perceptual similarity (object similarity) in progressive alignment to extract relational verb meanings and further to gain insights about relational verb meanings. However, just having a list of these cues is not useful: the cues must be selected, combined, and coordinated to produce the optimal interpretation in a particular context. This process involves abductive reasoning, similar to what scientists do to form hypotheses from a range of facts or evidence. In this talk, I discuss how children use a chain of inferences to learn meanings of verbs. I consider not only the process of analogical mapping and progressive alignment, but also how children use abductive inference to find the source of analogy and gain insights into the general principles underlying verb learning. I also present recent findings from my laboratory that show that prelinguistic human infants use a rudimentary form of abductive reasoning, which enables the first step of word learning.

Seminar · Neuroscience

It’s not over our heads: Why human language needs a body

Michał B. Paradowski
Institute of Applied Linguistics, University of Warsaw
May 9, 2022

In the ‘orthodox’ view, cognition has been seen as the manipulation of symbolic mental representations, separate from the body. This dualist Cartesian approach characterised much of twentieth-century thought and is still taken for granted by many people today. Language, too, has for a long time been treated across scientific domains as a system operating largely independently from perception, action, and the body (articulatory-perceptual organs notwithstanding). This could lead one to believe that to emulate linguistic behaviour, it would suffice to develop ‘software’ operating on abstract representations that would work on any computational machine. Yet the brain is not the sole problem-solving resource we have at our disposal. The disembodied picture is inaccurate for numerous reasons, which I will present while addressing the indissoluble link between cognition, language, body, and environment in understanding and learning. The talk will conclude with implications and suggestions for pedagogy, relevant for disciplines as diverse as instruction in language, mathematics, and sports.

Seminar · Neuroscience

Language Representations in the Human Brain: A naturalistic approach

Fatma Deniz
TU Berlin & Berkeley
Apr 27, 2022

Natural language is strongly context-dependent and can be perceived through different sensory modalities. For example, humans can easily comprehend the meaning of complex narratives presented through auditory speech, written text, or visual images. To understand how complex language-related information is represented in the human brain, we need to map the different linguistic and non-linguistic information perceived under different modalities across the cerebral cortex. To map this information to the brain, I suggest following a naturalistic approach: observing the human brain performing tasks in its naturalistic setting, designing quantitative models that transform real-world stimuli into specific hypothesis-related features, and building predictive models that can relate these features to brain responses. In my talk, I will present models of brain responses collected using functional magnetic resonance imaging while human participants listened to or read natural narrative stories. Using natural text and vector representations derived from natural language processing tools, I will present how we can study language processing in the human brain across modalities, at different levels of temporal granularity, and across different languages.
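The "predictive models that relate features to brain responses" step is often implemented as a regularized linear encoding model. The sketch below assumes ridge regression with two stimulus features and one simulated voxel (a common choice in this literature, not necessarily the speaker's exact method; all numbers are made up):

```python
def ridge_fit(X, y, lam=0.1):
    """Closed-form ridge regression for two features: w = (XtX + lam*I)^-1 Xty."""
    # Entries of XtX + lam*I (a symmetric 2x2 matrix).
    a = sum(x[0] * x[0] for x in X) + lam
    b = sum(x[0] * x[1] for x in X)
    d = sum(x[1] * x[1] for x in X) + lam
    # Entries of Xty.
    p = sum(x[0] * yi for x, yi in zip(X, y))
    q = sum(x[1] * yi for x, yi in zip(X, y))
    det = a * d - b * b  # invert the 2x2 matrix explicitly
    return [(d * p - b * q) / det, (a * q - b * p) / det]

def predict(X, w):
    """Predicted voxel response for each stimulus time point."""
    return [x[0] * w[0] + x[1] * w[1] for x in X]

# Hypothetical per-time-point features (e.g. one semantic, one acoustic regressor)
X_train = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.2]]
y_train = [2.0, -1.0, 1.0, 0.8]  # simulated single-voxel response (w ~ [2, -1])
w = ridge_fit(X_train, y_train)
y_hat = predict([[1.0, 0.5]], w)  # response predicted for held-out features
```

In practice the same idea scales to thousands of features and voxels (with cross-validated regularization), and model quality is scored by how well predictions correlate with held-out brain responses.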

Seminar · Neuroscience · Recording

The logopenic variant of primary progressive aphasia (lvPPA): language, cognitive, neuroradiological issues

Robert Rusina and Zsolt Cséfalvay
Thomayer University Hospital Videnska, Prague, Czech Republic; Comenius University, Bratislava, Slovakia
Apr 5, 2022
Seminar · Neuroscience

An executive control approach to language production

Etienne Koechlin
École Normale Supérieure and INSERM, Paris, France
Apr 5, 2022

Language production is a form of behavior and as such involves executive control and prefrontal function. The cognitive architecture of prefrontal executive function thus certainly plays an important role in shaping language production. In this talk, I will review the main features of prefrontal executive function that we have uncovered over the last two decades, and I will discuss how these features may help us understand language production.

Seminar · Neuroscience · Recording

Cross-modality imaging of the neural systems that support executive functions

Yaara Erez
Affiliate MRC Cognition and Brain Sciences Unit, University of Cambridge
Mar 1, 2022

Executive functions refer to a collection of mental processes, such as attention, planning and problem solving, supported by a distributed frontoparietal brain network. These functions are essential for everyday life, and in patients with brain tumours there is a particular need to preserve them to enable a good quality of life. During surgery for the removal of a brain tumour, the aim is to remove as much of the tumour as possible while preventing damage to the surrounding areas, so as to preserve function. In many cases, functional mapping is conducted during an awake surgery in order to identify areas critical for certain functions and avoid their surgical resection. While mapping is routinely done for functions such as movement and language, mapping executive functions is more challenging. Despite growing recognition of the importance of these functions for patient well-being in recent years, only a handful of studies have addressed their intraoperative mapping. In the talk, I will present our new approach for mapping executive function areas using electrocorticography during awake brain surgery. These results will be complemented by neuroimaging data from healthy volunteers, directed at reliably localizing executive function regions in individuals using fMRI. I will also discuss broader challenges of using neuroimaging for neurosurgical applications. We aim to advance cross-modality neuroimaging of cognitive function, which is pivotal to patient-tailored surgical interventions and will ultimately lead to improved clinical outcomes.

Seminar · Neuroscience

Dynamic structural neuroplasticity in the bilingual brain

Christos Pliatsikas
University of Reading, UK
Mar 1, 2022

Research on the effects of bilingualism on the structure of the brain has so far yielded variable patterns. Although it cannot be disputed that learning and using additional languages restructures the brain, the reported effects vary considerably, including both increases and reductions in grey matter volume and white matter diffusivity. This presentation reviews the available evidence and compares it to patterns from other domains of skill acquisition, culminating in the Dynamic Restructuring Model, a theory which synthesises the available evidence from the perspective of experience-based neuroplasticity. New corroborating evidence is also presented from healthy young and older bilinguals, and the presentation concludes with the implications of these effects for the ageing brain.

SeminarNeuroscienceRecording

What is Cognitive Neuropsychology Good For? An Unauthorized Biography

Alfonso Caramazza
Cognitive Neuropsychology Laboratory, Harvard University, USA; Center for Mind/Brain Sciences (CIMeC), University of Trento, Italy
Feb 23, 2022

Abstract: There is no doubt that the study of brain-damaged individuals has contributed greatly to our understanding of the mind/brain. Within this broad approach, cognitive neuropsychology accentuates the cognitive dimension: it investigates the structure and organization of perceptual, motor, cognitive, and language systems – prerequisites for understanding the functional organization of the brain – through the analysis of their dysfunction following brain damage. Significant insights have come specifically from this paradigm. But progress has been slow, enthusiasm for this approach has waned somewhat in recent years, and existing findings are used less and less to constrain new theories. What explains the current diminished status of cognitive neuropsychology? One reason may be a failure to calibrate expectations about the effective contribution of different subfields of the study of the mind/brain, as these are determined by their natural peculiarities – such factors as the types of available observations and their complexity, the opportunity of access to such observations, the possibility of controlled experimentation, and the like. Here, I also explore the merits and limitations of cognitive neuropsychology, with particular focus on the role of intellectual, pragmatic, and societal factors that determine scientific practice within the broader domains of cognitive science/neuroscience. I conclude on an optimistic note about the continuing unique importance of cognitive neuropsychology: although limited to the study of experiments of nature, it offers a privileged window into significant aspects of the mind/brain that are not easily accessible through other approaches. Biography: Alfonso Caramazza's research has focussed extensively on how words and their meanings are represented in the brain. His early pioneering studies helped to reformulate our thinking about Broca's aphasia (not limited to production) and formalised the logic of patient-based neuropsychology. More recently he has been instrumental in reconsidering popular claims about embodied cognition.

SeminarNeuroscience

Electrophysiological investigations of natural speech and language processing

Edmund Lalor
University of Rochester, USA
Feb 14, 2022
SeminarNeuroscienceRecording

How bilingualism modulates the neural mechanisms of selective attention

Mirjana Bozic
Department of Psychology, University of Cambridge
Feb 1, 2022

Learning and using multiple languages places considerable demands on our cognitive system, and has been shown to modulate the mechanisms of selective attention in both children and adults. Yet the nature of these adaptive changes is still not entirely clear. One possibility is that bilingualism boosts the capacity for selective attention; another is that it leads to a different distribution of this finite resource, aimed at supporting optimal performance under the increased processing demands. I will present a series of studies investigating the nature of modifications of selective attention in bilingualism. Using behavioural and neuroimaging techniques, our data confirm that bilingualism modifies the neural mechanisms of selective attention even in the absence of behavioural differences between monolinguals and bilinguals. They further suggest that, instead of enhanced attentional capacity, these neuroadaptive modifications appear to reflect its redistribution, arguably aimed at economising the available resources to support optimal behavioural performance.

SeminarNeuroscience

The pervasive role of visuospatial coding

Edward Silson
School of Philosophy, Psychology & Language Sciences, University of Edinburgh, UK
Feb 1, 2022

Historically, retinotopic organisation (the spatial mapping of the retina across the cortical surface) was considered the purview of early visual cortex (V1-V4) only, with anterior, more cognitively involved regions thought to abstract this information away. The contemporary view is quite different. With advancing technologies and analysis methods, we see that retinotopic information is not simply discarded by these regions but rather is maintained, to the potential benefit of our broader cognition. This maintenance of visuospatial coding extends not only through visual cortex but is also present in parietal, frontal, medial and subcortical structures involved in coordinating movements, mind-wandering and even memory. In this talk, I will outline some of the key empirical findings, from my own work and the work of others, that have shaped this contemporary perspective.

SeminarNeuroscience

Towards an inclusive neurobiology of language

Esti Blanco Elorrieta
Department of Psychology, Harvard University, Cambridge, USA
Jan 28, 2022

Understanding how our brains process language is one of the fundamental issues in cognitive science. In order to reach such understanding, it is critical to cover the full spectrum of manners in which humans acquire and experience language. However, due to a myriad of socioeconomic factors, research has disproportionately focused on monolingual English speakers. In this talk, I present a series of studies that systematically target fundamental questions about bilingual language use across a range of conversational contexts, both in production and comprehension. The results lay the groundwork to propose a more inclusive theory of the neurobiology of language, with an architecture that assumes a common selection principle at each linguistic level and can account for attested features of both bilingual and monolingual speech in, but crucially also out of, experimental settings.

SeminarNeuroscienceRecording

NMC4 Keynote: Formation and update of sensory priors in working memory and perceptual decision making tasks

Athena Akrami
University College London
Dec 2, 2021

The world around us is complex, but at the same time full of meaningful regularities. We can detect, learn and exploit these regularities automatically in an unsupervised manner, i.e., without any direct instruction or explicit reward. For example, we effortlessly estimate the average height of people in a room, or the boundaries between words in a language. These regularities and prior knowledge, once learned, can affect the way we acquire and interpret new information to build and update our internal model of the world for future decision-making processes. Despite the ubiquity of passively learning from structured information in the environment, the mechanisms that support learning from real-world experience are largely unknown. By combining sophisticated cognitive tasks in humans and rats, neuronal measurements and perturbations in rats, and network modelling, we aim to build a multi-level description of how sensory history is utilised to infer regularities in temporally extended tasks. In this talk, I will specifically focus on a comparative rat and human model, in combination with neural network models, to study how past sensory experiences are utilised to impact working memory and decision-making behaviours.

SeminarNeuroscienceRecording

NMC4 Short Talk: Image embeddings informed by natural language improve predictions and understanding of human higher-level visual cortex

Aria Wang
Carnegie Mellon University
Dec 1, 2021

To better understand human scene understanding, we extracted features from images using CLIP, a neural network model of visual concepts trained with supervision from natural language. We then constructed voxelwise encoding models to explain whole-brain responses arising from viewing natural images from the Natural Scenes Dataset (NSD) - a large-scale fMRI dataset collected at 7T. Our results reveal that CLIP, as compared to convolution-based image classification models such as ResNet or AlexNet, as well as language models such as BERT, gives rise to representations that enable better prediction performance - up to a 0.86 correlation with test data and an R² of 0.75 - in higher-level visual cortex in humans. Moreover, CLIP representations explain distinctly unique variance in these higher-level visual areas as compared to models trained with only images or text. Control experiments show that the improvement in prediction observed with CLIP is not due to architectural differences (transformer vs. convolution) or to the encoding of image captions per se (vs. single object labels). Together, our results indicate that CLIP and, more generally, multimodal models trained jointly on images and text may serve as better candidate models of representation in human higher-level visual cortex. The bridge between language and vision provided by jointly trained models such as CLIP also opens up new and more semantically rich ways of interpreting the visual brain.
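The voxelwise encoding approach described in this abstract can be sketched as a regularised linear regression from image embeddings to per-voxel responses, evaluated by correlation on held-out images. The snippet below is a minimal illustration only: it uses random features as a stand-in for actual CLIP embeddings and simulates voxel responses (all dimensions and the ridge penalty are assumptions, not the authors' settings).

```python
import numpy as np

def fit_ridge(X, Y, alpha=1.0):
    """Closed-form ridge regression: W = (X'X + alpha*I)^-1 X'Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)

def voxelwise_correlation(Y_true, Y_pred):
    """Pearson correlation per voxel (each column is one voxel)."""
    Yt = Y_true - Y_true.mean(axis=0)
    Yp = Y_pred - Y_pred.mean(axis=0)
    num = (Yt * Yp).sum(axis=0)
    den = np.sqrt((Yt ** 2).sum(axis=0) * (Yp ** 2).sum(axis=0))
    return num / den

rng = np.random.default_rng(0)
n_images, n_features, n_voxels = 200, 64, 10

# Stand-in for CLIP image embeddings and simulated fMRI voxel responses
X = rng.standard_normal((n_images, n_features))
W_true = rng.standard_normal((n_features, n_voxels))
Y = X @ W_true + 0.5 * rng.standard_normal((n_images, n_voxels))

# Train/test split, fit, and score held-out prediction accuracy per voxel
X_tr, X_te, Y_tr, Y_te = X[:150], X[150:], Y[:150], Y[150:]
W = fit_ridge(X_tr, Y_tr, alpha=10.0)
r = voxelwise_correlation(Y_te, X_te @ W)
print(r.mean())
```

In an actual analysis, `X` would hold CLIP embeddings of the NSD stimuli and `Y` the measured voxel responses; comparing mean held-out `r` across feature sets (CLIP vs. ResNet vs. BERT) is what supports the model comparison reported above.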

SeminarNeuroscienceRecording

Perceptual and neural basis of sound-symbolic crossmodal correspondences

Krish Sathian
Penn State Health Milton S. Hershey Medical Center, Pennsylvania State University
Oct 28, 2021
SeminarNeuroscienceRecording

Multisensory speech perception

Michael Beauchamp
University of Pennsylvania
Sep 16, 2021
SeminarNeuroscience

Exploring the neurogenetic basis of speech, language, and vocal communication

Sonja Vernes
Max Planck Institute for Psycholinguistics Nijmegen, The Netherlands
Sep 16, 2021
