Topic: Neuroscience

natural language processing

Content Overview
6 total items
5 seminars
1 ePoster

Latest

Seminar · Neuroscience

Deep language models as a cognitive model for natural language processing in the human brain

Uri Hasson
Princeton University
Dec 7, 2023
Seminar · Neuroscience

Language Representations in the Human Brain: A naturalistic approach

Fatma Deniz
TU Berlin & Berkeley
Apr 27, 2022

Natural language is strongly context-dependent and can be perceived through different sensory modalities. For example, humans can easily comprehend the meaning of complex narratives presented as auditory speech, written text, or visual images. To understand how such complex language-related information is represented in the human brain, we need to map the different linguistic and non-linguistic information perceived under different modalities across the cerebral cortex. To do so, I suggest a naturalistic approach: observe the human brain performing tasks in a naturalistic setting, design quantitative models that transform real-world stimuli into specific hypothesis-related features, and build predictive models that relate these features to brain responses. In my talk, I will present models of brain responses collected with functional magnetic resonance imaging while human participants listened to or read natural narrative stories. Using natural text and vector representations derived from natural language processing tools, I will show how we can study language processing in the human brain across modalities, at different levels of temporal granularity, and across different languages.
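The predictive-modeling step described above is commonly realized as a voxelwise encoding model. Below is a minimal sketch of that idea, assuming per-time-point stimulus features (for example, averaged word embeddings) and ridge regression; the array names, sizes, and regularization strength are illustrative assumptions, not the speaker's actual pipeline.

```python
# Minimal voxelwise encoding-model sketch (illustrative, not the talk's code).
# Assumes X holds per-TR stimulus features (e.g., averaged word embeddings)
# and Y holds the matching fMRI responses (time points x voxels).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_trs, n_features, n_voxels = 1000, 300, 50    # hypothetical sizes
X = rng.standard_normal((n_trs, n_features))   # stimulus features per TR
Y = rng.standard_normal((n_trs, n_voxels))     # BOLD response per TR/voxel

# Keep temporal order intact when splitting fMRI time series.
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, shuffle=False)

# Fit one regularized linear model per voxel (Ridge supports multi-output Y).
model = Ridge(alpha=10.0).fit(X_train, Y_train)

# Score each voxel by correlating predicted and held-out responses.
Y_pred = model.predict(X_test)
r = [np.corrcoef(Y_pred[:, v], Y_test[:, v])[0, 1] for v in range(n_voxels)]
print(f"median prediction r across voxels: {np.median(r):.3f}")
```

With real data, voxels whose held-out correlation is reliably above chance are the ones whose responses the chosen feature space can explain; comparing different feature spaces this way is what lets the approach test specific hypotheses about linguistic representations.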

Seminar · Neuroscience · Recording

Do deep learning latent spaces resemble human brain representations?

Rufin VanRullen
Centre de Recherche Cerveau et Cognition (CERCO)
Mar 13, 2021

In recent years, artificial neural networks have demonstrated human-like or super-human performance in many tasks, including image and speech recognition, natural language processing (NLP), and playing Go, chess, poker, and video games. One remarkable feature of the resulting models is that they can develop very intuitive latent representations of their inputs. In these latent spaces, simple linear operations tend to give meaningful results, as in the well-known analogy QUEEN - WOMAN + MAN = KING. We postulate that human brain representations share essential properties with these deep learning latent spaces. To verify this, we test whether artificial latent spaces can serve as a good model for decoding brain activity. We report improvements over state-of-the-art performance for reconstructing seen and imagined face images from fMRI brain activation patterns, using the latent space of a GAN (Generative Adversarial Network) model coupled with a Variational AutoEncoder (VAE). With another GAN model (BigBiGAN), we can decode and reconstruct natural scenes of any category from the corresponding brain activity. Our results suggest that deep learning can produce high-level representations approaching those found in the human brain. Finally, I will discuss whether these deep learning latent spaces could be relevant to the study of consciousness.
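To make the latent-space arithmetic above concrete, here is a toy sketch of the QUEEN - WOMAN + MAN analogy using made-up three-dimensional vectors and a cosine-similarity nearest-neighbor lookup; real embeddings would come from a trained model, and every number here is a placeholder.

```python
# Toy latent-space arithmetic, in the spirit of QUEEN - WOMAN + MAN = KING.
# The vectors are fabricated for illustration; a trained embedding model
# (e.g., word2vec) would supply the real ones.
import numpy as np

emb = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.1, 0.9, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9]),
}

def nearest(vec, exclude):
    """Vocabulary word with the highest cosine similarity to `vec`,
    skipping the words used to build the query."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max((w for w in emb if w not in exclude),
               key=lambda w: cos(emb[w], vec))

target = emb["queen"] - emb["woman"] + emb["man"]
print(nearest(target, exclude={"queen", "woman", "man"}))  # -> king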

Seminar · Neuroscience · Recording

Machine Learning as a tool for positive impact: case studies from climate change

Alexandra (Sasha) Luccioni
University of Montreal and Mila (Quebec Institute for Learning Algorithms)
Dec 10, 2020

Climate change is one of our generation's greatest challenges, with increasingly severe consequences for global ecosystems and populations. Machine Learning has the potential to address many important challenges in climate change, both in mitigation (reducing its extent) and in adaptation (preparing for unavoidable consequences). To illustrate the extent of these opportunities, I will describe some of the projects that I am involved in, spanning generative models, computer vision, and natural language processing. There are many opportunities for fundamental innovation in this field, advancing the state of the art in Machine Learning while ensuring that this progress translates into positive real-world impact.

Seminar · Neuroscience · Recording

Abstraction and Analogy in Natural and Artificial Intelligence

Melanie Mitchell
Santa Fe Institute
Oct 8, 2020

In 1955, John McCarthy and colleagues proposed an AI summer research project with the following aim: “An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.” More than six decades later, all of these research topics remain open and actively investigated in the AI community. While AI has made dramatic progress over the last decade in areas such as vision, natural language processing, and robotics, current AI systems still almost entirely lack the ability to form humanlike concepts and abstractions. Some cognitive scientists have proposed that analogy-making is a central mechanism for conceptual abstraction and understanding in humans. Douglas Hofstadter called analogy-making “the core of cognition”, and Hofstadter and co-author Emmanuel Sander noted, “Without concepts there can be no thought, and without analogies there can be no concepts.” In this talk I will reflect on the role played by analogy-making at all levels of intelligence, and on prospects for developing AI systems with humanlike abilities for abstraction and analogy.

ePoster · Neuroscience

Automating Conference Abstract Organization at Scale with Natural Language Processing

Panos A. Bozelos and Tim P. Vogels
