Brain Recordings
LLMs and Human Language Processing
This webinar convened researchers at the intersection of Artificial Intelligence and Neuroscience to investigate how large language models (LLMs) can serve as valuable “model organisms” for understanding human language processing. Presenters showcased evidence that brain recordings (fMRI, MEG, ECoG) acquired while participants read or listened to unconstrained speech can be predicted by representations extracted from state-of-the-art text- and speech-based LLMs. In particular, text-based LLMs tend to align better with higher-level language regions, capturing more semantic aspects, while speech-based LLMs excel at explaining early auditory cortical responses. However, purely low-level features can drive part of these alignments, complicating interpretations. New methods, including perturbation analyses, highlight which linguistic variables matter for each cortical area and time scale. Further, “brain tuning” of LLMs—fine-tuning on measured neural signals—can improve semantic representations and downstream language tasks. Despite open questions about interpretability and exact neural mechanisms, these results demonstrate that LLMs provide a promising framework for probing the computations underlying human language comprehension and production at multiple spatiotemporal scales.
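The alignment described above (predicting brain recordings from LLM representations) is typically implemented as a linear "encoding model". The sketch below illustrates the idea on purely synthetic data: `X` stands in for LLM hidden states, `Y` for fMRI voxel responses, and the closed-form ridge fit and held-out correlation score are simplifications of real pipelines, which must also handle HRF convolution, temporal alignment, and per-voxel regularization.

```python
# Minimal encoding-model sketch on synthetic data. Nothing here reproduces a
# specific study; dimensions, noise level, and the ridge penalty are invented.
import numpy as np

rng = np.random.default_rng(0)
n_words, n_dims, n_voxels = 200, 64, 50

X = rng.standard_normal((n_words, n_dims))       # stand-in for LLM hidden states
W_true = rng.standard_normal((n_dims, n_voxels))
Y = X @ W_true + 0.5 * rng.standard_normal((n_words, n_voxels))  # simulated voxels

# Train/test split, then ridge regression in closed form.
X_tr, X_te = X[:150], X[150:]
Y_tr, Y_te = Y[:150], Y[150:]
alpha = 1.0
W = np.linalg.solve(X_tr.T @ X_tr + alpha * np.eye(n_dims), X_tr.T @ Y_tr)

# "Brain alignment" is usually scored as the per-voxel correlation between
# predicted and held-out measured responses.
Y_pred = X_te @ W
r = [np.corrcoef(Y_pred[:, v], Y_te[:, v])[0, 1] for v in range(n_voxels)]
print(f"mean held-out voxel correlation: {np.mean(r):.2f}")
```

The same scaffold applies to MEG or ECoG; only the preprocessing of `Y` and the temporal alignment of `X` change.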
Brain-Wide Compositionality and Learning Dynamics in Biological Agents
Biological agents continually reconcile the internal states of their brain circuits with incoming sensory and environmental evidence to evaluate when and how to act. The brains of biological agents, including animals and humans, exploit many evolutionary innovations, chiefly modularity—observable at the level of anatomically defined brain regions, cortical layers, and cell types, among others—that can be repurposed in a compositional manner to endow the animal with a highly flexible behavioral repertoire. Accordingly, their behaviors exhibit their own modularity, yet such behavioral modules seldom correspond directly to traditional notions of modularity in brains. It remains unclear how to link neural and behavioral modularity in a compositional manner. We propose a comprehensive framework—compositional modes—to identify overarching compositionality spanning specialized submodules, such as brain regions. Our framework directly links the behavioral repertoire with distributed patterns of population activity, brain-wide, at multiple concurrent spatial and temporal scales. Using whole-brain recordings of zebrafish, we introduce an unsupervised pipeline based on neural network models, constrained by biological data, to reveal highly conserved compositional modes across individuals despite the naturalistic (spontaneous or task-independent) nature of their behaviors. These modes provide a scaffold for additional modes that account for the idiosyncratic behavior of each fish. We then demonstrate experimentally that compositional modes can be manipulated in a consistent manner by behavioral and pharmacological perturbations. Our results demonstrate that even natural behavior in different individuals can be decomposed and understood using a relatively small number of neurobehavioral modules—the compositional modes—and elucidate a compositional neural basis of behavior.
This approach aligns with recent progress in understanding how reasoning capabilities and internal representational structures develop over the course of learning or training, offering insights into the modularity and flexibility of artificial and biological agents.
The 3 Cs: Collaborating to Crack Consciousness
Every day, when we fall asleep, we lose consciousness; we are no longer there. Then, every morning, when we wake up, we regain it. What mechanisms give rise to consciousness, and how can we explain it within the physical world of atoms and matter? For centuries, philosophers and scientists have aimed to crack this mystery. Much progress has been made in recent decades toward understanding how consciousness is instantiated in the brain, yet critical questions remain: can we develop a consciousness meter? Are computers conscious? What about other animals and babies? We have embarked on a large-scale, multicenter project to test, in the context of an open-science adversarial collaboration, two of the most prominent theories: Integrated Information Theory (IIT) and Global Neuronal Workspace (GNW) theory. We are collecting over 500 datasets, including invasive and non-invasive recordings of the human brain (fMRI, MEG, and ECoG). We hope this project will enable theory-driven discoveries and further explorations that will help us better understand how consciousness fits inside the human brain.
Revealing the neural basis of human memory with direct recordings of place and grid cells and traveling waves
The ability to remember spatial environments is critical for everyday life. In this talk, I will discuss my lab’s findings on how the human brain supports spatial memory and navigation, based on our experiments with direct brain recordings from neurosurgical patients performing virtual-reality spatial memory tasks. I will show that humans have a network of neurons that represent where we are located and where we are trying to go. This network includes some cell types that are similar to those seen in animals, such as place and grid cells, as well as others that have not been seen before in animals, such as anchor and spatial-target cells. I will also explore the role of network oscillations in human memory, where humans again show several distinctive patterns compared to animals. Whereas rodents generally show a hippocampal oscillation at ~8 Hz, humans have two separate hippocampal oscillations, at low and high frequencies, which support memory and navigation, respectively. Finally, I will show that neural oscillations in humans are traveling waves that propagate across the cortex to coordinate the timing of neuronal activity across regions, another property not seen in animals. A theme from this work is that, in terms of navigation and memory, the human brain has novel characteristics compared with animals, which helps explain our rich behavioral abilities and has implications for treating neurological disorders.
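A common way to operationalize a traveling wave, as described above, is as a systematic phase gradient across an electrode array. The sketch below simulates an 8 Hz oscillation sweeping across eight electrodes and recovers the imposed phase gradient. The electrode count, phase step, and demodulation shortcut are illustrative assumptions; real analyses band-pass the recorded LFP and estimate phase with the Hilbert transform and circular statistics.

```python
# Hedged sketch: detect a traveling wave as a linear phase gradient across
# electrodes. All signals are simulated; no real recordings are involved.
import numpy as np

fs, dur, freq = 1000, 1.0, 8.0          # sample rate (Hz), duration (s), theta frequency
t = np.arange(0, dur, 1 / fs)
positions = np.arange(8)                # 8 electrodes along one cortical axis
phase_step = 0.4                        # imposed phase offset per electrode (rad)

# Each electrode sees the same oscillation, shifted in phase with position.
lfp = np.array([np.sin(2 * np.pi * freq * t - phase_step * p) for p in positions])

# Complex demodulation: multiplying by a reference oscillation and averaging
# over whole cycles isolates each electrode's phase offset.
ref = np.exp(-1j * 2 * np.pi * freq * t)
phases = np.angle((lfp * ref).mean(axis=1))

# A traveling wave shows up as a consistent slope of (unwrapped) phase
# against electrode position.
slope = np.polyfit(positions, np.unwrap(phases), 1)[0]
print(f"estimated phase gradient: {slope:.2f} rad/electrode")  # ~ -0.40
```

A near-zero slope would instead indicate a standing (synchronous) oscillation rather than a wave propagating across the array.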
Deep Neural Imputation: A Framework for Recovering Incomplete Brain Recordings
COSYNE 2023
Describing neural encoding from large-scale brain recordings: A deep learning model of the central auditory system
FENS Forum 2024
Human local field potential brain recordings during performance of a multilingual battery of cognitive and eye-tracking tasks
FENS Forum 2024