Brain
brain representation
Connecting structure and function in early visual circuits
How does the brain interpret signals from the outside world? Walking through a park, you might take for granted the ease with which you can understand what you see. Rather than seeing a series of still snapshots, you are able to see simple, fluid movement — of dogs running, squirrels foraging, or kids playing basketball. You can track their paths and know where they are headed without much thought. “How does this process take place?” asks Rudy Behnia, PhD, a principal investigator at Columbia’s Mortimer B. Zuckerman Mind Brain Behavior Institute. “For most of us, it’s hard to imagine a world where we can’t see motion, shapes, and color; where we can’t have a representation of the physical world in our head.” And yet this representation does not happen automatically — our brain has no direct connection with the outside world. Instead, it interprets information taken in by our senses. Dr. Behnia is studying how the brain builds these representations. As a starting point, she focuses on how we see motion.
Characterising the brain representations behind variations in real-world visual behaviour
Not all individuals are equally competent at recognizing the faces they interact with. Revealing how the brains of different individuals support variations in this ability is a crucial step towards understanding real-world human visual behaviour. In this talk, I will present findings from a large high-density EEG dataset (>100k trials of participants processing various stimulus categories) and computational approaches aimed at characterising the brain representations behind the real-world proficiency of “super-recognizers”, individuals at the top of the face recognition ability spectrum. Using decoding analysis of time-resolved EEG patterns, we predicted with high precision the trial-by-trial activity of super-recognizer participants, and showed that evidence for variations in face recognition ability is distributed across early, intermediate, and late brain processing steps. Computational modeling of the underlying brain activity uncovered two representational signatures supporting higher face recognition ability: i) mid-level visual and ii) semantic computations. The two components were dissociable in processing time (the former around the N170, the latter around the P600) and in level of computation (the former emerging from mid-level layers of visual convolutional neural networks, the latter from a semantic model characterising sentence descriptions of the images). I will conclude by presenting ongoing analyses of a well-known case of acquired prosopagnosia (PS), using similar computational modeling of high-density EEG activity.
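The abstract does not describe the decoding pipeline in detail, but the general logic of time-resolved EEG decoding can be sketched briefly. The snippet below is a minimal illustration on synthetic data: the array shapes, the classifier, and the injected “N170-like” effect are all hypothetical choices for the example, not the models or parameters used in the study.

```python
# Minimal sketch of time-resolved ("decoding over time") analysis on EEG-like data.
# Synthetic data only: dimensions and effect sizes are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 200, 64, 100     # hypothetical dataset dimensions
X = rng.standard_normal((n_trials, n_channels, n_times))
y = rng.integers(0, 2, n_trials)                 # e.g., face vs. non-face trials

# Inject a weak class-dependent signal in a mid-latency window (loosely "N170-like").
X[y == 1, :10, 30:45] += 0.3

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# Fit and cross-validate one classifier per time point, yielding decoding accuracy over time.
accuracy = np.array([
    cross_val_score(clf, X[:, :, t], y, cv=5).mean()
    for t in range(n_times)
])
print("peak decoding accuracy:", accuracy.max(), "at time index", int(accuracy.argmax()))
```

Plotting the resulting accuracy time course is what reveals at which processing steps (early, intermediate, late) class information becomes decodable.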
Sensorimotor-independent brain representations in association cortices
How flexible are association cortices? I will present a series of fMRI experiments addressing this question by investigating individuals born without hands, who use their feet as effectors to perform everyday actions. The results suggest that computations in association cortices are abstracted from visuomotor features and experience, similarly to the visual independence of association networks in people born blind, highlighting these regions’ ability to compensate for the lack of experience in any specific modality. These findings also open new avenues for using effector independence in the action system for motor rehabilitation.
Do deep learning latent spaces resemble human brain representations?
In recent years, artificial neural networks have demonstrated human-like or super-human performance in many tasks, including image and speech recognition, natural language processing (NLP), and playing Go, chess, poker, and video games. One remarkable feature of the resulting models is that they can develop very intuitive latent representations of their inputs. In these latent spaces, simple linear operations tend to give meaningful results, as in the well-known word analogy QUEEN - WOMAN + MAN = KING. We postulate that human brain representations share essential properties with these deep learning latent spaces. To verify this, we test whether artificial latent spaces can serve as a good model for decoding brain activity. We report improvements over state-of-the-art performance for reconstructing seen and imagined face images from fMRI brain activation patterns, using the latent space of a GAN (Generative Adversarial Network) model coupled with a Variational AutoEncoder (VAE). With another GAN model (BigBiGAN), we can decode and reconstruct natural scenes of any category from the corresponding brain activity. Our results suggest that deep learning can produce high-level representations approaching those found in the human brain. Finally, I will discuss whether these deep learning latent spaces could be relevant to the study of consciousness.
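To make the decoding idea concrete, here is a minimal sketch of the linear-readout step: learn a mapping from fMRI voxel patterns to coordinates in a pretrained latent space, which a generator could then turn back into an image. Everything below is synthetic, the dimensions are hypothetical, and the actual VAE-GAN and BigBiGAN models from the talk are not reproduced.

```python
# Sketch of decoding brain activity into a latent space with a linear model.
# Synthetic data only; "generator" in the final comment is a placeholder for a
# pretrained image generator, not an implemented function.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_trials, n_voxels, latent_dim = 500, 2000, 128   # illustrative sizes

# Pretend latent codes of the presented images (normally produced by an encoder such as a VAE).
Z = rng.standard_normal((n_trials, latent_dim))

# Simulate fMRI responses as a noisy linear projection of those latents.
W_true = rng.standard_normal((latent_dim, n_voxels)) / np.sqrt(latent_dim)
X = Z @ W_true + 0.5 * rng.standard_normal((n_trials, n_voxels))

X_tr, X_te, Z_tr, Z_te = train_test_split(X, Z, test_size=0.2, random_state=0)

# Ridge regression from voxel patterns to latent coordinates.
decoder = Ridge(alpha=10.0).fit(X_tr, Z_tr)
Z_pred = decoder.predict(X_te)

# Correlation between predicted and true latent vectors, per test trial.
corr = [np.corrcoef(a, b)[0, 1] for a, b in zip(Z_pred, Z_te)]
print("mean trial-wise latent correlation:", float(np.mean(corr)))

# In a full pipeline, Z_pred would be passed to the generator,
# e.g. reconstruction = generator(Z_pred), to visualise the seen or imagined image.
```

The choice of a simple linear readout reflects the abstract’s premise: if brain representations and deep-network latent spaces share structure, a linear map between them should already support meaningful reconstruction.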
Uncertainty in perceptual decision-making
Whether we are deciding on Covid-related restrictions, estimating a ball’s trajectory when playing tennis, or interpreting radiological images, almost any choice we make is based on uncertain evidence. How do we infer that information is more or less reliable when making these decisions? How does the brain represent knowledge of this uncertainty? In this talk, I will present recent neuroimaging data combined with novel analysis tools to address these questions. Our results indicate that sensory uncertainty can reliably be estimated from activity in the human visual cortex on a trial-by-trial basis, and moreover that observers appear to rely on this uncertainty when making perceptual decisions.
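As an illustration of how uncertainty might be read out from population activity, the sketch below decodes a posterior over stimulus orientation from a simulated, noisily tuned population and takes the posterior’s circular spread as that trial’s sensory uncertainty. It is a toy model on synthetic data, with made-up tuning and noise parameters, and is not the probabilistic decoder used in the work described in the talk.

```python
# Toy probabilistic decoder: infer a posterior over orientation from a noisy
# population response and report its width as trial-by-trial uncertainty.
import numpy as np

rng = np.random.default_rng(0)
n_units = 50
preferred = np.linspace(0, np.pi, n_units, endpoint=False)   # preferred orientations
kappa, noise_sd = 2.0, 0.4                                    # assumed tuning width and noise

def tuning(theta):
    """Mean response of each unit to orientation theta (von Mises-like tuning)."""
    return np.exp(kappa * (np.cos(2 * (theta - preferred)) - 1))

# One simulated trial: a 45-degree grating plus Gaussian response noise.
theta_true = np.pi / 4
r = tuning(theta_true) + noise_sd * rng.standard_normal(n_units)

# Grid over candidate orientations; Gaussian likelihood of the observed response.
grid = np.linspace(0, np.pi, 180, endpoint=False)
log_like = np.array([-np.sum((r - tuning(th)) ** 2) / (2 * noise_sd ** 2) for th in grid])
posterior = np.exp(log_like - log_like.max())
posterior /= posterior.sum()

estimate = grid[posterior.argmax()]
# Circular spread of the posterior (orientations have period pi, hence the doubled angles).
R = np.abs(np.sum(posterior * np.exp(2j * grid)))
uncertainty = np.sqrt(-2 * np.log(R)) / 2
print(f"decoded orientation: {np.degrees(estimate):.1f} deg, "
      f"uncertainty: {np.degrees(uncertainty):.2f} deg")
```

Noisier trials yield broader posteriors and hence larger decoded uncertainty, which is the quantity one would then relate to the observer’s perceptual decisions.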