faces
“Development and application of gaze control models for active perception”
Gaze shifts in humans serve to direct the high-resolution vision provided by the fovea towards areas of interest in the environment. Gaze can therefore be considered a proxy for attention, or an indicator of the relative importance of different parts of the environment. In this talk, we discuss the development of generative models of human gaze in response to visual input. We discuss how such models can be learned, both using supervised learning and using implicit feedback as an agent interacts with the environment, the latter being more plausible in biological agents. We also discuss two ways such models can be used. First, they can be used to improve the performance of artificial autonomous systems, in applications such as autonomous navigation. Second, because these models are contingent on the human’s task, goals, and/or state in the context of the environment, observations of gaze can be used to infer information about user intent. This information can be used to improve human-machine and human-robot interaction, by making interfaces more anticipative. We discuss example applications in gaze-typing, robotic tele-operation, and human-robot interaction.
SWEBAGS conference 2024: Shared network mechanisms of dopamine and deep brain stimulation for the treatment of Parkinson’s disease: From modulation of oscillatory cortex – basal ganglia communication to intelligent clinical brain computer interfaces
Imagining and seeing: two faces of prosopagnosia
Stability of visual processing in passive and active vision
The visual system faces a dual challenge. On the one hand, features of the natural visual environment should be stably processed - irrespective of ongoing wiring changes, representational drift, and behavior. On the other hand, eye, head, and body motion require a robust integration of pose and gaze shifts in visual computations for a stable perception of the world. We address these dimensions of stable visual processing by studying the circuit mechanism of long-term representational stability, focusing on the role of plasticity, network structure, experience, and behavioral state while recording large-scale neuronal activity with miniature two-photon microscopy.
Modeling idiosyncratic evaluation of faces
Deepfake Detection in Super-Recognizers and Police Officers
Using videos from the Deepfake Detection Challenge (cf. Groh et al., 2021), we investigated human deepfake detection performance (DDP) in two unique observer groups: Super-Recognizers (SRs) and "normal" officers from among the 18K members of the Berlin Police. SRs were identified either via previously proposed lab-based procedures (Ramon, 2021) or via the only existing tool for SR identification involving increasingly challenging, authentic forensic material: beSure® (Berlin Test For Super-Recognizer Identification; Ramon & Rjosk, 2022). Across two experiments we examined DDP in participants who judged single videos and pairs of videos in a 2AFC decision setting. We explored speed-accuracy trade-offs in DDP and compared DDP between lab-identified SRs and non-SRs, and police officers whose face identity processing skills had been extensively tested using challenging material. In this talk I will discuss our surprising findings and argue that further work is needed to determine whether face identity processing is related to DDP or not.
Recognizing Faces: Insights from Group and Individual Differences
Towards Human Systems Biology of Sleep/Wake Cycles: Phosphorylation Hypothesis of Sleep
The field of human biology faces three major technological challenges. Firstly, the causation problem is difficult to address in humans compared to model animals. Secondly, the complexity problem arises from the lack of a comprehensive cell atlas for the human body, despite its well-defined cellular composition. Lastly, the heterogeneity problem arises from significant variations in both genetic and environmental factors among individuals. To tackle these challenges, we have developed innovative approaches. These include 1) mammalian next-generation genetics, such as Triple CRISPR for knockout (KO) mice and ES mice for knock-in (KI) mice, which enables causation studies without traditional breeding methods; 2) whole-body/brain cell profiling techniques, such as CUBIC, to unravel the complexity of cellular composition; and 3) accurate and user-friendly technologies for measuring sleep and wake states, exemplified by ACCEL, to facilitate the monitoring of fundamental brain states in real-world settings and thus address heterogeneity in humans.
Vocal emotion perception at millisecond speed
The human voice is possibly the most important sound category in the social landscape. Compared to other non-verbal emotion signals, the voice is particularly effective in communicating emotions: it can carry information over large distances and independently of sight. However, the study of vocal emotion expression and perception is, surprisingly, far less developed than the study of emotion in faces, and its neural and functional correlates remain elusive. As the voice represents a dynamically changing auditory stimulus, temporally sensitive techniques such as EEG are particularly informative. In this talk, the dynamic neurocognitive operations that take place when we listen to vocal emotions will be specified, with a focus on the effects of stimulus type, task demands, and speaker and listener characteristics (e.g., age). These studies suggest that emotional voice perception is not only a matter of how one speaks but also of who speaks and who listens. Implications of these findings for the understanding of psychiatric disorders such as schizophrenia will be discussed.
Feedback control in the nervous system: from cells and circuits to behaviour
The nervous system is fundamentally a closed loop control device: the output of actions continually influences the internal state and subsequent actions. This is true at the single cell and even the molecular level, where “actions” take the form of signals that are fed back to achieve a variety of functions, including homeostasis, excitability and various kinds of multistability that allow switching and storage of memory. It is also true at the behavioural level, where an animal’s motor actions directly influence sensory input on short timescales, and higher level information about goals and intended actions are continually updated on the basis of current and past actions. Studying the brain in a closed loop setting requires a multidisciplinary approach, leveraging engineering and theory as well as advances in measuring and manipulating the nervous system. I will describe our recent attempts to achieve this fusion of approaches at multiple levels in the nervous system, from synaptic signalling to closed loop brain machine interfaces.
Learning to see stuff
Humans are very good at visually recognizing materials and inferring their properties. Without touching surfaces, we can usually tell what they would feel like, and we enjoy vivid visual intuitions about how they typically behave. This is impressive because the retinal image that the visual system receives as input is the result of complex interactions between many physical processes. Somehow the brain has to disentangle these different factors. I will present some recent work in which we show that an unsupervised neural network trained on images of surfaces spontaneously learns to disentangle reflectance, lighting and shape. However, the disentanglement is not perfect, and we find that as a result the network not only predicts the broad successes of human gloss perception, but also the specific pattern of errors that humans exhibit on an image-by-image basis. I will argue this has important implications for thinking about appearance and vision more broadly.
Representations of people in the brain
Faces and voices convey much of the non-verbal information that we use when communicating with other people. We look at faces and listen to voices to recognize others, understand how they are feeling, and decide how to act. Recent research in my lab aims to investigate whether there are similar coding mechanisms to represent faces and voices, and whether there are brain regions that integrate information across the visual and auditory modalities. In the first part of my talk, I will focus on an fMRI study in which we found that a region of the posterior STS exhibits modality-general representations of familiar people that can be similarly driven by someone’s face and their voice (Tsantani et al. 2019). In the second part of the talk, I will describe our recent attempts to shed light on the type of information that is represented in different face-responsive brain regions (Tsantani et al., 2021).
What do neurons want?
New Insights into the Neural Machinery of Face Recognition
Chemistry of the adaptive mind: lessons from dopamine
The human brain faces a variety of computational dilemmas, including the flexibility/stability, speed/accuracy, and labor/leisure tradeoffs. I will argue that striatal dopamine is particularly well suited to dynamically regulate these computational tradeoffs depending on constantly changing task demands. This working hypothesis is grounded in evidence from recent studies on learning, motivation and cognitive control in human volunteers, using chemical PET, psychopharmacology, and/or fMRI. These studies also begin to elucidate the mechanisms underlying the huge variability in catecholaminergic drug effects across different individuals and across different task contexts. For example, I will demonstrate how the effects of the most commonly used psychostimulant, methylphenidate, on learning, Pavlovian control, and effortful instrumental control depend on fluctuations in current environmental volatility, on individual differences in working memory capacity, and on opportunity cost, respectively.
Faking emotions and a therapeutic role for robots and chatbots: Ethics of using AI in psychotherapy
In recent years, there has been a proliferation of social robots and chatbots that are designed so that users make an emotional attachment with them. This talk will start by presenting the first such chatbot, a program called Eliza designed by Joseph Weizenbaum in the mid 1960s. Then we will look at some recent robots and chatbots with Eliza-like interfaces and examine their benefits as well as various ethical issues raised by deploying such systems.
Genetic-based brain machine interfaces for visual restoration
Visual restoration, with its requirements for high pixel counts and high refresh rates, is certainly the greatest challenge for brain-machine interfaces. In recent years, we have brought retinal prostheses and optogenetic therapy to successful clinical trials. Concerning visual restoration at the cortical level, prostheses have shown efficacy only for limited periods of time and limited pixel numbers. We are investigating the potential of sonogenetics to develop a non-contact brain-machine interface allowing long-lasting activation of the visual cortex. The presentation will introduce our genetic-based brain-machine interfaces for visual restoration at the retinal and cortical levels.
Deception, ExoNETs, SmushWare & Organic Data: Tech-facilitated neurorehabilitation & human-machine training
Making use of visual display technology and human-robotic interfaces, many researchers have illustrated various opportunities to distort visual and physical realities. We have had success with interventions such as error augmentation, sensory crossover, and negative viscosity. Judicious application of these techniques leads to training situations that enhance the learning process and can restore movement ability after neural injury. I will trace out clinical studies that have employed such technologies to improve health and function, and share some leading-edge insights that include deceiving the patient, moving the "smarts" of software into the hardware, and examining clinical effectiveness.
Face Pareidolia: biases and the brain
Adaptive Deep Brain Stimulation: Investigational System Development at the Edge of Clinical Brain Computer Interfacing
Over the last few decades, the use of deep brain stimulation (DBS) to improve the treatment of those with neurological movement disorders represents a critical success story in the development of invasive neurotechnology and the promise of brain-computer interfaces (BCI) to improve the lives of those suffering from incurable neurological disorders. In the last decade, investigational devices capable of recording and streaming neural activity from chronically implanted therapeutic electrodes have supercharged research into clinical applications of BCI, enabling in-human studies investigating the use of adaptive stimulation algorithms to further enhance therapeutic outcomes and improve future device performance. In this talk, Dr. Herron will review ongoing clinical research efforts in the field of adaptive DBS systems and algorithms. This will include an overview of DBS in current clinical practice, the development of bidirectional clinical-use research platforms, ongoing algorithm evaluation efforts, and a discussion of current adoption barriers to be addressed in future work.
Spatial Integration in Normal Face Processing and Its Breakdown in Congenital Prosopagnosia
NMC4 Short Talk: Decoding finger movements from human posterior parietal cortex
Restoring hand function is a top priority for individuals with tetraplegia. This challenge motivates considerable research on brain-computer interfaces (BCIs), which bypass damaged neural pathways to control paralyzed or prosthetic limbs. Here, we demonstrate BCI control of a prosthetic hand using intracortical recordings from the posterior parietal cortex (PPC). As part of an ongoing clinical trial, two participants with cervical spinal cord injury were each implanted with a 96-channel array in the left PPC. Across four sessions each, we recorded neural activity while they attempted to press individual fingers of the contralateral (right) hand. Single neurons modulated selectively for different finger movements. Offline, we accurately classified finger movements from neural firing rates using linear discriminant analysis (LDA) with cross-validation (accuracy = 90%; chance = 17%). Finally, the participants used the neural classifier online to control all five fingers of a BCI hand. Online control accuracy (86%; chance = 17%) exceeded that of previous state-of-the-art finger BCIs. Furthermore, offline, we could classify both flexion and extension of the right fingers, as well as flexion of all ten fingers. Our results indicate that neural recordings from PPC can be used to control prosthetic fingers, and may contribute to a hand restoration strategy for people with tetraplegia.
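As a rough illustration of the offline decoding step described above, the sketch below fits an LDA classifier to synthetic firing-rate features with cross-validation. The data, shapes, and the six-class setup are illustrative assumptions, not the study's actual pipeline.

```python
# Minimal sketch (not the study's code): classifying attempted finger movements
# from trial firing rates with LDA and cross-validation, as described in the abstract.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels, n_classes = 240, 96, 6   # 6 classes gives chance ~ 17% (assumed setup)
labels = rng.integers(0, n_classes, size=n_trials)

# Simulated firing-rate features: each class gets a distinct mean rate pattern.
class_means = rng.normal(0, 1, size=(n_classes, n_channels))
rates = class_means[labels] + rng.normal(0, 1.0, size=(n_trials, n_channels))

clf = LinearDiscriminantAnalysis()
acc = cross_val_score(clf, rates, labels, cv=5)  # 5-fold cross-validated accuracy
print(f"cross-validated accuracy: {acc.mean():.2f} (chance ~ {1/n_classes:.2f})")
```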
Neural network models of binocular depth perception
Our visual experience of living in a three-dimensional world is created from the information contained in the two-dimensional images projected into our eyes. The overlapping visual fields of the two eyes mean that their images are highly correlated, and that the small differences that are present represent an important cue to depth. Binocular neurons encode this information in a way that both maximises efficiency and optimises disparity tuning for the depth structures that are found in our natural environment. Neural network models provide a clear account of how these binocular neurons encode the local binocular disparity in images. These models can be expanded to multi-layer models that are sensitive to salient features of scenes, such as the orientations and discontinuities between surfaces. These deep neural network models have also shown the importance of binocular disparity for the segmentation of images into separate objects, in addition to the estimation of distance. These results demonstrate the usefulness of machine learning approaches as a tool for understanding biological vision.
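To make the disparity cue mentioned above concrete, here is a toy sketch of a classical cross-correlation disparity estimator. It is my own illustration of the underlying cue, not the neural network models discussed in the talk: the horizontal shift that best aligns left- and right-eye signals indicates relative depth.

```python
# Toy disparity estimation by cross-correlation (illustrative only).
import numpy as np

rng = np.random.default_rng(3)
left = rng.normal(size=200)              # a 1-D "image row" seen by the left eye
true_disparity = 4
right = np.roll(left, true_disparity)    # the right eye sees a horizontally shifted copy

def estimate_disparity(left, right, max_shift=10):
    shifts = list(range(-max_shift, max_shift + 1))
    corrs = [np.corrcoef(left, np.roll(right, -s))[0, 1] for s in shifts]
    return shifts[int(np.argmax(corrs))]  # shift with maximal correlation

print(estimate_disparity(left, right))    # recovers the shift of 4
```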
NMC4 Short Talk: Hypothesis-neutral response-optimized models of higher-order visual cortex reveal strong semantic selectivity
Modeling neural responses to naturalistic stimuli has been instrumental in advancing our understanding of the visual system. Dominant computational modeling efforts in this direction have been deeply rooted in preconceived hypotheses. In contrast, hypothesis-neutral computational methodologies with minimal a priori assumptions, which bring neuroscience data directly to bear on the model development process, are likely to be much more flexible and effective in modeling and understanding tuning properties throughout the visual system. In this study, we develop a hypothesis-neutral approach and characterize response selectivity in the human visual cortex exhaustively and systematically via response-optimized deep neural network models. First, we leverage the unprecedented scale and quality of the recently released Natural Scenes Dataset to constrain parametrized neural models of higher-order visual systems and achieve novel predictive precision, in some cases significantly outperforming the predictive success of state-of-the-art task-optimized models. Next, we ask what kinds of functional properties emerge spontaneously in these response-optimized models. We examine trained networks through structural (feature visualization) as well as functional (feature verbalization) analyses by running 'virtual' fMRI experiments on large-scale probe datasets. Strikingly, despite receiving no category-level supervision (the models are optimized from scratch solely to predict brain responses), units in the optimized networks act as detectors for semantic concepts like 'faces' or 'words', thereby providing some of the strongest evidence for categorical selectivity in these visual areas. The observed selectivity in model neurons raises another question: are the category-selective units simply functioning as detectors for their preferred category, or are they a by-product of a non-category-specific visual processing mechanism? To investigate this, we create selective deprivations in the visual diet of these response-optimized networks and study semantic selectivity in the resulting 'deprived' networks, thereby also shedding light on the role of specific visual experiences in shaping neuronal tuning. Together with this new class of data-driven models and novel model interpretability techniques, our study illustrates that DNN models of visual cortex need not be conceived as obscure models with limited explanatory power, but rather as powerful, unifying tools for probing the nature of representations and computations in the brain.
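For concreteness, here is a minimal sketch of what "response-optimized" means in practice: a small convolutional network trained from scratch, with no category labels, solely to predict voxel responses to images. The architecture, shapes, and data below are placeholder assumptions, not the study's implementation.

```python
# Illustrative sketch (not the authors' code) of a response-optimized model:
# the only training signal is the measured brain response to each image.
import torch
import torch.nn as nn

n_voxels = 1000                            # voxels in a higher-order visual ROI (assumed)
images = torch.randn(32, 3, 96, 96)        # a batch of stimuli (placeholder data)
responses = torch.randn(32, n_voxels)      # measured voxel responses (placeholder data)

model = nn.Sequential(
    nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
    nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(4), nn.Flatten(),
    nn.Linear(64 * 4 * 4, n_voxels),       # linear readout into voxel space
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(100):                    # the sole objective is response prediction
    loss = nn.functional.mse_loss(model(images), responses)
    opt.zero_grad(); loss.backward(); opt.step()
```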
Advancing Brain-Computer Interfaces by adopting a neural population approach
Brain-computer interfaces (BCIs) have afforded paralysed users “mental control” of computer cursors and robots, and even of electrical stimulators that reanimate their own limbs. Most existing BCIs map the activity of hundreds of motor cortical neurons recorded with implanted electrodes into control signals to drive these devices. Despite these impressive advances, the field is facing a number of challenges that need to be overcome in order for BCIs to become widely used during daily living. In this talk, I will focus on two such challenges: 1) having BCIs that allow performing a broad range of actions; and 2) having BCIs whose performance is robust over long time periods. I will present recent studies from our group in which we apply neuroscientific findings to address both issues. This research is based on an emerging view about how the brain works. Our proposal is that brain function is not based on the independent modulation of the activity of single neurons, but rather on specific population-wide activity patterns, which mathematically define a “neural manifold”. I will provide evidence in favour of such a neural manifold view of brain function, and illustrate how advances in systems neuroscience may be critical for the clinical success of BCIs.
Learning to see Stuff
Materials with complex appearances, like textiles and foodstuffs, pose challenges for conventional theories of vision. How does the brain learn to see properties of the world—like the glossiness of a surface—that cannot be measured by any other senses? Recent advances in unsupervised deep learning may help shed light on material perception. I will show how an unsupervised deep neural network trained on an artificial environment of surfaces that have different shapes, materials and lighting, spontaneously comes to encode those factors in its internal representations. Most strikingly, the model makes patterns of errors in its perception of material that follow, on an image-by-image basis, the patterns of errors made by human observers. Unsupervised deep learning may provide a coherent framework for how many perceptual dimensions form, in material perception and beyond.
Interactions between visual cortical neurons that give rise to conscious perception
I will discuss the mechanisms that determine whether a weak visual stimulus will reach consciousness or not. If the stimulus is simple, early visual cortex acts as a relay station that sends the information to higher visual areas. If the stimulus arrives with at least a minimal strength, it will be stored in working memory and can be reported. However, during more complex visual perceptions, which for example depend on the segregation of a figure from the background, the role of early visual cortex goes beyond a simple relay: it acts as a cognitive blackboard, and conscious perception depends on it. Our results inspire new approaches to create a visual prosthesis for the blind, by creating a direct interface with the visual brain. I will discuss how high-channel-number interfaces with the visual cortex might be used to restore a rudimentary form of vision in blind individuals.
What Art can tell us about the Brain
Artists have been doing experiments on vision longer than neurobiologists. Some major works of art have provided insights as to how we see; some of these insights are so fundamental that they can be understood in terms of the underlying neurobiology. For example, artists have long realized that color and luminance can play independent roles in visual perception. Picasso said, "Colors are only symbols. Reality is to be found in luminance alone." This observation has a parallel in the functional subdivision of our visual systems, where color and luminance are processed by the evolutionarily newer, primate-specific What system and the older, colorblind Where (or How) system. Many techniques developed over the centuries by artists can be understood in terms of the parallel organization of our visual systems. I will explore how the segregation of color and luminance processing is the basis for why some Impressionist paintings seem to shimmer, why some op art paintings seem to move, some principles of Matisse's use of color, and how the Impressionists painted "air". Central and peripheral vision are distinct, and I will show how the differences in resolution across our visual field make the Mona Lisa's smile elusive and produce a dynamic illusion in Pointillist paintings, Chuck Close paintings, and photomosaics. I will explore how artists have figured out important features about how our brains extract relevant information about faces and objects, and I will discuss why learning disabilities may be associated with artistic talent.
Multisensory speech perception
Brain-Machine Interfaces: Beyond Decoding
A brain-machine interface (BMI) is a system that enables users to interact with computers and robots through the voluntary modulation of their brain activity. Such a BMI is particularly relevant as an aid for patients with severe neuromuscular disabilities, although it also opens up new possibilities in human-machine interaction for able-bodied people. Real-time signal processing and decoding of brain signals are certainly at the heart of a BMI. Yet, this does not suffice for subjects to operate a brain-controlled device. In the first part of my talk I will review some of our recent studies, most involving participants with severe motor disabilities, that illustrate additional principles of a reliable BMI that enable users to operate different devices. In particular, I will show how an exclusive focus on machine learning is not necessarily the solution, as it may not promote subject learning. This highlights the need for a comprehensive mutual learning methodology that fosters learning at the three critical levels of the machine, the subject and the application. To further illustrate that BMI is more than just decoding, I will discuss how to enhance subject learning and BMI performance through appropriate feedback modalities. Finally, I will show how these principles translate to motor rehabilitation, where, in a controlled trial, chronic stroke patients achieved significant functional recovery after the intervention, which was retained 6-12 months after the end of therapy.
Face distortions as a window into face perception
Prosopometamorphopsia (PMO) is a disorder characterized by face perception distortions. People with PMO see facial features that appear to melt, stretch, and change size and position. I'll discuss research on PMO carried out by my lab and others that sheds light on the cognitive and neural organization of face perception. https://facedistortion.faceblind.org/
What are you looking at? Adventures in human gaze behaviour
Faces influence saccade programming
Several studies have shown that face stimuli elicit extremely fast and involuntary saccadic responses toward them, relative to other categories of visual stimuli. In this talk, I will mainly focus on recent research from our team that investigated to what extent face stimuli influence the programming and execution of saccades. In this research, two experiments were performed using a saccadic choice task: two images (one with a face, one with a vehicle) were simultaneously displayed in the left and right visual fields of participants, who had to execute a saccade toward the image (Experiment 1) or toward a cross added in the center of the image (Experiment 2) containing a target stimulus (a face or a vehicle). As expected, participants were faster to execute a saccade toward a face than toward a vehicle and made fewer errors. We also observed shorter saccades toward vehicle than face targets, even when participants were explicitly asked to perform their saccades toward a specific location (Experiment 2). Further analyses, which I will detail in the talk, showed that error saccades might be interrupted in mid-flight to initiate a concurrently programmed corrective saccade.
The neuroscience of color and what makes primates special
Among mammals, excellent color vision has evolved only in certain primates. And yet, color is often assumed to be just a low-level stimulus feature with a modest role in encoding and recognizing objects. The rationale for this dogma is compelling: object recognition is excellent in grayscale images (consider black-and-white movies, where faces, places, objects, and story are readily apparent). In my talk I will discuss experiments in which we used color as a tool to uncover an organizational plan in inferior temporal cortex (parallel, multistage processing for places, faces, colors, and objects) and a visual-stimulus functional representation in prefrontal cortex (PFC). The discovery of an extensive network of color-biased domains within IT and PFC, regions implicated in high-level object vision and executive functions, compels a re-evaluation of the role of color in behavior. I will discuss behavioral studies prompted by the neurobiology that uncover a universal principle for color categorization across languages, the first systematic study of the color statistics of objects, a chromatic mechanism by which the brain may compute animacy, and a surprising paradoxical impact of memory on face color. Taken together, my talk will put forward the argument that color is not primarily for object recognition, but rather for the assessment of the likely behavioral relevance, or meaning, of the stuff we see.
Error correction and reliability timescale in converging cortical networks
Rapidly changing inputs such as visual scenes and auditory landscapes are transmitted over several synaptic interfaces and perceived with little loss of detail, yet individual neurons are typically “noisy” and cortico-cortical connections are typically “weak”. To understand how information embodied in spike trains is transmitted in a lossless manner, we focus on a single synaptic interface: between pyramidal cells and putative interneurons. Using arbitrary white noise patterns injected intra-cortically as photocurrents in freely-moving mice, we find that directly-activated cells exhibit precision of several milliseconds, but post-synaptic, indirectly-activated cells exhibit higher precision. Considering multiple identical messages, the reliability of directly-activated cells peaks at a timescale of dozens of milliseconds, whereas indirectly-activated cells exhibit an order-of-magnitude faster timescale. Using data-driven modelling, we find that error correction is consistent with non-linear amplification of coincident spikes.
Neuroimmune interactions in Cardiovascular Diseases
The nervous system and the immune system share the ability to exert gatekeeper roles at the interfaces between the internal and external environment. Although the interaction between these two evolutionarily highly conserved systems has long been recognized, the pathophysiological mechanisms regulating their reciprocal crosstalk in cardiovascular diseases became an object of investigation only more recently. In recent years, our group elucidated how the autonomic nervous system controls the splenic immunity recruited by hypertensive challenges. In my talk, I will focus on the molecular mechanisms that regulate the neuro-immune crosstalk in hypertension. I will elaborate on how mechanistic insights into this brain-spleen axis led us to uncover a new molecular pathway mediating the neuroimmune interaction, established by the noradrenergic-mediated release in the spleen of placental growth factor (PlGF), an angiogenic growth factor potentially targetable with pharmacological approaches.
Data-driven Artificial Social Intelligence: From Social Appropriateness to Fairness
Designing artificially intelligent systems and interfaces with socio-emotional skills is a challenging task. Progress in industry and developments in academia give us a positive outlook; however, the artificial social and emotional intelligence of current technology is still limited. My lab’s research has been pushing the state of the art in a wide spectrum of research topics in this area, including the design and creation of new datasets; novel feature representations and learning algorithms for sensing and understanding human nonverbal behaviours in solo, dyadic and group settings; designing longitudinal human-robot interaction studies for wellbeing; and investigating how to mitigate the bias that creeps into these systems. In this talk, I will present some of my research team’s explorations in these areas, including social appropriateness of robot actions, virtual reality based cognitive training with affective adaptation, and bias and fairness in data-driven emotionally intelligent systems.
A domain-general dynamic framework for social perception
Initial social perceptions are often thought to reflect direct “read outs” of facial features. Instead, we outline a perspective whereby initial perceptions emerge from an automatic yet gradual process of negotiation between the perceptual cues inherent to a person (e.g., facial cues) and top-down social cognitive processes harbored within perceivers. This perspective argues that perceivers’ social-conceptual knowledge in particular can have a fundamental structuring role in perceptions, and thus how we think about social groups, emotions, or personality traits helps determine how we visually perceive them in other people. Integrative evidence from real-time behavioral paradigms (e.g., mouse-tracking), multivariate fMRI, and computational modeling will be discussed. Together, this work shows that the way we use facial cues to categorize other people into social groups (e.g., gender, race), perceive their emotion (e.g., anger), or infer their personality (e.g., trustworthiness) are all fundamentally shaped by prior social-conceptual knowledge and stereotypical assumptions. We find that these top-down impacts on initial perceptions are driven by the interplay of higher-order prefrontal regions involved in top-down predictions and lower-level fusiform regions involved in face processing. We argue that the perception of social categories, emotions, and traits from faces can all be conceived as resulting from an integrated system relying on domain-general cognitive properties. In this system, both visual and social cognitive processes are in a close exchange, and initial social perceptions emerge in part out of the structure of social-conceptual knowledge.
Interactions between neurons during visual perception and restoring them in blindness
I will discuss the mechanisms that determine whether a weak visual stimulus will reach consciousness or not. If the stimulus is simple, early visual cortex acts as a relay station that sends the information to higher visual areas. If the stimulus arrives with at least a minimal strength, it will be stored in working memory. However, during more complex visual perceptions, which for example depend on the segregation of a figure from the background, the role of early visual cortex goes beyond a simple relay: it acts as a cognitive blackboard, and conscious perception depends on it. Our results also inspire new approaches to create a visual prosthesis for the blind, by creating a direct interface with the visual cortex. I will discuss how high-channel-number interfaces with the visual cortex might be used to restore a rudimentary form of vision in blind individuals.
How do humans recognise faces? Insights from biological and artificial face recognition systems
Global visual salience of competing stimuli
Current computational models of visual salience accurately predict the distribution of fixations on isolated visual stimuli. It is not known, however, whether the global salience of a stimulus, that is, its effectiveness in the competition for attention with other stimuli, is a function of its local salience or an independent measure. Further, do task and familiarity with the competing images influence eye movements? In this talk, I will present the analysis of a computational model of the global salience of natural images. We trained a machine learning algorithm to learn the direction of the first saccade of participants who freely observed pairs of images. The pairs balanced the combinations of new and already seen images, as well as task and task-free trials. The coefficients of the model provided a reliable measure of the likelihood of each image to attract the first fixation when seen next to another image, that is, its global salience. For example, images of close-up faces and images containing humans were consistently looked at first and were assigned higher global salience. Interestingly, we found that global salience cannot be explained by the feature-driven local salience of the images, that the influence of task and familiarity was rather small, and we reproduced the previously reported left-sided bias. This computational model of global salience allows us to analyse multiple other aspects of human visual perception of competing stimuli. In the talk, I will also present our latest results from analysing saccadic reaction time as a function of the global salience of the pair of images.
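As a rough sketch of how per-image coefficients can serve as global salience scores, the toy model below fits a logistic regression on the direction of the first saccade for simulated image pairs. The data and the exact model form are assumptions for illustration, not the authors' code; the intercept term can absorb the overall left-sided bias mentioned above.

```python
# Minimal sketch of a pairwise "global salience" model (illustrative assumption).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_images, n_trials = 50, 2000
true_salience = rng.normal(0, 1, n_images)           # hidden per-image salience

left = rng.integers(0, n_images, n_trials)
right = rng.integers(0, n_images, n_trials)
# Probability of a leftward first saccade grows with the salience difference.
p_left = 1 / (1 + np.exp(-(true_salience[left] - true_salience[right])))
y = rng.random(n_trials) < p_left                     # 1 = first saccade went left

# Design matrix: +1 for the left image, -1 for the right image of each pair.
X = np.zeros((n_trials, n_images))
X[np.arange(n_trials), left] += 1
X[np.arange(n_trials), right] -= 1

model = LogisticRegression(fit_intercept=True, max_iter=1000).fit(X, y)
global_salience = model.coef_.ravel()                 # one salience score per image
print(np.corrcoef(global_salience, true_salience)[0, 1])
```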
The many faces of KCC2 in the generation and suppression of seizures
KCC2, best known as the neuron-specific chloride extruder that sets the strength and polarity of GABAergic Cl-currents, is a multifunctional molecule which interacts with other ion-regulatory proteins and (structurally) with the neuronal cytoskeleton. Its multiple roles in the generation and suppression of seizures have been widely studied. In my talk, I will address some fundamental issues which are relevant in this field of research: What are EGABA shifts about? What is the role of KCC2 in shunting inhibition? What is meant by “the balance between excitation and inhibition” and, in this context, by the “NKCC1/KCC2 ratio”? Is down-regulation of KCC2 following neuronal trauma a manifestation of adaptive or maladaptive ionic plasticity? Under what conditions is K-Cl cotransport by KCC2 promoting seizures? Should we pay more attention to KCC2 as molecule involved in dendritic spine formation in brain areas such as the hippocampus? Most of these points are of potential importance also in the design of KCC2-targeting drugs and genetic manipulations aimed at combating seizures.
Is it Autism or Alexithymia? Explaining atypical socioemotional processing
Emotion processing is thought to be impaired in autism and linked to atypical visual exploration of, and arousal modulation to, others' faces and gaze, yet the evidence is equivocal. We propose that, where observed, atypical socioemotional processing is due to alexithymia, a distinct but frequently co-occurring condition that affects emotional self-awareness and interoception. In Study 1 (N = 80), we tested this hypothesis by studying the spatio-temporal dynamics and entropy of eye-gaze during emotion processing tasks. Evidence from traditional and novel methods revealed that atypical eye-gaze and emotion recognition is best predicted by alexithymia in both autistic and non-autistic individuals. In Study 2 (N = 70), we assessed interoceptive and autonomic signals implicated in socioemotional processing, and found evidence for alexithymia-driven (not autism-driven) effects on gaze and arousal modulation to emotions. We also conducted two large-scale studies (N = 1300), using confirmatory factor-analytic and network modelling, and found evidence that alexithymia and autism are distinct both at the latent level and in their intercorrelations. We argue that: 1) models of socioemotional processing in autism should conceptualise difficulties as intrinsic to alexithymia, and 2) assessment of alexithymia is crucial for diagnosis and personalised interventions in autism.
From oscillations to laminar responses - characterising the neural circuitry of autobiographical memories
Autobiographical memories are the ghosts of our past. Through them we visit places long departed, see faces once familiar, and hear voices now silent. These, often decades-old, personal experiences can be recalled on a whim or come unbidden into our everyday consciousness. Autobiographical memories are crucial to cognition because they facilitate almost everything we do, endow us with a sense of self and underwrite our capacity for autonomy. They are often compromised by common neurological and psychiatric pathologies with devastating effects. Despite autobiographical memories being central to everyday mental life, there is no agreed model of autobiographical memory retrieval, and we lack an understanding of the neural mechanisms involved. This precludes principled interventions to manage or alleviate memory deficits, and to test the efficacy of treatment regimens. This knowledge gap exists because autobiographical memories are challenging to study – they are immersive, multi-faceted, multi-modal, can stretch over long timescales and are grounded in the real world. One missing piece of the puzzle concerns the millisecond neural dynamics of autobiographical memory retrieval. Surprisingly, there are very few magnetoencephalography (MEG) studies examining such recall, despite the important insights this could offer into the activity and interactions of key brain regions such as the hippocampus and ventromedial prefrontal cortex. In this talk I will describe a series of MEG studies aimed at uncovering the neural circuitry underpinning the recollection of autobiographical memories, and how this changes as memories age. I will end by describing our progress on leveraging an exciting new technology – optically pumped MEG (OP-MEG) which, when combined with virtual reality, offers the opportunity to examine millisecond neural responses from the whole brain, including deep structures, while participants move within a virtual environment, with the attendant head motion and vestibular inputs.
A computational explanation for domain specificity in the human brain
Many regions of the human brain conduct highly specific functions, such as recognizing faces, understanding language, and thinking about other people’s thoughts. Why might this domain-specific organization be a good design strategy for brains, and what is the origin of domain specificity in the first place? In this talk, I will present recent work testing whether the segregation of face and object perception in human brains emerges naturally from an optimization for both tasks. We trained artificial neural networks on face and object recognition, and found that networks were able to perform both tasks well by spontaneously segregating them into distinct pathways. Critically, the networks had neither prior knowledge of nor any inductive bias about the tasks. Furthermore, networks optimized on object categorization together with tasks that apparently do not develop specialization in the human brain, such as food or cars, showed less task segregation. These results suggest that functional segregation can spontaneously emerge without a task-specific bias, and that the domain-specific organization of the cortex may reflect a computational optimization for the real-world tasks humans solve.
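One way to quantify the kind of task segregation described above is a per-unit selectivity index comparing a trained network's responses to face versus object inputs. The sketch below is my own illustration of such a metric on placeholder activations, not the authors' exact analysis.

```python
# Illustrative task-segregation metric (assumption, not the study's method).
import numpy as np

rng = np.random.default_rng(2)
n_units = 512
acts_faces = np.abs(rng.normal(1.0, 0.5, (1000, n_units)))    # placeholder activations to faces
acts_objects = np.abs(rng.normal(1.0, 0.5, (1000, n_units)))  # placeholder activations to objects

mf, mo = acts_faces.mean(0), acts_objects.mean(0)
selectivity = (mf - mo) / (mf + mo + 1e-9)   # +1 = face-only unit, -1 = object-only unit

# A segregated network has many units near +/-1; a mixed one clusters near 0.
print("fraction strongly task-selective:", np.mean(np.abs(selectivity) > 0.5))
```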
The Gist of False Memory
It has long been known that when viewing a set of images, we misjudge individual elements as being closer to the mean than they are (Hollingworth, 1910) and recall seeing the (absent) set mean (Deese, 1959; Roediger & McDermott, 1995). Recent studies found that viewing sets of images, simultaneously or sequentially, leads to perception of set statistics (mean, range) with poor memory for individual elements. Ensemble perception was found for sets of simple images (e.g. circles varying in size or brightness; lines of varying orientation), complex objects (e.g. faces of varying emotion), as well as for objects belonging to the same category. Even when the viewed set does not include its mean or prototype, observers nevertheless report and act as if they have seen this central image or object, a form of false memory. Physiologically, detailed sensory information at cortical input levels is processed hierarchically to form an integrated scene gist at higher levels. However, we are aware of the gist before the details. We propose that images and objects belonging to a set or category are represented as their gist, mean or prototype, plus individual differences from that gist. Under constrained viewing conditions, only the gist is perceived and remembered. This theory also provides a basis for compressed neural representation. Extending this theory to scenes and episodes supplies a generalized basis for false memories: they seem right and match generalized expectations, so they are believable without challenging examination. This theory could be tested by analyzing the typicality of false memories, compared to rejected alternatives.
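A toy worked example of the proposed "gist plus deviations" coding: items are stored as a set mean plus per-item residuals, and if only the gist survives, the reported item drifts toward the (possibly never-shown) mean. This is my own illustration of the idea stated in the abstract, with made-up numbers.

```python
# Toy illustration of gist-plus-residual coding and the resulting false memory.
import numpy as np

sizes = np.array([2.0, 3.0, 5.0, 6.0])        # displayed circle sizes; the mean (4.0) is never shown
gist = sizes.mean()                           # 4.0
residuals = sizes - gist                      # per-item deviations from the gist

full_recall = gist + residuals                # detailed memory reproduces the set exactly
gist_only_recall = np.full_like(sizes, gist)  # constrained viewing: residuals are lost
print(gist, gist_only_recall)                 # observer reports the absent mean item
```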
Towards a speech neuroprosthesis
I will review advances in understanding the cortical encoding of speech-related oral movements. These discoveries are being translated to develop algorithms to decode speech from population neural activity.
Motor BMIs for probing sensorimotor control and parsing distributed learning
Brain-machine interfaces (BMIs) change how the brain sends and receives information from the environment, opening new ways to probe brain function. For instance, motor BMIs allow us to precisely define and manipulate the sensorimotor loop, which has enabled new insights into motor control and learning. In this talk, I’ll first present an example study where sensory-motor loop manipulations in BMI allowed us to probe feed-forward and feedback control mechanisms in ways that are not possible in the natural motor system. This study shed light on sensorimotor processing and, in turn, led to state-of-the-art neural interface performance. I’ll then survey recent work highlighting the likelihood that BMIs, much like natural motor learning, engage multiple distributed learning mechanisms that can be carefully interrogated with BMI.
Neural Population Perspectives on Learning and Motor Control
Learning is a population phenomenon. Since it is the organized activity of populations of neurons that causes movement, learning a new skill must involve reshaping those population activity patterns. Seeing how the brain does this has been elusive, but a brain-computer interface approach can yield new insight. We presented monkeys with novel BCI mappings that we knew would be difficult for them to learn to control. Over several days, we observed the emergence of new patterns of neural activity that endowed the animals with the ability to perform better at the BCI task. We speculate that there also exists a direct relationship between new patterns of neural activity and new abilities during natural movements, but it is much harder to see in that setting.
Understanding sensorimotor control at global and local scales
The brain is remarkably flexible, and appears to instantly reconfigure its processing depending on what’s needed to solve a task at hand: fMRI studies indicate that distal brain areas appear to fluidly couple and decouple with one another depending on behavioral context. We investigated how the brain coordinates its activity across areas to inform complex, top-down control behaviors. Animals were trained to perform a novel brain machine interface task to guide a visual cursor to a reward zone, using activity recorded with widefield calcium imaging. This allowed us to screen for cortical areas implicated in causal neural control of the visual object. Animals could decorrelate normally highly-correlated areas to perform the task, and used an explore-exploit search in neural activity space to discover successful strategies. Higher visual and parietal areas were more active during the task in expert animals. Single unit recordings targeted to these areas indicated that the sensory representation of an object was sensitive to an animal’s subjective sense of controlling it.
Adaptive brain-computer interfaces based on error-related potentials and reinforcement learning
Bernstein Conference 2024
Identifying cortical learning algorithms using Brain-Machine Interfaces
Bernstein Conference 2024
Learning static and motion cues to material by predicting moving surfaces
COSYNE 2022
Stabilizing brain-computer interfaces through nonlinear manifold alignment with dynamics
COSYNE 2022
Compact neural representations in co-adaptive Brain-Computer Interfaces
COSYNE 2023
Thoughtful faces: Using facial features to infer naturalistic cognitive processing across species
COSYNE 2023
Activity exploration influences learning speeds in models of brain-computer interfaces
COSYNE 2025
Towards generalizable, real-time decoders for brain-computer interfaces
COSYNE 2025
Carbon-based neural interfaces to probe retinal and cortical circuits with functional ultrasound imaging in vivo
FENS Forum 2024
Cortical layer-specific repetition suppression to faces in the fusiform face area
FENS Forum 2024
An event-based data compressive telemetry for high-bandwidth intracortical brain-computer interfaces
FENS Forum 2024
The Janus faces of nanoparticles at the neurovascular unit: A double-edged sword in neurodegeneration
FENS Forum 2024
Neonatal white matter microstructure predicts attention disengagement from fearful faces at 8 months
FENS Forum 2024
The role of the mean diffusivity of the amygdala in the perception of emotional faces in 8-month-old infants
FENS Forum 2024
Single-unit responses to dynamic salient negative faces in the human medial temporal lobe
FENS Forum 2024
Sleepless nights, vanishing faces: The effect of sleep deprivation on long-term social recognition memory in mice
FENS Forum 2024