Face
sensorimotor control, movement, touch, EEG
Traditionally, touch is associated with exteroception and is rarely considered a relevant sensory cue for controlling movements in space, unlike vision. We developed a technique to isolate and measure tactile involvement in controlling sliding finger movements over a surface. Young adults traced a 2D shape with their index finger under direct or mirror-reversed visual feedback to create a conflict between visual and somatosensory inputs. In this context, increased reliance on somatosensory input compromises movement accuracy. Based on the hypothesis that tactile cues contribute to guiding hand movements when in contact with a surface, we predicted poorer performance when the participants traced with their bare finger compared to when their tactile sensation was dampened by a smooth, rigid finger splint. The results supported this prediction. EEG source analyses revealed smaller current in the source-localized somatosensory cortex during sensory conflict when the finger directly touched the surface. This finding supports the hypothesis that, in response to mirror-reversed visual feedback, the central nervous system selectively gated task-irrelevant somatosensory inputs, thereby mitigating, though not entirely resolving, the visuo-somatosensory conflict. Together, our results emphasize touch’s involvement in movement control over a surface, challenging the notion that vision predominantly governs goal-directed hand or finger movements.
FLUXSynID: High-Resolution Synthetic Face Generation for Document and Live Capture Images
Synthetic face datasets are increasingly used to overcome the limitations of real-world biometric data, including privacy concerns, demographic imbalance, and high collection costs. However, many existing methods lack fine-grained control over identity attributes and fail to produce paired, identity-consistent images under structured capture conditions. In this talk, I will present FLUXSynID, a framework for generating high-resolution synthetic face datasets with user-defined identity attribute distributions and paired document-style and trusted live capture images. The dataset generated using FLUXSynID shows improved alignment with real-world identity distributions and greater diversity compared to prior work. I will also discuss how FLUXSynID’s dataset and generation tools can support research in face recognition and morphing attack detection (MAD), enhancing model robustness in both academic and practical applications.
OpenSPM: A Modular Framework for Scanning Probe Microscopy
OpenSPM aims to democratize innovation in the field of scanning probe microscopy (SPM), which is currently dominated by a few proprietary, closed systems that limit user-driven development. Our platform includes a high-speed OpenAFM head and base optimized for small cantilevers, an OpenAFM controller, a high-voltage amplifier, and interfaces compatible with several commercial AFM systems such as the Bruker Multimode, Nanosurf DriveAFM, Witec Alpha SNOM, Zeiss FIB-SEM XB550, and Nenovision Litescope. We have created a fully documented and community-driven OpenSPM platform, with training resources and sourcing information, which has already enabled the construction of more than 15 systems outside our lab. The controller is integrated with open-source tools like Gwyddion, HDF5, and Pycroscopy. We have also engaged external companies, two of which are integrating our controller into their products or interfaces. We see growing interest in applying parts of the OpenSPM platform to related techniques such as correlated microscopy, nanoindentation, and scanning electron/confocal microscopy. To support this, we are developing more generic and modular software, alongside a structured development workflow. A key feature of the OpenSPM system is its Python-based API, which makes the platform fully scriptable and ideal for AI and machine learning applications. This enables, for instance, automatic control and optimization of PID parameters, setpoints, and experiment workflows. With a growing contributor base and industry involvement, OpenSPM is well positioned to become a global, open platform for next-generation SPM innovation.
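To make the scriptability claim concrete, here is a minimal sketch of what an automated PID-tuning loop could look like through a Python API. The abstract does not document the actual interface, so the module name, classes, and methods below are all hypothetical, invented purely for illustration.

```python
# Hypothetical sketch only: the real OpenSPM Python API is not specified in
# this abstract, so every name below (module, class, method) is invented.
from openspm import Controller  # hypothetical module and class

scope = Controller.connect("192.168.0.10")        # hypothetical: attach to a controller
scope.feedback.set_pid(p=0.002, i=150.0, d=0.0)   # hypothetical: tune the z-feedback loop
scope.feedback.setpoint = 0.8                     # hypothetical: deflection setpoint (V)

# A toy automated workflow of the kind the abstract alludes to: scan, inspect
# an error signal, and re-tune the gains, as an ML optimizer might do in a loop.
image = scope.scan(size_um=1.0, pixels=256)       # hypothetical scan call
if image.trace_retrace_error() > 0.05:            # hypothetical quality metric
    scope.feedback.set_pid(p=0.004, i=200.0, d=0.0)
```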
Development and application of gaze control models for active perception
Gaze shifts in humans serve to direct the high-resolution vision provided by the fovea towards areas of the environment. Gaze can be considered a proxy for attention or an indicator of the relative importance of different parts of the environment. In this talk, we discuss the development of generative models of human gaze in response to visual input. We discuss how such models can be learned, both using supervised learning and using implicit feedback as an agent interacts with the environment, the latter being more plausible in biological agents. We also discuss two ways such models can be used. First, they can be used to improve the performance of artificial autonomous systems, in applications such as autonomous navigation. Second, because these models are contingent on the human’s task, goals, and/or state in the context of the environment, observations of gaze can be used to infer information about user intent. This information can be used to improve human-machine and human-robot interaction by making interfaces more anticipative. We discuss example applications in gaze-typing, robotic tele-operation, and human-robot interaction.
An Ecological and Objective Neural Marker of Implicit Unfamiliar Identity Recognition
We developed a novel paradigm measuring implicit identity recognition using Fast Periodic Visual Stimulation (FPVS) with EEG among 16 students and 12 police officers with normal face processing abilities. Participants' neural responses to a 1-Hz tagged oddball identity embedded within a 6-Hz image stream revealed implicit recognition with high-quality mugshots but not CCTV-like images, suggesting a minimum image-resolution requirement for implicit recognition. Our findings extend previous research by demonstrating that even unfamiliar identities can elicit robust neural recognition signatures through brief, repeated passive exposure. This approach offers potential for objective validation of face processing abilities in forensic applications, including assessment of facial examiners, Super-Recognisers, and eyewitnesses, potentially overcoming limitations of traditional behavioral assessment methods.
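As a rough illustration of how such frequency-tagged responses are quantified, the sketch below simulates a response at the 6-Hz base rate plus a weaker 1-Hz oddball response buried in noise, and reads out the signal-to-noise ratio (SNR) at each tagged frequency from the amplitude spectrum. All amplitudes and parameters are made up for illustration, not taken from the study.

```python
import numpy as np

fs, dur = 250.0, 120.0                      # sampling rate (Hz), recording length (s)
t = np.arange(0, dur, 1 / fs)

# Toy EEG: 6-Hz base response plus a weaker 1-Hz oddball response in noise.
rng = np.random.default_rng(0)
eeg = (1.0 * np.sin(2 * np.pi * 6 * t)
       + 0.3 * np.sin(2 * np.pi * 1 * t)
       + rng.standard_normal(t.size))

spectrum = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

# SNR at bin k relative to 20 neighbouring bins, excluding immediate neighbours
# (a common FPVS measure).
def snr(k, skip=1, flank=10):
    neigh = np.r_[spectrum[k - flank - skip:k - skip],
                  spectrum[k + skip + 1:k + flank + skip + 1]]
    return spectrum[k] / neigh.mean()

for f in (1.0, 6.0):
    k = int(np.argmin(np.abs(freqs - f)))
    print(f"{f} Hz: SNR = {snr(k):.1f}")
```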
Expanding mechanisms and therapeutic targets for neurodegenerative disease
A hallmark pathological feature of the neurodegenerative diseases amyotrophic lateral sclerosis (ALS) and frontotemporal dementia (FTD) is the depletion of RNA-binding protein TDP-43 from the nucleus of neurons in the brain and spinal cord. A major function of TDP-43 is as a repressor of cryptic exon inclusion during RNA splicing. By re-analyzing RNA-sequencing datasets from human FTD/ALS brains, we discovered dozens of novel cryptic splicing events in important neuronal genes. Single nucleotide polymorphisms in UNC13A are among the strongest hits associated with FTD and ALS in human genome-wide association studies, but how those variants increase risk for disease is unknown. We discovered that TDP-43 represses a cryptic exon-splicing event in UNC13A. Loss of TDP-43 from the nucleus in human brain, neuronal cell lines and motor neurons derived from induced pluripotent stem cells resulted in the inclusion of a cryptic exon in UNC13A mRNA and reduced UNC13A protein expression. The top variants associated with FTD or ALS risk in humans are located in the intron harboring the cryptic exon, and we show that they increase UNC13A cryptic exon splicing in the face of TDP-43 dysfunction. Together, our data provide a direct functional link between one of the strongest genetic risk factors for FTD and ALS (UNC13A genetic variants), and loss of TDP-43 function. Recent analyses have revealed even further changes in TDP-43 target genes, including widespread changes in alternative polyadenylation, impacting expression of disease-relevant genes (e.g., ELP1, NEFL, and TMEM106B) and providing evidence that alternative polyadenylation is a new facet of TDP-43 pathology.
Deepfake emotional expressions trigger the uncanny valley brain response, even when they are not recognised as fake
Facial expressions are inherently dynamic, and our visual system is sensitive to subtle changes in their temporal sequence. However, researchers often use dynamic morphs of photographs—simplified, linear representations of motion—to study the neural correlates of dynamic face perception. To explore the brain's sensitivity to natural facial motion, we constructed a novel dynamic face database using generative neural networks, trained on a verified set of video-recorded emotional expressions. The resulting deepfakes, consciously indistinguishable from videos, enabled us to separate biological motion from photorealistic form. Results showed that conventional dynamic morphs elicit distinct responses in the brain compared to videos and photos, suggesting they violate expectations (N400) and have reduced social salience (late positive potential). This suggests that dynamic morphs misrepresent facial dynamism, resulting in misleading insights about the neural and behavioural correlates of face perception. Deepfakes and videos elicited largely similar neural responses, suggesting they could be used as a proxy for real faces in vision research, where video recordings cannot be experimentally manipulated. And yet, despite being consciously undetectable as fake, deepfakes elicited an expectation-violation response in the brain. This points to a neural sensitivity to naturalistic facial motion, beyond conscious awareness. Despite some differences in neural responses, the realism and manipulability of deepfakes make them a valuable asset for research where videos are unfeasible. Using these stimuli, we proposed a novel marker for the conscious perception of naturalistic facial motion – frontal delta activity – which was elevated for videos and deepfakes, but not for photos or dynamic morphs.
Contentopic mapping and object dimensionality - a novel understanding of the organization of object knowledge
Our ability to recognize an object amongst many others is one of the most important features of the human mind. However, object recognition requires tremendous computational effort, as we need to solve a complex and recursive recognition problem with ease and proficiency. This challenging feat depends on the implementation of an effective organization of knowledge in the brain. Here I put forth a novel understanding of how object knowledge is organized in the brain, by proposing that the organization of object knowledge follows key object-related dimensions, analogously to how sensory information is organized in the brain. Moreover, I will also put forth that this knowledge is topographically laid out on the cortical surface according to these object-related dimensions that code for different types of representational content – I call this contentopic mapping. I will show a combination of fMRI and behavioral data to support these hypotheses and present a principled way to explore the multidimensionality of object processing.
SWEBAGS conference 2024: Shared network mechanisms of dopamine and deep brain stimulation for the treatment of Parkinson’s disease: From modulation of oscillatory cortex – basal ganglia communication to intelligent clinical brain computer interfaces
Screen Savers: Protecting adolescent mental health in a digital world
In our rapidly evolving digital world, there is increasing concern about the impact of digital technologies and social media on the mental health of young people. Policymakers and the public are nervous. Psychologists are facing mounting pressures to deliver evidence that can inform policies and practices to safeguard both young people and society at large. However, research progress is slow while technological change is accelerating. My talk will reflect on this, both as a question of psychological science and metascience. Digital companies have designed highly popular environments that differ in important ways from traditional offline spaces. By revisiting the foundations of psychology (e.g. development and cognition) and considering digital changes' impact on theories and findings, we gain deeper insights into questions such as the following. (1) How do digital environments exacerbate developmental vulnerabilities that predispose young people to mental health conditions? (2) How do digital designs interact with cognitive and learning processes, formalised through computational approaches such as reinforcement learning or Bayesian modelling? However, we also need to face deeper questions about what it means to do science about new technologies and the challenge of keeping pace with technological advancements. Therefore, I discuss the concept of ‘fast science’, where, during crises, scientists might lower their standards of evidence to come to conclusions more quickly. Might psychologists want to take this approach in the face of technological change and looming concerns? The talk concludes with a discussion of such strategies for 21st-century psychology research in the era of digitalization.
Imagining and seeing: two faces of prosopagnosia
Face matching and decision making: The influence of framing, task presentation and criterion placement
Many situations rely on the accurate identification of people with whom we are unfamiliar. For example, airport security and police investigations require the identification of individuals from photo-ID. Yet the identification of unfamiliar faces is error-prone, even for practitioners who routinely perform this task. Indeed, even training protocols often yield no discernible improvement. The challenge of unfamiliar face identification is often thought of as a perceptual problem; however, this assumption ignores the potential role of decision-making and its contributing factors (e.g., criterion placement). In this talk, I am going to present a series of experiments that investigate the role of decision-making in face identification.
A modular, free and open source graphical interface for visualizing and processing electrophysiological signals in real-time
Portable biosensors are becoming more popular every year. In this context, I propose NeuriGUI, a modular and cross-platform graphical interface that connects to those biosensors for real-time processing, exploration, and storage of electrophysiological signals. NeuriGUI acts as a common entry point in brain-computer interfaces, making it possible to plug in downstream third-party applications for real-time analysis of the incoming signal. NeuriGUI is 100% free and open source.
How to tell if someone is hiding something from you? An overview of the scientific basis of deception and concealed information detection
In my talk I will give an overview of recent research on deception and concealed information detection. I will start with a short introduction on the problems and shortcomings of traditional deception detection tools and why those still prevail in many recent approaches (e.g., in AI-based deception detection). I want to argue for the importance of more fundamental deception research and give some examples of insights gained from it. In the second part of the talk, I will introduce the Concealed Information Test (CIT), a promising paradigm for research and applied contexts to investigate whether someone actually recognizes information that they do not want to reveal. The CIT is based on solid scientific theory and produces large effect sizes in laboratory studies with a number of different measures (e.g., behavioral, psychophysiological, and neural measures). I will highlight some challenges a forensic application of the CIT still faces and how scientific research could assist in overcoming those.
Combined electrophysiological and optical recording of multi-scale neural circuit dynamics
This webinar will showcase new approaches for electrophysiological recordings using our silicon neural probes and surface arrays combined with diverse optical methods such as wide-field or 2-photon imaging, fiber photometry, and optogenetic perturbations in awake, behaving mice. Multi-modal recordings of single units and local field potentials across cortex, hippocampus, and thalamus, alongside calcium activity recorded via GCaMP6F in cortical neurons of triple-transgenic animals or in hippocampal astrocytes via viral transduction, are brought to bear to reveal hitherto inaccessible and under-appreciated aspects of coordinated dynamics in the brain.
Enabling witnesses to actively explore faces and reinstate study-test pose during a lineup increases discrimination accuracy
In 2014, the US National Research Council called for the development of new lineup technologies to increase eyewitness identification accuracy (National Research Council, 2014). In a police lineup, a suspect is presented alongside multiple individuals known to be innocent who resemble the suspect in physical appearance, known as fillers. A correct identification decision by an eyewitness can lead to a guilty suspect being convicted or an innocent suspect being exonerated from suspicion. An incorrect decision can result in the perpetrator remaining at large, or even a wrongful conviction of a mistakenly identified person. Incorrect decisions carry considerable human and financial costs, so it is essential to develop and enact lineup procedures that maximise discrimination accuracy: the witness’ ability to distinguish guilty from innocent suspects. This talk focuses on new technology and innovation in the field of eyewitness identification. We will focus on the interactive lineup, a procedure that we developed based on research and theory from the basic science literature on face perception and recognition. The interactive lineup enables witnesses to actively explore and dynamically view the lineup members, and has been shown to maximise discrimination accuracy. The talk will conclude by reflecting on emerging technological frontiers and research opportunities.
Stability of visual processing in passive and active vision
The visual system faces a dual challenge. On the one hand, features of the natural visual environment should be stably processed - irrespective of ongoing wiring changes, representational drift, and behavior. On the other hand, eye, head, and body motion require a robust integration of pose and gaze shifts in visual computations for a stable perception of the world. We address these dimensions of stable visual processing by studying the circuit mechanism of long-term representational stability, focusing on the role of plasticity, network structure, experience, and behavioral state while recording large-scale neuronal activity with miniature two-photon microscopy.
Modeling idiosyncratic evaluation of faces
Conversations with Caves? Understanding the role of visual psychological phenomena in Upper Palaeolithic cave art making
How central were psychological features deriving from our visual systems to the early evolution of human visual culture? Art making emerged deep in our evolutionary history, with the earliest art appearing over 100,000 years ago as geometric patterns etched on fragments of ochre and shell, and figurative representations of prey animals flourishing in the Upper Palaeolithic (c. 40,000 – 15,000 years ago). The latter reflects a complex visual process: the ability to represent something that exists in the real world as a flat, two-dimensional image. In this presentation, I argue that pareidolia – the psychological phenomenon of seeing meaningful forms in random patterns, such as perceiving faces in clouds – was a fundamental process that facilitated the emergence of figurative representation. The influence of pareidolia has often been anecdotally observed in Upper Palaeolithic art, particularly cave art where the topographic features of the cave wall were incorporated into animal depictions. Using novel virtual reality (VR) light simulations, I tested three hypotheses relating to pareidolia in Upper Palaeolithic cave art in the caves of Las Monedas and La Pasiega (Cantabria, Spain). To evaluate this further, I also developed an interdisciplinary VR eye-tracking experiment, in which participants were immersed in virtual caves based on the cave of El Castillo (Cantabria, Spain). Together, these case studies suggest that pareidolia was an intrinsic part of artist-cave interactions (‘conversations’) that influenced the form and placement of figurative depictions in the cave. This has broader implications for conceiving of the role of visual psychological phenomena in the emergence and development of figurative art in the Palaeolithic.
Deepfake Detection in Super-Recognizers and Police Officers
Using videos from the Deepfake Detection Challenge (cf. Groh et al., 2021), we investigated human deepfake detection performance (DDP) in two unique observer groups: Super-Recognizers (SRs) and "normal" officers from within the 18K members of the Berlin Police. SRs were identified either via previously proposed lab-based procedures (Ramon, 2021) or the only existing tool for SR identification involving increasingly challenging, authentic forensic material: beSure® (Berlin Test For Super-Recognizer Identification; Ramon & Rjosk, 2022). Across two experiments we examined DDP in participants who judged single videos and pairs of videos in a 2AFC decision setting. We explored speed-accuracy trade-offs in DDP and compared DDP between lab-identified SRs, non-SRs, and police officers whose face identity processing skills had been extensively tested using challenging, authentic forensic material. In this talk I will discuss our surprising findings and argue that further work is needed to determine whether face identity processing is related to DDP or not.
Recognizing Faces: Insights from Group and Individual Differences
Towards Human Systems Biology of Sleep/Wake Cycles: Phosphorylation Hypothesis of Sleep
The field of human biology faces three major technological challenges. Firstly, the causation problem is difficult to address in humans compared to model animals. Secondly, the complexity problem arises from the lack of a comprehensive cell atlas for the human body, despite the body being composed entirely of cells. Lastly, the heterogeneity problem arises from significant variations in both genetic and environmental factors among individuals. To tackle these challenges, we have developed innovative approaches. These include 1) mammalian next-generation genetics, such as Triple CRISPR for knockout (KO) mice and ES mice for knock-in (KI) mice, which enable causation studies without traditional breeding; 2) whole-body/brain cell profiling techniques, such as CUBIC, to unravel the complexity of cellular composition; and 3) accurate and user-friendly technologies for measuring sleep and awake states, exemplified by ACCEL, to facilitate the monitoring of fundamental brain states in real-world settings and thus address heterogeneity in humans.
Sensory Consequences of Visual Actions
We use rapid eye, head, and body movements to extract information from a new part of the visual scene upon each new gaze fixation. But the consequences of such visual actions go beyond their intended sensory outcomes. On the one hand, intrinsic consequences accompany movement preparation as covert internal processes (e.g., predictive changes in the deployment of visual attention). On the other hand, visual actions have incidental consequences, side effects of moving the sensory surface to its intended goal (e.g., global motion of the retinal image during saccades). In this talk, I will present studies in which we investigated intrinsic and incidental sensory consequences of visual actions and their sensorimotor functions. Our results provide insights into continuously interacting top-down and bottom-up sensory processes, and they underscore the necessity of studying perception in connection with the motor behavior that shapes its fundamental processes.
Perceptions of responsiveness and rejection in romantic relationships: What are the implications for individuals and relationship functioning?
From birth, human beings need to be embedded in social ties to function best, because other individuals can provide us with a sense of belonging, which is a fundamental human need. One of the closest bonds we build throughout our lives is with our intimate partners. When the relationship involves intimacy, and when both partners accept and support each other’s needs and goals (through perceived responsiveness), individuals experience an increase in relationship satisfaction as well as physical and mental well-being. However, feeling rejected by a partner may impair the feeling of connectedness and belonging, and affect emotional and behavioural responses. When we perceive our partner to be responsive to our needs or desires, we in turn naturally strive to respond positively and adequately to our partner’s needs and desires. This implies that individuals are interdependent, and changes in one partner prompt changes in the other. Evidence suggests that partners regulate themselves and co-regulate each other in their emotional, psychological, and physiological responses. However, such processes may threaten the relationship when partners face stressful situations or interactions, like the transition to parenthood or rejection. Therefore, in this presentation, I will provide evidence for the role of perceptions of being accepted or rejected by a significant other on individual and relationship functioning, while considering the contextual settings. The three studies presented here explore romantic relationships and how perceptions of rejection and responsiveness from the partner impact individuals, their physiological and emotional responses, as well as their relationship dynamics.
Investigating face processing impairments in Developmental Prosopagnosia: Insights from behavioural tasks and lived experience
The defining characteristic of developmental prosopagnosia is severe difficulty recognising familiar faces in everyday life. Numerous studies have reported that the condition is highly heterogeneous in terms of both presentation and severity, with many mixed findings in the literature. I will present behavioural data from a large face processing test battery (n = 24 DPs) as well as some early findings from a larger survey of the lived experience of individuals with DP, and discuss how insights from individuals' real-world experience can help to understand and interpret lab-based data.
State-of-the-Art Spike Sorting with SpikeInterface
This webinar will focus on spike sorting analysis with SpikeInterface, an open-source framework for the analysis of extracellular electrophysiology data. After a brief introduction of the project (~30 mins) highlighting the basics of the SpikeInterface software and advanced features (e.g., data compression, quality metrics, drift correction, cloud visualization), we will have an extensive hands-on tutorial (~90 mins) showing how to use SpikeInterface in a real-world scenario. After attending the webinar, you will: (1) have a global overview of the different steps involved in a processing pipeline; (2) know how to write a complete analysis pipeline with SpikeInterface.
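For a flavour of what the hands-on portion covers, a minimal end-to-end pipeline in SpikeInterface might look like the sketch below. It assumes a recent release (function names and arguments have shifted across versions, so treat this as indicative rather than canonical) and uses a simulated recording so it runs without data.

```python
import spikeinterface.full as si

# Simulated recording + ground-truth sorting so the example is self-contained.
recording, _ = si.generate_ground_truth_recording(num_channels=16, durations=[60.0])

# Preprocessing: bandpass filter, then common median reference.
recording = si.bandpass_filter(recording, freq_min=300, freq_max=6000)
recording = si.common_reference(recording, operator="median")

# Run any installed sorter by name (e.g., "kilosort4", "mountainsort5", ...).
sorting = si.run_sorter("tridesclous", recording)

# Postprocessing: extract waveforms/templates, then compute quality metrics.
analyzer = si.create_sorting_analyzer(sorting, recording)
analyzer.compute(["random_spikes", "waveforms", "templates", "noise_levels"])
metrics = si.compute_quality_metrics(analyzer)
print(metrics.head())
```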
Effect of nutrient sensing by microglia on mouse behavior
Microglia are the brain's macrophages, carrying out multifaceted functions to maintain brain homeostasis across the lifespan. To achieve this, microglia are able to sense a plethora of signals in their close environment. In the lab, we investigate the effect of nutrients on microglia function for several reasons: 1) Microglia express all the cellular machinery required to sense nutrients; 2) Eating habits have changed considerably over the last century, towards diets rich in fats and sugars; 3) This so-called "Western diet" is accompanied by an increase in the occurrence of neuropathologies in which microglia are known to play a role. In my talk, I will present data showing how variations in nutrient intake alter microglia function, including exacerbation of synaptic pruning, with profound consequences for neuronal activity and behavior. I will also show unpublished data on the mechanisms underlying the effects of nutrients on microglia, notably through the regulation of their metabolic activity.
Vocal emotion perception at millisecond speed
The human voice is possibly the most important sound category in the social landscape. Compared to other non-verbal emotion signals, the voice is particularly effective in communicating emotions: it can carry information over large distances and independent of sight. However, the study of vocal emotion expression and perception is surprisingly far less developed than the study of emotion in faces. As a result, its neural and functional correlates remain elusive. As the voice represents a dynamically changing auditory stimulus, temporally sensitive techniques such as EEG are particularly informative. In this talk, the dynamic neurocognitive operations that take place when we listen to vocal emotions will be specified, with a focus on the effects of stimulus type, task demands, and speaker and listener characteristics (e.g., age). These studies suggest that emotional voice perception is not only a matter of how one speaks but also of who speaks and who listens. Implications of these findings for the understanding of psychiatric disorders such as schizophrenia will be discussed.
The contribution of mental face representations to individual face processing abilities
People largely differ with respect to how well they can learn, memorize, and perceive faces. In this talk, I address two potential sources of variation. One factor might be people’s ability to adapt their perception to the kind of faces they are currently exposed to. For instance, some studies report that those who show larger adaptation effects are also better at performing face learning and memory tasks. Another factor might be people’s sensitivity to perceive fine differences between similar-looking faces. In fact, one study shows that the brain of good performers in a face memory task shows larger neural differences between similar-looking faces. Capitalizing on this body of evidence, I present a behavioural study where I explore the relationship between people’s perceptual adaptability and sensitivity and their individual face processing performance.
Workplace Experiences of LGBTQIA+ Academics in Psychology, Psychiatry, and Neuroscience
In this webinar, Dr David Pagliaccio discusses the findings of his recent pre-print on workplace bias and discrimination faced by LGBTQIA+ brain scientists in the US.
Vision Unveiled: Understanding Face Perception in Children Treated for Congenital Blindness
Despite her still poor visual acuity and minimal visual experience, a 2- to 3-month-old baby will reliably respond to facial expressions, smiling back at her caretaker or older sibling. But what if that same baby had been deprived of her early visual experience? Will she be able to appropriately respond to seemingly mundane interactions, such as a peer’s facial expression, if she begins seeing at the age of 10? My work is part of Project Prakash, a dual humanitarian/scientific mission to identify and treat curably blind children in India and then study how their brain learns to make sense of the visual world when their visual journey begins late in life. In my talk, I will give a brief overview of Project Prakash, and present findings from one of my primary lines of research: plasticity of face perception with late sight onset. Specifically, I will discuss a mixed methods effort to probe and explain the differential windows of plasticity that we find across different aspects of distributed face recognition, from distinguishing a face from a nonface early in the developmental trajectory, to recognizing facial expressions, identifying individuals, and even identifying one’s own caretaker. I will draw connections between our empirical findings and our recent theoretical work hypothesizing that children with late sight onset may suffer persistent face identification difficulties because of the unusual acuity progression they experience relative to typically developing infants. Finally, time permitting, I will point to potential implications of our findings in supporting newly-sighted children as they transition back into society and school, given that their needs and possibilities significantly change upon the introduction of vision into their lives.
Internal representation of musical rhythm: transformation from sound to periodic beat
When listening to music, humans readily perceive and move along with a periodic beat. Critically, perception of a periodic beat is commonly elicited by rhythmic stimuli with physical features arranged in a way that is not strictly periodic. Hence, beat perception must capitalize on mechanisms that transform stimulus features into a temporally recurrent format with emphasized beat periodicity. Here, I will present a line of work that aims to clarify the nature and neural basis of this transformation. In these studies, electrophysiological activity was recorded as participants listened to rhythms known to induce perception of a consistent beat across healthy Western adults. The results show that the human brain selectively emphasizes beat representation when it is not acoustically prominent in the stimulus, and this transformation (i) can be captured non-invasively using surface EEG in adult participants, (ii) is already in place in 5- to 6-month-old infants, and (iii) cannot be fully explained by subcortical auditory nonlinearities. Moreover, as revealed by human intracerebral recordings, a prominent beat representation emerges already in the primary auditory cortex. Finally, electrophysiological recordings from the auditory cortex of a rhesus monkey show a significant enhancement of beat periodicities in this area, similar to humans. Taken together, these findings indicate an early, general auditory cortical stage of processing by which rhythmic inputs are rendered more temporally recurrent than they are in reality. Already present in non-human primates and human infants, this "periodized" default format could then be shaped by higher-level associative sensory-motor areas and guide movement in individuals with strongly coupled auditory and motor systems. Together, this highlights the multiplicity of neural processes supporting coordinated musical behaviors widely observed across human cultures.
The experiments herein include: a motor timing task comparing the effects of movement vs non-movement with and without feedback (Exp. 1A & 1B), a transcranial magnetic stimulation (TMS) study on the role of the supplementary motor area (SMA) in transforming temporal information (Exp. 2), and a perceptual timing task investigating the effect of noisy movement on time perception in both visual and auditory modalities (Exp. 3A & 3B). Together, the results of these studies support the Bayesian cue combination framework, in that: movement improves the precision of time perception not only in perceptual timing tasks but also in motor timing tasks (Exp. 1A & 1B), stimulating the SMA appears to disrupt the transformation of temporal information (Exp. 2), and when movement becomes unreliable or noisy there is no longer an improvement in the precision of time perception (Exp. 3A & 3B). Although there is support for the proposed framework, more studies (i.e., fMRI, TMS, EEG, etc.) need to be conducted in order to better understand where and how this may be instantiated in the brain; however, this work provides a starting point for better understanding the intrinsic connection between time and movement.
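For readers unfamiliar with the Bayesian cue combination framework invoked above, its core prediction is that an observer combines independent temporal cues weighted by their reliability (inverse variance), yielding a combined estimate more precise than either cue alone; when one cue becomes noisy, its weight shrinks and the benefit disappears. A minimal sketch with made-up numbers:

```python
import numpy as np

def combine_cues(estimates, sigmas):
    """Inverse-variance (reliability-weighted) cue combination."""
    sigmas = np.asarray(sigmas, dtype=float)
    weights = 1.0 / sigmas**2
    weights /= weights.sum()
    combined = float(np.dot(weights, estimates))
    combined_sigma = float(np.sqrt(1.0 / (1.0 / sigmas**2).sum()))
    return combined, combined_sigma

# Illustrative values: a sensory estimate and a movement-based estimate of an
# 800-ms interval. The combined sigma is smaller than either cue's alone.
t_hat, sigma_hat = combine_cues([780.0, 820.0], [60.0, 40.0])
print(t_hat, sigma_hat)   # ~807.7 ms, ~33.3 ms
```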
Prosody in the voice, face, and hands changes which words you hear
Speech may be characterized as conveying both segmental information (i.e., about vowels and consonants) and suprasegmental information - cued through pitch, intensity, and duration - also known as the prosody of speech. In this contribution, I will argue that prosody shapes low-level speech perception, changing which speech sounds we hear. Perhaps the most notable example of how prosody guides word recognition is the phenomenon of lexical stress, whereby suprasegmental F0, intensity, and duration cues can distinguish otherwise segmentally identical words, such as "PLAto" vs. "plaTEAU" in Dutch. Work from our group showcases the vast variability in how different talkers produce stressed vs. unstressed syllables, while also unveiling the remarkable flexibility with which listeners can learn to handle this between-talker variability. It also emphasizes that lexical stress is a multimodal linguistic phenomenon, with the voice, lips, and even hands conveying stress in concert. In turn, human listeners actively weigh these multisensory cues to stress depending on the listening conditions at hand. Finally, lexical stress is presented as having a robust and lasting impact on low-level speech perception, even down to changing vowel perception. Thus, prosody - in all its multisensory forms - is a potent factor in speech perception, determining what speech sounds we hear.
Feedback control in the nervous system: from cells and circuits to behaviour
The nervous system is fundamentally a closed loop control device: the output of actions continually influences the internal state and subsequent actions. This is true at the single cell and even the molecular level, where “actions” take the form of signals that are fed back to achieve a variety of functions, including homeostasis, excitability and various kinds of multistability that allow switching and storage of memory. It is also true at the behavioural level, where an animal’s motor actions directly influence sensory input on short timescales, and higher level information about goals and intended actions are continually updated on the basis of current and past actions. Studying the brain in a closed loop setting requires a multidisciplinary approach, leveraging engineering and theory as well as advances in measuring and manipulating the nervous system. I will describe our recent attempts to achieve this fusion of approaches at multiple levels in the nervous system, from synaptic signalling to closed loop brain machine interfaces.
Euclidean coordinates are the wrong prior for primate vision
The mapping from the visual field to V1 can be approximated by a log-polar transform. In this domain, scale is a left-right shift, and rotation is an up-down shift. When fed into a standard shift-invariant convolutional network, this provides scale and rotation invariance. However, translation invariance is lost. In our model, this is compensated for by multiple fixations on an object. Due to the high concentration of cones in the fovea and the dropoff of resolution in the periphery, the central 10 degrees of visual angle take up about half of V1, with the remaining 170 degrees (or so) taking up the other half. This layout provides the basis for the central and peripheral pathways. Simulations with this model closely match human performance in scene classification, and competition between the pathways leads to the peripheral pathway being used for this task. Remarkably, in spite of the property of rotation invariance, this model can explain the face inversion effect. We suggest that the standard method of using image coordinates is the wrong prior for models of primate vision.
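A minimal sketch (my own illustration, not the authors' code) of the log-polar resampling described above: with columns indexing log-radius and rows indexing angle, scaling the input image by a factor s becomes a left-right shift of roughly log(s) columns, and rotating by an angle becomes an up-down (circular) shift of rows, which is why a shift-invariant convolutional network applied in this domain gains scale and rotation invariance while losing translation invariance.

```python
import numpy as np

def log_polar(img, n_angles=64, n_radii=64):
    """Resample a square grayscale image onto a log-polar grid.
    Rows index angle, columns index log-radius, so uniform scaling of the
    input becomes a left-right shift and rotation becomes an up-down shift."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = min(cx, cy)
    # Log-spaced radii from ~1 pixel out to the image border.
    radii = np.exp(np.linspace(np.log(1.0), np.log(r_max), n_radii))
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    # Polar grid -> Cartesian sample coordinates (nearest-neighbour lookup).
    ys = cy + radii[None, :] * np.sin(angles[:, None])
    xs = cx + radii[None, :] * np.cos(angles[:, None])
    yi = np.clip(np.round(ys).astype(int), 0, h - 1)
    xi = np.clip(np.round(xs).astype(int), 0, w - 1)
    return img[yi, xi]

# Demo: a rotated copy of an image yields a circularly row-shifted log-polar map.
rng = np.random.default_rng(0)
image = rng.random((128, 128))
lp = log_polar(image)
print(lp.shape)  # (64, 64): angle x log-radius
```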
A new science of emotion: How brain-mind-body processes form functional neurological disorder
One of the most common medical conditions you’ve (maybe) never heard of – functional neurological disorder – lies at the interface of neurology and psychiatry and offers a window into fundamental brain-mind-body processes. Across ancient and modern times, functional neurological disorder has had a long and tumultuous history, with an evolving debate and understanding of how biopsychosocial factors contribute to the manifestation of the disorder. A central issue in contemporary discussions has revolved around questioning the extent to which emotions play a mechanistic and aetiological role in functional neurological disorder. Critical in this context, however, is that this ongoing debate has largely omitted the question of what emotions are in the first place. This talk first brings together advances in the understanding of working principles of the brain fundamental to introducing a new understanding of what emotions are. Building on recent theoretical frameworks from affective neuroscience, the idea of how the predictive process of emotion construction can be an integral component of the pathophysiology of functional neurological disorder is discussed.
Face and voice perception as a tool for characterizing perceptual decisions and metacognitive abilities across the general population and psychosis spectrum
Humans constantly make perceptual decisions on human faces and voices. These regularly come with the challenge of receiving only uncertain sensory evidence, resulting from noisy input and noisy neural processes. Efficiently adapting one’s internal decision system including prior expectations and subsequent metacognitive assessments to these challenges is crucial in everyday life. However, the exact decision mechanisms and whether these represent modifiable states remain unknown in the general population and clinical patients with psychosis. Using data from a laboratory-based sample of healthy controls and patients with psychosis as well as a complementary, large online sample of healthy controls, I will demonstrate how a combination of perceptual face and voice recognition decision fidelity, metacognitive ratings, and Bayesian computational modelling may be used as indicators to differentiate between non-clinical and clinical states in the future.
A sense without sensors: how non-temporal stimulus features influence the perception and the neural representation of time
Any sensory experience of the world, from the touch of a caress to the smile on our friend’s face, is embedded in time and is often associated with the perception of its flow. The perception of time is therefore a peculiar sensory experience built without dedicated sensors. How the perception of time and the content of a sensory experience interact to give rise to this unique percept is unclear. A few empirical findings demonstrate this interaction: for example, the speed of a moving object or the number of items displayed on a computer screen can bias the perceived duration of those objects. However, to what extent the coding of time is embedded within the coding of the stimulus itself, is sustained by the activity of the same or distinct neural populations, and is subserved by similar or distinct neural mechanisms is far from clear. Addressing these puzzles represents a way to gain insight into the mechanism(s) through which the brain represents the passage of time. In my talk I will present behavioral and neuroimaging studies showing how concurrent changes in visual stimulus duration, speed, visual contrast, and numerosity shape and modulate the brain’s and pupil’s responses and, in the case of numerosity and time, influence the topographic organization of these features along the cortical visual hierarchy.
Understanding and Mitigating Bias in Human & Machine Face Recognition
With the increasing use of automated face recognition (AFR) technologies, it is important to consider whether these systems not only perform accurately, but also equitably, or without “bias”. Despite rising public, media, and scientific attention to this issue, the sources of bias in AFR are not fully understood. This talk will explore how human cognitive biases may impact our assessments of performance differentials in AFR systems and our subsequent use of those systems to make decisions. We’ll also show how, if we adjust our definition of what a “biased” AFR algorithm looks like, we may be able to create algorithms that optimize the performance of a human+algorithm team, not simply the algorithm itself.
Learning to see stuff
Humans are very good at visually recognizing materials and inferring their properties. Without touching surfaces, we can usually tell what they would feel like, and we enjoy vivid visual intuitions about how they typically behave. This is impressive because the retinal image that the visual system receives as input is the result of complex interactions between many physical processes. Somehow the brain has to disentangle these different factors. I will present some recent work in which we show that an unsupervised neural network trained on images of surfaces spontaneously learns to disentangle reflectance, lighting and shape. However, the disentanglement is not perfect, and we find that as a result the network not only predicts the broad successes of human gloss perception, but also the specific pattern of errors that humans exhibit on an image-by-image basis. I will argue this has important implications for thinking about appearance and vision more broadly.
Automated generation of face stimuli: Alignment, features and face spaces
I describe a well-tested Python module that does automated alignment and warping of face images, with some advantages over existing solutions. An additional tool I’ve developed does automated extraction of facial features, which can be used in a number of interesting ways. I illustrate the value of wavelet-based features with a brief description of two recent studies: perceptual in-painting, and the robustness of the whole-part advantage across a large stimulus set. Finally, I discuss the suitability of various deep learning models for generating stimuli to study perceptual face spaces. I believe those interested in the forensic aspects of face perception may find this talk useful.
When to stop immune checkpoint inhibitor for malignant melanoma? Challenges in emulating target trials
Observational data have become a popular source of evidence for causal effects when no randomized controlled trial exists, or to supplement the information such trials provide. In practice, a wide range of designs and analytical choices exist, and one recent approach relies on the target trial emulation framework. This framework is particularly well suited to mimic what could be obtained in a specific randomized controlled trial, while avoiding time-related selection biases. In this abstract, we present how this framework could be used to emulate trials in malignant melanoma, and the challenges faced when planning such a study using longitudinal observational data from a cohort study. More specifically, two questions are envisaged: the duration of immune checkpoint inhibitor treatment, and trials comparing treatment strategies for BRAF V600-mutant patients (targeted therapy as 1st line followed by immunotherapy as 2nd line, vs. immunotherapy as 1st line followed by targeted therapy as 2nd line). Using data from 1027 participants in the MELBASE cohort, we detail the results for the emulation of a trial in which immune checkpoint inhibitor treatment would be stopped at 6 months vs. continued, in patients in response or with stable disease.
Implications of Vector-space models of Relational Concepts
Vector-space models are used frequently to compare similarity and dimensionality among entity concepts. What happens when we apply these models to relational concepts? What is the evidence that such models do apply to relational concepts? If we use such a model, then one implication is that maximizing surface feature variation should improve relational concept learning. For example, in STEM instruction, the effectiveness of teaching by analogy is often limited by students’ focus on superficial features of the source and target exemplars. However, in contrast to the prediction of the vector-space computational model, the strategy of progressive alignment (moving from perceptually similar to different targets) has been suggested to address this issue (Gentner & Hoyos, 2017), and human behavioral evidence has shown benefits from progressive alignment. Here I will present some preliminary data that support the computational approach. Participants were explicitly instructed to match stimuli based on relations while the perceptual similarity of the stimuli varied parametrically. We found that lower perceptual similarity reduced accurate relational matching. This finding demonstrates that perceptual similarity may interfere with relational judgements, but also hints at why progressive alignment may be effective. These are preliminary, exploratory data, and I hope to receive feedback on the framework and to start a discussion with the group on the utility of vector-space models for relational concepts in general.
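As a concrete toy example of the vector-space treatment of relational concepts at issue here: a relation can be represented as a difference between entity vectors, and relational similarity as the cosine between such difference vectors. The vectors below are made up for illustration; real work would use trained embeddings.

```python
import numpy as np

# Toy entity vectors (invented for illustration, not trained embeddings).
vec = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.1, 0.9, 0.2]),
    "woman": np.array([0.1, 0.2, 0.9]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# A relation is a difference vector; relational similarity is the cosine
# between difference vectors ("king is to man as queen is to woman").
r1 = vec["king"] - vec["man"]
r2 = vec["queen"] - vec["woman"]
print(cosine(r1, r2))  # ~1.0 for this toy analogy
```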
What's wrong with the prosopagnosia literature? A new approach to diagnosing and researching the condition
Developmental prosopagnosia is characterised by severe, lifelong difficulties when recognising facial identity. Most researchers require that prosopagnosia cases exhibit ultra-conservative levels of impairment on the Cambridge Face Memory Test before including them in their experiments. This results in the majority of people who believe they have the condition being excluded from the scientific literature. In this talk I outline the many issues that will afflict prosopagnosia research if this continues, and show that these excluded cases do exhibit impairments on all commonly used diagnostic tests when a group-based method of assessment is utilised. I propose a paradigm shift away from cognitive task-based approaches to diagnosing prosopagnosia, and outline a new way that researchers can investigate this condition.
Modelling metaphor comprehension as a form of analogizing
What do people do when they comprehend language in discourse? According to many psychologists, they build and maintain cognitive representations of utterances in four complementary mental models for discourse that interact with each other: the surface text, the text base, the situation model, and the context model. When people encounter metaphors in these utterances, they need to incorporate them into each of these mental representations of the discourse. Since influential metaphor theories define metaphor as a form of (figurative) analogy, involving cross-domain mapping of greater or lesser extent, the general expectation has been that metaphor comprehension is also based on analogizing. This expectation, however, has been only partly borne out by the data. There is no one-to-one relationship between metaphor as (conceptual) structure (analogy) and metaphor as (psychological) process (analogizing). According to Deliberate Metaphor Theory (DMT), only some metaphors are handled by analogy. Instead, most metaphors are presumably handled by lexical disambiguation. This is a hypothesis that brings together most metaphor research in a provocatively new way: it means that most metaphors are not processed metaphorically, which produces a paradox of metaphor. In this talk I will sketch out how this paradox arises and how it can be resolved by a new version of DMT, which I have described in my forthcoming book Slowing metaphor down: Updating Deliberate Metaphor Theory (currently under review). In this theory, the distinction between, but also the relation between, analogy in metaphorical structure and analogy in metaphorical process is of central importance.
The Effects of Negative Emotions on Mental Representation of Faces
Face detection is an initial step of many social interactions, involving a comparison between a visual input and a mental representation of faces built from previous experience. Whilst emotional state has been found to affect the way humans attend to faces, little research has explored the effects of emotions on the mental representation of faces. Here, we examined how state anxiety and state depression modulate the geometric properties of the mental representations underlying face detection, and compared the emotional expression of those representations. To this end, we used an adaptation of the reverse correlation technique inspired by Gosselin and Schyns’ (2003) ‘Superstitious Approach’ to construct visual representations of observers’ mental representations of faces and to relate these to their mental states. In two sessions, on separate days, participants were presented with ‘colourful’ noise stimuli and asked to detect faces, which they were told were present. Based on the noise fragments identified as faces, we reconstructed the pictorial mental representation utilised by each participant in each session. We found a significant correlation between the size of the mental representation of faces and participants’ level of depression. Our findings provide preliminary insight into the way emotions affect the expected appearance of faces. To further understand whether the facial expressions of participants’ mental representations reflect their emotional state, we are conducting a validation study in which a group of naïve observers is asked to classify the reconstructed face images by emotion. Thus, we assess whether the faces communicate participants’ emotional states to others.
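For readers unfamiliar with reverse correlation, the sketch below simulates the logic of the method: a hypothetical observer classifies pure-noise stimuli as "face" or "not face", and averaging the stimuli by response recovers an estimate of the internal template that drove those responses. The template, trial count, and noise levels are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, size = 5000, 32

# A hidden "face template" standing in for the observer's mental representation
# (here just a centred Gaussian blob; purely illustrative).
y, x = np.mgrid[:size, :size]
template = np.exp(-(((x - 16) ** 2 + (y - 16) ** 2) / 40.0))
template -= template.mean()

# Noise-only stimuli; the simulated observer says "face" when a stimulus
# happens to correlate with the internal template.
stimuli = rng.standard_normal((n_trials, size, size))
drive = (stimuli * template).sum(axis=(1, 2)) + rng.standard_normal(n_trials) * 2.0
said_face = drive > np.quantile(drive, 0.75)   # respond "face" on ~25% of trials

# Classification image: mean of "face" trials minus mean of the rest recovers
# an estimate of the template from responses alone.
classification_image = stimuli[said_face].mean(0) - stimuli[~said_face].mean(0)
print(np.corrcoef(classification_image.ravel(), template.ravel())[0, 1])
```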
Representations of people in the brain
Faces and voices convey much of the non-verbal information that we use when communicating with other people. We look at faces and listen to voices to recognize others, understand how they are feeling, and decide how to act. Recent research in my lab aims to investigate whether there are similar coding mechanisms to represent faces and voices, and whether there are brain regions that integrate information across the visual and auditory modalities. In the first part of my talk, I will focus on an fMRI study in which we found that a region of the posterior STS exhibits modality-general representations of familiar people that can be similarly driven by someone’s face and their voice (Tsantani et al., 2019). In the second part of the talk, I will describe our recent attempts to shed light on the type of information that is represented in different face-responsive brain regions (Tsantani et al., 2021).
Intrinsic Geometry of a Combinatorial Sensory Neural Code for Birdsong
Understanding the nature of neural representation is a central challenge of neuroscience. One common approach to this challenge is to compute receptive fields by correlating neural activity with external variables drawn from sensory signals. But these receptive fields are only meaningful to the experimenter, not the organism, because only the experimenter has access to both the neural activity and knowledge of the external variables. To understand neural representation more directly, recent methodological advances have sought to capture the intrinsic geometry of sensory driven neural responses without external reference. To date, this approach has largely been restricted to low-dimensional stimuli as in spatial navigation. In this talk, I will discuss recent work from my lab examining the intrinsic geometry of sensory representations in a model vocal communication system, songbirds. From the assumption that sensory systems capture invariant relationships among stimulus features, we conceptualized the space of natural birdsongs to lie on the surface of an n-dimensional hypersphere. We computed composite receptive field models for large populations of simultaneously recorded single neurons in the auditory forebrain and show that solutions to these models define convex regions of response probability in the spherical stimulus space. We then define a combinatorial code over the set of receptive fields, realized in the moment-to-moment spiking and non-spiking patterns across the population, and show that this code can be used to reconstruct high-fidelity spectrographic representations of natural songs from evoked neural responses. Notably, we find that topological relationships among combinatorial codewords directly mirror acoustic relationships among songs in the spherical stimulus space. That is, the time-varying pattern of co-activity across the neural population expresses an intrinsic representational geometry that mirrors the natural, extrinsic stimulus space. Combinatorial patterns across this intrinsic space directly represent complex vocal communication signals, do not require computation of receptive fields, and are in a form, spike time coincidences, amenable to biophysical mechanisms of neural information propagation.
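To make the abstract's central idea tangible, here is a toy simulation (my own illustration, not the lab's analysis): stimuli live on a sphere, each model neuron has a convex, cap-shaped receptive field, and the population codeword for a stimulus is simply which fields contain it. Overlap between codewords then mirrors angular proximity between stimuli, the intrinsic-geometry property described above.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_unit(n):
    """n random points on the 3-D unit sphere (toy stand-in for a song space)."""
    v = rng.standard_normal((n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

stimuli = random_unit(200)
centers = random_unit(50)   # receptive-field centers of 50 model neurons
width = 0.6                 # angular half-width of each convex (cap-shaped) field

# Combinatorial codeword: which neurons' convex fields contain each stimulus.
codewords = (stimuli @ centers.T) > np.cos(width)

# Codeword overlap (fraction of matching bits) mirrors angular proximity.
sim_code = (codewords[:, None, :] == codewords[None, :, :]).mean(-1)
sim_stim = stimuli @ stimuli.T
print(np.corrcoef(sim_code.ravel(), sim_stim.ravel())[0, 1])  # clearly positive
```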
NEW TREATMENTS FOR PAIN: Unmet needs and how to meet them
“Of pain you could wish only one thing: that it should stop. Nothing in the world was so bad as physical pain. In the face of pain there are no heroes.” (George Orwell, ‘1984’). Neuroscience has revealed the secrets of the brain and nervous system to an extent that was beyond the realm of imagination just 10-20 years ago, let alone in 1949 when Orwell wrote his prophetic novel. Understanding pain, however, presents a unique challenge to academia, industry and medicine, being both a measurable physiological process and a deeply personal, subjective experience. Given the millions of people who suffer from pain every day, wishing only “that it should stop”, the need to find more effective treatments cannot be overstated. ‘New treatments for pain’ will bring together approximately 120 people from the commercial, academic, and not-for-profit sectors to share current knowledge, identify future directions, and enable collaboration, providing delegates with meaningful and practical ways to accelerate their own work on developing treatments for pain.
What do neurons want?
Disentangling neural correlates of consciousness and task relevance using EEG and fMRI
How does our brain generate consciousness, that is, the subjective experience of what it is like to see a face or hear a sound? Do we become aware of a stimulus during early sensory processing, or only later, when information is shared in a widespread fronto-parietal network? Neural correlates of consciousness are typically identified by comparing brain activity when a constant stimulus (e.g., a face) is perceived versus not perceived. However, in most previous experiments, conscious perception was systematically confounded with post-perceptual processes such as decision-making and report. In this talk, I will present recent EEG and fMRI studies dissociating neural correlates of consciousness and task-related processing in visual and auditory perception. Our results suggest that consciousness emerges during early sensory processing, while late, fronto-parietal activity is associated with post-perceptual processes rather than awareness. These findings challenge predominant theories of consciousness and highlight the importance of considering task relevance as a confound across different neuroscientific methods, experimental paradigms and sensory modalities.
Odd dynamics of living chiral crystals
The emergent dynamics exhibited by collections of living organisms often shows signatures of symmetries that are broken at the single-organism level. At the same time, organism development itself encompasses a well-coordinated sequence of symmetry breaking events that successively transform a single, nearly isotropic cell into an animal with well-defined body axis and various anatomical asymmetries. Combining these key aspects of collective phenomena and embryonic development, we describe here the spontaneous formation of hydrodynamically stabilized active crystals made of hundreds of starfish embryos that gather during early development near fluid surfaces. We describe a minimal hydrodynamic theory that is fully parameterized by experimental measurements of microscopic interactions among embryos. Using this theory, we can quantitatively describe the stability, formation and rotation of crystals and rationalize the emergence of mechanical properties that carry signatures of an odd elastic material. Our work thereby quantitatively connects developmental symmetry breaking events on the single-embryo level with remarkable macroscopic material properties of a novel living chiral crystal system.
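For orientation, "odd elastic" refers to a standard construction (following, e.g., Scheibner et al., 2020, and stated here independently of the starfish system) in which an active, chiral modulus \(K^o\) couples the two shear channels antisymmetrically, so the elastic modulus tensor is not symmetric and quasistatic strain cycles can extract net work:

\[
\begin{pmatrix}\sigma_1\\ \sigma_2\end{pmatrix}
=
\begin{pmatrix}\mu & K^o\\ -K^o & \mu\end{pmatrix}
\begin{pmatrix}\varepsilon_1\\ \varepsilon_2\end{pmatrix},
\qquad
\Delta W_{\text{cycle}} = \oint \left(\sigma_1\,d\varepsilon_1 + \sigma_2\,d\varepsilon_2\right) = 2K^o \oint \varepsilon_2\,d\varepsilon_1,
\]

where \(\varepsilon_{1,2}\) are the two independent shear strains and \(\mu\) is the ordinary shear modulus. For \(K^o \neq 0\), the work done around a closed strain cycle is proportional to the area the cycle encloses in shear-strain space, a signature that cannot arise in a passive elastic material.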
Active mechanics of sea star oocytes
The cytoskeleton has the remarkable ability to self-organize into active materials which underlie diverse cellular processes ranging from motility to cell division. Actomyosin is a canonical example of an active material, which generates cellular-scale contractility in part through the forces exerted by myosin motors on actin filaments. While the molecular players underlying actomyosin contractility have been well characterized, how cellular-scale deformation in disordered actomyosin networks emerges from filament-scale interactions is not well understood. In this talk, I’ll present work done in collaboration with Sebastian Fürthauer and Nikta Fakhri addressing this question in vivo, using the meiotic surface contraction wave seen in oocytes of the bat star Patiria miniata as a model system. By perturbing actin polymerization, we find that the cellular deformation rate is a nonmonotonic function of cortical actin density, peaked near the wild-type density. To understand this, we develop an active fluid model coarse-grained from filament-scale interactions and find quantitative agreement with the measured data. The model makes further predictions, including the surprising prediction that the deformation rate decreases with increasing motor concentration. We test these predictions through protein overexpression and find quantitative agreement. Taken together, this work is an important step toward bridging the molecular and cellular length scales for cytoskeletal networks in vivo.
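Purely to illustrate what a peaked, nonmonotonic dependence of deformation rate on filament density can look like (this toy form is a placeholder of mine, not the coarse-grained model presented in the talk), consider

\[
\dot\gamma(\rho) \;\propto\; \rho\, e^{-\rho/\rho^{*}},
\]

which grows at low density \(\rho\) (more filaments transmit more motor stress) but decays beyond the peak at \(\rho = \rho^{*}\). Measuring \(\dot\gamma\) on both sides of the wild-type density, as the perturbation experiments above do, is what discriminates such a model from any monotonic alternative.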
Neuroscience of socioeconomic status and poverty: Is it actionable?
SES neuroscience, using imaging and other methods, has revealed generalizations of interest for population neuroscience and the study of individual differences. But beyond its scientific interest, SES is a topic of societal importance. Does neuroscience offer any useful insights for promoting socioeconomic justice and reducing the harms of poverty? In this talk I will use research from my own lab and others’ to argue that SES neuroscience has the potential to contribute to policy in this area, although its application is premature at present. I will also attempt to forecast the ways in which practical solutions to the problems of poverty may emerge from SES neuroscience. Bio: Martha Farah has conducted groundbreaking research on face and object recognition, visual attention, mental imagery, and semantic memory and, in more recent times, has been at the forefront of interdisciplinary research into neuroscience and society, dealing with topics such as the use of fMRI for lie detection, the ethics of cognitive enhancement, and the effects of social deprivation on brain development.
New Insights into the Neural Machinery of Face Recognition
How Children Discover Mathematical Structure through Relational Mapping
A core question in human development is how we bring meaning to conventional symbols. This question is deeply connected to understanding how children learn mathematics—a symbol system with unique vocabularies, syntaxes, and written forms. In this talk, I will present findings from a program of research focused on children’s acquisition of place value symbols (i.e., multidigit number meanings). The base-10 symbol system presents a variety of obstacles to children, particularly in English. Children who cannot overcome these obstacles face years of struggle as they progress through the mathematics curriculum of the upper elementary and middle school grades. Through a combination of longitudinal, cross-sectional, and pretest-training-posttest approaches, I aim to illuminate relational learning mechanisms by which children sometimes succeed in mastering the place value system, as well as instructional techniques we might use to help those who do not.
Peripersonal space (PPS) as a primary interface for self-environment interactions
Peripersonal space (PPS) is the portion of space where interactions between our body and the external environment are most likely to occur. There is no physical boundary separating PPS from extrapersonal space; rather, PPS is continuously constructed by a dedicated neural system that integrates external stimuli with tactile stimuli on the body as a function of their potential interaction. This mechanism represents a primary interface between the individual and the environment. In this talk, I will present the most recent evidence and highlight the current debate about the neural and computational mechanisms of PPS and its main functions and properties. I will discuss novel data showing how PPS dynamically reshapes to optimize body-environment interactions. I will describe a novel electrophysiological paradigm to study and measure PPS, and show how it has been used to search for a basic marker of the potential for self-environment interaction in newborns and in patients with disorders of consciousness. Finally, I will discuss how PPS is also involved in, and in turn shaped by, social interactions. Within this framework, I will argue that PPS plays a key role in self-consciousness.
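As a concrete illustration of how a PPS boundary is commonly quantified in behavioural versions of such paradigms (the electrophysiological paradigm in the talk differs), one can fit a sigmoid to tactile reaction times as a function of the distance of an approaching external stimulus and take the inflection point as the boundary estimate. The distances and reaction times below are invented for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

# Generic PPS-boundary sketch: tactile RTs speed up when an external stimulus
# is close to the body; the sigmoid's inflection point estimates the boundary.
def sigmoid(d, rt_far, rt_gain, d_boundary, slope):
    return rt_far - rt_gain / (1.0 + np.exp((d - d_boundary) / slope))

distance = np.array([10, 20, 30, 40, 50, 60, 70, 80])      # cm from the body
rt = np.array([355, 360, 370, 395, 420, 430, 433, 435])    # ms, hypothetical data

params, _ = curve_fit(sigmoid, distance, rt, p0=[435, 80, 45, 5])
print(f"estimated PPS boundary ~ {params[2]:.1f} cm")      # inflection point
```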
The role of top-down mechanisms in gaze perception
Humans, as a social species, have a heightened ability to detect and perceive visual elements involved in social exchanges, such as faces and eyes. The gaze, in particular, conveys information crucial for social interactions and social cognition. Researchers have hypothesized that, in order to engage in dynamic face-to-face communication in real time, our brains must quickly and automatically process the direction of another person's gaze. There is evidence that direct gaze improves face encoding and attention capture, and that direct gaze is perceived and processed more quickly than averted gaze. These results are summarized as the "direct gaze effect". However, recent literature suggests that the mode of visual information processing modulates the direct gaze effect. In this presentation, I argue that top-down processing, and specifically the relevance of eye features to the task, promotes the early preferential processing of direct versus averted gaze. On the basis of several recent lines of evidence, I propose that low task relevance of eye features will prevent differences in processing between gaze directions, because gaze direction will be encoded only superficially; differential processing of direct and averted gaze will occur only when the eyes are relevant to the task. To assess the implications of task relevance for the time course of cognitive processing, we will measure event-related potentials (ERPs) in response to facial stimuli. In this project, instead of typical ERP markers such as the P1, N170 or P300, we will measure lateralized ERPs (lERPs), such as the lateralized N170 and the N2pc, which are markers of early face encoding and attentional deployment, respectively. I hypothesize that the task relevance of eye features is crucial to the direct gaze effect, and I propose to revisit previous studies that questioned the existence of the effect. I will illustrate this claim with past studies and recent preliminary data from my lab. Overall, I propose a systematic evaluation of the role of top-down processing in early direct gaze perception, in order to understand the impact of context on gaze perception and, at a larger scope, on social cognition.
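Concretely, a lateralized ERP of the kind mentioned above is obtained by averaging contralateral-minus-ipsilateral voltage differences at homologous electrode pairs. The sketch below uses hypothetical array shapes and random data in place of real recordings:

```python
import numpy as np

# Minimal lateralized-ERP (e.g., N2pc-style) sketch: contralateral minus
# ipsilateral voltage at a homologous posterior electrode pair (PO7/PO8).
rng = np.random.default_rng(2)
n_trials, n_times = 200, 500
po7 = rng.normal(size=(n_trials, n_times))   # left posterior electrode, per trial
po8 = rng.normal(size=(n_trials, n_times))   # right posterior electrode, per trial
stim_side = rng.choice(["left", "right"], size=n_trials)

# Contralateral = the electrode opposite the stimulated/attended hemifield.
contra = np.where((stim_side == "left")[:, None], po8, po7)
ipsi   = np.where((stim_side == "left")[:, None], po7, po8)
lateralized_erp = (contra - ipsi).mean(axis=0)   # average difference wave over trials
```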
Membrane mechanics meet minimal manifolds
Changes in the geometry and topology of self-assembled membranes underlie diverse processes across cellular biology and engineering. Similar to lipid bilayers, monolayer colloidal membranes studied by the Sharma (IISc Bangalore) and Dogic (UCSB) Labs have in-plane fluid-like dynamics and out-of-plane bending elasticity, but their open edges and micron length scale provide a tractable system to study the equilibrium energetics and dynamic pathways of membrane assembly and reconfiguration. First, we discuss how doping colloidal membranes with short miscible rods transforms disk-shaped membranes into saddle-shaped minimal surfaces with complex edge structures. Theoretical modeling demonstrates that their formation is driven by increasing positive Gaussian modulus, which in turn is controlled by the fraction of short rods. Further coalescence of saddle-shaped surfaces leads to exotic topologically distinct structures, including shapes similar to catenoids, tri-noids, four-noids, and higher order structures. We then mathematically explore the mechanics of these catenoid-like structures subject to an external axial force and elucidate their intimate connection to two problems whose solutions date back to Euler: the shape of an area-minimizing soap film and the buckling of a slender rod under compression. A perturbation theory argument directly relates the tensions of membranes to the stability properties of minimal surfaces. We also investigate the effects of including a Gaussian curvature modulus, which, for small enough membranes, causes the axial force to diverge as the ring separation approaches its maximal value.
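For context, the Euler soap-film problem invoked here has a classical closed form (a textbook result, quoted for orientation rather than taken from the talk): the area-minimizing surface of revolution spanning two coaxial rings is the catenoid

\[
r(z) = c \cosh\!\left(\frac{z}{c}\right),
\]

and a solution of \(R = c\cosh(h/2c)\) for rings of radius \(R\) separated by \(h\) exists only up to a critical separation \(h/R \approx 1.325\); beyond it the film jumps to the disconnected solution of two flat disks. This loss of stability at maximal ring separation is the instability that force-displacement experiments on catenoid-like membranes probe.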
Adaptive brain-computer interfaces based on error-related potentials and reinforcement learning
Bernstein Conference 2024
An Attention-based Multimodal Decoder for Hybrid Brain-Computer Interface Control Systems
Bernstein Conference 2024
Calcium imaging-based brain-computer interface in freely behaving mice
Bernstein Conference 2024
Efficient cortical spike train decoding for brain-machine interface implants with recurrent spiking neural networks
Bernstein Conference 2024
Identifying cortical learning algorithms using Brain-Machine Interfaces
Bernstein Conference 2024
A brain-computer interface in prefrontal cortex that suppresses neural variability
COSYNE 2022
A closed-loop emulator that accurately predicts brain-machine interface decoder performance
COSYNE 2022
High-level prediction signals cascade through the macaque face-processing hierarchy
COSYNE 2022
Learning static and motion cues to material by predicting moving surfaces
COSYNE 2022
Stabilizing brain-computer interfaces through nonlinear manifold alignment with dynamics
COSYNE 2022
Calcium imaging-based brain-computer interface for investigating long-term neuronal code dynamics
COSYNE 2023
Compact neural representations in co-adaptive Brain-Computer Interfaces
COSYNE 2023
Thoughtful faces: Using facial features to infer naturalistic cognitive processing across species
COSYNE 2023
Activity exploration influences learning speeds in models of brain-computer interfaces
COSYNE 2025
An Anatomical Explanation of the Inverted Face Effect
COSYNE 2025
Cheese3D: sensitive detection and analysis of whole-face movement in mice
COSYNE 2025
Effect of surface material on whisker-surface interaction and mechanosensory neuron responses
COSYNE 2025
Non-invasive brain-machine interface control with artificial intelligence copilots
COSYNE 2025
Towards generalizable, real-time decoders for brain-computer interfaces
COSYNE 2025
Adaptive brain-machine interface learning uses similar neuronal strategies in motor and hippocampal networks
FENS Forum 2024
Anatomically heterogeneous pyramidal cells in supragranular layers of the dorsal cortex show the surface-to-deep firing frequency increase during natural sleep
FENS Forum 2024
BrainTrawler Lite: Navigating through a multi-scale multi-modal gene transcriptomics data resource through a lightweight user interface
FENS Forum 2024
Carbon-based neural interfaces to probe retinal and cortical circuits with functional ultrasound imaging in vivo
FENS Forum 2024
Causal control of spatial navigation by a hippocampal brain-machine interface induces rapid reconfiguration of cognitive maps
FENS Forum 2024
Characterization of the transcriptional landscape of endogenous retroviruses at the fetal-maternal interface in a mouse model of autism spectrum disorder
FENS Forum 2024
Comparison of learning effects between on-demand and face-to-face classes from the viewpoint of brain activity
FENS Forum 2024
Cortical layer-specific repetition suppression to faces in the fusiform face area
FENS Forum 2024
Development of a next-generation bidirectional neurobiohybrid interface with optimized energy efficiency enabling real-time adaptive neuromodulation
FENS Forum 2024
Dieckol as a novel neuroprotective candidate with cognition improvement and multifaceted mechanisms in Alzheimer's disease mouse model
FENS Forum 2024
An event-based data compressive telemetry for high-bandwidth intracortical brain-computer interfaces
FENS Forum 2024
LTP at excitatory synapses onto inhibitory interneurons in the hippocampus depends on AMPA receptor surface mobility
FENS Forum 2024
A graphic user interface for identification and characterization of neuronal ensembles in two-photon calcium imaging recordings
FENS Forum 2024
Improved neuronal surface detection of α2δ proteins by nanobody immunolabeling
FENS Forum 2024
Inter-brain synchronization in face-to-face and online group communication
FENS Forum 2024
Investigating design parameters for improved tissue integration in brain-computer-interface technology
FENS Forum 2024
Involvement of tyrosine kinase Pyk2 in synaptotoxicity associated with Alzheimer’s disease: A protein at the interface of amyloid and Tau pathologies
FENS Forum 2024
The Janus faces of nanoparticles at the neurovascular unit: A double-edged sword in neurodegeneration
FENS Forum 2024