Topic spotlight
Topic · World Wide

faces

Discover seminars, jobs, and research tagged with faces across World Wide.
78 curated items · 60 Seminars · 18 ePosters
Updated 6 months ago
78 results
Seminar · Open Source

Open SPM: A Modular Framework for Scanning Probe Microscopy

Marcos Penedo Garcia
Senior scientist, LBNI-IBI, EPFL Lausanne, Switzerland
Jun 23, 2025

OpenSPM aims to democratize innovation in the field of scanning probe microscopy (SPM), which is currently dominated by a few proprietary, closed systems that limit user-driven development. Our platform includes a high-speed OpenAFM head and base optimized for small cantilevers, an OpenAFM controller, a high-voltage amplifier, and interfaces compatible with several commercial AFM systems such as the Bruker Multimode, Nanosurf DriveAFM, Witec Alpha SNOM, Zeiss FIB-SEM XB550, and Nenovision Litescope. We have created a fully documented and community-driven OpenSPM platform, with training resources and sourcing information, which has already enabled the construction of more than 15 systems outside our lab. The controller is integrated with open-source tools like Gwyddion, HDF5, and Pycroscopy. We have also engaged external companies, two of which are integrating our controller into their products or interfaces. We see growing interest in applying parts of the OpenSPM platform to related techniques such as correlated microscopy, nanoindentation, and scanning electron/confocal microscopy. To support this, we are developing more generic and modular software, alongside a structured development workflow. A key feature of the OpenSPM system is its Python-based API, which makes the platform fully scriptable and ideal for AI and machine learning applications. This enables, for instance, automatic control and optimization of PID parameters, setpoints, and experiment workflows. With a growing contributor base and industry involvement, OpenSPM is well positioned to become a global, open platform for next-generation SPM innovation.
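The fully scriptable, Python-based API lends itself to automated tuning loops. The sketch below is purely illustrative and uses a mock controller class as a stand-in rather than the actual OpenSPM API (which is documented by the project itself); it only shows the shape of the automated PID-gain search that such an API enables.

```python
# Illustrative sketch only: MockAFMController is a stand-in for a scriptable SPM
# feedback controller, not the actual OpenSPM API. The point is the kind of
# automated PID tuning a Python-scriptable instrument makes possible.
import numpy as np

class MockAFMController:
    """Toy stand-in for a scriptable AFM feedback controller."""
    def __init__(self, optimal_i=0.05, rng=None):
        self._optimal_i = optimal_i
        self._i = 0.0
        self._rng = rng or np.random.default_rng(0)

    def set_pid(self, p, i, d):
        self._i = i

    def get_error_signal(self, n_samples=500):
        # Simulated feedback error: grows as the integral gain moves away
        # from the (hidden) optimum.
        scale = 1.0 + 50.0 * abs(self._i - self._optimal_i)
        return scale * self._rng.standard_normal(n_samples)

def error_rms(controller):
    trace = controller.get_error_signal()
    return float(np.sqrt(np.mean(trace ** 2)))

ctrl = MockAFMController()
best_gain, best_err = None, np.inf
for i_gain in np.logspace(-3, 0, 20):          # naive grid search over the integral gain
    ctrl.set_pid(p=0.01, i=i_gain, d=0.0)
    err = error_rms(ctrl)
    if err < best_err:
        best_gain, best_err = i_gain, err

ctrl.set_pid(p=0.01, i=best_gain, d=0.0)
print(f"selected integral gain {best_gain:.4g} with RMS error {best_err:.3g}")
```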

Seminar · Neuroscience

Development and application of gaze control models for active perception

Prof. Bert Shi
Professor of Electronic and Computer Engineering at the Hong Kong University of Science and Technology (HKUST)
Jun 11, 2025

Gaze shifts in humans serve to direct the high-resolution vision provided by the fovea towards areas in the environment. Gaze can be considered a proxy for attention or an indicator of the relative importance of different parts of the environment. In this talk, we discuss the development of generative models of human gaze in response to visual input. We discuss how such models can be learned, both using supervised learning and using implicit feedback as an agent interacts with the environment, the latter being more plausible in biological agents. We also discuss two ways such models can be used. First, they can be used to improve the performance of artificial autonomous systems, in applications such as autonomous navigation. Second, because these models are contingent on the human's task, goals, and/or state in the context of the environment, observations of gaze can be used to infer information about user intent. This information can be used to improve human-machine and human-robot interaction by making interfaces more anticipative. We discuss example applications in gaze-typing, robotic tele-operation and human-robot interaction.

Seminar · Psychology

Deepfake emotional expressions trigger the uncanny valley brain response, even when they are not recognised as fake

Casey Becker
University of Pittsburgh
Apr 15, 2025

Facial expressions are inherently dynamic, and our visual system is sensitive to subtle changes in their temporal sequence. However, researchers often use dynamic morphs of photographs—simplified, linear representations of motion—to study the neural correlates of dynamic face perception. To explore the brain's sensitivity to natural facial motion, we constructed a novel dynamic face database using generative neural networks, trained on a verified set of video-recorded emotional expressions. The resulting deepfakes, consciously indistinguishable from videos, enabled us to separate biological motion from photorealistic form. Results showed that conventional dynamic morphs elicit distinct responses in the brain compared to videos and photos, suggesting they violate expectations (N400) and have reduced social salience (late positive potential). This suggests that dynamic morphs misrepresent facial dynamism, resulting in misleading insights about the neural and behavioural correlates of face perception. Deepfakes and videos elicited largely similar neural responses, suggesting they could be used as a proxy for real faces in vision research, where video recordings cannot be experimentally manipulated. And yet, despite being consciously undetectable as fake, deepfakes elicited an expectation violation response in the brain. This points to a neural sensitivity to naturalistic facial motion beyond conscious awareness. Despite some differences in neural responses, the realism and manipulability of deepfakes make them a valuable asset for research where videos are unfeasible. Using these stimuli, we proposed a novel marker for the conscious perception of naturalistic facial motion – frontal delta activity – which was elevated for videos and deepfakes, but not for photos or dynamic morphs.

Seminar · Neuroscience

SWEBAGS conference 2024: Shared network mechanisms of dopamine and deep brain stimulation for the treatment of Parkinson’s disease: From modulation of oscillatory cortex – basal ganglia communication to intelligent clinical brain computer interfaces

Wolf-Julian Neumann
Charité – Universitätsmedizin Berlin
Dec 4, 2024
Seminar · Neuroscience

Imagining and seeing: two faces of prosopagnosia

Jason Barton
University of British Columbia
Nov 4, 2024
Seminar · Psychology

Face matching and decision making: The influence of framing, task presentation and criterion placement

Kristen Baker
University of Kent
Sep 29, 2024

Many situations rely on the accurate identification of people with whom we are unfamiliar. For example, security at airports or in police investigations require the identification of individuals from photo-ID. Yet, the identification of unfamiliar faces is error prone, even for practitioners who routinely perform this task. Indeed, even training protocols often yield no discernible improvement. The challenge of unfamiliar face identification is often thought of as a perceptual problem; however, this assumption ignores the potential role of decision-making and its contributing factors (e.g., criterion placement). In this talk, I am going to present a series of experiments that investigate the role of decision-making in face identification.

Seminar · Open Source

A modular, free and open source graphical interface for visualizing and processing electrophysiological signals in real-time

David Baum
Research Engineer at InteraXon
May 27, 2024

Portable biosensors are becoming more popular every year. In this context, I present NeuriGUI, a modular and cross-platform graphical interface that connects to these biosensors for real-time processing, exploration and storage of electrophysiological signals. NeuriGUI acts as a common entry point in brain-computer interfaces, making it possible to plug in downstream third-party applications for real-time analysis of the incoming signal. NeuriGUI is 100% free and open source.

Seminar · Psychology

Enabling witnesses to actively explore faces and reinstate study-test pose during a lineup increases discrimination accuracy

Heather Flowe
University of Birmingham
Apr 21, 2024

In 2014, the US National Research Council called for the development of new lineup technologies to increase eyewitness identification accuracy (National Research Council, 2014). In a police lineup, a suspect is presented alongside multiple individuals known to be innocent who resemble the suspect in physical appearance, known as fillers. A correct identification decision by an eyewitness can lead to a guilty suspect being convicted or an innocent suspect being exonerated from suspicion. An incorrect decision can result in the perpetrator remaining at large, or even a wrongful conviction of a mistakenly identified person. Incorrect decisions carry considerable human and financial costs, so it is essential to develop and enact lineup procedures that maximise discrimination accuracy: the witness's ability to distinguish guilty from innocent suspects. This talk focuses on new technology and innovation in the field of eyewitness identification. We will focus on the interactive lineup, a procedure that we developed based on research and theory from the basic science literature on face perception and recognition. The interactive lineup enables witnesses to actively explore and dynamically view the lineup members, and has been shown to maximise discrimination accuracy. The talk will conclude by reflecting on emerging technological frontiers and research opportunities.

Seminar · Neuroscience

Stability of visual processing in passive and active vision

Tobias Rose
Institute of Experimental Epileptology and Cognition Research, University of Bonn Medical Center
Mar 27, 2024

The visual system faces a dual challenge. On the one hand, features of the natural visual environment should be stably processed - irrespective of ongoing wiring changes, representational drift, and behavior. On the other hand, eye, head, and body motion require a robust integration of pose and gaze shifts in visual computations for a stable perception of the world. We address these dimensions of stable visual processing by studying the circuit mechanism of long-term representational stability, focusing on the role of plasticity, network structure, experience, and behavioral state while recording large-scale neuronal activity with miniature two-photon microscopy.

Seminar · Neuroscience

Modeling idiosyncratic evaluation of faces

Alexander Todorov
University of Chicago
Mar 25, 2024
Seminar · Psychology

Conversations with Caves? Understanding the role of visual psychological phenomena in Upper Palaeolithic cave art making

Izzy Wisher
Aarhus University
Feb 25, 2024

How central were psychological features deriving from our visual systems to the early evolution of human visual culture? Art making emerged deep in our evolutionary history, with the earliest art appearing over 100,000 years ago as geometric patterns etched on fragments of ochre and shell, and figurative representations of prey animals flourishing in the Upper Palaeolithic (c. 40,000 – 15,000 years ago). The latter reflects a complex visual process: the ability to represent something that exists in the real world as a flat, two-dimensional image. In this presentation, I argue that pareidolia – the psychological phenomenon of seeing meaningful forms in random patterns, such as perceiving faces in clouds – was a fundamental process that facilitated the emergence of figurative representation. The influence of pareidolia has often been anecdotally observed in examples of Upper Palaeolithic art, particularly cave art, where the topographic features of the cave wall were incorporated into animal depictions. Using novel virtual reality (VR) light simulations, I tested three hypotheses relating to pareidolia in Upper Palaeolithic cave art in the caves of Las Monedas and La Pasiega (Cantabria, Spain). To evaluate this further, I also developed an interdisciplinary VR eye-tracking experiment, in which participants were immersed in virtual caves based on the cave of El Castillo (Cantabria, Spain). Together, these case studies suggest that pareidolia was an intrinsic part of artist-cave interactions ('conversations') that influenced the form and placement of figurative depictions in the cave. This has broader implications for conceiving of the role of visual psychological phenomena in the emergence and development of figurative art in the Palaeolithic.

Seminar · Neuroscience · Recording

Deepfake Detection in Super-Recognizers and Police Officers

Meike Ramon
University of Lausanne
Feb 12, 2024

Using videos from the Deepfake Detection Challenge (cf. Groh et al., 2021), we investigated human deepfake detection performance (DDP) in two unique observer groups: Super-Recognizers (SRs) and "normal" officers from within the 18K members of the Berlin Police. SRs were identified either via previously proposed lab-based procedures (Ramon, 2021) or the only existing tool for SR identification involving increasingly challenging, authentic forensic material: beSure® (Berlin Test For Super-Recognizer Identification; Ramon & Rjosk, 2022). Across two experiments we examined DDP in participants who judged single videos and pairs of videos in a 2AFC decision setting. We explored speed-accuracy trade-offs in DDP and compared DDP between lab-identified SRs, non-SRs, and police officers whose face identity processing skills had been extensively tested using challenging material. In this talk I will discuss our surprising findings and argue that further work is needed to determine whether face identity processing is related to DDP or not.

Seminar · Neuroscience · Recording

Recognizing Faces: Insights from Group and Individual Differences

Catherine Mondloch
Brock University
Jan 22, 2024
Seminar · Neuroscience

Towards Human Systems Biology of Sleep/Wake Cycles: Phosphorylation Hypothesis of Sleep

Hiroki R. Ueda
Graduate School of Medicine, University of Tokyo
Jan 14, 2024

The field of human biology faces three major technological challenges. Firstly, the causation problem: causation is far harder to address in humans than in model animals. Secondly, the complexity problem: although the human body is composed of cells, a comprehensive cell atlas for it is still lacking. Lastly, the heterogeneity problem: genetic and environmental factors vary substantially among individuals. To tackle these challenges, we have developed innovative approaches. These include 1) mammalian next-generation genetics, such as Triple CRISPR for knockout (KO) mice and ES mice for knock-in (KI) mice, which enables causation studies without traditional breeding; 2) whole-body/brain cell profiling techniques, such as CUBIC, to unravel the complexity of cellular composition; and 3) accurate and user-friendly technologies for measuring sleep and wake states, exemplified by ACCEL, to facilitate the monitoring of fundamental brain states in real-world settings and thus address heterogeneity in humans.

Seminar · Psychology

Investigating face processing impairments in Developmental Prosopagnosia: Insights from behavioural tasks and lived experience

Judith Lowes
University of Stirling
Nov 13, 2023

The defining characteristic of developmental prosopagnosia is severe difficulty recognising familiar faces in everyday life. Numerous studies have reported that the condition is highly heterogeneous in terms of both presentation and severity, with many mixed findings in the literature. I will present behavioural data from a large face processing test battery (n = 24 DPs) as well as some early findings from a larger survey of the lived experience of individuals with DP, and discuss how insights from individuals' real-world experience can help us understand and interpret lab-based data.

Seminar · Neuroscience

Vocal emotion perception at millisecond speed

Ana Pinheiro
University of Lisbon
Oct 16, 2023

The human voice is possibly the most important sound category in the social landscape. Compared to other non-verbal emotion signals, the voice is particularly effective in communicating emotions: it can carry information over large distances and independently of sight. However, the study of vocal emotion expression and perception is surprisingly far less developed than the study of emotion in faces, and its neural and functional correlates remain elusive. As the voice is a dynamically changing auditory stimulus, temporally sensitive techniques such as EEG are particularly informative. In this talk, I will specify the dynamic neurocognitive operations that take place when we listen to vocal emotions, with a focus on the effects of stimulus type, task demands, and speaker and listener characteristics (e.g., age). These studies suggest that emotional voice perception is not only a matter of how one speaks but also of who speaks and who listens. Implications of these findings for the understanding of psychiatric disorders such as schizophrenia will be discussed.

Seminar · Psychology

The contribution of mental face representations to individual face processing abilities

Linda Ficco
Friedrich-Schiller-Universität Jena
Sep 18, 2023

People largely differ with respect to how well they can learn, memorize, and perceive faces. In this talk, I address two potential sources of variation. One factor might be people’s ability to adapt their perception to the kind of faces they are currently exposed to. For instance, some studies report that those who show larger adaptation effects are also better at performing face learning and memory tasks. Another factor might be people’s sensitivity to perceive fine differences between similar-looking faces. In fact, one study shows that the brain of good performers in a face memory task shows larger neural differences between similar-looking faces. Capitalizing on this body of evidence, I present a behavioural study where I explore the relationship between people’s perceptual adaptability and sensitivity and their individual face processing performance.

Seminar · Neuroscience · Recording

Feedback control in the nervous system: from cells and circuits to behaviour

Timothy O'Leary
Department of Engineering, University of Cambridge
May 15, 2023

The nervous system is fundamentally a closed loop control device: the output of actions continually influences the internal state and subsequent actions. This is true at the single cell and even the molecular level, where “actions” take the form of signals that are fed back to achieve a variety of functions, including homeostasis, excitability and various kinds of multistability that allow switching and storage of memory. It is also true at the behavioural level, where an animal’s motor actions directly influence sensory input on short timescales, and higher level information about goals and intended actions are continually updated on the basis of current and past actions. Studying the brain in a closed loop setting requires a multidisciplinary approach, leveraging engineering and theory as well as advances in measuring and manipulating the nervous system. I will describe our recent attempts to achieve this fusion of approaches at multiple levels in the nervous system, from synaptic signalling to closed loop brain machine interfaces.

Seminar · Psychology

Face and voice perception as a tool for characterizing perceptual decisions and metacognitive abilities across the general population and psychosis spectrum

Léon Franzen
University of Luebeck
Apr 25, 2023

Humans constantly make perceptual decisions on human faces and voices. These regularly come with the challenge of receiving only uncertain sensory evidence, resulting from noisy input and noisy neural processes. Efficiently adapting one’s internal decision system including prior expectations and subsequent metacognitive assessments to these challenges is crucial in everyday life. However, the exact decision mechanisms and whether these represent modifiable states remain unknown in the general population and clinical patients with psychosis. Using data from a laboratory-based sample of healthy controls and patients with psychosis as well as a complementary, large online sample of healthy controls, I will demonstrate how a combination of perceptual face and voice recognition decision fidelity, metacognitive ratings, and Bayesian computational modelling may be used as indicators to differentiate between non-clinical and clinical states in the future.

Seminar · Neuroscience

Learning to see stuff

Roland W. Fleming
Giessen University
Mar 12, 2023

Humans are very good at visually recognizing materials and inferring their properties. Without touching surfaces, we can usually tell what they would feel like, and we enjoy vivid visual intuitions about how they typically behave. This is impressive because the retinal image that the visual system receives as input is the result of complex interactions between many physical processes. Somehow the brain has to disentangle these different factors. I will present some recent work in which we show that an unsupervised neural network trained on images of surfaces spontaneously learns to disentangle reflectance, lighting and shape. However, the disentanglement is not perfect, and we find that as a result the network not only predicts the broad successes of human gloss perception, but also the specific pattern of errors that humans exhibit on an image-by-image basis. I will argue this has important implications for thinking about appearance and vision more broadly.

Seminar · Psychology

Automated generation of face stimuli: Alignment, features and face spaces

Carl Gaspar
Zayed University (UAE)
Jan 31, 2023

I describe a well-tested Python module that does automated alignment and warping of face images, and some advantages it offers over existing solutions. An additional tool I've developed does automated extraction of facial features, which can be used in a number of interesting ways. I illustrate the value of wavelet-based features with a brief description of two recent studies: perceptual in-painting, and the robustness of the whole-part advantage across a large stimulus set. Finally, I discuss the suitability of various deep learning models for generating stimuli to study perceptual face spaces. I believe those interested in the forensic aspects of face perception may find this talk useful.

Seminar · Psychology

The Effects of Negative Emotions on Mental Representation of Faces

Fabiana Lombardi
University of Winchester
Nov 22, 2022

Face detection is an initial step of many social interactions, involving a comparison between a visual input and a mental representation of faces built from previous experience. Whilst emotional state has been found to affect the way humans attend to faces, little research has explored the effects of emotions on the mental representation of faces. Here, we examined how state anxiety and state depression modulate the geometric properties of the mental representations underlying face detection, and compared the emotional expression of those representations. To this end, we used an adaptation of the reverse correlation technique inspired by Gosselin and Schyns' (2003) 'Superstitious Approach' to construct visual renderings of observers' mental representations of faces and to relate these to their mental states. In two sessions, on separate days, participants were presented with 'colourful' noise stimuli and asked to detect faces, which they were told were present. Based on the noise fragments identified as faces, we reconstructed the pictorial mental representation utilised by each participant in each session. We found a significant correlation between the size of the mental representation of faces and participants' level of depression. Our findings provide preliminary insight into the way emotions affect appearance expectations of faces. To further understand whether the facial expressions of participants' mental representations reflect their emotional state, we are conducting a validation study in which a group of naïve observers classify the reconstructed face images by emotion. In this way, we assess whether the faces communicate participants' emotional states to others.
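For readers unfamiliar with reverse correlation, the minimal sketch below (with made-up array names and simulated responses, not the study's code) shows the core computation behind such 'Superstitious Approach' analyses: averaging pure-noise stimuli according to the observer's face/no-face responses to approximate their internal face template.

```python
# Minimal sketch of a reverse-correlation ("classification image") analysis:
# pure-noise stimuli are averaged according to the observer's "face present"
# vs "absent" responses. Array shapes and the random responses are placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_trials, h, w = 5000, 64, 64
noise = rng.standard_normal((n_trials, h, w))      # noise stimuli shown
responses = rng.integers(0, 2, size=n_trials)      # 1 = "I saw a face"

# Classification image: mean noise on "face" trials minus mean on "no face"
# trials; its structure approximates the observer's internal face template.
ci = noise[responses == 1].mean(axis=0) - noise[responses == 0].mean(axis=0)

# Standardise for display or comparison across observers and sessions.
ci_z = (ci - ci.mean()) / ci.std()
print(ci_z.shape, ci_z.min(), ci_z.max())
```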

Seminar · Neuroscience · Recording

Representations of people in the brain

Lucia Garrido
City, University of London
Nov 21, 2022

Faces and voices convey much of the non-verbal information that we use when communicating with other people. We look at faces and listen to voices to recognize others, understand how they are feeling, and decide how to act. Recent research in my lab aims to investigate whether there are similar coding mechanisms to represent faces and voices, and whether there are brain regions that integrate information across the visual and auditory modalities. In the first part of my talk, I will focus on an fMRI study in which we found that a region of the posterior STS exhibits modality-general representations of familiar people that can be similarly driven by someone’s face and their voice (Tsantani et al. 2019). In the second part of the talk, I will describe our recent attempts to shed light on the type of information that is represented in different face-responsive brain regions (Tsantani et al., 2021).

Seminar · Neuroscience · Recording

What do neurons want?

Gabriel Kreiman
Harvard
Oct 24, 2022
Seminar · Physics of Life · Recording

Odd dynamics of living chiral crystals

Alexander Mietke
MIT
Aug 14, 2022

The emergent dynamics exhibited by collections of living organisms often shows signatures of symmetries that are broken at the single-organism level. At the same time, organism development itself encompasses a well-coordinated sequence of symmetry breaking events that successively transform a single, nearly isotropic cell into an animal with well-defined body axis and various anatomical asymmetries. Combining these key aspects of collective phenomena and embryonic development, we describe here the spontaneous formation of hydrodynamically stabilized active crystals made of hundreds of starfish embryos that gather during early development near fluid surfaces. We describe a minimal hydrodynamic theory that is fully parameterized by experimental measurements of microscopic interactions among embryos. Using this theory, we can quantitatively describe the stability, formation and rotation of crystals and rationalize the emergence of mechanical properties that carry signatures of an odd elastic material. Our work thereby quantitatively connects developmental symmetry breaking events on the single-embryo level with remarkable macroscopic material properties of a novel living chiral crystal system.

Seminar · Neuroscience

New Insights into the Neural Machinery of Face Recognition

Winrich Freiwald
Rockefeller
Jul 11, 2022
Seminar · Psychology

The role of top-down mechanisms in gaze perception

Nicolas Burra
University of Geneva
Jun 26, 2022

Humans, as a social species, have an increased ability to detect and perceive visual elements involved in social exchanges, such as faces and eyes. The gaze, in particular, conveys information crucial for social interactions and social cognition. Researchers have hypothesized that in order to engage in dynamic face-to-face communication in real time, our brains must quickly and automatically process the direction of another person's gaze. There is evidence that direct gaze improves face encoding and attention capture, and that direct gaze is perceived and processed more quickly than averted gaze. These results are summarized as the "direct gaze effect". However, recent literature suggests that the mode of visual information processing modulates the direct gaze effect. In this presentation, I argue that top-down processing, and specifically the relevance of eye features to the task, promotes the early preferential processing of direct versus averted gaze. On the basis of several recent lines of evidence, I propose that when eye features have low task relevance, their encoding will be superficial and no difference in processing between gaze directions will emerge; differential processing of direct and averted gaze will only occur when the eyes are relevant to the task. To assess the implications of task relevance for the temporality of cognitive processing, we will measure event-related potentials (ERPs) in response to facial stimuli. In this project, instead of typical ERP markers such as the P1, N170 or P300, we will measure lateralized ERPs (lERPs) such as the lateralized N170 and the N2pc, which are markers of early face encoding and attentional deployment, respectively. I hypothesize that the task relevance of eye features is crucial to the direct gaze effect and propose to revisit previous studies that have questioned the existence of the direct gaze effect. This claim will be illustrated with several past studies and recent preliminary data from my lab. Overall, I propose a systematic evaluation of the role of top-down processing in early direct gaze perception in order to understand the impact of context on gaze perception and, more broadly, on social cognition.

Seminar · Physics of Life · Recording

Membrane mechanics meet minimal manifolds

Leroy Jia
Flatiron Institute
Jun 19, 2022

Changes in the geometry and topology of self-assembled membranes underlie diverse processes across cellular biology and engineering. Similar to lipid bilayers, monolayer colloidal membranes studied by the Sharma (IISc Bangalore) and Dogic (UCSB) Labs have in-plane fluid-like dynamics and out-of-plane bending elasticity, but their open edges and micron length scale provide a tractable system to study the equilibrium energetics and dynamic pathways of membrane assembly and reconfiguration. First, we discuss how doping colloidal membranes with short miscible rods transforms disk-shaped membranes into saddle-shaped minimal surfaces with complex edge structures. Theoretical modeling demonstrates that their formation is driven by increasing positive Gaussian modulus, which in turn is controlled by the fraction of short rods. Further coalescence of saddle-shaped surfaces leads to exotic topologically distinct structures, including shapes similar to catenoids, tri-noids, four-noids, and higher order structures. We then mathematically explore the mechanics of these catenoid-like structures subject to an external axial force and elucidate their intimate connection to two problems whose solutions date back to Euler: the shape of an area-minimizing soap film and the buckling of a slender rod under compression. A perturbation theory argument directly relates the tensions of membranes to the stability properties of minimal surfaces. We also investigate the effects of including a Gaussian curvature modulus, which, for small enough membranes, causes the axial force to diverge as the ring separation approaches its maximal value.

Seminar · Neuroscience

Chemistry of the adaptive mind: lessons from dopamine

Roshan Cools, PhD
Donders Institute for Brain, Cognition and Behaviour, Radboudumc, Department of ...
Jun 13, 2022

The human brain faces a variety of computational dilemmas, including the flexibility/stability, the speed/accuracy and the labor/leisure tradeoff. I will argue that striatal dopamine is particularly well suited to dynamically regulate these computational tradeoffs depending on constantly changing task demands. This working hypothesis is grounded in evidence from recent studies on learning, motivation and cognitive control in human volunteers, using chemical PET, psychopharmacology, and/or fMRI. These studies also begin to elucidate the mechanisms underlying the huge variability in catecholaminergic drug effects across different individuals and across different task contexts. For example, I will demonstrate how effects of the most commonly used psychostimulant methylphenidate on learning, Pavlovian and effortful instrumental control depend on fluctuations in current environmental volatility, on individual differences in working memory capacity and on opportunity cost respectively.

Seminar · Neuroscience

Faking emotions and a therapeutic role for robots and chatbots: Ethics of using AI in psychotherapy

Bipin Indurkhya
Cognitive Science Department, Jagiellonian University, Kraków
May 18, 2022

In recent years, there has been a proliferation of social robots and chatbots that are designed so that users make an emotional attachment with them. This talk will start by presenting the first such chatbot, a program called Eliza designed by Joseph Weizenbaum in the mid 1960s. Then we will look at some recent robots and chatbots with Eliza-like interfaces and examine their benefits as well as various ethical issues raised by deploying such systems.

Seminar · Neuroscience · Recording

Genetic-based brain machine interfaces for visual restoration

Serge Picaud
Institut de la Vision, Paris
Apr 12, 2022

Visual restoration is certainly the greatest challenge for brain-machine interfaces, given the high pixel numbers and high refresh rates required. In recent years, we brought retinal prostheses and optogenetic therapy up to successful clinical trials. Concerning visual restoration at the cortical level, prostheses have shown efficacy for limited periods of time and limited pixel numbers. We are investigating the potential of sonogenetics to develop a non-contact brain-machine interface allowing long-lasting activation of the visual cortex. The presentation will introduce our genetic-based brain-machine interfaces for visual restoration at the retinal and cortical levels.

Seminar · Open Source · Recording

Mesmerize: A blueprint for shareable and reproducible analysis of calcium imaging data

Kushal Kolar
University of North Carolina at Chapel Hill
Apr 5, 2022

Mesmerize is a platform for the annotation and analysis of neuronal calcium imaging data. Mesmerize encompasses the entire process of calcium imaging analysis, from raw data to interactive visualizations. Mesmerize allows you to create FAIR-functionally linked datasets that are easy to share. The analysis tools are applicable to a broad range of biological experiments and come with graphical user interfaces (GUIs) that can be used without a programming background.

Seminar · Psychology

Untitled Seminar

Christel Devue
University of Liege
Mar 30, 2022

The nature of the facial information that humans store in order to recognise large numbers of faces remains unclear despite decades of research in the field. To complicate matters further, little is known about how representations may evolve as novel faces become familiar, and there are large individual differences in the ability to recognise faces. I will present a theory I am developing, which assumes that facial representations are cost-efficient. In this framework, individual facial representations would incorporate different diagnostic features in different faces, regardless of familiarity, and would evolve depending on the relative stability of appearance over time. Further, coarse information would be prioritised over fine details in order to decrease storage demands. This would create low-cost facial representations that refine over time if appearance changes. Individual differences could partly rest on that ability to refine representations when needed. I will present data collected in the general population and in participants with developmental prosopagnosia. In support of the proposed view, typical observers and those with developmental prosopagnosia seem to rely on coarse peripheral features when they have no reason to expect that someone's appearance will change in the future.

Seminar · Psychology

Identity-Expression Ambiguity in 3D Morphable Face Models

Bernhard Egger
Friedrich-Alexander-Universität Erlangen-Nürnberg
Mar 16, 2022

3D Morphable Models are my favorite class of generative models and are commonly used to model faces. They are typically applied to ill-posed problems such as 3D reconstruction from 2D data. I'll start my presentation with an introduction to 3D Morphable Models and show what they are capable of. I'll then focus on our recent finding, the Identity-Expression Ambiguity: we demonstrate that non-orthogonality of the variation in identity and expression can cause identity-expression ambiguity in 3D Morphable Models, and that in practice expression and identity are far from orthogonal and can explain each other surprisingly well. Whilst previously reported ambiguities only arise in an inverse rendering setting, the identity-expression ambiguity emerges in the 3D shape generation process itself. The goal of this presentation is to demonstrate the ambiguity and discuss its potential consequences in a computer vision setting as well as for understanding face perception mechanisms in the human brain.
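A toy numerical example can make the ambiguity concrete. In the sketch below (not the authors' model), the expression basis is constructed to lie exactly within the span of the identity basis, so two different identity/expression coefficient pairs generate the identical 3D shape; in real morphable models the overlap is only partial, which is why the two factors can "explain each other surprisingly well".

```python
# Toy illustration (not the authors' model): if the expression basis of a linear
# 3D morphable model lies in the span of the identity basis, distinct
# (identity, expression) coefficient pairs generate exactly the same shape.
import numpy as np

rng = np.random.default_rng(1)
n_vertices, k_id, k_exp = 300, 8, 4
mean_shape = rng.standard_normal(n_vertices)
B_id = rng.standard_normal((n_vertices, k_id))   # identity basis
M = rng.standard_normal((k_id, k_exp))
B_exp = B_id @ M                                 # expression basis, non-orthogonal to identity

a_true = rng.standard_normal(k_id)
b_true = rng.standard_normal(k_exp)
shape = mean_shape + B_id @ a_true + B_exp @ b_true

# A second, different decomposition of the identical shape.
delta_b = rng.standard_normal(k_exp)
b_alt = b_true + delta_b
a_alt = a_true - M @ delta_b
shape_alt = mean_shape + B_id @ a_alt + B_exp @ b_alt

print("max shape difference:", np.abs(shape - shape_alt).max())   # numerically zero
print("coefficient difference:", np.linalg.norm(a_alt - a_true))  # clearly non-zero
```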

Seminar · Neuroscience

Deception, ExoNETs, SmushWare & Organic Data: Tech-facilitated neurorehabilitation & human-machine training

James Patton
University of Illinois at Chicago, Shirley Ryan Ability Lab
Feb 21, 2022

Making use of visual display technology and human-robotic interfaces, many researchers have illustrated various opportunities to distort visual and physical realities. We have had success with interventions such as error augmentation, sensory crossover, and negative viscosity. Judicious application of these techniques leads to training situations that enhance the learning process and can restore movement ability after neural injury. I will trace out clinical studies that have employed such technologies to improve health and function, as well as share some leading-edge insights, including deceiving the patient, moving the "smarts" of software into the hardware, and examining clinical effectiveness.

Seminar · Neuroscience · Recording

Face Pareidolia: biases and the brain

Susan Wardle
NIMH
Jan 31, 2022
Seminar · Psychology

Commonly used face cognition tests yield low reliability and inconsistent performance: Implications for test design, analysis, and interpretation of individual differences data

Anna Bobak & Alex Jones
University of Stirling & Swansea University
Jan 19, 2022

Unfamiliar face processing (face cognition) ability varies considerably in the general population. However, the means of its assessment are not standardised, and the laboratory tests selected vary between studies. It is also unclear whether 1) the most commonly employed tests are reliable, 2) participants show a degree of consistency in their performance, and 3) face cognition tests broadly measure one underlying ability, akin to general intelligence. In this study, we asked participants to perform eight tests frequently employed in the individual differences literature. We examined the reliability of these tests, the relationships between them, and the consistency of participants' performance, and used data-driven approaches to determine the factors underpinning performance. Overall, our findings suggest that the reliability of these tests is poor to moderate, the correlations between them are weak, the consistency of participant performance across tasks is low, and performance can be broadly split into two factors: telling faces together and telling faces apart. We recommend that future studies adjust analyses to account for stimuli (face images) and participants as random factors, routinely assess reliability, and examine newly developed tests of face cognition in the context of convergent validity with other commonly used measures of face cognition ability.
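As a rough illustration of the recommended analysis, the hedged sketch below fits a mixed model with crossed random intercepts for participants and stimuli using simulated placeholder data; the column names are assumptions, and a logistic mixed model would be the stricter choice for binary accuracy.

```python
# Sketch: crossed random effects for participants and face stimuli via
# statsmodels variance components (single dummy group). Data are simulated
# placeholders, not the study's data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_participants, n_stimuli = 40, 30
trials = pd.DataFrame(
    [(p, s) for p in range(n_participants) for s in range(n_stimuli)],
    columns=["participant", "stimulus"],
)
trials["condition"] = rng.integers(0, 2, len(trials))
trials["correct"] = rng.integers(0, 2, len(trials)).astype(float)  # placeholder accuracy
trials["group"] = 1  # single dummy group so both factors enter as variance components

model = smf.mixedlm(
    "correct ~ condition",
    data=trials,
    groups="group",
    re_formula="0",
    vc_formula={
        "participant": "0 + C(participant)",
        "stimulus": "0 + C(stimulus)",
    },
)
print(model.fit().summary())
```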

Seminar · Neuroscience

Adaptive Deep Brain Stimulation: Investigational System Development at the Edge of Clinical Brain Computer Interfacing

Jeffrey Herron
University of Washington
Dec 15, 2021

Over the last few decades, the use of deep brain stimulation (DBS) to improve the treatment of those with neurological movement disorders has been a critical success story in the development of invasive neurotechnology and the promise of brain-computer interfaces (BCI) to improve the lives of those suffering from incurable neurological disorders. In the last decade, investigational devices capable of recording and streaming neural activity from chronically implanted therapeutic electrodes have supercharged research into clinical applications of BCI, enabling in-human studies of adaptive stimulation algorithms intended to further enhance therapeutic outcomes and improve future device performance. In this talk, Dr. Herron will review ongoing clinical research efforts in the field of adaptive DBS systems and algorithms. This will include an overview of DBS in current clinical practice, the development of bidirectional clinical-use research platforms, ongoing algorithm evaluation efforts, and a discussion of current adoption barriers to be addressed in future work.

Seminar · Neuroscience · Recording

Spatial Integration in Normal Face Processing and Its Breakdown in Congenital Prosopagnosia

Galia Avidan
Ben Gurion U
Dec 13, 2021
Seminar · Neuroscience · Recording

NMC4 Short Talk: Decoding finger movements from human posterior parietal cortex

Charles Guan
California Institute of Technology
Dec 1, 2021

Restoring hand function is a top priority for individuals with tetraplegia. This challenge motivates considerable research on brain-computer interfaces (BCIs), which bypass damaged neural pathways to control paralyzed or prosthetic limbs. Here, we demonstrate the BCI control of a prosthetic hand using intracortical recordings from the posterior parietal cortex (PPC). As part of an ongoing clinical trial, two participants with cervical spinal cord injury were each implanted with a 96-channel array in the left PPC. Across four sessions each, we recorded neural activity while they attempted to press individual fingers of the contralateral (right) hand. Single neurons modulated selectively for different finger movements. Offline, we accurately classified finger movements from neural firing rates using linear discriminant analysis (LDA) with cross-validation (accuracy = 90%; chance = 17%). Finally, the participants used the neural classifier online to control all five fingers of a BCI hand. Online control accuracy (86%; chance = 17%) exceeded previous state-of-the-art finger BCIs. Furthermore, offline, we could classify both flexion and extension of the right fingers, as well as flexion of all ten fingers. Our results indicate that neural recordings from PPC can be used to control prosthetic fingers, which may help contribute to a hand restoration strategy for people with tetraplegia.
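The offline decoding step described above can be illustrated with a short, generic sketch (placeholder data and shapes, not the study's pipeline): cross-validated linear discriminant analysis applied to trial-wise firing rates.

```python
# Illustrative sketch of cross-validated LDA on trial-wise neural firing rates.
# Shapes and variable names are assumptions, not the study's actual pipeline.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# firing_rates: (n_trials, n_channels) mean firing rate per trial and electrode.
# finger_labels: (n_trials,) integer code for the attempted finger (six classes
# would give roughly the 17% chance level quoted in the abstract).
rng = np.random.default_rng(0)
firing_rates = rng.poisson(5.0, size=(240, 96)).astype(float)  # placeholder data
finger_labels = np.repeat(np.arange(6), 40)                    # placeholder labels

clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, firing_rates, finger_labels, cv=8)
print(f"cross-validated accuracy: {scores.mean():.2f} (chance ~ {1/6:.2f})")
```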

Seminar · Neuroscience · Recording

NMC4 Short Talk: Hypothesis-neutral response-optimized models of higher-order visual cortex reveal strong semantic selectivity

Meenakshi Khosla
Massachusetts Institute of Technology
Nov 30, 2021

Modeling neural responses to naturalistic stimuli has been instrumental in advancing our understanding of the visual system. Dominant computational modeling efforts in this direction have been deeply rooted in preconceived hypotheses. In contrast, hypothesis-neutral computational methodologies with minimal apriorism, which bring neuroscience data directly to bear on the model development process, are likely to be much more flexible and effective in modeling and understanding tuning properties throughout the visual system. In this study, we develop a hypothesis-neutral approach and characterize response selectivity in the human visual cortex exhaustively and systematically via response-optimized deep neural network models. First, we leverage the unprecedented scale and quality of the recently released Natural Scenes Dataset to constrain parametrized neural models of higher-order visual systems and achieve novel predictive precision, in some cases significantly outperforming state-of-the-art task-optimized models. Next, we ask what kinds of functional properties emerge spontaneously in these response-optimized models. We examine trained networks through structural analysis (feature visualizations) as well as functional analysis (feature verbalizations) by running 'virtual' fMRI experiments on large-scale probe datasets. Strikingly, despite receiving no category-level supervision (the models are optimized solely for brain response prediction from scratch), units in the optimized networks act as detectors for semantic concepts like 'faces' or 'words', providing some of the strongest evidence for categorical selectivity in these visual areas. The observed selectivity in model neurons raises another question: are the category-selective units simply functioning as detectors for their preferred category, or are they a by-product of a non-category-specific visual processing mechanism? To investigate this, we create selective deprivations in the visual diet of these response-optimized networks and study semantic selectivity in the resulting 'deprived' networks, thereby also shedding light on the role of specific visual experiences in shaping neuronal tuning. Together, this new class of data-driven models and novel model interpretability techniques illustrate that DNN models of visual cortex need not be conceived as obscure models with limited explanatory power, but rather as powerful, unifying tools for probing the nature of representations and computations in the brain.

Seminar · Neuroscience

Neural network models of binocular depth perception

Paul Hibbard
University of Essex
Nov 30, 2021

Our visual experience of living in a three-dimensional world is created from the information contained in the two-dimensional images projected into our eyes. The overlapping visual fields of the two eyes mean that their images are highly correlated, and that the small differences that are present represent an important cue to depth. Binocular neurons encode this information in a way that both maximises efficiency and optimises disparity tuning for the depth structures that are found in our natural environment. Neural network models provide a clear account of how these binocular neurons encode the local binocular disparity in images. These models can be expanded to multi-layer models that are sensitive to salient features of scenes, such as the orientations and discontinuities between surfaces. These deep neural network models have also shown the importance of binocular disparity for the segmentation of images into separate objects, in addition to the estimation of distance. These results demonstrate the usefulness of machine learning approaches as a tool for understanding biological vision.
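As a generic illustration of such models (not the speaker's implementations), the sketch below defines a small convolutional network that takes a left/right patch pair as two input channels and regresses the local binocular disparity.

```python
# Generic sketch: a small convolutional network mapping a stereo patch pair
# (left and right image as two channels) to a scalar disparity estimate.
import torch
import torch.nn as nn

class DisparityNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=5, padding=2),  # binocular filters over L/R channels
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.readout = nn.Linear(32, 1)  # scalar disparity estimate

    def forward(self, stereo_patch):      # stereo_patch: (batch, 2, H, W)
        h = self.features(stereo_patch).flatten(1)
        return self.readout(h).squeeze(-1)

model = DisparityNet()
patches = torch.randn(8, 2, 32, 32)       # placeholder stereo patches
print(model(patches).shape)                # torch.Size([8])
```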

Seminar · Neuroscience

Advancing Brain-Computer Interfaces by adopting a neural population approach

Juan Alvaro Gallego
Imperial College London
Nov 29, 2021

Brain-computer interfaces (BCIs) have afforded paralysed users "mental control" of computer cursors and robots, and even of electrical stimulators that reanimate their own limbs. Most existing BCIs map the activity of hundreds of motor cortical neurons recorded with implanted electrodes into control signals to drive these devices. Despite these impressive advances, the field faces a number of challenges that need to be overcome in order for BCIs to become widely used in daily living. In this talk, I will focus on two such challenges: 1) having BCIs that allow performing a broad range of actions; and 2) having BCIs whose performance is robust over long time periods. I will present recent studies from our group in which we apply neuroscientific findings to address both issues. This research is based on an emerging view about how the brain works. Our proposal is that brain function is not based on the independent modulation of the activity of single neurons, but rather on specific population-wide activity patterns, which mathematically define a "neural manifold". I will provide evidence in favour of such a neural manifold view of brain function, and illustrate how advances in systems neuroscience may be critical for the clinical success of BCIs.

Seminar · Open Source · Recording

GuPPy, a Python toolbox for the analysis of fiber photometry data

Talia Lerner
Northwestern University
Nov 23, 2021

Fiber photometry (FP) is an adaptable method for recording in vivo neural activity in freely behaving animals. It has become a popular tool in neuroscience due to its ease of use, low cost, and the ability to combine FP with freely moving behavior, among other advantages. However, analysis of FP data can be a challenge for new users, especially those with a limited programming background. Here, we present Guided Photometry Analysis in Python (GuPPy), a free and open-source FP analysis tool. GuPPy is provided as a Jupyter notebook, a well-commented interactive development environment (IDE) designed to operate across platforms. GuPPy presents the user with a set of graphical user interfaces (GUIs) to load data and provide input parameters. Graphs produced by GuPPy can be exported into various image formats for integration into scientific figures. As an open-source tool, GuPPy can be modified by users with knowledge of Python to fit their specific needs.
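As a generic illustration of the kind of step such tools automate (not GuPPy's internal code), the sketch below fits the isosbestic control channel to the signal channel and computes ΔF/F on simulated data.

```python
# Generic fiber-photometry preprocessing sketch: least-squares fit of the
# control channel onto the signal channel, then ΔF/F. Data are simulated.
import numpy as np

def delta_f_over_f(signal, control):
    """Fit control onto signal, then compute (signal - fit) / fit."""
    slope, intercept = np.polyfit(control, signal, deg=1)
    fitted = slope * control + intercept
    return (signal - fitted) / fitted

rng = np.random.default_rng(0)
t = np.linspace(0, 600, 60_000)                        # 10 min at 100 Hz (placeholder)
control = 1.0 + 0.1 * np.exp(-t / 300) + 0.01 * rng.standard_normal(t.size)
signal = 1.2 * control + 0.05 * np.sin(t / 5)          # bleaching trend plus "activity"
dff = delta_f_over_f(signal, control)
print(dff.shape, dff.mean())
```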

Seminar · Psychology

Consistency of Face Identity Processing: Basic & Translational Research

Jeffrey Nador
University of Fribourg
Nov 17, 2021

Previous work looking at individual differences in face identity processing (FIP) has found that the most commonly used lab-based performance assessments are unfortunately not sufficiently sensitive on their own to measure performance in both the upper and lower tails of the general population simultaneously. More recently, researchers have therefore begun incorporating multiple testing procedures into their assessments. Still, the growing consensus seems to be that at the individual level there is quite a bit of variability between test scores, with the consequence that extreme scores will still occur simply by chance in large enough samples. To mitigate this issue, our recent work has developed measures of intra-individual FIP consistency to refine selection of those with superior abilities (i.e., from the upper tail). First, we assessed consistency of face matching and recognition in neurotypical controls and compared them to a sample of super-recognizers (SRs). For face matching, we demonstrated psychophysically that SRs show significantly greater consistency than controls in exploiting spatial frequency information. For recognition, we showed that SRs' recognition of faces is highly related to the memorability of identities, yet effectively unrelated among controls. So overall, at the high end of the FIP spectrum, consistency can be a useful tool for revealing both qualitative and quantitative individual differences. Finally, in conjunction with collaborators from the Rheinland-Pfalz Police, we developed a pair of bespoke work samples to obtain bias-free measures of intra-individual consistency in current law enforcement personnel. Officers with higher composite scores on a set of three challenging FIP tests tended to show higher consistency, and vice versa. Overall, this suggests that consistency is not only a reasonably good marker of superior FIP abilities, but could also present important practical benefits for personnel selection in many other domains of expertise.

Seminar · Neuroscience

Learning to see Stuff

Kate Storrs
Justus Liebig University Giessen
Oct 26, 2021

Materials with complex appearances, like textiles and foodstuffs, pose challenges for conventional theories of vision. How does the brain learn to see properties of the world—like the glossiness of a surface—that cannot be measured by any other senses? Recent advances in unsupervised deep learning may help shed light on material perception. I will show how an unsupervised deep neural network trained on an artificial environment of surfaces that have different shapes, materials and lighting, spontaneously comes to encode those factors in its internal representations. Most strikingly, the model makes patterns of errors in its perception of material that follow, on an image-by-image basis, the patterns of errors made by human observers. Unsupervised deep learning may provide a coherent framework for how many perceptual dimensions form, in material perception and beyond.

Seminar · Neuroscience · Recording

Interactions between visual cortical neurons that give rise to conscious perception

Pieter Roelfsema
Netherlands Institute for Neuroscience
Oct 24, 2021

I will discuss the mechanisms that determine whether a weak visual stimulus will reach consciousness or not. If the stimulus is simple, early visual cortex acts as a relay station that sends the information to higher visual areas. If the stimulus arrives with at least a minimal strength, it will be stored in working memory and can be reported. However, during more complex visual perceptions, which for example depend on the segregation of a figure from the background, the role of early visual cortex goes beyond that of a simple relay. It then acts as a cognitive blackboard, and conscious perception depends on it. Our results inspire new approaches to create a visual prosthesis for the blind, by creating a direct interface with the visual brain. I will discuss how high-channel-number interfaces with the visual cortex might be used to restore a rudimentary form of vision in blind individuals.

Seminar · Neuroscience

What Art can tell us about the Brain

Margaret Livingstone
Harvard
Oct 4, 2021

Artists have been doing experiments on vision longer than neurobiologists. Some major works of art have provided insights as to how we see; some of these insights are so fundamental that they can be understood in terms of the underlying neurobiology. For example, artists have long realized that color and luminance can play independent roles in visual perception. Picasso said, "Colors are only symbols. Reality is to be found in luminance alone." This observation has a parallel in the functional subdivision of our visual systems, where color and luminance are processed by the evolutionarily newer, primate-specific What system, and the older, colorblind Where (or How) system. Many techniques developed over the centuries by artists can be understood in terms of the parallel organization of our visual systems. I will explore how the segregation of color and luminance processing is the basis for why some Impressionist paintings seem to shimmer, why some op art paintings seem to move, some principles of Matisse's use of color, and how the Impressionists painted "air". Central and peripheral vision are distinct, and I will show how the differences in resolution across our visual field make the Mona Lisa's smile elusive and produce a dynamic illusion in Pointillist paintings, Chuck Close paintings, and photomosaics. I will explore how artists have figured out important features about how our brains extract relevant information about faces and objects, and I will discuss why learning disabilities may be associated with artistic talent.

Seminar · Open Source · Recording

Autopilot v0.4.0 - Distributing development of a distributed experimental framework

Jonny Saunders
University of Oregon
Sep 28, 2021

Autopilot is a Python framework for performing complex behavioral neuroscience experiments by coordinating a swarm of Raspberry Pis. It was designed not only to give researchers a tool that allows them to perform the hardware-intensive experiments necessary for the next generation of naturalistic neuroscientific observation, but also to make it easier for scientists to be good stewards of the human knowledge project. Specifically, we designed Autopilot as a framework that lets its users contribute their technical expertise to a cumulative library of hardware interfaces and experimental designs, and produce data that is clean at the time of acquisition, lowering barriers to open scientific practices. As Autopilot matures, we have been progressively making these aspirations a reality. Currently we are preparing the release of Autopilot v0.4.0, which will include a new plugin system and a wiki that uses semantic web technology to build a technical and contextual knowledge repository. By combining human-readable text and semantic annotations in a wiki that makes contribution as easy as possible, we intend to create a communal knowledge system for sharing the contextual technical knowledge that is always excluded from methods sections but is nonetheless necessary to perform cutting-edge experiments. By integrating it with Autopilot, we hope to make a first-of-its-kind system that allows researchers to fluidly blend technical knowledge and open-source hardware designs with the software necessary to use them. Reciprocally, we also hope that this system will support a kind of deep provenance that makes abstract "custom apparatus" statements in methods sections obsolete, allowing the scientific community to losslessly and effortlessly trace a dataset back to the code and hardware designs needed to replicate it. I will describe the basic architecture of Autopilot, recent work on its community contribution ecosystem, and the vision for the future of its development.

Seminar · Neuroscience

Brain-Machine Interfaces: Beyond Decoding

José del R. Millán
University of Texas at Austin
Sep 15, 2021

A brain-machine interface (BMI) is a system that enables users to interact with computers and robots through the voluntary modulation of their brain activity. Such a BMI is particularly relevant as an aid for patients with severe neuromuscular disabilities, although it also opens up new possibilities in human-machine interaction for able-bodied people. Real-time signal processing and decoding of brain signals are certainly at the heart of a BMI. Yet this does not suffice for subjects to operate a brain-controlled device. In the first part of my talk I will review some of our recent studies, most involving participants with severe motor disabilities, that illustrate additional principles of a reliable BMI that enable users to operate different devices. In particular, I will show how an exclusive focus on machine learning is not necessarily the solution, as it may not promote subject learning. This highlights the need for a comprehensive mutual learning methodology that fosters learning at the three critical levels of machine, subject, and application. To further illustrate that BMI is more than just decoding, I will discuss how to enhance subject learning and BMI performance through appropriate feedback modalities. Finally, I will show how these principles translate to motor rehabilitation, where, in a controlled trial, chronic stroke patients achieved significant functional recovery after the intervention that was retained 6-12 months after the end of therapy.

SeminarNeuroscienceRecording

Multisensory speech perception

Michael Beauchamp
University of Pennsylvania
Sep 15, 2021

SeminarPhysics of LifeRecording

Theory of activity-powered interface

Zhihong You
University of California, Santa Barbara
Aug 29, 2021

Interfaces and membranes are ubiquitous in cellular systems across various scales. From lipid membranes to the interfaces of biomolecular condensates inside the cell, these borders not only protect and segregate the inner components from the outside world, but also actively participate in the cell's mechanical regulation and biochemical reactions. Being part of a living system, these interfaces and membranes are usually active and driven away from equilibrium, yet it is still not clear how activity modifies their equilibrium dynamics. Here, I will introduce a model system to tackle this problem. We bring a passive fluid into contact with an active nematic and study the behavior of the resulting liquid-liquid interface. Whereas thermal fluctuations of such an interface are too weak to be observed, active stress can easily force the interface to fluctuate, overhang, and even break up. In the presence of a wall, the active phase exhibits superfluid-like behavior: it can climb up walls -- a phenomenon we call activity-induced wetting. I will show how to formulate theories that capture these phenomena, highlighting the nontrivial effects of active stress. Our work not only demonstrates that activity can introduce interesting features to an interface, but also sheds light on controlling interfacial properties using activity.
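
For context on why thermal roughening is negligible here, the standard equilibrium capillary-wave estimate (a textbook result under one common Fourier convention, not a result from the talk) bounds the height fluctuations of an interface of length L with surface tension γ by the thermal energy:

```latex
\langle |h_q|^2 \rangle = \frac{k_B T}{L\,\gamma\,q^2}
```

Active stress, by contrast, injects energy at a rate set by the activity rather than by the thermal energy k_B T, which is why it can drive the much larger interfacial deformations described above.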

SeminarPsychology

Statistical Summary Representations in Identity Learning: Exemplar-Independent Incidental Recognition

Yaren Koca
University of Regina
Aug 25, 2021

The literature suggests that ensemble coding, the ability to represent the gist of sets, may be an underlying mechanism for becoming familiar with newly encountered faces. This possibility was investigated by introducing a new training paradigm that involves incidental learning of target identities interspersed among distractors. The effectiveness of this training paradigm was explored in Study 1, which revealed that observers who learned unfamiliar faces incidentally performed just as well as observers who were explicitly instructed to learn them, and that the intervening distractors did not disrupt familiarization. Using the same training paradigm, ensemble coding was investigated as an underlying mechanism for face familiarization in Study 2 by measuring familiarity with the targets at different time points, using average images created from either seen or unseen encounters of the target. The results revealed that observers whose familiarity was tested with seen averages outperformed those tested with unseen averages; however, this discrepancy diminished over time. In other words, successful recognition of the target faces became less reliant on previously encountered exemplars over time, suggesting an exemplar-independent representation that is likely achieved through ensemble coding. Taken together, these results provide direct evidence that ensemble coding is a viable underlying mechanism for face familiarization and that faces interspersed among distractors can be learned incidentally.
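
As a minimal illustration of the stimuli described (not the study's materials; the array shapes and the seen/unseen split below are placeholders), a "seen" or "unseen" average is simply the pixel-wise mean of a set of aligned photos of the target identity:

```python
# Minimal sketch, assuming aligned grayscale photos of one identity as a NumPy stack.
import numpy as np

def average_image(exemplars):
    """Pixel-wise mean of a stack of aligned grayscale photos, shape (n, height, width)."""
    return np.mean(np.asarray(exemplars, dtype=float), axis=0)

rng = np.random.default_rng(0)
all_photos = rng.random((20, 128, 128))          # placeholder for 20 aligned photos of one target
seen_average = average_image(all_photos[:10])    # built from encounters shown during learning
unseen_average = average_image(all_photos[10:])  # built from encounters never shown
print(seen_average.shape, unseen_average.shape)
```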

SeminarPhysics of Life

Active recognition at immune cell interfaces

Shenshen Wang
UC Los Angeles
Aug 12, 2021

SeminarPsychology

Exploring perceptual similarity and its relation to image-based spaces: an effect of familiarity

Rosyl Somai
University of Stirling
Aug 11, 2021

One challenge in exploring the internal representation of faces is the lack of controlled stimulus transformations. Researchers are often limited to verbalizable transformations when creating a dataset. An alternative to verbalization for interpretability is to find image-based measures that allow us to quantify image transformations. In this study, we explore whether PCA can be used to create controlled transformations of a face by testing the effect of these transformations on human perceptual similarity and on computational differences in Gabor, pixel, and DNN spaces. We found that perceptual similarity and the three image-based spaces are linearly related, almost perfectly in the case of the DNN, with a correlation of 0.94. This provides a controlled way to alter the appearance of a face. In Experiment 2, the effect of familiarity on the perception of multidimensional transformations was explored. Our findings show a positive relationship between the number of components transformed and both perceptual similarity ratings and distances in the same three image-based spaces used in Experiment 1. Furthermore, we found that familiar faces are rated as more similar overall than unfamiliar faces; that is, a change to a familiar face is perceived as making less difference than the exact same change to an unfamiliar face. The ability to quantify, and thus control, these transformations is a powerful tool for exploring the factors that mediate a change in perceived identity.
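
As a rough sketch of this kind of pipeline (not the authors' code; the data, component count, and step size below are placeholders), scikit-learn's PCA can be used to shift an aligned, flattened face image along a single principal component and to measure the resulting difference in pixel space, one of the image-based measures mentioned:

```python
# Minimal sketch of a PCA-based face transformation, assuming aligned, flattened grayscale faces.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
faces = rng.random((300, 64 * 64))        # placeholder for 300 aligned, flattened face images
pca = PCA(n_components=50).fit(faces)     # component count is illustrative

def transform_face(face, component, amount):
    """Shift one face along a single principal component by `amount` standard deviations."""
    coords = pca.transform(face[None, :])                 # project into PCA space
    coords[0, component] += amount * np.sqrt(pca.explained_variance_[component])
    return pca.inverse_transform(coords)[0]               # reconstruct the altered image

def pixel_distance(a, b):
    """Pixel-space dissimilarity between two images."""
    return np.linalg.norm(a - b)

original = faces[0]
altered = transform_face(original, component=3, amount=2.0)
print(pixel_distance(original, altered))
```

Distances in Gabor or DNN spaces could be computed the same way after swapping the pixel representation for the corresponding feature vectors.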

SeminarPsychology

Characterising the brain representations behind variations in real-world visual behaviour

Simon Faghel-Soubeyrand
Université de Montréal
Aug 4, 2021

Not all individuals are equally competent at recognizing the faces they interact with. Revealing how the brains of different individuals support variations in this ability is a crucial step toward understanding real-world human visual behaviour. In this talk, I will present findings from a large high-density EEG dataset (>100k trials of participants processing various stimulus categories) and computational approaches aimed at characterising the brain representations behind the real-world proficiency of "super-recognizers", individuals at the top of the face recognition ability spectrum. Using decoding analysis of time-resolved EEG patterns, we predicted the trial-by-trial activity of super-recognizer participants with high precision and showed that evidence for variations in face recognition ability is distributed across early, intermediate, and late brain processing steps. Computational modeling of the underlying brain activity uncovered two representational signatures supporting higher face recognition ability: (i) mid-level visual and (ii) semantic computations. The two components were dissociable in processing time (the first around the N170, the second around the P600) and in level of computation (the first emerging from mid-level layers of visual convolutional neural networks, the second from a semantic model characterising sentence descriptions of images). I will conclude by presenting ongoing analyses from a well-known case of acquired prosopagnosia (PS) using similar computational modeling of high-density EEG activity.
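
A minimal sketch of time-resolved decoding of the kind described (not the authors' pipeline; the data below are random placeholders in the usual trials x channels x time layout) trains a cross-validated classifier independently at each time point and tracks accuracy over the epoch:

```python
# Minimal sketch of time-resolved EEG decoding: one classifier per time point.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
epochs = rng.standard_normal((200, 64, 120))   # placeholder: 200 trials, 64 channels, 120 samples
labels = rng.integers(0, 2, size=200)          # placeholder binary condition labels

accuracy = np.zeros(epochs.shape[-1])
for t in range(epochs.shape[-1]):
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    # 5-fold cross-validated decoding accuracy from the channel pattern at time t
    accuracy[t] = cross_val_score(clf, epochs[:, :, t], labels, cv=5).mean()

print(accuracy.max())  # peak decoding accuracy across the epoch
```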

SeminarNeuroscienceRecording

Face distortions as a window into face perception

Brad Duchaine
Dartmouth
Aug 2, 2021

Prosopometamorphopsia (PMO) is a disorder characterized by face perception distortions. People with PMO see facial features that appear to melt, stretch, and change size and position. I'll discuss research on PMO carried out by my lab and others that sheds light on the cognitive and neural organization of face perception. https://facedistortion.faceblind.org/

SeminarPhysics of Life

Coordinated motion of active filaments on spherical surfaces

Eric Keaveny
Imperial College London
Jul 6, 2021

Filaments (slender, microscopic elastic bodies) are prevalent in biological and industrial settings. In the biological case, the filaments are often active, in that they are driven internally by motor proteins, with cilia and flagella being the prime examples. Cilia in particular can appear in dense arrays, and their resulting motions are coupled through the surrounding fluid as well as through the surfaces to which they are attached. In this talk, I present numerical simulations exploring the coordinated motion of active filaments and how it depends on the driving force, the density of filaments, and the attached surface. In particular, we find that when the surface is spherical, its topology introduces local defects in coordinated motion which can then feed back and alter the global state. This is particularly true when the surface is not held fixed and is free to move in the surrounding fluid. These simulations take advantage of a computational framework we developed for fully 3D filament motion, combining unit quaternions, implicit geometric time integration, quasi-Newton methods, and fast, matrix-free methods for hydrodynamic interactions; this framework will also be presented.
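
As a small illustration of one ingredient named above (a generic sketch, not the authors' framework), a filament segment's orientation can be stored as a unit quaternion and advanced with an exponential-map update that keeps it exactly on the unit sphere:

```python
# Minimal sketch of a quaternion orientation update with a geometric (exponential-map) step.
import numpy as np

def quat_multiply(q1, q2):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    w1, v1 = q1[0], q1[1:]
    w2, v2 = q2[0], q2[1:]
    w = w1 * w2 - v1 @ v2
    v = w1 * v2 + w2 * v1 + np.cross(v1, v2)
    return np.concatenate(([w], v))

def quat_exp(omega, dt):
    """Quaternion for a rotation by angular velocity `omega` applied over time `dt`."""
    angle = np.linalg.norm(omega) * dt
    if angle < 1e-12:
        return np.array([1.0, 0.0, 0.0, 0.0])
    axis = omega / np.linalg.norm(omega)
    return np.concatenate(([np.cos(angle / 2)], np.sin(angle / 2) * axis))

def step_orientation(q, omega, dt):
    """Advance orientation q by rotating with spatial angular velocity omega for time dt."""
    q_new = quat_multiply(quat_exp(omega, dt), q)
    return q_new / np.linalg.norm(q_new)   # renormalise to stay a unit quaternion

q = np.array([1.0, 0.0, 0.0, 0.0])          # identity orientation
omega = np.array([0.0, 0.0, 2.0 * np.pi])   # one full turn per unit time about z
for _ in range(100):
    q = step_orientation(q, omega, dt=0.01)
print(q)  # close to -identity after a full revolution, which represents the same rotation
```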

SeminarNeuroscienceRecording

What are you looking at? Adventures in human gaze behaviour

Benjamin De Haas
Giessen University
Jun 28, 2021

ePoster

Adaptive brain-computer interfaces based on error-related potentials and reinforcement learning

Aline Xavier Fidencio, Christian Klaes, Ioannis Iossifidis

Bernstein Conference 2024

ePoster

Identifying cortical learning algorithms using Brain-Machine Interfaces

Sofia Pereira da Silva, Denis Alevi, Friedrich Schuessler, Henning Sprekeler

Bernstein Conference 2024

ePoster

Learning static and motion cues to material by predicting moving surfaces

COSYNE 2022

ePoster

Stabilizing brain-computer interfaces through nonlinear manifold alignment with dynamics

COSYNE 2022

ePoster

Compact neural representations in co-adaptive Brain-Computer Interfaces

Pavithra Rajeswaran, Alexandre Payeur, Guillaume Lajoie, Amy L. Orsborn

COSYNE 2023

ePoster

Thoughtful faces: Using facial features to infer naturalistic cognitive processing across species

Alejandro Tlaie Boria, Katharine Shapcott, Muad Abd el Hay, Berkutay Mert, Pierre-Antoine Ferracci, Robert Taylor, Iuliia Glukhova, Martha Nari Havenith, Marieke Schölvinck

COSYNE 2023

ePoster

Activity exploration influences learning speeds in models of brain-computer interfaces

Stefan Mihalas, Matthew Bull, Jacob Sacks, Marton Rozsa, Christina Wang, Karel Svoboda, Matthew Golub, Kyle Aitken, Kayvon Daie

COSYNE 2025

ePoster

Towards generalizable, real-time decoders for brain-computer interfaces

Avery Hee-Woon Ryoo, Nanda H Krishna, Ximeng Mao, Matthew G. Perich, Guillaume Lajoie

COSYNE 2025

ePoster

Carbon-based neural interfaces to probe retinal and cortical circuits with functional ultrasound imaging in vivo

Julie Zhang, Eduard Masvidal-Codina, F. Taygun Duvan, Florian Fallegger, Diep Nguyen, Steven Walston, Vi Anh Nguyen, Julie Dégardin, Ruben Goulet, Quénol César, Fabrice Arcizet, Jose A. Garrido, Anton Guimerà-Brunet, Rob C. Wykes, Serge Picaud

FENS Forum 2024

ePoster

Cortical layer-specific repetition suppression to faces in the fusiform face area

Dace Apsvalka, Sung-Mu Lee, Marta Correia, Richard Henson

FENS Forum 2024

ePoster

An event-based data compressive telemetry for high-bandwidth intracortical brain-computer interfaces

Hua-Peng Liaw, Yuming He, Pietro Russo, Marios Gourdouparis, Chengyao Shi, Paul Hueber, Yao-Hong Liu

FENS Forum 2024

ePoster

The Janus faces of nanoparticles at the neurovascular unit: A double-edged sword in neurodegeneration

Giulia Terribile, Sara Di Girolamo, Paolo Spaiardi, Gerardo Biella, Silvia Sesana, Francesca Re, Giulio Alfredo Sancini

FENS Forum 2024

ePoster

Neonatal white matter microstructure predicts attention disengagement from fearful faces at 8 months

Hilyatushalihah Audah, Eeva-Leena Kataja, Tuomo Häkiö, Ashmeet Jolly, Aylin Rosberg, Elmo Pulli, Silja Luotonen, Isabella L. C. Mariani Wigley, Niloofar Hashempour, Ru Li, Elena Vartiainen, Wajiha Bano, Ilkka Suuronen, Harri Merisaari, John D. Lewis, Riika Korja, Saara Nolvi, Linnea Karlsson, Hasse Karlsson, Jetro J. Tuulari

FENS Forum 2024

ePoster

The role of the mean diffusivity of the amygdala in the perception of emotional faces in 8-month-old infants

Niloofar Hashempour, Jetro J. Tuulari, Harri Merisaari, John D. Lewis, Linnea Karlsson, Hasse Karlsson, Eeva-Leena Kataja

FENS Forum 2024

ePoster

Single-unit responses to dynamic salient negative faces in the human medial temporal lobe

Alina Kiseleva, Eva van Gelder, Hennric Jokeit, Johannes Sarnthein, Lukas Imbach, Debora Ledergerber

FENS Forum 2024

ePoster

Sleepless nights, vanishing faces: The effect of sleep deprivation on long-term social recognition memory in mice

Adithya Sarma, Evgeniya Tyumeneva, Junfei Cao, Soraya Smit, Marit Bonne, Fleur Meijer, Jean-Christophe Billeter, Robbert Havekes

FENS Forum 2024