ECoG
Pascal Fries
The Fries Lab at the Ernst Strüngmann Institute (ESI) in Frankfurt is looking for an enthusiastic postdoctoral project scientist who is interested in developing novel approaches to human neurotherapy. Modern non-invasive and invasive electrophysiological techniques provide information about the state of the human brain with high bandwidth and temporal resolution. In addition, wearables assess many relevant physiological parameters. The project scientist will use those recordings to develop personalized neurofeedback-based approaches to neurotherapy. Initially, this will focus on healthy human subjects and non-invasive techniques (MEG and EEG). Subsequently, this can lead to the inclusion of diseased human subjects and invasive techniques (ECoG), in collaboration with clinical partners. The position is not a typical postdoctoral position, because its primary focus is on the development of therapeutic approaches. Another important focus will be the publication of the resulting scientific advances. The project scientist will join an existing international team with expertise in human and animal electrophysiology, and will be able to use an outstanding infrastructure. For details on the institute and the lab, see: https://www.esi-frankfurt.de/people/pascalfries/
Drs. David Brang and Zhongming Liu
We are seeking a full-time post-doctoral research fellow to study computational and neuroscientific models of perception and cognition. The research fellow will be jointly supervised by Dr. David Brang (https://sites.lsa.umich.edu/brang-lab/) and Dr. Zhongming Liu (https://libi.engin.umich.edu). The goal of this collaboration is to build computational models of cognitive and perceptual processes using data combined from electrocorticography (ECoG) and fMRI. The successful applicant will also have freedom to conduct additional research based on their interests, using a variety of methods -- ECoG, fMRI, DTI, lesion mapping, and EEG. The ideal start date is from spring to fall 2021, and the position is expected to last for at least two years, with the possibility of extension for subsequent years. We are also recruiting a post-doc for research on multisensory interactions (particularly how vision modulates speech perception) using cognitive neuroscience techniques, or to help with our large-scale brain tumor collaboration with Shawn Hervey-Jumper at UCSF (https://herveyjumperlab.ucsf.edu). In this latter collaboration, we collect iEEG (from ~50 patients/year) and lesion mapping data (from ~150 patients/year) in patients with a brain tumor to study sensory and cognitive functions. The goals of this project are to better understand the physiology of tumors, study causal mechanisms of brain functions, and generalize iEEG/ECoG findings from epilepsy patients to a second patient population. Interested applicants should email their CV, a cover letter describing their research interests and career goals, and contact information for 2-3 references to Drs. David Brang (djbrang@umich.edu) and Zhongming Liu (zmliu@umich.edu).
Luciano Fadiga, Alessandro D'Ausilio
We are looking for a number of early-career researchers to fill several postdoctoral positions that will open in the coming months at the University of Ferrara (Department of Neuroscience and Rehabilitation) or the Italian Institute of Technology (Center for Translational Neurophysiology). The group conducts research in the area of the Neurophysiology of Speech and Sensorimotor Communication. It is very well funded and has exclusive access to state-of-the-art laboratories and facilities. The group also has strong collaborations, and the authorizations required to collect data in clinical populations, in the areas of neurosurgery, neurology, and psychiatry.
Maxime Carrière
The ERC Advanced Grant “Material Constraints Enabling Human Cognition (MatCo)” at the Freie Universität Berlin aims to build network models of the human brain that mimic neurocognitive processes involved in language, communication and cognition. A main strategy is to use neural network models constrained by neuroanatomical and neurophysiological features of the human brain in order to explain aspects of human cognition. To this end, neural network simulations are performed and evaluated in neurophysiological and neurometabolic experiments. This neurocomputational and experimental research targets novel explanations of human language and cognition on the basis of neurobiological principles. In the MatCo project, 3 positions are currently available: 1 full-time position for a Scientific Researcher at the postdoctoral level, fixed-term (until 30.9.2025), Salary Scale 13 TV-L FU, ID: WiMi_MatCo100_08-2022; and 2 part-time positions (65%) for Scientific Researchers at the predoctoral level, fixed-term (until 30.9.2025), Salary Scale 13 TV-L FU, ID: WiMi_MatCo65_08-2022.
Caspar Schwiedrzik
The Department of Cognitive Neurobiology of Caspar Schwiedrzik at Ruhr-University Bochum is looking for a highly motivated PhD student to study the neural mechanisms of high-dimensional visual category learning. The lab generally seeks to understand the cortical basis and computational principles of perception and experience-dependent plasticity in the brain. To this end, we use a multimodal approach including fMRI-guided electrophysiological recordings in rodents and non-human primates, and fMRI and ECoG in humans. The PhD student will play a key role in our research efforts in this area. The lab is located at Ruhr-University Bochum and the German Primate Center. At both locations, the lab is embedded into interdisciplinary research centers with international faculty and students pursuing cutting-edge research in cognitive and computational neuroscience. The PhD student will have access to a new imaging center with a dedicated 3T research scanner, electrophysiology, and behavioral setups. The project investigates the neural basis of mental flexibility, and specifically the neural mechanisms of high-dimensional visual category learning, utilizing functional magnetic resonance imaging (fMRI) in combination with computational modelling and behavioral testing in humans. It is funded by an ERC Consolidator Grant (acronym DimLearn; “Flexible Dimensionality of Representational Spaces in Category Learning”) and will be conducted in close collaboration with the labs of Fabian Sinz, Alexander Gail, and Igor Kagan. The PhD student’s project will focus on developing new category learning paradigms to investigate the neural basis of flexible multi-task learning in humans using fMRI. In addition, the PhD student will cooperate with other lab members on parallel computational investigations using artificial neural networks as well as comparative research exploring the same questions in non-human primates.
FLUXSynID: High-Resolution Synthetic Face Generation for Document and Live Capture Images
Synthetic face datasets are increasingly used to overcome the limitations of real-world biometric data, including privacy concerns, demographic imbalance, and high collection costs. However, many existing methods lack fine-grained control over identity attributes and fail to produce paired, identity-consistent images under structured capture conditions. In this talk, I will present FLUXSynID, a framework for generating high-resolution synthetic face datasets with user-defined identity attribute distributions and paired document-style and trusted live capture images. The dataset generated using FLUXSynID shows improved alignment with real-world identity distributions and greater diversity compared to prior work. I will also discuss how FLUXSynID’s dataset and generation tools can support research in face recognition and morphing attack detection (MAD), enhancing model robustness in both academic and practical applications.
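For a sense of the first step such a pipeline implies, here is a minimal sketch of sampling identities from user-defined attribute distributions before image generation. The attribute names, values, and weights are hypothetical illustrations, not FLUXSynID's actual API or schema.

```python
# Sketch: draw identity attributes from user-defined distributions.
# Each sampled identity would then drive a paired document-style and
# live-capture image. Attribute lists and weights are made up.
import random

ATTRIBUTES = {
    "age_group": (["18-30", "31-50", "51-70"], [0.4, 0.4, 0.2]),
    "gender": (["female", "male"], [0.5, 0.5]),
    "eyewear": (["none", "glasses"], [0.8, 0.2]),
}

def sample_identity(rng):
    return {name: rng.choices(values, weights)[0]
            for name, (values, weights) in ATTRIBUTES.items()}

rng = random.Random(42)
for _ in range(3):
    print(sample_identity(rng))
```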
An Ecological and Objective Neural Marker of Implicit Unfamiliar Identity Recognition
We developed a novel paradigm measuring implicit identity recognition using Fast Periodic Visual Stimulation (FPVS) with EEG among 16 students and 12 police officers with normal face processing abilities. Participants' neural responses to a 1-Hz tagged oddball identity embedded within a 6-Hz image stream revealed implicit recognition with high-quality mugshots but not CCTV-like images, suggesting that the effect depends on a minimum image resolution. Our findings extend previous research by demonstrating that even unfamiliar identities can elicit robust neural recognition signatures through brief, repeated passive exposure. This approach offers potential for objective validation of face processing abilities in forensic applications, including assessment of facial examiners, Super-Recognisers, and eyewitnesses, potentially overcoming limitations of traditional behavioral assessment methods.
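To make the frequency-tagging logic concrete, here is a minimal sketch (not the authors' pipeline) of how an oddball response is quantified: recognition of the 1-Hz oddball shows up as spectral amplitude at 1 Hz and its harmonics, scored relative to neighbouring noise bins. The sampling rate, duration, and signal amplitudes are illustrative.

```python
import numpy as np

fs = 256                       # sampling rate (Hz); hypothetical value
t = np.arange(0, 60, 1 / fs)   # one 60-s stimulation sequence

# Toy EEG: a 6 Hz base response, a weaker 1 Hz oddball response, plus noise.
eeg = 2.0 * np.sin(2 * np.pi * 6 * t) + 0.5 * np.sin(2 * np.pi * 1 * t)
eeg += np.random.randn(t.size)

spectrum = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def snr(f_target, half_width=0.5, exclude=0.1):
    """SNR = amplitude at the tagged frequency / mean amplitude of neighbours."""
    sel = (np.abs(freqs - f_target) < half_width) & (np.abs(freqs - f_target) > exclude)
    return spectrum[np.argmin(np.abs(freqs - f_target))] / spectrum[sel].mean()

print(f"SNR at 1 Hz (oddball): {snr(1.0):.1f}")
print(f"SNR at 6 Hz (base):    {snr(6.0):.1f}")
```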
Single-neuron correlates of perception and memory in the human medial temporal lobe
The human medial temporal lobe contains neurons that respond selectively to the semantic contents of a presented stimulus. These "concept cells" may respond to very different pictures of a given person and even to their written or spoken name. Their response latency is far longer than necessary for object recognition, they follow subjective, conscious perception, and they are found in brain regions that are crucial for declarative memory formation. It has thus been hypothesized that they may represent the semantic "building blocks" of episodic memories. In this talk I will present data from single unit recordings in the hippocampus, entorhinal cortex, parahippocampal cortex, and amygdala during paradigms involving object recognition and conscious perception as well as encoding of episodic memories in order to characterize the role of concept cells in these cognitive functions.
Using Fast Periodic Visual Stimulation to measure cognitive function in dementia
Fast periodic visual stimulation (FPVS) has emerged as a promising tool for assessing cognitive function in individuals with dementia. This technique leverages electroencephalography (EEG) to measure brain responses to rapidly presented visual stimuli, offering a non-invasive and objective method for evaluating a range of cognitive functions. Unlike traditional cognitive assessments, FPVS does not rely on behavioural responses, making it particularly suitable for individuals with cognitive impairment. In this talk I will highlight a series of studies that have demonstrated its ability to detect subtle deficits in recognition memory, visual processing and attention in dementia patients using EEG in the lab, at home and in clinic. The method is quick, cost-effective, and scalable, utilizing widely available EEG technology. FPVS holds significant potential as a functional biomarker for early diagnosis and monitoring of dementia, paving the way for timely interventions and improved patient outcomes.
Deepfake emotional expressions trigger the uncanny valley brain response, even when they are not recognised as fake
Facial expressions are inherently dynamic, and our visual system is sensitive to subtle changes in their temporal sequence. However, researchers often use dynamic morphs of photographs—simplified, linear representations of motion—to study the neural correlates of dynamic face perception. To explore the brain's sensitivity to natural facial motion, we constructed a novel dynamic face database using generative neural networks, trained on a verified set of video-recorded emotional expressions. The resulting deepfakes, consciously indistinguishable from videos, enabled us to separate biological motion from photorealistic form. Results showed that conventional dynamic morphs elicit distinct responses in the brain compared to videos and photos, suggesting they violate expectations (N400) and have reduced social salience (late positive potential). This suggests that dynamic morphs misrepresent facial dynamism, resulting in misleading insights about the neural and behavioural correlates of face perception. Deepfakes and videos elicited largely similar neural responses, suggesting they could be used as a proxy for real faces in vision research, where video recordings cannot be experimentally manipulated. And yet, despite being consciously undetectable as fake, deepfakes elicited an expectation violation response in the brain. This points to a neural sensitivity to naturalistic facial motion, beyond conscious awareness. Despite some differences in neural responses, the realism and manipulability of deepfakes make them a valuable asset for research where videos are unfeasible. Using these stimuli, we proposed a novel marker for the conscious perception of naturalistic facial motion: frontal delta activity, which was elevated for videos and deepfakes, but not for photos or dynamic morphs.
Contentopic mapping and object dimensionality - a novel understanding of the organization of object knowledge
Our ability to recognize an object amongst many others is one of the most important features of the human mind. However, object recognition requires tremendous computational effort: we need to parse a complex and recursive environment with ease and proficiency. This challenging feat depends on the implementation of an effective organization of knowledge in the brain. Here I put forth a novel understanding of how object knowledge is organized in the brain, proposing that this organization follows key object-related dimensions, analogously to how sensory information is organized in the brain. Moreover, I will propose that this knowledge is topographically laid out on the cortical surface according to these object-related dimensions, which code for different types of representational content – I call this contentopic mapping. I will show a combination of fMRI and behavioral data to support these hypotheses and present a principled way to explore the multidimensionality of object processing.
LLMs and Human Language Processing
This webinar convened researchers at the intersection of Artificial Intelligence and Neuroscience to investigate how large language models (LLMs) can serve as valuable “model organisms” for understanding human language processing. Presenters showcased evidence that brain recordings (fMRI, MEG, ECoG) acquired while participants read or listened to unconstrained speech can be predicted by representations extracted from state-of-the-art text- and speech-based LLMs. In particular, text-based LLMs tend to align better with higher-level language regions, capturing more semantic aspects, while speech-based LLMs excel at explaining early auditory cortical responses. However, purely low-level features can drive part of these alignments, complicating interpretations. New methods, including perturbation analyses, highlight which linguistic variables matter for each cortical area and time scale. Further, “brain tuning” of LLMs—fine-tuning on measured neural signals—can improve semantic representations and downstream language tasks. Despite open questions about interpretability and exact neural mechanisms, these results demonstrate that LLMs provide a promising framework for probing the computations underlying human language comprehension and production at multiple spatiotemporal scales.
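The alignment results described above rest on a standard encoding-model recipe: regress brain responses on LLM-derived stimulus features and score predictions on held-out data. Below is a minimal sketch of that recipe; the array shapes, RidgeCV penalties, and the term "brain score" are illustrative, not taken from any specific study presented in the webinar.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

n_timepoints, n_features, n_voxels = 1000, 768, 50
X = np.random.randn(n_timepoints, n_features)   # LLM layer activations per TR (toy data)
Y = np.random.randn(n_timepoints, n_voxels)     # measured brain responses (toy data)

# Keep the split contiguous in time to avoid leakage across neighbouring TRs.
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, shuffle=False)

model = RidgeCV(alphas=np.logspace(-2, 6, 9)).fit(X_tr, Y_tr)
Y_hat = model.predict(X_te)

# Per-voxel correlation between predicted and held-out responses ("brain score").
scores = [np.corrcoef(Y_hat[:, v], Y_te[:, v])[0, 1] for v in range(n_voxels)]
print(f"median voxel-wise r = {np.median(scores):.3f}")
```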
Principles of Cognitive Control over Task Focus and Task Switching
2024 BACN Mid-Career Prize Lecture. Adaptive behavior requires the ability to focus on a current task and protect it from distraction (cognitive stability), and to rapidly switch tasks when circumstances change (cognitive flexibility). How people control task focus and switch-readiness has therefore been the target of burgeoning research literatures. Here, I review and integrate these literatures to derive a cognitive architecture and functional rules underlying the regulation of stability and flexibility. I propose that task focus and switch-readiness are supported by independent mechanisms whose strategic regulation is nevertheless governed by shared principles: both stability and flexibility are matched to anticipated challenges via an incremental, online learner that nudges control up or down based on the recent history of task demands (a recency heuristic), as well as via episodic reinstatement when the current context matches a past experience (a recognition heuristic).
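The recency heuristic described above is, in essence, an online delta rule. Here is a minimal sketch of that idea, assuming a control level in [0, 1] and binary trial demands; the learning rate and coding are my illustrative assumptions, not parameters from the lecture.

```python
# Sketch: nudge a control setting (e.g., task focus) toward the demand
# just experienced, so recent high-demand trials raise control and
# recent low-demand trials lower it.
def update_control(control, demand, lr=0.2):
    """Move the control level a fraction lr toward the observed demand."""
    return control + lr * (demand - control)

control = 0.5                       # initial focus level
trial_demands = [1, 1, 0, 1, 0, 0]  # 1 = conflict present, 0 = absent
for d in trial_demands:
    control = update_control(control, d)
    print(f"demand={d} -> control={control:.2f}")
```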
Llama 3.1 Paper: The Llama Family of Models
Modern artificial intelligence (AI) systems are powered by foundation models. This paper presents a new set of foundation models, called Llama 3. It is a herd of language models that natively support multilinguality, coding, reasoning, and tool usage. Our largest model is a dense Transformer with 405B parameters and a context window of up to 128K tokens. This paper presents an extensive empirical evaluation of Llama 3. We find that Llama 3 delivers comparable quality to leading language models such as GPT-4 on a plethora of tasks. We publicly release Llama 3, including pre-trained and post-trained versions of the 405B parameter language model and our Llama Guard 3 model for input and output safety. The paper also presents the results of experiments in which we integrate image, video, and speech capabilities into Llama 3 via a compositional approach. We observe that this approach performs competitively with the state of the art on image, video, and speech recognition tasks. The resulting models are not yet being broadly released as they are still under development.
Gender, trait anxiety and attentional processing in healthy young adults: is a moderated moderation theory possible?
Three studies conducted in the context of PhD work (UNIL) aimed at providing evidence to address the question of potential gender differences in how trait anxiety and executive control biases affect behavioral efficacy. In scope were non-clinical samples of young adult males and females who performed non-emotional tasks assessing basic attentional functioning (Attention Network Test – Interactions, ANT-I), sustained attention (Test of Variables of Attention, TOVA), and visual recognition abilities (Object in Location Recognition Task, OLRT). Results confirmed the intricate nature of the relationship between gender and trait anxiety in healthy adults through the lens of their impact on processing efficacy in males and females. The possibility of a gendered theory of trait anxiety biases is discussed.
How to tell if someone is hiding something from you? An overview of the scientific basis of deception and concealed information detection
In my talk I will give an overview of recent research on deception and concealed information detection. I will start with a short introduction on the problems and shortcomings of traditional deception detection tools and why those still prevail in many recent approaches (e.g., in AI-based deception detection). I want to argue for the importance of more fundamental deception research and give some examples of the insights gained from it. In the second part of the talk, I will introduce the Concealed Information Test (CIT), a promising paradigm for research and applied contexts to investigate whether someone actually recognizes information that they do not want to reveal. The CIT is based on solid scientific theory and produces large effect sizes in laboratory studies with a number of different measures (e.g., behavioral, psychophysiological, and neural measures). I will highlight some challenges a forensic application of the CIT still faces and how scientific research could assist in overcoming those.
Enabling witnesses to actively explore faces and reinstate study-test pose during a lineup increases discrimination accuracy
In 2014, the US National Research Council called for the development of new lineup technologies to increase eyewitness identification accuracy (National Research Council, 2014). In a police lineup, a suspect is presented alongside multiple individuals known to be innocent who resemble the suspect in physical appearance, known as fillers. A correct identification decision by an eyewitness can lead to a guilty suspect being convicted or an innocent suspect being exonerated from suspicion. An incorrect decision can result in the perpetrator remaining at large, or even a wrongful conviction of a mistakenly identified person. Incorrect decisions carry considerable human and financial costs, so it is essential to develop and enact lineup procedures that maximise discrimination accuracy: the witness’ ability to distinguish guilty from innocent suspects. This talk focuses on new technology and innovation in the field of eyewitness identification. We will focus on the interactive lineup, a procedure that we developed based on research and theory from the basic science literature on face perception and recognition. The interactive lineup enables witnesses to actively explore and dynamically view the lineup members, and it has been shown to maximise discrimination accuracy. The talk will conclude by reflecting on emerging technological frontiers and research opportunities.
The immunopathogenesis of autoimmune seizure disorders
Immune-mediated mechanisms are increasingly recognised as a cause of epilepsy, even in the absence of an immune response against a specific neuronal antigen. In some cases, these autoimmune processes are clearly pathogenic, for example acute seizures in autoimmune encephalitis, whereas in others this is less clear, for example autoimmune-associated epilepsy. Recent research has provided novel insights into the clinical, paraclinical and immunopathogenetic mechanisms in these conditions. I will provide an overview of clinical and paraclinical features of immune-associated seizures. Furthermore, I will describe specific immunopathogenic examples implicating lymphoid follicular autoimmunisation and intrathecal B cells in these conditions. These insights into immunopathogenesis may help to explain the role of current and emerging immunotherapies in these conditions.
Of glia and macrophages, signaling hubs in development and homeostasis
We are interested in the biology of macrophages, which represent the first line of defense against pathogens. In Drosophila, the embryonic hemocytes arise from the mesoderm whereas glial cells arise from multipotent precursors in the neurogenic region. These cell types represent, respectively, the macrophages located outside and within the nervous system (similar to vertebrate microglia). Thus, despite their different origins, hemocytes and glia display common functions. In addition, both cell types express the Glide/Gcm transcription factor, which plays an evolutionarily conserved role as an anti-inflammatory factor. Moreover, embryonic hemocytes play an evolutionarily conserved and fundamental role in development. The ability to migrate and to contact different tissues/organs most likely allows macrophages to function as signaling hubs. The function of macrophages beyond the recognition of the non-self calls for revisiting the biology of these heterogeneous and plastic cells in physiological and pathological conditions across evolution.
Deepfake Detection in Super-Recognizers and Police Officers
Using videos from the Deepfake Detection Challenge (cf. Groh et al., 2021), we investigated human deepfake detection performance (DDP) in two unique observer groups: Super-Recognizers (SRs) and "normal" officers from within the 18K members of the Berlin Police. SRs were identified either via previously proposed lab-based procedures (Ramon, 2021) or the only existing tool for SR identification involving increasingly challenging, authentic forensic material: beSure® (Berlin Test For Super-Recognizer Identification; Ramon & Rjosk, 2022). Across two experiments, we examined DDP in participants who judged single videos and pairs of videos in a 2AFC decision setting. We explored speed-accuracy trade-offs in DDP, and compared DDP between lab-identified SRs, non-SRs, and police officers whose face identity processing skills had been extensively tested using challenging forensic material. In this talk I will discuss our surprising findings and argue that further work is needed to determine whether face identity processing is related to DDP or not.
Recognizing Faces: Insights from Group and Individual Differences
Investigating face processing impairments in Developmental Prosopagnosia: Insights from behavioural tasks and lived experience
The defining characteristic of developmental prosopagnosia is severe difficulty recognising familiar faces in everyday life. Numerous studies have reported that the condition is highly heterogeneous in terms of both presentation and severity, with many mixed findings in the literature. I will present behavioural data from a large face processing test battery (n = 24 DPs) as well as some early findings from a larger survey of the lived experience of individuals with DP and discuss how insights from individuals' real-world experience can help to understand and interpret lab-based data.
Decoding mental conflict between reward and curiosity in decision-making
Humans and animals are not always rational. They not only rationally exploit rewards but also explore an environment owing to their curiosity. However, the mechanism of such curiosity-driven irrational behavior is largely unknown. Here, we developed a decision-making model for a two-choice task based on the free energy principle, which is a theory integrating recognition and action selection. The model describes irrational behaviors depending on the curiosity level. We also proposed a machine learning method to decode temporal curiosity from behavioral data. By applying it to rat behavioral data, we found that the rat had negative curiosity, reflecting conservative selection sticking to more certain options and that the level of curiosity was upregulated by the expected future information obtained from an uncertain environment. Our decoding approach can be a fundamental tool for identifying the neural basis for reward–curiosity conflicts. Furthermore, it could be effective in diagnosing mental disorders.
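To illustrate the reward-curiosity trade-off described above, here is a minimal sketch (not the authors' free-energy model) in which a signed curiosity parameter weights outcome uncertainty against expected reward in a two-choice softmax: positive curiosity favours the uncertain option, while negative curiosity, as decoded for the rats, makes the agent stick to the more certain one. All parameter values are illustrative.

```python
import numpy as np

def entropy(p):
    """Bernoulli outcome entropy (nats), a simple proxy for information gain."""
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def choice_prob(p_reward, curiosity, beta=5.0):
    """Softmax over value = expected reward + curiosity * outcome uncertainty."""
    p = np.asarray(p_reward)
    v = beta * (p + curiosity * entropy(p))
    return np.exp(v - v.max()) / np.exp(v - v.max()).sum()

p_reward = [0.9, 0.5]  # option 0 is better; option 1 is maximally uncertain
for c in (-2.0, 0.0, 2.0):
    print(f"curiosity={c:+.1f} -> P(choose option 0)={choice_prob(p_reward, c)[0]:.2f}")
```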
Vision Unveiled: Understanding Face Perception in Children Treated for Congenital Blindness
Despite her still poor visual acuity and minimal visual experience, a 2-3 month old baby will reliably respond to facial expressions, smiling back at her caretaker or older sibling. But what if that same baby had been deprived of her early visual experience? Will she be able to appropriately respond to seemingly mundane interactions, such as a peer’s facial expression, if she begins seeing at the age of 10? My work is part of Project Prakash, a dual humanitarian/scientific mission to identify and treat curably blind children in India and then study how their brain learns to make sense of the visual world when their visual journey begins late in life. In my talk, I will give a brief overview of Project Prakash, and present findings from one of my primary lines of research: plasticity of face perception with late sight onset. Specifically, I will discuss a mixed methods effort to probe and explain the differential windows of plasticity that we find across different aspects of distributed face recognition, from distinguishing a face from a nonface early in the developmental trajectory, to recognizing facial expressions, identifying individuals, and even identifying one’s own caretaker. I will draw connections between our empirical findings and our recent theoretical work hypothesizing that children with late sight onset may suffer persistent face identification difficulties because of the unusual acuity progression they experience relative to typically developing infants. Finally, time permitting, I will point to potential implications of our findings in supporting newly-sighted children as they transition back into society and school, given that their needs and possibilities significantly change upon the introduction of vision into their lives.
Prosody in the voice, face, and hands changes which words you hear
Speech may be characterized as conveying both segmental information (i.e., about vowels and consonants) as well as suprasegmental information - cued through pitch, intensity, and duration - also known as the prosody of speech. In this contribution, I will argue that prosody shapes low-level speech perception, changing which speech sounds we hear. Perhaps the most notable example of how prosody guides word recognition is the phenomenon of lexical stress, whereby suprasegmental F0, intensity, and duration cues can distinguish otherwise segmentally identical words, such as "PLAto" vs. "plaTEAU" in Dutch. Work from our group showcases the vast variability in how different talkers produce stressed vs. unstressed syllables, while also unveiling the remarkable flexibility with which listeners can learn to handle this between-talker variability. It also emphasizes that lexical stress is a multimodal linguistic phenomenon, with the voice, lips, and even hands conveying stress in concert. In turn, human listeners actively weigh these multisensory cues to stress depending on the listening conditions at hand. Finally, lexical stress is presented as having a robust and lasting impact on low-level speech perception, even down to changing vowel perception. Thus, prosody - in all its multisensory forms - is a potent factor in speech perception, determining what speech sounds we hear.
Microbial modulation of zebrafish behavior and brain development
There is growing recognition that host-associated microbiotas modulate intrinsic neurodevelopmental programs including those underlying human social behavior. Despite this awareness, the fundamental processes are generally not understood. We discovered that the zebrafish microbiota is necessary for normal social behavior. By examining neuronal correlates of behavior, we found that the microbiota restrains neurite complexity and targeting of key forebrain neurons within the social behavior circuitry. The microbiota is also necessary for both localization and molecular functions of forebrain microglia, brain-resident phagocytes that remodel neuronal arbors. In particular, the microbiota promotes expression of complement signaling pathway components important for synapse remodeling. Our work provides evidence that the microbiota modulates zebrafish social behavior by stimulating microglial remodeling of forebrain circuits during early neurodevelopment and suggests molecular pathways for therapeutic interventions during atypical neurodevelopment.
Face and voice perception as a tool for characterizing perceptual decisions and metacognitive abilities across the general population and psychosis spectrum
Humans constantly make perceptual decisions on human faces and voices. These regularly come with the challenge of receiving only uncertain sensory evidence, resulting from noisy input and noisy neural processes. Efficiently adapting one’s internal decision system including prior expectations and subsequent metacognitive assessments to these challenges is crucial in everyday life. However, the exact decision mechanisms and whether these represent modifiable states remain unknown in the general population and clinical patients with psychosis. Using data from a laboratory-based sample of healthy controls and patients with psychosis as well as a complementary, large online sample of healthy controls, I will demonstrate how a combination of perceptual face and voice recognition decision fidelity, metacognitive ratings, and Bayesian computational modelling may be used as indicators to differentiate between non-clinical and clinical states in the future.
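As a schematic of the kind of Bayesian decision model referred to above, the sketch below combines a prior with noisy sensory evidence, decides from the posterior, and reports confidence as the posterior probability of the chosen option. It is a generic illustration under assumed Gaussian likelihoods, not the speaker's fitted model.

```python
import numpy as np
from scipy.stats import norm

def decide(evidence, prior=0.5, sigma=1.0):
    """Two-alternative decision: signal present (mu=1) vs absent (mu=0)."""
    like_present = norm.pdf(evidence, loc=1.0, scale=sigma)
    like_absent = norm.pdf(evidence, loc=0.0, scale=sigma)
    post = prior * like_present / (prior * like_present + (1 - prior) * like_absent)
    choice = post > 0.5
    confidence = post if choice else 1 - post   # metacognitive report
    return choice, confidence

rng = np.random.default_rng(0)
x = rng.normal(1.0, 1.0)      # one noisy sample from a "present" trial
for prior in (0.5, 0.8):      # a stronger prior shifts both choice and confidence
    c, conf = decide(x, prior=prior)
    print(f"prior={prior}: choice={'present' if c else 'absent'}, confidence={conf:.2f}")
```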
Diagnosing dementia using Fastball neurocognitive assessment
Fastball is a novel, fast, passive biomarker of cognitive function that uses cheap, scalable electroencephalography (EEG) technology. It is sensitive to early dementia; it is independent of language, education, effort, and anxiety; and it can be used in any setting, including patients’ homes. It can capture a range of cognitive functions including semantic memory, recognition memory, attention and visual function. We have shown that Fastball is sensitive to cognitive dysfunction in Alzheimer’s disease and Mild Cognitive Impairment, with data collected in patients’ homes using low-cost portable EEG. We are now preparing for significant scale-up and the validation of Fastball in primary and secondary care.
Understanding and Mitigating Bias in Human & Machine Face Recognition
With the increasing use of automated face recognition (AFR) technologies, it is important to consider whether these systems not only perform accurately, but also equitably, or without “bias”. Despite rising public, media, and scientific attention to this issue, the sources of bias in AFR are not fully understood. This talk will explore how human cognitive biases may impact our assessments of performance differentials in AFR systems and our subsequent use of those systems to make decisions. We’ll also show how, if we adjust our definition of what a “biased” AFR algorithm looks like, we may be able to create algorithms that optimize the performance of a human+algorithm team, not simply the algorithm itself.
From cells to systems: multiscale studies of the epileptic brain
It is increasingly recognized that epilepsy affects human brain organization across multiple scales, ranging from cellular alterations in specific regions to macroscale network imbalances. My talk will overview an emerging paradigm that integrates cellular, neuroimaging, and network modelling approaches to faithfully characterize the extent of structural and functional alterations in the common epilepsies. I will also discuss how a multiscale framework can help to derive clinically useful biomarkers of dysfunction, and how these methods may guide surgical planning and prognostics.
Off the rails - how pathological patterns of whole brain activity emerge in epileptic seizures
In most brains across the animal kingdom, brain dynamics can enter pathological states that are recognisable as epileptic seizures. Yet usually, brains operate within certain constraints, given through neuronal function and synaptic coupling, that prevent epileptic seizure dynamics from emerging. In this talk, I will bring together different approaches to identifying how networks in the broadest sense shape brain dynamics. Using illustrative examples ranging from intracranial EEG recordings, to disorders characterised by molecular disruption of a single neurotransmitter receptor type, to single-cell recordings of whole-brain activity in the larval zebrafish, I will address three key questions: (1) how does the regionally specific composition of synaptic receptors shape ongoing physiological brain activity; (2) how can disruption of this regionally specific balance result in abnormal brain dynamics; and (3) which cellular patterns underlie the transition into an epileptic seizure.
Learning to see stuff
Humans are very good at visually recognizing materials and inferring their properties. Without touching surfaces, we can usually tell what they would feel like, and we enjoy vivid visual intuitions about how they typically behave. This is impressive because the retinal image that the visual system receives as input is the result of complex interactions between many physical processes. Somehow the brain has to disentangle these different factors. I will present some recent work in which we show that an unsupervised neural network trained on images of surfaces spontaneously learns to disentangle reflectance, lighting and shape. However, the disentanglement is not perfect, and we find that as a result the network not only predicts the broad successes of human gloss perception, but also the specific pattern of errors that humans exhibit on an image-by-image basis. I will argue this has important implications for thinking about appearance and vision more broadly.
The speaker identification ability of blind and sighted listeners
Previous studies have shown that blind individuals outperform sighted controls in a variety of auditory tasks; however, only a few studies have investigated blind listeners’ speaker identification abilities. In addition, existing studies in the area show conflicting results. The presented empirical investigation with 153 blind (74 of them congenitally blind) and 153 sighted listeners is the first of its kind and scale in which long-term memory effects of blind listeners’ speaker identification abilities are examined. For the empirical investigation, all listeners were evenly assigned to one of nine subgroups (3 x 3 design) in order to investigate the influence of two parameters with three levels, respectively, on blind and sighted listeners’ speaker identification performance. The parameters were a) time interval; i.e. a time interval of 1, 3 or 6 weeks between the first exposure to the voice to be recognised (familiarisation) and the speaker identification task (voice lineup); and b) signal quality; i.e. voice recordings were presented in either studio-quality, mobile phone-quality or as recordings of whispered speech. Half of the presented voice lineups were target-present lineups in which the previously heard target voice was included. The other half consisted of target-absent lineups which contained solely distractor voices. Blind individuals outperformed sighted listeners only under studio quality conditions. Furthermore, for blind and sighted listeners no significant performance differences were found with regard to the three investigated time intervals of 1, 3 and 6 weeks. Blind as well as sighted listeners were significantly better at picking the target voice from target-present lineups than at indicating that the target voice was absent in target-absent lineups. Within the blind group, no significant correlations were found between identification performance and onset or duration of blindness. Implications for the field of forensic phonetics are discussed.
What's wrong with the prosopagnosia literature? A new approach to diagnosing and researching the condition
Developmental prosopagnosia is characterised by severe, lifelong difficulties when recognising facial identity. Most researchers require that prosopagnosia cases exhibit ultra-conservative levels of impairment on the Cambridge Face Memory Test before they include them in their experiments. This results in the majority of people who believe that they have this condition being excluded from the scientific literature. In this talk I outline the many issues that will afflict prosopagnosia research if this continues, and show that these excluded cases do exhibit impairments on all commonly used diagnostic tests when a group-based method of assessment is utilised. I propose a paradigm shift away from cognitive task-based approaches to diagnosing prosopagnosia, and outline a new way that researchers can investigate this condition.
Analyzing artificial neural networks to understand the brain
In the first part of this talk I will present work showing that recurrent neural networks can replicate broad behavioral patterns associated with dynamic visual object recognition in humans. An analysis of these networks shows that different types of recurrence use different strategies to solve the object recognition problem. The similarities between artificial neural networks and the brain present another opportunity, beyond using them just as models of biological processing. In the second part of this talk, I will discuss—and solicit feedback on—a proposed research plan for testing a wide range of analysis tools frequently applied to neural data on artificial neural networks. I will present the motivation for this approach as well as the form the results could take and how this would benefit neuroscience.
Representations of people in the brain
Faces and voices convey much of the non-verbal information that we use when communicating with other people. We look at faces and listen to voices to recognize others, understand how they are feeling, and decide how to act. Recent research in my lab aims to investigate whether there are similar coding mechanisms to represent faces and voices, and whether there are brain regions that integrate information across the visual and auditory modalities. In the first part of my talk, I will focus on an fMRI study in which we found that a region of the posterior STS exhibits modality-general representations of familiar people that can be similarly driven by someone’s face and their voice (Tsantani et al. 2019). In the second part of the talk, I will describe our recent attempts to shed light on the type of information that is represented in different face-responsive brain regions (Tsantani et al., 2021).
It’s All About Motion: Functional organization of the multisensory motion system at 7T
The human middle temporal complex (hMT+) has a crucial biological relevance for the processing and detection of direction and speed of motion in visual stimuli. In both humans and monkeys, it has been extensively investigated in terms of its retinotopic properties and selectivity for direction of moving stimuli; however, only in recent years has there been an increasing interest in how neurons in MT encode the speed of motion. In this talk, I will explore the proposed mechanism of speed encoding, questioning whether hMT+ neuronal populations encode the stimulus speed directly, or whether they separate motion into its spatial and temporal components. I will characterize how neuronal populations in hMT+ encode the speed of moving visual stimuli using electrocorticography (ECoG) and 7T fMRI. I will illustrate that the neuronal populations measured in hMT+ are not directly tuned to stimulus speed, but instead encode speed through separate and independent spatial and temporal frequency tuning. Finally, I will suggest that this mechanism may play a role in evaluating multisensory responses for visual, tactile and auditory stimuli in hMT+.
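As a toy illustration of the two encoding hypotheses contrasted here (not the talk's actual analysis): a truly speed-tuned population responds to the ratio TF/SF, whereas a separable population multiplies independent spatial- and temporal-frequency tunings. Preferred values and tuning widths below are assumptions.

```python
import numpy as np

def gauss_log(x, pref, sigma=0.5):
    """Log-Gaussian tuning curve around a preferred value."""
    return np.exp(-(np.log2(x / pref)) ** 2 / (2 * sigma ** 2))

def separable(sf, tf, sf_pref=1.0, tf_pref=8.0):
    return gauss_log(sf, sf_pref) * gauss_log(tf, tf_pref)

def speed_tuned(sf, tf, speed_pref=8.0):
    return gauss_log(tf / sf, speed_pref)

sf, tf = np.meshgrid(np.logspace(-2, 2, 101, base=2), np.logspace(0, 5, 101, base=2))
# For the separable model, the peak sits at one (SF, TF) point, so preferred
# speed shifts with SF; for the speed-tuned model, iso-response contours form
# a ridge along tf = speed_pref * sf.
for model in (separable, speed_tuned):
    r = model(sf, tf)
    i, j = np.unravel_index(r.argmax(), r.shape)
    print(f"{model.__name__}: peak at SF={sf[i, j]:.2f} c/deg, TF={tf[i, j]:.2f} Hz")
```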
Training Dynamic Spiking Neural Network via Forward Propagation Through Time
With recent advances in learning algorithms, recurrent networks of spiking neurons are achieving performance competitive with standard recurrent neural networks. Still, these learning algorithms are limited to small networks of simple spiking neurons and modest-length temporal sequences, as they impose high memory requirements, have difficulty training complex neuron models, and are incompatible with online learning. Taking inspiration from the concept of Liquid Time-Constants (LTCs), we introduce a novel class of spiking neurons, the Liquid Time-Constant Spiking Neuron (LTC-SN), resulting in functionality similar to the gating operation in LSTMs. We integrate these neurons in SNNs that are trained with Forward Propagation Through Time (FPTT) and demonstrate that thus-trained LTC-SNNs outperform various SNNs trained with BPTT on long sequences while enabling online learning and drastically reducing memory complexity. We show this for several classical benchmarks that can easily be varied in sequence length, like the Add Task and the DVS-Gesture benchmark. We also show how FPTT-trained LTC-SNNs can be applied to large convolutional SNNs, where we demonstrate a new state of the art for online learning in SNNs on a number of standard benchmarks (S-MNIST, R-MNIST, DVS-GESTURE) and also show that large feedforward SNNs can be trained successfully in an online manner to near (Fashion-MNIST, DVS-CIFAR10) or exceeding (PS-MNIST, R-MNIST) state-of-the-art performance as obtained with offline BPTT. Finally, the training and memory efficiency of FPTT enables us to directly train SNNs in an end-to-end manner at network sizes and complexities that were previously infeasible: we demonstrate this by training, in an end-to-end fashion, the first deep and performant spiking neural network for object localization and recognition. Taken together, our contributions enable, for the first time, training large-scale complex spiking neural network architectures online and on long temporal sequences.
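To convey the core idea of a liquid time constant, here is a minimal numpy sketch of a spiking neuron whose membrane decay is gated by its input, which is what yields LSTM-like gating. This is a conceptual illustration under assumed dynamics, not the paper's exact LTC-SN formulation (and in training with FPTT, a surrogate gradient would replace the hard threshold).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ltc_sn_step(v, x, w_in, w_tau, threshold=1.0, dt=1.0):
    """One update of membrane potential v given input x; returns (v, spike)."""
    tau = 1.0 + 9.0 * sigmoid(w_tau @ x)   # input-dependent time constant in [1, 10]
    v = v + dt / tau * (-v + w_in @ x)     # leaky integration with "liquid" tau
    spike = (v >= threshold).astype(float)
    v = v * (1.0 - spike)                  # hard reset after a spike
    return v, spike

rng = np.random.default_rng(0)
n_in, n_neurons = 4, 3
w_in = rng.normal(size=(n_neurons, n_in))
w_tau = rng.normal(size=(n_neurons, n_in))
v = np.zeros(n_neurons)
for t in range(5):
    v, s = ltc_sn_step(v, rng.normal(size=n_in), w_in, w_tau)
    print(f"t={t}: v={np.round(v, 2)}, spikes={s}")
```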
Behavioral Timescale Synaptic Plasticity (BTSP) for biologically plausible credit assignment across multiple layers via top-down gating of dendritic plasticity
A central problem in biological learning is how information about the outcome of a decision or behavior can be used to reliably guide learning across distributed neural circuits while obeying biological constraints. This “credit assignment” problem is commonly solved in artificial neural networks through supervised gradient descent and the backpropagation algorithm. In contrast, biological learning is typically modelled using unsupervised Hebbian learning rules. While these rules only use local information to update synaptic weights, and are sometimes combined with weight constraints to reflect a diversity of excitatory (only positive weights) and inhibitory (only negative weights) cell types, they do not prescribe a clear mechanism for how to coordinate learning across multiple layers and propagate error information accurately across the network. In recent years, several groups have drawn inspiration from the known dendritic non-linearities of pyramidal neurons to propose new learning rules and network architectures that enable biologically plausible multi-layer learning by processing error information in segregated dendrites. Meanwhile, recent experimental results from the hippocampus have revealed a new form of plasticity—Behavioral Timescale Synaptic Plasticity (BTSP)—in which large dendritic depolarizations rapidly reshape synaptic weights and stimulus selectivity with as little as a single stimulus presentation (“one-shot learning”). Here we explore the implications of this new learning rule through a biologically plausible implementation in a rate neuron network. We demonstrate that regulation of dendritic spiking and BTSP by top-down feedback signals can effectively coordinate plasticity across multiple network layers in a simple pattern recognition task. By analyzing hidden feature representations and weight trajectories during learning, we show the differences between networks trained with standard backpropagation, Hebbian learning rules, and BTSP.
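The gating idea at the heart of this proposal can be sketched in a few lines: a top-down feedback signal opens a dendritic "gate", and only then does a large, BTSP-like update pull synaptic weights toward the current input pattern, enabling one-shot-like learning. The rates and the binary gate below are illustrative assumptions, not the authors' full rate-network model.

```python
import numpy as np

def btsp_update(w, x, gate, eta=0.8):
    """One-shot-like update: when gated, pull weights strongly toward input x."""
    return w + eta * gate * (x - w)   # gate in {0, 1}, set by top-down feedback

rng = np.random.default_rng(1)
x = rng.random(5)   # presynaptic activity pattern
w = rng.random(5)   # initial synaptic weights

w_ungated = btsp_update(w, x, gate=0.0)   # no feedback: no plasticity
w_gated = btsp_update(w, x, gate=1.0)     # feedback-gated: rapid reshaping
print("change without gate:", np.round(w_ungated - w, 3))
print("change with gate:   ", np.round(w_gated - w, 3))
```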
Shallow networks run deep: How peripheral preprocessing facilitates odor classification
Drosophila olfactory sensory hairs ("sensilla") typically house two olfactory receptor neurons (ORNs) which can laterally inhibit each other via electrical ("ephaptic") coupling. ORN pairing is highly stereotyped and genetically determined. Thus, olfactory signals arriving in the Antennal Lobe (AL) have been pre-processed by a fixed and shallow network at the periphery. To uncover the functional significance of this organization, we developed a nonlinear phenomenological model of asymmetrically coupled ORNs responding to odor mixture stimuli. We derived an analytical solution to the ORNs’ dynamics, which shows that the peripheral network can extract the valence of specific odor mixtures via transient amplification. Our model predicts that for efficient read-out of the amplified valence signal there must exist specific patterns of downstream connectivity that reflect the organization at the periphery. Analysis of AL→Lateral Horn (LH) fly connectomic data reveals evidence directly supporting this prediction. We further studied the effect of ephaptic coupling on olfactory processing in the AL→Mushroom Body (MB) pathway. We show that stereotyped ephaptic interactions between ORNs lead to a clustered odor representation of glomerular responses. Such clustering in the AL is an essential assumption of theoretical studies on odor recognition in the MB. Together our work shows that preprocessing of olfactory stimuli by a fixed and shallow network increases sensitivity to specific odor mixtures, and aids in the learning of novel olfactory stimuli. Work led by Palka Puri, in collaboration with Chih-Ying Su and Shiuan-Tze Wu.
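To make the asymmetric-coupling idea tangible, here is a minimal rate-model sketch of two ephaptically coupled ORNs, where each neuron's drive is reduced by its neighbour's activity with unequal strengths. The equations and parameters are illustrative, not the analytical model from the talk.

```python
import numpy as np

def simulate(odor_a, odor_b, c_ab=0.6, c_ba=0.2, tau=0.05, dt=0.001, T=0.5):
    """Two coupled ORN firing rates; c_ab = inhibition of A by B (asymmetric)."""
    ra = rb = 0.0
    for _ in range(int(T / dt)):
        ra += dt / tau * (-ra + max(odor_a - c_ab * rb, 0.0))
        rb += dt / tau * (-rb + max(odor_b - c_ba * ra, 0.0))
    return ra, rb

print("odor A alone :", np.round(simulate(1.0, 0.0), 2))
# In the mixture, B suppresses A far more than A suppresses B,
# reshaping the signal before it ever reaches the Antennal Lobe.
print("mixture      :", np.round(simulate(1.0, 1.0), 2))
```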
Beyond Biologically Plausible Spiking Networks for Neuromorphic Computing
Biologically plausible spiking neural networks (SNNs) are an emerging architecture for deep learning tasks due to their energy efficiency when implemented on neuromorphic hardware. However, many of the biological features are at best irrelevant and at worst counterproductive when evaluated in the context of task performance and suitability for neuromorphic hardware. In this talk, I will present an alternative paradigm to design deep learning architectures with good task performance in real-world benchmarks while maintaining all the advantages of SNNs. We do this by focusing on two main features – event-based computation and activity sparsity. Starting from the performant gated recurrent unit (GRU) deep learning architecture, we modify it to make it event-based and activity-sparse. The resulting event-based GRU (EGRU) is extremely efficient for both training and inference. At the same time, it achieves performance close to conventional deep learning architectures in challenging tasks such as language modelling, gesture recognition and sequential MNIST.
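The two features named above can be sketched directly: below is a conceptual numpy illustration of a GRU-style cell whose hidden state is only communicated when it crosses a threshold, so most units stay silent (activity sparsity) and downstream computation is event-driven. This illustrates the idea, not the published EGRU architecture.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def egru_step(h, x, W, U, b, threshold=0.5):
    z = sigmoid(W["z"] @ x + U["z"] @ h + b["z"])            # update gate
    r = sigmoid(W["r"] @ x + U["r"] @ h + b["r"])            # reset gate
    h_tilde = np.tanh(W["h"] @ x + U["h"] @ (r * h) + b["h"])
    h = (1 - z) * h + z * h_tilde                            # standard GRU update
    events = h * (np.abs(h) > threshold)                     # emit only above-threshold units
    return h, events

rng = np.random.default_rng(0)
n_in, n_hid = 4, 8
W = {k: rng.normal(scale=0.5, size=(n_hid, n_in)) for k in "zrh"}
U = {k: rng.normal(scale=0.5, size=(n_hid, n_hid)) for k in "zrh"}
b = {k: np.zeros(n_hid) for k in "zrh"}
h = np.zeros(n_hid)
for t in range(3):
    h, ev = egru_step(h, rng.normal(size=n_in), W, U, b)
    print(f"t={t}: {int((ev != 0).sum())}/{n_hid} units emit events")
```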
Real-world scene perception and search from foveal to peripheral vision
A high-resolution central fovea is a prominent design feature of human vision. But how important is the fovea for information processing and gaze guidance in everyday visual-cognitive tasks? Following on from classic findings for sentence reading, I will present key results from a series of eye-tracking experiments in which observers had to search for a target object within static or dynamic images of real-world scenes. Gaze-contingent scotomas were used to selectively deny information processing in the fovea, parafovea, or periphery. Overall, the results suggest that foveal vision is less important and peripheral vision is more important for scene perception and search than previously thought. The importance of foveal vision was found to depend on the specific requirements of the task. Moreover, the data support a central-peripheral dichotomy in which peripheral vision selects and central vision recognizes.
General purpose event-based architectures for deep learning
Biologically plausible spiking neural networks (SNNs) are an emerging architecture for deep learning tasks due to their energy efficiency when implemented on neuromorphic hardware. However, many of the biological features are at best irrelevant and at worst counterproductive when evaluated in the context of task performance and suitability for neuromorphic hardware. In this talk, I will present an alternative paradigm to design deep learning architectures with good task performance in real-world benchmarks while maintaining all the advantages of SNNs. We do this by focusing on two main features -- event-based computation and activity sparsity. Starting from the performant gated recurrent unit (GRU) deep learning architecture, we modify it to make it event-based and activity-sparse. The resulting event-based GRU (EGRU) is extremely efficient for both training and inference. At the same time, it achieves performance close to conventional deep learning architectures in challenging tasks such as language modelling, gesture recognition and sequential MNIST.
Neuroscience of socioeconomic status and poverty: Is it actionable?
SES neuroscience, using imaging and other methods, has revealed generalizations of interest for population neuroscience and the study of individual differences. But beyond its scientific interest, SES is a topic of societal importance. Does neuroscience offer any useful insights for promoting socioeconomic justice and reducing the harms of poverty? In this talk I will use research from my own lab and others’ to argue that SES neuroscience has the potential to contribute to policy in this area, although its application is premature at present. I will also attempt to forecast the ways in which practical solutions to the problems of poverty may emerge from SES neuroscience. Bio: Martha Farah has conducted groundbreaking research on face and object recognition, visual attention, mental imagery, and semantic memory and - in more recent times - has been at the forefront of interdisciplinary research into neuroscience and society. This deals with topics such as using fMRI for lie detection, ethics of cognitive enhancement, and effects of social deprivation on brain development.
New Insights into the Neural Machinery of Face Recognition
Don't forget the gametes: Neurodevelopmental pathogenesis starts in the sperm and egg
Proper development of the nervous system depends not only on the inherited DNA sequence, but also on proper regulation of gene expression, as controlled in part by epigenetic mechanisms present in the parental gametes. In this presentation an internationally recognized research advocate explains why researchers concerned about the origins of increasingly prevalent neurodevelopmental disorders such as autism and attention deficit hyperactivity disorder should look beyond genetics in probing the origins of dysregulated transcription of brain-related genes. The culprit for a subset of cases, she contends, may lie in the exposure history of the parents, and thus their germ cells. To illustrate how environmentally informed, nongenetic dysfunction may occur, she focuses on the example of parents' histories of exposure to common agents of modern inhalational anesthesia, a highly toxic exposure that in mammalian models has been seen to induce heritable neurodevelopmental abnormality in offspring born of the exposed germline.
Feedforward and feedback processes in visual recognition
Progress in deep learning has spawned great successes in many engineering applications. As a prime example, convolutional neural networks, a type of feedforward neural networks, are now approaching – and sometimes even surpassing – human accuracy on a variety of visual recognition tasks. In this talk, however, I will show that these neural networks and their recent extensions exhibit a limited ability to solve seemingly simple visual reasoning problems involving incremental grouping, similarity, and spatial relation judgments. Our group has developed a recurrent network model of classical and extra-classical receptive field circuits that is constrained by the anatomy and physiology of the visual cortex. The model was shown to account for diverse visual illusions providing computational evidence for a novel canonical circuit that is shared across visual modalities. I will show that this computational neuroscience model can be turned into a modern end-to-end trainable deep recurrent network architecture that addresses some of the shortcomings exhibited by state-of-the-art feedforward networks for solving complex visual reasoning tasks. This suggests that neuroscience may contribute powerful new ideas and approaches to computer science and artificial intelligence.
Unchanging and changing: hardwired taste circuits and their top-down control
The taste system detects five major categories of ethologically relevant stimuli (sweet, bitter, umami, sour, and salt) and accordingly elicits acceptance or avoidance responses. While these taste responses are innate, the taste system retains remarkable flexibility in response to changing external and internal contexts. Taste chemicals are first recognized by dedicated taste receptor cells (TRCs) and then transmitted to the cortex via a multi-station relay. I reasoned that if I could identify taste neural substrates along this pathway, it would provide an entry point for deciphering how taste signals are encoded to drive innate responses and modulated to facilitate adaptive responses. Given the innate nature of taste responses, these neural substrates should be genetically identifiable. I therefore exploited single-cell RNA sequencing to isolate molecular markers defining taste qualities in the taste ganglion and the nucleus of the solitary tract (NST) in the brainstem, the two stations transmitting taste signals from TRCs to the brain. How taste information propagates from the ganglion to the brain is highly debated (i.e., does taste information travel in labeled lines?). Leveraging these genetic handles, I demonstrated a one-to-one correspondence between ganglion and NST neurons coding for the same taste. Importantly, inactivating one 'line' did not affect responses to any other taste stimuli. These results clearly showed that taste information is transmitted to the brain via labeled lines. But are these labeled lines adapted to the internal state and external environment? I studied the modulation of taste signals when sweet and bitter stimuli occur together, to understand how adaptive taste responses emerge from hardwired taste circuits. Using functional imaging, anatomical tracing, and circuit mapping, I found that bitter signals suppress sweet signals in the NST via top-down modulation from the taste cortex and amygdala. While the bitter cortical field provides direct feedback onto the NST to amplify incoming bitter signals, it exerts negative feedback via the amygdala onto incoming sweet signals in the NST. By manipulating this feedback circuit, I showed that this top-down control is functionally required for bitter-evoked suppression of sweet taste. These results illustrate how the taste system uses dedicated feedback lines to finely regulate innate behavioral responses, and may have implications for the context-dependent modulation of hardwired circuits in general.
It's All About Motion: Encoding of speed in the human Middle Temporal cortex
The human middle temporal complex (hMT+) is of crucial biological relevance for the processing and detection of the direction and speed of motion in visual stimuli. In both humans and monkeys, it has been extensively investigated in terms of its retinotopic properties and selectivity for the direction of moving stimuli; however, only in recent years has there been increasing interest in how neurons in MT encode the speed of motion. In this talk, I will explore the proposed mechanism of speed encoding, asking whether hMT+ neuronal populations encode stimulus speed directly, or whether they separate motion into its spatial and temporal components. I will characterize how neuronal populations in hMT+ encode the speed of moving visual stimuli using electrocorticography (ECoG) and 7T fMRI. I will illustrate that the neuronal populations measured in hMT+ are not directly tuned to stimulus speed, but instead encode speed through separate and independent spatial and temporal frequency tuning. Finally, I will show that this mechanism also plays a role in multisensory responses to visual, tactile, and auditory motion stimuli in hMT+.
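To make the distinction concrete, here is a minimal sketch (with hypothetical parameters, not the study's fitted models) contrasting a separable spatial/temporal frequency tuning model with a directly speed-tuned one:

```python
import numpy as np

def separable_response(sf, tf, mu_sf, mu_tf, sigma=0.8):
    """Independent log-Gaussian tuning for spatial and temporal frequency;
    any speed preference is only implicit in the (mu_sf, mu_tf) peak."""
    g_sf = np.exp(-(np.log2(sf) - np.log2(mu_sf))**2 / (2 * sigma**2))
    g_tf = np.exp(-(np.log2(tf) - np.log2(mu_tf))**2 / (2 * sigma**2))
    return g_sf * g_tf

def speed_tuned_response(sf, tf, mu_speed, sigma=0.8):
    """Tuning for speed = tf / sf, so the preferred temporal frequency
    shifts with spatial frequency along a constant-speed ridge."""
    return np.exp(-(np.log2(tf / sf) - np.log2(mu_speed))**2 / (2 * sigma**2))

# On an sf x tf grid the separable model peaks at a single point, whereas
# the speed-tuned model peaks along the diagonal ridge tf = mu_speed * sf.
sf, tf = np.meshgrid(np.logspace(-1, 1, 50), np.logspace(-0.5, 1.5, 50))
sep = separable_response(sf, tf, mu_sf=1.0, mu_tf=4.0)
spd = speed_tuned_response(sf, tf, mu_speed=4.0)
```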
The evolution and development of visual complexity: insights from stomatopod visual anatomy, physiology, behavior, and molecules
Bioluminescence, which is rare on land, is extremely common in the deep sea, being found in 80% of the animals living between 200 and 1000 m. These animals rely on bioluminescence for communication, feeding, and/or defense, so the generation and detection of light is essential to their survival. Our present knowledge of this phenomenon has been limited by the difficulty of bringing live deep-sea animals to the surface and the lack of proper techniques for studying this complex system. However, new genomic techniques are now available, and a team with extensive experience in deep-sea biology, vision, and genomics has been assembled to lead this project. The project aims to address three questions: 1) What are the evolutionary patterns of different types of bioluminescence in deep-sea shrimp? 2) How are deep-sea organisms' eyes adapted to detect bioluminescence? 3) Can bioluminescent organs (called photophores) detect light in addition to emitting it? Findings from this study will provide valuable insight into a complex system vital to communication, defense, camouflage, and species recognition, will make substantial contributions to the fields of deep-sea and evolutionary biology, and will immediately improve our understanding of bioluminescence and light detection in the marine environment. In addition to scientific advancement, this project will reach students from kindergarten through college via the development and dissemination of educational tools, a series of molecular and organismal workshops, museum exhibits, public seminars, and biodiversity initiatives.
Brain and behavioural impacts of early life adversity
Abuse, neglect, and other forms of uncontrollable stress during childhood and early adolescence can lead to adverse outcomes later in life, most notably perturbations in the regulation of mood and emotional states, and specifically anxiety disorders and depression. However, stress experiences vary from one individual to the next, meaning that causal relationships and mechanistic accounts are often difficult to establish in humans. This interdisciplinary talk considers the value of research in experimental animals, where stressor experiences can be tightly controlled and detailed investigations of molecular, cellular, and circuit-level mechanisms can be carried out. The talk will focus on the widely used repeated maternal separation procedure, in which rat offspring are repeatedly separated from maternal care during early postnatal life. This early life stress has remarkably persistent effects on behaviour, and it is generally recognized that maternally deprived animals are susceptible to depressive-like phenotypes. The validity of this conclusion will be critically appraised using convergent insights from a recent longitudinal study in maternally separated rats involving translational brain imaging, transcriptomics, and behavioural assessment.
Emotions and Partner Phubbing: The Role of Understanding and Validation in Predicting Anger and Loneliness
Interactions between romantic partners may be disturbed by problematic mobile phone use, i.e., phubbing. Research shows that phubbing reduces the ability to be responsive, but emotional aspects of phubbing, such as experiences of anger and loneliness, have not been explored. Anger has been linked to partner blame in negative social interactions, whereas loneliness has been associated with low social acceptance. Moreover, two aspects of partner responsiveness, understanding and validation, refer to the ability to recognize a partner's perspective and to convey acceptance of their point of view, respectively. High understanding and validation by a partner have been found to protect against negative affect during social interactions. The impact of understanding and validation on emotions has not been investigated in the context of phubbing; we therefore posit the following exploratory hypotheses: (1) participants will report higher levels of anger and loneliness on days with phubbing by their partner, compared to days without; (2) understanding and validation will moderate the relationship between phubbing intensity and levels of anger and loneliness. We conducted a daily diary study over seven days. Based on a sample of 133 participants in intimate relationships and living with their partners, we analyzed the nested within- and between-person data using multilevel models. Participants reported higher levels of anger and loneliness on days they experienced phubbing. Both understanding and validation buffered the relationship between phubbing intensity and negative experiences, and the interaction effects indicate certain nuances between the two constructs. Our research provides unique insight into how specific mechanisms of couple interaction may explain experiences of anger and loneliness.
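For illustration, a daily diary moderation analysis of this kind can be sketched with a multilevel model in Python's statsmodels; the column names ('pid', 'phubbing', 'validation', 'anger') and simulated data below are placeholders, not the study's variables or results.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated long-format diary data: one row per participant-day.
rng = np.random.default_rng(0)
n_people, n_days = 133, 7
diary = pd.DataFrame({
    "pid": np.repeat(np.arange(n_people), n_days),
    "phubbing": rng.random(n_people * n_days),
    "validation": rng.random(n_people * n_days),
})
diary["anger"] = 0.5 * diary["phubbing"] + rng.normal(0, 1, len(diary))

# Days (level 1) nested within participants (level 2); the interaction
# term tests whether validation moderates the phubbing-anger link.
model = smf.mixedlm("anger ~ phubbing * validation",
                    data=diary, groups=diary["pid"])
print(model.fit().summary())
```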
Forensic use of face recognition systems for investigation
With the increasing development of automatic systems and artificial intelligence, face recognition is becoming increasingly important in forensic and civil contexts. However, face recognition has yet to be thoroughly empirically studied to provide an adequate scientific and legal framework for investigative and court purposes. This observation sets the foundation for the research. We focus on issues related to face images and the use of automatic systems. Our objective is to validate a likelihood ratio computation methodology for interpreting comparison scores from automatic face recognition systems (the score-based likelihood ratio, SLR). We collected three types of traces: portraits (ID), video surveillance footage recorded by an ATM camera, and footage from a wide-angle camera (CCTV). The performance of two automatic face recognition systems is compared: the commercial IDEMIA Morphoface (MFE) system and the open-source FaceNet algorithm.
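As a rough illustration of the score-based likelihood ratio approach (not the validated methodology itself), one can model the score distributions under the same-source and different-source hypotheses and take their ratio; the scores below are simulated placeholders.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Simulated comparison scores with known ground truth (placeholders).
rng = np.random.default_rng(0)
same = rng.normal(0.8, 0.10, 500)   # scores from same-source pairs
diff = rng.normal(0.4, 0.15, 500)   # scores from different-source pairs

f_same = gaussian_kde(same)         # density of scores under H_same
f_diff = gaussian_kde(diff)         # density of scores under H_diff

def slr(score):
    """SLR = p(score | same source) / p(score | different source)."""
    return f_same(score)[0] / f_diff(score)[0]

print(slr(0.7))  # > 1 supports the same-source hypothesis, < 1 the alternative
```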
Functional segregation of rostral and caudal hippocampus in associative memory
It has long been established that the hippocampus (HC) plays a crucial role in episodic memory. In contrast to earlier modular accounts, it is now generally assumed that the HC, being a complex structure, performs multiple interconnected functions whose hierarchical organization provides the basis for higher cognitive functions such as semantics-based encoding and retrieval. However, the 'where, when, and how' of distinct memory aspects within and outside the HC are still under debate. Here we used a visual associative memory task as a probe to test the hypothesis of differential involvement of the rostral and caudal portions of the human hippocampus in memory encoding, recognition, and associative recall. In epilepsy patients implanted with stereo-EEG, we show that at retrieval the rostral HC is selectively active for recognition memory, whereas the caudal HC is selectively active for associative memory. Low-frequency desynchronization and high-frequency synchronization characterize the temporal dynamics of encoding and retrieval. We therefore describe an anatomical segregation in the hippocampal contributions to associative and recognition memory.
Visualization and manipulation of our perception and imagery by BCI
We have been developing brain-computer interfaces (BCIs) for clinical applications using electrocorticography (ECoG) [1], which is recorded by electrodes implanted on the brain surface, and magnetoencephalography (MEG) [2], which records cortical activity non-invasively. The invasive ECoG-based BCI has been applied in severely paralyzed patients to restore communication and motor function. The non-invasive MEG-based BCI has been applied as a neurofeedback tool to modulate pathological neural activity in the treatment of neuropsychiatric disorders. Although these techniques were developed for clinical application, BCI is also an important tool for investigating neural function. For example, a motor BCI records neural activity in part of the motor cortex to generate movements of external devices. Although our motor system is a complex system comprising the motor cortex, basal ganglia, cerebellum, spinal cord, and muscles, a BCI allows us to simplify it into a system with exactly known inputs, outputs, and input-output relations, which we can then investigate by manipulating the parameters of the BCI system. Recently, we have been developing BCIs to visualize and manipulate perception and mental imagery. Although these BCIs were developed for clinical application, they will also be useful for understanding how our neural system generates perception and imagery. In this talk, I will introduce our study of phantom limb pain [3], which is controlled by MEG-BCI, and the development of a communication BCI using ECoG [4], which enables subjects to visualize the contents of their mental imagery. I will also discuss the extent to which we can control the cortical activities that represent our perception and mental imagery. These examples demonstrate that BCI is a promising tool for visualizing and manipulating perception and imagery, and for understanding our consciousness. References: 1. Yanagisawa, T., Hirata, M., Saitoh, Y., Kishima, H., Matsushita, K., Goto, T., Fukuma, R., Yokoi, H., Kamitani, Y., and Yoshimine, T. (2012). Electrocorticographic control of a prosthetic arm in paralyzed patients. Ann Neurol 71, 353-361. 2. Yanagisawa, T., Fukuma, R., Seymour, B., Hosomi, K., Kishima, H., Shimizu, T., Yokoi, H., Hirata, M., Yoshimine, T., Kamitani, Y., et al. (2016). Induced sensorimotor brain plasticity controls pain in phantom limb patients. Nature Communications 7, 13209. 3. Yanagisawa, T., Fukuma, R., Seymour, B., Tanaka, M., Hosomi, K., Yamashita, O., Kishima, H., Kamitani, Y., and Saitoh, Y. (2020). BCI training to move a virtual hand reduces phantom limb pain: A randomized crossover trial. Neurology 95, e417-e426. 4. Fukuma, R., Yanagisawa, T., Nishimoto, S., Sugano, H., Tamura, K., Yamamoto, S., Iimura, Y., Fujita, Y., Oshino, S., Tani, N., Koide-Majima, N., Kamitani, Y., and Kishima, H. (2022). Voluntary control of semantic neural representations by imagery with conflicting visual stimulation. arXiv:2112.01223.
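To illustrate the point that a BCI makes the input-output mapping fully known, here is a minimal sketch of a linear closed-loop decoder; the feature choice, decoder, and numbers are illustrative, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
n_channels = 16
W = rng.normal(size=(2, n_channels))      # known decoder: band power -> 2-D velocity

def decode_step(band_power):
    """Map one frame of per-channel high-gamma power to cursor velocity."""
    return W @ band_power

position = np.zeros(2)
for _ in range(100):                      # simulated closed-loop frames
    band_power = rng.random(n_channels)   # stand-in for one ECoG feature frame
    position += 0.01 * decode_step(band_power)

# Because W is known exactly, an experimenter can perturb it and observe
# how the (simplified) motor system adapts -- the point made in the talk.
```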
Untitled Seminar
The nature of the facial information that humans store in order to recognise large numbers of faces remains unclear despite decades of research. To complicate matters further, little is known about how representations may evolve as novel faces become familiar, and there are large individual differences in the ability to recognise faces. I will present a theory I am developing which assumes that facial representations are cost-efficient. In this framework, individual facial representations would incorporate different diagnostic features in different faces, regardless of familiarity, and would evolve depending on the relative stability of appearance over time. Further, coarse information would be prioritised over fine details in order to decrease storage demands. This would create low-cost facial representations that are refined over time if appearance changes. Individual differences could partly rest on the ability to refine representations when needed. I will present data collected in the general population and in participants with developmental prosopagnosia. In support of the proposed view, typical observers and those with developmental prosopagnosia seem to rely on coarse peripheral features when they have no reason to expect that someone's appearance will change in the future.
Biopsychosocial pathways in dementia inequalities
In the United States, racial/ethnic inequalities in Alzheimer's disease and related dementias persist even after controlling for socioeconomic factors and physical health. These persistent and unexplained disparities suggest: (1) there are unrecognized dementia risk factors that are socially patterned and/or (2) known dementia risk factors exhibit differential impact across social groups. Pursuing these research directions with data from multiple longitudinal studies of brain and cognitive aging has revealed several challenges to the study of late-life health inequalities, highlighted evidence for both risk and resilience within marginalized communities, and inspired new data collection efforts to advance the field.
Object recognition by touch and other senses
Cross-modality imaging of the neural systems that support executive functions
Executive functions refer to a collection of mental processes, such as attention, planning, and problem solving, supported by a distributed frontoparietal brain network. These functions are essential for everyday life, and in patients with brain tumours there is a particular need to preserve them. During surgery for the removal of a brain tumour, the aim is to remove as much of the tumour as possible while preventing damage to the surrounding areas, in order to preserve function and enable a good quality of life for patients. In many cases, functional mapping is conducted during awake surgery to identify areas critical for certain functions and avoid their surgical resection. While mapping is routinely done for functions such as movement and language, mapping executive functions is more challenging. Despite growing recognition in recent years of the importance of these functions for patient well-being, only a handful of studies have addressed their intraoperative mapping. In the talk, I will present our new approach for mapping executive function areas using electrocorticography during awake brain surgery. These results will be complemented by neuroimaging data from healthy volunteers, aimed at reliably localizing executive function regions in individuals using fMRI. I will also discuss more broadly the challenges of using neuroimaging for neurosurgical applications. We aim to advance cross-modality neuroimaging of cognitive function, which is pivotal to patient-tailored surgical interventions and will ultimately lead to improved clinical outcomes.
A biological model system for studying predictive processing
Despite the increasing recognition of predictive processing in circuit neuroscience, little is known about how it may be implemented in cortical circuits. We set out to develop and characterise a biological model system centred on layer 5 pyramidal cells. We aim to gain access to prediction and internal-model-generating processes by controlling, understanding, or monitoring everything else: the sensory environment, feed-forward and feed-back inputs, integrative properties, spiking activity, and output. I'll show recent work from the lab establishing such a model system, in terms of both biology and tool development.
Why is the suprachiasmatic nucleus such a brilliant circadian time-keeper?
Circadian clocks dominate our lives. By creating and distributing an internal representation of 24-hour solar time, they prepare us for, and thereby adapt us to, the daily and seasonal world. Jet-lag is an obvious indicator of what can go wrong when such adaptation is disrupted acutely. More seriously, the growing prevalence of rotational shift-work, which runs counter to our circadian life, is a significant chronic challenge to health, presenting as an increased incidence of systemic conditions such as metabolic and cardiovascular disease. Added to this, circadian and sleep disturbances are a recognised feature of various neurological and psychiatric conditions, and in some cases may contribute to disease progression. The “head ganglion” of the circadian system is the suprachiasmatic nucleus (SCN) of the hypothalamus. It synchronises the literally innumerable cellular clocks across the body, to each other and to solar time. Isolated in organotypic slice culture, it can maintain precise, high-amplitude circadian cycles of neural activity, effectively indefinitely, just as it does in vivo. How is this achieved? How does this clock in a dish work? This presentation will consider SCN time-keeping at the level of molecular feedback loops, neuropeptidergic networks, and neuron-astrocyte interactions.
Developing a test to assess the ability of Zurich’s police cadets to discriminate, learn and recognize voices
The goal of this pilot study is to develop a test that can identify people with extraordinary voice recognition and discrimination skills, for forensic purposes. Since interest in this field emerged, three studies have been published with the goal of finding people with potential super-recognition skills in voice processing. One of them used a discrimination test and two used recognition tests, but none combines the two test scenarios, and their test designs cannot be directly compared to a casework scenario in forensic phonetics. The pilot study at hand attempts to bridge this gap and analyses whether voice discrimination and recognition skills correlate. The study is guided by a practical, forensic application, which further complicates the process of creating a viable test. The participants for the pilot consist of different classes of police cadets, which means the test can be repeated and adjusted over time.
Multimodal framework and fusion of EEG, graph theory and sentiment analysis for the prediction and interpretation of consumer decision
The application of neuroimaging methods to marketing has recently gained considerable attention. In analyzing consumer behavior, the inclusion of neuroimaging tools and methods is improving our understanding of consumers' preferences. Human emotions play a significant role in decision making and critical thinking, and emotion classification using EEG data and machine learning techniques has been on the rise in recent years. We evaluate different feature extraction and feature selection techniques, and propose an optimal set of features and electrodes for emotion recognition. Affective neuroscience research can help detect emotions as a consumer responds to an advertisement; successful emotional elicitation is a verification of the advertisement's effectiveness. EEG provides a cost-effective way to measure advertisement effectiveness while eliminating several drawbacks of existing market research tools that depend on self-reporting. We used graph-theoretical principles to differentiate the brain connectivity graphs obtained when a consumer likes a logo from those obtained when a consumer dislikes one. Finally, the fusion of EEG and sentiment analysis has the potential to provide innovative tools for market research.
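A pipeline of the kind described, i.e. feature scaling, feature selection, and classification, might look as follows in scikit-learn; the features, labels, and parameter choices are placeholders, not the study's.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 64))   # placeholder: trials x (band power per electrode)
y = rng.integers(0, 2, 200)      # placeholder labels: like / dislike

pipe = Pipeline([
    ("scale", StandardScaler()),                 # normalise features
    ("select", SelectKBest(f_classif, k=20)),    # keep the 20 most informative
    ("clf", SVC(kernel="rbf")),                  # classify the emotional response
])
print(cross_val_score(pipe, X, y, cv=5).mean())  # chance-level on random data
```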
Hearing in an acoustically varied world
In order for animals to thrive in their complex environments, their sensory systems must form representations of objects that are invariant to changes in some dimensions of their physical cues. For example, we can recognize a friend's speech in a forest, a small office, and a cathedral, even though the sound reaching our ears is very different in these three environments. I will discuss our recent experiments on how neurons in auditory cortex can form stable representations of sounds in this acoustically varied world. We began by using a normative computational model of hearing to examine how the brain may recognize a sound source across rooms with different levels of reverberation. The model predicted that reverberation can be removed from the original sound by delaying the inhibitory component of spectrotemporal receptive fields in the presence of stronger reverberation. Our electrophysiological recordings then confirmed that neurons in ferret auditory cortex apply this algorithm to adapt to different room sizes. Our results demonstrate that this neural process is dynamic and adaptive. These studies provide new insights into how we can recognize auditory objects even in highly reverberant environments, and motivate further questions about how reverberation adaptation is implemented in the cortical circuit.
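The dereverberation principle can be illustrated numerically: for an exponentially decaying reverberant tail, an inhibitory component delayed by d samples and scaled to match the decay cancels the tail. The toy sketch below uses made-up parameters, not those fitted in the study.

```python
import numpy as np

rng = np.random.default_rng(3)
dry = (rng.random(2000) < 0.01).astype(float)   # sparse "dry" sound envelope
tau = 60.0
tail = np.exp(-np.arange(200) / tau)            # exponential room impulse response
wet = np.convolve(dry, tail)[:2000]             # reverberant envelope

d = 10                                          # inhibitory delay (samples)
kernel = np.zeros(d + 1)
kernel[0], kernel[d] = 1.0, -np.exp(-d / tau)   # excitation + delayed inhibition
cleaned = np.convolve(wet, kernel)[:2000]       # delayed inhibition cancels the tail

# The cleaned envelope correlates much better with the dry input:
print(np.corrcoef(dry, wet)[0, 1], np.corrcoef(dry, cleaned)[0, 1])
```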
What does the primary visual cortex tell us about object recognition?
Object recognition relies on the complex visual representations in cortical areas at the top of the ventral stream hierarchy. While these are thought to be derived from low-level stages of visual processing, this has not yet been shown directly. Here, I describe the results of two projects exploring the contributions of primary visual cortex (V1) processing to object recognition using artificial neural networks (ANNs). First, we developed hundreds of ANN-based V1 models and evaluated how well their single neurons approximate those in macaque V1. We found that, for some models, single neurons in intermediate layers are similar to their biological counterparts, and that the distributions of their response properties approximately match those in V1. Furthermore, we observed that models that better matched macaque V1 were also more aligned with human behavior, suggesting that object recognition builds on low-level visual processing. Motivated by these results, we then studied how an ANN's robustness to image perturbations relates to its ability to predict V1 responses. Despite their high performance on object recognition tasks, ANNs can be fooled by imperceptibly small, explicitly crafted perturbations. We observed that ANNs that better predicted V1 neuronal activity were also more robust to adversarial attacks. Inspired by this, we developed VOneNets, a new class of hybrid ANN vision models. Each VOneNet contains a fixed neural network front-end that simulates primate V1, followed by a neural network back-end adapted from current computer vision models. After training, VOneNets were substantially more robust, outperforming state-of-the-art methods on a set of perturbations. While current neural network architectures are arguably brain-inspired, these results demonstrate that more precisely mimicking just one stage of the primate visual system leads to new gains in computer vision applications and results in better models of the primate ventral stream and object recognition behavior.
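As a toy illustration of a fixed, V1-like front-end (not the published VOneNet code), one can build a small bank of oriented Gabor filters whose outputs would feed a trainable back-end:

```python
import numpy as np

def gabor(size=15, wavelength=6.0, theta=0.0, sigma=3.0):
    """One oriented Gabor filter (an idealised V1 simple-cell receptive field)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # coordinate along the grating
    return (np.exp(-(x**2 + y**2) / (2 * sigma**2))
            * np.cos(2 * np.pi * xr / wavelength))

# A fixed bank over 8 orientations; in a VOneNet-style model the filtered
# outputs (after a nonlinearity and stochastic layer) feed a trainable CNN.
bank = np.stack([gabor(theta=t) for t in np.linspace(0, np.pi, 8, endpoint=False)])
```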
Cortex-wide high density ECoG recordings from rat reveal diverse generators of sleep-spindles with characteristic anatomical topographies and non-stationary subcycle dynamics
Bernstein Conference 2024
Non-Human Recognition of Orthography: How is it implemented and how does it differ from human orthographic processing?
Bernstein Conference 2024
Recognizing relevant information in neural activity
Bernstein Conference 2024
Action recognition best explains neural activity in cuneate nucleus
COSYNE 2022
Do better object recognition models improve the generalization gap in neural predictivity?
COSYNE 2022
How many objects can be recognized under all possible views?
COSYNE 2022
Linking neural dynamics across macaque V4, IT, and PFC to trial-by-trial object recognition behavior
COSYNE 2022
Distinct roles of excitatory and inhibitory neurons in the macaque IT cortex in object recognition
COSYNE 2023
Leveraging computational and animal models of vision to probe atypical emotion recognition in autism
COSYNE 2023
On-line SEUDO for real-time cell recognition in Calcium Imaging
COSYNE 2023
Spatial-frequency channels for object recognition by neural networks are twice as wide as those of humans
COSYNE 2023
Temporal pattern recognition in retinal ganglion cells is mediated by dynamical inhibitory synapses
COSYNE 2023
Geometric Signatures of Speech Recognition: Insights from Deep Neural Networks to the Brain
COSYNE 2025
The analysis of the OXT-DA interaction causing social recognition deficit in Syntaxin1A KO
FENS Forum 2024
Behavioral impacts of simulated microgravity on male mice: Locomotion, social interactions and memory in a novel object recognition task
FENS Forum 2024
Cortex-wide high-density ECoG and translaminar local field potential recordings reveal rich broad-band spatio-temporal dynamics
FENS Forum 2024
The cortical amygdala mediates individual recognition in mice
FENS Forum 2024
A deep learning approach for the recognition of behaviors in the forced swim test
FENS Forum 2024
Direct electrical stimulation of the human amygdala enhances recognition memory for objects but not scenes
FENS Forum 2024
Two distinct ways to form long-term object recognition memory during sleep and wakefulness
FENS Forum 2024
Early disruption in social recognition and its impact on episodic memory in triple transgenic mice model of Alzheimer’s disease
FENS Forum 2024
ECoG-based functional mapping of the motor cortex in rhesus monkeys
FENS Forum 2024
Evaluation of novel object recognition test results of rats injected with intracerebroventricular streptozocin to develop Alzheimer's disease models
FENS Forum 2024
GPT-4 can recognize Theory of Mind in natural conversations: fMRI evidence
FENS Forum 2024
HBK-15 rescues recognition memory in MK-801- and stress-induced cognitive impairments in female mice
FENS Forum 2024
Homecage-based unsupervised novel object recognition in mice
FENS Forum 2024
Interaction of sex and sleep on performance at the novel object recognition task in mice
FENS Forum 2024
Investigating the recruitment of parvalbumin and somatostatin interneurons into engrams for associative recognition memory
FENS Forum 2024
Mice can recognize other individuals: Maternal exposure to dioxin does not affect identification but perturbs the ability to recognize other individuals
FENS Forum 2024
Myoelectric gesture recognition in patients with spinal cord injury using a medium-density EMG system
FENS Forum 2024
Noradrenergic modulation of recognition memory in male and female mice
FENS Forum 2024
The processing of spatial frequencies through time in visual word recognition
FENS Forum 2024
Resonant song recognition in crickets
FENS Forum 2024
Recognition of complex spatial environments showed dimorphic patterns of theta (4-8 Hz) activity
FENS Forum 2024
Robustness and evolvability in a model of a pattern recognition network
FENS Forum 2024
Scent of a memory: Dissecting the vomeronasal-hippocampal axis in social recognition
FENS Forum 2024
Sex-dependent effects of voluntary physical exercise on object recognition memory restoration after traumatic brain injury in middle-aged rats
FENS Forum 2024
Sleepless nights, vanishing faces: The effect of sleep deprivation on long-term social recognition memory in mice
FENS Forum 2024