
Recognition

Topic spotlight · World Wide

Discover seminars, jobs, and research tagged with recognition across World Wide.
98 curated items: 60 Seminars · 37 ePosters · 1 Position
Updated 2 days ago
Position

Kevin Bolding

Monell Chemical Senses Center
Philadelphia, PA
Dec 5, 2025

We are recruiting lab personnel. If systems neuroscience at the intersection of olfaction and memory excites you, now is an excellent time to get in touch. Our goal is to discover fundamental rules and mechanisms that govern information storage and retrieval in neural systems. Our primary focus will be establishing the changes in neural circuit and population dynamics that correspond to odor recognition memory. To bring our understanding of this process to a new level of rigor, we will apply quantitative statistical approaches to relate behavioral signatures of odor recognition to activity and plasticity in olfactory circuits. We will use in vivo electrophysiology and calcium imaging to capture the activity of large neural populations during olfactory experience, and we will apply cell-type-specific perturbations of activity and plasticity to tease apart how specific circuit connections contribute.

Seminar · Psychology

An Ecological and Objective Neural Marker of Implicit Unfamiliar Identity Recognition

Tram Nguyen
University of Malta
Jun 10, 2025

We developed a novel paradigm measuring implicit identity recognition using Fast Periodic Visual Stimulation (FPVS) with EEG among 16 students and 12 police officers with normal face processing abilities. Participants' neural responses to a 1-Hz tagged oddball identity embedded within a 6-Hz image stream revealed implicit recognition with high-quality mugshots but not CCTV-like images, suggesting optimal resolution requirements. Our findings extend previous research by demonstrating that even unfamiliar identities can elicit robust neural recognition signatures through brief, repeated passive exposure. This approach offers potential for objective validation of face processing abilities in forensic applications, including assessment of facial examiners, Super-Recognisers, and eyewitnesses, potentially overcoming limitations of traditional behavioral assessment methods.
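For readers unfamiliar with FPVS frequency tagging, the sketch below illustrates the standard analysis logic: the EEG spectrum is inspected at the oddball frequency (1 Hz here) and its harmonics, and the response is expressed as a signal-to-noise ratio relative to neighbouring frequency bins. This is a generic illustration on synthetic data, not the authors' pipeline; the function name and parameters are placeholders.

```python
# Generic FPVS oddball quantification on a single EEG channel (illustrative only).
import numpy as np

def fpvs_oddball_snr(eeg, fs, oddball_hz=1.0, base_hz=6.0, n_harmonics=4, nbhd=10):
    """SNR at the oddball frequency and harmonics: amplitude at the target bin
    divided by the mean amplitude of `nbhd` neighbouring bins on each side
    (immediate neighbours excluded), a common FPVS convention."""
    n = len(eeg)
    amp = np.abs(np.fft.rfft(eeg)) / n           # amplitude spectrum
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    snrs = {}
    for k in range(1, n_harmonics + 1):
        f = k * oddball_hz
        if np.isclose(f % base_hz, 0):           # skip harmonics shared with the base rate
            continue
        idx = np.argmin(np.abs(freqs - f))
        neighbours = np.r_[amp[idx - nbhd - 1:idx - 1], amp[idx + 2:idx + nbhd + 2]]
        snrs[f] = amp[idx] / neighbours.mean()
    return snrs

# Synthetic example: a weak 1-Hz "recognition" response buried in noise.
fs, dur = 250, 60
t = np.arange(0, dur, 1 / fs)
eeg = 0.5 * np.sin(2 * np.pi * 1.0 * t) + np.random.randn(len(t))
print(fpvs_oddball_snr(eeg, fs))
```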

Seminar · Neuroscience

Single-neuron correlates of perception and memory in the human medial temporal lobe

Prof. Dr. Dr. Florian Mormann
University of Bonn, Germany
May 13, 2025

The human medial temporal lobe contains neurons that respond selectively to the semantic contents of a presented stimulus. These "concept cells" may respond to very different pictures of a given person and even to their written or spoken name. Their response latency is far longer than necessary for object recognition, they follow subjective, conscious perception, and they are found in brain regions that are crucial for declarative memory formation. It has thus been hypothesized that they may represent the semantic "building blocks" of episodic memories. In this talk I will present data from single unit recordings in the hippocampus, entorhinal cortex, parahippocampal cortex, and amygdala during paradigms involving object recognition and conscious perception as well as encoding of episodic memories in order to characterize the role of concept cells in these cognitive functions.

Seminar · Neuroscience

Contentopic mapping and object dimensionality - a novel understanding on the organization of object knowledge

Jorge Almeida
University of Coimbra
Jan 27, 2025

Our ability to recognize an object amongst many others is one of the most important features of the human mind. However, object recognition requires tremendous computational effort, as we need to solve a complex and recursive environment with ease and proficiency. This challenging feat is dependent on the implementation of an effective organization of knowledge in the brain. Here I put forth a novel understanding of how object knowledge is organized in the brain, by proposing that the organization of object knowledge follows key object-related dimensions, analogously to how sensory information is organized in the brain. Moreover, I will also put forth that this knowledge is topographically laid out in the cortical surface according to these object-related dimensions that code for different types of representational content – I call this contentopic mapping. I will show a combination of fMRI and behavioral data to support these hypotheses and present a principled way to explore the multidimensionality of object processing.

Seminar · Psychology

Enabling witnesses to actively explore faces and reinstate study-test pose during a lineup increases discrimination accuracy

Heather Flowe
University of Birmingham
Apr 21, 2024

In 2014, the US National Research Council called for the development of new lineup technologies to increase eyewitness identification accuracy (National Research Council, 2014). In a police lineup, a suspect is presented alongside multiple individuals known to be innocent who resemble the suspect in physical appearance, known as fillers. A correct identification decision by an eyewitness can lead to a guilty suspect being convicted or an innocent suspect being exonerated from suspicion. An incorrect decision can result in the perpetrator remaining at large, or even a wrongful conviction of a mistakenly identified person. Incorrect decisions carry considerable human and financial costs, so it is essential to develop and enact lineup procedures that maximise discrimination accuracy, or the witness’ ability to distinguish guilty from innocent suspects. This talk focuses on new technology and innovation in the field of eyewitness identification. We will focus on the interactive lineup, a procedure that we developed based on research and theory from the basic science literature on face perception and recognition. The interactive lineup enables witnesses to actively explore and dynamically view the lineup members, and it has been shown to maximise discrimination accuracy. The talk will conclude by reflecting on emerging technological frontiers and research opportunities.

Seminar · Neuroscience

Decoding mental conflict between reward and curiosity in decision-making

Naoki Honda
Hiroshima University
Jul 9, 2023

Humans and animals are not always rational. They not only rationally exploit rewards but also explore their environment out of curiosity. However, the mechanism of such curiosity-driven irrational behavior is largely unknown. Here, we developed a decision-making model for a two-choice task based on the free energy principle, a theory that integrates recognition and action selection. The model describes irrational behaviors as a function of the curiosity level. We also proposed a machine learning method to decode the temporal dynamics of curiosity from behavioral data. By applying it to rat behavioral data, we found that the rat had negative curiosity, reflecting conservative selection that sticks to more certain options, and that the level of curiosity was upregulated by the expected future information obtained from an uncertain environment. Our decoding approach can be a fundamental tool for identifying the neural basis of reward–curiosity conflicts. Furthermore, it could be effective in diagnosing mental disorders.
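As a rough intuition for how a signed curiosity parameter can produce the conservative behavior described above, the toy sketch below scores each option by expected reward plus a curiosity-weighted uncertainty bonus and passes the result through a softmax. It is not the authors' free-energy model; all names and numbers are illustrative.

```python
# Toy two-choice rule: curiosity trades expected reward against information gain.
import numpy as np

def choice_probs(mean_reward, uncertainty, curiosity, beta=3.0):
    """mean_reward, uncertainty: length-2 arrays (per option); curiosity: scalar.
    Information gain of an option is proxied here by its current uncertainty."""
    value = np.asarray(mean_reward) + curiosity * np.asarray(uncertainty)
    v = beta * value
    v -= v.max()                       # numerical stability for the softmax
    p = np.exp(v)
    return p / p.sum()

# An agent with negative curiosity avoids the uncertain option even when its
# expected reward is slightly higher.
print(choice_probs(mean_reward=[0.55, 0.50], uncertainty=[0.8, 0.1], curiosity=-0.5))
```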

Seminar · Neuroscience · Recording

Vision Unveiled: Understanding Face Perception in Children Treated for Congenital Blindness

Sharon Gilad-Gutnick
MIT
Jun 19, 2023

Despite her still poor visual acuity and minimal visual experience, a 2-3 month old baby will reliably respond to facial expressions, smiling back at her caretaker or older sibling. But what if that same baby had been deprived of her early visual experience? Will she be able to appropriately respond to seemingly mundane interactions, such as a peer’s facial expression, if she begins seeing at the age of 10? My work is part of Project Prakash, a dual humanitarian/scientific mission to identify and treat curably blind children in India and then study how their brain learns to make sense of the visual world when their visual journey begins late in life. In my talk, I will give a brief overview of Project Prakash, and present findings from one of my primary lines of research: plasticity of face perception with late sight onset. Specifically, I will discuss a mixed methods effort to probe and explain the differential windows of plasticity that we find across different aspects of distributed face recognition, from distinguishing a face from a nonface early in the developmental trajectory, to recognizing facial expressions, identifying individuals, and even identifying one’s own caretaker. I will draw connections between our empirical findings and our recent theoretical work hypothesizing that children with late sight onset may suffer persistent face identification difficulties because of the unusual acuity progression they experience relative to typically developing infants. Finally, time permitting, I will point to potential implications of our findings in supporting newly-sighted children as they transition back into society and school, given that their needs and possibilities significantly change upon the introduction of vision into their lives.

Seminar · Cognition

Prosody in the voice, face, and hands changes which words you hear

Hans Rutger Bosker
Donders Institute of Radboud University
May 22, 2023

Speech may be characterized as conveying both segmental information (i.e., about vowels and consonants) as well as suprasegmental information - cued through pitch, intensity, and duration - also known as the prosody of speech. In this contribution, I will argue that prosody shapes low-level speech perception, changing which speech sounds we hear. Perhaps the most notable example of how prosody guides word recognition is the phenomenon of lexical stress, whereby suprasegmental F0, intensity, and duration cues can distinguish otherwise segmentally identical words, such as "PLAto" vs. "plaTEAU" in Dutch. Work from our group showcases the vast variability in how different talkers produce stressed vs. unstressed syllables, while also unveiling the remarkable flexibility with which listeners can learn to handle this between-talker variability. It also emphasizes that lexical stress is a multimodal linguistic phenomenon, with the voice, lips, and even hands conveying stress in concert. In turn, human listeners actively weigh these multisensory cues to stress depending on the listening conditions at hand. Finally, lexical stress is presented as having a robust and lasting impact on low-level speech perception, even down to changing vowel perception. Thus, prosody - in all its multisensory forms - is a potent factor in speech perception, determining what speech sounds we hear.

Seminar · Neuroscience

Microbial modulation of zebrafish behavior and brain development

Judith S. Eisen
University of Oregon
May 15, 2023

There is growing recognition that host-associated microbiotas modulate intrinsic neurodevelopmental programs including those underlying human social behavior. Despite this awareness, the fundamental processes are generally not understood. We discovered that the zebrafish microbiota is necessary for normal social behavior. By examining neuronal correlates of behavior, we found that the microbiota restrains neurite complexity and targeting of key forebrain neurons within the social behavior circuitry. The microbiota is also necessary for both localization and molecular functions of forebrain microglia, brain-resident phagocytes that remodel neuronal arbors. In particular, the microbiota promotes expression of complement signaling pathway components important for synapse remodeling. Our work provides evidence that the microbiota modulates zebrafish social behavior by stimulating microglial remodeling of forebrain circuits during early neurodevelopment and suggests molecular pathways for therapeutic interventions during atypical neurodevelopment.

Seminar · Psychology

Diagnosing dementia using Fastball neurocognitive assessment

George Stothart
University of Bath
Apr 18, 2023

Fastball is a novel, fast, passive biomarker of cognitive function that uses cheap, scalable electroencephalography (EEG) technology. It is sensitive to early dementia; it is independent of language, education, effort, and anxiety; and it can be used in any setting, including patients’ homes. It can capture a range of cognitive functions, including semantic memory, recognition memory, attention, and visual function. We have shown that Fastball is sensitive to cognitive dysfunction in Alzheimer’s disease and Mild Cognitive Impairment, with data collected in patients’ homes using low-cost portable EEG. We are now preparing for significant scale-up and the validation of Fastball in primary and secondary care.

Seminar · Psychology

Understanding and Mitigating Bias in Human & Machine Face Recognition

John Howard
Maryland Test Facility
Apr 11, 2023

With the increasing use of automated face recognition (AFR) technologies, it is important to consider whether these systems not only perform accurately, but also equitably, or without “bias”. Despite rising public, media, and scientific attention to this issue, the sources of bias in AFR are not fully understood. This talk will explore how human cognitive biases may impact our assessments of performance differentials in AFR systems and our subsequent use of those systems to make decisions. We’ll also show how, if we adjust our definition of what a “biased” AFR algorithm looks like, we may be able to create algorithms that optimize the performance of a human+algorithm team, not simply the algorithm itself.

Seminar · Neuroscience

Learning to see stuff

Roland W. Fleming
Giessen University
Mar 12, 2023

Humans are very good at visually recognizing materials and inferring their properties. Without touching surfaces, we can usually tell what they would feel like, and we enjoy vivid visual intuitions about how they typically behave. This is impressive because the retinal image that the visual system receives as input is the result of complex interactions between many physical processes. Somehow the brain has to disentangle these different factors. I will present some recent work in which we show that an unsupervised neural network trained on images of surfaces spontaneously learns to disentangle reflectance, lighting and shape. However, the disentanglement is not perfect, and we find that as a result the network not only predicts the broad successes of human gloss perception, but also the specific pattern of errors that humans exhibit on an image-by-image basis. I will argue this has important implications for thinking about appearance and vision more broadly.

Seminar · Neuroscience

Analyzing artificial neural networks to understand the brain

Grace Lindsay
NYU
Dec 15, 2022

In the first part of this talk I will present work showing that recurrent neural networks can replicate broad behavioral patterns associated with dynamic visual object recognition in humans. An analysis of these networks shows that different types of recurrence use different strategies to solve the object recognition problem. The similarities between artificial neural networks and the brain present another opportunity, beyond using them just as models of biological processing. In the second part of this talk, I will discuss—and solicit feedback on—a proposed research plan for testing a wide range of analysis tools frequently applied to neural data on artificial neural networks. I will present the motivation for this approach as well as the form the results could take and how this would benefit neuroscience.

Seminar · Neuroscience · Recording

Training Dynamic Spiking Neural Network via Forward Propagation Through Time

B. Yin
CWI
Nov 9, 2022

With recent advances in learning algorithms, recurrent networks of spiking neurons are achieving performance competitive with standard recurrent neural networks. Still, these learning algorithms are limited to small networks of simple spiking neurons and modest-length temporal sequences, as they impose high memory requirements, have difficulty training complex neuron models, and are incompatible with online learning. Taking inspiration from the concept of Liquid Time-Constants (LTCs), we introduce a novel class of spiking neurons, the Liquid Time-Constant Spiking Neuron (LTC-SN), resulting in functionality similar to the gating operation in LSTMs. We integrate these neurons into SNNs that are trained with FPTT and demonstrate that LTC-SNNs trained in this way outperform various SNNs trained with BPTT on long sequences while enabling online learning and drastically reducing memory complexity. We show this for several classical benchmarks whose sequence length can easily be varied, like the Add Task and the DVS-Gesture benchmark. We also show how FPTT-trained LTC-SNNs can be applied to large convolutional SNNs, where we demonstrate a new state of the art for online learning in SNNs on a number of standard benchmarks (S-MNIST, R-MNIST, DVS-GESTURE), and also show that large feedforward SNNs can be trained successfully in an online manner to near (Fashion-MNIST, DVS-CIFAR10) or exceeding (PS-MNIST, R-MNIST) state-of-the-art performance as obtained with offline BPTT. Finally, the training and memory efficiency of FPTT enables us to directly train SNNs in an end-to-end manner at network sizes and complexities that were previously infeasible: we demonstrate this by training, in an end-to-end fashion, the first deep and performant spiking neural network for object localization and recognition. Taken together, our contributions enable, for the first time, the training of large-scale, complex spiking neural network architectures online and on long temporal sequences.
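The sketch below is a schematic reading of the liquid-time-constant idea: a leaky integrate-and-fire unit whose membrane time constant is gated by its input, loosely analogous to LSTM gating. It is not the published LTC-SN model or its FPTT training; the dynamics, parameters, and function name are simplifications chosen for illustration.

```python
# Schematic "liquid time-constant" LIF neuron (illustrative simplification).
import numpy as np

def ltc_lif(inputs, dt=1.0, tau_min=2.0, tau_max=20.0, v_th=1.0):
    """Simulate one neuron; `inputs` is a 1-D array of input currents per step."""
    v, spikes, taus = 0.0, [], []
    for i in inputs:
        gate = 1.0 / (1.0 + np.exp(-i))             # input-dependent gate in (0, 1)
        tau = tau_min + (tau_max - tau_min) * gate  # "liquid" membrane time constant
        v += (dt / tau) * (-v + i)                  # leaky integration
        s = float(v >= v_th)
        v = v * (1.0 - s)                           # reset on spike
        spikes.append(s)
        taus.append(tau)
    return np.array(spikes), np.array(taus)

spikes, taus = ltc_lif(np.random.rand(200) * 2.0)
print(f"{int(spikes.sum())} spikes, tau range {taus.min():.1f}-{taus.max():.1f} ms")
```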

Seminar · Neuroscience · Recording

Behavioral Timescale Synaptic Plasticity (BTSP) for biologically plausible credit assignment across multiple layers via top-down gating of dendritic plasticity

A. Galloni
Rutgers
Nov 8, 2022

A central problem in biological learning is how information about the outcome of a decision or behavior can be used to reliably guide learning across distributed neural circuits while obeying biological constraints. This “credit assignment” problem is commonly solved in artificial neural networks through supervised gradient descent and the backpropagation algorithm. In contrast, biological learning is typically modelled using unsupervised Hebbian learning rules. While these rules only use local information to update synaptic weights, and are sometimes combined with weight constraints to reflect a diversity of excitatory (only positive weights) and inhibitory (only negative weights) cell types, they do not prescribe a clear mechanism for how to coordinate learning across multiple layers and propagate error information accurately across the network. In recent years, several groups have drawn inspiration from the known dendritic non-linearities of pyramidal neurons to propose new learning rules and network architectures that enable biologically plausible multi-layer learning by processing error information in segregated dendrites. Meanwhile, recent experimental results from the hippocampus have revealed a new form of plasticity—Behavioral Timescale Synaptic Plasticity (BTSP)—in which large dendritic depolarizations rapidly reshape synaptic weights and stimulus selectivity with as little as a single stimulus presentation (“one-shot learning”). Here we explore the implications of this new learning rule through a biologically plausible implementation in a rate neuron network. We demonstrate that regulation of dendritic spiking and BTSP by top-down feedback signals can effectively coordinate plasticity across multiple network layers in a simple pattern recognition task. By analyzing hidden feature representations and weight trajectories during learning, we show the differences between networks trained with standard backpropagation, Hebbian learning rules, and BTSP.
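To make the gating idea concrete, here is a minimal sketch of a BTSP-like rule in a rate network: the weights of an output unit move toward the current input pattern in a single event, but only when a top-down signal triggers a dendritic "plateau" in that unit. This is an illustration of the concept under our own simplifying assumptions, not the authors' implementation.

```python
# One-shot, top-down-gated weight update in a rate network (conceptual sketch).
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 50, 5
W = rng.normal(0, 0.1, size=(n_out, n_in))

def btsp_update(W, x, plateau, eta=1.0, w_max=1.0):
    """x: presynaptic rates (n_in,); plateau: 0/1 top-down gate per output unit.
    Gated units jump toward the current input pattern ("one-shot"); others are untouched."""
    target = w_max * x / (np.linalg.norm(x) + 1e-9)
    return W + eta * plateau[:, None] * (target[None, :] - W)

x = rng.random(n_in)
plateau = np.array([1, 0, 0, 0, 0])          # top-down feedback selects unit 0
W_new = btsp_update(W, x, plateau)
print("unit 0 tuning to x before/after:",
      float(W[0] @ x), float(W_new[0] @ x))  # the gated unit becomes selective for x
```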

Seminar · Neuroscience · Recording

Shallow networks run deep: How peripheral preprocessing facilitates odor classification

Yonatan Aljadeff
University of California, San Diego (UCSD)
Nov 8, 2022

Drosophila olfactory sensory hairs ("sensilla") typically house two olfactory receptor neurons (ORNs) which can laterally inhibit each other via electrical ("ephaptic") coupling. ORN pairing is highly stereotyped and genetically determined. Thus, olfactory signals arriving in the Antennal Lobe (AL) have been pre-processed by a fixed and shallow network at the periphery. To uncover the functional significance of this organization, we developed a nonlinear phenomenological model of asymmetrically coupled ORNs responding to odor mixture stimuli. We derived an analytical solution to the ORNs’ dynamics, which shows that the peripheral network can extract the valence of specific odor mixtures via transient amplification. Our model predicts that for efficient read-out of the amplified valence signal there must exist specific patterns of downstream connectivity that reflect the organization at the periphery. Analysis of AL→Lateral Horn (LH) fly connectomic data reveals evidence directly supporting this prediction. We further studied the effect of ephaptic coupling on olfactory processing in the AL→Mushroom Body (MB) pathway. We show that stereotyped ephaptic interactions between ORNs lead to a clustered odor representation of glomerular responses. Such clustering in the AL is an essential assumption of theoretical studies on odor recognition in the MB. Together our work shows that preprocessing of olfactory stimuli by a fixed and shallow network increases sensitivity to specific odor mixtures, and aids in the learning of novel olfactory stimuli. Work led by Palka Puri, in collaboration with Chih-Ying Su and Shiuan-Tze Wu.

Seminar · Neuroscience

New Insights into the Neural Machinery of Face Recognition

Winrich Freiwald
Rockefeller
Jul 11, 2022

Seminar · Neuroscience

Feedforward and feedback processes in visual recognition

Thomas Serre
Brown University
Jun 21, 2022

Progress in deep learning has spawned great successes in many engineering applications. As a prime example, convolutional neural networks, a type of feedforward neural networks, are now approaching – and sometimes even surpassing – human accuracy on a variety of visual recognition tasks. In this talk, however, I will show that these neural networks and their recent extensions exhibit a limited ability to solve seemingly simple visual reasoning problems involving incremental grouping, similarity, and spatial relation judgments. Our group has developed a recurrent network model of classical and extra-classical receptive field circuits that is constrained by the anatomy and physiology of the visual cortex. The model was shown to account for diverse visual illusions providing computational evidence for a novel canonical circuit that is shared across visual modalities. I will show that this computational neuroscience model can be turned into a modern end-to-end trainable deep recurrent network architecture that addresses some of the shortcomings exhibited by state-of-the-art feedforward networks for solving complex visual reasoning tasks. This suggests that neuroscience may contribute powerful new ideas and approaches to computer science and artificial intelligence.

Seminar · Neuroscience · Recording

The evolution and development of visual complexity: insights from stomatopod visual anatomy, physiology, behavior, and molecules

Megan Porter
University of Hawaii
May 1, 2022

Bioluminescence, which is rare on land, is extremely common in the deep sea, being found in 80% of the animals living between 200 and 1000 m. These animals rely on bioluminescence for communication, feeding, and/or defense, so the generation and detection of light are essential to their survival. Our present knowledge of this phenomenon has been limited by the difficulty of bringing live deep-sea animals to the surface and the lack of proper techniques for studying this complex system. However, new genomic techniques are now available, and a team with extensive experience in deep-sea biology, vision, and genomics has been assembled to lead this project. The project aims to address three questions: 1) What are the evolutionary patterns of different types of bioluminescence in deep-sea shrimp? 2) How are deep-sea organisms’ eyes adapted to detect bioluminescence? 3) Can bioluminescent organs (called photophores) detect light in addition to emitting light? Findings from this study will provide valuable insight into a complex system vital to communication, defense, camouflage, and species recognition. This study will make monumental contributions to the fields of deep-sea and evolutionary biology and immediately improve our understanding of bioluminescence and light detection in the marine environment. In addition to scientific advancement, this project will reach students from kindergarten through college via the development and dissemination of educational tools, a series of molecular and organismal workshops, museum exhibits, public seminars, and biodiversity initiatives.

Seminar · Psychology

Forensic use of face recognition systems for investigation

Maëlig Jacquet
University of Lausanne
Apr 10, 2022

With the rapid development of automatic systems and artificial intelligence, face recognition is becoming increasingly important in forensic and civil contexts. However, face recognition has yet to be thoroughly and empirically studied in order to provide an adequate scientific and legal framework for investigative and court purposes. This observation sets the foundation for this research. We focus on issues related to face images and the use of automatic systems. Our objective is to validate a likelihood ratio computation methodology for interpreting comparison scores from automatic face recognition systems (score-based likelihood ratio, SLR). We collected three types of traces: portraits (ID), video surveillance footage recorded by an ATM camera, and footage recorded by a wide-angle camera (CCTV). The performance of two automatic face recognition systems is compared: the commercial IDEMIA Morphoface (MFE) system and the open-source FaceNet algorithm.
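A score-based likelihood ratio of the kind described above can be illustrated in a few lines: estimate the distributions of comparison scores for same-source (mated) and different-source (non-mated) pairs, then evaluate their ratio at the score of a new comparison. The sketch below uses synthetic scores and a simple kernel density estimate; it omits calibration and the validation work the talk focuses on.

```python
# Score-based likelihood ratio (SLR) from synthetic comparison scores (illustrative).
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
# Hypothetical calibration scores from a face recognition system.
same_source_scores = rng.normal(0.75, 0.08, 500)   # mated comparisons
diff_source_scores = rng.normal(0.40, 0.10, 5000)  # non-mated comparisons

kde_same = gaussian_kde(same_source_scores)
kde_diff = gaussian_kde(diff_source_scores)

def slr(score):
    """Likelihood ratio of observing `score` under same- vs different-source."""
    return float(kde_same(score) / kde_diff(score))

print(f"SLR for a comparison score of 0.70: {slr(0.70):.1f}")
```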

Seminar · Neuroscience · Recording

Object recognition by touch and other senses

Roberta Klatzky
Carnegie Mellon University
Mar 2, 2022

Seminar · Neuroscience · Recording

Cross-modality imaging of the neural systems that support executive functions

Yaara Erez
Affiliate MRC Cognition and Brain Sciences Unit, University of Cambridge
Feb 28, 2022

Executive functions refer to a collection of mental processes, such as attention, planning, and problem solving, supported by a distributed frontoparietal brain network. These functions are essential for everyday life, and in patients with brain tumours there is a particular need to preserve them in order to maintain good quality of life. During surgery for the removal of a brain tumour, the aim is to remove as much of the tumour as possible while preventing damage to the surrounding areas, thereby preserving function. In many cases, functional mapping is conducted during an awake surgery in order to identify areas critical for certain functions and avoid their surgical resection. While mapping is routinely done for functions such as movement and language, mapping executive functions is more challenging. Despite growing recognition of the importance of these functions for patient well-being in recent years, only a handful of studies have addressed their intraoperative mapping. In the talk, I will present our new approach for mapping executive function areas using electrocorticography during awake brain surgery. These results will be complemented by neuroimaging data from healthy volunteers, directed at reliably localizing executive function regions in individuals using fMRI. I will also discuss more broadly the challenges of using neuroimaging for neurosurgical applications. We aim to advance cross-modality neuroimaging of cognitive function, which is pivotal to patient-tailored surgical interventions and will ultimately lead to improved clinical outcomes.

Seminar · Neuroscience

A biological model system for studying predictive processing

Ede Rancz
University of Oxford
Feb 23, 2022

Despite the increasing recognition of predictive processing in circuit neuroscience, little is known about how it may be implemented in cortical circuits. We set out to develop and characterise a biological model system with layer 5 pyramidal cells at its centre. We aim to gain access to prediction and internal-model-generating processes by controlling, understanding, or monitoring everything else: the sensory environment, feed-forward and feed-back inputs, integrative properties, their spiking activity, and output. I’ll show recent work from the lab establishing such a model system, in terms of both the biology and the tool development.

Seminar · Psychology

Developing a test to assess the ability of Zurich’s police cadets to discriminate, learn and recognize voices

Andrea Fröhlich
Zurich Forensic Science Institute
Feb 2, 2022

The goal of this pilot study is to develop a test through which people with extraordinary voice recognition and discrimination skills can be identified (for forensic purposes). Since interest in this field emerged, three studies have been published with the goal of finding people with potential super-recognition skills in voice processing. One of them used a discrimination test and two used recognition tests, but none combines the two test scenarios, and their test designs cannot be directly compared to a casework scenario in forensic phonetics. The pilot study at hand attempts to bridge this gap and analyses whether the skills of voice discrimination and recognition correlate. The study is guided by a practical, forensic application, which further complicates the process of creating a viable test. The participants for the pilot consist of different classes of police cadets, which means the test can be repeated and adjusted over time.

Seminar · Neuroscience

Multimodal framework and fusion of EEG, graph theory and sentiment analysis for the prediction and interpretation of consumer decision

Veeky Baths
Cognitive Neuroscience Lab (Bits Pilani Goa Campus)
Feb 2, 2022

The application of neuroimaging methods to marketing has recently gained a great deal of attention. In analyzing consumer behavior, the inclusion of neuroimaging tools and methods is improving our understanding of consumers’ preferences. Human emotions play a significant role in decision making and critical thinking, and emotion classification using EEG data and machine learning techniques has been on the rise in the recent past. We evaluate different feature extraction and feature selection techniques and propose an optimal set of features and electrodes for emotion recognition. Affective neuroscience research can help in detecting emotions when a consumer responds to an advertisement, and successful emotional elicitation is a verification of the effectiveness of an advertisement. EEG provides a cost-effective alternative for measuring advertisement effectiveness while eliminating several drawbacks of existing market research tools that depend on self-reporting. We used graph-theoretical principles to differentiate brain connectivity graphs when a consumer likes a logo versus dislikes it. The fusion of EEG and sentiment analysis can be a real game changer, and this combination has the power and potential to provide innovative tools for market research.
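As a generic illustration of EEG-based emotion or preference classification (not the speakers' exact feature set or fusion pipeline), the sketch below extracts band-power features per electrode and feeds them to a linear classifier; the bands, dimensions, and synthetic data are placeholders.

```python
# Band-power features per electrode feeding a linear classifier (illustrative).
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(trial, fs=256):
    """trial: (n_channels, n_samples) EEG -> flat vector of band powers."""
    freqs, psd = welch(trial, fs=fs, nperseg=fs)
    feats = [psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=1)
             for lo, hi in BANDS.values()]
    return np.concatenate(feats)

# Synthetic stand-in data: 80 trials, 32 channels, 2 s at 256 Hz, binary labels.
rng = np.random.default_rng(2)
X = np.stack([band_powers(rng.standard_normal((32, 512))) for _ in range(80)])
y = rng.integers(0, 2, 80)
clf = LogisticRegression(max_iter=1000)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())  # ~chance on pure noise
```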

Seminar · Neuroscience

Hearing in an acoustically varied world

Kerry Walker
University of Oxford
Jan 24, 2022

In order for animals to thrive in their complex environments, their sensory systems must form representations of objects that are invariant to changes in some dimensions of their physical cues. For example, we can recognize a friend’s speech in a forest, a small office, and a cathedral, even though the sound reaching our ears will be very different in these three environments. I will discuss our recent experiments into how neurons in auditory cortex can form stable representations of sounds in this acoustically varied world. We began by using a normative computational model of hearing to examine how the brain may recognize a sound source across rooms with different levels of reverberation. The model predicted that reverberations can be removed from the original sound by delaying the inhibitory component of spectrotemporal receptive fields in the presence of stronger reverberation. Our electrophysiological recordings then confirmed that neurons in ferret auditory cortex apply this algorithm to adapt to different room sizes. Our results demonstrate that this neural process is dynamic and adaptive. These studies provide new insights into how we can recognize auditory objects even in highly reverberant environments, and direct further research questions about how reverb adaptation is implemented in the cortical circuit.

Seminar · Neuroscience

What does the primary visual cortex tell us about object recognition?

Tiago Marques
MIT
Jan 23, 2022

Object recognition relies on the complex visual representations in cortical areas at the top of the ventral stream hierarchy. While these representations are thought to be derived from low-level stages of visual processing, this has not yet been shown directly. Here, I describe the results of two projects exploring the contributions of primary visual cortex (V1) processing to object recognition using artificial neural networks (ANNs). First, we developed hundreds of ANN-based V1 models and evaluated how well their single neurons approximate those in macaque V1. We found that, for some models, single neurons in intermediate layers are similar to their biological counterparts, and that the distributions of their response properties approximately match those in V1. Furthermore, we observed that models that better matched macaque V1 were also more aligned with human behavior, suggesting that object recognition builds on these low-level representations. Motivated by these results, we then studied how an ANN’s robustness to image perturbations relates to its ability to predict V1 responses. Despite their high performance in object recognition tasks, ANNs can be fooled by imperceptibly small, explicitly crafted perturbations. We observed that ANNs that better predicted V1 neuronal activity were also more robust to adversarial attacks. Inspired by this, we developed VOneNets, a new class of hybrid ANN vision models. Each VOneNet contains a fixed neural network front-end that simulates primate V1, followed by a neural network back-end adapted from current computer vision models. After training, VOneNets were substantially more robust, outperforming state-of-the-art methods on a set of perturbations. While current neural network architectures are arguably brain-inspired, these results demonstrate that more precisely mimicking just one stage of the primate visual system leads to new gains in computer vision applications and results in better models of the primate ventral stream and object recognition behavior.

Seminar · Neuroscience · Recording

Molecular recognition and the assembly of feature-selective retinal circuits

Arjun Krishnaswamy
Department of Physiology, McGill University
Dec 13, 2021

Seminar · Neuroscience · Recording

NMC4 Short Talk: Novel population of synchronously active pyramidal cells in hippocampal area CA1

Dori Grijseels (they/them)
University of Sussex
Dec 1, 2021

Hippocampal pyramidal cells have been widely studied during locomotion, when theta oscillations are present, and during sharp wave ripples at rest, when replay takes place. However, we find a subset of pyramidal cells that are preferentially active during rest, in the absence of theta oscillations and sharp wave ripples. We recorded these cells using two-photon imaging in dorsal CA1 of the hippocampus of mice during a virtual reality object location recognition task. During locomotion, the cells show a similar level of activity to control cells, but their activity increases during rest, when this population of cells shows highly synchronous, oscillatory activity at a low frequency (0.1-0.4 Hz). In addition, during both locomotion and rest these cells show place coding, suggesting they may play a role in maintaining a representation of the current location even when the animal is not moving. We performed simultaneous electrophysiological and calcium recordings, which showed a higher correlation between the low-frequency oscillation (LFO) and the activity of the hippocampal cells in the 0.1-0.4 Hz band during rest than during locomotion. However, the relationship between the LFO and calcium signals varied between electrodes, suggesting a localised effect. We used the Allen Brain Observatory Neuropixels Visual Coding dataset to explore this further. These data revealed localised low-frequency oscillations in CA1 and DG during rest. Overall, we show a novel population of hippocampal cells and a novel oscillatory band of activity in the hippocampus during rest.

Seminar · Neuroscience · Recording

NMC4 Short Talk: Directly interfacing brain and deep networks exposes non-hierarchical visual processing

Nick Sexton (he/him)
University College London
Nov 30, 2021

A recent approach to understanding the mammalian visual system is to show correspondence between the sequential stages of processing in the ventral stream and the layers of a deep convolutional neural network (DCNN), providing evidence that visual information is processed hierarchically, with successive stages containing ever higher-level information. However, correspondence is usually defined as shared variance between brain region and model layer. We propose that task-relevant variance is a stricter test: if a DCNN layer corresponds to a brain region, then substituting the model’s activity with brain activity should successfully drive the model’s object recognition decision. Using this approach on three datasets (human fMRI and macaque neuron firing rates), we found that, in contrast to the hierarchical view, all ventral stream regions corresponded best to later model layers. That is, all regions contain high-level information about object category. We hypothesised that this is due to recurrent connections propagating high-level visual information from later regions back to early regions, in contrast to the exclusively feed-forward connectivity of DCNNs. Using task-relevant correspondence with a late DCNN layer akin to a tracer, we used Granger causal modelling to show that late-DCNN correspondence in IT drives correspondence in V4. Our analysis suggests, effectively, that no ventral stream region can be appropriately characterised as ‘early’ beyond 70 ms after stimulus presentation, challenging hierarchical models. More broadly, we ask what it means for a model component and brain region to correspond: beyond quantifying shared variance, we must consider the functional role in the computation. We also demonstrate that using a DCNN to decode high-level conceptual information from the ventral stream produces a general mapping from brain to model activation space, which generalises to novel classes held out from the training data. This suggests future possibilities for brain-machine interfaces with high-level conceptual information, beyond current designs that interface with the sensorimotor periphery.
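The substitution logic can be illustrated with a toy stand-in for the real analysis: learn a linear map from recorded responses to a model layer's activations, inject the brain-derived activations into the remainder of the network, and check whether they drive the same decisions. Everything below (the two-stage toy network, the simulated "brain" data) is a placeholder for the fMRI/electrophysiology datasets and trained DCNN used in the work.

```python
# Toy version of driving a model's decision from brain-derived layer activations.
import numpy as np

rng = np.random.default_rng(3)
n_img, n_vox, n_feat, n_class = 200, 120, 64, 5

# Toy "DCNN": early stage -> layer L activations -> linear readout (later stage).
W_early = rng.normal(size=(n_feat, 32))
W_late = rng.normal(size=(n_class, n_feat))
pixels = rng.normal(size=(n_img, 32))
layer_L = np.tanh(pixels @ W_early.T)                   # model layer activations
labels = (layer_L @ W_late.T).argmax(1)                 # intact-model decisions

# Toy "brain" responses that noisily encode layer L.
brain = layer_L @ rng.normal(size=(n_feat, n_vox)) + 0.5 * rng.normal(size=(n_img, n_vox))

# Fit a linear map brain -> layer L on half the images.
train, test = slice(0, 100), slice(100, None)
M, *_ = np.linalg.lstsq(brain[train], layer_L[train], rcond=None)

# Substitute brain-derived activations into the model and read out its decision.
layer_from_brain = brain[test] @ M
decisions = (layer_from_brain @ W_late.T).argmax(1)
print("decision agreement with the intact model:", (decisions == labels[test]).mean())
```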

Seminar · Psychology

Consistency of Face Identity Processing: Basic & Translational Research

Jeffrey Nador
University of Fribourg
Nov 17, 2021

Previous work looking at individual differences in face identity processing (FIP) has found that most commonly used lab-based performance assessments are unfortunately not sufficiently sensitive on their own for measuring performance in both the upper and lower tails of the general population simultaneously. So more recently, researchers have begun incorporating multiple testing procedures into their assessments. Still, though, the growing consensus seems to be that at the individual level, there is quite a bit of variability between test scores. The overall consequence of this is that extreme scores will still occur simply by chance in large enough samples. To mitigate this issue, our recent work has developed measures of intra-individual FIP consistency to refine selection of those with superior abilities (i.e. from the upper tail). For starters, we assessed consistency of face matching and recognition in neurotypical controls, and compared them to a sample of super-recognizers (SRs). In terms of face matching, we demonstrated psychophysically that SRs show significantly greater consistency than controls in exploiting spatial frequency information. Meanwhile, we showed that SRs’ recognition of faces is highly related to memorability for identities, yet effectively unrelated among controls. So overall, at the high end of the FIP spectrum, consistency can be a useful tool for revealing both qualitative and quantitative individual differences. Finally, in conjunction with collaborators from the Rheinland-Pfalz Police, we developed a pair of bespoke work samples to get bias-free measures of intra-individual consistency in current law enforcement personnel. Officers with higher composite scores on a set of 3 challenging FIP tests tended to show higher consistency, and vice versa. Overall, this suggests that consistency is not only a reasonably good marker of superior FIP abilities, but could also present important practical benefits for personnel selection in many other domains of expertise.

Seminar · Neuroscience · Recording

How do we find what we are looking for? The Guided Search 6.0 model

Jeremy Wolfe
Harvard
Oct 25, 2021

The talk will give a tour of Guided Search 6.0 (GS6), the latest evolution of the Guided Search model of visual search. Part 1 describes The Mechanics of Search. Because we cannot recognize more than a few items at a time, selective attention is used to prioritize items for processing. Selective attention to an item allows its features to be bound together into a representation that can be matched to a target template in memory or rejected as a distractor. The binding and recognition of an attended object is modeled as a diffusion process taking > 150 msec/item. Since selection occurs more frequently than that, it follows that multiple items are undergoing recognition at the same time, though asynchronously, making GS6 a hybrid serial and parallel model. If a target is not found, search terminates when an accumulating quitting signal reaches a threshold. Part 2 elaborates on the five sources of Guidance that are combined into a spatial “priority map” to guide the deployment of attention (hence “guided search”). These are (1) top-down and (2) bottom-up feature guidance, (3) prior history (e.g. priming), (4) reward, and (5) scene syntax and semantics. Finally, in Part 3, we will consider the internal representation of what we are searching for; what is often called “the search template”. That search template is really two templates: a guiding template (probably in working memory) and a target template (in long term memory). Put these pieces together and you have GS6.
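A toy sketch of the priority-map idea follows: five guidance maps are combined with weights into a single priority map, and attention is deployed to its peak. The maps and weights are invented for illustration and are not GS6's published parameterization.

```python
# Combining guidance sources into a priority map (illustrative weights and maps).
import numpy as np

rng = np.random.default_rng(4)
shape = (20, 20)                                   # a coarse map of the display
maps = {
    "top_down": rng.random(shape),                 # match to the guiding template
    "bottom_up": rng.random(shape),                # local feature contrast
    "history": rng.random(shape),                  # priming from previous trials
    "reward": rng.random(shape),                   # value associated with locations
    "scene": rng.random(shape),                    # scene syntax/semantics prior
}
weights = {"top_down": 2.0, "bottom_up": 1.0, "history": 0.5,
           "reward": 0.5, "scene": 1.5}

priority = sum(weights[k] * maps[k] for k in maps)
next_fixation = np.unravel_index(priority.argmax(), shape)
print("attention is deployed to location:", next_fixation)
```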

Seminar · Neuroscience · Recording

Towards a Theory of Human Visual Reasoning

Ekaterina Shurkova
University of Edinburgh
Oct 13, 2021

Many tasks that are easy for humans are difficult for machines. In particular, while humans excel at tasks that require generalising across problems, machine systems notably struggle. One such task that has received a good amount of attention is the Synthetic Visual Reasoning Test (SVRT). The SVRT consists of a range of problems where simple visual stimuli must be categorised into one of two categories based on an unknown rule that must be induced. Conventional machine learning approaches perform well only when trained to categorise based on a single rule and are unable to generalise without extensive additional training to tasks with any additional rules. Multiple theories of higher-level cognition posit that humans solve such tasks using structured relational representations. Specifically, people learn rules based on structured representations that generalise to novel instances quickly and easily. We believe it is possible to model this approach in a single system which learns all the required relational representations from scratch and performs tasks such as the SVRT in a single run. Here, we present a system which expands the DORA/LISA architecture and augments the existing model with principally novel components, namely a) visual reasoning based on the established theories of recognition by components; b) the process of learning complex relational representations by synthesis (in addition to learning by analysis). The proposed augmented model matches human behaviour on SVRT problems. Moreover, the proposed system stands as perhaps a more realistic account of human cognition, wherein rather than using tools that have been shown to be successful in the machine learning field to inform psychological theorising, we use established psychological theories to inform the development of a machine system.

Seminar · Neuroscience · Recording

Encoding and perceiving the texture of sounds: auditory midbrain codes for recognizing and categorizing auditory texture and for listening in noise

Monty Escabi
University of Connecticut
Sep 30, 2021

Natural soundscapes such as from a forest, a busy restaurant, or a busy intersection are generally composed of a cacophony of sounds that the brain needs to interpret either independently or collectively. In certain instances sounds - such as from moving cars, sirens, and people talking - are perceived in unison and are recognized collectively as a single sound (e.g., city noise). In other instances, such as for the cocktail party problem, multiple sounds compete for attention so that the surrounding background noise (e.g., speech babble) interferes with the perception of a single sound source (e.g., a single talker). I will describe results from my lab on the perception and neural representation of auditory textures. Textures, such as from a babbling brook, restaurant noise, or speech babble, are stationary sounds consisting of multiple independent sound sources that can be quantitatively defined by summary statistics of an auditory model (McDermott & Simoncelli 2011). How and where summary statistics are represented in the auditory system, and which neural codes potentially contribute to their perception, however, remain largely unknown. Using high-density multi-channel recordings from the auditory midbrain of unanesthetized rabbits and complementary perceptual studies on human listeners, I will first describe neural and perceptual strategies for encoding and perceiving auditory textures. I will demonstrate how distinct statistics of sounds, including the sound spectrum and high-order statistics related to the temporal and spectral correlation structure of sounds, contribute to texture perception and are reflected in neural activity. Using decoding methods I will then demonstrate how various low- and high-order neural response statistics can differentially contribute towards a variety of auditory tasks including texture recognition, discrimination, and categorization. Finally, I will show examples from our recent studies on how high-order sound statistics and accompanying neural activity underlie difficulties in recognizing speech in background noise.
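In the spirit of the summary-statistic account (McDermott & Simoncelli, 2011), the sketch below computes a reduced set of texture statistics: subband envelope moments and cross-band envelope correlations from a simple band-pass filterbank. The real model uses a cochlear filterbank and additional modulation statistics; this is only a stand-in.

```python
# Reduced auditory-texture summary statistics (illustrative, simplified filterbank).
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert
from scipy.stats import skew

def texture_stats(sound, fs, edges=(100, 300, 900, 2700, 6000)):
    envs = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, sound)
        envs.append(np.abs(hilbert(band)))         # subband envelope
    envs = np.array(envs)
    return {
        "env_mean": envs.mean(axis=1),
        "env_var": envs.var(axis=1),
        "env_skew": skew(envs, axis=1),
        "env_corr": np.corrcoef(envs),             # cross-band envelope correlations
    }

fs = 16000
noise = np.random.randn(fs * 2)                    # 2 s of white noise as a "texture"
print({k: np.round(v, 2) for k, v in texture_stats(noise, fs).items()})
```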

Seminar · Neuroscience · Recording

Seeing with technology: Exchanging the senses with sensory substitution and augmentation

Michael Proulx
University of Bath
Sep 29, 2021

What is perception? Our sensory modalities transmit information about the external world into electrochemical signals that somehow give rise to our conscious experience of our environment. Normally there is too much information to be processed in any given moment, and the mechanisms of attention focus the limited resources of the mind to some information at the expense of others. My research has advanced from first examining visual perception and attention to now examine how multisensory processing contributes to perception and cognition. There are fundamental constraints on how much information can be processed by the different senses on their own and in combination. Here I will explore information processing from the perspective of sensory substitution and augmentation, and how "seeing" with the ears and tongue can advance fundamental and translational research.

Seminar · Neuroscience

Themes and Variations: Circuit mechanisms of behavioral evolution

Vanessa Ruta
The Rockefeller University, New York, USA
Sep 28, 2021

Animals exhibit extraordinary variation in their behavior, yet little is known about the neural mechanisms that generate this diversity. My lab has been taking advantage of the rapid diversification of male courtship behaviors in Drosophila to glean insight into how evolution shapes the nervous system to generate species-specific behaviors. By translating neurogenetic tools from D. melanogaster to closely related Drosophila species, we have begun to directly compare the homologous neural circuits and pinpoint sites of adaptive change. Across species, P1 neurons serve as a conserved node in regulating male courtship: these neurons are selectively activated by the sensory cues indicative of an appropriate mate and their activation triggers enduring courtship displays. We have been examining how different sensory pathways converge onto P1 neurons to regulate a male’s state of arousal, honing his pursuit of a prospective partner. Moreover, by performing cross-species comparison of these circuits, we have begun to gain insight into how reweighting of sensory inputs to P1 neurons underlies species-specific mate recognition. Our results suggest how variation at flexible nodes within the nervous system can serve as a substrate for behavioral evolution, shedding light on the types of changes that are possible and preferable within brain circuits.

Seminar · Neuroscience · Recording

Analogical Reasoning Plus: Why Dissimilarities Matter

Patricia A. Alexander
University of Maryland
Sep 22, 2021

Analogical reasoning remains foundational to the human ability to forge meaningful patterns within the sea of information that continually inundates the senses. Yet, meaningful patterns rely not only on the recognition of attributional similarities but also dissimilarities. Just as the perception of images rests on the juxtaposition of lightness and darkness, reasoning relationally requires systematic attention to both similarities and dissimilarities. With that awareness, my colleagues and I have expanded the study of relational reasoning beyond analogous reasoning and attributional similarities to highlight forms based on the nature of core dissimilarities: anomalous, antinomous, and antithetical reasoning. In this presentation, I will delineate the character of these relational reasoning forms; summarize procedures and measures used to assess them; overview key research findings; and describe how the forms of relational reasoning work together in the performance of complex problem solving. Finally, I will share critical next steps for research which has implications for instructional practice.

Seminar · Neuroscience · Recording

Analyzing Retinal Disease Using Electron Microscopic Connectomics

John Dowling
Harvard University
Sep 14, 2021

John E. Dowling received his AB and PhD from Harvard University. He taught in the Biology Department at Harvard from 1961 to 1964, first as an Instructor, then as assistant professor. In 1964 he moved to Johns Hopkins University, where he held an appointment as associate professor of Ophthalmology and Biophysics. He returned to Harvard as professor of Biology in 1971, was the Maria Moors Cabot Professor of Natural Sciences from 1971-2001, Harvard College professor from 1999-2004 and is presently the Gordon and Llura Gund Professor of Neurosciences. Dowling was chairman of the Biology Department at Harvard from 1975 to 1978 and served as associate dean of the faculty of Arts and Sciences from 1980 to 1984. He was Master of Leverett House at Harvard from 1981-1998 and currently serves as president of the Corporation of The Marine Biological Laboratory in Woods Hole. He is a Fellow of the American Academy of Arts and Sciences, a member of the National Academy of Sciences and a member of the American Philosophical Society. Awards that Dowling received include the Friedenwald Medal from the Association of Research in Ophthalmology and Vision in 1970, the Annual Award of the New England Ophthalmological Society in 1979, the Retinal Research Foundation Award for Retinal Research in 1981, an Alcon Vision Research Recognition Award in 1986, a National Eye Institute's MERIT award in 1987, the Von Sallman Prize in 1992, The Helen Keller Prize for Vision Research in 2000 and the Llura Ligget Gund Award for Lifetime Achievement and Recognition of Contribution to the Foundation Fighting Blindness in 2001. He was granted an honorary MD degree by the University of Lund (Sweden) in 1982 and an honorary Doctor of Laws degree from Dalhousie University (Canada) in 2012. Dowling's research interests have focused on the vertebrate retina as a model piece of the brain. He and his collaborators have long been interested in the functional organization of the retina, studying its synaptic organization, the electrical responses of the retinal neurons, and the mechanisms underlying neurotransmission and neuromodulation in the retina. Dowling became interested in zebrafish as a system in which one could explore the development and genetics of the vertebrate retina about 20 years ago. Part of his research team has focused on retinal development in zebrafish and the role of retinoic acid in early eye and photoreceptor development. A second group has developed behavioral tests to isolate mutations, both recessive and dominant, specific to the visual system.

Seminar · Neuroscience · Recording

Music training effects on multisensory and cross-sensory transfer processing: from cross-sectional to RCT studies

Karin Petrini
University of Bath
Sep 8, 2021

Seminar · Psychology

Statistical Summary Representations in Identity Learning: Exemplar-Independent Incidental Recognition

Yaren Koca
University of Regina
Aug 25, 2021

The literature suggests that ensemble coding, the ability to represent the gist of sets, may be an underlying mechanism for becoming familiar with newly encountered faces. This phenomenon was investigated by introducing a new training paradigm that involves incidental learning of target identities interspersed among distractors. The effectiveness of this training paradigm was explored in Study 1, which revealed that observers who learned the unfamiliar faces incidentally performed just as well as observers who were instructed to learn the faces, and that the intervening distractors did not disrupt familiarization. Using the same training paradigm, ensemble coding was investigated as an underlying mechanism for face familiarization in Study 2 by measuring familiarity with the targets at different time points using average images created from either seen or unseen encounters with the target. The results revealed that observers whose familiarity was tested using seen averages outperformed observers who were tested using unseen averages; however, this discrepancy diminished over time. In other words, successful recognition of the target faces became less reliant on previously encountered exemplars over time, suggesting an exemplar-independent representation that is likely achieved through ensemble coding. Taken together, these results provide direct evidence that ensemble coding is a viable mechanism underlying face familiarization and that faces interspersed among distractors can be learned incidentally.
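A minimal sketch of the ensemble-coding intuition: averaging the seen exemplars of an identity yields a template that matches a new, unseen image of that identity better than a foil. Random arrays stand in for aligned face images; the numbers are illustrative only.

```python
# Ensemble "average" template matching with synthetic stand-ins for face images.
import numpy as np

rng = np.random.default_rng(5)
identity = rng.random((64, 64))                                     # "true" appearance
seen = [identity + 0.3 * rng.random((64, 64)) for _ in range(8)]    # seen exemplars
unseen_probe = identity + 0.3 * rng.random((64, 64))                # new encounter
foil_probe = rng.random((64, 64))                                   # different person

template = np.mean(seen, axis=0)                                    # ensemble average

def match(template, probe):
    return np.corrcoef(template.ravel(), probe.ravel())[0, 1]

print("unseen exemplar of learned identity:", round(match(template, unseen_probe), 2))
print("foil identity:", round(match(template, foil_probe), 2))
```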

Seminar · Physics of Life

Active recognition at immune cell interfaces

Shenshen Wang
UC Los Angeles
Aug 12, 2021

Seminar · Psychology

Characterising the brain representations behind variations in real-world visual behaviour

Simon Faghel-Soubeyrand
Université de Montréal
Aug 4, 2021

Not all individuals are equally competent at recognizing the faces they interact with. Revealing how the brains of different individuals support variations in this ability is a crucial step towards developing an understanding of real-world human visual behaviour. In this talk, I will present findings from a large high-density EEG dataset (>100k trials of participants processing various stimulus categories) and computational approaches which aimed to characterise the brain representations behind the real-world proficiency of “super-recognizers”—individuals at the top of the face recognition ability spectrum. Using decoding analysis of time-resolved EEG patterns, we predicted with high precision the trial-by-trial activity of super-recognizer participants, and showed that evidence for face recognition ability variations is distributed across early, intermediate, and late brain processing steps. Computational modeling of the underlying brain activity uncovered two representational signatures supporting higher face recognition ability—i) mid-level visual and ii) semantic computations. Both components were dissociable in processing time (the first around the N170, the second around the P600) and in level of computation (the first emerging from mid-level layers of visual Convolutional Neural Networks, the second from a semantic model characterising sentence descriptions of images). I will conclude by presenting ongoing analyses from a well-known case of acquired prosopagnosia (PS) using similar computational modeling of high-density EEG activity.
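Time-resolved decoding of the kind described above can be sketched as fitting a separate linear classifier at each time point of the EEG epoch and tracking when information becomes decodable. The code below uses synthetic data and a plain logistic regression as stand-ins for the actual dataset and decoders.

```python
# Time-resolved decoding: one classifier per time point of the epoch (illustrative).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
n_trials, n_channels, n_times = 200, 64, 50
X = rng.standard_normal((n_trials, n_channels, n_times))
y = rng.integers(0, 2, n_trials)
X[y == 1, :, 20:] += 0.3          # inject a weak class-dependent signal from t = 20 on

accuracy = np.array([
    cross_val_score(LogisticRegression(max_iter=1000), X[:, :, t], y, cv=5).mean()
    for t in range(n_times)
])
print("peak decoding accuracy:", accuracy.max().round(2),
      "at time index", int(accuracy.argmax()))
```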

SeminarNeuroscienceRecording

Face distortions as a window into face perception

Brad Duchaine
Dartmouth
Aug 2, 2021

Prosopometamorphopsia (PMO) is a disorder characterized by face perception distortions. People with PMO see facial features that appear to melt, stretch, and change size and position. I'll discuss research on PMO carried out by my lab and others that sheds light on the cognitive and neural organization of face perception. https://facedistortion.faceblind.org/

SeminarPsychology

Investigating visual recognition and the temporal lobes using electrophysiology and fast periodic visual stimulation

Angelique Volfart
University of Louvain
Jun 23, 2021

The ventral visual pathway extends from the occipital to the anterior temporal regions and is specialized in giving meaning to the objects and people we perceive through vision. Numerous functional magnetic resonance imaging studies have focused on the cerebral basis of visual recognition. However, this technique is susceptible to magnetic artefacts in the ventral anterior temporal regions, which has led to an underestimation of the role of these regions within the ventral visual stream, especially with respect to face recognition and semantic representations. Moreover, there is an increasing need for implicit methods to assess these functions, as explicit tasks lack specificity. In this talk, I will present three studies using fast periodic visual stimulation (FPVS) in combination with scalp and/or intracerebral EEG to overcome these limitations and provide a high signal-to-noise ratio (SNR) in temporal regions. I will show that, beyond face recognition, FPVS can be extended to investigate semantic representations using a face-name association paradigm and a semantic categorisation paradigm with written words. These results shed new light on the role of temporal regions and demonstrate the high potential of FPVS as a powerful electrophysiological tool for assessing various cognitive functions in neurotypical and clinical populations.
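
To make the FPVS logic concrete, here is a minimal sketch of how a frequency-tagged response is commonly quantified as a signal-to-noise ratio, i.e. the spectral amplitude at the tagged frequency relative to neighbouring frequency bins. The sampling rate, the 1.2 Hz tag, and the neighbourhood size are assumptions, not the parameters used in these studies.

```python
# Sketch: SNR of a frequency-tagged (FPVS) response -- amplitude at the tagged
# frequency divided by the mean amplitude of surrounding bins.
# All parameters below (sampling rate, 1.2 Hz tag, neighbourhood) are assumed.
import numpy as np

fs, duration = 512, 60                       # Hz, seconds (assumed)
t = np.arange(0, duration, 1 / fs)
# Synthetic EEG: a weak 1.2 Hz tagged response buried in noise.
eeg = 0.5 * np.sin(2 * np.pi * 1.2 * t) + np.random.default_rng(1).standard_normal(t.size)

spectrum = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def snr_at(freq, spectrum, freqs, n_neighbors=10, skip=1):
    """Amplitude at `freq` over the mean amplitude of nearby bins (excluding `skip` adjacent bins)."""
    idx = int(np.argmin(np.abs(freqs - freq)))
    neighbors = np.r_[idx - skip - n_neighbors : idx - skip,
                      idx + skip + 1 : idx + skip + 1 + n_neighbors]
    return spectrum[idx] / spectrum[neighbors].mean()

print("SNR at 1.2 Hz:", round(snr_at(1.2, spectrum, freqs), 2))
```

An SNR well above 1 at the tagged frequency, and not at neighbouring frequencies, is the usual marker of a periodic response.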

SeminarNeuroscience

The quest for the cortical algorithm

Helmut Linde
Merck KGaA, Darmstadt, Germany
Jun 16, 2021

The cortical algorithm hypothesis states that there is one common computational framework for solving cognitive problems as diverse as vision, voice recognition and motion control. In my talk, I propose a strategy to guide the search for this algorithm and present a few ideas about what some of its components might look like. I'll explain why a highly interdisciplinary approach, drawing on neuroscience, computer science, mathematics and physics, is needed to make further progress on this important question.

SeminarNeuroscience

Imaging memory consolidation in wakefulness and sleep

Monika Schönauer
Albert-Ludwigs-University of Freiburg
Jun 16, 2021

New memories are initially labile and have to be consolidated into stable long-term representations. Current theories assume that this is supported by a shift in the neural substrate that supports a memory, away from rapidly plastic hippocampal networks and towards more stable representations in the neocortex. Rehearsal, i.e. repeated activation of the neural circuits that store a memory, is thought to contribute crucially to the formation of neocortical long-term memory representations. This may be achieved either by repeated study during wakefulness or by covert reactivation of memory traces during offline periods such as quiet rest or sleep. My research investigates memory consolidation in the human brain with multivariate decoding of neural processing and non-invasive in-vivo imaging of microstructural plasticity. Using pattern classification on recordings of electrical brain activity, I show that we spontaneously reprocess memories during offline periods in both sleep and wakefulness, and that this reactivation benefits memory retention. In related work, we demonstrate that active rehearsal of learning material during wakefulness can facilitate rapid systems consolidation, leading to the immediate formation of lasting memory engrams in the neocortex. These representations satisfy general mnemonic criteria and can not only be imaged with fMRI while memories are actively processed but can also be observed with diffusion-weighted imaging when the traces lie dormant. Importantly, sleep seems to play a crucial role in stabilizing the changes in the contribution of memory systems initiated by rehearsal during wakefulness, indicating that online and offline reactivation might jointly contribute to forming long-term memories. Characterizing the covert processes that decide whether, and in which ways, our brains store new information is crucial to our understanding of memory formation. Directly imaging consolidation thus opens great opportunities for memory research.
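
As a schematic of the pattern-classification logic described above (not the author's actual analysis), the sketch below trains a classifier on learning-phase activity patterns and then scores offline activity for category-specific evidence. The data are synthetic and all feature and model choices are assumptions.

```python
# Sketch: detecting spontaneous memory "reactivation" by training a classifier
# on learning-phase patterns and applying it to offline (rest/sleep) windows.
# Everything here is synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n_train, n_features = 300, 128            # learning-phase trials x EEG features
X_train = rng.standard_normal((n_train, n_features))
y_train = rng.integers(0, 2, n_train)     # two learned stimulus categories
X_train[y_train == 1, :20] += 0.5         # weak category-specific pattern

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Offline period: sliding windows of rest/sleep activity, scored for how
# strongly they resemble each learned category.
offline = rng.standard_normal((1000, n_features))
offline[200:220, :20] += 0.5              # a burst of "category 1"-like activity
reactivation_evidence = clf.predict_proba(offline)[:, 1]
print("windows with strong category-1 evidence:",
      int((reactivation_evidence > 0.8).sum()))
```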

ePoster

Non-Human Recognition of Orthography: How is it implemented and how does it differ from Human orthographic processing

Benjamin Gagl, Ivonne Weyers, Susanne Eisenhauer, Christian Fiebach, Michael Colombo, Damian Scarf, Johannes Ziegler, Jonathan Grainger, Onur Güntürkün, Jutta Mueller

Bernstein Conference 2024

ePoster

Action recognition best explains neural activity in cuneate nucleus

COSYNE 2022

ePoster

Do better object recognition models improve the generalization gap in neural predictivity?

COSYNE 2022

ePoster

Linking neural dynamics across macaque V4, IT, and PFC to trial-by-trial object recognition behavior

COSYNE 2022

ePoster

Distinct roles of excitatory and inhibitory neurons in the macaque IT cortex in object recognition

Sachi Sanghavi & Kohitij Kar

COSYNE 2023

ePoster

Leveraging computational and animal models of vision to probe atypical emotion recognition in autism

Hamid Ramezanpour & Kohitij Kar

COSYNE 2023

ePoster

On-line SEUDO for real-time cell recognition in Calcium Imaging

Iuliia Dmitrieva, Sergey Babkin, Adam Charles

COSYNE 2023

ePoster

Spatial-frequency channels for object recognition by neural networks are twice as wide as those of humans

Ajay Subramanian, Elena Sizikova, Najib Majaj, Denis G. Pelli

COSYNE 2023

ePoster

Temporal pattern recognition in retinal ganglion cells is mediated by dynamical inhibitory synapses

Simone Ebert, Thomas Buffet, Semihchan Sermat, Olivier Marre, Bruno Cessac

COSYNE 2023

ePoster

Geometric Signatures of Speech Recognition: Insights from Deep Neural Networks to the Brain

Jiaqi Shang, Shailee Jain, Haim Sompolinsky, Edward Chang

COSYNE 2025

ePoster

The analysis of the OXT-DA interaction causing social recognition deficit in Syntaxin1A KO

Tomonori Fujiwara, Kofuji Takefumi, Tatsuya Mishima, Toshiki Furukawa

FENS Forum 2024

ePoster

Behavioral impacts of simulated microgravity on male mice: Locomotion, social interactions and memory in a novel object recognition task

Jean-Luc Morel, Margot Issertine, Thomas Brioche, Angèle Chopard, Laurence Vico, Julie Le Merrer, Théo Fovet, Jérôme Becker

FENS Forum 2024

ePoster

The cortical amygdala mediates individual recognition in mice

Manuel Esteban Vila Martín, Anna Teruel Sanchis, Camila Savarelli Balsamo, Lorena Jiménez Romero, Joana Martínez Ricós, Vicent Teruel Martí, Enrique Lanuza

FENS Forum 2024

ePoster

A deep learning approach for the recognition of behaviors in the forced swim test

Andrea Della Valle, Sara De Carlo, Francesca Petetta, Gregorio Sonsini, Sikandar Ali, Roberto Ciccocioppo, Massimo Ubaldi

FENS Forum 2024

ePoster

Direct electrical stimulation of the human amygdala enhances recognition memory for objects but not scenes

Krista Wahlstrom, Justin Campbell, Martina Hollearn, Markus Adamek, James Swift, Lou Blanpain, Tao Xie, Peter Brunner, Stephan Hamann, Amir Arain, Lawrence Eisenman, Joseph Manns, Jon Willie, Cory Inman

FENS Forum 2024

ePoster

Two distinct ways to form long-term object recognition memory during sleep and wakefulness

Max Harkotte, Anuck Sawangjit, Carlos Oyanedel, Niels Niethard, Jan Born, Marion Inostroza

FENS Forum 2024

ePoster

Early disruption in social recognition and its impact on episodic memory in triple transgenic mice model of Alzheimer’s disease

Anna Teruel-Sanchis, Manuel Esteban Vila-Martín, Camila Alexia Savarelli-Balsamo, Lorena Jiménez-Romero, Antonio García-de-León, Javier Zaplana-Gil, Joana Martinez-Ricos, Vicent Teruel-Martí, Enrique Lanuza-Navarro

FENS Forum 2024

ePoster

Evaluation of novel object recognition test results of rats injected with intracerebroventricular streptozocin to develop Alzheimer's disease models

Berna Özen, Hasan Raci Yananlı

FENS Forum 2024

ePoster

HBK-15 rescues recognition memory in MK-801- and stress-induced cognitive impairments in female mice

Aleksandra Koszałka, Kinga Sałaciak, Klaudia Lustyk, Henryk Marona, Karolina Pytka

FENS Forum 2024

ePoster

Homecage-based unsupervised novel object recognition in mice

Sui Hin Ho, Nejc Kejzar, Marius Bauza, Julija Krupic

FENS Forum 2024

ePoster

Interaction of sex and sleep on performance at the novel object recognition task in mice

Farahnaz Yazdanpanah Faragheh, Julie Seibt

FENS Forum 2024

ePoster

Investigating the recruitment of parvalbumin and somatostatin interneurons into engrams for associative recognition memory

Lucinda Hamilton-Burns, Clea Warburton, Gareth Barker

FENS Forum 2024

ePoster

Mouse can recognize other individuals: Maternal exposure to dioxin does not affect identification but perturbs the recognition ability of other individuals

Hana Ichihara, Fumihiko Maekawa, Masaki Kakeyama

FENS Forum 2024

ePoster

Myoelectric gesture recognition in patients with spinal cord injury using a medium-density EMG system

Elena Losanno, Matteo Ceradini, Vincent Mendez, Firman Isma Serdana, Gabriele Righi, Fiorenzo Artoni, Giulio Del Popolo, Solaiman Shokur, Silvestro Micera

FENS Forum 2024

ePoster

Noradrenergic modulation of recognition memory in male and female mice

Lorena Roselló-Jiménez, Olga Rodríguez-Borillo, Raúl Pastor, Laura Font

FENS Forum 2024

ePoster

The processing of spatial frequencies through time in visual word recognition

Clémence Bertrand Pilon, Martin Arguin

FENS Forum 2024

ePoster

Resonant song recognition in crickets

Winston Mann, Jan Clemens

FENS Forum 2024

ePoster

Recognition of complex spatial environments showed dimorphic patterns of theta (4-8 Hz) activity

Joaquín Castillo Escamilla, María del Mar Salvador Viñas, José Manuel Cimadevilla Redondo

FENS Forum 2024

ePoster

Robustness and evolvability in a model of a pattern recognition network

Daesung Cho, Jan Clemens

FENS Forum 2024

ePoster

Scent of a memory: Dissecting the vomeronasal-hippocampal axis in social recognition

Camila Alexia Savarelli Balsamo, Manuel Esteban Vila-Martín, Anna Teruel-Sanchis, Lorena Jiménez-Romero, María Sancho-Alonso, Joana Martínez-Ricós, Vicent Teruel-Martí, Enrique Lanuza

FENS Forum 2024

ePoster

Sex-dependent effects of voluntary physical exercise on object recognition memory restoration after traumatic brain injury in middle-aged rats

David Costa, Meritxell Torras-Garcia, Odette Estrella, Isabel Portell-Cortés, Gemma Manich, Beatriz Almolda, Berta González, Margalida Coll-Andreu

FENS Forum 2024

ePoster

Sleepless nights, vanishing faces: The effect of sleep deprivation on long-term social recognition memory in mice

Adithya Sarma, Evgeniya Tyumeneva, Junfei Cao, Soraya Smit, Marit Bonne, Fleur Meijer, Jean-Christophe Billeter, Robbert Havekes

FENS Forum 2024

ePoster

Src-NADH dehydrogenase subunit 2 complex and recognition memory of imprinting in domestic chicks

Lela Chitadze, Maia Meparishvili, Vincenzo Lagani, Zaza Khuchua, Brian McCabe, Revaz Solomonia

FENS Forum 2024

ePoster

Unraveling the mechanisms underlying corticosterone-induced impairment in novel object recognition in mice

Julia Welte, Urszula Skupio, Roman Serrat, Francisca Julio-Kalajzić, Doriane Gisquet, Astrid Cannich, Luigi Bellocchio, Francis Chaouloff, Giovanni Marsicano, Sandrine Pouvreau

FENS Forum 2024

ePoster

A virtual-reality task to investigate multisensory object recognition in mice

Veronique Stokkers, Guido T Meijer, Smit Zayel, Jeroen J Bos, Francesco P Battaglia

FENS Forum 2024

ePoster

Early olfactory processing is necessary for the maturation of limbic-hippocampal network and recognition

Yu-Nan Chen

Neuromatch 5