Topic: recognition (Neuro)

50 Seminars · 37 ePosters

Latest

Seminar · Neuroscience

Single-neuron correlates of perception and memory in the human medial temporal lobe

Prof. Dr. Dr. Florian Mormann
University of Bonn, Germany
May 14, 2025

The human medial temporal lobe contains neurons that respond selectively to the semantic contents of a presented stimulus. These "concept cells" may respond to very different pictures of a given person and even to their written or spoken name. Their response latency is far longer than necessary for object recognition, they follow subjective, conscious perception, and they are found in brain regions that are crucial for declarative memory formation. It has thus been hypothesized that they may represent the semantic "building blocks" of episodic memories. In this talk I will present data from single unit recordings in the hippocampus, entorhinal cortex, parahippocampal cortex, and amygdala during paradigms involving object recognition and conscious perception as well as encoding of episodic memories in order to characterize the role of concept cells in these cognitive functions.

Seminar · Neuroscience

Contentopic mapping and object dimensionality - a novel understanding on the organization of object knowledge

Jorge Almeida
University of Coimbra
Jan 28, 2025

Our ability to recognize an object amongst many others is one of the most important features of the human mind. However, object recognition requires tremendous computational effort, as we need to solve a complex and recursive environment with ease and proficiency. This challenging feat is dependent on the implementation of an effective organization of knowledge in the brain. Here I put forth a novel understanding of how object knowledge is organized in the brain, by proposing that the organization of object knowledge follows key object-related dimensions, analogously to how sensory information is organized in the brain. Moreover, I will also put forth that this knowledge is topographically laid out in the cortical surface according to these object-related dimensions that code for different types of representational content – I call this contentopic mapping. I will show a combination of fMRI and behavioral data to support these hypotheses and present a principled way to explore the multidimensionality of object processing.

Seminar · Neuroscience

Decoding mental conflict between reward and curiosity in decision-making

Naoki Honda
Hiroshima University
Jul 10, 2023

Humans and animals are not always rational. They not only rationally exploit rewards but also explore an environment owing to their curiosity. However, the mechanism of such curiosity-driven irrational behavior is largely unknown. Here, we developed a decision-making model for a two-choice task based on the free energy principle, which is a theory integrating recognition and action selection. The model describes irrational behaviors depending on the curiosity level. We also proposed a machine learning method to decode temporal curiosity from behavioral data. By applying it to rat behavioral data, we found that the rat had negative curiosity, reflecting a conservative tendency to stick to more certain options, and that the level of curiosity was upregulated by the expected future information obtained from an uncertain environment. Our decoding approach can be a fundamental tool for identifying the neural basis for reward–curiosity conflicts. Furthermore, it could be effective in diagnosing mental disorders.
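
For readers who want a concrete handle on how a curiosity term can enter a choice model of this kind, here is a minimal, hypothetical sketch (plain Python/NumPy, not the authors' free-energy model): a two-option agent whose choice utility mixes expected reward with an uncertainty bonus, where a negative curiosity weight produces the conservative, stick-to-the-certain-option behaviour described above. The Beta-posterior bookkeeping and all parameter values are illustrative assumptions.

```python
# Minimal sketch (not the authors' model): a two-choice agent whose choice
# utility mixes expected reward with an uncertainty ("curiosity") bonus.
# A negative curiosity weight makes the agent avoid uncertain options,
# mirroring the conservative behaviour described in the abstract.
import numpy as np

rng = np.random.default_rng(0)
true_p = np.array([0.6, 0.4])          # latent reward probabilities (hypothetical)
alpha = np.ones(2); beta = np.ones(2)  # Beta posterior over each option's reward rate
curiosity = -0.5                       # negative -> stick to well-known options

def choose():
    mean = alpha / (alpha + beta)                                  # expected reward
    var = alpha * beta / ((alpha + beta)**2 * (alpha + beta + 1))  # uncertainty
    utility = mean + curiosity * np.sqrt(var)                      # reward + curiosity bonus
    p = np.exp(5 * utility); p /= p.sum()                          # softmax choice
    return rng.choice(2, p=p)

for t in range(500):
    a = choose()
    r = rng.random() < true_p[a]
    alpha[a] += r; beta[a] += 1 - r                                # Bayesian update

print("choice counts:", alpha + beta - 2)
```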

Seminar · Neuroscience · Recording

Vision Unveiled: Understanding Face Perception in Children Treated for Congenital Blindness

Sharon Gilad-Gutnick
MIT
Jun 20, 2023

Despite her still poor visual acuity and minimal visual experience, a 2-3 month old baby will reliably respond to facial expressions, smiling back at her caretaker or older sibling. But what if that same baby had been deprived of her early visual experience? Will she be able to appropriately respond to seemingly mundane interactions, such as a peer’s facial expression, if she begins seeing at the age of 10? My work is part of Project Prakash, a dual humanitarian/scientific mission to identify and treat curably blind children in India and then study how their brain learns to make sense of the visual world when their visual journey begins late in life. In my talk, I will give a brief overview of Project Prakash, and present findings from one of my primary lines of research: plasticity of face perception with late sight onset. Specifically, I will discuss a mixed methods effort to probe and explain the differential windows of plasticity that we find across different aspects of distributed face recognition, from distinguishing a face from a nonface early in the developmental trajectory, to recognizing facial expressions, identifying individuals, and even identifying one’s own caretaker. I will draw connections between our empirical findings and our recent theoretical work hypothesizing that children with late sight onset may suffer persistent face identification difficulties because of the unusual acuity progression they experience relative to typically developing infants. Finally, time permitting, I will point to potential implications of our findings in supporting newly-sighted children as they transition back into society and school, given that their needs and possibilities significantly change upon the introduction of vision into their lives.

Seminar · Neuroscience

Microbial modulation of zebrafish behavior and brain development

Judith S. Eisen
University of Oregon
May 16, 2023

There is growing recognition that host-associated microbiotas modulate intrinsic neurodevelopmental programs including those underlying human social behavior. Despite this awareness, the fundamental processes are generally not understood. We discovered that the zebrafish microbiota is necessary for normal social behavior. By examining neuronal correlates of behavior, we found that the microbiota restrains neurite complexity and targeting of key forebrain neurons within the social behavior circuitry. The microbiota is also necessary for both localization and molecular functions of forebrain microglia, brain-resident phagocytes that remodel neuronal arbors. In particular, the microbiota promotes expression of complement signaling pathway components important for synapse remodeling. Our work provides evidence that the microbiota modulates zebrafish social behavior by stimulating microglial remodeling of forebrain circuits during early neurodevelopment and suggests molecular pathways for therapeutic interventions during atypical neurodevelopment.

Seminar · Neuroscience

Learning to see stuff

Roland W. Fleming
Giessen University
Mar 13, 2023

Humans are very good at visually recognizing materials and inferring their properties. Without touching surfaces, we can usually tell what they would feel like, and we enjoy vivid visual intuitions about how they typically behave. This is impressive because the retinal image that the visual system receives as input is the result of complex interactions between many physical processes. Somehow the brain has to disentangle these different factors. I will present some recent work in which we show that an unsupervised neural network trained on images of surfaces spontaneously learns to disentangle reflectance, lighting and shape. However, the disentanglement is not perfect, and we find that as a result the network not only predicts the broad successes of human gloss perception, but also the specific pattern of errors that humans exhibit on an image-by-image basis. I will argue this has important implications for thinking about appearance and vision more broadly.

Seminar · Neuroscience

Analyzing artificial neural networks to understand the brain

Grace Lindsay
NYU
Dec 16, 2022

In the first part of this talk I will present work showing that recurrent neural networks can replicate broad behavioral patterns associated with dynamic visual object recognition in humans. An analysis of these networks shows that different types of recurrence use different strategies to solve the object recognition problem. The similarities between artificial neural networks and the brain present another opportunity, beyond using them just as models of biological processing. In the second part of this talk, I will discuss—and solicit feedback on—a proposed research plan for testing a wide range of analysis tools frequently applied to neural data on artificial neural networks. I will present the motivation for this approach as well as the form the results could take and how this would benefit neuroscience.

Seminar · Neuroscience · Recording

Training Dynamic Spiking Neural Network via Forward Propagation Through Time

B. Yin
CWI
Nov 10, 2022

With recent advances in learning algorithms, recurrent networks of spiking neurons are achieving performance competitive with standard recurrent neural networks. Still, these learning algorithms are limited to small networks of simple spiking neurons and modest-length temporal sequences, as they impose high memory requirements, have difficulty training complex neuron models, and are incompatible with online learning. Taking inspiration from the concept of Liquid Time-Constants (LTCs), we introduce a novel class of spiking neurons, the Liquid Time-Constant Spiking Neuron (LTC-SN), resulting in functionality similar to the gating operation in LSTMs. We integrate these neurons in SNNs that are trained with FPTT and demonstrate that LTC-SNNs trained this way outperform various SNNs trained with BPTT on long sequences while enabling online learning and drastically reducing memory complexity. We show this for several classical benchmarks that can easily be varied in sequence length, like the Add Task and the DVS-Gesture benchmark. We also show how FPTT-trained LTC-SNNs can be applied to large convolutional SNNs, where we demonstrate a new state of the art for online learning in SNNs on a number of standard benchmarks (S-MNIST, R-MNIST, DVS-GESTURE), and also show that large feedforward SNNs can be trained successfully in an online manner to near (Fashion-MNIST, DVS-CIFAR10) or exceeding (PS-MNIST, R-MNIST) state-of-the-art performance as obtained with offline BPTT. Finally, the training and memory efficiency of FPTT enables us to directly train SNNs in an end-to-end manner at network sizes and complexities that were previously infeasible: we demonstrate this by training, end-to-end, the first deep and performant spiking neural network for object localization and recognition. Taken together, our contributions enable, for the first time, online training of large-scale, complex spiking neural network architectures on long temporal sequences.
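
As a rough illustration of the core idea, the sketch below (plain NumPy, not the authors' implementation) simulates a single leaky integrate-and-fire neuron whose time constant is an input-dependent, gate-like quantity, loosely analogous to the liquid time-constant neurons described above; the parameters w_in, w_tau and b_tau are hypothetical stand-ins for what would normally be learned, and FPTT training itself is omitted.

```python
# Schematic sketch (not the paper's code): a leaky integrate-and-fire neuron
# whose membrane time constant is an input-dependent quantity, loosely
# analogous to an LSTM gate ("liquid time constant"). FPTT training and
# surrogate gradients are omitted; this only illustrates the dynamics.
import numpy as np

rng = np.random.default_rng(1)
T, dt, v_th = 200, 1.0, 0.5
w_in, w_tau, b_tau = 1.0, 1.5, -1.0        # hypothetical (normally learned) parameters

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = rng.normal(0.5, 0.5, T)                # input current
v, spikes, taus = 0.0, [], []
for t in range(T):
    tau = 2.0 + 18.0 * sigmoid(w_tau * x[t] + b_tau)   # input-dependent time constant (2-20 ms)
    v += dt / tau * (-v + w_in * x[t])                  # leaky integration
    s = float(v >= v_th)
    v = v * (1.0 - s)                                   # reset on spike
    spikes.append(s); taus.append(tau)

print(f"{int(sum(spikes))} spikes; tau range {min(taus):.1f}-{max(taus):.1f} ms")
```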

Seminar · Neuroscience · Recording

Behavioral Timescale Synaptic Plasticity (BTSP) for biologically plausible credit assignment across multiple layers via top-down gating of dendritic plasticity

A. Galloni
Rutgers
Nov 9, 2022

A central problem in biological learning is how information about the outcome of a decision or behavior can be used to reliably guide learning across distributed neural circuits while obeying biological constraints. This “credit assignment” problem is commonly solved in artificial neural networks through supervised gradient descent and the backpropagation algorithm. In contrast, biological learning is typically modelled using unsupervised Hebbian learning rules. While these rules only use local information to update synaptic weights, and are sometimes combined with weight constraints to reflect a diversity of excitatory (only positive weights) and inhibitory (only negative weights) cell types, they do not prescribe a clear mechanism for how to coordinate learning across multiple layers and propagate error information accurately across the network. In recent years, several groups have drawn inspiration from the known dendritic non-linearities of pyramidal neurons to propose new learning rules and network architectures that enable biologically plausible multi-layer learning by processing error information in segregated dendrites. Meanwhile, recent experimental results from the hippocampus have revealed a new form of plasticity—Behavioral Timescale Synaptic Plasticity (BTSP)—in which large dendritic depolarizations rapidly reshape synaptic weights and stimulus selectivity with as little as a single stimulus presentation (“one-shot learning”). Here we explore the implications of this new learning rule through a biologically plausible implementation in a rate neuron network. We demonstrate that regulation of dendritic spiking and BTSP by top-down feedback signals can effectively coordinate plasticity across multiple network layers in a simple pattern recognition task. By analyzing hidden feature representations and weight trajectories during learning, we show the differences between networks trained with standard backpropagation, Hebbian learning rules, and BTSP.
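
To make the gating idea concrete, here is an illustrative toy sketch (not the presented model): a single rate unit whose incoming weights are rewritten in one shot toward an eligibility trace of recent presynaptic activity whenever a binary top-down "plateau" signal arrives; eta_btsp, the pattern statistics and the eligibility trace are all assumed, illustrative choices.

```python
# Illustrative sketch (assumptions, not the presented model): a BTSP-like update
# in a rate unit, where a binary top-down "plateau" signal gates a large,
# one-shot change of the incoming weights toward the recently active inputs.
import numpy as np

rng = np.random.default_rng(2)
n_in = 20
w = rng.normal(0, 0.1, n_in)               # incoming synaptic weights
eta_btsp = 0.8                             # large gain: "one-shot" learning

def btsp_update(w, eligibility, plateau):
    """Move the weights toward the eligibility trace of recent presynaptic activity."""
    if plateau:                             # dendritic plateau / top-down gate
        w = w + eta_btsp * (eligibility - w)
    return w

pattern = (rng.random(n_in) < 0.3).astype(float)    # a stimulus to be stored
eligibility = 0.9 * pattern                          # low-pass trace of recent input
w = btsp_update(w, eligibility, plateau=True)        # single instructive event

print("response to stored pattern:", float(w @ pattern))
print("response to a novel pattern:", float(w @ (rng.random(n_in) < 0.3)))
```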

Seminar · Neuroscience · Recording

Shallow networks run deep: How peripheral preprocessing facilitates odor classification

Yonatan Aljadeff
University of California, San Diego (UCSD)
Nov 9, 2022

Drosophila olfactory sensory hairs ("sensilla") typically house two olfactory receptor neurons (ORNs) which can laterally inhibit each other via electrical ("ephaptic") coupling. ORN pairing is highly stereotyped and genetically determined. Thus, olfactory signals arriving in the Antennal Lobe (AL) have been pre-processed by a fixed and shallow network at the periphery. To uncover the functional significance of this organization, we developed a nonlinear phenomenological model of asymmetrically coupled ORNs responding to odor mixture stimuli. We derived an analytical solution to the ORNs’ dynamics, which shows that the peripheral network can extract the valence of specific odor mixtures via transient amplification. Our model predicts that for efficient read-out of the amplified valence signal there must exist specific patterns of downstream connectivity that reflect the organization at the periphery. Analysis of AL→Lateral Horn (LH) fly connectomic data reveals evidence directly supporting this prediction. We further studied the effect of ephaptic coupling on olfactory processing in the AL→Mushroom Body (MB) pathway. We show that stereotyped ephaptic interactions between ORNs lead to a clustered odor representation of glomerular responses. Such clustering in the AL is an essential assumption of theoretical studies on odor recognition in the MB. Together our work shows that preprocessing of olfactory stimuli by a fixed and shallow network increases sensitivity to specific odor mixtures, and aids in the learning of novel olfactory stimuli. Work led by Palka Puri, in collaboration with Chih-Ying Su and Shiuan-Tze Wu.
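
A toy caricature of the coupling motif (not the derived analytical model) is sketched below: two ORNs driven by a two-odour mixture inhibit each other with asymmetric strengths, so the more strongly coupling channel suppresses its sensillum partner and sharpens the contrast between the two outputs; the coupling constants k_ab, k_ba and the stimulus drives are hypothetical.

```python
# Toy sketch (not the derived analytical model): two ORNs in one sensillum with
# asymmetric ephaptic (lateral) inhibition, driven by a two-odour mixture.
import numpy as np

dt, T = 0.1, 300
a, b = np.zeros(T), np.zeros(T)            # firing rates of ORN A and ORN B
k_ab, k_ba = 0.9, 0.3                      # asymmetric coupling strengths (hypothetical)
stim_a, stim_b = 1.0, 0.8                  # mixture drive to each ORN

for t in range(1, T):
    a[t] = a[t-1] + dt * (-a[t-1] + max(stim_a - k_ba * b[t-1], 0.0))
    b[t] = b[t-1] + dt * (-b[t-1] + max(stim_b - k_ab * a[t-1], 0.0))

# asymmetry lets the dominant channel suppress its partner
print(f"steady-state rates: ORN A {a[-1]:.2f}, ORN B {b[-1]:.2f}")
```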

Seminar · Neuroscience

New Insights into the Neural Machinery of Face Recognition

Winrich Freiwald
Rockefeller
Jul 12, 2022

Seminar · Neuroscience

Feedforward and feedback processes in visual recognition

Thomas Serre
Brown University
Jun 22, 2022

Progress in deep learning has spawned great successes in many engineering applications. As a prime example, convolutional neural networks, a type of feedforward neural networks, are now approaching – and sometimes even surpassing – human accuracy on a variety of visual recognition tasks. In this talk, however, I will show that these neural networks and their recent extensions exhibit a limited ability to solve seemingly simple visual reasoning problems involving incremental grouping, similarity, and spatial relation judgments. Our group has developed a recurrent network model of classical and extra-classical receptive field circuits that is constrained by the anatomy and physiology of the visual cortex. The model was shown to account for diverse visual illusions providing computational evidence for a novel canonical circuit that is shared across visual modalities. I will show that this computational neuroscience model can be turned into a modern end-to-end trainable deep recurrent network architecture that addresses some of the shortcomings exhibited by state-of-the-art feedforward networks for solving complex visual reasoning tasks. This suggests that neuroscience may contribute powerful new ideas and approaches to computer science and artificial intelligence.

Seminar · Neuroscience · Recording

The evolution and development of visual complexity: insights from stomatopod visual anatomy, physiology, behavior, and molecules

Megan Porter
University of Hawaii
May 2, 2022

Bioluminescence, which is rare on land, is extremely common in the deep sea, being found in 80% of the animals living between 200 and 1000 m. These animals rely on bioluminescence for communication, feeding, and/or defense, so the generation and detection of light are essential to their survival. Our present knowledge of this phenomenon has been limited by the difficulty of bringing live deep-sea animals to the surface and the lack of proper techniques needed to study this complex system. However, new genomic techniques are now available, and a team with extensive experience in deep-sea biology, vision, and genomics has been assembled to lead this project. The project aims to address three questions: 1) What are the evolutionary patterns of different types of bioluminescence in deep-sea shrimp? 2) How are deep-sea organisms’ eyes adapted to detect bioluminescence? 3) Can bioluminescent organs (called photophores) detect light in addition to emitting light? Findings from this study will provide valuable insight into a complex system vital to communication, defense, camouflage, and species recognition. This study will bring monumental contributions to the fields of deep-sea and evolutionary biology, and immediately improve our understanding of bioluminescence and light detection in the marine environment. In addition to scientific advancement, this project will reach students from kindergarten through college through the development and dissemination of educational tools, a series of molecular and organismal-based workshops, museum exhibits, public seminars, and biodiversity initiatives.

Seminar · Neuroscience · Recording

Object recognition by touch and other senses

Roberta Klatzky
Carnegie Mellon University
Mar 3, 2022

Seminar · Neuroscience · Recording

Cross-modality imaging of the neural systems that support executive functions

Yaara Erez
Affiliate MRC Cognition and Brain Sciences Unit, University of Cambridge
Mar 1, 2022

Executive functions refer to a collection of mental processes such as attention, planning and problem solving, supported by a distributed frontoparietal brain network. These functions are essential for everyday life, and in patients with brain tumours there is a particular need to preserve them to enable good quality of life. During surgeries for the removal of a brain tumour, the aim is to remove as much of the tumour as possible while preventing damage to the areas around it, in order to preserve function. In many cases, functional mapping is conducted during an awake surgery to identify areas critical for certain functions and avoid their surgical resection. While mapping is routinely done for functions such as movement and language, mapping executive functions is more challenging. Despite growing recognition in recent years of the importance of these functions for patient well-being, only a handful of studies have addressed their intraoperative mapping. In the talk, I will present our new approach for mapping executive function areas using electrocorticography during awake brain surgery. These results will be complemented by neuroimaging data from healthy volunteers, aimed at reliably localizing executive function regions in individuals using fMRI. I will also discuss more broadly the challenges of using neuroimaging for neurosurgical applications. We aim to advance cross-modality neuroimaging of cognitive function, which is pivotal to patient-tailored surgical interventions and will ultimately lead to improved clinical outcomes.

Seminar · Neuroscience

A biological model system for studying predictive processing

Ede Rancz
University of Oxford
Feb 24, 2022

Despite the increasing recognition of predictive processing in circuit neuroscience, little is known about how it may be implemented in cortical circuits. We set out to develop and characterise a biological model system with layer 5 pyramidal cells at its centre. We aim to gain access to prediction and internal-model-generating processes by controlling, understanding or monitoring everything else: the sensory environment, feed-forward and feed-back inputs, integrative properties, their spiking activity and output. I’ll show recent work from the lab establishing such a model system, in terms of both biology and tool development.

Seminar · Neuroscience

Multimodal framework and fusion of EEG, graph theory and sentiment analysis for the prediction and interpretation of consumer decision

Veeky Baths
Cognitive Neuroscience Lab (Bits Pilani Goa Campus)
Feb 3, 2022

The application of neuroimaging methods to marketing has recently gained a lot of attention. In analyzing consumer behaviors, the inclusion of neuroimaging tools and methods is improving our understanding of consumers' preferences. Human emotions play a significant role in decision making and critical thinking. Emotion classification using EEG data and machine learning techniques has been on the rise in recent years. We evaluate different feature extraction and feature selection techniques and propose an optimal set of features and electrodes for emotion recognition. Affective neuroscience research can help in detecting emotions when a consumer responds to an advertisement. Successful emotional elicitation is a verification of the effectiveness of an advertisement. EEG provides a cost-effective way to measure advertisement effectiveness while eliminating several drawbacks of existing market research tools that depend on self-reporting. We used graph-theoretical principles to differentiate brain connectivity graphs when a consumer likes a logo versus when a consumer dislikes it. The fusion of EEG and sentiment analysis can be a real game changer, and this combination has the power and potential to provide innovative tools for market research.
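
For orientation, a generic sketch of this kind of EEG classification pipeline is shown below (scikit-learn on synthetic data; the band-power features, electrode count and classifier are assumptions, not the study's actual feature set): feature extraction, feature selection, and a classifier evaluated with cross-validation.

```python
# Generic sketch of an EEG emotion-classification pipeline of the kind described
# (synthetic data and hypothetical band-power features; not the study's actual
# feature set or electrode selection).
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n_trials, n_channels, n_bands = 120, 32, 5         # e.g. delta..gamma band power per channel
X = rng.normal(size=(n_trials, n_channels * n_bands))
y = rng.integers(0, 2, n_trials)                    # like / dislike labels
X[y == 1, :10] += 0.8                               # inject a weak class difference

clf = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest(f_classif, k=20)),       # keep the most informative features
    ("svm", SVC(kernel="rbf", C=1.0)),
])
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(2))
```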

Seminar · Neuroscience

Hearing in an acoustically varied world

Kerry Walker
University of Oxford
Jan 25, 2022

In order for animals to thrive in their complex environments, their sensory systems must form representations of objects that are invariant to changes in some dimensions of their physical cues. For example, we can recognize a friend’s speech in a forest, a small office, and a cathedral, even though the sound reaching our ears will be very different in these three environments. I will discuss our recent experiments into how neurons in auditory cortex can form stable representations of sounds in this acoustically varied world. We began by using a normative computational model of hearing to examine how the brain may recognize a sound source across rooms with different levels of reverberation. The model predicted that reverberations can be removed from the original sound by delaying the inhibitory component of spectrotemporal receptive fields in the presence of stronger reverberation. Our electrophysiological recordings then confirmed that neurons in ferret auditory cortex apply this algorithm to adapt to different room sizes. Our results demonstrate that this neural process is dynamic and adaptive. These studies provide new insights into how we can recognize auditory objects even in highly reverberant environments, and direct further research questions about how reverb adaptation is implemented in the cortical circuit.

Seminar · Neuroscience

What does the primary visual cortex tell us about object recognition?

Tiago Marques
MIT
Jan 24, 2022

Object recognition relies on the complex visual representations in cortical areas at the top of the ventral stream hierarchy. While these are thought to be derived from low-level stages of visual processing, this has not yet been directly shown. Here, I describe the results of two projects exploring the contributions of primary visual cortex (V1) processing to object recognition using artificial neural networks (ANNs). First, we developed hundreds of ANN-based V1 models and evaluated how well their single neurons approximate those in macaque V1. We found that, for some models, single neurons in intermediate layers are similar to their biological counterparts, and that the distributions of their response properties approximately match those in V1. Furthermore, we observed that models that better matched macaque V1 were also more aligned with human behavior, suggesting that object recognition behavior builds on these low-level representations. Motivated by these results, we then studied how an ANN’s robustness to image perturbations relates to its ability to predict V1 responses. Despite their high performance in object recognition tasks, ANNs can be fooled by imperceptibly small, explicitly crafted perturbations. We observed that ANNs that better predicted V1 neuronal activity were also more robust to adversarial attacks. Inspired by this, we developed VOneNets, a new class of hybrid ANN vision models. Each VOneNet contains a fixed neural network front-end that simulates primate V1, followed by a neural network back-end adapted from current computer vision models. After training, VOneNets were substantially more robust, outperforming state-of-the-art methods on a set of perturbations. While current neural network architectures are arguably brain-inspired, these results demonstrate that more precisely mimicking just one stage of the primate visual system leads to new gains in computer vision applications and results in better models of the primate ventral stream and object recognition behavior.
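
The VOneNet idea of a fixed V1-like stage in front of a trainable network can be sketched very roughly as follows (NumPy only, not the released implementation): a bank of oriented Gabor filters with rectification whose outputs would then feed a standard CNN back end; the filter parameters here are illustrative rather than fit to macaque V1 data.

```python
# Minimal sketch of the VOneNet idea (not the released implementation): a fixed,
# Gabor-filter "V1" front end whose outputs would feed a trainable CNN back end.
import numpy as np

def gabor(size=15, theta=0.0, freq=0.2, sigma=3.0):
    """Return a size x size Gabor filter at orientation theta (illustrative parameters)."""
    r = np.arange(size) - size // 2
    x, y = np.meshgrid(r, r)
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

def v1_front_end(image, n_orient=4):
    """Fixed (non-trained) stage: rectified responses of an oriented filter bank."""
    thetas = np.linspace(0, np.pi, n_orient, endpoint=False)
    maps = []
    for th in thetas:
        k = gabor(theta=th)
        ks = k.shape[0]
        h, w = image.shape[0] - ks + 1, image.shape[1] - ks + 1
        out = np.zeros((h, w))
        for i in range(h):                 # explicit valid convolution, kept simple for clarity
            for j in range(w):
                out[i, j] = np.sum(image[i:i+ks, j:j+ks] * k)
        maps.append(np.maximum(out, 0.0))  # simple rectification
    return np.stack(maps)

img = np.random.default_rng(3).random((32, 32))
print(v1_front_end(img).shape)             # (4, 18, 18): inputs to a trainable back end
```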

Seminar · Neuroscience · Recording

Molecular recognition and the assembly of feature-selective retinal circuits

Arjun Krishnaswamy
Department of Physiology, McGill University
Dec 14, 2021

Seminar · Neuroscience · Recording

NMC4 Short Talk: Novel population of synchronously active pyramidal cells in hippocampal area CA1

Dori Grijseels (they/them)
University of Sussex
Dec 2, 2021

Hippocampal pyramidal cells have been widely studied during locomotion, when theta oscillations are present, and during sharp-wave ripples at rest, when replay takes place. However, we find a subset of pyramidal cells that are preferentially active during rest, in the absence of theta oscillations and sharp-wave ripples. We recorded these cells using two-photon imaging in dorsal CA1 of the hippocampus of mice during a virtual reality object location recognition task. During locomotion, the cells show a similar level of activity to control cells, but their activity increases during rest, when this population of cells shows highly synchronous, oscillatory activity at a low frequency (0.1-0.4 Hz). In addition, during both locomotion and rest these cells show place coding, suggesting they may play a role in maintaining a representation of the current location even when the animal is not moving. We performed simultaneous electrophysiological and calcium recordings, which showed a higher correlation of activity between the LFO and the hippocampal cells in the 0.1-0.4 Hz low-frequency band during rest than during locomotion. However, the relationship between the LFO and calcium signals varied between electrodes, suggesting a localized effect. We used the Allen Brain Observatory Neuropixels Visual Coding dataset to explore this further. These data revealed localized low-frequency oscillations in CA1 and DG during rest. Overall, we show a novel population of hippocampal cells, and a novel oscillatory band of activity in the hippocampus during rest.

Seminar · Neuroscience · Recording

NMC4 Short Talk: Directly interfacing brain and deep networks exposes non-hierarchical visual processing

Nick Sexton (he/him)
University College London
Dec 1, 2021

A recent approach to understanding the mammalian visual system is to show correspondence between the sequential stages of processing in the ventral stream with layers in a deep convolutional neural network (DCNN), providing evidence that visual information is processed hierarchically, with successive stages containing ever higher-level information. However, correspondence is usually defined as shared variance between brain region and model layer. We propose that task-relevant variance is a stricter test: If a DCNN layer corresponds to a brain region, then substituting the model’s activity with brain activity should successfully drive the model’s object recognition decision. Using this approach on three datasets (human fMRI and macaque neuron firing rates) we found that in contrast to the hierarchical view, all ventral stream regions corresponded best to later model layers. That is, all regions contain high-level information about object category. We hypothesised that this is due to recurrent connections propagating high-level visual information from later regions back to early regions, in contrast to the exclusively feed-forward connectivity of DCNNs. Using task-relevant correspondence with a late DCNN layer akin to a tracer, we used Granger causal modelling to show late-DCNN correspondence in IT drives correspondence in V4. Our analysis suggests, effectively, that no ventral stream region can be appropriately characterised as ‘early’ beyond 70ms after stimulus presentation, challenging hierarchical models. More broadly, we ask what it means for a model component and brain region to correspond: beyond quantifying shared variance, we must consider the functional role in the computation. We also demonstrate that using a DCNN to decode high-level conceptual information from ventral stream produces a general mapping from brain to model activation space, which generalises to novel classes held-out from training data. This suggests future possibilities for brain-machine interface with high-level conceptual information, beyond current designs that interface with the sensorimotor periphery.
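
The substitution logic can be illustrated with a toy example (synthetic data, not the study's pipeline): learn a linear mapping from "brain" responses into a model layer's activation space, drive the model's decision stage with the brain-derived activity, and check how often its choices survive the substitution; all dimensions, the ridge penalty and the random readout are assumptions.

```python
# Conceptual sketch of the substitution test (toy data, not the study's pipeline):
# map "brain" responses into a model layer's activation space, then drive the
# model's decision stage with brain-derived activity.
import numpy as np

rng = np.random.default_rng(4)
n_stim, n_vox, n_units, n_class = 200, 50, 30, 5

layer_acts = rng.normal(size=(n_stim, n_units))            # model layer on the stimuli
brain = layer_acts @ rng.normal(size=(n_units, n_vox))     # synthetic "brain" responses
brain += 0.5 * rng.normal(size=brain.shape)                # measurement noise
readout = rng.normal(size=(n_units, n_class))              # model's decision stage

# ridge regression: brain space -> model layer space
lam = 1.0
W = np.linalg.solve(brain.T @ brain + lam * np.eye(n_vox), brain.T @ layer_acts)

model_choice = (layer_acts @ readout).argmax(axis=1)           # normal model pass
substituted_choice = ((brain @ W) @ readout).argmax(axis=1)    # brain-driven pass
print("choice agreement:", (model_choice == substituted_choice).mean())
```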

Seminar · Neuroscience · Recording

How do we find what we are looking for? The Guided Search 6.0 model

Jeremy Wolfe
Harvard
Oct 26, 2021

The talk will give a tour of Guided Search 6.0 (GS6), the latest evolution of the Guided Search model of visual search. Part 1 describes The Mechanics of Search. Because we cannot recognize more than a few items at a time, selective attention is used to prioritize items for processing. Selective attention to an item allows its features to be bound together into a representation that can be matched to a target template in memory or rejected as a distractor. The binding and recognition of an attended object is modeled as a diffusion process taking > 150 msec/item. Since selection occurs more frequently than that, it follows that multiple items are undergoing recognition at the same time, though asynchronously, making GS6 a hybrid serial and parallel model. If a target is not found, search terminates when an accumulating quitting signal reaches a threshold. Part 2 elaborates on the five sources of Guidance that are combined into a spatial “priority map” to guide the deployment of attention (hence “guided search”). These are (1) top-down and (2) bottom-up feature guidance, (3) prior history (e.g. priming), (4) reward, and (5) scene syntax and semantics. Finally, in Part 3, we will consider the internal representation of what we are searching for; what is often called “the search template”. That search template is really two templates: a guiding template (probably in working memory) and a target template (in long term memory). Put these pieces together and you have GS6.
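
A toy rendering of the GS6 control structure might look like the following (illustrative weights and thresholds, not the model's fitted parameters): several guidance signals are summed into a priority map, attention visits items in priority order, and search terminates either when the target is recognized or when an accumulating quitting signal crosses a threshold.

```python
# Toy sketch of the GS6 control structure (illustrative parameters, not the
# model's fitted values): guidance sources combine into a priority map; search
# quits when an accumulating quitting signal crosses a threshold.
import numpy as np

rng = np.random.default_rng(5)
n_items, target = 12, 3
top_down = rng.random(n_items); top_down[target] += 1.0    # feature match to the template
bottom_up = rng.random(n_items)                            # stimulus salience
history, reward, scene = rng.random(n_items), rng.random(n_items), rng.random(n_items)

priority = 2.0*top_down + 1.0*bottom_up + 0.5*history + 0.5*reward + 0.5*scene
order = np.argsort(-priority)                              # attend items in priority order

quit_signal, quit_threshold = 0.0, 6.0
for n_attended, item in enumerate(order, start=1):
    if item == target:                                     # binding/recognition of the attended item
        print(f"target found after attending {n_attended} item(s)")
        break
    quit_signal += 1.0                                     # accumulate evidence to quit
    if quit_signal >= quit_threshold:
        print("search terminated without finding the target")
        break
```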

Seminar · Neuroscience · Recording

Towards a Theory of Human Visual Reasoning

Ekaterina Shurkova
University of Edinburgh
Oct 14, 2021

Many tasks that are easy for humans are difficult for machines. In particular, while humans excel at tasks that require generalising across problems, machine systems notably struggle. One such task that has received a good amount of attention is the Synthetic Visual Reasoning Test (SVRT). The SVRT consists of a range of problems where simple visual stimuli must be categorised into one of two categories based on an unknown rule that must be induced. Conventional machine learning approaches perform well only when trained to categorise based on a single rule and are unable to generalise, without extensive additional training, to tasks with any additional rules. Multiple theories of higher-level cognition posit that humans solve such tasks using structured relational representations. Specifically, people learn rules based on structured representations that generalise to novel instances quickly and easily. We believe it is possible to model this approach in a single system which learns all the required relational representations from scratch and performs tasks such as SVRT in a single run. Here, we present a system which expands the DORA/LISA architecture and augments the existing model with principally novel components, namely a) visual reasoning based on the established theories of recognition by components; and b) the process of learning complex relational representations by synthesis (in addition to learning by analysis). The proposed augmented model matches human behaviour on SVRT problems. Moreover, the proposed system stands as perhaps a more realistic account of human cognition: rather than using tools that have proven successful in machine learning to inform psychological theorising, we use established psychological theories to inform the development of a machine system.

Seminar · Neuroscience · Recording

Encoding and perceiving the texture of sounds: auditory midbrain codes for recognizing and categorizing auditory texture and for listening in noise

Monty Escabi
University of Connecticut
Oct 1, 2021

Natural soundscapes, such as those of a forest, a busy restaurant, or a busy intersection, are generally composed of a cacophony of sounds that the brain needs to interpret either independently or collectively. In certain instances, sounds - such as from moving cars, sirens, and people talking - are perceived in unison and are recognized collectively as a single sound (e.g., city noise). In other instances, such as for the cocktail party problem, multiple sounds compete for attention so that the surrounding background noise (e.g., speech babble) interferes with the perception of a single sound source (e.g., a single talker). I will describe results from my lab on the perception and neural representation of auditory textures. Textures, such as those from a babbling brook, restaurant noise, or speech babble, are stationary sounds consisting of multiple independent sound sources that can be quantitatively defined by summary statistics of an auditory model (McDermott & Simoncelli 2011). How and where in the auditory system summary statistics are represented, and which neural codes potentially contribute to their perception, are largely unknown, however. Using high-density multi-channel recordings from the auditory midbrain of unanesthetized rabbits and complementary perceptual studies on human listeners, I will first describe neural and perceptual strategies for encoding and perceiving auditory textures. I will demonstrate how distinct statistics of sounds, including the sound spectrum and high-order statistics related to the temporal and spectral correlation structure of sounds, contribute to texture perception and are reflected in neural activity. Using decoding methods, I will then demonstrate how various low- and high-order neural response statistics can differentially contribute towards a variety of auditory tasks including texture recognition, discrimination, and categorization. Finally, I will show examples from our recent studies on how high-order sound statistics and accompanying neural activity underlie difficulties in recognizing speech in background noise.

Seminar · Neuroscience · Recording

Seeing with technology: Exchanging the senses with sensory substitution and augmentation

Michael Proulx
University of Bath
Sep 30, 2021

What is perception? Our sensory modalities transmit information about the external world into electrochemical signals that somehow give rise to our conscious experience of our environment. Normally there is too much information to be processed in any given moment, and the mechanisms of attention focus the limited resources of the mind to some information at the expense of others. My research has advanced from first examining visual perception and attention to now examine how multisensory processing contributes to perception and cognition. There are fundamental constraints on how much information can be processed by the different senses on their own and in combination. Here I will explore information processing from the perspective of sensory substitution and augmentation, and how "seeing" with the ears and tongue can advance fundamental and translational research.

Seminar · Neuroscience

Themes and Variations: Circuit mechanisms of behavioral evolution

Vanessa Ruta
The Rockefeller University, New York, USA
Sep 29, 2021

Animals exhibit extraordinary variation in their behavior, yet little is known about the neural mechanisms that generate this diversity. My lab has been taking advantage of the rapid diversification of male courtship behaviors in Drosophila to glean insight into how evolution shapes the nervous system to generate species-specific behaviors. By translating neurogenetic tools from D. melanogaster to closely related Drosophila species, we have begun to directly compare the homologous neural circuits and pinpoint sites of adaptive change. Across species, P1 neurons serve as a conserved node in regulating male courtship: these neurons are selectively activated by the sensory cues indicative of an appropriate mate and their activation triggers enduring courtship displays. We have been examining how different sensory pathways converge onto P1 neurons to regulate a male’s state of arousal, honing his pursuit of a prospective partner. Moreover, by performing cross-species comparison of these circuits, we have begun to gain insight into how reweighting of sensory inputs to P1 neurons underlies species-specific mate recognition. Our results suggest how variation at flexible nodes within the nervous system can serve as a substrate for behavioral evolution, shedding light on the types of changes that are possible and preferable within brain circuits.

Seminar · Neuroscience · Recording

Analogical Reasoning Plus: Why Dissimilarities Matter

Patricia A. Alexander
University of Maryland
Sep 23, 2021

Analogical reasoning remains foundational to the human ability to forge meaningful patterns within the sea of information that continually inundates the senses. Yet, meaningful patterns rely not only on the recognition of attributional similarities but also dissimilarities. Just as the perception of images rests on the juxtaposition of lightness and darkness, reasoning relationally requires systematic attention to both similarities and dissimilarities. With that awareness, my colleagues and I have expanded the study of relational reasoning beyond analogous reasoning and attributional similarities to highlight forms based on the nature of core dissimilarities: anomalous, antinomous, and antithetical reasoning. In this presentation, I will delineate the character of these relational reasoning forms; summarize procedures and measures used to assess them; overview key research findings; and describe how the forms of relational reasoning work together in the performance of complex problem solving. Finally, I will share critical next steps for research which has implications for instructional practice.

Seminar · Neuroscience · Recording

Analyzing Retinal Disease Using Electron Microscopic Connectomics

John Dowling
Harvard University
Sep 15, 2021

John E. Dowling received his AB and PhD from Harvard University. He taught in the Biology Department at Harvard from 1961 to 1964, first as an Instructor, then as assistant professor. In 1964 he moved to Johns Hopkins University, where he held an appointment as associate professor of Ophthalmology and Biophysics. He returned to Harvard as professor of Biology in 1971, was the Maria Moors Cabot Professor of Natural Sciences from 1971-2001, Harvard College professor from 1999-2004 and is presently the Gordon and Llura Gund Professor of Neurosciences. Dowling was chairman of the Biology Department at Harvard from 1975 to 1978 and served as associate dean of the faculty of Arts and Sciences from 1980 to 1984. He was Master of Leverett House at Harvard from 1981-1998 and currently serves as president of the Corporation of The Marine Biological Laboratory in Woods Hole. He is a Fellow of the American Academy of Arts and Sciences, a member of the National Academy of Sciences and a member of the American Philosophical Society.

Awards that Dowling has received include the Friedenwald Medal from the Association for Research in Vision and Ophthalmology in 1970, the Annual Award of the New England Ophthalmological Society in 1979, the Retinal Research Foundation Award for Retinal Research in 1981, an Alcon Vision Research Recognition Award in 1986, a National Eye Institute MERIT award in 1987, the Von Sallman Prize in 1992, the Helen Keller Prize for Vision Research in 2000 and the Llura Ligget Gund Award for Lifetime Achievement and Recognition of Contribution to the Foundation Fighting Blindness in 2001. He was granted an honorary MD degree by the University of Lund (Sweden) in 1982 and an honorary Doctor of Laws degree from Dalhousie University (Canada) in 2012.

Dowling's research interests have focused on the vertebrate retina as a model piece of the brain. He and his collaborators have long been interested in the functional organization of the retina, studying its synaptic organization, the electrical responses of the retinal neurons, and the mechanisms underlying neurotransmission and neuromodulation in the retina. Dowling became interested in zebrafish as a system in which one could explore the development and genetics of the vertebrate retina about 20 years ago. Part of his research team has focused on retinal development in zebrafish and the role of retinoic acid in early eye and photoreceptor development. A second group has developed behavioral tests to isolate mutations, both recessive and dominant, specific to the visual system.

Seminar · Neuroscience · Recording

Music training effects on multisensory and cross-sensory transfer processing: from cross-sectional to RCT studies

Karin Petrini
University of Bath
Sep 9, 2021

Seminar · Neuroscience · Recording

Face distortions as a window into face perception

Brad Duchaine
Dartmouth
Aug 3, 2021

Prosopometamorphopsia (PMO) is a disorder characterized by face perception distortions. People with PMO see facial features that appear to melt, stretch, and change size and position. I'll discuss research on PMO carried out by my lab and others that sheds light on the cognitive and neural organization of face perception. https://facedistortion.faceblind.org/

Seminar · Neuroscience

Imaging memory consolidation in wakefulness and sleep

Monika Schönauer
Albert-Ludwigs-University of Freiburg
Jun 17, 2021

New memories are initially labile and have to be consolidated into stable long-term representations. Current theories assume that this is supported by a shift in the neural substrate that supports the memory, away from rapidly plastic hippocampal networks towards more stable representations in the neocortex. Rehearsal, i.e. repeated activation of the neural circuits that store a memory, is thought to crucially contribute to the formation of neocortical long-term memory representations. This may either be achieved by repeated study during wakefulness or by a covert reactivation of memory traces during offline periods, such as quiet rest or sleep. My research investigates memory consolidation in the human brain with multivariate decoding of neural processing and non-invasive in-vivo imaging of microstructural plasticity. Using pattern classification on recordings of electrical brain activity, I show that we spontaneously reprocess memories during offline periods in both sleep and wakefulness, and that this reactivation benefits memory retention. In related work, we demonstrate that active rehearsal of learning material during wakefulness can facilitate rapid systems consolidation, leading to an immediate formation of lasting memory engrams in the neocortex. These representations satisfy general mnemonic criteria and can not only be imaged with fMRI while memories are actively processed but can also be observed with diffusion-weighted imaging when the traces lie dormant. Importantly, sleep seems to hold a crucial role in stabilizing the changes in the contribution of memory systems initiated by rehearsal during wakefulness, indicating that online and offline reactivation might jointly contribute to forming long-term memories. Characterizing the covert processes that decide whether, and in which ways, our brains store new information is crucial to our understanding of memory formation. Directly imaging consolidation thus opens great opportunities for memory research.

Seminar · Neuroscience

The quest for the cortical algorithm

Helmut Linde
Merck KGaA, Darmstadt, Germany
Jun 17, 2021

The cortical algorithm hypothesis states that there is one common computational framework that solves diverse cognitive problems such as vision, voice recognition and motion control. In my talk, I propose a strategy to guide the search for this algorithm and present a few ideas on what some of its components might look like. I'll explain why a highly interdisciplinary approach spanning neuroscience, computer science, mathematics and physics is needed to make further progress on this important question.

Seminar · Neuroscience

Towards a neurally mechanistic understanding of visual cognition

Kohitij Kar
Massachusetts Institute of Technology
Jun 14, 2021

I am interested in developing a neurally mechanistic understanding of how primate brains represent the world through their visual systems and how such representations enable a remarkable set of intelligent behaviors. In this talk, I will primarily highlight aspects of my current research that focus on dissecting the brain circuits that support core object recognition behavior (primates’ ability to categorize objects within hundreds of milliseconds) in non-human primates. On the one hand, my work empirically examines how well computational models of the primate ventral visual pathways embed knowledge of visual brain function (e.g., Bashivan*, Kar*, DiCarlo, Science, 2019). On the other hand, my work has led to various functional and architectural insights that help improve such brain models. For instance, we have exposed the necessity of recurrent computations in primate core object recognition (Kar et al., Nature Neuroscience, 2019), one that is strikingly missing from most feedforward artificial neural network models. Specifically, we have observed that the primate ventral stream requires fast recurrent processing via ventrolateral PFC for robust core object recognition (Kar and DiCarlo, Neuron, 2021). In addition, I have been developing various chemogenetic strategies to causally target specific bidirectional neural circuits in the macaque brain during multiple object recognition tasks to further probe their relevance for this behavior. I plan to transform these data and insights into tangible progress in neuroscience via collaborations with various computational groups and by building improved brain models of object recognition. I hope to end the talk with a brief glimpse of some of my planned future work!

Seminar · Neuroscience · Recording

MRI pattern recognition in leukodystrophies

Nicole Wolf
Emma Children’s Hospital, Amsterdam University Medical Centre, the Netherlands
Jun 8, 2021

Seminar · Neuroscience

Untitled Seminar

Sean Millard (Brisbane, Australia), Patricia Jusuf (Melbourne, Australia), Victor Borrell (Alicante, Spain), Louise Cheng (Melbourne, Australia)
May 27, 2021

Sean Millard will present "From brain wiring to synaptic physiology - reuse of a cell recognition molecule to carry out higher order nervous system functions". Then, Patricia Jusuf will talk about "Visual vertebrate pipeline for assessing novel human GWAS gene candidates". Victor Borrell will deal with the "Genetic evolution of cerebral cortex size determinants" and Louise Cheng will present

Seminar · Neuroscience · Recording

The neuroscience of color and what makes primates special

Bevil Conway
NIH
May 11, 2021

Among mammals, excellent color vision has evolved only in certain non-human primates. And yet, color is often assumed to be just a low-level stimulus feature with a modest role in encoding and recognizing objects. The rationale for this dogma is compelling: object recognition is excellent in grayscale images (consider black-and-white movies, where faces, places, objects, and story are readily apparent). In my talk I will discuss experiments in which we used color as a tool to uncover an organizational plan in inferior temporal cortex (parallel, multistage processing for places, faces, colors, and objects) and a visual-stimulus functional representation in prefrontal cortex (PFC). The discovery of an extensive network of color-biased domains within IT and PFC, regions implicated in high-level object vision and executive functions, compels a re-evaluation of the role of color in behavior. I will discuss behavioral studies prompted by the neurobiology that uncover a universal principle for color categorization across languages, the first systematic study of the color statistics of objects and a chromatic mechanism by which the brain may compute animacy, and a surprising paradoxical impact of memory on face color. Taken together, my talk will put forward the argument that color is not primarily for object recognition, but rather for the assessment of the likely behavioral relevance, or meaning, of the stuff we see.

Seminar · Neuroscience · Recording

What does the primary visual cortex tell us about object recognition?

Tiago Marques
MIT
Apr 21, 2021

Seminar · Neuroscience · Recording

Hebbian learning, its inference, and brain oscillation

Sukbin Lim
NYU Shanghai
Mar 24, 2021

Despite the recent success of deep learning in artificial intelligence, the lack of biological plausibility and of labeled data in natural learning still poses a challenge to understanding biological learning. At the other extreme lies Hebbian learning, the simplest local and unsupervised rule, yet one considered to be computationally less efficient. In this talk, I will introduce a novel method to infer the form of Hebbian learning from in vivo data. Applying the method to data obtained from the monkey inferior temporal cortex during a recognition task indicates how Hebbian learning changes the dynamic properties of the circuits and may promote brain oscillation. Notably, recent electrophysiological data observed in rodent V1 showed that the effect of visual experience on direction selectivity was similar to that observed in monkey data and provided strong validation of the asymmetric changes of feedforward and recurrent synaptic strengths inferred from monkey data. This may suggest a general learning principle underlying the same computation, such as familiarity detection, across different features represented in different brain regions.
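
As background for readers unfamiliar with the rule, a minimal textbook-style Hebbian sketch is given below (generic outer-product form, not the specific rule inferred in the talk): repeated presentation of one stimulus strengthens recurrent weights along that pattern, so the recurrent drive separates familiar from novel inputs, i.e. a simple familiarity signal; the learning rate and pattern statistics are arbitrary.

```python
# Minimal sketch of a local Hebbian update (generic textbook form, not the rule
# inferred in the talk): repeated exposure to one stimulus shapes recurrent
# weights so the network responds differently to familiar vs novel inputs.
import numpy as np

rng = np.random.default_rng(6)
n = 50
W = np.zeros((n, n))                          # recurrent weights
familiar = rng.normal(size=n); familiar /= np.linalg.norm(familiar)
eta = 0.02

for _ in range(100):                          # repeated exposure
    r = familiar                              # steady response to the stimulus
    W += eta * np.outer(r, r)                 # Hebbian: dW_ij ~ r_i r_j
    np.fill_diagonal(W, 0.0)

novel = rng.normal(size=n); novel /= np.linalg.norm(novel)
print("recurrent drive, familiar:", float(familiar @ W @ familiar))
print("recurrent drive, novel:   ", float(novel @ W @ novel))
```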

ePoster · Neuroscience

Non-Human Recognition of Orthography: How is it implemented and how does it differ from Human orthographic processing

Benjamin Gagl, Ivonne Weyers, Susanne Eisenhauer, Christian Fiebach, Michael Colombo, Damian Scarf, Johannes Ziegler, Jonathan Grainger, Onur Güntürkün, Jutta Mueller

Bernstein Conference 2024

ePoster · Neuroscience

Action recognition best explains neural activity in cuneate nucleus

Alessandro Marin Vargas, Axel Bisi, Alberto Chiappa, Chris Versteeg, Lee E. Miller, Alexander Mathis

COSYNE 2022

ePoster · Neuroscience

Do better object recognition models improve the generalization gap in neural predictivity?

Yifei Ren, Pouya Bashivan

COSYNE 2022

ePoster · Neuroscience

Linking neural dynamics across macaque V4, IT, and PFC to trial-by-trial object recognition behavior

Kohitij Kar, Reese Green, James DiCarlo

COSYNE 2022

ePoster · Neuroscience

Distinct roles of excitatory and inhibitory neurons in the macaque IT cortex in object recognition

Sachi Sanghavi & Kohitij Kar

COSYNE 2023

ePoster · Neuroscience

Leveraging computational and animal models of vision to probe atypical emotion recognition in autism

Hamid Ramezanpour & Kohitij Kar

COSYNE 2023

ePoster · Neuroscience

On-line SEUDO for real-time cell recognition in Calcium Imaging

Iuliia Dmitrieva, Sergey Babkin, Adam Charles

COSYNE 2023

ePoster · Neuroscience

Spatial-frequency channels for object recognition by neural networks are twice as wide as those of humans

Ajay Subramanian, Elena Sizikova, Najib Majaj, Denis G. Pelli

COSYNE 2023

ePoster · Neuroscience

Temporal pattern recognition in retinal ganglion cells is mediated by dynamical inhibitory synapses

Simone Ebert, Thomas Buffet, Semihchan Sermat, Olivier Marre, Bruno Cessac

COSYNE 2023

ePoster · Neuroscience

Geometric Signatures of Speech Recognition: Insights from Deep Neural Networks to the Brain

Jiaqi Shang, Shailee Jain, Haim Sompolinsky, Edward Chang

COSYNE 2025

ePoster · Neuroscience

The analysis of the OXT-DA interaction causing social recognition deficit in Syntaxin1A KO

Tomonori Fujiwara, Kofuji Takefumi, Tatsuya Mishima, Toshiki Furukawa

FENS Forum 2024

ePoster · Neuroscience

Behavioral impacts of simulated microgravity on male mice: Locomotion, social interactions and memory in a novel object recognition task

Jean-Luc Morel, Margot Issertine, Thomas Brioche, Angèle Chopard, Laurence Vico, Julie Le Merrer, Théo Fovet, Jérôme Becker

FENS Forum 2024

ePoster · Neuroscience

The cortical amygdala mediates individual recognition in mice

Manuel Esteban Vila Martín, Anna Teruel Sanchis, Camila Savarelli Balsamo, Lorena Jiménez Romero, Joana Martínez Ricós, Vicent Teruel Martí, Enrique Lanuza

FENS Forum 2024

ePoster · Neuroscience

A deep learning approach for the recognition of behaviors in the forced swim test

Andrea Della Valle, Sara De Carlo, Francesca Petetta, Gregorio Sonsini, Sikandar Ali, Roberto Ciccocioppo, Massimo Ubaldi

FENS Forum 2024

ePoster · Neuroscience

Direct electrical stimulation of the human amygdala enhances recognition memory for objects but not scenes

Krista Wahlstrom, Justin Campbell, Martina Hollearn, Markus Adamek, James Swift, Lou Blanpain, Tao Xie, Peter Brunner, Stephan Hamann, Amir Arain, Lawrence Eisenman, Joseph Manns, Jon Willie, Cory Inman

FENS Forum 2024

ePoster · Neuroscience

Two distinct ways to form long-term object recognition memory during sleep and wakefulness

Max Harkotte, Anuck Sawangjit, Carlos Oyanedel, Niels Niethard, Jan Born, Marion Inostroza

FENS Forum 2024

ePoster · Neuroscience

Early disruption in social recognition and its impact on episodic memory in triple transgenic mice model of Alzheimer’s disease

Anna Teruel-Sanchis, Manuel Esteban Vila-Martín, Camila Alexia Savarelli-Balsamo, Lorena Jiménez-Romero, Antonio García-de-León, Javier Zaplana-Gil, Joana Martinez-Ricos, Vicent Teruel-Martí, Enrique Lanuza-Navarro

FENS Forum 2024

ePoster · Neuroscience

Evaluation of novel object recognition test results of rats injected with intracerebroventricular streptozocin to develop Alzheimer's disease models

Berna Özen, Hasan Raci Yananlı

FENS Forum 2024

ePoster · Neuroscience

HBK-15 rescues recognition memory in MK-801- and stress-induced cognitive impairments in female mice

Aleksandra Koszałka, Kinga Sałaciak, Klaudia Lustyk, Henryk Marona, Karolina Pytka

FENS Forum 2024

ePoster · Neuroscience

Homecage-based unsupervised novel object recognition in mice

Sui Hin Ho, Nejc Kejzar, Marius Bauza, Julija Krupic

FENS Forum 2024

ePoster · Neuroscience

Interaction of sex and sleep on performance at the novel object recognition task in mice

Farahnaz Yazdanpanah Faragheh, Julie Seibt

FENS Forum 2024

ePoster · Neuroscience

Investigating the recruitment of parvalbumin and somatostatin interneurons into engrams for associative recognition memory

Lucinda Hamilton-Burns, Clea Warburton, Gareth Barker

FENS Forum 2024

ePoster · Neuroscience

Mouse can recognize other individuals: Maternal exposure to dioxin does not affect identification but perturbs the recognition ability of other individuals

Hana Ichihara, Fumihiko Maekawa, Masaki Kakeyama

FENS Forum 2024

ePoster · Neuroscience

Myoelectric gesture recognition in patients with spinal cord injury using a medium-density EMG system

Elena Losanno, Matteo Ceradini, Vincent Mendez, Firman Isma Serdana, Gabriele Righi, Fiorenzo Artoni, Giulio Del Popolo, Solaiman Shokur, Silvestro Micera

FENS Forum 2024

ePoster · Neuroscience

Noradrenergic modulation of recognition memory in male and female mice

Lorena Roselló-Jiménez, Olga Rodríguez-Borillo, Raúl Pastor, Laura Font

FENS Forum 2024

ePoster · Neuroscience

The processing of spatial frequencies through time in visual word recognition

Clémence Bertrand Pilon, Martin Arguin

FENS Forum 2024

ePoster · Neuroscience

Recognition of complex spatial environments showed dimorphic patterns of theta (4-8 Hz) activity

Joaquín Castillo Escamilla, María del Mar Salvador Viñas, José Manuel Cimadevilla Redondo

FENS Forum 2024

ePoster · Neuroscience

Resonant song recognition in crickets

Winston Mann, Jan Clemens

FENS Forum 2024

ePoster · Neuroscience

Robustness and evolvability in a model of a pattern recognition network

Daesung Cho, Jan Clemens

FENS Forum 2024

ePoster · Neuroscience

Scent of a memory: Dissecting the vomeronasal-hippocampal axis in social recognition

Camila Alexia Savarelli Balsamo, Manuel Esteban Vila-Martín, Anna Teruel-Sanchis, Lorena Jiménez-Romero, María Sancho-Alonso, Joana Martínez-Ricós, Vicent Teruel-Martí, Enrique Lanuza

FENS Forum 2024

ePoster · Neuroscience

Sex-dependent effects of voluntary physical exercise on object recognition memory restoration after traumatic brain injury in middle-aged rats

David Costa, Meritxell Torras-Garcia, Odette Estrella, Isabel Portell-Cortés, Gemma Manich, Beatriz Almolda, Berta González, Margalida Coll-Andreu

FENS Forum 2024

ePoster · Neuroscience

Sleepless nights, vanishing faces: The effect of sleep deprivation on long-term social recognition memory in mice

Adithya Sarma, Evgeniya Tyumeneva, Junfei Cao, Soraya Smit, Marit Bonne, Fleur Meijer, Jean-Christophe Billeter, Robbert Havekes

FENS Forum 2024

ePoster · Neuroscience

Src-NADH dehydrogenase subunit 2 complex and recognition memory of imprinting in domestic chicks

Lela Chitadze, Maia Meparishvili, Vincenzo Lagani, Zaza Khuchua, Brian McCabe, Revaz Solomonia

FENS Forum 2024

ePoster · Neuroscience

Unraveling the mechanisms underlying corticosterone-induced impairment in novel object recognition in mice

Julia Welte, Urszula Skupio, Roman Serrat, Francisca Julio-Kalajzić, Doriane Gisquet, Astrid Cannich, Luigi Bellocchio, Francis Chaouloff, Giovanni Marsicano, Sandrine Pouvreau

FENS Forum 2024

ePoster · Neuroscience

A virtual-reality task to investigate multisensory object recognition in mice

Veronique Stokkers, Guido T Meijer, Smit Zayel, Jeroen J Bos, Francesco P Battaglia

FENS Forum 2024

ePoster · Neuroscience

Early olfactory processing is necessary for the maturation of limbic-hippocampal network and recognition

Yu-Nan Chen

Neuromatch 5
