recognition
Single-neuron correlates of perception and memory in the human medial temporal lobe
The human medial temporal lobe contains neurons that respond selectively to the semantic contents of a presented stimulus. These "concept cells" may respond to very different pictures of a given person and even to their written or spoken name. Their response latency is far longer than necessary for object recognition, they follow subjective, conscious perception, and they are found in brain regions that are crucial for declarative memory formation. It has thus been hypothesized that they may represent the semantic "building blocks" of episodic memories. In this talk, I will present data from single-unit recordings in the hippocampus, entorhinal cortex, parahippocampal cortex, and amygdala during paradigms involving object recognition and conscious perception as well as encoding of episodic memories, in order to characterize the role of concept cells in these cognitive functions.
Contentopic mapping and object dimensionality - a novel understanding of the organization of object knowledge
Our ability to recognize an object amongst many others is one of the most important features of the human mind. However, object recognition requires tremendous computational effort: we need to solve a complex and recursive problem with ease and proficiency. This challenging feat depends on the implementation of an effective organization of knowledge in the brain. Here I put forth a novel understanding of how object knowledge is organized in the brain, proposing that this organization follows key object-related dimensions, analogously to how sensory information is organized in the brain. Moreover, I will also put forth that this knowledge is topographically laid out on the cortical surface according to these object-related dimensions that code for different types of representational content – I call this contentopic mapping. I will show a combination of fMRI and behavioral data to support these hypotheses and present a principled way to explore the multidimensionality of object processing.
Principles of Cognitive Control over Task Focus and Task Switching
2024 BACN Mid-Career Prize Lecture. Adaptive behavior requires the ability to focus on a current task and protect it from distraction (cognitive stability), and to rapidly switch tasks when circumstances change (cognitive flexibility). How people control task focus and switch-readiness has therefore been the target of burgeoning research literatures. Here, I review and integrate these literatures to derive a cognitive architecture and functional rules underlying the regulation of stability and flexibility. I propose that task focus and switch-readiness are supported by independent mechanisms whose strategic regulation is nevertheless governed by shared principles: both stability and flexibility are matched to anticipated challenges via an incremental, online learner that nudges control up or down based on the recent history of task demands (a recency heuristic), as well as via episodic reinstatement when the current context matches a past experience (a recognition heuristic).
Of glia and macrophages, signaling hubs in development and homeostasis
We are interested in the biology of macrophages, which represent the first line of defense against pathogens. In Drosophila, the embryonic hemocytes arise from the mesoderm, whereas glial cells arise from multipotent precursors in the neurogenic region. These cell types represent, respectively, the macrophages located outside and within the nervous system (the latter similar to vertebrate microglia). Thus, despite their different origins, hemocytes and glia display common functions. In addition, both cell types express the Glide/Gcm transcription factor, which plays an evolutionarily conserved role as an anti-inflammatory factor. Moreover, embryonic hemocytes play an evolutionarily conserved and fundamental role in development. The ability to migrate and to contact different tissues/organs most likely allows macrophages to function as signaling hubs. The function of macrophages beyond the recognition of non-self calls for revisiting the biology of these heterogeneous and plastic cells in physiological and pathological conditions across evolution.
Decoding mental conflict between reward and curiosity in decision-making
Humans and animals are not always rational. They not only rationally exploit rewards but also explore their environment out of curiosity. However, the mechanism of such curiosity-driven irrational behavior is largely unknown. Here, we developed a decision-making model for a two-choice task based on the free energy principle, a theory integrating recognition and action selection. The model describes irrational behaviors depending on the curiosity level. We also proposed a machine learning method to decode temporal curiosity from behavioral data. By applying it to rat behavioral data, we found that the rat had negative curiosity, reflecting a conservative tendency to stick to more certain options, and that the level of curiosity was upregulated by the expected future information to be obtained from an uncertain environment. Our decoding approach can be a fundamental tool for identifying the neural basis of reward–curiosity conflicts. Furthermore, it could be effective in diagnosing mental disorders.
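As a minimal illustration of how such a curiosity term can enter choice (an assumed form, not the authors' free-energy model), each option's value can be taken as its estimated reward plus a curiosity-weighted uncertainty bonus, so that negative curiosity yields exactly the conservative, certainty-seeking choices described above:

    import numpy as np

    def choice_probs(reward_est, visit_counts, curiosity, beta=5.0):
        # Uncertainty proxy: options sampled less often are more uncertain.
        bonus = 1.0 / np.sqrt(1.0 + np.asarray(visit_counts, dtype=float))
        value = np.asarray(reward_est, dtype=float) + curiosity * bonus
        z = np.exp(beta * (value - value.max()))  # softmax action selection
        return z / z.sum()

    # Negative curiosity avoids the rarely sampled option; positive seeks it.
    print(choice_probs([0.6, 0.5], [50, 5], curiosity=-0.3))
    print(choice_probs([0.6, 0.5], [50, 5], curiosity=+0.3))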
Vision Unveiled: Understanding Face Perception in Children Treated for Congenital Blindness
Despite her still poor visual acuity and minimal visual experience, a 2- to 3-month-old baby will reliably respond to facial expressions, smiling back at her caretaker or older sibling. But what if that same baby had been deprived of her early visual experience? Will she be able to appropriately respond to seemingly mundane interactions, such as a peer’s facial expression, if she begins seeing at the age of 10? My work is part of Project Prakash, a dual humanitarian/scientific mission to identify and treat curably blind children in India and then study how their brains learn to make sense of the visual world when their visual journey begins late in life. In my talk, I will give a brief overview of Project Prakash and present findings from one of my primary lines of research: plasticity of face perception with late sight onset. Specifically, I will discuss a mixed-methods effort to probe and explain the differential windows of plasticity that we find across different aspects of distributed face recognition, from distinguishing a face from a nonface early in the developmental trajectory, to recognizing facial expressions, identifying individuals, and even identifying one’s own caretaker. I will draw connections between our empirical findings and our recent theoretical work hypothesizing that children with late sight onset may suffer persistent face identification difficulties because of the unusual acuity progression they experience relative to typically developing infants. Finally, time permitting, I will point to potential implications of our findings in supporting newly sighted children as they transition back into society and school, given that their needs and possibilities change significantly upon the introduction of vision into their lives.
Microbial modulation of zebrafish behavior and brain development
There is growing recognition that host-associated microbiotas modulate intrinsic neurodevelopmental programs including those underlying human social behavior. Despite this awareness, the fundamental processes are generally not understood. We discovered that the zebrafish microbiota is necessary for normal social behavior. By examining neuronal correlates of behavior, we found that the microbiota restrains neurite complexity and targeting of key forebrain neurons within the social behavior circuitry. The microbiota is also necessary for both localization and molecular functions of forebrain microglia, brain-resident phagocytes that remodel neuronal arbors. In particular, the microbiota promotes expression of complement signaling pathway components important for synapse remodeling. Our work provides evidence that the microbiota modulates zebrafish social behavior by stimulating microglial remodeling of forebrain circuits during early neurodevelopment and suggests molecular pathways for therapeutic interventions during atypical neurodevelopment.
Learning to see stuff
Humans are very good at visually recognizing materials and inferring their properties. Without touching surfaces, we can usually tell what they would feel like, and we enjoy vivid visual intuitions about how they typically behave. This is impressive because the retinal image that the visual system receives as input is the result of complex interactions between many physical processes. Somehow the brain has to disentangle these different factors. I will present some recent work in which we show that an unsupervised neural network trained on images of surfaces spontaneously learns to disentangle reflectance, lighting and shape. However, the disentanglement is not perfect, and we find that as a result the network not only predicts the broad successes of human gloss perception, but also the specific pattern of errors that humans exhibit on an image-by-image basis. I will argue this has important implications for thinking about appearance and vision more broadly.
Analyzing artificial neural networks to understand the brain
In the first part of this talk I will present work showing that recurrent neural networks can replicate broad behavioral patterns associated with dynamic visual object recognition in humans. An analysis of these networks shows that different types of recurrence use different strategies to solve the object recognition problem. The similarities between artificial neural networks and the brain present another opportunity, beyond using them just as models of biological processing. In the second part of this talk, I will discuss—and solicit feedback on—a proposed research plan for testing a wide range of analysis tools frequently applied to neural data on artificial neural networks. I will present the motivation for this approach as well as the form the results could take and how this would benefit neuroscience.
Training Dynamic Spiking Neural Networks via Forward Propagation Through Time
With recent advances in learning algorithms, recurrent networks of spiking neurons are achieving performance competitive with standard recurrent neural networks. Still, these learning algorithms are limited to small networks of simple spiking neurons and modest-length temporal sequences, as they impose high memory requirements, have difficulty training complex neuron models, and are incompatible with online learning. Taking inspiration from the concept of Liquid Time-Constants (LTCs), we introduce a novel class of spiking neurons, the Liquid Time-Constant Spiking Neuron (LTC-SN), resulting in functionality similar to the gating operation in LSTMs. We integrate these neurons in SNNs that are trained with FPTT and demonstrate that LTC-SNNs trained in this way outperform various SNNs trained with BPTT on long sequences while enabling online learning and drastically reducing memory complexity. We show this for several classical benchmarks whose sequence length can easily be varied, like the Add Task and the DVS-gesture benchmark. We also show how FPTT-trained LTC-SNNs can be applied to large convolutional SNNs, where we demonstrate a novel state of the art for online learning in SNNs on a number of standard benchmarks (S-MNIST, R-MNIST, DVS-GESTURE), and also show that large feedforward SNNs can be trained successfully in an online manner to performance near (Fashion-MNIST, DVS-CIFAR10) or exceeding (PS-MNIST, R-MNIST) that obtained with offline BPTT. Finally, the training and memory efficiency of FPTT enables us to directly train SNNs in an end-to-end manner at network sizes and complexities that were previously infeasible: we demonstrate this by training, end-to-end, the first deep and performant spiking neural network for object localization and recognition. Taken together, our contributions enable for the first time the training of large-scale, complex spiking neural network architectures online and on long temporal sequences.
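For orientation, here is a hedged sketch of the per-timestep FPTT update (following Kag & Saligrama's published formulation, as we understand it) together with a toy liquid-time-constant spiking neuron; both are simplified assumptions for illustration rather than the talk's exact models:

    import numpy as np

    def fptt_step(w, w_bar, grad_fn, lr=1e-2, alpha=1.0):
        # grad_fn(w) returns the gradient of the loss at the *current*
        # timestep only, so no unrolled graph (as in BPTT) is stored.
        g = grad_fn(w) + alpha * (w - w_bar)          # loss + proximal term
        w_new = w - lr * g
        w_bar_new = 0.5 * (w_bar + w_new) - (0.5 / alpha) * grad_fn(w_new)
        return w_new, w_bar_new

    def ltc_sn_step(v, x, w_in, w_tau, thr=1.0, dt=1.0):
        # Toy LTC spiking neuron (assumed form): the membrane time constant
        # is gated by the input, giving LSTM-like gating with spiking output.
        tau = 1.0 + 10.0 / (1.0 + np.exp(-(w_tau @ x)))  # input-dependent tau
        v = v + (dt / tau) * (w_in @ x - v)              # leaky integration
        spike = float(v > thr)
        return v * (1.0 - spike), spike                  # reset on spike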
Behavioral Timescale Synaptic Plasticity (BTSP) for biologically plausible credit assignment across multiple layers via top-down gating of dendritic plasticity
A central problem in biological learning is how information about the outcome of a decision or behavior can be used to reliably guide learning across distributed neural circuits while obeying biological constraints. This “credit assignment” problem is commonly solved in artificial neural networks through supervised gradient descent and the backpropagation algorithm. In contrast, biological learning is typically modelled using unsupervised Hebbian learning rules. While these rules only use local information to update synaptic weights, and are sometimes combined with weight constraints to reflect a diversity of excitatory (only positive weights) and inhibitory (only negative weights) cell types, they do not prescribe a clear mechanism for how to coordinate learning across multiple layers and propagate error information accurately across the network. In recent years, several groups have drawn inspiration from the known dendritic non-linearities of pyramidal neurons to propose new learning rules and network architectures that enable biologically plausible multi-layer learning by processing error information in segregated dendrites. Meanwhile, recent experimental results from the hippocampus have revealed a new form of plasticity—Behavioral Timescale Synaptic Plasticity (BTSP)—in which large dendritic depolarizations rapidly reshape synaptic weights and stimulus selectivity with as little as a single stimulus presentation (“one-shot learning”). Here we explore the implications of this new learning rule through a biologically plausible implementation in a rate neuron network. We demonstrate that regulation of dendritic spiking and BTSP by top-down feedback signals can effectively coordinate plasticity across multiple network layers in a simple pattern recognition task. By analyzing hidden feature representations and weight trajectories during learning, we show the differences between networks trained with standard backpropagation, Hebbian learning rules, and BTSP.
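A minimal sketch of the ingredients named above (an assumed simplification, not the authors' implementation): a seconds-long presynaptic eligibility trace, gated by a top-down dendritic plateau signal, rewrites weights in a single presentation:

    import numpy as np

    def btsp_update(w, pre_trace, plateau, eta=0.8, w_max=1.0):
        # pre_trace: slow presynaptic eligibility (behavioral timescale);
        # plateau: 0/1 dendritic gate supplied by top-down feedback,
        # i.e. the credit-assignment signal discussed in the abstract.
        drive = plateau * pre_trace            # overlap gates plasticity
        return w + eta * drive * (w_max - w)   # one-shot move toward w_max

    w = np.zeros(4)
    pre = np.array([1.0, 0.5, 0.0, 0.0])       # recently active inputs
    print(btsp_update(w, pre, plateau=1.0))    # large change after one event
    print(btsp_update(w, pre, plateau=0.0))    # no top-down gate, no change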
Beyond Biologically Plausible Spiking Networks for Neuromorphic Computing
Biologically plausible spiking neural networks (SNNs) are an emerging architecture for deep learning tasks due to their energy efficiency when implemented on neuromorphic hardware. However, many of the biological features are at best irrelevant and at worst counterproductive when evaluated in the context of task performance and suitability for neuromorphic hardware. In this talk, I will present an alternative paradigm to design deep learning architectures with good task performance in real-world benchmarks while maintaining all the advantages of SNNs. We do this by focusing on two main features – event-based computation and activity sparsity. Starting from the performant gated recurrent unit (GRU) deep learning architecture, we modify it to make it event-based and activity-sparse. The resulting event-based GRU (EGRU) is extremely efficient for both training and inference. At the same time, it achieves performance close to conventional deep learning architectures in challenging tasks such as language modelling, gesture recognition and sequential MNIST.
Shallow networks run deep: How peripheral preprocessing facilitates odor classification
Drosophila olfactory sensory hairs ("sensilla") typically house two olfactory receptor neurons (ORNs) which can laterally inhibit each other via electrical ("ephaptic") coupling. ORN pairing is highly stereotyped and genetically determined. Thus, olfactory signals arriving in the Antennal Lobe (AL) have been pre-processed by a fixed and shallow network at the periphery. To uncover the functional significance of this organization, we developed a nonlinear phenomenological model of asymmetrically coupled ORNs responding to odor mixture stimuli. We derived an analytical solution to the ORNs’ dynamics, which shows that the peripheral network can extract the valence of specific odor mixtures via transient amplification. Our model predicts that for efficient read-out of the amplified valence signal there must exist specific patterns of downstream connectivity that reflect the organization at the periphery. Analysis of AL→Lateral Horn (LH) fly connectomic data reveals evidence directly supporting this prediction. We further studied the effect of ephaptic coupling on olfactory processing in the AL→Mushroom Body (MB) pathway. We show that stereotyped ephaptic interactions between ORNs lead to a clustered odor representation of glomerular responses. Such clustering in the AL is an essential assumption of theoretical studies on odor recognition in the MB. Together our work shows that preprocessing of olfactory stimuli by a fixed and shallow network increases sensitivity to specific odor mixtures, and aids in the learning of novel olfactory stimuli. Work led by Palka Puri, in collaboration with Chih-Ying Su and Shiuan-Tze Wu.
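A toy, linearized rendering of the coupled-ORN picture (the form and parameters are assumptions; the model in the talk is nonlinear and solved analytically) shows how asymmetric lateral suppression shapes the transient response ratio that carries mixture valence:

    import numpy as np

    def orn_pair(odor_a, odor_b, c12=0.6, c21=0.2, dt=0.01, T=2.0):
        # Two ORNs in one sensillum; c12 != c21 gives asymmetric coupling.
        r1 = r2 = 0.0
        rates = []
        for _ in range(int(T / dt)):
            r1 += dt * (-r1 + max(odor_a - c12 * r2, 0.0))
            r2 += dt * (-r2 + max(odor_b - c21 * r1, 0.0))
            rates.append((r1, r2))
        return np.array(rates)   # the transient r1/r2 ratio carries valence

    print(orn_pair(1.0, 1.0)[[10, -1]])  # early vs. steady-state responses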
General purpose event-based architectures for deep learning
Biologically plausible spiking neural networks (SNNs) are an emerging architecture for deep learning tasks due to their energy efficiency when implemented on neuromorphic hardware. However, many of the biological features are at best irrelevant and at worst counterproductive when evaluated in the context of task performance and suitability for neuromorphic hardware. In this talk, I will present an alternative paradigm to design deep learning architectures with good task performance in real-world benchmarks while maintaining all the advantages of SNNs. We do this by focusing on two main features – event-based computation and activity sparsity. Starting from the performant gated recurrent unit (GRU) deep learning architecture, we modify it to make it event-based and activity-sparse. The resulting event-based GRU (EGRU) is extremely efficient for both training and inference. At the same time, it achieves performance close to conventional deep learning architectures in challenging tasks such as language modelling, gesture recognition and sequential MNIST.
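In outline, the event-based mechanism can be sketched as follows (an assumed simplification, with the GRU's reset gate omitted): the unit keeps a GRU-style internal state but communicates only when that state crosses a threshold, and the emitted value is subtracted from the state, so downstream units compute sparsely, on events:

    import numpy as np

    def sigmoid(a):
        return 1.0 / (1.0 + np.exp(-a))

    def egru_step(h, x, Wz, Wc, thr=0.5):
        v = np.concatenate([x, h])
        z = sigmoid(Wz @ v)            # update gate, as in a standard GRU
        c = np.tanh(Wc @ v)            # candidate state
        h = (1.0 - z) * h + z * c
        events = h * (h > thr)         # sparse output: only supra-threshold units
        return h - events, events      # subtractive reset after an event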
Neuroscience of socioeconomic status and poverty: Is it actionable?
SES neuroscience, using imaging and other methods, has revealed generalizations of interest for population neuroscience and the study of individual differences. But beyond its scientific interest, SES is a topic of societal importance. Does neuroscience offer any useful insights for promoting socioeconomic justice and reducing the harms of poverty? In this talk I will use research from my own lab and others’ to argue that SES neuroscience has the potential to contribute to policy in this area, although its application is premature at present. I will also attempt to forecast the ways in which practical solutions to the problems of poverty may emerge from SES neuroscience. Bio: Martha Farah has conducted groundbreaking research on face and object recognition, visual attention, mental imagery, and semantic memory and - in more recent times - has been at the forefront of interdisciplinary research into neuroscience and society. This deals with topics such as using fMRI for lie detection, ethics of cognitive enhancement, and effects of social deprivation on brain development.
New Insights into the Neural Machinery of Face Recognition
Feedforward and feedback processes in visual recognition
Progress in deep learning has spawned great successes in many engineering applications. As a prime example, convolutional neural networks, a type of feedforward neural network, are now approaching – and sometimes even surpassing – human accuracy on a variety of visual recognition tasks. In this talk, however, I will show that these neural networks and their recent extensions exhibit a limited ability to solve seemingly simple visual reasoning problems involving incremental grouping, similarity, and spatial relation judgments. Our group has developed a recurrent network model of classical and extra-classical receptive field circuits that is constrained by the anatomy and physiology of the visual cortex. The model was shown to account for diverse visual illusions, providing computational evidence for a novel canonical circuit that is shared across visual modalities. I will show that this computational neuroscience model can be turned into a modern end-to-end trainable deep recurrent network architecture that addresses some of the shortcomings exhibited by state-of-the-art feedforward networks for solving complex visual reasoning tasks. This suggests that neuroscience may contribute powerful new ideas and approaches to computer science and artificial intelligence.
The evolution and development of visual complexity: insights from stomatopod visual anatomy, physiology, behavior, and molecules
Bioluminescence, which is rare on land, is extremely common in the deep sea, being found in 80% of the animals living between 200 and 1000 m. These animals rely on bioluminescence for communication, feeding, and/or defense, so the generation and detection of light is essential to their survival. Our present knowledge of this phenomenon has been limited by the difficulty of bringing live deep-sea animals to the surface and the lack of proper techniques needed to study this complex system. However, new genomic techniques are now available, and a team with extensive experience in deep-sea biology, vision, and genomics has been assembled to lead this project. This project aims to study three questions: 1) What are the evolutionary patterns of different types of bioluminescence in deep-sea shrimp? 2) How are deep-sea organisms’ eyes adapted to detect bioluminescence? 3) Can bioluminescent organs (called photophores) detect light in addition to emitting light? Findings from this study will provide valuable insight into a complex system vital to communication, defense, camouflage, and species recognition. This study will bring monumental contributions to the fields of deep-sea and evolutionary biology, and immediately improve our understanding of bioluminescence and light detection in the marine environment. In addition to scientific advancement, this project will reach students from kindergarten through college through the development and dissemination of educational tools, a series of molecular and organismal-based workshops, museum exhibits, public seminars, and biodiversity initiatives.
Brain and behavioural impacts of early life adversity
Abuse, neglect, and other forms of uncontrollable stress during childhood and early adolescence can lead to adverse outcomes later in life, including especially perturbations in the regulation of mood and emotional states, and specifically anxiety disorders and depression. However, stress experiences vary from one individual to the next, meaning that causal relationships and mechanistic accounts are often difficult to establish in humans. This interdisciplinary talk considers the value of research in experimental animals, where stressor experiences can be tightly controlled and detailed investigations of molecular, cellular, and circuit-level mechanisms can be carried out. The talk will focus on the widely used repeated maternal separation procedure in rats, in which rat offspring are repeatedly separated from maternal care during early postnatal life. This early life stress has remarkably persistent effects on behaviour, with a general recognition that maternally deprived animals are susceptible to depressive-like phenotypes. The validity of this conclusion will be critically appraised with convergent insights from a recent longitudinal study in maternally separated rats involving translational brain imaging, transcriptomics, and behavioural assessment.
Functional segregation of rostral and caudal hippocampus in associative memory
It has long been established that the hippocampus plays a crucial role in episodic memory. In contrast to the modular approach, it is now generally assumed that the hippocampus (HC), being a complex structure, performs multiple interconnected functions whose hierarchical organization provides the basis for higher cognitive functions such as semantics-based encoding and retrieval. However, the «where, when and how» properties of distinct memory aspects within and outside the HC are still under debate. Here we used a visual associative memory task as a probe to test the hypothesis of differential involvement of the rostral and caudal portions of the human hippocampus in memory encoding, recognition, and associative recall. In epilepsy patients implanted with stereo-EEG, we show that at retrieval the rostral HC is selectively active for recognition memory, whereas the caudal HC is selectively active for associative memory. Low-frequency desynchronization and high-frequency synchronization characterize the temporal dynamics of encoding and retrieval. We therefore describe an anatomical segregation in the hippocampal contributions to associative and recognition memory.
Object recognition by touch and other senses
Cross-modality imaging of the neural systems that support executive functions
Executive functions refer to a collection of mental processes such as attention, planning and problem solving, supported by a distributed frontoparietal brain network. These functions are essential for everyday life. Specifically, in the context of patients with brain tumours there is a need to preserve them in order to enable good quality of life. During surgery for the removal of a brain tumour, the aim is to remove as much of the tumour as possible while preventing damage to the areas around it, to preserve function and enable good quality of life for patients. In many cases, functional mapping is conducted during an awake surgery in order to identify areas critical for certain functions and avoid their surgical resection. While mapping is routinely done for functions such as movement and language, mapping executive functions is more challenging. Despite growing recognition in recent years of the importance of these functions for patient well-being, only a handful of studies have addressed their intraoperative mapping. In the talk, I will present our new approach for mapping executive function areas using electrocorticography during awake brain surgery. These results will be complemented by neuroimaging data from healthy volunteers, directed at reliably localizing executive function regions in individuals using fMRI. I will also discuss more broadly the challenges of using neuroimaging for neurosurgical applications. We aim to advance cross-modality neuroimaging of cognitive function, which is pivotal to patient-tailored surgical interventions and will ultimately lead to improved clinical outcomes.
A biological model system for studying predictive processing
Despite the increasing recognition of predictive processing in circuit neuroscience, little is known about how it may be implemented in cortical circuits. We set out to develop and characterise a biological model system with layer 5 pyramidal cells at its centre. We aim to gain access to prediction and internal-model-generating processes by controlling, understanding or monitoring everything else: the sensory environment, feed-forward and feed-back inputs, integrative properties, their spiking activity and output. I’ll show recent work from the lab establishing such a model system, in terms of both biology and tool development.
Multimodal framework and fusion of EEG, graph theory and sentiment analysis for the prediction and interpretation of consumer decision
The application of neuroimaging methods to marketing has recently gained considerable attention. In analyzing consumer behaviors, the inclusion of neuroimaging tools and methods is improving our understanding of consumers’ preferences. Human emotions play a significant role in decision making and critical thinking. Emotion classification using EEG data and machine learning techniques has been on the rise in the recent past. We evaluate different feature extraction and feature selection techniques, and propose the optimal set of features and electrodes for emotion recognition. Affective neuroscience research can help in detecting emotions when a consumer responds to an advertisement. Successful emotional elicitation is a verification of the effectiveness of an advertisement. EEG provides a cost-effective alternative for measuring advertisement effectiveness while eliminating several drawbacks of the existing market research tools, which depend on self-reporting. We used graph-theoretical principles to differentiate brain connectivity graphs when a consumer likes a logo versus dislikes a logo. The fusion of EEG and sentiment analysis can be a real game changer, and this combination has the power and potential to provide innovative tools for market research.
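The kind of pipeline the abstract describes might be sketched as follows; the band choices, feature count, and classifier are hypothetical placeholders rather than the authors' optimal set:

    import numpy as np
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    def band_power(epochs, fs=128):
        # epochs: (n_trials, n_channels, n_samples) -> band-power features.
        psd = np.abs(np.fft.rfft(epochs, axis=-1)) ** 2
        freqs = np.fft.rfftfreq(epochs.shape[-1], 1.0 / fs)
        bands = [(4, 8), (8, 13), (13, 30), (30, 45)]   # theta..gamma
        feats = [psd[..., (freqs >= lo) & (freqs < hi)].mean(-1)
                 for lo, hi in bands]
        return np.concatenate(feats, axis=-1).reshape(len(epochs), -1)

    # Feature selection picks the most discriminative feature/electrode set.
    clf = make_pipeline(StandardScaler(),
                        SelectKBest(f_classif, k=32),
                        SVC(kernel="rbf"))
    # clf.fit(band_power(train_epochs), train_labels)  # hypothetical data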
Hearing in an acoustically varied world
In order for animals to thrive in their complex environments, their sensory systems must form representations of objects that are invariant to changes in some dimensions of their physical cues. For example, we can recognize a friend’s speech in a forest, a small office, and a cathedral, even though the sound reaching our ears will be very different in these three environments. I will discuss our recent experiments into how neurons in auditory cortex can form stable representations of sounds in this acoustically varied world. We began by using a normative computational model of hearing to examine how the brain may recognize a sound source across rooms with different levels of reverberation. The model predicted that reverberations can be removed from the original sound by delaying the inhibitory component of spectrotemporal receptive fields in the presence of stronger reverberation. Our electrophysiological recordings then confirmed that neurons in ferret auditory cortex apply this algorithm to adapt to different room sizes. Our results demonstrate that this neural process is dynamic and adaptive. These studies provide new insights into how we can recognize auditory objects even in highly reverberant environments, and direct further research questions about how reverb adaptation is implemented in the cortical circuit.
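Schematically, the model's prediction can be rendered as a receptive field whose inhibition is a delayed, scaled copy of its excitation (delay and gain values here are illustrative): subtracting a delayed copy of the stimulus envelope cancels reverberant tails, and more reverberant rooms call for longer inhibitory delays:

    import numpy as np

    def strf_kernel(inhib_delay, n_taps=100, inhib_gain=0.8):
        k = np.zeros(n_taps)
        k[0] = 1.0                    # fast excitatory lobe
        k[inhib_delay] = -inhib_gain  # delayed inhibitory lobe
        return k

    def dereverb(envelope, inhib_delay):
        # Convolving with the kernel subtracts a delayed copy of the input,
        # suppressing the slowly decaying reverberant energy.
        out = np.convolve(envelope, strf_kernel(inhib_delay))
        return out[:len(envelope)]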
What does the primary visual cortex tell us about object recognition?
Object recognition relies on the complex visual representations in cortical areas at the top of the ventral stream hierarchy. While these are thought to be derived from low-level stages of visual processing, this has not yet been shown directly. Here, I describe the results of two projects exploring the contributions of primary visual cortex (V1) processing to object recognition using artificial neural networks (ANNs). First, we developed hundreds of ANN-based V1 models and evaluated how their single neurons approximate those in the macaque V1. We found that, for some models, single neurons in intermediate layers are similar to their biological counterparts, and that the distributions of their response properties approximately match those in V1. Furthermore, we observed that models that better matched macaque V1 were also more aligned with human behavior, suggesting that object recognition indeed builds on these low-level representations. Motivated by these results, we then studied how an ANN’s robustness to image perturbations relates to its ability to predict V1 responses. Despite their high performance in object recognition tasks, ANNs can be fooled by imperceptibly small, explicitly crafted perturbations. We observed that ANNs that better predicted V1 neuronal activity were also more robust to adversarial attacks. Inspired by this, we developed VOneNets, a new class of hybrid ANN vision models. Each VOneNet contains a fixed neural network front-end that simulates primate V1, followed by a neural network back-end adapted from current computer vision models. After training, VOneNets were substantially more robust, outperforming state-of-the-art methods on a set of perturbations. While current neural network architectures are arguably brain-inspired, these results demonstrate that more precisely mimicking just one stage of the primate visual system leads to new gains in computer vision applications and results in better models of the primate ventral stream and object recognition behavior.
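The architectural idea can be captured in a few lines (a skeleton only: the actual VOneNet front-end contains a large Gabor filter bank, simple and complex cells, and neuronal stochasticity):

    import numpy as np
    from scipy.signal import convolve2d

    def gabor(size=15, lam=6.0, theta=0.0, sigma=3.0):
        r = np.arange(size) - size // 2
        X, Y = np.meshgrid(r, r)
        xr = X * np.cos(theta) + Y * np.sin(theta)    # rotated carrier axis
        envelope = np.exp(-(X**2 + Y**2) / (2 * sigma**2))
        return envelope * np.cos(2 * np.pi * xr / lam)

    def v1_front_end(image, thetas=(0.0, np.pi/4, np.pi/2, 3*np.pi/4)):
        # Fixed (untrained) oriented filter bank plus rectification; its
        # output would feed a standard, trainable CNN back-end.
        maps = [convolve2d(image, gabor(theta=t), mode="same") for t in thetas]
        return np.maximum(np.stack(maps), 0.0)   # simple-cell-like responses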
If we can make computers play chess, why can't we make them see?
If we can make computers play chess and even Jeopardy and Go, then why can't we make them see like us? How does our brain solve the problem of seeing? I will describe some of our recent insights into understanding object recognition in the brain using behavioral, neuronal and computational methods.
Molecular recognition and the assembly of feature-selective retinal circuits
NMC4 Short Talk: Novel population of synchronously active pyramidal cells in hippocampal area CA1
Hippocampal pyramidal cells have been widely studied during locomotion, when theta oscillations are present, and during sharp-wave ripples at rest, when replay takes place. However, we find a subset of pyramidal cells that are preferentially active during rest, in the absence of theta oscillations and sharp-wave ripples. We recorded these cells using two-photon imaging in dorsal CA1 of the hippocampus of mice, during a virtual reality object location recognition task. During locomotion, the cells show a similar level of activity as control cells, but their activity increases during rest, when this population of cells shows highly synchronous, oscillatory activity at a low frequency (0.1-0.4 Hz). In addition, during both locomotion and rest these cells show place coding, suggesting they may play a role in maintaining a representation of the current location, even when the animal is not moving. We performed simultaneous electrophysiological and calcium recordings, which showed a higher correlation between this low-frequency oscillation (LFO) and the activity of the hippocampal cells in the 0.1-0.4 Hz band during rest than during locomotion. However, the relationship between the LFO and calcium signals varied between electrodes, suggesting a localized effect. We used the Allen Brain Observatory Neuropixels Visual Coding dataset to further explore this. These data revealed localised low-frequency oscillations in CA1 and DG during rest. Overall, we show a novel population of hippocampal cells, and a novel oscillatory band of activity in the hippocampus during rest.
NMC4 Short Talk: Directly interfacing brain and deep networks exposes non-hierarchical visual processing
A recent approach to understanding the mammalian visual system is to show correspondence between the sequential stages of processing in the ventral stream and the layers of a deep convolutional neural network (DCNN), providing evidence that visual information is processed hierarchically, with successive stages containing ever higher-level information. However, correspondence is usually defined as shared variance between brain region and model layer. We propose that task-relevant variance is a stricter test: if a DCNN layer corresponds to a brain region, then substituting the model’s activity with brain activity should successfully drive the model’s object recognition decision. Using this approach on three datasets (human fMRI and macaque neuron firing rates), we found that, in contrast to the hierarchical view, all ventral stream regions corresponded best to later model layers. That is, all regions contain high-level information about object category. We hypothesised that this is due to recurrent connections propagating high-level visual information from later regions back to early regions, in contrast to the exclusively feed-forward connectivity of DCNNs. Using task-relevant correspondence with a late DCNN layer akin to a tracer, we used Granger causal modelling to show that late-DCNN correspondence in IT drives correspondence in V4. Our analysis suggests, effectively, that no ventral stream region can be appropriately characterised as ‘early’ beyond 70ms after stimulus presentation, challenging hierarchical models. More broadly, we ask what it means for a model component and brain region to correspond: beyond quantifying shared variance, we must consider the functional role in the computation. We also demonstrate that using a DCNN to decode high-level conceptual information from ventral stream produces a general mapping from brain to model activation space, which generalises to novel classes held out from training data. This suggests future possibilities for brain-machine interface with high-level conceptual information, beyond current designs that interface with the sensorimotor periphery.
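In outline, the substitution test reads as below; run_from_layer is a hypothetical stand-in for the truncated DCNN, and the linear mapping model is an assumption:

    import numpy as np
    from sklearn.linear_model import Ridge

    def substitution_test(brain_train, layer_train, brain_test, run_from_layer):
        # Learn a brain -> model-layer mapping, then drive the rest of the
        # network from mapped brain activity and keep the model's decisions.
        mapper = Ridge(alpha=1.0).fit(brain_train, layer_train)
        injected = mapper.predict(brain_test)   # brain activity in layer space
        return run_from_layer(injected)         # does recognition still work?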
Aesthetic preference for art can be predicted from a mixture of low- and high-level visual features
It is an open question whether preferences for visual art can be lawfully predicted from the basic constituent elements of a visual image. Here, we developed and tested a computational framework to investigate how aesthetic values are formed. We show that it is possible to explain human preferences for a visual art piece based on a mixture of low- and high-level features of the image. Subjective value ratings could be predicted not only within but also across individuals, using a regression model with a common set of interpretable features. We also show that the features predicting aesthetic preference can emerge hierarchically within a deep convolutional neural network trained only for object recognition. Our findings suggest that human preferences for art can be explained at least in part as a systematic integration over the underlying visual features of an image.
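The modelling logic reduces to a single interpretable regression; the features and weights below are synthetic stand-ins, not the study's data:

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    features = rng.random((200, 8))    # 200 images x 8 interpretable features
    true_w = np.array([0.5, -0.2, 0.1, 0.7, 0.0, 0.3, -0.4, 0.2])
    ratings = features @ true_w + 0.1 * rng.standard_normal(200)  # synthetic

    model = LinearRegression().fit(features, ratings)
    print(model.coef_)   # which low/high-level features drive aesthetic value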
How do we find what we are looking for? The Guided Search 6.0 model
The talk will give a tour of Guided Search 6.0 (GS6), the latest evolution of the Guided Search model of visual search. Part 1 describes The Mechanics of Search. Because we cannot recognize more than a few items at a time, selective attention is used to prioritize items for processing. Selective attention to an item allows its features to be bound together into a representation that can be matched to a target template in memory or rejected as a distractor. The binding and recognition of an attended object is modeled as a diffusion process taking > 150 msec/item. Since selection occurs more frequently than that, it follows that multiple items are undergoing recognition at the same time, though asynchronously, making GS6 a hybrid serial and parallel model. If a target is not found, search terminates when an accumulating quitting signal reaches a threshold. Part 2 elaborates on the five sources of Guidance that are combined into a spatial “priority map” to guide the deployment of attention (hence “guided search”). These are (1) top-down and (2) bottom-up feature guidance, (3) prior history (e.g. priming), (4) reward, and (5) scene syntax and semantics. Finally, in Part 3, we will consider the internal representation of what we are searching for; what is often called “the search template”. That search template is really two templates: a guiding template (probably in working memory) and a target template (in long term memory). Put these pieces together and you have GS6.
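The bookkeeping at the heart of GS6 can be sketched directly (the weights on the five guidance sources are placeholders):

    import numpy as np

    def priority_map(top_down, bottom_up, history, reward, scene,
                     w=(1.0, 0.8, 0.5, 0.5, 1.2)):
        # Weighted combination of the five guidance sources into one map.
        sources = (top_down, bottom_up, history, reward, scene)
        return sum(wi * m for wi, m in zip(w, sources))

    def next_fixation(priority, visited):
        # Deploy attention to the highest-priority item not yet attended.
        p = np.where(visited, -np.inf, priority)
        return np.unravel_index(np.argmax(p), p.shape)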
Towards a Theory of Human Visual Reasoning
Many tasks that are easy for humans are difficult for machines. In particular, while humans excel at tasks that require generalising across problems, machine systems notably struggle. One such task that has received a good amount of attention is the Synthetic Visual Reasoning Test (SVRT). The SVRT consists of a range of problems where simple visual stimuli must be categorised into one of two categories based on an unknown rule that must be induced. Conventional machine learning approaches perform well only when trained to categorise based on a single rule and are unable to generalise, without extensive additional training, to tasks with any additional rules. Multiple theories of higher-level cognition posit that humans solve such tasks using structured relational representations. Specifically, people learn rules based on structured representations that generalise to novel instances quickly and easily. We believe it is possible to model this approach in a single system which learns all the required relational representations from scratch and performs tasks such as SVRT in a single run. Here, we present a system which expands the DORA/LISA architecture and augments the existing model with fundamentally novel components, namely a) visual reasoning based on the established theories of recognition-by-components, and b) the process of learning complex relational representations by synthesis (in addition to learning by analysis). The proposed augmented model matches human behaviour on SVRT problems. Moreover, the proposed system stands as perhaps a more realistic account of human cognition: rather than using tools that have proven successful in the machine learning field to inform psychological theorising, we use established psychological theories to inform the development of a machine system.
Encoding and perceiving the texture of sounds: auditory midbrain codes for recognizing and categorizing auditory texture and for listening in noise
Natural soundscapes such as those of a forest, a busy restaurant, or a busy intersection are generally composed of a cacophony of sounds that the brain needs to interpret either independently or collectively. In certain instances, sounds - such as from moving cars, sirens, and people talking - are perceived in unison and are recognized collectively as a single sound (e.g., city noise). In other instances, such as for the cocktail party problem, multiple sounds compete for attention so that the surrounding background noise (e.g., speech babble) interferes with the perception of a single sound source (e.g., a single talker). I will describe results from my lab on the perception and neural representation of auditory textures. Textures, such as from a babbling brook, restaurant noise, or speech babble, are stationary sounds consisting of multiple independent sound sources that can be quantitatively defined by summary statistics of an auditory model (McDermott & Simoncelli 2011). However, how and where the auditory system represents summary statistics, and which neural codes potentially contribute to their perception, remain largely unknown. Using high-density multi-channel recordings from the auditory midbrain of unanesthetized rabbits and complementary perceptual studies on human listeners, I will first describe neural and perceptual strategies for encoding and perceiving auditory textures. I will demonstrate how distinct statistics of sounds, including the sound spectrum and high-order statistics related to the temporal and spectral correlation structure of sounds, contribute to texture perception and are reflected in neural activity. Using decoding methods, I will then demonstrate how various low- and high-order neural response statistics can differentially contribute towards a variety of auditory tasks including texture recognition, discrimination, and categorization. Finally, I will show examples from our recent studies on how high-order sound statistics and accompanying neural activity underlie difficulties in recognizing speech in background noise.
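As a minimal illustration of classification from summary statistics (in the spirit of McDermott & Simoncelli 2011, but heavily simplified), each sound can be reduced to time-averaged statistics of its cochlear-band envelopes and assigned to the nearest texture centroid:

    import numpy as np

    def summary_stats(band_envs):
        # band_envs: (n_bands, n_samples) cochlear-band envelopes; returns
        # time-averaged statistics (spectrum, variance, band correlations).
        mean = band_envs.mean(axis=1)
        var = band_envs.var(axis=1)
        corr = np.corrcoef(band_envs)[np.triu_indices(len(band_envs), k=1)]
        return np.concatenate([mean, var, corr])

    def classify_texture(stats, centroids):
        # centroids: dict mapping texture name -> stored summary statistics.
        names = list(centroids)
        d = [np.linalg.norm(stats - centroids[n]) for n in names]
        return names[int(np.argmin(d))]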
CNStalk: Anatomo-functional organisation of the grasping network in the primate brain
Cortical functions result from the conjoint activity of different, reciprocally connected areas working together as large-scale functionally specialized networks. In the macaque brain, neural tracers and functional data have provided evidence for functionally specialized large-scale cortical networks involving temporal, parietal, and frontal areas. One of these networks, the lateral grasping network, appears to play a primary role in controlling hand action organization and recognition. Available functional and tractography data suggest the existence of a human counterpart of this network.
Seeing with technology: Exchanging the senses with sensory substitution and augmentation
What is perception? Our sensory modalities transmit information about the external world into electrochemical signals that somehow give rise to our conscious experience of our environment. Normally there is too much information to be processed in any given moment, and the mechanisms of attention focus the limited resources of the mind to some information at the expense of others. My research has advanced from first examining visual perception and attention to now examine how multisensory processing contributes to perception and cognition. There are fundamental constraints on how much information can be processed by the different senses on their own and in combination. Here I will explore information processing from the perspective of sensory substitution and augmentation, and how "seeing" with the ears and tongue can advance fundamental and translational research.
Themes and Variations: Circuit mechanisms of behavioral evolution
Animals exhibit extraordinary variation in their behavior, yet little is known about the neural mechanisms that generate this diversity. My lab has been taking advantage of the rapid diversification of male courtship behaviors in Drosophila to glean insight into how evolution shapes the nervous system to generate species-specific behaviors. By translating neurogenetic tools from D. melanogaster to closely related Drosophila species, we have begun to directly compare the homologous neural circuits and pinpoint sites of adaptive change. Across species, P1 neurons serve as a conserved node in regulating male courtship: these neurons are selectively activated by the sensory cues indicative of an appropriate mate and their activation triggers enduring courtship displays. We have been examining how different sensory pathways converge onto P1 neurons to regulate a male’s state of arousal, honing his pursuit of a prospective partner. Moreover, by performing cross-species comparison of these circuits, we have begun to gain insight into how reweighting of sensory inputs to P1 neurons underlies species-specific mate recognition. Our results suggest how variation at flexible nodes within the nervous system can serve as a substrate for behavioral evolution, shedding light on the types of changes that are possible and preferable within brain circuits.
Analogical Reasoning Plus: Why Dissimilarities Matter
Analogical reasoning remains foundational to the human ability to forge meaningful patterns within the sea of information that continually inundates the senses. Yet, meaningful patterns rely not only on the recognition of attributional similarities but also dissimilarities. Just as the perception of images rests on the juxtaposition of lightness and darkness, reasoning relationally requires systematic attention to both similarities and dissimilarities. With that awareness, my colleagues and I have expanded the study of relational reasoning beyond analogical reasoning and attributional similarities to highlight forms based on the nature of core dissimilarities: anomalous, antinomous, and antithetical reasoning. In this presentation, I will delineate the character of these relational reasoning forms; summarize procedures and measures used to assess them; overview key research findings; and describe how the forms of relational reasoning work together in the performance of complex problem solving. Finally, I will share critical next steps for research, which have implications for instructional practice.
Analyzing Retinal Disease Using Electron Microscopic Connectomics
John E. Dowling received his AB and PhD from Harvard University. He taught in the Biology Department at Harvard from 1961 to 1964, first as an instructor, then as assistant professor. In 1964 he moved to Johns Hopkins University, where he held an appointment as associate professor of Ophthalmology and Biophysics. He returned to Harvard as professor of Biology in 1971, was the Maria Moors Cabot Professor of Natural Sciences from 1971-2001, Harvard College Professor from 1999-2004 and is presently the Gordon and Llura Gund Professor of Neurosciences. Dowling was chairman of the Biology Department at Harvard from 1975 to 1978 and served as associate dean of the Faculty of Arts and Sciences from 1980 to 1984. He was Master of Leverett House at Harvard from 1981-1998 and currently serves as president of the Corporation of The Marine Biological Laboratory in Woods Hole. He is a Fellow of the American Academy of Arts and Sciences, a member of the National Academy of Sciences and a member of the American Philosophical Society. Awards that Dowling has received include the Friedenwald Medal from the Association for Research in Vision and Ophthalmology in 1970, the Annual Award of the New England Ophthalmological Society in 1979, the Retinal Research Foundation Award for Retinal Research in 1981, an Alcon Vision Research Recognition Award in 1986, a National Eye Institute MERIT award in 1987, the Von Sallman Prize in 1992, the Helen Keller Prize for Vision Research in 2000 and the Llura Liggett Gund Award for Lifetime Achievement and Recognition of Contribution to the Foundation Fighting Blindness in 2001. He was granted an honorary MD degree by the University of Lund (Sweden) in 1982 and an honorary Doctor of Laws degree from Dalhousie University (Canada) in 2012. Dowling's research interests have focused on the vertebrate retina as a model piece of the brain. He and his collaborators have long been interested in the functional organization of the retina, studying its synaptic organization, the electrical responses of the retinal neurons, and the mechanisms underlying neurotransmission and neuromodulation in the retina. Dowling became interested in zebrafish as a system in which one could explore the development and genetics of the vertebrate retina about 20 years ago. Part of his research team has focused on retinal development in zebrafish and the role of retinoic acid in early eye and photoreceptor development. A second group has developed behavioral tests to isolate mutations, both recessive and dominant, specific to the visual system.
Music training effects on multisensory and cross-sensory transfer processing: from cross-sectional to RCT studies
Face distortions as a window into face perception
Prosopometamorphopsia (PMO) is a disorder characterized by face perception distortions. People with PMO see facial features that appear to melt, stretch, and change size and position. I'll discuss research on PMO carried out by my lab and others that sheds light on the cognitive and neural organization of face perception. https://facedistortion.faceblind.org/
Imaging memory consolidation in wakefulness and sleep
New memories are initially labile and have to be consolidated into stable long-term representations. Current theories assume that this is supported by a shift in the neural substrate that supports the memory, away from rapidly plastic hippocampal networks towards more stable representations in the neocortex. Rehearsal, i.e., repeated activation of the neural circuits that store a memory, is thought to crucially contribute to the formation of neocortical long-term memory representations. This may either be achieved by repeated study during wakefulness or by a covert reactivation of memory traces during offline periods, such as quiet rest or sleep. My research investigates memory consolidation in the human brain with multivariate decoding of neural processing and non-invasive in-vivo imaging of microstructural plasticity. Using pattern classification on recordings of electrical brain activity, I show that we spontaneously reprocess memories during offline periods in both sleep and wakefulness, and that this reactivation benefits memory retention. In related work, we demonstrate that active rehearsal of learning material during wakefulness can facilitate rapid systems consolidation, leading to an immediate formation of lasting memory engrams in the neocortex. These representations satisfy general mnemonic criteria and can not only be imaged with fMRI while memories are actively processed but can also be observed with diffusion-weighted imaging when the traces lie dormant. Importantly, sleep seems to hold a crucial role in stabilizing the changes in the contribution of memory systems initiated by rehearsal during wakefulness, indicating that online and offline reactivation might jointly contribute to forming long-term memories. Characterizing the covert processes that decide whether, and in which ways, our brains store new information is crucial to our understanding of memory formation. Directly imaging consolidation thus opens great opportunities for memory research.
The quest for the cortical algorithm
The cortical algorithm hypothesis states that there is one common computational framework to solve diverse cognitive problems such as vision, voice recognition and motion control. In my talk, I propose a strategy to guide the search for this algorithm and I present a few ideas on what some of its components might look like. I'll explain why a highly interdisciplinary approach spanning neuroscience, computer science, mathematics and physics is needed to make further progress on this important question.
Towards a neurally mechanistic understanding of visual cognition
I am interested in developing a neurally mechanistic understanding of how primate brains represent the world through its visual system and how such representations enable a remarkable set of intelligent behaviors. In this talk, I will primarily highlight aspects of my current research that focuses on dissecting the brain circuits that support core object recognition behavior (primates’ ability to categorize objects within hundreds of milliseconds) in non-human primates. On the one hand, my work empirically examines how well computational models of the primate ventral visual pathways embed knowledge of the visual brain function (e.g., Bashivan*, Kar*, DiCarlo, Science, 2019). On the other hand, my work has led to various functional and architectural insights that help improve such brain models. For instance, we have exposed the necessity of recurrent computations in primate core object recognition (Kar et al., Nature Neuroscience, 2019), one that is strikingly missing from most feedforward artificial neural network models. Specifically, we have observed that the primate ventral stream requires fast recurrent processing via ventrolateral PFC for robust core object recognition (Kar and DiCarlo, Neuron, 2021). In addition, I have been currently developing various chemogenetic strategies to causally target specific bidirectional neural circuits in the macaque brain during multiple object recognition tasks to further probe their relevance during this behavior. I plan to transform these data and insights into tangible progress in neuroscience via my collaboration with various computational groups and building improved brain models of object recognition. I hope to end the talk with a brief glimpse of some of my planned future work!
MRI pattern recognition in leukodystrophies
Untitled Seminar
Sean Miller will present "From brain wiring to synaptic physiology - reuse of a cell recognition molecule to carry out higher order nervous system functions". Then, Patricia Jusuf will talk about "Visual vertebrate pipeline for assessing novel human GWAS gene candidates". Victor Borrell will deal with the "Genetic evolution of cerebral cortex size determinants" and Louise Cheng will present
The neuroscience of color and what makes primates special
Among mammals, excellent color vision has evolved only in certain non-human primates. And yet, color is often assumed to be just a low-level stimulus feature with a modest role in encoding and recognizing objects. The rationale for this dogma is compelling: object recognition is excellent in grayscale images (consider black-and-white movies, where faces, places, objects, and story are readily apparent). In my talk I will discuss experiments in which we used color as a tool to uncover an organizational plan in inferior temporal cortex (parallel, multistage processing for places, faces, colors, and objects) and a visual-stimulus functional representation in prefrontal cortex (PFC). The discovery of an extensive network of color-biased domains within IT and PFC, regions implicated in high-level object vision and executive functions, compels a re-evaluation of the role of color in behavior. I will discuss behavioral studies prompted by the neurobiology that uncover a universal principle for color categorization across languages, the first systematic study of the color statistics of objects and a chromatic mechanism by which the brain may compute animacy, and a surprising paradoxical impact of memory on face color. Taken together, my talk will put forward the argument that color is not primarily for object recognition, but rather for the assessment of the likely behavioral relevance, or meaning, of the stuff we see.
What does the primary visual cortex tell us about object recognition?
Sensory-motor control, cognition and brain evolution: exploring the links
Drawing on recent findings from evolutionary anthropology and neuroscience, professor Barton will lead us through the amazing story of the evolution of human cognition. Using statistical, phylogenetic analyses that tease apart the variation associated with different neural systems and due to different selection pressures, he will address intriguing questions like ‘Why are there so many neurons in the cerebellum?’, ‘Is the neocortex the ‘intelligent’ bit of the brain?’, and ‘What explains that the recognition by humans of emotional expressions is disrupted by transcranial magnetic stimulation of the somatosensory cortex?’ Could, as professor Barton suggests, the cerebellum, modestly concealed beneath the volumetrically dominating neocortex and largely ignored, turn out to be the Cinderella of the study of brain evolution?
Hebbian learning, its inference, and brain oscillation
Despite the recent success of deep learning in artificial intelligence, the lack of biological plausibility and of labeled data in natural learning still poses a challenge to understanding biological learning. At the other extreme lies Hebbian learning, the simplest local and unsupervised rule, yet one considered computationally less efficient. In this talk, I will introduce a novel method to infer the form of Hebbian learning from in vivo data. Applying the method to data obtained from the monkey inferior temporal cortex during a recognition task indicates how Hebbian learning changes the dynamic properties of the circuits and may promote brain oscillation. Notably, recent electrophysiological data observed in rodent V1 showed that the effect of visual experience on direction selectivity was similar to that observed in monkey data, providing strong validation of the asymmetric changes of feedforward and recurrent synaptic strengths inferred from the monkey data. This may suggest a general learning principle underlying the same computation, such as familiarity detection, across different features represented in different brain regions.
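A toy familiarity detector makes the underlying principle concrete (purely illustrative; this is not the inference method described in the talk): repeated stimuli strengthen recurrent weights along their own direction, so familiar inputs evoke a larger recurrent echo than novel ones:

    import numpy as np

    rng = np.random.default_rng(0)
    W = np.zeros((50, 50))
    familiar = rng.standard_normal(50)
    familiar /= np.linalg.norm(familiar)

    for _ in range(20):                    # repeated visual experience
        W += 0.05 * np.outer(familiar, familiar)   # local Hebbian update

    novel = rng.standard_normal(50)
    novel /= np.linalg.norm(novel)
    # The recurrent response discriminates familiar from novel stimuli.
    print(np.linalg.norm(W @ familiar), np.linalg.norm(W @ novel))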
Non-Human Recognition of Orthography: How is it implemented and how does it differ from human orthographic processing?
Bernstein Conference 2024
Action recognition best explains neural activity in cuneate nucleus
COSYNE 2022
Do better object recognition models improve the generalization gap in neural predictivity?
COSYNE 2022
Linking neural dynamics across macaque V4, IT, and PFC to trial-by-trial object recognition behavior
COSYNE 2022
Distinct roles of excitatory and inhibitory neurons in the macaque IT cortex in object recognition
COSYNE 2023
Leveraging computational and animal models of vision to probe atypical emotion recognition in autism
COSYNE 2023
On-line SEUDO for real-time cell recognition in Calcium Imaging
COSYNE 2023
Spatial-frequency channels for object recognition by neural networks are twice as wide as those of humans
COSYNE 2023
Temporal pattern recognition in retinal ganglion cells is mediated by dynamical inhibitory synapses
COSYNE 2023
Geometric Signatures of Speech Recognition: Insights from Deep Neural Networks to the Brain
COSYNE 2025
The analysis of the OXT-DA interaction causing social recognition deficit in Syntaxin1A KO
FENS Forum 2024
Behavioral impacts of simulated microgravity on male mice: Locomotion, social interactions and memory in a novel object recognition task
FENS Forum 2024
The cortical amygdala mediates individual recognition in mice
FENS Forum 2024
A deep learning approach for the recognition of behaviors in the forced swim test
FENS Forum 2024
Direct electrical stimulation of the human amygdala enhances recognition memory for objects but not scenes
FENS Forum 2024
Two distinct ways to form long-term object recognition memory during sleep and wakefulness
FENS Forum 2024
Early disruption in social recognition and its impact on episodic memory in a triple transgenic mouse model of Alzheimer’s disease
FENS Forum 2024
Evaluation of novel object recognition test results of rats injected with intracerebroventricular streptozocin to develop Alzheimer's disease models
FENS Forum 2024
HBK-15 rescues recognition memory in MK-801- and stress-induced cognitive impairments in female mice
FENS Forum 2024
Homecage-based unsupervised novel object recognition in mice
FENS Forum 2024
Interaction of sex and sleep on performance at the novel object recognition task in mice
FENS Forum 2024
Investigating the recruitment of parvalbumin and somatostatin interneurons into engrams for associative recognition memory
FENS Forum 2024
Mice can recognize other individuals: Maternal exposure to dioxin does not affect identification but perturbs the ability to recognize other individuals
FENS Forum 2024
Myoelectric gesture recognition in patients with spinal cord injury using a medium-density EMG system
FENS Forum 2024
Noradrenergic modulation of recognition memory in male and female mice
FENS Forum 2024
The processing of spatial frequencies through time in visual word recognition
FENS Forum 2024
Recognition of complex spatial environments showed dimorphic patterns of theta (4-8 Hz) activity
FENS Forum 2024
Resonant song recognition in crickets
FENS Forum 2024
Robustness and evolvability in a model of a pattern recognition network
FENS Forum 2024
Scent of a memory: Dissecting the vomeronasal-hippocampal axis in social recognition
FENS Forum 2024
Sex-dependent effects of voluntary physical exercise on object recognition memory restoration after traumatic brain injury in middle-aged rats
FENS Forum 2024
Sleepless nights, vanishing faces: The effect of sleep deprivation on long-term social recognition memory in mice
FENS Forum 2024
Src-NADH dehydrogenase subunit 2 complex and recognition memory of imprinting in domestic chicks
FENS Forum 2024
Unraveling the mechanisms underlying corticosterone-induced impairment in novel object recognition in mice
FENS Forum 2024
A virtual-reality task to investigate multisensory object recognition in mice
FENS Forum 2024
Early olfactory processing is necessary for the maturation of limbic-hippocampal network and recognition
Neuromatch 5