neural representation

Discover seminars, jobs, and research tagged with neural representation across World Wide.
82 curated items · 45 Seminars · 37 ePosters
Updated 3 months ago
Seminar · Neuroscience

Neural Representations of Abstract Cognitive Maps in Prefrontal Cortex and Medial Temporal Lobe

Janahan Selvanayagam
University of Oxford
Sep 10, 2025
Seminar · Neuroscience

Cognitive maps as expectations learned across episodes – a model of the two dentate gyrus blades

Andrej Bicanski
Max Planck Institute for Human Cognitive and Brain Sciences
Mar 11, 2025

How can the hippocampal system transition from episodic one-shot learning to a multi-shot learning regime and what is the utility of the resultant neural representations? This talk will explore the role of the dentate gyrus (DG) anatomy in this context. The canonical DG model suggests it performs pattern separation. More recent experimental results challenge this standard model, suggesting DG function is more complex and also supports the precise binding of objects and events to space and the integration of information across episodes. Very recent studies attribute pattern separation and pattern integration to anatomically distinct parts of the DG (the suprapyramidal blade vs the infrapyramidal blade). We propose a computational model that investigates this distinction. In the model the two processing streams (potentially localized in separate blades) contribute to the storage of distinct episodic memories, and the integration of information across episodes, respectively. The latter forms generalized expectations across episodes, eventually forming a cognitive map. We train the model with two data sets, MNIST and plausible entorhinal cortex inputs. The comparison between the two streams allows for the calculation of a prediction error, which can drive the storage of poorly predicted memories and the forgetting of well-predicted memories. We suggest that differential processing across the DG aids in the iterative construction of spatial cognitive maps to serve the generation of location-dependent expectations, while at the same time preserving episodic memory traces of idiosyncratic events.
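The prediction-error gate described above can be sketched in a toy simulation. Everything below is invented for illustration, not taken from the talk: a running "expectation" integrates across episodes, and an episode is stored verbatim only when it is poorly predicted.

```python
import numpy as np

# Toy prediction-error gate: a running "expectation" integrates across
# episodes; episodes are stored verbatim only when poorly predicted.
# Threshold, learning rate, and the 2-D episodes are illustrative.
rng = np.random.default_rng(0)
expectation = np.zeros(2)
stored = []
for _ in range(200):
    # most episodes are "typical" (near 0); rarely an idiosyncratic one
    if rng.random() < 0.05:
        episode = rng.normal(3.0, 0.1, 2)
    else:
        episode = rng.normal(0.0, 0.1, 2)
    error = np.linalg.norm(episode - expectation)   # prediction error
    expectation += 0.1 * (episode - expectation)    # integrate across episodes
    if error > 2.0:                                 # poorly predicted -> store
        stored.append(episode)

# mostly the rare, surprising episodes end up as stored episodic traces
print(len(stored))
```

The expectation converges on the typical episodes (the "cognitive map" stream), while only the idiosyncratic events pass the error gate into episodic storage.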

Seminar · Neuroscience

Dimensionality reduction beyond neural subspaces

Alex Cayco Gajic
École Normale Supérieure
Jan 28, 2025

Over the past decade, neural representations have been studied from the lens of low-dimensional subspaces defined by the co-activation of neurons. However, this view has overlooked other forms of covarying structure in neural activity, including i) condition-specific high-dimensional neural sequences, and ii) representations that change over time due to learning or drift. In this talk, I will present a new framework that extends the classic view towards additional types of covariability that are not constrained to a fixed, low-dimensional subspace. In addition, I will present sliceTCA, a new tensor decomposition that captures and demixes these different types of covariability to reveal task-relevant structure in neural activity. Finally, I will close with some thoughts regarding the circuit mechanisms that could generate mixed covariability. Together this work points to a need to consider new possibilities for how neural populations encode sensory, cognitive, and behavioral variables beyond neural subspaces.
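For contrast with the extensions the talk proposes, the classic fixed-subspace view can be illustrated on synthetic data (sliceTCA itself is a tensor decomposition and is not implemented here; all sizes are arbitrary):

```python
import numpy as np

# Synthetic illustration of the low-dimensional subspace view: 50 neurons
# driven by 3 shared latent signals produce rank-3 activity, which PCA
# recovers exactly.
rng = np.random.default_rng(0)
latents = rng.standard_normal((3, 1000))     # 3 latents x 1000 time bins
mixing = rng.standard_normal((50, 3))        # neurons x latents
activity = mixing @ latents                  # 50 x 1000, lies in a 3-D subspace

centered = activity - activity.mean(axis=1, keepdims=True)
s = np.linalg.svd(centered, compute_uv=False)        # PCA via SVD
var_explained = s**2 / np.sum(s**2)
print(float(np.sum(var_explained[:3])))   # ~1.0: three PCs capture everything
```

Condition-specific sequences or representations that drift over time break exactly this picture: no single fixed subspace captures the covariability, which is the motivation for the framework in the talk.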

Seminar · Neuroscience

Algonauts 2023 winning paper journal club (fMRI encoding models)

Huzheng Yang, Paul Scotti
Aug 17, 2023

Algonauts 2023 was a challenge to create the best model that predicts fMRI brain activity given a seen image. The Huze team dominated the competition and released a preprint detailing their process. This journal club meeting will involve an open discussion of the paper, with Q&A with Huze. Paper: https://arxiv.org/pdf/2308.01175.pdf Related paper, also from Huze, that we can discuss: https://arxiv.org/pdf/2307.14021.pdf

Seminar · Neuroscience

Learning to Express Reward Prediction Error-like Dopaminergic Activity Requires Plastic Representations of Time

Harel Shouval
The University of Texas at Houston
Jun 13, 2023

The dominant theoretical framework to account for reinforcement learning in the brain is temporal difference (TD) reinforcement learning. The TD framework predicts that some neuronal elements should represent the reward prediction error (RPE), which means they signal the difference between the expected future rewards and the actual rewards. The prominence of the TD theory arises from the observation that firing properties of dopaminergic neurons in the ventral tegmental area appear similar to those of RPE model-neurons in TD learning. Previous implementations of TD learning assume a fixed temporal basis for each stimulus that might eventually predict a reward. Here we show that such a fixed temporal basis is implausible and that certain predictions of TD learning are inconsistent with experiments. We propose instead an alternative theoretical framework, coined FLEX (Flexibly Learned Errors in Expected Reward). In FLEX, feature specific representations of time are learned, allowing for neural representations of stimuli to adjust their timing and relation to rewards in an online manner. In FLEX dopamine acts as an instructive signal which helps build temporal models of the environment. FLEX is a general theoretical framework that has many possible biophysical implementations. In order to show that FLEX is a feasible approach, we present a specific biophysically plausible model which implements the principles of FLEX. We show that this implementation can account for various reinforcement learning paradigms, and that its results and predictions are consistent with a preponderance of both existing and reanalyzed experimental data.
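The TD baseline that FLEX argues against can be sketched as a toy tabular TD(0) computation of the RPE, delta = r + gamma*V(s') - V(s), over a fixed temporal basis (the parameters below are illustrative, not from the talk):

```python
import numpy as np

# Toy tabular TD(0): a cue at t=0 predicts a reward at t=5, and each time
# step after the cue is a state with its own value V[t] (the "fixed
# temporal basis"). The RPE at the reward time shrinks with learning.
T, alpha, gamma = 10, 0.1, 1.0
V = np.zeros(T + 1)             # value of each time step after the cue
reward = np.zeros(T + 1)
reward[5] = 1.0                 # reward arrives 5 steps after the cue

deltas_at_reward = []
for trial in range(200):
    for t in range(T):
        delta = reward[t + 1] + gamma * V[t + 1] - V[t]   # RPE
        V[t] += alpha * delta
        if t + 1 == 5:
            deltas_at_reward.append(delta)

# early trials: large RPE at the reward; late trials: reward fully predicted
print(round(deltas_at_reward[0], 3), round(deltas_at_reward[-1], 6))
```

The fixed basis (one state per elapsed time step) is exactly the assumption the talk challenges: FLEX instead learns feature-specific representations of time online.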

Seminar · Neuroscience · Recording

A sense without sensors: how non-temporal stimulus features influence the perception and the neural representation of time

Domenica Bueti
SISSA
May 8, 2023
Seminar · Neuroscience · Recording

A sense without sensors: how non-temporal stimulus features influence the perception and the neural representation of time

Domenica Bueti
SISSA, Trieste (Italy)
Apr 18, 2023

Any sensory experience of the world, from the touch of a caress to the smile on our friend’s face, is embedded in time and is often associated with the perception of its flow. The perception of time is therefore a peculiar sensory experience built without dedicated sensors. How the perception of time and the content of a sensory experience interact to give rise to this unique percept is unclear. A few pieces of empirical evidence show the existence of this interaction; for example, the speed of a moving object or the number of items displayed on a computer screen can bias the perceived duration of those objects. However, to what extent the coding of time is embedded within the coding of the stimulus itself, is sustained by the activity of the same or distinct neural populations, and is subserved by similar or distinct neural mechanisms is far from clear. Addressing these puzzles represents a way to gain insight into the mechanism(s) through which the brain represents the passage of time. In my talk I will present behavioral and neuroimaging studies to show how concurrent changes of visual stimulus duration, speed, visual contrast and numerosity shape and modulate the brain’s and pupil’s responses and, in the case of numerosity and time, influence the topographic organization of these features along the cortical visual hierarchy.

Seminar · Neuroscience · Recording

Convex neural codes in recurrent networks and sensory systems

Vladimir Itskov
The Pennsylvania State University
Dec 13, 2022

Neural activity in many sensory systems is organized on low-dimensional manifolds by means of convex receptive fields. Neural codes in these areas are constrained by this organization, as not every neural code is compatible with convex receptive fields. The same codes are also constrained by the structure of the underlying neural network. In my talk I will attempt to provide answers to the following natural questions: (i) How do recurrent circuits generate codes that are compatible with the convexity of receptive fields? (ii) How can we utilize the constraints imposed by convex receptive fields to understand the underlying stimulus space? To answer question (i), we describe the combinatorics of the steady states and fixed points of recurrent networks that satisfy Dale’s law. It turns out that the combinatorics of the fixed points are completely determined by two distinct conditions: (a) the connectivity graph of the network and (b) a spectral condition on the synaptic matrix. We give a characterization of exactly which features of connectivity determine the combinatorics of the fixed points. We also find that a generic recurrent network that satisfies Dale's law outputs convex combinatorial codes. To address question (ii), I will describe methods based on ideas from topology and geometry that take advantage of the convex receptive field properties to infer the dimension of (non-linear) neural representations. I will illustrate the first method by inferring basic features of the neural representations in the mouse olfactory bulb.
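The link between convex receptive fields and the resulting combinatorial code can be illustrated with a toy 1-D example (the intervals below are arbitrary):

```python
import numpy as np

# Toy convex code in 1-D: each neuron's receptive field is an interval of
# the stimulus line, and the code is the set of distinct on/off patterns.
# The pattern (1, 0, 1) never occurs for these intervals: a stimulus in
# fields 1 and 3 but not 2 is incompatible with convexity here.
intervals = [(0.0, 0.4), (0.2, 0.7), (0.5, 1.0)]   # convex receptive fields
stimuli = np.linspace(0.0, 1.0, 101)

codewords = set()
for x in stimuli:
    codewords.add(tuple(int(lo <= x <= hi) for lo, hi in intervals))

print(sorted(codewords))
```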

Seminar · Neuroscience

Intrinsic Geometry of a Combinatorial Sensory Neural Code for Birdsong

Tim Gentner
University of California, San Diego, USA
Nov 8, 2022

Understanding the nature of neural representation is a central challenge of neuroscience. One common approach to this challenge is to compute receptive fields by correlating neural activity with external variables drawn from sensory signals. But these receptive fields are only meaningful to the experimenter, not the organism, because only the experimenter has access to both the neural activity and knowledge of the external variables. To understand neural representation more directly, recent methodological advances have sought to capture the intrinsic geometry of sensory driven neural responses without external reference. To date, this approach has largely been restricted to low-dimensional stimuli as in spatial navigation. In this talk, I will discuss recent work from my lab examining the intrinsic geometry of sensory representations in a model vocal communication system, songbirds. From the assumption that sensory systems capture invariant relationships among stimulus features, we conceptualized the space of natural birdsongs to lie on the surface of an n-dimensional hypersphere. We computed composite receptive field models for large populations of simultaneously recorded single neurons in the auditory forebrain and show that solutions to these models define convex regions of response probability in the spherical stimulus space. We then define a combinatorial code over the set of receptive fields, realized in the moment-to-moment spiking and non-spiking patterns across the population, and show that this code can be used to reconstruct high-fidelity spectrographic representations of natural songs from evoked neural responses. Notably, we find that topological relationships among combinatorial codewords directly mirror acoustic relationships among songs in the spherical stimulus space. That is, the time-varying pattern of co-activity across the neural population expresses an intrinsic representational geometry that mirrors the natural, extrinsic stimulus space.  
Combinatorial patterns across this intrinsic space directly represent complex vocal communication signals, do not require computation of receptive fields, and are in a form, spike time coincidences, amenable to biophysical mechanisms of neural information propagation.

Seminar · Neuroscience · Recording

A multi-level account of hippocampal function in concept learning from behavior to neurons

Rob Mok
University of Cambridge
Nov 1, 2022

A complete neuroscience requires multi-level theories that address phenomena ranging from higher-level cognitive behaviors to activities within a cell. Unfortunately, we don't have cognitive models of behavior whose components can be decomposed into the neural dynamics that give rise to behavior, leaving an explanatory gap. Here, we decompose SUSTAIN, a clustering model of concept learning, into neuron-like units (SUSTAIN-d; decomposed). Instead of abstract constructs (clusters), SUSTAIN-d has a pool of neuron-like units. With millions of units, a key challenge is how to bridge from abstract constructs such as clusters to neurons, whilst retaining high-level behavior. How does the brain coordinate neural activity during learning? Inspired by algorithms that capture flocking behavior in birds, we introduce a neural flocking learning rule that coordinates units to collectively form higher-level mental constructs ("virtual clusters") and neural representations (concept, place and grid cell-like assemblies), and that parallels recurrent hippocampal activity. The decomposed model shows how brain-scale neural populations coordinate to form assemblies encoding concept and spatial representations, and why many neurons are required for robust performance. Our account provides a multi-level explanation for how cognition and symbol-like representations are supported by coordinated neural assemblies formed through learning.

Seminar · Neuroscience

Multi-level theory of neural representations in the era of large-scale neural recordings: Task-efficiency, representation geometry, and single neuron properties

SueYeon Chung
NYU/Flatiron
Sep 15, 2022

A central goal in neuroscience is to understand how orchestrated computations in the brain arise from the properties of single neurons and networks of such neurons. Answering this question requires theoretical advances that shine light into the ‘black box’ of representations in neural circuits. In this talk, we will demonstrate theoretical approaches that help describe how cognitive and behavioral task implementations emerge from the structure in neural populations and from biologically plausible neural networks. First, we will introduce an analytic theory that connects geometric structures that arise from neural responses (i.e., neural manifolds) to the neural population’s efficiency in implementing a task. In particular, this theory describes a perceptron’s capacity for linearly classifying object categories based on the underlying neural manifolds’ structural properties. Next, we will describe how such methods can, in fact, open the ‘black box’ of distributed neuronal circuits in a range of experimental neural datasets. In particular, our method overcomes the limitations of traditional dimensionality reduction techniques, as it operates directly on the high-dimensional representations, rather than relying on low-dimensionality assumptions for visualization. Furthermore, this method allows for simultaneous multi-level analysis, by measuring geometric properties in neural population data, and estimating the amount of task information embedded in the same population. These geometric frameworks are general and can be used across different brain areas and task modalities, as demonstrated in our work and that of others, ranging from the visual cortex to parietal cortex to hippocampus, and from calcium imaging to electrophysiology to fMRI datasets.
Finally, we will discuss our recent efforts to fully extend this multi-level description of neural populations, by (1) investigating how single neuron properties shape the representation geometry in early sensory areas, and by (2) understanding how task-efficient neural manifolds emerge in biologically-constrained neural networks. By extending our mathematical toolkit for analyzing representations underlying complex neuronal networks, we hope to contribute to the long-term challenge of understanding the neuronal basis of tasks and behaviors.
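The linear-readout idea underlying manifold capacity can be sketched with a toy perceptron classifying points drawn from two object "manifolds" (here simply two Gaussian clouds; all sizes and separations are arbitrary illustrative choices):

```python
import numpy as np

# Toy linear readout of two object "manifolds": points sampled from two
# well-separated Gaussian clouds in 50-D are classified by a perceptron.
rng = np.random.default_rng(1)
d, n = 50, 200
center = rng.standard_normal(d)
X = np.vstack([center + 0.3 * rng.standard_normal((n, d)),    # manifold A
               -center + 0.3 * rng.standard_normal((n, d))])  # manifold B
y = np.hstack([np.ones(n), -np.ones(n)])

w = np.zeros(d)
for _ in range(10):                    # perceptron learning
    for xi, yi in zip(X, y):
        if yi * (w @ xi) <= 0:         # misclassified point -> update
            w += yi * xi

accuracy = np.mean(np.sign(X @ w) == y)
print(accuracy)   # well-separated clouds are linearly classifiable
```

The theory in the talk characterizes how many such manifolds a linear readout can separate as a function of their geometry (radius, dimension, correlations), which this toy example does not attempt.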

Seminar · Neuroscience · Recording

Drifting assemblies for persistent memory: Neuron transitions and unsupervised compensation

Raoul-Martin Memmesheimer
University of Bonn, Germany
Jun 28, 2022

Change is ubiquitous in living beings. In particular, the connectome and neural representations can change. Nevertheless, behaviors and memories often persist over long times. In a standard model, associative memories are represented by assemblies of strongly interconnected neurons. For faithful storage these assemblies are assumed to consist of the same neurons over time. We propose a contrasting memory model with complete temporal remodeling of assemblies, based on experimentally observed changes of synapses and neural representations. The assemblies drift freely as noisy autonomous network activity or spontaneous synaptic turnover induce neuron exchange. The exchange can be described analytically by reduced, random walk models derived from spiking neural network dynamics or from first principles. The gradual exchange allows activity-dependent and homeostatic plasticity to conserve the representational structure and keep inputs, outputs and assemblies consistent. This leads to persistent memory. Our findings explain recent experimental results on the temporal evolution of fear memory representations and suggest that memory systems need to be understood in their completeness, as individual parts may constantly change.
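The reduced random-walk picture of gradual neuron exchange can be sketched in a toy simulation (pool size, assembly size, and the one-in-one-out exchange rule are invented for illustration):

```python
import numpy as np

# Toy assembly drift: a fixed-size assembly exchanges one member per step
# with the non-member pool. The assembly persists (constant size) while
# its overlap with the original membership decays.
rng = np.random.default_rng(0)
n_neurons, size, steps = 1000, 100, 2000
assembly = set(range(size))
original = set(assembly)

overlaps = []
for _ in range(steps):
    leaver = int(rng.choice(sorted(assembly)))
    joiner = int(rng.choice(sorted(set(range(n_neurons)) - assembly)))
    assembly.discard(leaver)
    assembly.add(joiner)
    overlaps.append(len(assembly & original) / size)

# the assembly survives as a functional unit even after full remodeling
print(len(assembly), overlaps[0], overlaps[-1])
```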

Seminar · Neuroscience · Recording

Neural Circuit Mechanisms of Pattern Separation in the Dentate Gyrus

Alessandro Galloni
Rutgers University
May 31, 2022

The ability to discriminate different sensory patterns by disentangling their neural representations is an important property of neural networks. While a variety of learning rules are known to be highly effective at fine-tuning synapses to achieve this, less is known about how different cell types in the brain can facilitate this process by providing architectural priors that bias the network towards sparse, selective, and discriminable representations. We studied this by simulating a neuronal network modelled on the dentate gyrus—an area characterised by sparse activity associated with pattern separation in spatial memory tasks. To test the contribution of different cell types to these functions, we presented the model with a wide dynamic range of input patterns and systematically added or removed different circuit elements. We found that recruiting feedback inhibition indirectly via recurrent excitatory neurons proved particularly helpful in disentangling patterns, and show that simple alignment principles for excitatory and inhibitory connections are a highly effective strategy.

Seminar · Neuroscience

Neural Representations of Social Homeostasis

Kay M. Tye
HHMI Investigator, and Wylie Vale Chair, The Salk Institute for Biological Studies, SNL-KT
May 16, 2022

How does our brain rapidly determine if something is good or bad? How do we know our place within a social group? How do we know how to behave appropriately in dynamic environments with ever-changing conditions? The Tye Lab is interested in understanding how neural circuits important for driving positive and negative motivational valence (seeking pleasure or avoiding punishment) are anatomically, genetically and functionally arranged. We study the neural mechanisms that underlie a wide range of behaviors ranging from learned to innate, including social, feeding, reward-seeking and anxiety-related behaviors. We have also become interested in “social homeostasis” -- how our brains establish a preferred set-point for social contact, and how this maintains stability within a social group. How are these circuits interconnected with one another, and how are competing mechanisms orchestrated on a neural population level? We employ optogenetic, electrophysiological, electrochemical, pharmacological and imaging approaches to probe these circuits during behavior.

Seminar · Neuroscience · Recording

Time as a continuous dimension in natural and artificial networks

Marc Howard
Boston University
May 3, 2022

Neural representations of time are central to our understanding of the world around us. I review cognitive, neurophysiological and theoretical work that converges on three simple ideas. First, the time of past events is remembered via populations of neurons with a continuum of functional time constants. Second, these time constants evenly tile the log time axis. This results in a neural Weber-Fechner scale for time which can support behavioral Weber-Fechner laws and characteristic behavioral effects in memory experiments. Third, these populations appear as dual pairs: one type of population contains cells that change their firing rate monotonically over time, while a second type has circumscribed temporal receptive fields. These ideas can be used to build artificial neural networks that have novel properties. Of particular interest, a convolutional neural network built using these principles can generalize to arbitrary rescaling of its inputs. That is, after learning to perform a classification task on a time series presented at one speed, it successfully classifies stimuli presented slowed down or sped up. This result illustrates the point that this confluence of ideas originating in cognitive psychology and measured in the mammalian brain could have wide-reaching impacts on AI research.
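The second idea, time constants evenly tiling the log time axis, can be verified in a few lines: rescaling the elapsed time merely shifts the population pattern along the log-tau axis (the values below are illustrative):

```python
import numpy as np

# Units decay as exp(-t/tau), with time constants evenly tiling the log
# axis (here, powers of 2). Doubling the elapsed time shifts the
# population pattern by exactly one unit along the log-tau axis: the
# scale invariance behind a neural Weber-Fechner law.
taus = 2.0 ** np.arange(12)              # 1, 2, 4, ... evenly spaced in log2
def population(t):
    return np.exp(-t / taus)             # activity t seconds after an event

fast = population(4.0)                   # event 4 s in the past
slow = population(8.0)                   # event 8 s in the past
print(bool(np.allclose(slow[1:], fast[:-1])))   # same pattern, shifted by one
```

This shift-instead-of-distort property is what lets a network built on such a basis generalize across input speeds.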

Seminar · Cognition · Recording

Eliminativism about Neural Representation

Inês Hipólito
Humboldt-Universität zu Berlin, Berlin School of Mind and Brain
Apr 11, 2022
Seminar · Neuroscience · Recording

Spatial uncertainty provides a unifying account of navigation behavior and grid field deformations

Yul Kang
Lengyel lab, Cambridge University
Apr 5, 2022

To localize ourselves in an environment for spatial navigation, we rely on vision and self-motion inputs, which only provide noisy and partial information. It is unknown how the resulting uncertainty affects navigation behavior and neural representations. Here we show that spatial uncertainty underlies key effects of environmental geometry on navigation behavior and grid field deformations. We develop an ideal observer model, which continually updates probabilistic beliefs about its allocentric location by optimally combining noisy egocentric visual and self-motion inputs via Bayesian filtering. This model directly yields predictions for navigation behavior and also predicts neural responses under population coding of location uncertainty. We simulate this model numerically under manipulations of a major source of uncertainty, environmental geometry, and support our simulations by analytic derivations for its most salient qualitative features. We show that our model correctly predicts a wide range of experimentally observed effects of the environmental geometry and its change on homing response distribution and grid field deformation. Thus, our model provides a unifying, normative account for the dependence of homing behavior and grid fields on environmental geometry, and identifies the unavoidable uncertainty in navigation as a key factor underlying these diverse phenomena.
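The optimal cue combination at the heart of the ideal observer can be sketched as a 1-D Kalman filter, a special case of the Bayesian filtering described above (all noise levels are invented for illustration):

```python
import numpy as np

# 1-D Kalman filter: a Gaussian belief over position is propagated with
# noisy self-motion and corrected by noisy visual observations; the
# posterior variance quantifies the location uncertainty.
rng = np.random.default_rng(0)
q, r = 0.1, 0.5            # self-motion (process) and visual (obs.) variance
mu, var = 0.0, 1.0         # current Gaussian belief about position
true_pos = 0.0

for _ in range(100):
    step = rng.normal(0.0, 0.2)                  # intended movement
    true_pos += step + rng.normal(0.0, np.sqrt(q))
    mu, var = mu + step, var + q                 # predict from self-motion
    z = true_pos + rng.normal(0.0, np.sqrt(r))   # noisy visual input
    k = var / (var + r)                          # Kalman gain
    mu, var = mu + k * (z - mu), (1.0 - k) * var # optimal cue combination

print(round(var, 3))   # settles well below the prior variance of 1.0
```

In the actual model the manipulation of environmental geometry changes the reliability of the visual input, and the resulting uncertainty is what shapes both homing behavior and grid field deformations.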

Seminar · Neuroscience · Recording

Visualization and manipulation of our perception and imagery by BCI

Takufumi Yanagisawa
Osaka University
Mar 31, 2022

We have been developing Brain-Computer Interfaces (BCIs) using electrocorticography (ECoG) [1], recorded by electrodes implanted on the brain surface, and magnetoencephalography (MEG) [2], which records cortical activity non-invasively, for clinical applications. The invasive BCI using ECoG has been applied to severely paralyzed patients to restore communication and motor function. The non-invasive BCI using MEG has been applied as a neurofeedback tool to modulate pathological neural activity in order to treat some neuropsychiatric disorders. Although these techniques have been developed for clinical application, BCI is also an important tool for investigating neural function. For example, a motor BCI records neural activity in a part of the motor cortex to generate movements of external devices. Although our motor system is a complex system comprising motor cortex, basal ganglia, cerebellum, spinal cord and muscles, the BCI allows us to simplify the motor system, with exactly known inputs, outputs, and relations between them. We can investigate the motor system by manipulating the parameters of the BCI system. Recently, we have been developing BCIs to visualize and manipulate our perception and mental imagery. Although these BCIs have been developed for clinical application, they will also be useful for understanding the neural systems that generate perception and imagery. In this talk, I will introduce our study of phantom limb pain [3], which is controlled by MEG-BCI, and the development of a communication BCI using ECoG [4], which enables subjects to visualize the contents of their mental imagery. I will also discuss how much we can control the cortical activities that represent our perception and mental imagery. These examples demonstrate that BCI is a promising tool to visualize and manipulate perception and imagery and to understand our consciousness.

References
1. Yanagisawa, T., Hirata, M., Saitoh, Y., Kishima, H., Matsushita, K., Goto, T., Fukuma, R., Yokoi, H., Kamitani, Y., and Yoshimine, T. (2012). Electrocorticographic control of a prosthetic arm in paralyzed patients. Ann Neurol 71, 353-361.
2. Yanagisawa, T., Fukuma, R., Seymour, B., Hosomi, K., Kishima, H., Shimizu, T., Yokoi, H., Hirata, M., Yoshimine, T., Kamitani, Y., et al. (2016). Induced sensorimotor brain plasticity controls pain in phantom limb patients. Nature Communications 7, 13209.
3. Yanagisawa, T., Fukuma, R., Seymour, B., Tanaka, M., Hosomi, K., Yamashita, O., Kishima, H., Kamitani, Y., and Saitoh, Y. (2020). BCI training to move a virtual hand reduces phantom limb pain: A randomized crossover trial. Neurology 95, e417-e426.
4. Fukuma, R., Yanagisawa, T., Nishimoto, S., Sugano, H., Tamura, K., Yamamoto, S., Iimura, Y., Fujita, Y., Oshino, S., Tani, N., Koide-Majima, N., Kamitani, Y., and Kishima, H. (2022). Voluntary control of semantic neural representations by imagery with conflicting visual stimulation. arXiv:2112.01223.

Seminar · Neuroscience · Recording

NMC4 Short Talk: Neural Representation: Bridging Neuroscience and Philosophy

Andrew Richmond (he/him)
Columbia University
Dec 1, 2021

We understand the brain in representational terms. E.g., we understand spatial navigation by appealing to the spatial properties that hippocampal cells represent, and the operations hippocampal circuits perform on those representations (Moser et al., 2008). Philosophers have been concerned with the nature of representation, and recently neuroscientists entered the debate, focusing specifically on neural representations (Baker & Lansdell, n.d.; Egan, 2019; Piccinini & Shagrir, 2014; Poldrack, 2020; Shagrir, 2001). We want to know what representations are, how to discover them in the brain, and why they matter so much for our understanding of the brain. Those questions are framed in a traditional philosophical way: we start with explanations that use representational notions, and to more deeply understand those explanations we ask, what are representations — what is the definition of representation? What is it for some bit of neural activity to be a representation? I argue that there is an alternative, and much more fruitful, approach. Rather than asking what representations are, we should ask what the use of representational *notions* allows us to do in neuroscience — what thinking in representational terms helps scientists do or explain. I argue that this framing offers more fruitful ground for interdisciplinary collaboration by distinguishing the philosophical concerns that have a place in neuroscience from those that don’t (namely the definitional or metaphysical questions about representation). And I argue for a particular view of representational notions: they allow us to impose the structure of one domain onto another as a model of its causal structure. So, e.g., thinking about the hippocampus as representing spatial properties is a way of taking structures in those spatial properties, and projecting those structures (and algorithms that would implement them) onto the brain as models of its causal structure.

Seminar · Neuroscience · Recording

NMC4 Short Talk: Sensory intermixing of mental imagery and perception

Nadine Dijkstra
Wellcome Centre for Human Neuroimaging
Dec 1, 2021

Several lines of research have demonstrated that internally generated sensory experience - such as during memory, dreaming and mental imagery - activates similar neural representations as externally triggered perception. This overlap raises a fundamental challenge: how is the brain able to keep apart signals reflecting imagination and reality? In a series of online psychophysics experiments combined with computational modelling, we investigated to what extent imagination and perception are confused when the same content is simultaneously imagined and perceived. We found that simultaneous congruent mental imagery consistently led to an increase in perceptual presence responses, and that congruent perceptual presence responses were in turn associated with a more vivid imagery experience. Our findings can be best explained by a simple signal detection model in which imagined and perceived signals are added together. Perceptual reality monitoring can then easily be implemented by evaluating whether this intermixed signal is strong or vivid enough to pass a ‘reality threshold’. Our model suggests that, in contrast to self-generated sensory changes during movement, our brain does not discount self-generated sensory signals during mental imagery. This has profound implications for our understanding of reality monitoring and perception in general.
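The additive signal detection account can be sketched in a few lines (the signal strengths and "reality threshold" below are arbitrary illustrative values, not fitted parameters from the study):

```python
import numpy as np

# Additive signal detection sketch: imagined and perceived signals sum,
# and "present" is reported when the mix exceeds a reality threshold,
# so congruent imagery inflates presence responses.
rng = np.random.default_rng(0)
n, threshold = 100_000, 1.0
noise = rng.normal(0.0, 1.0, n)        # trial-to-trial sensory noise
stimulus = 0.5                         # weak perceptual signal, every trial
imagery = 0.5                          # internal signal on imagery trials

p_present_alone = np.mean(stimulus + noise > threshold)
p_present_imagery = np.mean(stimulus + imagery + noise > threshold)
print(p_present_alone, p_present_imagery)   # imagery boosts "present" rate
```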

Seminar · Neuroscience · Recording

NMC4 Short Talk: Untangling Contributions of Distinct Features of Images to Object Processing in Inferotemporal Cortex

Hanxiao Lu
Yale University
Nov 30, 2021

How do humans perceive everyday objects with varied features and arrive at these seemingly intuitive and effortless categorical representations? Prior literature focusing on the role of the inferotemporal region (IT) has revealed object category clustering that is consistent with the semantic predefined structure (superordinate, ordinate, subordinate). It has however been debated whether the neural signals in the IT regions are a reflection of such categorical hierarchy [Wen et al., 2018; Bracci et al., 2017]. Visual attributes of images that correlated with semantic and category dimensions may have confounded these prior results. Our study aimed to address this debate by building and comparing models using the DNN AlexNet, to explain the variance in the representational dissimilarity matrix (RDM) of neural signals in the IT region. We found that mid- and high-level perceptual attributes of the DNN model contribute the most to neural RDMs in the IT region. Semantic categories, as in the predefined structure, were moderately correlated with mid to high DNN layers (r = 0.24–0.36). Variance partitioning analysis also showed that the IT neural representations were mostly explained by DNN layers, while semantic categorical RDMs brought little additional information. In light of these results, we propose that future work should focus more on the specific role IT plays in facilitating the extraction and coding of visual features that lead to the emergence of categorical conceptualizations.
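The RDM comparison underlying this kind of analysis can be sketched on synthetic data (a generic representational-similarity toy, not the study's AlexNet pipeline; all sizes are arbitrary):

```python
import numpy as np

# Minimal RSA sketch: build a "neural" RDM (1 - correlation between
# condition patterns) and compare it to a model RDM by correlating their
# upper triangles. Responses are synthesized from the model features.
rng = np.random.default_rng(0)
n_cond, n_feat, n_vox = 12, 8, 100
features = rng.standard_normal((n_cond, n_feat))           # model features
responses = features @ rng.standard_normal((n_feat, n_vox))
responses += 0.1 * rng.standard_normal((n_cond, n_vox))    # measurement noise

def rdm(patterns):
    return 1.0 - np.corrcoef(patterns)                     # dissimilarity

iu = np.triu_indices(n_cond, k=1)
r = np.corrcoef(rdm(responses)[iu], rdm(features)[iu])[0, 1]
print(round(r, 2))   # high here, since the model generated the responses
```

Variance partitioning then extends this by comparing how much unique RDM variance each of several model RDMs explains, which this toy omits.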

Seminar · Neuroscience · Recording

Neural representations of space in the hippocampus of a food-caching bird

Hannah Payne
Aronov lab, Columbia University
Nov 30, 2021

Spatial memory in vertebrates requires brain regions homologous to the mammalian hippocampus. Between vertebrate clades, however, these regions are anatomically distinct and appear to produce different spatial patterns of neural activity. We asked whether hippocampal activity is fundamentally different even between distant vertebrates that share a strong dependence on spatial memory. We studied tufted titmice – food-caching birds capable of remembering many concealed food locations. We found mammalian-like neural activity in the titmouse hippocampus, including sharp-wave ripples and anatomically organized place cells. In a non-food-caching bird species, spatial firing was less informative and was exhibited by fewer neurons. These findings suggest that hippocampal circuit mechanisms are similar between birds and mammals, but that the resulting patterns of activity may vary quantitatively with species-specific ethological needs.

SeminarPsychology

Age-related dedifferentiation across representational levels and their relation to memory performance

Malte Kobelt
Ruhr-University Bochum
Oct 6, 2021

Episodic memory performance decreases with advancing age. According to theoretical models, such memory decline might be a consequence of age-related reductions in the ability to form distinct neural representations of our past. In this talk, I want to present our new age-comparative fMRI study investigating age-related neural dedifferentiation across different representational levels. By combining univariate analyses and searchlight pattern similarity analyses, we found that older adults show reduced category-selective processing in higher visual areas, less specific item representations in occipital regions, and less stable item representations. Dedifferentiation at all these representational levels was related to memory performance, with item specificity being the strongest contributor. Overall, our results emphasize that age-related dedifferentiation can be observed across the entire cortical hierarchy, and that it may selectively impair memory performance depending on the memory task.

SeminarNeuroscienceRecording

Encoding and perceiving the texture of sounds: auditory midbrain codes for recognizing and categorizing auditory texture and for listening in noise

Monty Escabi
University of Connecticut
Sep 30, 2021

Natural soundscapes, such as those of a forest, a busy restaurant, or a busy intersection, are generally composed of a cacophony of sounds that the brain needs to interpret either independently or collectively. In certain instances sounds - such as from moving cars, sirens, and people talking - are perceived in unison and recognized collectively as a single sound (e.g., city noise). In other instances, such as in the cocktail party problem, multiple sounds compete for attention, so that the surrounding background noise (e.g., speech babble) interferes with the perception of a single sound source (e.g., a single talker). I will describe results from my lab on the perception and neural representation of auditory textures. Textures, such as from a babbling brook, restaurant noise, or speech babble, are stationary sounds consisting of multiple independent sound sources that can be quantitatively defined by summary statistics of an auditory model (McDermott & Simoncelli 2011). How and where in the auditory system summary statistics are represented, and which neural codes contribute to their perception, remain largely unknown. Using high-density multi-channel recordings from the auditory midbrain of unanesthetized rabbits and complementary perceptual studies on human listeners, I will first describe neural and perceptual strategies for encoding and perceiving auditory textures. I will demonstrate how distinct statistics of sounds, including the sound spectrum and high-order statistics related to the temporal and spectral correlation structure of sounds, contribute to texture perception and are reflected in neural activity. Using decoding methods, I will then demonstrate how various low- and high-order neural response statistics can differentially contribute to a variety of auditory tasks, including texture recognition, discrimination, and categorization. Finally, I will show examples from our recent studies on how high-order sound statistics and the accompanying neural activity underlie difficulties in recognizing speech in background noise.
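
As a rough illustration of the summary-statistic idea from McDermott & Simoncelli (2011), the sketch below computes per-band marginal moments and cross-band envelope correlations for a toy "cochleagram". The band construction and the small statistic set are simplified assumptions, not the full texture model:

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy "cochleagram": envelopes of 4 frequency bands driven by a shared source.
shared = rng.standard_normal(5000)
env = np.abs(np.stack([shared + 0.3 * rng.standard_normal(5000)
                       for _ in range(4)]))

def texture_stats(env):
    """Per-band marginal moments plus cross-band envelope correlations,
    a small subset of the McDermott & Simoncelli statistic family."""
    mean = env.mean(axis=1)
    var = env.var(axis=1)
    centered = env - mean[:, None]
    skew = (centered**3).mean(axis=1) / var**1.5
    corr = np.corrcoef(env)             # pairwise band-envelope correlations
    return mean, var, skew, corr

mean, var, skew, corr = texture_stats(env)
```

Two textures are "the same" under this account when their summary statistics match, even if their waveforms differ sample by sample.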

SeminarNeuroscienceRecording

Expectation of self-generated sounds drives predictive processing in mouse auditory cortex

Nick Audette
Schneider lab, New York University
Sep 21, 2021

Sensory stimuli are often predictable consequences of one’s actions, and behavior exerts a correspondingly strong influence over sensory responses in the brain. Closed-loop experiments with the ability to control the sensory outcomes of specific animal behaviors have revealed that neural responses to self-generated sounds are suppressed in the auditory cortex, suggesting a role for prediction in local sensory processing. However, it is unclear whether this phenomenon derives from a precise movement-based prediction or how it affects the neural representation of incoming stimuli. We address these questions by designing a behavioral paradigm where mice learn to expect the predictable acoustic consequences of a simple forelimb movement. Neuronal recordings from auditory cortex revealed suppression of neural responses that was strongest for the expected tone and specific to the time of the sound-associated movement. Predictive suppression in the auditory cortex was layer-specific, preceded by the arrival of movement information, and unaffected by behavioral relevance or reward association. These findings illustrate that expectation, learned through motor-sensory experience, drives layer-specific predictive processing in the mouse auditory cortex.

SeminarNeuroscience

Behavioral and neurobiological mechanisms of social cooperation

Yina Ma
Beijing Normal University
Jun 29, 2021

Human society operates on large-scale cooperation and shared norms of fairness. However, individual differences in cooperation and incentives to free-ride on others' cooperation make large-scale cooperation fragile and can lead to reduced social welfare. Deciphering the neural codes representing potential rewards/costs for self and others is crucial for understanding social decision-making and cooperation. I will first talk about how we integrate computational modeling with functional magnetic resonance imaging to investigate the neural representation of social value and its modulation by oxytocin, a nine-amino-acid neuropeptide, in participants evaluating monetary allocations to self and other (self-other allocations). Then I will introduce our recent studies examining the neurobiological mechanisms underlying intergroup decision-making using hyper-scanning, and show how we alter intergroup decisions using psychological manipulations and pharmacological challenge. Finally, I will share our on-going project revealing how individual cooperation spreads through human social networks. Our results help to better understand the neurocomputational mechanisms underlying interpersonal and intergroup decision-making.

SeminarNeuroscienceRecording

Higher cognitive resources for efficient learning

Aurelio Cortese
ATR
Jun 17, 2021

A central issue in reinforcement learning (RL) is the ‘curse-of-dimensionality’, arising when the degrees-of-freedom are much larger than the number of training samples. In such circumstances, the learning process becomes too slow to be plausible. In the brain, higher cognitive functions (such as abstraction or metacognition) may be part of the solution by generating low dimensional representations on which RL can operate. In this talk I will discuss a series of studies in which we used functional magnetic resonance imaging (fMRI) and computational modeling to investigate the neuro-computational basis of efficient RL. We found that people can learn remarkably complex task structures non-consciously, but also that - intriguingly - metacognition appears tightly coupled to this learning ability. Furthermore, when people use an explicit (conscious) policy to select relevant information, learning is accelerated by abstractions. At the neural level, prefrontal cortex subregions are differentially involved in separate aspects of learning: dorsolateral prefrontal cortex pairs with metacognitive processes, while ventromedial prefrontal cortex with valuation and abstraction. I will discuss the implications of these findings, in particular new questions on the function of metacognition in adaptive behavior and the link with abstraction.

SeminarPsychology

Getting to know you: emerging neural representations during face familiarization

Gyula Kovács
Friedrich-Schiller University Jena
Jun 16, 2021

The successful recognition of familiar persons is critical for social interactions. Despite extensive research on the neural representations of familiar faces, we know little about how such representations unfold as someone becomes familiar. In three EEG experiments, we elucidated how representations of face familiarity and identity emerge from different qualities of familiarization: brief perceptual exposure (Experiment 1), extensive media familiarization (Experiment 2) and real-life personal familiarization (Experiment 3). Time-resolved representational similarity analysis revealed that familiarization quality has a profound impact on representations of face familiarity: they were strongly visible after personal familiarization, weaker after media familiarization, and absent after perceptual familiarization. Across all experiments, we found no enhancement of face identity representation, suggesting that familiarity and identity representations emerge independently during face familiarization. Our results emphasize the importance of extensive, real-life familiarization for the emergence of robust face familiarity representations, constraining models of face perception and recognition memory.

SeminarNeuroscienceRecording

Deciding to stop deciding: A cortical-subcortical circuit for forming and terminating a decision

Michael Shadlen
Columbia University
Jun 9, 2021

The neurobiology of decision-making is informed by neurons capable of representing information over time scales of seconds. Such neurons were initially characterized in studies of spatial working memory, motor planning (e.g., Richard Andersen lab) and spatial attention. For decision-making, such neurons emit graded spike rates that represent the accumulated evidence for or against a choice. They establish the conduit between the formation of the decision and its completion, usually in the form of a commitment to an action, even if provisional. Indeed, many decisions appear to arise through an accumulation of noisy samples of evidence to a terminating threshold, or bound. Previous studies show that single neurons in the lateral intraparietal area (LIP) represent the accumulation of evidence when monkeys make decisions about the direction of random dot motion (RDM) and express their decision with a saccade to the neuron's preferred target. The mechanism of termination (the bound) is elusive. LIP is interconnected with other brain regions that also display decision-related activity. Whether these areas play roles in the decision process that are similar to or fundamentally different from that of LIP is unclear. I will present new unpublished experiments that begin to resolve these issues by recording from populations of neurons simultaneously in LIP and one of its primary targets, the superior colliculus (SC), while monkeys make difficult perceptual decisions.
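
The accumulation-to-bound mechanism can be simulated in a few lines. This is a generic drift-diffusion sketch with arbitrary drift, bound, and noise values, not a model of the LIP/SC recordings discussed:

```python
import numpy as np

def decide(drift, bound=1.0, noise=1.0, dt=1e-3, max_t=5.0, rng=None):
    """Accumulate noisy evidence until the bound terminates the decision.
    Returns (choice, decision_time); choice is +1 (upper) or -1 (lower)."""
    rng = rng if rng is not None else np.random.default_rng()
    x, t = 0.0, 0.0
    while abs(x) < bound and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x > 0 else -1), t

rng = np.random.default_rng(2)
trials = [decide(drift=1.5, rng=rng) for _ in range(500)]
accuracy = np.mean([choice == 1 for choice, _ in trials])
mean_rt = np.mean([t for _, t in trials])
```

In this class of models the bound jointly determines accuracy and reaction time: raising it trades speed for accuracy, which is why the termination mechanism matters.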

SeminarNeuroscience

Learning to perceive with new sensory signals

Marko Nardini
Durham University
May 18, 2021

I will begin by describing recent research taking a new, model-based approach to perceptual development. This approach uncovers fundamental changes in information processing underlying the protracted development of perception, action, and decision-making in childhood. For example, integration of multiple sensory estimates via reliability-weighted averaging – widely used by adults to improve perception – is often not seen until surprisingly late into childhood, as assessed by both behaviour and neural representations. This approach forms the basis for a newer question: the scope for the nervous system to deploy useful computations (e.g. reliability-weighted averaging) to optimise perception and action using newly-learned sensory signals provided by technology. Our initial model system is augmenting visual depth perception with devices translating distance into auditory or vibro-tactile signals. This problem has immediate applications to people with partial vision loss, but the broader question concerns our scope to use technology to tune in to any signal not available to our native biological receptors. I will describe initial progress on this problem, and our approach to operationalising what it might mean to adopt a new signal comparably to a native sense. This will include testing for its integration (weighted averaging) alongside the native senses, assessing the level at which this integration happens in the brain, and measuring the degree of ‘automaticity’ with which new signals are used, compared with native perception.
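
Reliability-weighted averaging has a standard closed form: weight each cue by its inverse variance. A minimal sketch with made-up vision/audio numbers (the cue values and variances are illustrative):

```python
import numpy as np

def fuse(estimates, variances):
    """Reliability-weighted (inverse-variance) cue combination. The fused
    variance is never larger than that of the most reliable single cue."""
    v = np.asarray(variances, float)
    w = (1.0 / v) / np.sum(1.0 / v)     # normalized inverse-variance weights
    fused = np.sum(w * np.asarray(estimates, float))
    fused_var = 1.0 / np.sum(1.0 / v)
    return fused, fused_var

# Hypothetical depth cues: vision says 10 (var 1), audio device says 12 (var 4).
est, var = fuse([10.0, 12.0], [1.0, 4.0])
# est = 0.8*10 + 0.2*12 = 10.4; var = 1/(1/1 + 1/4) = 0.8 < 1.0
```

The behavioural signature of integration is exactly this variance reduction: a fused estimate more precise than either sense alone.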

SeminarNeuroscienceRecording

Exploring the neural landscape of imagination and abstract spaces

Daniela Schiller
Mount Sinai
Apr 22, 2021

External cues imbued with significance can enhance the motivational state of an organism, trigger related memories and influence future planning and goal directed behavior. At the same time, internal thought and imaginings can moderate and counteract the impact of external motivational cues. The neural underpinnings of imagination have been largely opaque, due to the inherent inaccessibility of mental actions. The talk will describe studies utilizing imagination and tracking how its neural correlates bidirectionally interact with external motivational cues. Stimulus-response associative learning is only one form of memory organization. A more comprehensive and efficient organizational principle is the cognitive map. In the last part of the talk we will examine this concept in the case of abstract memories and social space. Social encounters provide opportunities to become intimate or estranged from others and to gain or lose power over them. The locations of others on the axes of power and affiliation can serve as reference points for our own position in the social space. Research is beginning to uncover the spatial-like neural representation of these social coordinates. We will discuss recent and growing evidence on utilizing the principles of the cognitive map across multiple domains, providing a systematic way of organizing memories to navigate life.

SeminarNeuroscienceRecording

Restless engrams: the origin of continually reconfiguring neural representations

Timothy O'Leary
University of Cambridge
Mar 4, 2021

During learning, populations of neurons alter their connectivity and activity patterns, enabling the brain to construct a model of the external world. Conventional wisdom holds that the durability of such a model is reflected in the stability of neural responses and the stability of synaptic connections that form memory engrams. However, recent experimental findings have challenged this idea, revealing that neural population activity in circuits involved in sensory perception, motor planning and spatial memory continually changes over time during familiar behavioural tasks. This continual change suggests significant redundancy in neural representations, with many circuit configurations providing equivalent function. I will describe recent work that explores the consequences of such redundancy for learning and for task representation. Despite large changes in neural activity, we find cortical responses in sensorimotor tasks admit a relatively stable readout at the population level. Furthermore, we find that redundancy in circuit connectivity can make a task easier to learn and compensate for deficiencies in biological learning rules. Finally, if neuronal connections are subject to an unavoidable level of turnover, the level of plasticity required to optimally maintain a memory is generally lower than the total change due to turnover itself, predicting continual reconfiguration of an engram.
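
The idea that activity can change substantially while a population-level readout stays stable has a simple geometric core: drift confined to the readout's null space. A toy sketch (the random weights and drift magnitudes are arbitrary choices, not fitted to data):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50
w = rng.standard_normal(n)              # fixed downstream readout weights
x = rng.standard_normal(n)              # day-0 population activity
x0, y0 = x.copy(), w @ x

# Let activity reconfigure over many sessions, but only within the readout's
# null space: the drift component along w is projected out at each step.
for _ in range(100):
    d = rng.standard_normal(n)
    x = x + 0.5 * (d - (d @ w) / (w @ w) * w)

drift_size = np.linalg.norm(x - x0) / np.linalg.norm(x0)  # large
readout_change = abs(w @ x - y0)                          # ~0
```

Single-neuron tuning here looks "restless", yet the downstream readout is untouched, which is one way redundancy can reconcile drift with stable behaviour.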

SeminarNeuroscience

Neural representation of pose and movement in parietal cortex and beyond

Jonathan Whitlock
Kavli Institute for Systems Neuroscience
Mar 2, 2021

Jonathan Whitlock is an associate professor of neuroscience at the Kavli Institute for Systems Neuroscience in Trondheim, Norway. His group combines high-density single-unit recordings with silicon probes and sub-millimeter 3D tracking to study the cortical representation of pose and movement in freely behaving rats. The lecture will introduce his group's work on neural tuning to pose and movement in parietal and motor areas, and will include more recent findings from primary visual, auditory and somatosensory areas.

SeminarNeuroscienceRecording

Linking dimensionality to computation in neural networks

Stefano Recanatesi
University of Washington
Dec 22, 2020

The link between behavior, learning and the underlying connectome is a fundamental open problem in neuroscience. In my talk I will show how it is possible to develop a theory that bridges across these three levels (animal behavior, learning and network connectivity) based on the geometrical properties of neural activity. The central tool in my approach is the dimensionality of neural activity. I will link animal complex behavior to the geometry of neural representations, specifically their dimensionality; I will then show how learning shapes changes in such geometrical properties and how local connectivity properties can further regulate them. As a result, I will explain how the complexity of neural representations emerges from both behavioral demands (top-down approach) and learning or connectivity features (bottom-up approach). I will build these results regarding neural dynamics and representations starting from the analysis of neural recordings, by means of theoretical and computational tools that blend dynamical systems, artificial intelligence and statistical physics approaches.
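
Dimensionality of neural activity is often summarized by the participation ratio of the covariance eigenvalues; this sketch on synthetic low- and high-dimensional data shows the measure (one common choice, not necessarily the exact quantity used in the talk):

```python
import numpy as np

def participation_ratio(activity):
    """Effective dimensionality from covariance eigenvalues:
    PR = (sum_i lambda_i)^2 / sum_i lambda_i^2."""
    lam = np.linalg.eigvalsh(np.cov(activity, rowvar=False))
    return lam.sum()**2 / np.sum(lam**2)

rng = np.random.default_rng(4)
# 1000 samples of 50 "neurons": activity confined to a 2-D subspace vs. full rank.
low_d = rng.standard_normal((1000, 2)) @ rng.standard_normal((2, 50))
high_d = rng.standard_normal((1000, 50))
```

`participation_ratio(low_d)` stays near 2 while `participation_ratio(high_d)` approaches 50, which is the sense in which task demands or connectivity can "compress" a representation.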

SeminarNeuroscience

Top-down Modulation in Human Visual Cortex

Mohamed Abdelhack
Washington University in St. Louis
Dec 16, 2020

Human vision flaunts a remarkable ability to recognize objects in the surrounding environment even in the absence of complete visual representation of these objects. This process is done almost intuitively, and it was not until scientists had to tackle this problem in computer vision that they noticed its complexity. While current advances in artificial vision systems have made great strides, exceeding human level in normal vision tasks, they have yet to achieve a similar level of robustness. One source of the brain's robustness is its extensive connectivity, which is not limited to a feedforward hierarchical pathway (as in current state-of-the-art deep convolutional neural networks) but also comprises recurrent and top-down connections. These connections allow the human brain to enhance the neural representations of degraded images in concordance with meaningful representations stored in memory. The mechanisms by which these different pathways interact are still not understood. In this seminar, studies concerning the effect of recurrent and top-down modulation on the neural representations resulting from viewing blurred images will be presented. Those studies attempted to uncover the role of recurrent and top-down connections in human vision. The results presented challenge the notion of predictive coding as a mechanism for top-down modulation of visual information during natural vision. They show that neural representation enhancement (sharpening) appears to be a more dominant process across different levels of the visual hierarchy. They also show that inference in visual recognition is achieved through a Bayesian process between incoming visual information and priors from deeper processing regions in the brain.

SeminarNeuroscienceRecording

Linking neural representations of space by multiple attractor networks in the entorhinal cortex and the hippocampus

Yoram Burak
Hebrew University
Dec 8, 2020

In the past decade evidence has accumulated in favor of the hypothesis that multiple sub-networks in the medial entorhinal cortex (MEC) are characterized by low-dimensional, continuous attractor dynamics. Much has been learned about the joint activity of grid cells within a module (a module consists of grid cells that share a common grid spacing), but little is known about the interactions between them. Under typical conditions of spatial exploration in which sensory cues are abundant, all grid-cells in the MEC represent the animal’s position in space and their joint activity lies on a two-dimensional manifold. However, if the grid cells in a single module mechanistically constitute independent attractor networks, then under conditions in which salient sensory cues are absent, errors could accumulate in the different modules in an uncoordinated manner. Such uncoordinated errors would give rise to catastrophic readout errors when attempting to decode position from the joint grid-cell activity. I will discuss recent theoretical works from our group, in which we explored different mechanisms that could impose coordination in the different modules. One of these mechanisms involves coordination with the hippocampus and must be set up such that it operates across multiple spatial maps that represent different environments. The other mechanism is internal to the entorhinal cortex and independent of the hippocampus.
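
The catastrophic-readout argument can be illustrated with two toy grid modules: if their phases drift together, the decoded position shifts only slightly, but if they drift independently, the decoded position can jump by many grid periods. The periods, drift sizes, and brute-force decoder below are illustrative choices:

```python
import numpy as np

periods = np.array([7.0, 8.0])          # toy grid-module spacings

def decode(phases):
    """Read out position as the candidate whose module phases best match
    the observed ones (brute force over one joint period, 0..56)."""
    xs = np.linspace(0.0, periods.prod(), 5601)   # 0.01 resolution
    d = np.abs(xs[:, None] % periods - phases)
    d = np.minimum(d, periods - d)      # circular distance in each module
    return xs[np.argmin(np.sum(d**2, axis=1))]

x_true = 20.0
phases = x_true % periods               # (6.0, 4.0)

coordinated = decode(phases + 0.3)            # both modules drift together
uncoordinated = decode(phases + [0.3, -0.3])  # modules drift independently
# coordinated decodes near 20.3 (small error);
# uncoordinated jumps to a distant, spuriously consistent position.
```

The combinatorially large joint capacity of the modules is exactly what makes uncoordinated errors catastrophic, motivating a coordination mechanism.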

SeminarNeuroscienceRecording

A function approximation perspective on neural representations

Cengiz Pehlevan
Harvard University
Dec 1, 2020

Activity patterns of neural populations in natural and artificial neural networks constitute representations of data. The nature of these representations and how they are learned are key questions in neuroscience and deep learning. In this talk, I will describe my group's efforts in building a theory of representations as feature maps leading to sample-efficient function approximation. Kernel methods are at the heart of these developments. I will present applications to deep learning and neuronal data.
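
Kernel methods frame representations as feature maps whose inner products define a kernel; kernel ridge regression then performs sample-efficient function approximation under the kernel's inductive bias. A minimal RBF sketch on a toy 1-D target (the width and ridge values are arbitrary):

```python
import numpy as np

def krr_fit_predict(x_train, y_train, x_test, width=0.5, ridge=1e-6):
    """Kernel ridge regression with an RBF kernel: the kernel (feature-map
    inner products) sets the inductive bias; ridge controls smoothing."""
    def k(a, b):
        return np.exp(-(a[:, None] - b[None, :])**2 / (2.0 * width**2))
    K = k(x_train, x_train)
    alpha = np.linalg.solve(K + ridge * np.eye(len(x_train)), y_train)
    return k(x_test, x_train) @ alpha

x_train = np.linspace(0.0, 2.0 * np.pi, 40)
x_test = np.linspace(0.5, 5.5, 20)
pred = krr_fit_predict(x_train, np.sin(x_train), x_test)
err = np.max(np.abs(pred - np.sin(x_test)))   # small: sin matches the smoothness prior
```

Sample efficiency here hinges on the match between the kernel's spectrum and the target function, the kind of alignment such a theory aims to characterize.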

SeminarNeuroscienceRecording

The Gist of False Memory

Shaul Hochstein
Hebrew University
Nov 23, 2020

It has long been known that when viewing a set of images, we misjudge individual elements as being closer to the mean than they are (Hollingworth, 1910) and recall seeing the (absent) set mean (Deese, 1959; Roediger & McDermott, 1995). Recent studies found that viewing sets of images, simultaneously or sequentially, leads to perception of set statistics (mean, range) with poor memory for individual elements. Ensemble perception was found for sets of simple images (e.g. circles varying in size or brightness; lines of varying orientation), complex objects (e.g. faces of varying emotion), as well as for objects belonging to the same category. When the viewed set does not include its mean or prototype, observers nevertheless report and act as if they have seen this central image or object – a form of false memory. Physiologically, detailed sensory information at cortical input levels is processed hierarchically to form an integrated scene gist at higher levels. However, we are aware of the gist before the details. We propose that images and objects belonging to a set or category are represented as their gist, mean or prototype, plus individual differences from that gist. Under constrained viewing conditions, only the gist is perceived and remembered. This theory also provides a basis for compressed neural representation. Extending this theory to scenes and episodes supplies a generalized basis for false memories. They seem right, match generalized expectations, so are believable without challenging examination. This theory could be tested by analyzing the typicality of false memories, compared to rejected alternatives.

SeminarNeuroscienceRecording

The geometry of abstraction in hippocampus and pre-frontal cortex

Stefano Fusi
Columbia University
Oct 15, 2020

The curse of dimensionality plagues models of reinforcement learning and decision-making. The process of abstraction solves this by constructing abstract variables describing features shared by different specific instances, reducing dimensionality and enabling generalization in novel situations. Here we characterized neural representations in monkeys performing a task where a hidden variable described the temporal statistics of stimulus-response-outcome mappings. Abstraction was defined operationally using the generalization performance of neural decoders across task conditions not used for training. This type of generalization requires a particular geometric format of neural representations. Neural ensembles in dorsolateral pre-frontal cortex, anterior cingulate cortex and hippocampus, and in simulated neural networks, simultaneously represented multiple hidden and explicit variables in a format reflecting abstraction. Task events engaging cognitive operations modulated this format. These findings elucidate how the brain and artificial systems represent abstract variables, variables critical for generalization that in turn confers cognitive flexibility.
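
The operational definition of abstraction used here, generalization of a decoder across held-out task conditions (cross-condition generalization performance), can be sketched on synthetic data. The population geometry, trial counts, and least-squares decoder below are illustrative assumptions, not the study's analysis:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 40
v_dir = rng.standard_normal(n)              # coding axis for "value"
c_dir = rng.standard_normal(n)
c_dir -= (c_dir @ v_dir) / (v_dir @ v_dir) * v_dir  # orthogonal "context" axis

def trials(value, context, n_trials=100):
    """Abstract-format responses: each variable has its own coding axis."""
    mu = 3.0 * value * v_dir + 3.0 * context * c_dir
    return mu + rng.standard_normal((n_trials, n))

def fit(pos, neg):
    """Least-squares linear decoder with a bias term (+1 vs -1 labels)."""
    X = np.vstack([pos, neg])
    X1 = np.hstack([X, np.ones((len(X), 1))])
    y = np.r_[np.ones(len(pos)), -np.ones(len(neg))]
    return np.linalg.lstsq(X1, y, rcond=None)[0]

def score(w, pos, neg):
    X = np.vstack([pos, neg])
    pred = np.sign(np.hstack([X, np.ones((len(X), 1))]) @ w)
    y = np.r_[np.ones(len(pos)), -np.ones(len(neg))]
    return np.mean(pred == y)

# CCGP: train the value decoder in context 0, test it in held-out context 1.
w = fit(trials(1, 0), trials(0, 0))
ccgp = score(w, trials(1, 1), trials(0, 1))
```

With an arbitrary (non-factorized) geometry, within-condition decoding can stay perfect while this cross-condition score falls to chance, which is what makes CCGP a signature of abstraction rather than mere decodability.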

SeminarNeuroscienceRecording

Geometry of Neural Computation Unifies Working Memory and Planning

John D. Murray
Yale University School of Medicine
Jun 17, 2020

Cognitive tasks typically require the integration of working memory, contextual processing, and planning to be carried out in close coordination. However, these computations are typically studied within neuroscience as independent modular processes in the brain. In this talk I will present an alternative view, that neural representations of mappings between expected stimuli and contingent goal actions can unify working memory and planning computations. We term these stored maps contingency representations. We developed a "conditional delayed logic" task capable of disambiguating the types of representations used during performance of delay tasks. Human behaviour in this task is consistent with the contingency representation, and not with traditional sensory models of working memory. In task-optimized artificial recurrent neural network models, we investigated the representational geometry and dynamical circuit mechanisms supporting contingency-based computation, and show how contingency representation explains salient observations of neuronal tuning properties in prefrontal cortex. Finally, our theory generates novel and falsifiable predictions for single-unit and population neural recordings.

SeminarNeuroscienceRecording

The geometry of abstraction in artificial and biological neural networks

Stefano Fusi
Columbia University
Jun 10, 2020

The curse of dimensionality plagues models of reinforcement learning and decision-making. The process of abstraction solves this by constructing abstract variables describing features shared by different specific instances, reducing dimensionality and enabling generalization in novel situations. We characterized neural representations in monkeys performing a task where a hidden variable described the temporal statistics of stimulus-response-outcome mappings. Abstraction was defined operationally using the generalization performance of neural decoders across task conditions not used for training. This type of generalization requires a particular geometric format of neural representations. Neural ensembles in dorsolateral pre-frontal cortex, anterior cingulate cortex and hippocampus, and in simulated neural networks, simultaneously represented multiple hidden and explicit variables in a format reflecting abstraction. Task events engaging cognitive operations modulated this format. These findings elucidate how the brain and artificial systems represent abstract variables, variables critical for generalization that in turn confers cognitive flexibility.

ePoster

Human-like Behavior and Neural Representations Emerge in a Goal-driven Model of Overt Visual Search for Natural Objects

Motahareh Pourrahimi, Irina Rish, Pouya Bashivan

Bernstein Conference 2024

ePoster

A new approach to inferring the eigenspectra of high-dimensional neural representations

COSYNE 2022

ePoster

Interpretable behavioral features have conserved neural representations across mice

COSYNE 2022

ePoster

Learning-to-learn emerges from learning to efficiently reuse neural representations

COSYNE 2022

ePoster

Neural Representation of Hand Gestures in Human Premotor Cortex

COSYNE 2022

ePoster

Neural Representations of Opponent Strategy Support the Adaptive Behavior of Recurrent Actor-Critics in a Competitive Game

COSYNE 2022

ePoster

Optimization of error distributions as a design principle for neural representations

COSYNE 2022

ePoster

Perceptual and neural representations of naturalistic texture information in developing monkeys

COSYNE 2022

ePoster

Compact neural representations in co-adaptive Brain-Computer Interfaces

Pavithra Rajeswaran, Alexandre Payeur, Guillaume Lajoie, Amy L. Orsborn

COSYNE 2023

ePoster

Learning predictive neural representations by straightening natural videos

Xueyan Niu, Cristina Savin, Eero Simoncelli

COSYNE 2023

ePoster

The neural representation of perceptual uncertainty in mouse visual cortex

Theoklitos Amvrosiadis, Ádám Koblinger, David Liu, Nathalie Rochefort, Máté Lengyel

COSYNE 2023

ePoster

Neural representation and predictive processing of dynamic visual signals

Pierre-Étienne Fiquet & Eero Simoncelli

COSYNE 2023

ePoster

An optofluidic platform for interrogating chemosensory behavior and brainwide neural representation

Kwun Hei Samuel Sy, Yu Hu, Danny C. W. Chan, Roy C. H. Chan, Jing Lyu, Zhongqi Li, Kenneth K. Y. Wong, Chung Hang Jonathan Choi, Vincent C. T. Mok, Hei-Ming Lai, Owen Randlett, Ho Ko

COSYNE 2023

ePoster

Stable geometry is inevitable in drifting neural representations

Evan Schaffer

COSYNE 2023

ePoster

Contribution of task-irrelevant stimuli to drift of neural representations

Farhad Pashakhanloo

COSYNE 2025

ePoster

Examining the impact of biomechanical actuation on neural representations for embodied control

Eric Leonardis, Ava Barbano, Yuanjia Yang, Akira Nagamori, Jason Foat, Jesse Gilmer, Mazen Al Borno, Eiman Azim, Talmo Pereira

COSYNE 2025

ePoster

Human-like behavior and neural representations emerge in a neural network trained to overtly search for objects in natural scenes from pixels

Motahareh Pourrahimi, Irina Rish, Pouya Bashivan

COSYNE 2025

ePoster

Neural representation of color in the pigeon visual Wulst

Ann Kotkat, Simon Nimpf, Andreas Genewsky, David A. Keays, Laura Busse

COSYNE 2025

ePoster

The Neural Representation of Mood in the Primate Insula

Nicole Rust, You-Ping Yang, Veit Stuphorn

COSYNE 2025

ePoster

Neural representations of real world places and societies

Saikat Ray, Shaked Palgi, Liora Las, Nachum Ulanovsky

COSYNE 2025

ePoster

The neural representations of spatial subgoals

Jasmine Reggiani, Laurence Freeman, Dario Campagner, Tiago Branco

COSYNE 2025

ePoster

Probing the dynamics of neural representations that support generalization under continual learning

Daniel Kimmel, Kimberly Stachenfeld, Nikolaus Kriegeskorte, Stefano Fusi, C Daniel Salzman, Daphna Shohamy

COSYNE 2025

ePoster

SIMPL: Scalable and hassle-free optimisation of neural representations from behaviour

Tom George, Pierre Glaser, Kimberly Stachenfeld, Caswell Barry, Claudia Clopath

COSYNE 2025

ePoster

Brain-distributed neural representation of timing behaviour

Melissa Serrano, Manfredi Castelli, Andrew Sharott, David Dupret

FENS Forum 2024

ePoster

Distributed memory engrams underlie flexible and versatile neural representations

Douglas Feitosa Tomé, Tim Vogels

FENS Forum 2024

ePoster

Effector-specific neural representations of perceptual choices across different sensory domains

Marlon Esmeyer, Timo Torsten Schmidt, Felix Blankenburg

FENS Forum 2024

ePoster

Neural representation of color in the pigeon brain

Simon Nimpf, Harris S. Kaplan, Laura Busse, David A. Keays

FENS Forum 2024

ePoster

Neural representation of conspecifics in the pigeon's visual stream

Sara Silva, William Clark, Jonas Rose, Michael Colombo

FENS Forum 2024

ePoster

Neural representation of food presence – and absence – in larval zebrafish

Lílian de Sardenberg Schmid, Drew N. Robson, Jennifer M. Li

FENS Forum 2024

ePoster

Neural representation of social conspecifics during freely moving behavior

Cristina Mazuski, Lennart Oettl, Chenyue Ren, John O'Keefe

FENS Forum 2024

ePoster

Neural representations underlying self and conspecific action-outcomes during joint decision-making in mice

Anas Masood, Ozge Sayin, Sami El-Boustani

FENS Forum 2024