Neural Code

Discover seminars, jobs, and research tagged with neural code across World Wide.
40 curated items · 30 Seminars · 9 ePosters · 1 Position
Updated 1 day ago
40 results
Position

Mario Dipoppa

University of California, Los Angeles
UCLA campus
Dec 5, 2025

The selected candidates will be working on questions addressing how brain computations emerge from the dynamics of the underlying neural circuits and how the neural code is shaped by computational needs and biological constraints of the brain. To tackle these questions, we employ a multidisciplinary approach that combines state-of-the-art modeling techniques and theoretical frameworks, which include but are not limited to data-driven circuit models, biologically realistic deep learning models, abstract neural network models, machine learning methods, and analysis of the neural code.

Seminar · Neuroscience

Neural codes for natural behaviors in the hippocampus of flying bat

Nachum Ulanovsky
Weizmann Institute of Science, Israel
Apr 14, 2024
Seminar · Neuroscience · Recording

Convex neural codes in recurrent networks and sensory systems

Vladimir Itskov
The Pennsylvania State University
Dec 13, 2022

Neural activity in many sensory systems is organized on low-dimensional manifolds by means of convex receptive fields. Neural codes in these areas are constrained by this organization, as not every neural code is compatible with convex receptive fields. The same codes are also constrained by the structure of the underlying neural network. In my talk I will attempt to provide answers to the following natural questions: (i) How do recurrent circuits generate codes that are compatible with the convexity of receptive fields? (ii) How can we utilize the constraints imposed by convex receptive fields to understand the underlying stimulus space? To answer question (i), we describe the combinatorics of the steady states and fixed points of recurrent networks that satisfy Dale’s law. It turns out that the combinatorics of the fixed points are completely determined by two distinct conditions: (a) the connectivity graph of the network and (b) a spectral condition on the synaptic matrix. We give a characterization of exactly which features of connectivity determine the combinatorics of the fixed points. We also find that a generic recurrent network that satisfies Dale's law outputs convex combinatorial codes. To address question (ii), I will describe methods based on ideas from topology and geometry that take advantage of convex receptive field properties to infer the dimension of (non-linear) neural representations. I will illustrate the first of these methods by inferring basic features of the neural representations in the mouse olfactory bulb.
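
The fixed-point construction described in the abstract can be sketched in a few lines. The toy network below (hypothetical sizes and weights, not the speaker's actual model) iterates threshold-linear dynamics under Dale's law and reads the combinatorial codeword off the support of the resulting fixed point:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6  # hypothetical network: 4 excitatory, 2 inhibitory neurons

# Dale's law: each neuron's outgoing synapses share one sign, so each
# *column* of W carries the sign of its presynaptic neuron.
signs = np.array([1, 1, 1, 1, -1, -1])
W = rng.uniform(0.02, 0.15, size=(n, n)) * signs
np.fill_diagonal(W, 0.0)           # no self-coupling
b = rng.uniform(0.5, 1.0, size=n)  # external drive

# Threshold-linear dynamics x <- [W x + b]_+ ; with these small weights
# the map is a contraction, so iteration converges to the fixed point.
x = np.zeros(n)
for _ in range(2000):
    x = np.maximum(W @ x + b, 0.0)

# The combinatorial codeword is the support of the fixed point:
# which neurons are active at steady state.
codeword = tuple(int(xi > 1e-9) for xi in x)
print("fixed-point codeword:", codeword)
```

Varying the drive `b` or the connectivity graph changes which supports are reachable, which is the sense in which connectivity constrains the code.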

Seminar · Neuroscience

Intrinsic Geometry of a Combinatorial Sensory Neural Code for Birdsong

Tim Gentner
University of California, San Diego, USA
Nov 8, 2022

Understanding the nature of neural representation is a central challenge of neuroscience. One common approach to this challenge is to compute receptive fields by correlating neural activity with external variables drawn from sensory signals. But these receptive fields are only meaningful to the experimenter, not the organism, because only the experimenter has access to both the neural activity and knowledge of the external variables. To understand neural representation more directly, recent methodological advances have sought to capture the intrinsic geometry of sensory driven neural responses without external reference. To date, this approach has largely been restricted to low-dimensional stimuli as in spatial navigation. In this talk, I will discuss recent work from my lab examining the intrinsic geometry of sensory representations in a model vocal communication system, songbirds. From the assumption that sensory systems capture invariant relationships among stimulus features, we conceptualized the space of natural birdsongs to lie on the surface of an n-dimensional hypersphere. We computed composite receptive field models for large populations of simultaneously recorded single neurons in the auditory forebrain and show that solutions to these models define convex regions of response probability in the spherical stimulus space. We then define a combinatorial code over the set of receptive fields, realized in the moment-to-moment spiking and non-spiking patterns across the population, and show that this code can be used to reconstruct high-fidelity spectrographic representations of natural songs from evoked neural responses. Notably, we find that topological relationships among combinatorial codewords directly mirror acoustic relationships among songs in the spherical stimulus space. That is, the time-varying pattern of co-activity across the neural population expresses an intrinsic representational geometry that mirrors the natural, extrinsic stimulus space.  
Combinatorial patterns across this intrinsic space directly represent complex vocal communication signals, do not require computation of receptive fields, and are in a form, spike time coincidences, amenable to biophysical mechanisms of neural information propagation.
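
A combinatorial code of the kind described above can be built directly from a population raster, with no receptive fields computed. The sketch below uses a hypothetical Bernoulli raster as a stand-in for real recordings:

```python
from collections import Counter

import numpy as np

rng = np.random.default_rng(1)

# Toy raster (hypothetical): 8 neurons x 200 time bins of Bernoulli
# spiking, standing in for a real population recording.
n_neurons, n_bins = 8, 200
raster = (rng.random((n_neurons, n_bins)) < 0.15).astype(int)

# Each time bin's codeword is the binary spike/non-spike pattern across
# the population; the combinatorial code is the set of emitted patterns.
codewords = [tuple(raster[:, t]) for t in range(n_bins)]
code = Counter(codewords)
print(len(code), "distinct codewords across", n_bins, "bins")
```

Topological relationships among these codewords (e.g., overlap of their supports) are what the talk compares against acoustic relationships among songs.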

Seminar · Neuroscience

Flexible multitask computation in recurrent networks utilizes shared dynamical motifs

Laura Driscoll
Stanford University
Aug 24, 2022

Flexible computation is a hallmark of intelligent behavior. Yet, little is known about how neural networks contextually reconfigure for different computations. Humans are able to perform a new task without extensive training, presumably through the composition of elementary processes that were previously learned. Cognitive scientists have long hypothesized the possibility of a compositional neural code, where complex neural computations are made up of constituent components; however, the neural substrate underlying this structure remains elusive in biological and artificial neural networks. Here we identified an algorithmic neural substrate for compositional computation through the study of multitasking artificial recurrent neural networks. Dynamical systems analyses of networks revealed learned computational strategies that mirrored the modular subtask structure of the task-set used for training. Dynamical motifs such as attractors, decision boundaries and rotations were reused across different task computations. For example, tasks that required memory of a continuous circular variable repurposed the same ring attractor. We show that dynamical motifs are implemented by clusters of units and are reused across different contexts, allowing for flexibility and generalization of previously learned computation. Lesioning these clusters resulted in modular effects on network performance: a lesion that destroyed one dynamical motif only minimally perturbed the structure of other dynamical motifs. Finally, modular dynamical motifs could be reconfigured for fast transfer learning. After slow initial learning of dynamical motifs, a subsequent faster stage of learning reconfigured motifs to perform novel tasks. This work contributes to a more fundamental understanding of compositional computation underlying flexible general intelligence in neural systems. 
We present a conceptual framework that establishes dynamical motifs as a fundamental unit of computation, intermediate between the neuron and the network. As more whole brain imaging studies record neural activity from multiple specialized systems simultaneously, the framework of dynamical motifs will guide questions about specialization and generalization across brain regions.

Seminar · Neuroscience · Recording

Efficient Random Codes in a Shallow Neural Network

Rava Azeredo da Silveira
French National Centre for Scientific Research (CNRS), Paris
Jun 14, 2022

Efficient coding has served as a guiding principle in understanding the neural code. To date, however, it has been explored mainly in the context of peripheral sensory cells with simple tuning curves. By contrast, ‘deeper’ neurons such as grid cells come with more complex tuning properties which imply a different, yet highly efficient, strategy for representing information. I will show that a highly efficient code is not specific to a population of neurons with finely tuned response properties: it emerges robustly in a shallow network with random synapses. Here, the geometry of population responses implies that optimality obtains from a tradeoff between two qualitatively different types of error: ‘local’ errors (common to classical neural population codes) and ‘global’ (or ‘catastrophic’) errors. This tradeoff leads to efficient compression of information from a high-dimensional representation to a low-dimensional one. After describing the theoretical framework, I will use it to re-interpret recordings of motor cortex in behaving monkey. Our framework addresses the encoding of (sensory) information; if time allows, I will comment on ongoing work that focuses on decoding from the perspective of efficient coding.
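
The local-versus-global error tradeoff can be illustrated with a toy shallow random encoder (all sizes and noise levels here are hypothetical, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# Shallow random encoder: N units with random weights, random biases,
# and a ReLU map a scalar stimulus s in [0, 1] to a population response.
N = 20
w = rng.normal(0, 3, N)
b = rng.normal(0, 1, N)

def encode(s):
    return np.maximum(w * s + b, 0.0)

# Decode by nearest template over a candidate grid (maximum likelihood
# under i.i.d. Gaussian response noise).
grid = np.linspace(0, 1, 501)
templates = np.stack([encode(s) for s in grid])

def decode(r):
    return grid[np.argmin(np.sum((templates - r) ** 2, axis=1))]

# Noise usually causes small "local" errors, but occasionally the noisy
# response lands near the template of a distant stimulus -- a "global",
# catastrophic error.
errors = []
for s in rng.random(300):
    r = encode(s) + rng.normal(0, 0.5, N)
    errors.append(abs(decode(r) - s))
errors = np.array(errors)
print("median error:", np.median(errors), "max error:", errors.max())
```

The histogram of `errors` typically shows a tight bulk (local errors) with rare large outliers (global errors); efficiency comes from trading the two off.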

Seminar · Neuroscience · Recording

Context-dependent motion processing in the retina

Wei Wei
University of Chicago
Jun 7, 2022

A critical function of sensory systems is to reliably extract ethologically relevant features from the complex natural environment. A classic model to study feature detection is the direction-selective circuit of the mammalian retina. In this talk, I will discuss our recent work on how visual contexts dynamically influence the neural processing of motion signals in the direction-selective circuit in the mouse retina.

Seminar · Neuroscience · Recording

The Standard Model of the Retina

Markus Meister
Caltech
May 24, 2022

The science of the retina has reached an interesting stage of completion. There exists now a consensus standard model of this neural system - at least in the minds of many researchers - that serves as a baseline against which to evaluate new claims. The standard model links phenomena from molecular biophysics, cell biology, neuroanatomy, synaptic physiology, circuit function, and visual psychophysics. It is further supported by a normative theory explaining what the purpose is of processing visual information this way. Most new reports of retinal phenomena fit squarely within the standard model, and major revisions seem increasingly unlikely. Given that our understanding of other brain circuits with comparable complexity is much more rudimentary, it is worth considering an example of what success looks like. In this talk I will summarize what I think are the ingredients that led to this mature understanding of the retina. Equally important, a number of practices and concepts that are currently en vogue in neuroscience were not needed or indeed counterproductive. I look forward to debating how these lessons might extend to other areas of brain research.

Seminar · Neuroscience

Geometry of sequence working memory in macaque prefrontal cortex

Nikita Otstavnov
HSE University
Apr 20, 2022

How the brain stores a sequence in memory remains largely unknown. We investigated the neural code underlying sequence working memory using two-photon calcium imaging to record thousands of neurons in the prefrontal cortex of macaque monkeys memorizing and then reproducing a sequence of locations after a delay. We discovered a regular geometrical organization: The high-dimensional neural state space during the delay could be decomposed into a sum of low-dimensional subspaces, each storing the spatial location at a given ordinal rank, which could be generalized to novel sequences and explain monkey behavior. The rank subspaces were distributed across large overlapping neural groups, and the integration of ordinal and spatial information occurred at the collective level rather than within single neurons. Thus, a simple representational geometry underlies sequence working memory.
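
The decomposition into rank subspaces can be illustrated with a toy additive model (hypothetical sizes; a simplified stand-in for the marginalization-style analyses used on such data):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy additive model (hypothetical): population activity for
# (rank r, location l) is a rank pattern plus a location pattern,
# each living in its own low-dimensional subspace.
n_neurons, n_ranks, n_locs = 50, 3, 4
rank_pats = rng.normal(size=(n_ranks, n_neurons))
loc_pats = rng.normal(size=(n_locs, n_neurons))
X = np.array([[rank_pats[r] + loc_pats[l] for l in range(n_locs)]
              for r in range(n_ranks)])            # (rank, loc, neuron)

# Marginalization: averaging over locations isolates the rank component
# (relative to the grand mean), and vice versa.
grand = X.mean(axis=(0, 1))
rank_comp = X.mean(axis=1) - grand                 # (rank, neuron)
loc_comp = X.mean(axis=0) - grand                  # (loc, neuron)

# In this additive model the rank and location components sum back to
# the full activity exactly -- the decomposition is lossless.
recon = rank_comp[:, None, :] + loc_comp[None, :, :] + grand
print("max reconstruction error:", np.abs(recon - X).max())
```

Real data add noise and interaction terms, but the same marginalization logic is what lets a rank subspace generalize to novel sequences.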

Seminar · Neuroscience · Recording

Retinal responses to natural inputs

Fred Rieke
University of Washington
Apr 17, 2022

The research in my lab focuses on sensory signal processing, particularly in cases where sensory systems perform at or near the limits imposed by physics. Photon counting in the visual system is a beautiful example. At its peak sensitivity, the performance of the visual system is limited largely by the division of light into discrete photons. This observation has several implications for phototransduction and signal processing in the retina: rod photoreceptors must transduce single-photon absorptions with high fidelity; single-photon signals in photoreceptors, which are only 0.03–0.1 mV, must be reliably transmitted to second-order cells in the retina; and absorption of a single photon by a single rod must produce a noticeable change in the pattern of action potentials sent from the eye to the brain. My approach is to combine quantitative physiological experiments and theory to understand photon counting in terms of basic biophysical mechanisms. Fortunately there is more to visual perception than counting photons. The visual system is very adept at operating over a wide range of light intensities (about 12 orders of magnitude). Over most of this range, vision is mediated by cone photoreceptors. Thus adaptation is paramount to cone vision. Again one would like to understand quantitatively how the biophysical mechanisms involved in phototransduction, synaptic transmission, and neural coding contribute to adaptation.

Seminar · Neuroscience

Neural Codes for Natural Behaviors in Flying Bats

Nachum Ulanovsky
Weizmann Institute
Jan 12, 2022

This talk will focus on the importance of using natural behaviors in neuroscience research – the “Natural Neuroscience” approach. I will illustrate this point by describing studies of neural codes for spatial behaviors and social behaviors, in flying bats – using wireless neurophysiology methods that we developed – and will highlight new neuronal representations that we discovered in animals navigating through 3D spaces, or in very large-scale environments, or engaged in social interactions. In particular, I will discuss: (1) A multi-scale neural code for very large environments, which we discovered in bats flying in a 200-meter long tunnel. This new type of neural code is fundamentally different from spatial codes reported in small environments – and we show theoretically that it is superior for representing very large spaces. (2) Rapid modulation of position × distance coding in the hippocampus during collision-avoidance behavior between two flying bats. This result provides a dramatic illustration of the extreme dynamism of the neural code. (3) Local-but-not-global order in 3D grid cells – a surprising experimental finding, which can be explained by a simple physics-inspired model, which successfully describes both 3D and 2D grids. These results strongly argue against many of the classical, geometrically-based models of grid cells. (4) I will also briefly describe new results on the social representation of other individuals in the hippocampus, in a highly social multi-animal setting. The lecture will propose that neuroscience experiments – in bats, rodents, monkeys or humans – should be conducted under evermore naturalistic conditions.

Seminar · Neuroscience · Recording

The organization of neural representations for control

David Badre
Brown University
Dec 9, 2021

Cognitive control allows us to think and behave flexibly based on our context and goals. Most theories of cognitive control propose a control representation that enables the same input to produce different outputs contingent on contextual factors. In this talk, I will focus on an important property of the control representation's neural code: its representational dimensionality. Dimensionality of a neural representation balances a basic separability/generalizability trade-off in neural computation. This tradeoff has important implications for cognitive control. In this talk, I will present initial evidence from fMRI and EEG showing that task representations in the human brain leverage both ends of this tradeoff during flexible behavior.

Seminar · Neuroscience · Recording

Design principles of adaptable neural codes

Ann Hermundstad
Janelia
Nov 18, 2021

Behavior relies on the ability of sensory systems to infer changing properties of the environment from incoming sensory stimuli. However, the demands that detecting and adjusting to changes in the environment place on a sensory system often differ from the demands associated with performing a specific behavioral task. This necessitates neural coding strategies that can dynamically balance these conflicting needs. I will discuss our ongoing theoretical work to understand how this balance can best be achieved. We connect ideas from efficient coding and Bayesian inference to ask how sensory systems should dynamically allocate limited resources when the goal is to optimally infer changing latent states of the environment, rather than reconstruct incoming stimuli. We use these ideas to explore dynamic tradeoffs between the efficiency and speed of sensory adaptation schemes, and the downstream computations that these schemes might support. Finally, we derive families of codes that balance these competing objectives, and we demonstrate their close match to experimentally-observed neural dynamics during sensory adaptation. These results provide a unifying perspective on adaptive neural dynamics across a range of sensory systems, environments, and sensory tasks.

Seminar · Neuroscience

When and (maybe) why do high-dimensional neural networks produce low-dimensional dynamics?

Eric Shea-Brown
Department of Applied Mathematics, University of Washington
Nov 17, 2021

There is an avalanche of new data on activity in neural networks and the biological brain, revealing the collective dynamics of vast numbers of neurons. In principle, these collective dynamics can be of almost arbitrarily high dimension, with many independent degrees of freedom — and this may reflect powerful capacities for general computing or information. In practice, neural datasets reveal a range of outcomes, including collective dynamics of much lower dimension — and this may reflect other desiderata for neural codes. For what networks does each case occur? We begin by exploring bottom-up mechanistic ideas that link tractable statistical properties of network connectivity with the dimension of the activity that they produce. We then cover “top-down” ideas that describe how features of connectivity and dynamics that impact dimension arise as networks learn to perform fundamental computational tasks.

Seminar · Neuroscience · Recording

Encoding and perceiving the texture of sounds: auditory midbrain codes for recognizing and categorizing auditory texture and for listening in noise

Monty Escabi
University of Connecticut
Sep 30, 2021

Natural soundscapes such as from a forest, a busy restaurant, or a busy intersection are generally composed of a cacophony of sounds that the brain needs to interpret either independently or collectively. In certain instances sounds - such as from moving cars, sirens, and people talking - are perceived in unison and are recognized collectively as a single sound (e.g., city noise). In other instances, such as for the cocktail party problem, multiple sounds compete for attention so that the surrounding background noise (e.g., speech babble) interferes with the perception of a single sound source (e.g., a single talker). I will describe results from my lab on the perception and neural representation of auditory textures. Textures, such as from a babbling brook, restaurant noise, or speech babble, are stationary sounds consisting of multiple independent sound sources that can be quantitatively defined by summary statistics of an auditory model (McDermott & Simoncelli 2011). How and where in the auditory system summary statistics are represented, and which neural codes potentially contribute to their perception, however, remain largely unknown. Using high-density multi-channel recordings from the auditory midbrain of unanesthetized rabbits and complementary perceptual studies on human listeners, I will first describe neural and perceptual strategies for encoding and perceiving auditory textures. I will demonstrate how distinct statistics of sounds, including the sound spectrum and high-order statistics related to the temporal and spectral correlation structure of sounds, contribute to texture perception and are reflected in neural activity. Using decoding methods I will then demonstrate how various low and high-order neural response statistics can differentially contribute towards a variety of auditory tasks including texture recognition, discrimination, and categorization.
Finally, I will show examples from our recent studies on how high-order sound statistics and accompanying neural activity underlie difficulties for recognizing speech in background noise.

Seminar · Neuroscience

Behavioral and neurobiological mechanisms of social cooperation

Yina Ma
Beijing Normal University
Jun 29, 2021

Human society operates on large-scale cooperation and shared norms of fairness. However, individual differences in cooperation and incentives to free-riding on others’ cooperation make large-scale cooperation fragile and can lead to reduced social-welfare. Deciphering the neural codes representing potential rewards/costs for self and others is crucial for understanding social decision-making and cooperation. I will first talk about how we integrate computational modeling with functional magnetic resonance imaging to investigate the neural representation of social value and the modulation by oxytocin, a nine-amino acid neuropeptide, in participants evaluating monetary allocations to self and other (self-other allocations). Then I will introduce our recent studies examining the neurobiological mechanisms underlying intergroup decision-making using hyper-scanning, and share with you how we alter intergroup decisions using psychological manipulations and pharmacological challenge. Finally, I will share with you our on-going project that reveals how individual cooperation spreads through human social networks. Our results help to better understand the neurocomputational mechanism underlying interpersonal and intergroup decision-making.

Seminar · Neuroscience · Recording

Neural codes in early sensory areas maximize fitness

Todd Hare
University of Zürich
May 12, 2021

It has generally been presumed that sensory information encoded by a nervous system should be as accurate as its biological limitations allow. However, perhaps counterintuitively, accurate representations of sensory signals do not necessarily maximize the organism’s chances of survival. We show that neural codes that maximize reward expectation—and not accurate sensory representations—account for retinal responses in insects, and retinotopically-specific adaptive codes in humans. Thus, our results provide evidence that fitness-maximizing rules imposed by the environment are applied at the earliest stages of sensory processing.

Seminar · Neuroscience

A neural code for vocal production in a social fruit-bat

Julie Elie
University of California, Berkeley
May 9, 2021
Seminar · Neuroscience · Recording

Design principles of adaptable neural codes

Ann Hermundstad
Janelia Research Campus
May 4, 2021

Behavior relies on the ability of sensory systems to infer changing properties of the environment from incoming sensory stimuli. However, the demands that detecting and adjusting to changes in the environment place on a sensory system often differ from the demands associated with performing a specific behavioral task. This necessitates neural coding strategies that can dynamically balance these conflicting needs. I will discuss our ongoing theoretical work to understand how this balance can best be achieved. We connect ideas from efficient coding and Bayesian inference to ask how sensory systems should dynamically allocate limited resources when the goal is to optimally infer changing latent states of the environment, rather than reconstruct incoming stimuli. We use these ideas to explore dynamic tradeoffs between the efficiency and speed of sensory adaptation schemes, and the downstream computations that these schemes might support. Finally, we derive families of codes that balance these competing objectives, and we demonstrate their close match to experimentally-observed neural dynamics during sensory adaptation. These results provide a unifying perspective on adaptive neural dynamics across a range of sensory systems, environments, and sensory tasks.

Seminar · Neuroscience

Locally-ordered representation of 3D space in the entorhinal cortex

Gily Ginosar
Ulanovsky lab, Weizmann Institute, Rehovot, Israel
Apr 28, 2021

When animals navigate on a two-dimensional (2D) surface, many neurons in the medial entorhinal cortex (MEC) are activated as the animal passes through multiple locations (‘firing fields’) arranged in a hexagonal lattice that tiles the locomotion-surface; these neurons are known as grid cells. However, although our world is three-dimensional (3D), the 3D volumetric representation in MEC remains unknown. Here we recorded MEC cells in freely-flying bats and found several classes of spatial neurons, including 3D border cells, 3D head-direction cells, and neurons with multiple 3D firing-fields. Many of these multifield neurons were 3D grid cells, whose neighboring fields were separated by a characteristic distance – forming a local order – but these cells lacked any global lattice arrangement of their fields. Thus, while 2D grid cells form a global lattice – characterized by both local and global order – 3D grid cells exhibited only local order, thus creating a locally ordered metric for space. We modeled grid cells as emerging from pairwise interactions between fields, which yielded a hexagonal lattice in 2D and local order in 3D – thus describing both 2D and 3D grid cells using one unifying model. Together, these data and model illuminate the fundamental differences and similarities between neural codes for 3D and 2D space in the mammalian brain.
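
The pairwise-interaction idea can be sketched numerically (all parameters below are hypothetical, not the paper's fitted model): fields relax under a pair potential with a preferred spacing d, producing local order without any global lattice being imposed.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy pairwise-interaction model (hypothetical parameters): each pair of
# firing fields feels a spring-like force toward a preferred spacing d,
# truncated so only nearby fields interact.
n_fields, d = 12, 1.0
pos = rng.uniform(0, 4, size=(n_fields, 2))  # field centers in a 2D box

def pair_force(r_vec):
    r = np.linalg.norm(r_vec)
    if r < 1e-9 or r > 2.5 * d:
        return np.zeros(2)
    return (r - d) * r_vec / r   # attract if r > d, repel if r < d

for _ in range(1500):            # gradient-descent relaxation
    forces = np.zeros_like(pos)
    for i in range(n_fields):
        for j in range(n_fields):
            if i != j:
                forces[i] += pair_force(pos[j] - pos[i])
    pos += 0.01 * forces

# Local order: nearest-neighbor distances concentrate near d, even
# though no lattice was built in.
D = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
np.fill_diagonal(D, np.inf)
nn = D.min(axis=1)
print("nearest-neighbor distances:", np.round(nn, 2))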

Seminar · Neuroscience · Recording

The Dark Side of Vision: Resolving the Neural Code

Petri Ala-Laurila
Aalto University
Apr 5, 2021

All sensory information – like what we see, hear and smell – gets encoded in spike trains by sensory neurons and gets sent to the brain. Due to the complexity of neural circuits and the difficulty of quantifying complex animal behavior, it has been exceedingly hard to resolve how the brain decodes these spike trains to drive behavior. We now measure quantal signals originating from sparse photons through the most sensitive neural circuits of the mammalian retina and correlate the retinal output spike trains with precisely quantified behavioral decisions. We utilize a combination of electrophysiological measurements on the most sensitive ON and OFF retinal ganglion cell types and a novel deep-learning based tracking technology of the head and body positions of freely-moving mice. We show that visually-guided behavior relies on information from the retinal ON pathway for the dimmest light increments and on information from the retinal OFF pathway for the dimmest light decrements (“quantal shadows”). Our results show that the distribution of labor between ON and OFF pathways starts already at starlight, supporting distinct pathway-specific visual computations to drive visually-guided behavior. These results have several fundamental consequences for understanding how the brain integrates information across parallel information streams as well as for understanding the limits of sensory signal processing. In my talk, I will discuss some of the most important consequences, including the extension of this “Quantum Behavior” paradigm from mouse vision to monkey and human visual systems.

Seminar · Neuroscience · Recording

Inhibitory neural circuit mechanisms underlying neural coding of sensory information in the neocortex

Jeehyun Kwag
Korea University
Jan 28, 2021

Neural codes, such as temporal codes (precisely timed spikes) and rate codes (instantaneous spike firing rates), are believed to be used in encoding sensory information into spike trains of cortical neurons. Temporal and rate codes co-exist in the spike train, and such multiplexed neural code-carrying spike trains have been shown to be spatially synchronized in multiple neurons across different cortical layers during sensory information processing. Inhibition is suggested to promote such synchronization, but it is unclear whether distinct subtypes of interneurons make different contributions to the synchronization of multiplexed neural codes. To test this, in vivo single-unit recordings from barrel cortex were combined with optogenetic manipulations to determine the contributions of parvalbumin (PV)- and somatostatin (SST)-positive interneurons to synchronization of precisely timed spike sequences. We found that PV interneurons preferentially promote the synchronization of spike times when instantaneous firing rates are low (<12 Hz), whereas SST interneurons preferentially promote the synchronization of spike times when instantaneous firing rates are high (>12 Hz). Furthermore, using a computational model, we demonstrate that these effects can be explained by PV and SST interneurons contributing preferentially to feedforward and feedback inhibition, respectively. Overall, these results show that PV and SST interneurons have distinct frequency (rate code)-selective roles in dynamically gating the synchronization of spike times (temporal code) through preferentially recruiting feedforward and feedback inhibitory circuit motifs. The inhibitory neural circuit mechanisms we uncovered here may have critical roles in regulating neural code-based somatosensory information processing in the neocortex.
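
The rate/temporal split described above hinges on the instantaneous firing rate at each spike. A minimal sketch of that bookkeeping, using a hypothetical spike train and the 12 Hz boundary from the study:

```python
import numpy as np

# Toy spike train in seconds (hypothetical; stands in for an in vivo
# single-unit recording from barrel cortex).
spikes = np.array([0.10, 0.30, 0.50, 1.00, 1.02, 1.04, 1.06, 1.50, 2.00])

# Instantaneous rate at each spike = 1 / preceding inter-spike interval.
isi = np.diff(spikes)
inst_rate = 1.0 / isi            # Hz; one value per spike after the first

# Split spikes at the 12 Hz boundary: low-rate epochs (where PV
# interneurons promoted synchrony) vs high-rate epochs (where SST
# interneurons did).
low = spikes[1:][inst_rate < 12.0]
high = spikes[1:][inst_rate >= 12.0]
print("low-rate spikes:", low)
print("high-rate spikes:", high)
```

Here the burst at 1.00–1.06 s lands in the high-rate group and the isolated spikes in the low-rate group, which is the partition on which the rate-selective synchrony effects are measured.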

Seminar · Neuroscience · Recording

On the purpose and origin of spontaneous neural activity

Tim Vogels
IST Austria
Sep 3, 2020

Spontaneous firing, observed in many neurons, is often attributed to ion channel or network level noise. Cortical cells during slow wave sleep exhibit transitions between so called Up and Down states. In this sleep state, with limited sensory stimuli, neurons fire in the Up state. Spontaneous firing is also observed in slices of cholinergic interneurons, cerebellar Purkinje cells and even brainstem inspiratory neurons. In such in vitro preparations, where the functional relevance is long lost, neurons continue to display a rich repertoire of firing properties. It is perplexing that these neurons, instead of saving their energy during information downtime and functional irrelevance, are eager to fire. We propose that spontaneous firing is not a chance event but instead a vital activity for the well-being of a neuron. We postulate that neurons, in anticipation of synaptic inputs, keep their ATP levels at maximum. As recovery from inputs requires most of the energy resources, neurons carry an ATP surplus and are ADP-scarce during synaptic quiescence. With ADP as the rate-limiting step, ATP production stalls in the mitochondria when ADP is low. This leads to toxic Reactive Oxygen Species (ROS) formation, which are known to disrupt many cellular processes. We hypothesize that spontaneous firing occurs under these conditions - as a release valve to spend energy and to restore ATP production, shielding the neuron against ROS. By linking a mitochondrial metabolism model to a conductance-based neuron model, we show that spontaneous firing depends on baseline ATP usage and on ATP-cost-per-spike. From our model emerges a mitochondria-mediated homeostatic mechanism that provides a recipe for different firing patterns. Our findings, though mostly affecting intracellular dynamics, may have large knock-on effects on the nature of neural coding.
Hitherto it has been thought that the neural code is optimised for energy minimisation, but this may be true only when neurons do not experience synaptic quiescence.

Seminar · Neuroscience

Rational thoughts in neural codes

Xaq Pitkow
Baylor College of Medicine & Rice University
May 7, 2020

First, we describe a new method for inferring the mental model of an animal performing a natural task. We use probabilistic methods to compute the most likely mental model based on an animal’s sensory observations and actions. This also reveals dynamic beliefs that would be optimal according to the animal’s internal model, and thus provides a practical notion of “rational thoughts.” Second, we construct a neural coding framework by which these rational thoughts, their computational dynamics, and actions can be identified within the manifold of neural activity. We illustrate the value of this approach by training an artificial neural network to perform a generalization of a widely used foraging task. We analyze the network’s behaviour to find rational thoughts, and successfully recover the neural properties that implemented those thoughts, providing a way of interpreting the complex neural dynamics of the artificial brain. Joint work with Zhengwei Wu, Minhae Kwon, Saurabh Daptardar, and Paul Schrater.

ePoster

Neural code for episodic memories in a food-caching bird

Dmitriy Aronov

Bernstein Conference 2024

ePoster

Unstructured representations in a structured brain: a cross-region analysis of the neural code

Shuqi Wang, Lorenzo Posani, Liam Paninski, Stefano Fusi

Bernstein Conference 2024

ePoster

The neural code controls the geometry of probabilistic inference in early olfactory processing

COSYNE 2022

ePoster

Inferring neural codes from natural behavior in fruit fly social communication

Rich Pang, Albert Lin, Christa Baker, William Bialek, Mala Murthy, Jonathan W. Pillow

COSYNE 2023

ePoster

A Large Dataset of Macaque V1 Responses to Natural Images Revealed Complexity in V1 Neural Codes

Shang Gao, Tianye Wang, Xie Jue, Daniel Wang, Tai Sing Lee, Shiming Tang

COSYNE 2023

ePoster

Learning accurate models of very large neural codes with shallow adaptive random projection networks

Jonathan Mayzel, Elad Schneidman

COSYNE 2025

ePoster

A shared neural code for flexible shifts in attention, motor actions, and goal setting? The role of theta and alpha oscillations for human flexibility

Jakob Kaiser, Simone Schütz-Bosbach

FENS Forum 2024

ePoster

What happens to the neural code of consonants when presented in background noise

Amarins Heeringa, Christine Köppl

FENS Forum 2024