Topic: World Wide
Content Overview
109 total items · 60 seminars · 40 ePosters · 9 positions

Position

Dr. David Omer

Safra Center for Brain Science, The Hebrew University
Jerusalem, Israel
Apr 24, 2026

Our research program is aimed at elucidating the neural mechanisms of social cognition and navigation in the brain, using a combination of computational and experimental techniques. Positions are available for projects on the neural basis of social cognition in the hippocampus and the neural basis of spatial navigation in freely behaving animals, using wireless neural recording techniques.

Position · Computational Neuroscience

Vinita Samarasinghe

Institute for Neural Computation, Ruhr University Bochum
Ruhr University Bochum, NB 3/73, Universitätstr. 150, 44801 Bochum
Apr 24, 2026

The research group uses diverse computational modeling approaches, including biological neural networks, cognitive modeling, and machine learning/artificial intelligence, to study learning and memory. The selected candidate will expand the computational modeling framework Cobel-RL and use it to study how episodic memory might be used to learn to navigate.

Position · Computational Neuroscience

Vinita Samarasinghe

Arbeitsgruppe Computational Neuroscience, Institut für Neuroinformatik, Ruhr-Universität Bochum
Ruhr University Bochum, NB 3/73, Postfachnummer 110, Universitätstr. 150, 44801 Bochum
Apr 24, 2026

The research group uses diverse computational modeling approaches, including biological neural networks, cognitive modeling, and machine learning/artificial intelligence, to study learning and memory. The group is actively seeking a talented graduate student to join the team, who will expand the computational modeling framework Cobel-RL (https://doi.org/10.3389/fninf.2023.1134405) and use it to study how episodic memory might be used to learn to navigate.

Position

Jens Peter Lindemann

Neurobiology Group of Bielefeld University
Bielefeld University
Apr 24, 2026

The PhD project is part of the DFG-funded project 'Cue integration by bumblebees during navigation in uncertain environments with multiple goal options: Behavioural analysis in virtual reality and computational modelling', carried out by an international research team. Bumblebees can be trained, through appropriate rewarding, to prefer certain places or objects in a virtual environment. In a close integration of two PhD projects, one focusing on VR behaviour experiments and the other on computational modelling and simulation, we are investigating the mechanisms underlying these learning and orientation abilities. The applicant is expected to design and implement models for behavioural control of bumblebees, test them in computer simulations, contribute to VR experiments with bumblebees, and collaborate closely with the other project participants.

Position

Alona Fyshe

Department of Psychology, University of Alberta, Alberta Machine Intelligence Institute (Amii)
Edmonton, University of Alberta
Apr 24, 2026

The Department of Psychology, University of Alberta, invites applications for a tenure-track position at the rank of Assistant Professor in Artificial Intelligence and Biological Cognition, with a start date as early as July 1, 2024. Exceptional candidates may be considered for hiring at the rank of Associate Professor. The position is part of a cluster hire at the intersection of AI/ML and other areas of research excellence within the University of Alberta, including Health, Energy, and Indigenous Initiatives in health and the humanities, among others. The successful candidate will become an Amii Fellow, joining a highly collegial institute of world-class Artificial Intelligence and Machine Learning researchers, and will have access to Amii internal funding resources, administrative support, and a highly collaborative environment. The successful candidate will also be nominated by Amii for a Canada CIFAR Artificial Intelligence (CCAI) Chair, which includes research funding for at least five years.

Position

Craig Jin

University of Sydney, University of Technology Sydney, Westmead Hospital
The University of Sydney, NSW, 2006
Apr 24, 2026

Postdoctoral Research Associate in Fluent Mobility for Visual Impairment Using Auditory Augmentation: You will be working with a dynamic group of researchers at the University of Sydney, University of Technology Sydney and Westmead Hospital. We are exploring the use of non-verbal auditory grammar for spatial cognition and navigation.

Position · Computational Neuroscience

Vinita Samarasinghe

Arbeitsgruppe Computational Neuroscience, Institut für Neuroinformatik, Ruhr-Universität Bochum
Ruhr-Universität Bochum, NB 3/73, Universitätstr. 150, 44801 Bochum
Apr 24, 2026

Doctoral Position in Computational Neuroscience. Are you curious about how the human brain stores memories? Have you wondered how we manage to navigate through space? Our dynamic research group uses diverse computational modeling approaches, including biological neural networks, cognitive modeling, and machine learning/artificial intelligence, to study learning and memory. Currently, we are actively seeking a talented graduate student to join our team, someone who will expand our computational modeling framework Cobel-Spike and use it to study how spiking neural networks can learn to navigate. This position is 65% at TV-L E13, starts as soon as possible, and is funded for 3 years.

Position

Prof. Thomas Wolbers

German Center for Neurodegenerative Diseases
Magdeburg, Germany
Apr 24, 2026

You will engage in a comprehensive investigation of the neural and cognitive processes underlying superior memory in aging. This research will involve: designing and implementing behavioral experiments using advanced virtual reality (VR) technologies; conducting neuroimaging experiments using molecular imaging techniques and ultra-high field MRI (7T); and applying computational models to analyze data and generate predictive insights. This project is positioned at the intersection of aging research, advanced neuroimaging, and computational neuroscience, allowing you to contribute to an area of high societal relevance. For more details, please visit https://jobs.dzne.de/en/jobs/101384/phd-fmx-position-on-memory-and-spatial-coding-in-superagers-406620249

Position

Chris Eliasmith

Computational Neuroscience Research Group (CNRG), Centre for Theoretical Neuroscience (CTN)
University of Waterloo
Apr 24, 2026

The postdoctoral position will be hosted in the CNRG, with a principal focus on neural modeling to build the next version of the Spaun brain model, the world’s largest functional brain model. The project integrates spiking deep neural networks, motor control, probabilistic inference, navigation, perception and cognition to develop a state-of-the-art, large-scale, spiking, whole-brain model. Applicants should have a PhD, with demonstrated skills in at least one of those areas and a willingness to learn about the others. This project leverages the CNRG’s existing expertise in using neural networks for large-scale brain modeling, originally demonstrated in 2012 with the first version of Spaun. A subsequent version in 2018 significantly extended performance. The latest version currently being built by the CNRG will again break new barriers in the scale and sophistication of whole brain models. Unlike past models, it will be embedded in a sophisticated 3D environment, yet retain the ability to perform a wide variety of tasks, from simple perceptual and motor tasks to challenging intelligence tests. Overall, the long-term goal of the project is to advance the state-of-the-art in large-scale brain models.

Seminar · Neuroscience

Prefrontal-thalamic goal-state coding segregates navigation episodes into spatially consistent parallel hippocampal maps

Hiroshi Ito
University of Lausanne
Dec 1, 2025

Seminar · Neuroscience

Development and application of gaze control models for active perception

Prof. Bert Shi
Professor of Electronic and Computer Engineering at the Hong Kong University of Science and Technology (HKUST)
Jun 12, 2025

Gaze shifts in humans serve to direct high-resolution vision provided by the fovea towards areas in the environment. Gaze can be considered a proxy for attention or an indicator of the relative importance of different parts of the environment. In this talk, we discuss the development of generative models of human gaze in response to visual input. We discuss how such models can be learned, both using supervised learning and using implicit feedback as an agent interacts with the environment, the latter being more plausible in biological agents. We also discuss two ways such models can be used. First, they can be used to improve the performance of artificial autonomous systems, in applications such as autonomous navigation. Second, because these models are contingent on the human’s task, goals, and/or state in the context of the environment, observations of gaze can be used to infer information about user intent. This information can be used to improve human-machine and human-robot interaction, by making interfaces more anticipative. We discuss example applications in gaze-typing, robotic tele-operation and human-robot interaction.

Seminar · Neuroscience · Recording

Cognitive maps, navigational strategies, and the human brain

Russell Epstein
U Penn
May 13, 2025

Seminar · Neuroscience · Recording

Altered grid-like coding in early blind people and the role of vision in conceptual navigation

Roberto Bottini
CIMeC, University of Trento
Mar 6, 2025

Seminar · Neuroscience

Brain circuits for spatial navigation

Ann Hermundstad, Ila Fiete, Barbara Webb
Janelia Research Campus; MIT; University of Edinburgh
Nov 29, 2024

In this webinar on spatial navigation circuits, three researchers—Ann Hermundstad, Ila Fiete, and Barbara Webb—discussed how diverse species solve navigation problems using specialized yet evolutionarily conserved brain structures. Hermundstad illustrated the fruit fly’s central complex, focusing on how hardwired circuit motifs (e.g., sinusoidal steering curves) enable rapid, flexible learning of goal-directed navigation. This framework combines internal heading representations with modifiable goal signals, leveraging activity-dependent plasticity to adapt to new environments. Fiete explored the mammalian head-direction system, demonstrating how population recordings reveal a one-dimensional ring attractor underlying continuous integration of angular velocity. She showed that key theoretical predictions—low-dimensional manifold structure, isometry, uniform stability—are experimentally validated, underscoring parallels to insect circuits. Finally, Webb described honeybee navigation, featuring path integration, vector memories, route optimization, and the famous waggle dance. She proposed that allocentric velocity signals and vector manipulation within the central complex can encode and transmit distances and directions, enabling both sophisticated foraging and inter-bee communication via dance-based cues.

Seminar · Neuroscience

Contribution of computational models of reinforcement learning to neuroscience (computational modeling, reward, learning, decision-making, conditioning, navigation, dopamine, basal ganglia, prefrontal cortex, hippocampus)

Mehdi Khamassi
Centre National de la Recherche Scientifique / Sorbonne University
Nov 8, 2024

Seminar · Neuroscience

Navigating semantic spaces: recycling the brain GPS for higher-level cognition

Manuela Piazza
University of Trento, Italy
May 28, 2024

Humans share with other animals a complex neuronal machinery that evolved to support navigation in physical space and that supports wayfinding and path integration. In my talk I will present a series of recent neuroimaging studies in humans, performed in my lab, aimed at investigating the idea that this same neural navigation system (the “brain GPS”) is also used to organize and navigate concepts and memories, and that abstract and spatial representations rely on a common neural fabric. I will argue that this might represent a novel example of “cortical recycling”, where the neuronal machinery that primarily evolved, in lower-level animals, to represent relationships between spatial locations and to navigate space is reused in humans to encode relationships between concepts in an internal abstract representational space of meaning.

Seminar · Neuroscience · Recording

Human Echolocation for Localization and Navigation – Behaviour and Brain Mechanisms

Lore Thaler
Durham University
Feb 15, 2024

Seminar · Neuroscience

A bottom up approach for analyzing circuits underlying navigation in vertebrates

Claire Wyart
Paris Brain Institute, France
Feb 1, 2024

Seminar · Neuroscience

Modeling the Navigational Circuitry of the Fly

Larry Abbott
Columbia University
Dec 1, 2023

Navigation requires orienting oneself relative to landmarks in the environment, evaluating relevant sensory data, remembering goals, and converting all this information into motor commands that direct locomotion. I will present models, highly constrained by connectomic, physiological, and behavioral data, of how these functions are accomplished in the fly brain.

Seminar · Neuroscience

A recurrent network model of planning predicts hippocampal replay and human behavior

Marcelo Mattar
NYU
Oct 20, 2023

When interacting with complex environments, humans can rapidly adapt their behavior to changes in task or context. To facilitate this adaptation, we often spend substantial periods of time contemplating possible futures before acting. For such planning to be rational, the benefits of planning to future behavior must at least compensate for the time spent thinking. Here we capture these features of human behavior by developing a neural network model where not only actions, but also planning, are controlled by prefrontal cortex. This model consists of a meta-reinforcement learning agent augmented with the ability to plan by sampling imagined action sequences drawn from its own policy, which we refer to as `rollouts'. Our results demonstrate that this agent learns to plan when planning is beneficial, explaining the empirical variability in human thinking times. Additionally, the patterns of policy rollouts employed by the artificial agent closely resemble patterns of rodent hippocampal replays recently recorded in a spatial navigation task, in terms of both their spatial statistics and their relationship to subsequent behavior. Our work provides a new theory of how the brain could implement planning through prefrontal-hippocampal interactions, where hippocampal replays are triggered by -- and in turn adaptively affect -- prefrontal dynamics.
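
The core planning mechanism described in this abstract, evaluating candidate actions by sampling imagined action sequences from the agent's own policy through an internal model, can be illustrated with a toy sketch. This is not the authors' meta-reinforcement-learning model; the function name, the integer-line environment, the policy, and the reward are all invented for illustration.

```python
import random

def plan_by_rollouts(state, policy, model, reward, n_rollouts=200, depth=5):
    """Score candidate first actions by sampling imagined action
    sequences ('rollouts') from the agent's own policy through an
    internal model, then return the best-scoring first action."""
    totals, counts = {}, {}
    for _ in range(n_rollouts):
        s = state
        a = policy(s)
        first, ret = a, 0.0
        for _ in range(depth):
            s = model(s, a)    # imagined transition
            ret += reward(s)   # imagined reward
            a = policy(s)
        totals[first] = totals.get(first, 0.0) + ret
        counts[first] = counts.get(first, 0) + 1
    # compare mean (not summed) imagined returns per first action
    return max(totals, key=lambda act: totals[act] / counts[act])

# toy example: a random walk on the integer line, rewarded for
# staying close to a goal at +3
random.seed(0)
best = plan_by_rollouts(
    state=0,
    policy=lambda s: random.choice([-1, 1]),
    model=lambda s, a: s + a,
    reward=lambda s: -abs(s - 3),
)
```

In this toy setting the rollouts reliably favor stepping toward the goal; what the paper adds on top of such a scheme is learning *when* rolling out is worth the time spent thinking.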

Seminar · Neuroscience · Recording

How fly neurons compute the direction of visual motion

Axel Borst
Max-Planck-Institute for Biological Intelligence
Oct 9, 2023

Detecting the direction of image motion is important for visual navigation, predator avoidance and prey capture, and thus essential for the survival of all animals that have eyes. However, the direction of motion is not explicitly represented at the level of the photoreceptors: it rather needs to be computed by subsequent neural circuits, involving a comparison of the signals from neighboring photoreceptors over time. The exact nature of this process represents a classic example of neural computation and has been a longstanding question in the field. Much progress has been made in recent years in the fruit fly Drosophila melanogaster by genetically targeting individual neuron types to block, activate or record from them. Our results obtained this way demonstrate that the local direction of motion is computed in two parallel ON and OFF pathways. Within each pathway, a retinotopic array of four direction-selective T4 (ON) and T5 (OFF) cells represents the four Cartesian components of local motion vectors (leftward, rightward, upward, downward). Since none of the presynaptic neurons is directionally selective, direction selectivity first emerges within T4 and T5 cells. Our present research focuses on the cellular and biophysical mechanisms by which the direction of image motion is computed in these neurons.
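
The comparison of neighboring photoreceptor signals over time sketched in this abstract is classically formalized by the Hassenstein-Reichardt correlator. Below is a minimal sketch of that correlator (the biological low-pass filter is approximated by a pure time shift, and this is a textbook model, not the T4/T5 circuitry itself):

```python
import numpy as np

def reichardt_detector(left, right, delay=1):
    """Hassenstein-Reichardt elementary motion detector for two
    neighboring photoreceptor signals (1D time series). Positive
    output indicates left-to-right motion."""
    # approximate the delay/low-pass stage with a pure time shift
    d_left = np.roll(left, delay)
    d_right = np.roll(right, delay)
    d_left[:delay] = 0.0
    d_right[:delay] = 0.0
    # two mirror-symmetric correlation subunits, subtracted
    # to yield direction opponency
    return d_left * right - left * d_right

# a bright spot moving rightward: the left receptor sees it one
# time step before the right receptor does
left = np.array([0.0, 0.0, 1.0, 0.0, 0.0, 0.0])
right = np.array([0.0, 0.0, 0.0, 1.0, 0.0, 0.0])
response = reichardt_detector(left, right)
```

Swapping the two inputs (a leftward-moving spot) flips the sign of the summed response, which is the opponency the abstract attributes to the subtraction of ON/OFF pathway outputs of opposite preferred direction.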

Seminar · Neuroscience

A recurrent network model of planning explains hippocampal replay and human behavior

Guillaume Hennequin
University of Cambridge, UK
May 31, 2023

When interacting with complex environments, humans can rapidly adapt their behavior to changes in task or context. To facilitate this adaptation, we often spend substantial periods of time contemplating possible futures before acting. For such planning to be rational, the benefits of planning to future behavior must at least compensate for the time spent thinking. Here we capture these features of human behavior by developing a neural network model where not only actions, but also planning, are controlled by prefrontal cortex. This model consists of a meta-reinforcement learning agent augmented with the ability to plan by sampling imagined action sequences drawn from its own policy, which we refer to as 'rollouts'. Our results demonstrate that this agent learns to plan when planning is beneficial, explaining the empirical variability in human thinking times. Additionally, the patterns of policy rollouts employed by the artificial agent closely resemble patterns of rodent hippocampal replays recently recorded in a spatial navigation task, in terms of both their spatial statistics and their relationship to subsequent behavior. Our work provides a new theory of how the brain could implement planning through prefrontal-hippocampal interactions, where hippocampal replays are triggered by - and in turn adaptively affect - prefrontal dynamics.

Seminar · Neuroscience · Recording

Are place cells just memory cells? Probably yes

Stefano Fusi
Columbia University, New York
Mar 22, 2023

Neurons in the rodent hippocampus appear to encode the position of the animal in physical space during movement. Individual “place cells” fire in restricted sub-regions of an environment, a feature often taken as evidence that the hippocampus encodes a map of space that subserves navigation. But these same neurons exhibit complex responses to many other variables that defy explanation by position alone, and the hippocampus is known to be more broadly critical for memory formation. Here we elaborate and test a theory of hippocampal coding which produces place cells as a general consequence of efficient memory coding. We constructed neural networks that actively exploit the correlations between memories in order to learn compressed representations of experience. Place cells readily emerged in the trained model, due to the correlations in sensory input between experiences at nearby locations. Notably, these properties were highly sensitive to the compressibility of the sensory environment, with place field size and population coding level in dynamic opposition to optimally encode the correlations between experiences. The effects of learning were also strongly biphasic: nearby locations are represented more similarly following training, while locations with intermediate similarity become increasingly decorrelated, both distance-dependent effects that scaled with the compressibility of the input features. Using virtual reality and 2-photon functional calcium imaging in head-fixed mice, we recorded the simultaneous activity of thousands of hippocampal neurons during virtual exploration to test these predictions. Varying the compressibility of sensory information in the environment produced systematic changes in place cell properties that reflected the changing input statistics, consistent with the theory. We similarly identified representational plasticity during learning, which produced a distance-dependent exchange between compression and pattern separation.
These results motivate a more domain-general interpretation of hippocampal computation, one that is naturally compatible with earlier theories on the circuit's importance for episodic memory formation. Work done in collaboration with James Priestley, Lorenzo Posani, Marcus Benna, Attila Losonczy.

Seminar · Neuroscience

A specialized role for entorhinal attractor dynamics in combining path integration and landmarks during navigation

Malcolm Campbell
Harvard
Mar 9, 2023

During navigation, animals estimate their position using path integration and landmarks. In a series of two studies, we used virtual reality and electrophysiology to dissect how these inputs combine to generate the brain’s spatial representations. In the first study (Campbell et al., 2018), we focused on the medial entorhinal cortex (MEC) and its set of navigationally-relevant cell types, including grid cells, border cells, and speed cells. We discovered that attractor dynamics could explain an array of initially puzzling MEC responses to virtual reality manipulations. This theoretical framework successfully predicted both MEC grid cell responses to additional virtual reality manipulations, as well as mouse behavior in a virtual path integration task. In the second study (Campbell*, Attinger* et al., 2021), we asked whether these principles generalize to other navigationally-relevant brain regions. We used Neuropixels probes to record thousands of neurons from MEC, primary visual cortex (V1), and retrosplenial cortex (RSC). In contrast to the prevailing view that “everything is everywhere all at once,” we identified a unique population of MEC neurons, overlapping with grid cells, that became active with striking spatial periodicity while head-fixed mice ran on a treadmill in darkness. These neurons exhibited unique cue-integration properties compared to other MEC, V1, or RSC neurons: they remapped more readily in response to conflicts between path integration and landmarks; they coded position prospectively as opposed to retrospectively; they upweighted path integration relative to landmarks in conditions of low visual contrast; and as a population, they exhibited a lower-dimensional activity structure. Based on these results, our current view is that MEC attractor dynamics play a privileged role in resolving conflicts between path integration and landmarks during navigation. 
Future work should include carefully designed causal manipulations to rigorously test this idea, and expand the theoretical framework to incorporate notions of uncertainty and optimality.

Seminar · Neuroscience · Recording

Minute-scale periodic sequences in medial entorhinal cortex

Soledad Gonzalo Cogno
Norwegian University of Science and Technology, Trondheim
Feb 1, 2023

The medial entorhinal cortex (MEC) hosts many of the brain’s circuit elements for spatial navigation and episodic memory, operations that require neural activity to be organized across long durations of experience. While location is known to be encoded by a plethora of spatially tuned cell types in this brain region, little is known about how the activity of entorhinal cells is tied together over time. Among the brain’s most powerful mechanisms for neural coordination are network oscillations, which dynamically synchronize neural activity across circuit elements. In MEC, theta and gamma oscillations provide temporal structure to the neural population activity at subsecond time scales. It remains an open question, however, whether similar coordination occurs in MEC at behavioural time scales, in the second-to-minute regime. In this talk I will show that MEC activity can be organized into a minute-scale oscillation that entrains nearly the entire cell population, with periods ranging from 10 to 100 seconds. Throughout this ultraslow oscillation, neural activity progresses in periodic and stereotyped sequences. The oscillation sometimes advances uninterruptedly for tens of minutes, transcending epochs of locomotion and immobility. Similar oscillatory sequences were not observed in neighboring parasubiculum or in visual cortex. The ultraslow periodic sequences in MEC may have the potential to couple its neurons and circuits across extended time scales and to serve as a scaffold for processes that unfold at behavioural time scales.

Seminar · Neuroscience · Recording

The medial prefrontal cortex replays generalized sequences

Karola Käfer
Institute of Science and Technology Austria
Jan 11, 2023

Whilst spatial navigation is a function ascribed to the hippocampus, flexibly adapting to a change in rule depends on the medial prefrontal cortex (mPFC). Single-units were recorded from the hippocampus and mPFC of rats shifting between a spatially- and cue-guided rule on a plus-maze. The mPFC population coded for the relative position between start and goal arm. During awake immobility periods, the mPFC replayed organized sequences of generalized positions which positively correlated with rule-switching performance. Conversely, hippocampal replay negatively correlated with performance and occurred independently of mPFC replay. Sequential replay in the hippocampus and mPFC may thus serve different functions.

Seminar · Neuroscience · Recording

Neural circuits for vector processing in the insect brain

Barbara Webb
University of Edinburgh
Nov 23, 2022

Several species of insects have been observed to perform accurate path integration, constantly updating a vector memory of their location relative to a starting position, which they can use to take a direct return path. Foraging insects such as bees and ants are also able to store and recall the vectors to return to food locations, and to take novel shortcuts between these locations. Other insects, such as dung beetles, are observed to integrate multimodal directional cues in a manner well described by vector addition. All these processes appear to be functions of the Central Complex, a highly conserved and strongly structured circuit in the insect brain. Modelling this circuit, at the single neuron level, suggests it has general capabilities for vector encoding, vector memory, vector addition and vector rotation that can support a wide range of directed and navigational behaviours.
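
The vector operations this abstract attributes to the Central Complex, maintaining a home vector by path integration and deriving novel shortcuts by vector arithmetic, are simple to state concretely. A minimal sketch, with invented helper names, at the level of the behaviour rather than the neural circuit:

```python
import numpy as np

def home_vector(steps):
    """Path integration: accumulate allocentric displacement vectors;
    the home vector is the negative of the running sum."""
    return -np.sum(np.asarray(steps, dtype=float), axis=0)

def novel_shortcut(vec_to_a, vec_to_b):
    """A direct route from food site A to food site B follows from
    subtracting the two stored nest-to-food vectors."""
    return np.asarray(vec_to_b, dtype=float) - np.asarray(vec_to_a, dtype=float)

# an outbound path of three displacement steps, then the stored
# vectors to two remembered food sites
home = home_vector([(1, 0), (0, 1), (2, 0)])     # points back to start
cut = novel_shortcut((2, 0), (0, 2))             # A -> B without the nest
```

The modelling claim in the talk is that these additions, subtractions, and rotations are implemented at the single-neuron level by the conserved Central Complex circuitry.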

Seminar · Neuroscience

Intrinsic Geometry of a Combinatorial Sensory Neural Code for Birdsong

Tim Gentner
University of California, San Diego, USA
Nov 9, 2022

Understanding the nature of neural representation is a central challenge of neuroscience. One common approach to this challenge is to compute receptive fields by correlating neural activity with external variables drawn from sensory signals. But these receptive fields are only meaningful to the experimenter, not the organism, because only the experimenter has access to both the neural activity and knowledge of the external variables. To understand neural representation more directly, recent methodological advances have sought to capture the intrinsic geometry of sensory driven neural responses without external reference. To date, this approach has largely been restricted to low-dimensional stimuli as in spatial navigation. In this talk, I will discuss recent work from my lab examining the intrinsic geometry of sensory representations in a model vocal communication system, songbirds. From the assumption that sensory systems capture invariant relationships among stimulus features, we conceptualized the space of natural birdsongs to lie on the surface of an n-dimensional hypersphere. We computed composite receptive field models for large populations of simultaneously recorded single neurons in the auditory forebrain and show that solutions to these models define convex regions of response probability in the spherical stimulus space. We then define a combinatorial code over the set of receptive fields, realized in the moment-to-moment spiking and non-spiking patterns across the population, and show that this code can be used to reconstruct high-fidelity spectrographic representations of natural songs from evoked neural responses. Notably, we find that topological relationships among combinatorial codewords directly mirror acoustic relationships among songs in the spherical stimulus space. That is, the time-varying pattern of co-activity across the neural population expresses an intrinsic representational geometry that mirrors the natural, extrinsic stimulus space.  
Combinatorial patterns across this intrinsic space directly represent complex vocal communication signals, do not require computation of receptive fields, and are in a form, spike time coincidences, amenable to biophysical mechanisms of neural information propagation.

Seminar · Neuroscience

How fly neurons compute the direction of visual motion

Alexander Borst
Max Planck Institute of Neurobiology - Martinsried
Nov 7, 2022

Detecting the direction of image motion is important for visual navigation, predator avoidance and prey capture, and thus essential for the survival of all animals that have eyes. However, the direction of motion is not explicitly represented at the level of the photoreceptors: it rather needs to be computed by subsequent neural circuits. The exact nature of this process represents a classic example of neural computation and has been a longstanding question in the field. Our results obtained in the fruit fly Drosophila demonstrate that the local direction of motion is computed in two parallel ON and OFF pathways. Within each pathway, a retinotopic array of four direction-selective T4 (ON) and T5 (OFF) cells represents the four Cartesian components of local motion vectors (leftward, rightward, upward, downward). Since none of the presynaptic neurons is directionally selective, direction selectivity first emerges within T4 and T5 cells. Our present research focuses on the cellular and biophysical mechanisms by which the direction of image motion is computed in these neurons.

Seminar · Neuroscience · Recording

Mouse visual cortex as a limited resource system that self-learns an ecologically-general representation

Aran Nayebi
MIT
Nov 2, 2022

Studies of the mouse visual system have revealed a variety of visual brain areas in a roughly hierarchical arrangement, together with a multitude of behavioral capacities, ranging from stimulus-reward associations, to goal-directed navigation, and object-centric discriminations. However, an overall understanding of the mouse’s visual cortex organization, and how this organization supports visual behaviors, remains unknown. Here, we take a computational approach to help address these questions, providing a high-fidelity quantitative model of mouse visual cortex. By analyzing factors contributing to model fidelity, we identified key principles underlying the organization of mouse visual cortex. Structurally, we find that comparatively low-resolution and shallow structure were both important for model correctness. Functionally, we find that models trained with task-agnostic, unsupervised objective functions, based on the concept of contrastive embeddings were substantially better than models trained with supervised objectives. Finally, the unsupervised objective builds a general-purpose visual representation that enables the system to achieve better transfer on out-of-distribution visual, scene understanding and reward-based navigation tasks. Our results suggest that mouse visual cortex is a low-resolution, shallow network that makes best use of the mouse’s limited resources to create a light-weight, general-purpose visual system – in contrast to the deep, high-resolution, and more task-specific visual system of primates.

Seminar · Neuroscience · Recording

Learning predictive maps in the brain for spatial navigation

William de Cothi
Barry lab, UCL
Oct 12, 2022

The predictive map hypothesis provides a promising framework to model representations in the hippocampal formation. I will introduce a tractable implementation of a predictive map called the successor representation (SR), before presenting data showing that rats and humans display SR-like navigational choices on a novel open-field maze. Next, I will show how such a predictive map could be implemented using spatial representations found in the hippocampal formation, before finally presenting how such learning might be well approximated by phenomena that exist in the spatial memory system - namely spike-timing dependent plasticity and theta phase precession.
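
For readers new to the successor representation, its standard temporal-difference learning rule is compact enough to sketch. The following is a generic SR sketch (illustrative only, not the speaker's implementation; the 3-state cycle environment is invented):

```python
import numpy as np

def td_update_sr(M, s, s_next, alpha=0.1, gamma=0.9):
    """One temporal-difference update of the successor representation.
    M[s, j] estimates the expected discounted future occupancy of
    state j when starting from state s under the current policy."""
    target = np.eye(M.shape[0])[s] + gamma * M[s_next]
    M = M.copy()
    M[s] += alpha * (target - M[s])
    return M

# learn the SR of a deterministic 3-state cycle 0 -> 1 -> 2 -> 0
M = np.eye(3)
for _ in range(2000):
    for s, s_next in [(0, 1), (1, 2), (2, 0)]:
        M = td_update_sr(M, s, s_next)
```

For a fixed policy with transition matrix P, these updates converge to M = (I - gamma * P)^-1, the matrix of expected discounted future state occupancies that makes SR-based navigation "predictive".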

SeminarNeuroscienceRecording

The Secret Bayesian Life of Ring Attractor Networks

Anna Kutschireiter
Spiden AG, Pfäffikon, Switzerland
Sep 7, 2022

Efficient navigation requires animals to track their position, velocity and heading direction (HD). Some animals’ behavior suggests that they also track uncertainties about these navigational variables, and make strategic use of these uncertainties, in line with a Bayesian computation. Ring-attractor networks have been proposed to estimate and track these navigational variables, for instance in the HD system of the fruit fly Drosophila. However, such networks are not designed to incorporate a notion of uncertainty, and therefore seem unsuited to implement dynamic Bayesian inference. Here, we close this gap by showing that specifically tuned ring-attractor networks can track both a HD estimate and its associated uncertainty, thereby approximating a circular Kalman filter. We identified the network motifs required to integrate angular velocity observations, e.g., through self-initiated turns, and absolute HD observations, e.g., visual landmark inputs, according to their respective reliabilities, and show that these network motifs are present in the connectome of the Drosophila HD system. Specifically, our network encodes uncertainty in the amplitude of a localized bump of neural activity, thereby generalizing standard ring attractor models. In contrast to such standard attractors, however, proper Bayesian inference requires the network dynamics to operate in a regime away from the attractor state. More generally, we show that near-Bayesian integration is inherent in generic ring attractor networks, and that their amplitude dynamics can account for close-to-optimal reliability weighting of external evidence for a wide range of network parameters. This only holds, however, if their connection strengths allow the network to sufficiently deviate from the attractor state. Overall, our work offers a novel interpretation of ring attractor networks as implementing dynamic Bayesian integrators. 
We further provide a principled theoretical foundation for the suggestion that the Drosophila HD system may implement Bayesian HD tracking via ring attractor dynamics.
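The idea that a bump’s amplitude can encode certainty can be sketched with a few lines of filtering code. This is a loose illustration of the circular-Kalman-filter idea, not the authors’ model: the estimate is a single complex number whose angle is the heading estimate and whose magnitude plays the role of certainty, and the leak time constant (standing in for diffusion-induced amplitude decay) and all noise parameters are assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 0.1             # interval between landmark observations (s)
sigma_w = 0.1        # heading diffusion (process noise), rad / sqrt(s)
kappa_obs = 2.0      # reliability (von Mises concentration) of each observation
omega = 1.0          # self-initiated turning speed (rad/s)
tau = 2.0            # certainty decay time constant (assumed)

true_heading = 0.0
z = 1.0 + 0.0j       # "bump": angle = heading estimate, amplitude = certainty

for _ in range(500):
    true_heading += omega * dt + sigma_w * np.sqrt(dt) * rng.normal()
    # Prediction: rotate the estimate by the angular-velocity cue; the
    # amplitude leaks toward zero, so certainty decays between observations.
    z *= np.exp(1j * omega * dt) * np.exp(-dt / tau)
    # Update: add a reliability-weighted unit vector pointing toward the
    # noisy landmark-derived heading observation.
    obs = true_heading + rng.vonmises(0.0, kappa_obs)
    z += kappa_obs * np.exp(1j * obs) * dt

error = abs(np.angle(np.exp(1j * (np.angle(z) - true_heading))))
print(error, abs(z))   # small circular error; amplitude grew above its prior
```

Note the reliability weighting falls out of the vector addition: a high-amplitude (confident) bump is barely deflected by a single observation, whereas a low-amplitude bump snaps toward it.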

SeminarNeuroscienceRecording

Learning static and dynamic mappings with local self-supervised plasticity

Pantelis Vafeidis
California Institute of Technology
Sep 7, 2022

Animals exhibit remarkable learning capabilities with little direct supervision. Likewise, self-supervised learning is an emergent paradigm in artificial intelligence, closing the performance gap to supervised learning. In the context of biology, self-supervised learning corresponds to a setting where one sense or specific stimulus may serve as a supervisory signal for another. After learning, the latter can be used to predict the former. On the implementation level, it has been demonstrated that such predictive learning can occur at the single-neuron level, in compartmentalized neurons that separate and associate information from different streams. We demonstrate the power of such self-supervised learning over unsupervised (Hebb-like) learning rules, which depend heavily on stimulus statistics, in two examples. First, in the context of animal navigation, predictive learning can associate internal self-motion information, always available to the animal, with external visual landmark information, leading to accurate path integration in the dark. We focus on the well-characterized fly head direction system and show that our setting learns a connectivity strikingly similar to the one reported in experiments. The mature network is a quasi-continuous attractor and reproduces key experiments in which optogenetic stimulation controls the internal representation of heading, and where the network remaps to integrate with different gains. Second, we show that incorporating global gating by reward prediction errors allows the same setting to learn conditioning at the neuronal level with mixed selectivity. At its core, conditioning entails associating a neural activity pattern induced by an unconditioned stimulus (US) with the pattern arising in response to a conditioned stimulus (CS).
Solving the generic problem of pattern-to-pattern associations naturally leads to emergent cognitive phenomena like blocking, overshadowing, saliency effects, extinction, interstimulus interval effects etc. Surprisingly, we find that the same network offers a reductionist mechanism for causal inference by resolving the post hoc, ergo propter hoc fallacy.

SeminarNeuroscience

Forming latent codes for decision-making and spatial navigation: a generative modeling perspective

Giovanni Pezzulo
National Research Council, Rome, Italy
May 12, 2022
SeminarNeuroscience

Cognitive experience alters cortical involvement in navigation decisions

Charlotte Arlt
Harvard
Apr 22, 2022

The neural correlates of decision-making have been investigated extensively, and recent work aims to identify under what conditions cortex is actually necessary for making accurate decisions. We discovered that mice with distinct cognitive experiences, beyond sensory and motor learning, use different cortical areas and neural activity patterns to solve the same task, revealing past learning as a critical determinant of whether cortex is necessary for decision tasks. We used optogenetics and calcium imaging to study the necessity and neural activity of multiple cortical areas in mice with different training histories. Posterior parietal cortex and retrosplenial cortex were mostly dispensable for accurate performance of a simple navigation-based visual discrimination task. In contrast, these areas were essential for the same simple task when mice were previously trained on complex tasks with delay periods or association switches. Multi-area calcium imaging showed that, in mice with complex-task experience, single-neuron activity had higher selectivity and neuron-neuron correlations were weaker, leading to codes with higher task information. Therefore, past experience is a key factor in determining whether cortical areas have a causal role in decision tasks.

SeminarNeuroscienceRecording

This is the way: Sensory guidance in foraging

Cindy Poo & Pauline Fleischmann
Champalimaud Center for the Unknown & University of Würzburg
Apr 19, 2022
SeminarNeuroscienceRecording

Spatial uncertainty provides a unifying account of navigation behavior and grid field deformations

Yul Kang
Lengyel lab, Cambridge University
Apr 6, 2022

To localize ourselves in an environment for spatial navigation, we rely on vision and self-motion inputs, which only provide noisy and partial information. It is unknown how the resulting uncertainty affects navigation behavior and neural representations. Here we show that spatial uncertainty underlies key effects of environmental geometry on navigation behavior and grid field deformations. We develop an ideal observer model, which continually updates probabilistic beliefs about its allocentric location by optimally combining noisy egocentric visual and self-motion inputs via Bayesian filtering. This model directly yields predictions for navigation behavior and also predicts neural responses under population coding of location uncertainty. We simulate this model numerically under manipulations of a major source of uncertainty, environmental geometry, and support our simulations by analytic derivations for its most salient qualitative features. We show that our model correctly predicts a wide range of experimentally observed effects of the environmental geometry and its change on homing response distribution and grid field deformation. Thus, our model provides a unifying, normative account for the dependence of homing behavior and grid fields on environmental geometry, and identifies the unavoidable uncertainty in navigation as a key factor underlying these diverse phenomena.
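The ideal-observer computation described above – continually updating a probabilistic belief over allocentric position by combining noisy self-motion and visual inputs – can be illustrated with a generic particle filter. The unit-box geometry, Gaussian noise levels, and observation model below are assumptions of this sketch, not the authors’ model:

```python
import numpy as np

rng = np.random.default_rng(2)
n_particles = 2000
sigma_motion = 0.05    # per-step self-motion (odometry) noise
sigma_vision = 0.20    # noise of each visual position fix

true_pos = np.array([0.5, 0.5])
particles = rng.uniform(0.0, 1.0, size=(n_particles, 2))  # flat prior in a unit box

for _ in range(100):
    step = rng.uniform(-0.02, 0.02, size=2)               # true movement
    true_pos = np.clip(true_pos + step, 0.0, 1.0)
    # Prediction: propagate particles with the self-motion input; per-particle
    # noise models the unreliability of the odometry signal.
    particles = np.clip(
        particles + step + sigma_motion * rng.normal(size=(n_particles, 2)),
        0.0, 1.0,
    )
    # Update: reweight particles by the likelihood of a noisy visual fix.
    obs = true_pos + sigma_vision * rng.normal(size=2)
    w = np.exp(-np.sum((particles - obs) ** 2, axis=1) / (2 * sigma_vision**2))
    w /= w.sum()
    # Resample in proportion to the weights.
    particles = particles[rng.choice(n_particles, size=n_particles, p=w)]

belief_mean = particles.mean(axis=0)        # point estimate of position
uncertainty = particles.std(axis=0).mean()  # spread of the belief
print(belief_mean, uncertainty)
```

The particle spread is the quantity of interest here: manipulating the environment (e.g., removing the visual fix near certain walls) widens the belief and should bias any behavior read out from it, which is the logic behind the model’s geometry predictions.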

SeminarNeuroscienceRecording

Integrators in short- and long-term memory

Mark Goldman
UC Davis
Mar 2, 2022

The accumulation and storage of information in memory is a fundamental computation underlying animal behavior. In many brain regions and task paradigms, ranging from motor control to navigation to decision-making, such accumulation is accomplished through neural integrator circuits that enable external inputs to move a system’s population-wide patterns of neural activity along a continuous attractor. In the first portion of the talk, I will discuss our efforts to dissect the circuit mechanisms underlying a neural integrator from a rich array of anatomical, physiological, and perturbation experiments. In the second portion of the talk, I will show how the accumulation and storage of information in long-term memory may also be described by attractor dynamics, but now within the space of synaptic weights rather than neural activity. Altogether, this work suggests a conceptual unification of seemingly distinct short- and long-term memory processes.

SeminarNeuroscience

Effects of pathological Tau on hippocampal neuronal activity and spatial memory in ageing mice

Tim Viney
University of Oxford
Feb 11, 2022

The gradual accumulation of hyperphosphorylated forms of the Tau protein (pTau) in the human brain correlates with cognitive dysfunction and neurodegeneration. I will present our recent findings on the consequences of human pTau aggregation in the hippocampal formation of a mouse tauopathy model. We show that pTau preferentially accumulates in deep-layer pyramidal neurons, leading to their neurodegeneration. In aged but not younger mice, pTau spreads to oligodendrocytes. During ‘goal-directed’ navigation, we detect fewer high-firing pyramidal cells, but coupling to network oscillations is maintained in the remaining cells. The firing patterns of individually recorded and labelled pyramidal and GABAergic neurons are similar in transgenic and non-transgenic mice, as are network oscillations, suggesting intact neuronal coordination. This is consistent with a lack of pTau in subcortical brain areas that provide rhythmic input to the cortex. Spatial memory tests reveal a reduction in short-term familiarity of spatial cues but unimpaired spatial working and reference memory. These results suggest that preserved subcortical network mechanisms compensate for the widespread pTau aggregation in the hippocampal formation. I will also briefly discuss ideas on the subcortical origins of spatial memory and the concept of the cortex as a monitoring device.

SeminarNeuroscienceRecording

Distance-tuned neurons drive specialized path integration calculations in medial entorhinal cortex

Alexander Attinger
Giocomo lab, Stanford University
Jan 12, 2022

During navigation, animals estimate their position using path integration and landmarks, engaging many brain areas. Whether these areas follow specialized or universal cue integration principles remains incompletely understood. We combine electrophysiology with virtual reality to quantify cue integration across thousands of neurons in three navigation-relevant areas: primary visual cortex (V1), retrosplenial cortex (RSC), and medial entorhinal cortex (MEC). Compared with V1 and RSC, path integration influences position estimates more in MEC, and conflicts between path integration and landmarks trigger remapping more readily. Whereas MEC codes position prospectively, V1 codes position retrospectively, and RSC is intermediate between the two. Lowered visual contrast increases the influence of path integration on position estimates only in MEC. These properties are most pronounced in a population of MEC neurons, overlapping with grid cells, tuned to distance run in darkness. These results demonstrate the specialized role that path integration plays in MEC compared with other navigation-relevant cortical areas.

SeminarNeuroscienceRecording

Deforming the metric of cognitive maps distorts memory

Jacob Bellmund
Doeller lab, MPI CBS and the Kavli Institute
Jan 12, 2022

Environmental boundaries anchor cognitive maps that support memory. However, trapezoidal boundary geometry distorts the regular firing patterns of entorhinal grid cells, which have been proposed to provide a metric for cognitive maps. Here, we test the impact of trapezoidal boundary geometry on human spatial memory using immersive virtual reality. Consistent with reduced regularity of grid patterns in rodents and a grid-cell model based on the eigenvectors of the successor representation, human positional memory was degraded in a trapezoid compared to a square environment, an effect particularly pronounced in the trapezoid’s narrow part. Congruent with spatial frequency changes of eigenvector grid patterns, distance estimates between remembered positions were persistently biased, revealing distorted memory maps that explained behavior better than the objective maps. Our findings demonstrate that environmental geometry affects human spatial memory similarly to rodent grid cell activity — thus strengthening the putative link between grid cells and behavior along with their cognitive functions beyond navigation.

SeminarNeuroscienceRecording

NMC4 Short Talk: Neural Representation: Bridging Neuroscience and Philosophy

Andrew Richmond (he/him)
Columbia University
Dec 2, 2021

We understand the brain in representational terms. E.g., we understand spatial navigation by appealing to the spatial properties that hippocampal cells represent, and the operations hippocampal circuits perform on those representations (Moser et al., 2008). Philosophers have been concerned with the nature of representation, and recently neuroscientists entered the debate, focusing specifically on neural representations (Baker & Lansdell, n.d.; Egan, 2019; Piccinini & Shagrir, 2014; Poldrack, 2020; Shagrir, 2001). We want to know what representations are, how to discover them in the brain, and why they matter so much for our understanding of the brain. Those questions are framed in a traditional philosophical way: we start with explanations that use representational notions, and to more deeply understand those explanations we ask, what are representations — what is the definition of representation? What is it for some bit of neural activity to be a representation? I argue that there is an alternative, and much more fruitful, approach. Rather than asking what representations are, we should ask what the use of representational *notions* allows us to do in neuroscience — what thinking in representational terms helps scientists do or explain. I argue that this framing offers more fruitful ground for interdisciplinary collaboration by distinguishing the philosophical concerns that have a place in neuroscience from those that don’t (namely the definitional or metaphysical questions about representation). And I argue for a particular view of representational notions: they allow us to impose the structure of one domain onto another as a model of its causal structure. So, e.g., thinking about the hippocampus as representing spatial properties is a way of taking structures in those spatial properties, and projecting those structures (and algorithms that would implement them) onto the brain as models of its causal structure.

SeminarNeuroscienceRecording

Targeted Activation of Hippocampal Place Cells Drives Memory-Guided Spatial Behaviour

Nick Robinson
Häusser lab, UCL
Dec 1, 2021

The hippocampus is crucial for spatial navigation and episodic memory formation. Hippocampal place cells exhibit spatially selective activity within an environment and have been proposed to form the neural basis of a cognitive map of space that supports these mnemonic functions. However, the direct influence of place cell activity on spatial navigation behaviour has not yet been demonstrated. Using an ‘all-optical’ combination of simultaneous two-photon calcium imaging and two-photon holographically targeted optogenetics, we identified and selectively activated place cells that encoded behaviourally relevant locations in a virtual reality environment. Targeted stimulation of a small number of place cells was sufficient to bias the behaviour of animals during a spatial memory task, providing causal evidence that hippocampal place cells actively support spatial navigation and memory. Time permitting, I will also describe new experiments aimed at understanding the fundamental encoding mechanism that supports episodic memory, focussing on the role of hippocampal sequences across multiple timescales and behaviours.

SeminarNeuroscienceRecording

NMC4 Keynote: An all-natural deep recurrent neural network architecture for flexible navigation

Vivek Jayaraman
Janelia Research Campus
Dec 1, 2021

A wide variety of animals and some artificial agents can adapt their behavior to changing cues, contexts, and goals. But what neural network architectures support such behavioral flexibility? Agents with loosely structured network architectures and random connections can be trained over millions of trials to display flexibility in specific tasks, but many animals must adapt and learn with much less experience just to survive. Further, it has been challenging to understand how the structure of trained deep neural networks relates to their functional properties, an important objective for neuroscience. In my talk, I will use a combination of behavioral, physiological and connectomic evidence from the fly to make the case that the built-in modularity and structure of its networks incorporate key aspects of the animal’s ecological niche, enabling rapid flexibility by constraining learning to operate on a restricted parameter set. It is not unlikely that this is also a feature of many biological neural networks across other animals, large and small, and with and without vertebrae.

SeminarNeuroscience

Multiple maps for navigation

Lisa Giocomo
Stanford University, USA
Nov 22, 2021
SeminarNeuroscienceRecording

Phase precession in the human hippocampus and entorhinal cortex

Salman Qasim
Gu Lab, Icahn School of Medicine at Mount Sinai
Nov 17, 2021

Knowing where we are, where we have been, and where we are going is critical to many behaviors, including navigation and memory. One potential neuronal mechanism underlying this ability is phase precession, in which spatially tuned neurons represent sequences of positions by activating at progressively earlier phases of local network theta oscillations. Based on studies in rodents, researchers have hypothesized that phase precession may be a general neural pattern for representing sequential events for learning and memory. By recording human single-neuron activity during spatial navigation, we show that spatially tuned neurons in the human hippocampus and entorhinal cortex exhibit phase precession. Furthermore, beyond the neural representation of locations, we show evidence for phase precession related to specific goal states. Our findings thus extend theta phase precession to humans and suggest that this phenomenon has a broad functional role for the neural representation of both spatial and non-spatial information.
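As a toy illustration of the phenomenon, the following sketch simulates a cell whose spike phase advances linearly as a simulated animal crosses its place field. The linear phase–position model and all parameter values are assumptions made for illustration, not the authors’ analysis:

```python
import numpy as np

theta_freq = 8.0                      # theta frequency (Hz)
speed = 0.2                           # running speed (m/s)
field_start, field_end = 0.4, 0.6     # place-field limits on a 1 m track (m)

positions, phases = [], []
t = 0.0
while t < 1.0 / speed:                # time to traverse the full track
    x = speed * t
    if field_start <= x <= field_end:
        frac = (x - field_start) / (field_end - field_start)
        # spike phase precesses linearly from 2*pi down to 0 across the field
        positions.append(x)
        phases.append(2 * np.pi * (1.0 - frac))
    t += 1.0 / theta_freq             # one candidate spike per theta cycle

slope = np.polyfit(positions, phases, 1)[0]
print(slope)   # negative: phase advances (precesses) with distance traveled
```

The negative phase-versus-position slope is the standard signature analyses look for (in real data, with circular-linear rather than ordinary regression); each traversal thereby compresses a sequence of positions into a single theta cycle.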

SeminarNeuroscience

Learning predictive maps in the brain for spatial navigation

Will de Cothi
UCL
Nov 17, 2021
SeminarNeuroscienceRecording

StereoSpike: Depth Learning with a Spiking Neural Network

Ulysse Rancon
University of Bordeaux
Nov 2, 2021

Depth estimation is an important computer vision task, useful in particular for navigation in autonomous vehicles, or for object manipulation in robotics. Here we solved it using an end-to-end neuromorphic approach, combining two event-based cameras and a Spiking Neural Network (SNN) with a slightly modified U-Net-like encoder-decoder architecture, that we named StereoSpike. More specifically, we used the Multi Vehicle Stereo Event Camera Dataset (MVSEC). It provides a depth ground-truth, which was used to train StereoSpike in a supervised manner, using surrogate gradient descent. We propose a novel readout paradigm to obtain a dense analog prediction – the depth of each pixel – from the spikes of the decoder. We demonstrate that this architecture generalizes very well, even better than its non-spiking counterparts, leading to state-of-the-art test accuracy. To the best of our knowledge, this is the first time that such a large-scale regression problem has been solved by a fully spiking network. Finally, we show that low firing rates (<10%) can be obtained via regularization, with a minimal cost in accuracy. This means that StereoSpike could be implemented efficiently on neuromorphic chips, opening the door to low-power real-time embedded systems.

SeminarMachine LearningRecording

Playing StarCraft and saving the world using multi-agent reinforcement learning!

InstaDeep
Oct 29, 2021

"This is my C-14 Impaler gauss rifle! There are many like it, but this one is mine!" – A terran marine. If you have never heard of a terran marine before, then you have probably missed out on playing the very engaging and entertaining strategy computer game StarCraft. However, don’t despair, because what we have in store might be even more exciting! In this interactive session, we will take you through, step by step, how to train a team of terran marines to defeat a team of marines controlled by the built-in game AI in StarCraft II. How will we achieve this? Using multi-agent reinforcement learning (MARL). MARL is a useful framework for building distributed intelligent systems. In MARL, multiple agents are trained to act as individual decision-makers of some larger system, while learning to work as a team. We will show you how to use Mava (https://github.com/instadeepai/Mava), a newly released research framework for MARL, to build a multi-agent learning system for StarCraft II. We will provide the necessary guidance, tools and background to understand the key concepts behind MARL, how to use Mava building blocks to build systems and how to train a system from scratch. We will conclude the session by briefly sharing various exciting real-world application areas for MARL at InstaDeep, such as large-scale autonomous train navigation and circuit board routing. These are problems that become exponentially more difficult to solve as they scale. Finally, we will argue that many of humanity’s most important practical problems are reminiscent of the ones just described. These include, for example, the need for sustainable management of distributed resources under the pressures of climate change, or efficient inventory control and supply routing in critical distribution networks, or robotic teams for rescue missions and exploration.
We believe MARL has enormous potential to be applied in these areas and we hope to inspire you to get excited and interested in MARL and perhaps one day contribute to the field!

SeminarNeuroscience

“Wasn’t there food around here?”: An Agent-based Model for Local Search in Drosophila

Amir Behbahani
California Institute of Technology
Sep 20, 2021

The ability to keep track of one’s location in space is a critical behavior for animals navigating to and from a salient location, and its computational basis is now beginning to be unraveled. Here, we tracked flies in a ring-shaped channel as they executed bouts of search triggered by optogenetic activation of sugar receptors. Unlike experiments in open field arenas, which produce highly tortuous search trajectories, our geometrically constrained paradigm enabled us to monitor flies’ decisions to move toward or away from the fictive food. Our results suggest that flies use path integration to remember the location of a food site even after it has disappeared, and that they retain this memory even after walking around the arena one or more times. To determine the behavioral algorithms underlying Drosophila search, we developed multiple state transition models and found that flies likely accomplish path integration by combining odometry and compass navigation to keep track of their position relative to the fictive food. Our results indicate that whereas flies re-zero their path integrator at food when only one feeding site is present, they adjust their path integrator to a central location between sites when experiencing food at two or more locations. Together, this work provides a simple experimental paradigm and theoretical framework to advance investigations of the neural basis of path integration.
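The core path-integration computation – combining odometry (distance run) with a compass (heading) signal to maintain a vector back to the food site – can be sketched in a few lines. The abstract’s arena is a ring; for simplicity this sketch uses a free 2D random walk, and all parameter values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
heading = 0.0
position = np.zeros(2)      # true location, by integrating actual movement
home_vector = np.zeros(2)   # path integrator: vector pointing back to start

for _ in range(200):
    heading += rng.normal(0.0, 0.3)   # compass input after a random turn
    step = 0.01                       # odometry: distance covered this step
    move = step * np.array([np.cos(heading), np.sin(heading)])
    position += move
    home_vector -= move               # accumulate the return vector

# With noise-free integration the home vector exactly cancels the outbound path.
print(position + home_vector)         # → [0. 0.]
```

Real odometry and compass signals are noisy, so the integrator drifts with path length – one reason the re-zeroing of the integrator at food, described in the abstract, is computationally useful.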

SeminarNeuroscienceRecording

Multisensory self in spatial navigation

Olaf Blanke
Swiss Federal Institute of Technology (EPFL)
Sep 2, 2021
SeminarNeuroscience

Neural circuits that support robust and flexible navigation in dynamic naturalistic environments

Hannah Haberkern
HHMI Janelia Research Campus
Aug 16, 2021

Tracking heading within an environment is a fundamental requirement for flexible, goal-directed navigation. In insects, a head-direction representation that guides the animal’s movements is maintained in a conserved brain region called the central complex. Two-photon calcium imaging of genetically targeted neural populations in the central complex of tethered fruit flies behaving in virtual reality (VR) environments has shown that the head-direction representation is updated based on self-motion cues and external sensory information, such as visual features and wind direction. Thus far, the head direction representation has mainly been studied in VR settings that only give flies control of the angular rotation of simple sensory cues. How the fly’s head direction circuitry enables the animal to navigate in dynamic, immersive and naturalistic environments is largely unexplored. I have developed a novel setup that permits imaging in complex VR environments that also accommodate flies’ translational movements. I have previously demonstrated that flies perform visually-guided navigation in such an immersive VR setting, and also that they learn to associate aversive optogenetically-generated heat stimuli with specific visual landmarks. A stable head direction representation is likely necessary to support such behaviors, but the underlying neural mechanisms are unclear. Based on a connectomic analysis of the central complex, I identified likely circuit mechanisms for prioritizing and combining different sensory cues to generate a stable head direction representation in complex, multimodal environments. I am now testing these predictions using calcium imaging in genetically targeted cell types in flies performing 2D navigation in immersive VR.

SeminarNeuroscience

Using extra-hippocampal cognitive maps for goal-directed spatial navigation

Hiroshi Ito
Max Planck Institute for Brain Research
Jul 7, 2021

Goal-directed navigation requires precise estimates of spatial relationships between current position and future goal, as well as planning of an associated route or action. While neurons in the hippocampal formation can represent the animal’s position and nearby trajectories, their role in determining the animal’s destination or action has been questioned. We thus hypothesize that brain regions outside the hippocampal formation may play complementary roles in navigation, particularly for guiding goal-directed behaviours based on the brain’s internal cognitive map. In this seminar, I will first describe a subpopulation of neurons in the retrosplenial cortex (RSC) that increase their firing when the animal approaches environmental boundaries, such as walls or edges. This boundary coding is independent of direct visual or tactile sensation but instead depends on inputs from the medial entorhinal cortex (MEC), which contains spatially tuned cells such as grid cells and border cells. However, unlike MEC border cells, we found that RSC border cells encode environmental boundaries in a self-centred egocentric coordinate frame, which may allow the animal to efficiently avoid approaching walls or edges during navigation. I will then discuss whether the brain can possess a precise estimate of remote target location during active environmental exploration. Such a spatial code has not been described in the hippocampal formation. However, we found that neurons in the rat orbitofrontal cortex (OFC) form spatial representations that persistently point to the animal’s subsequent goal destination throughout navigation. This destination coding emerges before navigation onset without direct sensory access to a distal goal, and is maintained via destination-specific neural ensemble dynamics.
These findings together suggest key roles for extra-hippocampal regions in spatial navigation, enabling animals to choose appropriate actions toward a desired destination by avoiding possible dangers.

SeminarNeuroscience

Multisensory encoding of self-motion in the retrosplenial cortex and beyond

Sepiedeh Keshavarzi
Sainsbury Wellcome Centre, UCL
Jun 30, 2021

In order to successfully navigate through the environment, animals must accurately estimate the status of their motion with respect to the surrounding scene and objects. In this talk, I will present our recent work on how retrosplenial cortical (RSC) neurons combine vestibular and visual signals to reliably encode the direction and speed of head turns during passive motion and active navigation. I will discuss these data in the context of RSC long-range connectivity and further show our ongoing work on building population-level models of motion representation across cortical and subcortical networks.

SeminarNeuroscienceRecording

Natural switches in sensory attention rapidly modulate hippocampal spatial codes

Ayelet Sarel
Ulanovsky lab, Weizmann Institute of Science
Jun 2, 2021

During natural behavior animals dynamically switch between different behaviors, yet little is known about how the brain performs behavioral switches. Navigation is a complex dynamic behavior that enables testing these kinds of behavioral switches: it requires the animal to know its own allocentric (world-centered) location within the environment, while also paying attention to sudden incoming events such as obstacles or other conspecifics – and therefore the animal may need to rapidly switch from representing its own allocentric position to egocentrically representing ‘things out there’. Here we used an ethological task where two bats flew together in a very large environment (130 meters), and had to switch between two behaviors: (i) navigation, and (ii) obstacle-avoidance during ‘cross-over’ events with the other bat. Bats increased their echolocation click-rate before a cross-over, indicating spatial attention to the other bat. Hippocampal CA1 neurons represented the bat’s own position when flying alone (allocentric place-coding); surprisingly, when meeting the other bat, neurons switched very rapidly to jointly representing the inter-bat distance × position (egocentric × allocentric coding). This switching to a neuronal representation of the other bat was correlated on a trial-by-trial basis with the attention signal, as indexed by the bat’s echolocation calls – suggesting that sensory attention is controlling these major switches in neural coding. Interestingly, we found that in place-cells, the different place-fields of the same neuron could exhibit very different tuning to inter-bat distance – creating a non-separable coding of allocentric position × egocentric distance. Together, our results suggest that attentional switches during navigation – which in bats can be measured directly based on their echolocation signals – elicit rapid dynamics of hippocampal spatial coding.
More broadly, this study demonstrates that during natural behavior, when animals often switch between different behaviors, neural circuits can rapidly and flexibly switch their core computations.

SeminarNeuroscience

Neural mechanisms of navigation behavior

Rachel Wilson
Joseph B. Martin Professor of Basic Research in the Field of Neurobiology, Harvard Medical School. Investigator, Howard Hughes Medical Institute.
May 26, 2021

The regions of the insect brain devoted to spatial navigation are beautifully orderly, with a remarkably precise pattern of synaptic connections. Thus, we can learn much about the neural mechanisms of spatial navigation by targeting identifiable neurons in these networks for in vivo patch clamp recording and calcium imaging. Our lab has recently discovered that the "compass system" in the Drosophila brain is anchored to not only visual landmarks, but also the prevailing wind direction. Moreover, we found that the compass system can re-learn the relationship between these external sensory cues and internal self-motion cues, via rapid associative synaptic plasticity. Postsynaptic to compass neurons, we found neurons that conjunctively encode heading direction and body-centric translational velocity. We then showed how this representation of travel velocity is transformed from body- to world-centric coordinates at the subsequent layer of the network, two synapses downstream from compass neurons. By integrating this world-centric vector-velocity representation over time, it should be possible for the brain to form a stored representation of the body's path through the environment.

SeminarNeuroscience

The 2021 Annual Bioengineering Lecture + Bioinspired Guidance, Navigation and Control Symposium

Prof Mandyam V. Srinivasan, Dr Stefan Leutenegger, Dr Basil el Jundi, Dr Einat Couzin-Fuchs, Dr Josh Merel, Dr Huai-Ti Lin
May 26, 2021

Join the Department of Bioengineering on the 26th May at 9:00am for The 2021 Annual Bioengineering Lecture + Bioinspired Guidance, Navigation and Control Symposium. This year’s lecture speaker will be distinguished bioengineer and neuroscientist Professor Mandyam V. Srinivasan AM FRS, from the University of Queensland. Professor Srinivasan studies visual systems, particularly those of bees and birds. His research has revealed how flying insects negotiate narrow gaps, regulate the height and speed of flight, estimate distance flown, and orchestrate smooth landings. Apart from enhancing fundamental knowledge, these findings are leading to novel, biologically inspired approaches to the design of guidance systems for unmanned aerial vehicles, with applications in surveillance, security and planetary exploration. Following Professor Srinivasan’s lecture will be the Bioinspired GNC Mini Symposium, with guest speakers from Google DeepMind, Imperial College London, the University of Würzburg and the University of Konstanz giving talks on their research into autonomous robot navigation, neural mechanisms of compass orientation in insects, and computational approaches to motor control.

SeminarNeuroscienceRecording

On places and borders in the brain

Dori Derdikman
Technion
May 20, 2021

While various types of cells have been found in relation to the hippocampal cognitive map and navigation system, how these cells are formed and what is read out from them is still a mystery. In the current lecture I will talk about several projects that tackle these issues. First, I will show how the formation of border cells in the cognitive map is related to a coordinate transformation; second, I will discuss the interaction between the reward system (VTA) and the hippocampus. Finally, I will describe a project using place cells as a proxy for associative memory to assess deficits in Alzheimer’s disease.

SeminarNeuroscienceRecording

On cognitive maps and reinforcement learning in large-scale animal behaviour

Yossi Yovel
Tel Aviv University
May 13, 2021

Bats are extreme aviators and amazing navigators. Many bat species nightly commute dozens of kilometres in search of food, and some bat species annually migrate over thousands of kilometres. Studying bats in their natural environment has always been extremely challenging because of their small size (mostly <50 g) and agile nature. We have recently developed novel miniature technology allowing us to GPS-tag small bats, thus opening a new window to document their behaviour in the wild. We have used this technology to track fruit-bat pups over 5 months from birth to adulthood. Following the bats’ full movement history allowed us to show that they use novel shortcuts, which are typical of cognitive-map-based navigation. In a second study, we examined how nectar-feeding bats make foraging decisions under competition. We show that by relying on a simple reinforcement learning strategy, the bats can divide the resource between them without aggression or communication. Together, these results demonstrate the power of the large-scale natural approach for studying animal behavior.
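The abstract does not specify the learning rule, but a minimal value-learning forager of the kind described — a simple delta-rule update of site values with occasional exploration — might look like this (all parameter values and the `forage` interface are illustrative assumptions, not the authors' model):

```python
import random

def forage(n_sites, rewards, n_visits, alpha=0.3, epsilon=0.1, seed=0):
    """Illustrative reinforcement-learning forager: each visit moves the
    estimated value of a site toward the reward obtained there (a delta
    rule), and the next site is chosen greedily with occasional
    random exploration."""
    rng = random.Random(seed)
    values = [0.0] * n_sites
    history = []
    for _ in range(n_visits):
        if rng.random() < epsilon:
            site = rng.randrange(n_sites)      # explore
        else:
            site = max(range(n_sites), key=lambda s: values[s])  # exploit
        r = rewards[site]
        values[site] += alpha * (r - values[site])  # delta-rule update
        history.append(site)
    return values, history
```

With several such agents depleting and renewing shared resources, this kind of local rule can partition sites among foragers without any communication, which is the phenomenon the study reports.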

SeminarNeuroscience

Locally-ordered representation of 3D space in the entorhinal cortex

Gily Ginosar
Ulanovsky lab, Weizmann Institute, Rehovot, Israel
Apr 29, 2021

When animals navigate on a two-dimensional (2D) surface, many neurons in the medial entorhinal cortex (MEC) are activated as the animal passes through multiple locations (‘firing fields’) arranged in a hexagonal lattice that tiles the locomotion-surface; these neurons are known as grid cells. However, although our world is three-dimensional (3D), the 3D volumetric representation in MEC remains unknown. Here we recorded MEC cells in freely-flying bats and found several classes of spatial neurons, including 3D border cells, 3D head-direction cells, and neurons with multiple 3D firing-fields. Many of these multifield neurons were 3D grid cells, whose neighboring fields were separated by a characteristic distance – forming a local order – but these cells lacked any global lattice arrangement of their fields. Thus, while 2D grid cells form a global lattice – characterized by both local and global order – 3D grid cells exhibited only local order, thus creating a locally ordered metric for space. We modeled grid cells as emerging from pairwise interactions between fields, which yielded a hexagonal lattice in 2D and local order in 3D – thus describing both 2D and 3D grid cells using one unifying model. Together, these data and model illuminate the fundamental differences and similarities between neural codes for 3D and 2D space in the mammalian brain.
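A toy version of the pairwise-interaction idea — fields that relax toward a characteristic separation without any global lattice constraint — can be sketched as follows (the spring-like potential, parameter values, and `relax_fields` interface are illustrative assumptions, not the authors' model):

```python
import numpy as np

def relax_fields(n_fields, dim, d0=1.0, steps=2000, lr=0.01, seed=1):
    """Firing fields modeled as points that repel when closer than a
    characteristic distance d0 and attract when farther, relaxed by
    gradient descent on a pairwise spring-like potential. In 2D such
    relaxation tends toward hexagonal packing; in 3D it produces only
    local order in the inter-field distances."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0, 3 * d0, size=(n_fields, dim))
    for _ in range(steps):
        diff = x[:, None, :] - x[None, :, :]      # pairwise offsets
        dist = np.linalg.norm(diff, axis=-1)
        np.fill_diagonal(dist, d0)                # self-pairs give zero force
        # spring-like force pulling each pair toward separation d0
        f = (d0 - dist)[..., None] * diff / dist[..., None]
        x += lr * f.sum(axis=1)
    return x
```

For two fields the relaxation converges to separation exactly d0; with many fields, nearest-neighbor distances cluster around a characteristic value — local order — while no global lattice is imposed.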

SeminarNeuroscienceRecording

Australian Bogong moths use a true stellar compass for long-distance navigation at night

Eric Warrant
University of Lund
Apr 19, 2021

Each spring, billions of Bogong moths escape hot conditions in different regions of southeast Australia by migrating over 1000 km to a limited number of cool caves in the Australian Alps, historically used for aestivating over the summer. At the beginning of autumn, the same individuals make a return migration to their breeding grounds to reproduce and die. To steer their migration, Bogong moths sense the Earth’s magnetic field and correlate its directional information with visual cues. In this presentation, we will show that a critically important visual cue is the distribution of starlight within the austral night sky. By tethering spring and autumn migratory moths in a flight simulator, we found that under natural dorsally-projected night skies, and in a nulled magnetic field (disabling the magnetic sense), moths flew in their seasonally appropriate migratory directions, turning in the opposite direction when the night sky was rotated 180°. Visual interneurons in the moth’s optic lobe and central brain responded vigorously to identical sky rotations. Migrating Bogong moths thus use the starry night sky as a true compass to distinguish geographic cardinal directions, the first invertebrate known to do so. These stellar cues are likely reinforced by the Earth’s magnetic field to create a robust compass mechanism for long-distance nocturnal navigation.

SeminarNeuroscience

State-dependent egocentric and allocentric heading representation in the monarch butterfly sun compass

Basil El Jundi
University of Wuerzburg
Mar 31, 2021

For spatial orientation, heading information can be processed in two different frames of reference: a self-centered egocentric frame or a viewpoint-independent allocentric frame. Using the most efficient frame of reference is particularly important if an animal migrates over large distances, as is the case for the monarch butterfly (Danaus plexippus). These butterflies employ a sun compass to travel over more than 4,000 kilometers to their destination in central Mexico. We developed tetrode recordings from the heading-direction network of tethered flying monarch butterflies that were allowed to orient with respect to a sun stimulus. We show that the neurons switch their frame of reference depending on the animal’s locomotion state. In quiescence, the heading-direction cells encode a sun bearing in an egocentric reference frame, while during active flight, the heading direction is encoded within an allocentric reference frame. By switching to an allocentric frame of reference during flight, monarch butterflies convert the sun into a global compass cue for long-distance navigation, an ideal strategy for maintaining a migratory heading.
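The frame conversion at the heart of this result reduces to simple angle arithmetic: given the sun's allocentric azimuth and its egocentric bearing relative to the body axis, the allocentric heading follows by subtraction (the sign convention here is an assumption for illustration):

```python
def allocentric_heading(sun_azimuth, egocentric_bearing):
    """Recover allocentric heading from the sun's world azimuth S and
    its egocentric bearing B relative to the body axis (both degrees,
    clockwise positive, illustrative convention): if the sun appears
    at bearing B and sits at world azimuth S, heading H = S - B."""
    return (sun_azimuth - egocentric_bearing) % 360.0
```

For example, a sun due east (azimuth 90°) seen 30° to the right of the body axis implies a heading of 60°; a quiescent, egocentric code would report only the 30° bearing, which is useless as a global compass once the animal turns.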

SeminarNeuroscienceRecording

Mixed representations in a visual-parietal-retrosplenial network for flexible navigation decisions

Shinichiro Kira
Harvard Medical School
Mar 31, 2021
SeminarNeuroscience

Navigation Turing Test: Toward Human-like RL

Ida Momennejad
Microsoft Research NYC
Mar 26, 2021

tbc

SeminarNeuroscienceRecording

A distinct subcircuit in medial entorhinal cortex mediates learning of interval timing behavior during immobility

Jim Heys
University of Utah, USA
Mar 23, 2021

Over 60 years of research has established that medial temporal lobe structures, including the hippocampus and entorhinal cortex, are necessary for the formation of episodic memories (i.e. memories of specific personal events that occur in spatial and temporal context). While prior work to establish the neural mechanisms underlying episodic memory has largely focused on questions related to spatial context, recently we have begun to investigate how these brain structures could be involved in encoding aspects of temporal context. In particular, we have focused on how medial entorhinal cortex, a structure well known for its role in spatial memory, may also be involved in encoding interval time. To answer this question, we have developed an instrumental paradigm for head-fixed mice that requires both immobile interval timing and locomotion-dependent navigation behavior. By combining this behavioral paradigm with large-scale cellular-resolution functional imaging and optogenetic-mediated inactivation, our results suggest that MEC is required for learning of interval timing behavior and that interval timing could be mediated through regular, sequential neural activity of a distinct subpopulation of neurons in MEC that encode elapsed time during periods of immobility (Heys and Dombeck, 2018; Heys et al, 2020; Issa et al., 2020). In this talk, I will discuss these findings and our ongoing work to investigate the principles underlying the role of medial temporal lobe structures in timing behavior and episodic memory.

SeminarNeuroscience

Abstraction and Inference in the Prefrontal Hippocampal Circuitry

Tim Behrens
Oxford University
Mar 18, 2021

The cellular representations and computations that allow rodents to navigate in space have been described with beautiful precision. In this talk, I will show that some of these same computations can be found in humans doing tasks that appear very different from spatial navigation. I will describe some theory that allows us to think about spatial and non-spatial problems in the same framework, and I will try to use this theory to give a new perspective on the beautiful spatial computations that inspired it. The overall goal of this work is to find a framework where we can talk about complicated non-spatial inference problems with the same precision that is only currently available in space.

SeminarNeuroscienceRecording

Understanding sensorimotor control at global and local scales

Kelly Clancy
Mrsic-Flogel lab, Sainsbury Wellcome Centre
Mar 10, 2021

The brain is remarkably flexible, and appears to instantly reconfigure its processing depending on what’s needed to solve the task at hand: fMRI studies indicate that distal brain areas fluidly couple and decouple with one another depending on behavioral context. But the structural architecture of the brain comprises long-range axonal projections that are relatively fixed by adulthood. How does the global dynamism evident in fMRI recordings manifest at a cellular level? To bridge the gap between the activity of single neurons and cortex-wide networks, we correlated electrophysiological recordings of individual neurons in primary visual (V1) and retrosplenial (RSP) associational cortex with activity across dorsal cortex, recorded simultaneously using widefield calcium imaging. We found that individual neurons in both cortical areas independently engaged in different distributed cortical networks depending on the animal’s behavioral state, suggesting that locomotion puts cortex into a more sensory-driven mode relevant for navigation.

SeminarPhysics of LifeRecording

Sperm Navigation: from hydrodynamic interactions to parameter estimation

Sarah Olson
Worcester Polytechnic Institute
Mar 3, 2021

Microorganisms can swim in a variety of environments, interacting with chemicals and other proteins in the fluid. In this talk, we will highlight recent computational methods and results for swimming efficiency and hydrodynamic interactions of swimmers in different fluid environments. Sperm are modeled via a centerline representation where forces are solved for using elastic rod theory. The method of regularized Stokeslets is used to solve the fluid-structure interaction where emergent swimming speeds can be compared to asymptotic analysis. In the case of fluids with extra proteins or cells that may act as friction, swimming speeds may be enhanced, and attraction may not occur. We will also highlight how parameter estimation techniques can be utilized to infer fluid and/or swimmer properties.
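The method of regularized Stokeslets replaces singular point forces with smoothed "blobs" so that the induced fluid velocity stays finite everywhere, including at the force points themselves. A minimal evaluation of Cortez's 3D regularized Stokeslet formula might look like this (function name and interface are illustrative; mu is the viscosity and eps the regularization parameter):

```python
import numpy as np

def regularized_stokeslet_velocity(x, points, forces, eps, mu=1.0):
    """Fluid velocity at x induced by regularized point forces, using
    the 3D regularized Stokeslet (Cortez's blob form):
      u(x) = (1/8*pi*mu) * sum_k [ f_k (r_k^2 + 2 eps^2)
                                   + (f_k . d_k) d_k ] / (r_k^2 + eps^2)^(3/2)
    with d_k = x - x_k and r_k = |d_k|."""
    u = np.zeros(3)
    for xk, fk in zip(points, forces):
        d = x - xk
        r2 = d @ d
        denom = (r2 + eps**2) ** 1.5
        u += (fk * (r2 + 2 * eps**2) + (fk @ d) * d) / denom
    return u / (8 * np.pi * mu)
```

Far from a force point the expression reduces to the classical singular Stokeslet, while at the force point itself it remains finite, which is what makes the method convenient for fluid-structure interaction along a sperm centerline.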

SeminarNeuroscienceRecording

Cortical networks for flexible decisions during spatial navigation

Christopher Harvey
Harvard University
Feb 19, 2021

My lab seeks to understand how the mammalian brain performs the computations that underlie cognitive functions, including decision-making, short-term memory, and spatial navigation, at the level of the building blocks of the nervous system, cell types and neural populations organized into circuits. We have developed methods to measure, manipulate, and analyze neural circuits across various spatial and temporal scales, including technology for virtual reality, optical imaging, optogenetics, intracellular electrophysiology, molecular sensors, and computational modeling. I will present recent work that uses large scale calcium imaging to reveal the functional organization of the mouse posterior cortex for flexible decision-making during spatial navigation in virtual reality. I will also discuss work that uses optogenetics and calcium imaging during a variety of decision-making tasks to highlight how cognitive experience and context greatly alter the cortical circuits necessary for navigation decisions.

ePoster

Behavioral and Neuronal Correlates of Exploration and Goal-Directed Navigation

Miao Wang, Fabian Stocek, Joseph González, Justin Graboski, Adrian Duszkiewicz, Adrien Peyrache, Anton Sirota

Bernstein Conference 2024

ePoster

Optimizing Trajectories via Replay in a Closed-Loop Spiking Neuronal Network Model of Navigation

Masud Ehsani, Sen Cheng

Bernstein Conference 2024

ePoster

Spatial navigation under uncertainty

Jan Drugowitsch

Bernstein Conference 2024

ePoster

The role of feedback in dynamic inference for spatial navigation under uncertainty

Albert Chen, Jan Drugowitsch

Bernstein Conference 2024

ePoster

Accurate Engagement of the Drosophila Central-Complex Compass During Head-Fixed Path-Constrained Navigation

Hessameddin Akhlaghpour, Jazz Weisman, Gaby Maimon

COSYNE 2022

ePoster

Correlation-based motion detectors in olfaction enable turbulent plume navigation

Nirag Kadakia, Damon Clark, Thierry Emonet

COSYNE 2022

ePoster

VTA dopamine neurons signal phasic and ramping reward prediction error in goal-directed navigation

Karolina Farrell, Aman Saleem, Armin Lak

COSYNE 2022

ePoster

Hippocampal representations emerge when training recurrent neural networks on a memory dependent maze navigation task

Justin Jude, Matthias Hennig

COSYNE 2022

ePoster

Model architectures for choice-selective sequences in a navigation-based, evidence-accumulation task

Lindsey Brown, Jounhong Ryan Cho, Scott S. Bolkan, Edward H. Nieh, Manuel Schottdorf, Sue Ann Koay, David W. Tank, Carlos D. Brody, Ilana B. Witten, Mark S. Goldman

COSYNE 2022

ePoster

A neural circuit model of hidden state inference for navigation and contextual memory

Isabel Low, Scott Linderman, Lisa Giocomo, Alex Williams

COSYNE 2022

ePoster

Task demands drive choice of navigation strategy and distinct types of spatial representations

Sandhiya Vijayabaskaran, Sen Cheng

COSYNE 2022

ePoster

Using navigational information to learn visual representations

Lizhen Zhu, Brad Wyble, James Wang

COSYNE 2022

ePoster

Beyond task-optimized neural models: constraints from eye movements during navigation

Akis Stavropoulos, Kaushik Lakshminarasimhan, Dora Angelaki

COSYNE 2023

ePoster

Direct cortical inputs to hippocampal area CA1 transmit complementary signals for goal-directed navigation

John Bowler & Attila Losonczy

COSYNE 2023

ePoster

Does the fly’s brain center for vector navigation know that the world is 3D?

Angel Stanoev, Hannah Haberkern, Brad Hulse, Sandro Romani, Vivek Jayaraman

COSYNE 2023

ePoster

Language emergence in reinforcement learning agents performing navigational tasks

Tobias Wieczorek, Maximilian Eggl, Tatjana Tchumatchenko, Carlos Wert Carvajal

COSYNE 2023

ePoster

Learning parsimonious dynamics for state abstraction and navigation

Tankred Saanum & Eric Schulz

COSYNE 2023

ePoster

Modeling the orbitofrontal cortex function in navigation through an RL-RNN implementation

Carlos Wert Carvajal, Raunak Basu, Albert Miguel-Lopez, Hiroshi Ito, Tatjana Tchumatchenko

COSYNE 2023

ePoster

Optimal control under uncertainty predicts variability in human navigation behavior

Fabian Kessler, Julia Frankenstein, Constantin Rothkopf

COSYNE 2023

ePoster

The role of the entorhinal cortex in reward-guided spatial navigation

John Issa, Brad Radvansky, Feng Xuan, Daniel Dombeck

COSYNE 2023

ePoster

State-dependent navigation strategies in C. elegans vary with olfactory learning

Kevin S. Chen, Jonathan W. Pillow, Andrew Leifer

COSYNE 2023

ePoster

Vector production via mental navigation in the entorhinal cortex

Sujaya Neupane, Ila Fiete, Mehrdad Jazayeri

COSYNE 2023

ePoster

Composition of neural dynamics underlies distinct policy for navigation

Sangkyu Son, Benjamin Hayden, Maya Wang, Seng Bum Michael Yoo

COSYNE 2025

ePoster

Coordinating control and planning for navigation on simplicial complex attractors

Brabeeba Wang, Nancy Lynch, Michael Halassa

COSYNE 2025

ePoster

Efficient navigation is achieved through state-dependent strategies in C. elegans

Kevin Chen, Leifer Andrew, Jonathan Pillow

COSYNE 2025

ePoster

Flow Tree: A dynamical classifier for quantifying navigation paths and strategies

Abby Bertics, Elizabeth Chrastil, Nina Miolane, Jean Carlson

COSYNE 2025

ePoster

ForageWorld: RL agents in complex foraging arenas develop internal maps for navigation and planning

Ryan Badman, Riley Simmons-Edler, Felix Berg, Joshua Lunger, John Vastola, William Qian, Kanaka Rajan

COSYNE 2025

ePoster

Inhibition-stabilized disordered dynamics in mouse cortex during navigational decision-making

Siyan Zhou, Ryan Badman, Charlotte Arlt, Kanaka Rajan, Christopher Harvey

COSYNE 2025

ePoster

Novel Maze Paradigm for Assessing Behavioral & Neural States Underlying Goal-Directed Navigation

Shreya Bangera, Patrick Honma, Reuben Thomas, Dan Xia, Jorge Palop

COSYNE 2025

ePoster

Walking fruit flies use directional memory in olfactory navigation

Minni Sun, Andrew Siliciano, Chad Morton, Larry Abbott, Vanessa Ruta

COSYNE 2025

ePoster

Activity dynamics of hippocampal CA1 pyramidal neurons during virtual navigation in mice

Kata M. Szamosfalvi, Snezana Raus Balind, Rita Nyilas, Balázs Lükő, Balázs B. Ujfalussy, Judit K. Makara

ePoster

Astrocyte-neuron communication in the mouse hippocampus during virtual navigation

Sara Romanzi, Pedro D. Lagomarsino, Sebastiano Curreli, Jacopo Bonato, Stefano Panzeri, Tommaso Fellin

ePoster

Calcium fluctuations representations of environment, navigation and exploration in mice PFC

Melisa Maidana Capitan, Alejandra Alonso, Evelien H. Schut, Ronny Eichler, Lisa Genzel, Francesco P. Battaglia

ePoster

A Calcium Imaging Based Brain-Machine Interface for Virtual Navigation

Ethan T. Sorrell, Daniel E. Wilson, M. E. Rule, Helen Yang, Fulvio Forni, Christopher D. Harvey, Timothy O'Leary

ePoster

How Do Bees See the World? A (Normative) Deep Reinforcement Learning Model for Insect Navigation

Stephan Lochner, Andrew Straw

Bernstein Conference 2024
