Location
Cognitive maps as expectations learned across episodes – a model of the two dentate gyrus blades
How can the hippocampal system transition from episodic one-shot learning to a multi-shot learning regime and what is the utility of the resultant neural representations? This talk will explore the role of the dentate gyrus (DG) anatomy in this context. The canonical DG model suggests it performs pattern separation. More recent experimental results challenge this standard model, suggesting DG function is more complex and also supports the precise binding of objects and events to space and the integration of information across episodes. Very recent studies attribute pattern separation and pattern integration to anatomically distinct parts of the DG (the suprapyramidal blade vs the infrapyramidal blade). We propose a computational model that investigates this distinction. In the model the two processing streams (potentially localized in separate blades) contribute to the storage of distinct episodic memories, and the integration of information across episodes, respectively. The latter forms generalized expectations across episodes, eventually forming a cognitive map. We train the model with two data sets, MNIST and plausible entorhinal cortex inputs. The comparison between the two streams allows for the calculation of a prediction error, which can drive the storage of poorly predicted memories and the forgetting of well-predicted memories. We suggest that differential processing across the DG aids in the iterative construction of spatial cognitive maps to serve the generation of location-dependent expectations, while at the same time preserving episodic memory traces of idiosyncratic events.
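As a rough illustration of the two-stream idea described above (not the authors' implementation), the following Python sketch keeps a slowly updated cross-episode expectation alongside a store of individual episodes, and uses the prediction error between the current input and the expectation to decide whether an episode is stored; all class names, learning rates and thresholds are hypothetical.

```python
# Toy sketch of two processing streams with prediction-error-gated storage.
# All parameters are illustrative assumptions, not taken from the talk.
import numpy as np

rng = np.random.default_rng(0)

class TwoStreamDG:
    def __init__(self, n_input, lr=0.1, store_threshold=0.5):
        self.expectation = np.zeros(n_input)   # integration stream (across episodes)
        self.episodes = []                     # separation stream (distinct memories)
        self.lr = lr
        self.store_threshold = store_threshold

    def process(self, x):
        # prediction error: how poorly the generalized expectation predicts x
        error = np.linalg.norm(x - self.expectation) / np.sqrt(x.size)
        # integration stream slowly tracks the average input (the "cognitive map")
        self.expectation += self.lr * (x - self.expectation)
        # poorly predicted episodes are stored; well-predicted ones are not
        if error > self.store_threshold:
            self.episodes.append(x.copy())
        return error

# Example: inputs drawn around a common prototype; early, surprising episodes
# are stored, later well-predicted ones are increasingly skipped.
model = TwoStreamDG(n_input=100)
prototype = rng.normal(size=100)
for t in range(50):
    x = prototype + 0.3 * rng.normal(size=100)
    model.process(x)
print(f"{len(model.episodes)} episodes stored out of 50")
```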
Structural & Functional Neuroplasticity in Children with Hemiplegia
About 30% of children with cerebral palsy have congenital hemiplegia, resulting from periventricular white matter injury, which impairs the use of one hand and disrupts bimanual co-ordination. Congenital hemiplegia has a profound effect on each child's life and, thus, is of great public health importance. Changes in brain organization (neuroplasticity) often occur following periventricular white matter injury. These changes vary widely depending on the timing, location, and extent of the injury, as well as the functional system involved. Currently, we have limited knowledge of neuroplasticity in children with congenital hemiplegia. As a result, we provide rehabilitation treatment to these children almost blindly, based exclusively on behavioral data. In this talk, I will present recent research evidence from my team on understanding neuroplasticity in children with congenital hemiplegia by using a multimodal neuroimaging approach that combines data from structural and functional neuroimaging methods. I will further present preliminary data regarding improvements of upper-extremity motor and sensory functions as a result of rehabilitation with a robotic system that involves active participation of the child in a video-game setup. Our research is essential for the development of novel or improved neurological rehabilitation strategies for children with congenital hemiplegia.
Vision for perception versus vision for action: dissociable contributions of visual sensory drives from primary visual cortex and superior colliculus neurons to orienting behaviors
The primary visual cortex (V1) directly projects to the superior colliculus (SC) and is believed to provide sensory drive for eye movements. Consistent with this, a majority of saccade-related SC neurons also exhibit short-latency, stimulus-driven visual responses, which are additionally feature-tuned. However, direct neurophysiological comparisons of the visual response properties of the two anatomically-connected brain areas are surprisingly lacking, especially with respect to active looking behaviors. I will describe a series of experiments characterizing visual response properties in primate V1 and SC neurons, exploring feature dimensions like visual field location, spatial frequency, orientation, contrast, and luminance polarity. The results suggest a substantial, qualitative reformatting of SC visual responses when compared to V1. For example, SC visual response latencies are actively delayed, independent of individual neuron tuning preferences, as a function of increasing spatial frequency, and this phenomenon is directly correlated with saccadic reaction times. Such “coarse-to-fine” rank ordering of SC visual response latencies as a function of spatial frequency is much weaker in V1, suggesting a dissociation of V1 responses from saccade timing. Consistent with this, when we next explored trial-by-trial correlations of individual neurons’ visual response strengths and visual response latencies with saccadic reaction times, we found that most SC neurons exhibited, on a trial-by-trial basis, stronger and earlier visual responses for faster saccadic reaction times. Moreover, these correlations were substantially higher for visual-motor neurons in the intermediate and deep layers than for more superficial visual-only neurons. No such correlations existed systematically in V1. Thus, visual responses in SC and V1 serve fundamentally different roles in active vision: V1 jumpstarts sensing and image analysis, but SC jumpstarts moving. I will finish by demonstrating, using V1 reversible inactivation, that, despite reformatting of signals from V1 to the brainstem, V1 is still a necessary gateway for visually-driven oculomotor responses to occur, even for the most reflexive of eye movement phenomena. This is a fundamental difference from rodent studies demonstrating clear V1-independent processing in afferent visual pathways bypassing the geniculostriate one, and it demonstrates the importance of multi-species comparisons in the study of oculomotor control.
Rethinking Attention: Dynamic Prioritization
Decades of research on understanding the mechanisms of attentional selection have focused on identifying the units (representations) on which attention operates in order to guide prioritized sensory processing. These attentional units fit neatly within our understanding of how attention is allocated in a top-down, bottom-up, or historical fashion. In this talk, I will focus on attentional phenomena that are not easily accommodated within current theories of attentional selection – the “attentional platypuses,” as they allude to an observation that within biological taxonomies the platypus does not fit into either mammal or bird categories. Similarly, attentional phenomena that do not fit neatly within current attentional models suggest that those models need to be revised. I list a few instances of these “attentional platypuses” and then offer a new approach, Dynamically Weighted Prioritization, stipulating that multiple factors impinge onto the attentional priority map, each with a corresponding weight. The interaction between factors and their corresponding weights determines the current state of the priority map, which subsequently constrains/guides attention allocation. I propose that this new approach should be considered as a supplement to existing models of attention, especially those that emphasize categorical organizations.
Visuomotor learning of location, action, and prediction
Gender, trait anxiety and attentional processing in healthy young adults: is a moderated moderation theory possible?
Three studies conducted in the context of PhD work (UNIL) aimed at providing evidence to address the question of potential gender differences in the effects of trait anxiety and executive control biases on behavioral efficacy. Non-clinical samples of young adult males and females performed non-emotional tasks assessing basic attentional functioning (Attention Network Test – Interactions, ANT-I), sustained attention (Test of Variables of Attention, TOVA), and visual recognition abilities (Object in Location Recognition Task, OLRT). Results confirmed the intricate nature of the relationship between gender and trait anxiety in healthy adults, viewed through the lens of their impact on processing efficacy in males and females. The possibility of a gendered theory of trait anxiety biases is discussed.
Navigating semantic spaces: recycling the brain GPS for higher-level cognition
Humans share with other animals a complex neuronal machinery that evolved to support navigation in physical space and that supports wayfinding and path integration. In my talk I will present a series of recent neuroimaging studies in humans, performed in my lab, aimed at investigating the idea that this same neural navigation system (the “brain GPS”) is also used to organize and navigate concepts and memories, and that abstract and spatial representations rely on a common neural fabric. I will argue that this might represent a novel example of “cortical recycling”, whereby the neuronal machinery that primarily evolved, in lower-level animals, to represent relationships between spatial locations and to navigate space is, in humans, reused to encode relationships between concepts in an internal abstract representational space of meaning.
There’s more to timing than time: P-centers, beat bins and groove in musical microrhythm
How does the dynamic shape of a sound affect its perceived microtiming? In the TIME project, we studied basic aspects of musical microrhythm, exploring both stimulus features and the participants’ enculturated expertise via perception experiments, observational studies of how musicians produce particular microrhythms, and ethnographic studies of musicians’ descriptions of microrhythm. Collectively, we show that altering the microstructure of a sound (“what” the sound is) changes its perceived temporal location (“when” it occurs). Specifically, there are systematic effects of core acoustic factors (duration, attack) on perceived timing. Microrhythmic features in longer and more complex sounds can also give rise to different perceptions of the same sound. Our results shed light on conflicting results regarding the effect of microtiming on the “grooviness” of a rhythm.
Human Echolocation for Localization and Navigation – Behaviour and Brain Mechanisms
Wildlife, Warriors and Women: Large Carnivore Conservation in Tanzania and Beyond
Professor Amy Dickman is the joint CEO of Lion Landscapes, which works to help conserve wildlife in some of the most important biodiversity areas of Africa. These areas include some of the most important areas in the world for big cats, but they also have an extremely high level of lion killing, as lions and other carnivores impose high costs on poverty-stricken local people. Amy and her team are working with local communities to reduce carnivore attacks, providing villagers with real benefits from carnivore presence, engaging warriors in conservation and training the next generation of local conservation leaders. It has been a challenging endeavour, given the remote location and the secretive and hostile nature of the tribe responsible for most lion killing. In her talk, Amy will discuss the significance of this project, the difficulties of working in an area where witchcraft and mythology abound, and the conservation successes that are already emerging from this important work.
Location, time and type of epileptic activity influence how sleep modulates epilepsy
Sleep and epilepsy are tightly interconnected: on the one hand, disturbed sleep is known to negatively affect epilepsy, while on the other, epilepsy negatively impacts sleep. In this talk, we leverage the unique opportunity provided by simultaneous stereo-EEG and sleep recordings to disentangle these relationships. We will discuss the latest evidence on whether anatomy (temporal vs. extratemporal), time (early vs. late sleep), and type of epileptic activity (ictal vs. interictal) influence how epileptic activity is modulated by sleep. After this talk, attendees will have a more nuanced understanding of the contributions of location, time and type of epileptic activity to the relationship between sleep and epilepsy.
NII Methods (journal club): NeuroQuery, comprehensive meta-analysis of human brain mapping
We will discuss a recent paper by Taylor et al. (2023): https://www.sciencedirect.com/science/article/pii/S1053811923002896. They discuss the merits of highlighting results instead of hiding them; that is, clearly marking which voxels and clusters pass a given significance threshold, but still showing sub-threshold results, with opacity proportional to the strength of the effect. They use this to illustrate how there may in fact be more agreement between researchers than previously thought, using the NARPS dataset as an example. By adopting a continuous, "highlighted" approach, it becomes clear that most effects are in the same location and in the same direction, compared to an approach that only permits rejecting or not rejecting the null hypothesis. We will also talk about the implications of this approach for creating figures, detecting artifacts, and aiding reproducibility.
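The highlighting idea lends itself to a simple visualization sketch. The Python/matplotlib snippet below, using synthetic data rather than anything from the paper, renders all effects with opacity proportional to their strength and outlines only the supra-threshold voxels; the colormap, threshold and scaling are arbitrary choices.

```python
# Minimal sketch of "highlight, don't hide": opacity scales with effect
# strength, and a hard threshold is shown only as an outline. Synthetic data.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
effect = rng.normal(size=(64, 64))
effect[20:35, 20:35] += 2.5                   # a synthetic "active" region
threshold = 2.0

alpha = np.clip(np.abs(effect) / 4.0, 0, 1)   # opacity proportional to |effect|
fig, ax = plt.subplots()
ax.imshow(np.zeros_like(effect), cmap="gray")                   # neutral background
ax.imshow(effect, cmap="RdBu_r", vmin=-4, vmax=4, alpha=alpha)  # highlighted effects
ax.contour(np.abs(effect) > threshold, levels=[0.5], colors="k", linewidths=1)
ax.set_title("All effects shown by strength; threshold as outline")
plt.show()
```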
Interacting spiral wave patterns underlie complex brain dynamics and are related to cognitive processing
The large-scale activity of the human brain exhibits rich and complex patterns, but the spatiotemporal dynamics of these patterns and their functional roles in cognition remain unclear. Here by characterizing moment-by-moment fluctuations of human cortical functional magnetic resonance imaging signals, we show that spiral-like, rotational wave patterns (brain spirals) are widespread during both resting and cognitive task states. These brain spirals propagate across the cortex while rotating around their phase singularity centres, giving rise to spatiotemporal activity dynamics with non-stationary features. The properties of these brain spirals, such as their rotational directions and locations, are task relevant and can be used to classify different cognitive tasks. We also demonstrate that multiple, interacting brain spirals are involved in coordinating the correlated activations and de-activations of distributed functional regions; this mechanism enables flexible reconfiguration of task-driven activity flow between bottom-up and top-down directions during cognitive processing. Our findings suggest that brain spirals organize complex spatiotemporal dynamics of the human brain and have functional correlates to cognitive processing.
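One common way to locate the centres of such rotational waves is to treat them as phase singularities. The sketch below (synthetic data, illustrative parameters, not the authors' pipeline) extracts instantaneous phase with a Hilbert transform and flags grid plaquettes where the phase winds by a full cycle.

```python
# Hedged sketch: detect spiral centres as phase singularities on a grid.
import numpy as np
from scipy.signal import hilbert

# Synthetic rotating wave on a 2D grid
ny, nx, nt = 40, 40, 200
y, x = np.mgrid[0:ny, 0:nx]
t = np.arange(nt)
cy, cx = (ny - 1) / 2, (nx - 1) / 2
theta = np.arctan2(y - cy, x - cx)
signal = np.cos(2 * np.pi * 0.05 * t[:, None, None] + theta[None])  # (t, y, x)

phase = np.angle(hilbert(signal, axis=0))        # instantaneous phase per pixel

def winding_number(ph):
    """Sum of wrapped phase differences around each 2x2 plaquette."""
    def wrap(d):
        return (d + np.pi) % (2 * np.pi) - np.pi
    d1 = wrap(ph[:-1, 1:] - ph[:-1, :-1])        # along top edge
    d2 = wrap(ph[1:, 1:] - ph[:-1, 1:])          # down right edge
    d3 = wrap(ph[1:, :-1] - ph[1:, 1:])          # along bottom edge
    d4 = wrap(ph[:-1, :-1] - ph[1:, :-1])        # up left edge
    return (d1 + d2 + d3 + d4) / (2 * np.pi)

w = winding_number(phase[100])                   # one time frame
centres = np.argwhere(np.abs(w) > 0.5)           # winding of ±1 marks a singularity
print("spiral centre(s) near:", centres)
```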
The development of visual experience
Vision and visual cognition are experience-dependent, with likely multiple sensitive periods, but we know very little about the statistics of visual experience at the scale of everyday life and how they might change with development. By traditional assumptions, the world at the massive scale of daily life presents pretty much the same visual statistics to all perceivers. I will present an overview of our work on ego-centric vision showing that this is not the case. The momentary image received at the eye is spatially selective, dependent on the location, posture and behavior of the perceiver. If a perceiver’s location, possible postures and/or preferences for looking at some kinds of scenes over others are constrained, then their sampling of images from the world, and thus the visual statistics at the scale of daily life, could be biased. I will present evidence with respect to both low-level and higher-level visual statistics about the developmental changes in the visual input over the first 18 months post-birth.
The Geometry of Decision-Making
Running, swimming, or flying through the world, animals are constantly making decisions while on the move—decisions that allow them to choose where to eat, where to hide, and with whom to associate. Despite this, most studies have considered only the outcome of, and the time taken to make, decisions. Motion is, however, crucial in terms of how space is represented by organisms during spatial decision-making. Employing a range of new technologies, including automated tracking, computational reconstruction of sensory information, and immersive ‘holographic’ virtual reality (VR) for animals, in experiments with fruit flies, locusts and zebrafish (representing aerial, terrestrial and aquatic locomotion, respectively), I will demonstrate that this time-varying representation results in the emergence of new and fundamental geometric principles that considerably impact decision-making. Specifically, we find that the brain spontaneously reduces multi-choice decisions into a series of abrupt (‘critical’) binary decisions in space-time, a process that repeats until only one option—the one ultimately selected by the individual—remains. Due to the critical nature of these transitions (and the corresponding increase in ‘susceptibility’), even noisy brains are extremely sensitive to very small differences between remaining options (e.g., a very small difference in neuronal activity being in “favor” of one option) near these locations in space-time. This mechanism facilitates highly effective decision-making, and is shown to be robust both to the number of options available and to context, such as whether options are static (e.g. refuges) or mobile (e.g. other animals). In addition, we find evidence that the same geometric principles of decision-making occur across scales of biological organisation, from neural dynamics to animal collectives, suggesting they are fundamental features of spatiotemporal computation.
Are place cells just memory cells? Probably yes
Neurons in the rodent hippocampus appear to encode the position of the animal in physical space during movement. Individual "place cells" fire in restricted sub-regions of an environment, a feature often taken as evidence that the hippocampus encodes a map of space that subserves navigation. But these same neurons exhibit complex responses to many other variables that defy explanation by position alone, and the hippocampus is known to be more broadly critical for memory formation. Here we elaborate and test a theory of hippocampal coding which produces place cells as a general consequence of efficient memory coding. We constructed neural networks that actively exploit the correlations between memories in order to learn compressed representations of experience. Place cells readily emerged in the trained model, due to the correlations in sensory input between experiences at nearby locations. Notably, these properties were highly sensitive to the compressibility of the sensory environment, with place field size and population coding level in dynamic opposition to optimally encode the correlations between experiences. The effects of learning were also strongly biphasic: nearby locations are represented more similarly following training, while locations with intermediate similarity become increasingly decorrelated, both distance-dependent effects that scaled with the compressibility of the input features. Using virtual reality and 2-photon functional calcium imaging in head-fixed mice, we recorded the simultaneous activity of thousands of hippocampal neurons during virtual exploration to test these predictions. Varying the compressibility of sensory information in the environment produced systematic changes in place cell properties that reflected the changing input statistics, consistent with the theory. We similarly identified representational plasticity during learning, which produced a distance-dependent exchange between compression and pattern separation. These results motivate a more domain-general interpretation of hippocampal computation, one that is naturally compatible with earlier theories on the circuit's importance for episodic memory formation. Work done in collaboration with James Priestley, Lorenzo Posani, Marcus Benna, Attila Losonczy.
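A toy stand-in for this idea (not the authors' model) is sketched below: sensory features that vary smoothly along a 1D track are compressed through a small autoencoder, and some hidden units end up with spatially restricted, place-like tuning; the network size, feature model and criterion for "peaked" tuning are all arbitrary.

```python
# Toy sketch: compressing spatially correlated sensory input can yield
# place-like hidden-unit tuning. All parameters are illustrative.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_pos, n_features, n_hidden = 200, 50, 20
positions = np.linspace(0, 1, n_pos)

# Smoothly varying (spatially correlated) sensory features along a 1D track
centers = rng.uniform(0, 1, n_features)
X = np.exp(-((positions[:, None] - centers[None]) ** 2) / (2 * 0.1 ** 2))
X += 0.05 * rng.normal(size=X.shape)

# Autoencoder: reconstruct the input through a narrow hidden layer
ae = MLPRegressor(hidden_layer_sizes=(n_hidden,), activation="relu",
                  max_iter=5000, random_state=0)
ae.fit(X, X)

# Hidden-unit activations as a function of position ("tuning curves")
hidden = np.maximum(0, X @ ae.coefs_[0] + ae.intercepts_[0])
peaked = np.mean(hidden.max(axis=0) > 2 * hidden.mean(axis=0))
print(f"fraction of hidden units with spatially peaked activity: {peaked:.2f}")
```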
Cognition in the Wild
What do nonhuman primates know about each other and their social environment, how do they allocate their attention, and what are the functional consequences of social decisions in natural settings? Addressing these questions is crucial to hone in on the co-evolution of cognition, social behaviour and communication, and ultimately the evolution of intelligence in the primate order. I will present results from field experimental and observational studies on free-ranging baboons, which tap into the cognitive abilities of these animals. Baboons are particularly valuable in this context as different species reveal substantial variation in social organization and degree of despotism. Field experiments revealed considerable variation in the allocation of social attention: while the competitive chacma baboons were highly sensitive to deviations from the social order, the highly tolerant Guinea baboons revealed a confirmation bias. This bias may be a result of the high gregariousness of the species, which puts a premium on ignoring social noise. Variation in despotism clearly impacted the use of signals to regulate social interactions. For instance, male-male interactions in chacma baboons mostly comprised dominance displays, while Guinea baboon males evolved elaborate greeting rituals that serve to confirm group membership and test social bonds. Strikingly, the structure of signal repertoires does not differ substantially between different baboon species. In conclusion, the motivational disposition to engage in affiliation or aggressiveness appears to be more malleable during evolution than structural elements of the behavioral repertoire; this insight is crucial for understanding the dynamics of social evolution.
Central place foraging: how insects anchor spatial information
Many insect species maintain a nest around which their foraging behaviour is centered, and can use path integration to maintain an accurate estimate of their distance and direction (a vector) to their nest. Some species, such as bees and ants, can also store the vector information for multiple salient locations in the world, such as food sources, in a common coordinate system. They can also use remembered views of the terrain around salient locations or along travelled routes to guide return. Recent modelling of these abilities shows convergence on a small set of algorithms and assumptions that appear sufficient to account for a wide range of behavioural data, and which can be mapped to specific insect brain circuits. Notably, this does not include any significant topological knowledge: the insect does not need to recover the information (implicit in their vector memory) about the relationships between salient places; nor to maintain any connectedness or ordering information between view memories; nor to form any associations between views and vectors. However, there remains some experimental evidence not fully explained by these algorithms that may point towards the existence of a more complex or integrated mental map in insects.
Motion processing across visual field locations in zebrafish
REM sleep and the energy allocation hypothesis
Minute-scale periodic sequences in medial entorhinal cortex
The medial entorhinal cortex (MEC) hosts many of the brain’s circuit elements for spatial navigation and episodic memory, operations that require neural activity to be organized across long durations of experience. While location is known to be encoded by a plethora of spatially tuned cell types in this brain region, little is known about how the activity of entorhinal cells is tied together over time. Among the brain’s most powerful mechanisms for neural coordination are network oscillations, which dynamically synchronize neural activity across circuit elements. In MEC, theta and gamma oscillations provide temporal structure to the neural population activity at subsecond time scales. It remains an open question, however, whether similar coordination occurs in MEC at behavioural time scales, in the second-to-minute regime. In this talk I will show that MEC activity can be organized into a minute-scale oscillation that entrains nearly the entire cell population, with periods ranging from 10 to 100 seconds. Throughout this ultraslow oscillation, neural activity progresses in periodic and stereotyped sequences. The oscillation sometimes advances uninterruptedly for tens of minutes, transcending epochs of locomotion and immobility. Similar oscillatory sequences were not observed in the neighboring parasubiculum or in visual cortex. The ultraslow periodic sequences in MEC may have the potential to couple its neurons and circuits across extended time scales and to serve as a scaffold for processes that unfold at behavioural time scales.
Neural circuits for vector processing in the insect brain
Several species of insects have been observed to perform accurate path integration, constantly updating a vector memory of their location relative to a starting position, which they can use to take a direct return path. Foraging insects such as bees and ants are also able to store and recall the vectors to return to food locations, and to take novel shortcuts between these locations. Other insects, such as dung beetles, are observed to integrate multimodal directional cues in a manner well described by vector addition. All these processes appear to be functions of the Central Complex, a highly conserved and strongly structured circuit in the insect brain. Modelling this circuit, at the single neuron level, suggests it has general capabilities for vector encoding, vector memory, vector addition and vector rotation that can support a wide range of directed and navigational behaviours.
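Many Central Complex models abstract these operations as arithmetic on sinusoidal population codes. The sketch below is such an abstraction (illustrative only, not a circuit-level model): a vector is a cosine bump over heading-tuned cells, summing bumps adds vectors, and circularly shifting a bump rotates its vector.

```python
# Phasor-style population code: vector addition and rotation on activity bumps.
# Cell count and example vectors are arbitrary.
import numpy as np

n_cells = 8                                   # e.g. columns tuned to 8 headings
prefs = np.linspace(0, 2 * np.pi, n_cells, endpoint=False)

def encode(length, angle):
    """Population activity representing a vector (length, angle)."""
    return length * np.cos(prefs - angle)

def decode(activity):
    """Read the vector back out of the cosine activity profile."""
    z = np.sum(activity * np.exp(1j * prefs)) * 2 / n_cells
    return np.abs(z), np.angle(z)

# Vector addition = elementwise sum of the two activity profiles
home = encode(3.0, np.deg2rad(40))            # path-integration "home vector"
food = encode(2.0, np.deg2rad(150))           # remembered food vector
length, angle = decode(home + food)
print(f"sum: length={length:.2f}, angle={np.degrees(angle):.1f} deg")

# Vector rotation = circular shift of the activity bump across columns
rotated = np.roll(encode(3.0, 0.0), 2)        # shift by 2 of 8 columns = 90 deg
print("rotated angle (deg):", round(np.degrees(decode(rotated)[1]), 1))
```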
Driving human visual cortex, visually and electrically
The development of circuit-based therapeutics to treat neurological and neuropsychiatric diseases requires detailed localization and understanding of electrophysiological signals in the human brain. Electrodes can record and stimulate circuits in many ways, and we often rely on non-invasive imaging methods to predict where to implant electrodes. However, electrophysiological and imaging signals measure the underlying tissue in fundamentally different ways. To integrate multimodal data and benefit from these complementary measurements, I will describe an approach that considers how different measurements integrate signals across the underlying tissue. I will show how this approach helps relate fMRI and intracranial EEG measurements and provides new insights into how electrical stimulation influences human brain networks.
Associative memory of structured knowledge
A long-standing challenge in biological and artificial intelligence is to understand how new knowledge can be constructed from known building blocks in a way that is amenable to computation by neuronal circuits. Here we focus on the task of storage and recall of structured knowledge in long-term memory. Specifically, we ask how recurrent neuronal networks can store and retrieve multiple knowledge structures. We model each structure as a set of binary relations between events and attributes (attributes may represent, e.g., temporal order, spatial location, or role in a semantic structure), and map each structure to a distributed neuronal activity pattern using a vector symbolic architecture (VSA) scheme. We then use associative memory plasticity rules to store the binarized patterns as fixed points in a recurrent network. By a combination of signal-to-noise analysis and numerical simulations, we demonstrate that our model allows for efficient storage of these knowledge structures, such that the memorized structures as well as their individual building blocks (e.g., events and attributes) can subsequently be retrieved from partial cues. We show that long-term memory of structured knowledge relies on a new principle of computation beyond the memory basins. Finally, we show that our model can be extended to store sequences of memories as single attractors.
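As a simplified, hypothetical illustration of the scheme described above, the sketch below binds role-filler pairs with a binary vector symbolic code (XOR binding with majority-vote bundling), stores the binarized structure as a fixed point with a Hopfield outer-product rule, and recalls it from a partial cue; the dimensionality and the specific VSA flavour are assumptions, not the authors' choices.

```python
# VSA binding + Hopfield storage + recall from a partial cue (toy example).
import numpy as np

rng = np.random.default_rng(0)
D = 1024                                        # dimensionality of the codes

def rand_vec():
    return rng.integers(0, 2, D)                # random binary hypervector

def bind(a, b):
    return np.bitwise_xor(a, b)                 # XOR binding (self-inverse)

def bundle(vectors):
    return (np.sum(vectors, axis=0) > len(vectors) / 2).astype(int)  # majority vote

# Roles and fillers for one "knowledge structure"
roles = {name: rand_vec() for name in ["event", "location", "order"]}
fillers = {"event": rand_vec(), "location": rand_vec(), "order": rand_vec()}
structure = bundle([bind(roles[r], fillers[r]) for r in roles])

# Store the ±1 version of the pattern with a Hopfield outer-product rule
pattern = 2 * structure - 1
W = np.outer(pattern, pattern) / D
np.fill_diagonal(W, 0)

# Recall from a partial cue: corrupt 25% of the bits, then iterate the dynamics
cue = pattern.copy()
flip = rng.choice(D, D // 4, replace=False)
cue[flip] *= -1
for _ in range(5):
    cue = np.sign(W @ cue)
print("structure recovered:", np.array_equal(cue, pattern))

# Unbind an attribute from the recalled structure by binding with its role vector
recalled = ((cue + 1) // 2).astype(int)
estimate = bind(recalled, roles["location"])
overlap = np.mean(estimate == fillers["location"])
print(f"overlap with stored 'location' filler: {overlap:.2f}")  # well above 0.5
```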
What shapes the transcriptional identity of a neuron?
Within the vertebrate neocortex and other telencephalic structures, molecularly-defined neurons tend to segregate at first order into GABAergic types and glutamatergic types. Two fundamental questions arise: (1) do non-telencephalic neurons similarly segregate by neurotransmitter status, and (2) do GABAergic (or glutamatergic) types sampled in different structures share many molecular features in common, beyond the few genes directly responsible for neurotransmitter synthesis and release? To address these questions, we used single-nucleus RNA sequencing, analyzing over 2.4 million brain cells sampled from 16 locations in a primate (the common marmoset). Unexpectedly, we find the answer to both is “no”. I will discuss implications for generalizing associations between neurotransmitter utilization and other phenotypes, and share ongoing efforts to map the biodistributions of cell types in the primate brain.
Hierarchical transformation of visual event timing representations in the human brain: response dynamics in early visual cortex and timing-tuned responses in association cortices
Quantifying the timing (duration and frequency) of brief visual events is vital to human perception, multisensory integration and action planning. For example, this allows us to follow and interact with the precise timing of speech and sports. Here we investigate how visual event timing is represented and transformed across the brain’s hierarchy: from sensory processing areas, through multisensory integration areas, to frontal action planning areas. We hypothesized that the dynamics of neural responses to sensory events in sensory processing areas allows derivation of event timing representations. This would allow higher-level processes such as multisensory integration and action planning to use sensory timing information, without the need for specialized central pacemakers or processes. Using 7T fMRI and neural model-based analyses, we found responses that monotonically increase in amplitude with visual event duration and frequency, becoming increasingly clear from primary visual cortex to lateral occipital visual field maps. Beginning in area MT/V5, we found a gradual transition from monotonic to tuned responses, with response amplitudes peaking at different event timings in different recording sites. While monotonic response components were limited to the retinotopic location of the visual stimulus, timing-tuned response components were independent of the recording sites' preferred visual field positions. These tuned responses formed a network of topographically organized timing maps in superior parietal, postcentral and frontal areas. From anterior to posterior timing maps, multiple events were increasingly integrated, response selectivity narrowed, and responses focused increasingly on the middle of the presented timing range. These results suggest that responses to event timing are transformed from the human brain’s sensory areas to the association cortices, with the event’s temporal properties being increasingly abstracted from the response dynamics and locations of early sensory processing. The resulting abstracted representation of event timing is then propagated through areas implicated in multisensory integration and action planning.
Internally Organized Abstract Task Maps in the Mouse Medial Frontal Cortex
New tasks are often similar in structure to old ones. Animals that take advantage of such conserved or “abstract” task structures can master new tasks with minimal training. To understand the neural basis of this abstraction, we developed a novel behavioural paradigm for mice: the “ABCD” task, and recorded from their medial frontal neurons as they learned. Animals learned multiple tasks where they had to visit 4 rewarded locations on a spatial maze in sequence, which defined a sequence of four “task states” (ABCD). Tasks shared the same circular transition structure (… ABCDABCD …) but differed in the spatial arrangement of rewards. As well as improving across tasks, mice inferred that A followed D (i.e. completed the loop) on the very first trial of a new task. This “zero-shot inference” is only possible if animals had learned the abstract structure of the task. Across tasks, individual medial Frontal Cortex (mFC) neurons maintained their tuning to the phase of an animal’s trajectory between rewards but not their tuning to task states, even in the absence of spatial tuning. Intriguingly, groups of mFC neurons formed modules of coherently remapping neurons that maintained their tuning relationships across tasks. Such tuning relationships were expressed as replay/preplay during sleep, consistent with an internal organisation of activity into multiple, task-matched ring attractors. Remarkably, these modules were anchored to spatial locations: neurons were tuned to specific task space “distances” from a particular spatial location. These newly discovered “Spatially Anchored Task clocks” (SATs), suggest a novel algorithm for solving abstraction tasks. Using computational modelling, we show that SATs can perform zero-shot inference on new tasks in the absence of plasticity and guide optimal policy in the absence of continual planning. These findings provide novel insights into the Frontal mechanisms mediating abstraction and flexible behaviour.
The 15th David Smith Lecture in Anatomical Neuropharmacology: Professor Tim Bliss, "Memories of long-term potentiation"
The David Smith Lectures in Anatomical Neuropharmacology, part of the 'Pharmacology, Anatomical Neuropharmacology and Drug Discovery Seminars Series', Department of Pharmacology, University of Oxford. The 15th David Smith Award Lecture in Anatomical Neuropharmacology will be delivered by Professor Tim Bliss, Visiting Professor at UCL and the Frontier Institutes of Science and Technology, Xi’an Jiaotong University, China, and is hosted by Professor Nigel Emptage. This award lecture was set up to celebrate the vision of Professor A David Smith, namely, that explanations of the action of drugs on the brain require the definition of neuronal circuits and of the location and interactions of molecules. Tim Bliss gained his PhD at McGill University in Canada. He joined the MRC National Institute for Medical Research in Mill Hill, London in 1967, where he remained throughout his career. His work with Terje Lømo in the late 1960s established the phenomenon of long-term potentiation (LTP) as the dominant synaptic model of how the mammalian brain stores memories. He was elected as a Fellow of the Royal Society in 1994 and is a founding fellow of the Academy of Medical Sciences. He shared the Bristol Myers Squibb award for Neuroscience with Eric Kandel in 1991, and the Ipsen Prize for Neural Plasticity with Richard Morris and Yadin Dudai in 2013. In May 2012 he gave the annual Croonian Lecture at the Royal Society on ‘The Mechanics of Memory’. In 2016 Tim, with Graham Collingridge and Richard Morris, shared the Brain Prize, one of the world's most coveted science prizes. Abstract: In 1966 there appeared in Acta Physiologica Scandinavica an abstract of a talk given by Terje Lømo, a PhD student in Per Andersen’s laboratory at the University of Oslo. In it Lømo described the long-lasting potentiation of synaptic responses in the dentate gyrus of the anaesthetised rabbit that followed repeated episodes of 10-20 Hz stimulation of the perforant path. Thus heralded, and almost entirely unnoticed, one of the most consequential discoveries of 20th century neuroscience was ushered into the world. Two years later I arrived in Oslo as a visiting post-doc from the National Institute for Medical Research in Mill Hill, London. In this talk I recall the events that led us to embark on a systematic reinvestigation of the phenomenon now known as long-term potentiation (LTP) and will then go on to describe the discoveries and controversies that enlivened the early decades of research into synaptic plasticity in the mammalian brain. I will end with an observer’s view of the current state of research in the field, and what we might expect from it in the future.
An investigation of perceptual biases in spiking recurrent neural networks trained to discriminate time intervals
Magnitude estimation and stimulus discrimination tasks are affected by perceptual biases that cause the stimulus parameter to be perceived as shifted toward the mean of its distribution. These biases have been extensively studied in psychophysics and, more recently and to a lesser extent, with neural activity recordings. New computational techniques allow us to train spiking recurrent neural networks on the tasks used in the experiments. This provides us with another valuable tool with which to investigate the network mechanisms responsible for the biases and how behavior could be modeled. As an example, in this talk I will consider networks trained to discriminate the durations of temporal intervals. The trained networks exhibited the contraction bias, even though they were trained with a stimulus sequence without temporal correlations. The neural activity during the delay period carried information about the stimuli of the current trial and of previous trials, which is one of the mechanisms giving rise to the contraction bias. The population activity described trajectories in a low-dimensional space, and their relative locations depended on the prior distribution. The behavior can be modeled as an ideal observer that, during the delay period, sees a combination of the current and previous stimuli. Finally, I will describe how the neural trajectories in state space encode an estimate of the interval duration. The approach could be applied to other cognitive tasks.
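The ideal-observer account can be captured in a few lines: if the remembered first interval is a weighted mix of the current stimulus and the mean of past stimuli, estimates are pulled toward the centre of the distribution. The weights and noise level below are illustrative, not fitted to the networks or to data.

```python
# Toy contraction-bias model: remembered value = weighted mix of current
# stimulus and the running mean of past stimuli. Parameters are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
durations = rng.uniform(0.2, 1.0, 5000)        # first-interval durations (s)
noise = 0.05                                   # memory noise during the delay
w = 0.7                                        # weight on the current stimulus

running_mean = np.cumsum(durations) / np.arange(1, durations.size + 1)
prior = np.concatenate(([durations.mean()], running_mean[:-1]))   # past trials only
remembered = w * (durations + noise * rng.normal(size=durations.size)) \
             + (1 - w) * prior

# Contraction bias: short intervals are overestimated, long ones underestimated
short, long_ = durations < 0.4, durations > 0.8
print(f"bias on short trials: {np.mean(remembered[short] - durations[short]):+.3f} s")
print(f"bias on long trials:  {np.mean(remembered[long_] - durations[long_]):+.3f} s")
```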
What the fly’s eye tells the fly’s brain…and beyond
Fly Escape Behaviors: Flexible and Modular
We have identified a set of escape maneuvers performed by a fly when confronted by a looming object. These escape responses can be divided into distinct behavioral modules. Some of the modules are very stereotyped, as when the fly rapidly extends its middle legs to jump off the ground. Other modules are more complex and require the fly to combine information about both the location of the threat and its own body posture. In response to an approaching object, a fly chooses some varying subset of these behaviors to perform. We would like to understand the neural process by which a fly chooses when to perform a given escape behavior. Beyond an appealing set of behaviors, this system has two other distinct advantages for probing neural circuitry. First, the fly will perform escape behaviors even when tethered such that its head is fixed and neural activity can be imaged or monitored using electrophysiology. Second, using Drosophila as an experimental animal makes available a rich suite of genetic tools to activate, silence, or image small numbers of cells potentially involved in the behaviors.
Neural Circuits for Escape
Until recently, visually induced escape responses have been considered a hardwired reflex in Drosophila. White-eyed flies with deficient visual pigment will perform a stereotyped middle-leg jump in response to a light-off stimulus, and this reflexive response is known to be coordinated by the well-studied giant fiber (GF) pathway. The GFs are a pair of electrically connected, large-diameter interneurons that traverse the cervical connective. A single GF spike results in a stereotyped pattern of muscle potentials on both sides of the body that extends the fly's middle pair of legs and starts the flight motor. Recently, we have found that a fly escaping a looming object displays many more behaviors than just leg extension. Most of these behaviors could not possibly be coordinated by the known anatomy of the GF pathway. Response to a looming threat thus appears to involve activation of numerous different neural pathways, which the fly may decide if and when to employ. Our goal is to identify the descending pathways involved in coordinating these escape behaviors as well as the central brain circuits, if any, that govern their activation.
Automated Single-Fly Screening
We have developed a new kind of high-throughput genetic screen to automatically capture fly escape sequences and quantify individual behaviors. We use this system to perform a high-throughput genetic silencing screen to identify cell types of interest. Automation permits analysis at the level of individual fly movements, while retaining the capacity to screen through thousands of GAL4 promoter lines. Single-fly behavioral analysis is essential to detect more subtle changes in behavior during the silencing screen, and thus to identify more specific components of the contributing circuits than previously possible when screening populations of flies. Our goal is to identify candidate neurons involved in coordination and choice of escape behaviors.
Measuring Neural Activity During Behavior
We use whole-cell patch-clamp electrophysiology to determine the functional roles of any identified candidate neurons. Flies perform escape behaviors even when their head and thorax are immobilized for physiological recording. This allows us to link a neuron's responses directly to an action.
Growing a world-class precision medicine industry
Monash Biomedical Imaging is part of the new $71.2 million Australian Precision Medicine Enterprise (APME) facility, which will deliver large-scale development and manufacturing of precision medicines and theranostic radiopharmaceuticals for industry and research. A key feature of the APME project is a high-energy cyclotron with multiple production clean rooms, which will be located on the Monash Biomedical Imaging (MBI) site in Clayton. This strategic co-location will facilitate radiochemistry, PET and SPECT research and clinical use of theranostic (therapeutic and diagnostic) radioisotopes produced on-site. In this webinar, MBI’s Professor Gary Egan and Dr Maggie Aulsebrook will explain how the APME will secure Australia’s supply of critical radiopharmaceuticals, build a globally competitive Australian manufacturing hub, and train scientists and engineers for the Australian workforce. They will cover the APME’s state-of-the-art 30 MeV and 18-24 MeV cyclotrons and radiochemistry facilities, as well as the services that will be accessible to students, scientists, clinical researchers, and pharmaceutical companies in Australia and around the world. The APME is a collaboration between Monash University, Global Medical Solutions Australia, and Telix Pharmaceuticals. Professor Gary Egan is Director of Monash Biomedical Imaging, Director of the ARC Centre of Excellence for Integrative Brain Function and a Distinguished Professor at the Turner Institute for Brain and Mental Health, Monash University. He is also lead investigator of the Victorian Biomedical Imaging Capability, and Deputy Director of the Australian National Imaging Facility. Dr Maggie Aulsebrook obtained her PhD in Chemistry at Monash University and specialises in the development and clinical translation of radiopharmaceuticals. She has led the development of several investigational radiopharmaceuticals for first-in-human application. Maggie leads the Radiochemistry Platform at Monash Biomedical Imaging.
Synthetic and natural images unlock the power of recurrency in primary visual cortex
During perception the visual system integrates current sensory evidence with previously acquired knowledge of the visual world. Presumably this computation relies on internal recurrent interactions. We record populations of neurons from the primary visual cortex of cats and macaque monkeys and find evidence for adaptive internal responses to structured stimulation that change on both slow and fast timescales. In the first experiment, we present abstract images, only briefly, a protocol known to produce strong and persistent recurrent responses in the primary visual cortex. We show that repetitive presentations of a large randomized set of images leads to enhanced stimulus encoding on a timescale of minutes to hours. The enhanced encoding preserves the representational details required for image reconstruction and can be detected in post-exposure spontaneous activity. In a second experiment, we show that the encoding of natural scenes across populations of V1 neurons is improved, over a timescale of hundreds of milliseconds, with the allocation of spatial attention. Given the hierarchical organization of the visual cortex, contextual information from the higher levels of the processing hierarchy, reflecting high-level image regularities, can inform the activity in V1 through feedback. We hypothesize that these fast attentional boosts in stimulus encoding rely on recurrent computations that capitalize on the presence of high-level visual features in natural scenes. We design control images dominated by low-level features and show that, in agreement with our hypothesis, the attentional benefits in stimulus encoding vanish. We conclude that, in the visual system, powerful recurrent processes optimize neuronal responses, already at the earliest stages of cortical processing.
Co-allocation to overlapping dendritic branches in the retrosplenial cortex integrates memories across time
Events occurring close in time are often linked in memory, providing an episodic timeline and a framework for those memories. Recent studies suggest that memories acquired close in time are encoded by overlapping neuronal ensembles, but whether dendritic plasticity plays a role in linking memories is unknown. Using activity-dependent labeling and manipulation, as well as longitudinal one- and two-photon imaging of somatic and dendritic compartments in the retrosplenial cortex (RSC), we show that memory linking is dependent not only on ensemble overlap in the RSC, but also on branch-specific dendritic allocation mechanisms. These results demonstrate a causal role for dendritic mechanisms in memory integration and reveal a novel set of rules that govern how linked and independent memories are allocated to dendritic compartments.
Timescales of neural activity: their inference, control, and relevance
Timescales characterize how fast observables change in time. In neuroscience, they can be estimated from measured activity and can be used, for example, as a signature of the memory trace in a network. I will first discuss the inference of timescales from neuroscience data comprising short trials and introduce a new unbiased method. Then, I will apply the method to data recorded from a local population of cortical neurons in visual area V4. I will demonstrate that the ongoing spiking activity unfolds across at least two distinct timescales - fast and slow - and that the slow timescale increases when monkeys attend to the location of the receptive field. Which models can give rise to such behavior? Random balanced networks are known for their fast timescales; thus, a change in the neuron or network properties is required to mimic the data. I will propose a set of models that can control effective timescales and demonstrate that only the model with strong recurrent interactions fits the neural data. Finally, I will discuss the timescales' relevance for behavior and cortical computations.
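For orientation, the snippet below shows the standard (naive) estimator that the talk's unbiased method improves upon: fit an exponential decay to the autocorrelation of activity generated with a known timescale. The bin size, lag range and synthetic process are arbitrary choices.

```python
# Naive timescale estimation: exponential fit to the autocorrelation of a
# synthetic process with a known 100 ms timescale. Illustrative only.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
dt, tau_true, n_steps = 0.001, 0.100, 200_000      # 1 ms bins, 100 ms timescale
x = np.zeros(n_steps)
for t in range(1, n_steps):                        # OU-like process
    x[t] = x[t - 1] * (1 - dt / tau_true) + np.sqrt(dt) * rng.normal()

max_lag = 500
ac = np.array([np.corrcoef(x[:-k], x[k:])[0, 1] for k in range(1, max_lag)])
lags = np.arange(1, max_lag) * dt

popt, _ = curve_fit(lambda t, tau, a: a * np.exp(-t / tau), lags, ac, p0=[0.05, 1.0])
print(f"estimated timescale: {popt[0] * 1000:.1f} ms (true: {tau_true * 1000:.0f} ms)")
```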
Geometry of sequence working memory in macaque prefrontal cortex
How the brain stores a sequence in memory remains largely unknown. We investigated the neural code underlying sequence working memory using two-photon calcium imaging to record thousands of neurons in the prefrontal cortex of macaque monkeys memorizing and then reproducing a sequence of locations after a delay. We discovered a regular geometrical organization: The high-dimensional neural state space during the delay could be decomposed into a sum of low-dimensional subspaces, each storing the spatial location at a given ordinal rank, which could be generalized to novel sequences and explain monkey behavior. The rank subspaces were distributed across large overlapping neural groups, and the integration of ordinal and spatial information occurred at the collective level rather than within single neurons. Thus, a simple representational geometry underlies sequence working memory.
Spatial uncertainty provides a unifying account of navigation behavior and grid field deformations
To localize ourselves in an environment for spatial navigation, we rely on vision and self-motion inputs, which only provide noisy and partial information. It is unknown how the resulting uncertainty affects navigation behavior and neural representations. Here we show that spatial uncertainty underlies key effects of environmental geometry on navigation behavior and grid field deformations. We develop an ideal observer model, which continually updates probabilistic beliefs about its allocentric location by optimally combining noisy egocentric visual and self-motion inputs via Bayesian filtering. This model directly yields predictions for navigation behavior and also predicts neural responses under population coding of location uncertainty. We simulate this model numerically under manipulations of a major source of uncertainty, environmental geometry, and support our simulations by analytic derivations for its most salient qualitative features. We show that our model correctly predicts a wide range of experimentally observed effects of the environmental geometry and its change on homing response distribution and grid field deformation. Thus, our model provides a unifying, normative account for the dependence of homing behavior and grid fields on environmental geometry, and identifies the unavoidable uncertainty in navigation as a key factor underlying these diverse phenomena.
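The core computation of such an ideal observer can be sketched with a Kalman filter (a linear-Gaussian special case of the Bayesian filtering described above, not the authors' full model): a belief over 2D position is propagated with noisy self-motion and corrected by noisy visual observations; all noise magnitudes are illustrative.

```python
# Minimal Kalman-filter sketch: combine noisy self-motion with noisy visual
# position cues to maintain a probabilistic belief over 2D location.
import numpy as np

rng = np.random.default_rng(0)
motion_noise, visual_noise = 0.05, 0.20

pos_true = np.zeros(2)
belief_mean, belief_cov = np.zeros(2), 0.01 * np.eye(2)

for step in range(100):
    v = rng.normal(0, 0.1, 2)                         # intended self-motion
    pos_true += v + rng.normal(0, motion_noise, 2)    # true (noisy) movement

    # Prediction step: integrate self-motion, uncertainty grows
    belief_mean = belief_mean + v
    belief_cov = belief_cov + motion_noise ** 2 * np.eye(2)

    # Update step: noisy visual observation of allocentric position
    obs = pos_true + rng.normal(0, visual_noise, 2)
    K = belief_cov @ np.linalg.inv(belief_cov + visual_noise ** 2 * np.eye(2))
    belief_mean = belief_mean + K @ (obs - belief_mean)
    belief_cov = (np.eye(2) - K) @ belief_cov

err = np.linalg.norm(belief_mean - pos_true)
print(f"final position error: {err:.3f} (posterior sd ~ {np.sqrt(belief_cov[0, 0]):.3f})")
```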
The ubiquity of opportunity cost: Foraging and beyond
A key insight from the foraging literature is the importance of assessing the overall environmental quality — via global reward rate or similar measures, which capture the opportunity cost of time and can guide behavioral allocation toward relatively richer options. Meanwhile, the majority of research in decision neuroscience and computational psychiatry has focused instead on how choices are guided by much more local, event-locked evaluations: of individual situations, actions, or outcomes. I review a combination of research and theoretical speculation from my lab and others that emphasizes the role of foraging's average rewards and opportunity costs in a much larger range of decision problems, including risk, time discounting, vigor, cognitive control, and deliberation. The broad range of behaviors affected by this type of evaluation gives a new theoretical perspective on the effects of stress and autonomic mobilization, and on mood and the broad range of symptoms associated with mood disorders.
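The opportunity-cost logic here is the marginal value theorem from the foraging literature: leave a depleting patch when its instantaneous reward rate falls to the global average rate. The toy example below uses an assumed exponential gain function and travel time to show the two rates matching at the optimal leave time.

```python
# Marginal value theorem toy example: the optimal leave time equates the
# patch's marginal rate with the achieved global rate. Parameters are assumed.
import numpy as np

travel_time = 5.0                                   # time to reach the next patch
gain = lambda t: 10 * (1 - np.exp(-t / 4))          # cumulative reward in a patch
marginal = lambda t: 10 / 4 * np.exp(-t / 4)        # instantaneous reward rate

ts = np.linspace(0.01, 30, 3000)
global_rate = gain(ts) / (ts + travel_time)         # overall rate for each leave time
t_opt = ts[np.argmax(global_rate)]

# At the optimum, the marginal rate matches the achieved global rate
print(f"optimal leave time: {t_opt:.2f}")
print(f"marginal rate there: {marginal(t_opt):.3f}, "
      f"global rate there: {global_rate.max():.3f}")
```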
ISYNC: International SynAGE Conference on Healthy Ageing
The SynAGE committee members are thrilled to host ISYNC, the International SynAGE Conference on Healthy Ageing, on 28-30 March 2022 in Magdeburg, Germany. This conference has been organised entirely by young scientists of the SynAGE research training group RTG 2413 (www.synage.de) and represents a unique occasion for researchers from all over the world to come together and join great talks and sessions with us and our guests. A constantly updated list of our speakers can be found on the conference webpage: www.isync-md.de. During the conference, attendees will have access to a range of symposia covering topics from glia, biomarkers and immune responses during ageing to neurodegeneration, brain integrity and cognitive function in health and disease. Moreover, the conference will offer social events, especially for young researchers, and the possibility to network together in the beautiful and evocative location where our conference will take place: the Johanniskirche. The event will be happening in person, but due to the current pandemic situation and restrictions we are planning the conference as a hybrid event with plenty of technical support to ensure that every participant can follow the talks and take part in the scientific discussions. Registration for our ISYNC conference is free of charge. However, the number of people attending the conference in person is restricted to 100; beyond that, registrations will be accepted for joining virtually only. Registration is open until 15.02.2022. Especially for PhD and MD students: check our available Travel Grants, Poster Prize and SynAGE Award Dinner: https://www.isync-md.de/index.php/phd-md-specials/ If you need any further information, don’t hesitate to contact us via email: contact@synage.de. We are looking forward to meeting you in 2022 in Magdeburg to discuss our research and ideas and to celebrate science together. Your ISYNC Organization Committee
Neuronal plasticity and neurotrophin signaling as the common mechanism for antidepressant effect
Neuronal plasticity has long been considered important for recovery from depression and for antidepressant drug action, but how the drug action is translated into plasticity has remained unclear. Brain-derived neurotrophic factor (BDNF) and its receptor TRKB are critical regulators of neuronal plasticity and have been implicated in antidepressant action. We have recently found that many, if not all, different antidepressants, including selective serotonin reuptake inhibitors (SSRIs) and tricyclics as well as fast-acting ketamine, directly bind to TRKB, thereby promoting TRKB translocation to synaptic membranes, which increases BDNF signaling. We have previously shown that antidepressant treatment induces a juvenile-like state of activity in the cortex that facilitates beneficial rewiring of abnormal networks. We recently showed that activation of TRKB receptors in parvalbumin-containing interneurons orchestrates cortical activation states and is both necessary and sufficient for antidepressant-induced cortical plasticity. Our findings suggest a new framework for how antidepressants act: rather than regulating brain monoamine concentrations, antidepressants directly bind to TRKB and allosterically promote BDNF signaling, thereby inducing a state of plasticity that allows re-wiring of abnormal networks for better functionality.
NaV Long-term Inactivation Regulates Adaptation in Place Cells and Depolarization Block in Dopamine Neurons
In behaving rodents, CA1 pyramidal neurons receive spatially tuned depolarizing synaptic input while traversing a specific location within an environment, called the cell's place field. Midbrain dopamine neurons participate in reinforcement learning, and bursts of action potentials riding a depolarizing wave of synaptic input signal rewards and reward expectation. Interestingly, slice electrophysiology in vitro shows that both types of cells exhibit a pronounced reduction in firing rate (adaptation), and even cessation of firing, during sustained depolarization. We included a five-state Markov model of NaV1.6 (for CA1) and NaV1.2 (for dopamine neurons), respectively, in computational models of these two types of neurons. Our simulations suggest that long-term inactivation of this channel is responsible for the adaptation in CA1 pyramidal neurons in response to triangular depolarizing current ramps. We also show that the differential contribution of slow inactivation in two subpopulations of midbrain dopamine neurons can account for their different dynamic ranges, as assessed by their responses to similar depolarizing ramps. These results suggest that long-term inactivation of the sodium channel is a general mechanism for adaptation.
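To convey the mechanism, the sketch below uses a deliberately simplified three-state scheme (closed/available, fast inactivated, and a slow "long-term" inactivated state) rather than the five-state model used in the work; rates and the voltage dependence are invented for illustration, but the qualitative point carries over: occupancy of the slow state accumulates during a depolarizing ramp, removing available channels.

```python
# Simplified Markov channel model with a slow "long-term" inactivated state,
# driven by a depolarizing voltage ramp. All rates are hypothetical.
import numpy as np

dt, T = 0.1, 2000.0                     # ms
time = np.arange(0, T, dt)
V = -70 + 50 * time / T                 # ramp-like depolarization (mV)

# States: 0 = closed/available, 1 = fast inactivated, 2 = long-term inactivated
p = np.array([1.0, 0.0, 0.0])
occupancy = np.zeros((time.size, 3))

for i, v in enumerate(V):
    act = 1 / (1 + np.exp(-(v + 40) / 5))         # voltage-dependent drive
    k_fast_on, k_fast_off = 0.5 * act, 0.05       # 1/ms, fast inactivation
    k_slow_on, k_slow_off = 0.002 * act, 0.0005   # 1/ms, slow (long-term)
    Q = np.array([                                # columns = source states
        [-(k_fast_on + k_slow_on), k_fast_off,  k_slow_off],
        [k_fast_on,               -k_fast_off,  0.0],
        [k_slow_on,                0.0,         -k_slow_off],
    ])
    p = p + dt * (Q @ p)                          # forward-Euler update
    occupancy[i] = p

print(f"long-term inactivated fraction at end of ramp: {occupancy[-1, 2]:.2f}")
```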
A novel form of retinotopy in area V2 highlights location-dependent feature selectivity in the visual system
Topographic maps are a prominent feature of brain organization, reflecting local and large-scale representation of the sensory surface. Traditionally, such representations in early visual areas are conceived as retinotopic maps preserving ego-centric retinal spatial location while ensuring that other features of visual input are uniformly represented for every location in space. I will discuss our recent findings of a striking departure from this simple mapping in the secondary visual area (V2) of the tree shrew that is best described as a sinusoidal transformation of the visual field. This sinusoidal topography is ideal for achieving uniform coverage in an elongated area like V2, as predicted by mathematical models designed for wiring minimization, and provides a novel explanation for stripe-like patterns of intra-cortical connections and functional response properties in V2. Our findings suggest that cortical circuits flexibly implement solutions to sensory surface representation, with dramatic consequences for large-scale cortical organization. Furthermore, our work challenges the framework of relatively independent encoding of location and features in the visual system, showing instead location-dependent feature sensitivity produced by specialized processing of different features in different spatial locations. In the second part of the talk, I will propose that location-dependent feature sensitivity is a fundamental organizing principle of the visual system that achieves efficient representation of positional regularities in visual input, and reflects the evolutionary selection of sensory and motor circuits to optimally represent behaviorally relevant information. The relevant papers: V2 retinotopy (Sedigh-Sarvestani et al., Neuron, 2021); location-dependent feature sensitivity (Sedigh-Sarvestani et al., under review, 2022).
Towards a More Authentic Vision of the (multi)Coding Potential of RNA
Tens of thousands of open reading frames (ORFs) are hidden within transcripts. They have eluded annotation because they are either small or in unsuspected locations. These are named alternative ORFs (altORFs) or small ORFs and have recently been highlighted by innovative proteogenomic approaches, such as our OpenProt resource, revealing their existence and their implications in biological functions. Because altORFs are absent from annotations, pathogenic mutations within them are being overlooked. I will discuss our latest progress on the re-analysis of large-scale proteomics datasets to improve our knowledge of proteomic diversity, and the functional characterization of a second protein encoded by the FUS gene. Finally, I will explain the need to map the coding potential of the transcriptome using artificial intelligence rather than conventional annotations, which do not capture the full translational activity of ribosomes.
Nonlinear spatial integration in retinal bipolar cells shapes the encoding of artificial and natural stimuli
Vision begins in the eye, and what the “retina tells the brain” is a major interest in visual neuroscience. To deduce what the retina encodes (“tells”), computational models are essential. The most prominent models of the retina currently aim to understand the responses of the retinal output neurons, the ganglion cells. Typically, these models make simplifying assumptions about the neurons in the retinal network upstream of ganglion cells. One important assumption is linear spatial integration. In this talk, I first define what it means for a neuron to be spatially linear or nonlinear and how we can measure these phenomena experimentally. Next, I introduce the neurons upstream of retinal ganglion cells, with a focus on bipolar cells, which are the connecting elements between the photoreceptors (input to the retinal network) and the ganglion cells (output). This pivotal position makes bipolar cells an interesting target for testing the assumption of linear spatial integration, yet because they are buried in the middle of the retina, measuring their neural activity is challenging. Here, I present bipolar cell data in which I ask whether spatial linearity holds under artificial and natural visual stimuli. Through diverse analyses and computational models, I show that bipolar cells are more complex than previously thought and that they can already act as nonlinear processing elements at the level of their somatic membrane potential. Furthermore, through pharmacology and current measurements, I illustrate that the observed spatial nonlinearity arises at the excitatory inputs to bipolar cells. In the final part of my talk, I address the functional relevance of the nonlinearities in bipolar cells through combined recordings of bipolar and ganglion cells, and I show that the nonlinearities in bipolar cells provide high spatial sensitivity to downstream ganglion cells. Overall, I demonstrate that simple linear assumptions do not always apply and that more complex models are needed to describe what the retina “tells” the brain.
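For background, spatial nonlinearity is commonly quantified with contrast-reversing gratings: a spatially linear neuron follows the stimulus modulation frequency (F1), whereas a frequency-doubled (F2) response is the classic signature of nonlinear spatial integration. The sketch below computes such an F2/F1 index on a toy trace; it is a generic illustration, not the specific protocol of these recordings.

import numpy as np

def nonlinearity_index(response, stim_freq, fs):
    """
    F2/F1 ratio of a response to a contrast-reversing grating modulated at
    stim_freq (Hz). Linear spatial summation predicts a dominant F1 component;
    a dominant frequency-doubled F2 component indicates nonlinear integration.
    """
    n = response.size
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    amp = np.abs(np.fft.rfft(response - response.mean())) / n
    f1 = amp[np.argmin(np.abs(freqs - stim_freq))]
    f2 = amp[np.argmin(np.abs(freqs - 2 * stim_freq))]
    return f2 / f1

# Toy example: a rectified ("frequency-doubled") response plus a little noise.
fs, f_stim = 500.0, 2.0
t = np.arange(0, 10, 1 / fs)
resp = np.abs(np.sin(2 * np.pi * f_stim * t)) + 0.01 * np.random.randn(t.size)
print(nonlinearity_index(resp, f_stim, fs))   # >> 1, i.e. strongly nonlinear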
Hippocampal replay reflects specific past experiences rather than a plan for subsequent choice
Executing memory-guided behavior requires the storage of information about experience and the later recall of that information to inform choices. Awake hippocampal replay, during which hippocampal neural ensembles briefly reactivate a representation related to prior experience, has been proposed to critically contribute to these memory-related processes. However, it remains unclear whether awake replay contributes to memory function by promoting the storage of past experiences, by facilitating planning based on evaluation of those experiences, or both. We designed a dynamic spatial task that promotes replay before a memory-based choice and assessed how the content of replay related to past and future behavior. We found that replay content was decoupled from subsequent choice and instead was enriched for representations of previously rewarded locations and of places that had not been visited recently, indicating a role in memory storage rather than in directly guiding subsequent behavior.
Mice identify subgoal locations through an action-driven mapping process
Mammals instinctively explore and form mental maps of their spatial environments. Models of cognitive mapping in neuroscience mostly depict map-learning as a process of random or biased diffusion. In practice, however, animals explore spaces using structured, purposeful, sensory-guided actions. We have used threat-evoked escape behavior in mice to probe the relationship between ethological exploratory behavior and abstract spatial cognition. First, we show that in arenas with obstacles and a shelter, mice spontaneously learn efficient multi-step escape routes by memorizing allocentric subgoal locations. Using closed-loop neural manipulations to interrupt running movements during exploration, we next found that blocking runs targeting an obstacle edge abolished subgoal learning. We conclude that mice use an action-driven learning process to identify subgoals, and these subgoals are then integrated into an allocentric map-like representation. We suggest a conceptual framework for spatial learning that is compatible with the successor representation from reinforcement learning and sensorimotor enactivism from cognitive science.
CaImAn: large-scale batch and online analysis of calcium imaging data
Advances in fluorescence microscopy enable the monitoring of larger brain areas in vivo at finer time resolution. The resulting data rates require reproducible analysis pipelines that are reliable, fully automated, and scalable to datasets generated over the course of months. We present CaImAn, an open-source library for calcium imaging data analysis. CaImAn provides automatic and scalable methods to address problems common to pre-processing, including motion correction, neural activity identification, and registration across different sessions of data collection. It does this while requiring minimal user intervention, with good scalability on computers ranging from laptops to high-performance computing clusters. CaImAn is suitable for two-photon and one-photon imaging and also enables real-time analysis of streaming data. To benchmark the performance of CaImAn, we collected and combined a corpus of manual annotations from multiple labelers on nine mouse two-photon datasets. We demonstrate that CaImAn achieves near-human performance in detecting the locations of active neurons.
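For orientation, a minimal batch run with CaImAn looks roughly like the sketch below, following the pattern of the package's demo pipeline; the file name and parameter values are placeholders, and exact entry points can differ between CaImAn releases, so the official demos remain the authoritative reference.

import caiman as cm
from caiman.source_extraction.cnmf import cnmf, params

# Placeholder parameters for a two-photon movie.
opts = params.CNMFParams(params_dict={
    'fnames': ['two_photon_movie.tif'],  # placeholder file name
    'fr': 30.0,          # frame rate (Hz)
    'decay_time': 0.4,   # typical indicator transient decay (s)
    'p': 1,              # order of the autoregressive model of calcium dynamics
    'gSig': [4, 4],      # expected half-size of a neuron in pixels
})

# Set up a local parallel cluster, then run motion correction plus
# source extraction (CNMF) directly from the file.
c, dview, n_processes = cm.cluster.setup_cluster(backend='local', n_processes=None)
cnm = cnmf.CNMF(n_processes, params=opts, dview=dview)
cnm = cnm.fit_file(motion_correct=True)

# Spatial footprints and temporal traces of the detected components.
spatial, temporal = cnm.estimates.A, cnm.estimates.C
cm.stop_server(dview=dview)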
NMC4 Short Talk: Two-Photon Imaging of Norepinephrine in the Prefrontal Cortex Shows that Norepinephrine Structures Cell Firing Through Local Release
Norepinephrine (NE) is a neuromodulator that is released from projections of the locus coeruleus via extra-synaptic vesicle exocytosis. Tonic fluctuations in NE are involved in brain states such as sleep, arousal, and attention. Previously, NE in the prefrontal cortex (PFC) was thought to form a homogeneous field created by bulk release, but it remained unknown whether phasic (fast, short-term) fluctuations in NE can produce a spatially heterogeneous field, which could then structure cell firing at a fine spatial scale. To understand how the spatiotemporal dynamics of NE release in the prefrontal cortex affect neuronal firing, we performed a novel in-vivo two-photon imaging experiment in layer 2/3 of the prefrontal cortex using a green fluorescent NE sensor and a red fluorescent Ca2+ sensor, which allowed us to simultaneously observe fine-scale neuronal and NE dynamics in the form of spatially localized fluorescence time series. Using generalized linear modeling, we found that the local NE field differs from the global NE field in transient periods of decorrelation, which are influenced by proximal NE release events. We used optical flow and pattern analysis to show that release and reuptake events can occur at the same location but at different times, and that differential recruitment of release and reuptake sites over time is a potential mechanism for creating a heterogeneous NE field. Our generalized linear models predicting cellular dynamics show that the heterogeneous local NE field, and not the global field, drives cell firing dynamics. These results point to the importance of local, small-scale, phasic NE fluctuations for structuring cell firing. Prior research suggests that these phasic NE fluctuations in the PFC may play a role in attentional shifts, in orienting to sensory stimuli in the environment, and in the selective gain of priority representations during stress (Aston-Jones and Bloom, 1981; Mather, Clewett et al., 2016).
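As a rough illustration of the generalized-linear-modeling step (not the authors' exact design), one can compare Poisson GLMs that predict a cell's event counts from the local versus the global NE signal; here both predictors and counts are synthetic.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
T = 5000                                   # imaging frames

# Illustrative predictors: a slowly varying global NE field and a local NE
# signal around one cell that transiently decorrelates from the global field.
global_ne = np.convolve(rng.normal(size=T), np.ones(50) / 50, mode="same")
local_ne = global_ne + 0.5 * np.convolve(rng.normal(size=T), np.ones(10) / 10, mode="same")

# Synthetic event counts driven (by construction) by the local field.
rate = np.exp(0.2 + 1.0 * local_ne)
counts = rng.poisson(rate)

# Fit two Poisson GLMs and compare fit quality: local vs. global predictor.
fit_local = sm.GLM(counts, sm.add_constant(local_ne), family=sm.families.Poisson()).fit()
fit_global = sm.GLM(counts, sm.add_constant(global_ne), family=sm.families.Poisson()).fit()
print("AIC local:", fit_local.aic, " AIC global:", fit_global.aic)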
NMC4 Short Talk: What can 140,000 Reaches Tell Us About Demographic Contributions to Visuomotor Adaptation?
Motor learning is typically assessed in the lab, affording a high degree of control over the task environment. However, this level of control often comes at the cost of smaller sample sizes and a homogeneous pool of participants (e.g., college students). To address this, we designed a web-based motor learning experiment, making it possible to reach a larger, more diverse set of participants. As a proof of concept, we collected data from 1,581 participants who completed a visuomotor rotation task, in which participants controlled a visual cursor on the screen with their mouse or trackpad. Motor learning was indexed by how quickly participants were able to compensate for a 45° rotation imposed between the cursor and their actual movement. Using a cross-validated LASSO regression, we found that motor learning varied significantly with the participant's age and sex, and also correlated strongly with the location of the target, visual acuity, and satisfaction with the experiment. In contrast, participants' mouse and browser type were features eliminated by the model, indicating that motor performance was not influenced by variations in computer hardware and software. Together, this proof-of-concept study demonstrates how large datasets can generate important insights into the factors underlying motor learning.
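A cross-validated LASSO of this kind can be sketched as follows; the feature table and outcome are random placeholders, meant only to show how features "eliminated by the model" correspond to coefficients shrunk to zero.

import numpy as np
import pandas as pd
from sklearn.linear_model import LassoCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Illustrative feature table: one row per participant, columns are the kinds of
# demographic and setup features mentioned in the abstract (placeholder data).
n = 500
df = pd.DataFrame({
    "age": np.random.randint(18, 80, n),
    "sex": np.random.randint(0, 2, n),
    "visual_acuity": np.random.normal(1.0, 0.2, n),
    "target_angle": np.random.choice([0, 90, 180, 270], n),
    "mouse_vs_trackpad": np.random.randint(0, 2, n),
})
learning_rate = np.random.normal(0.5, 0.1, n)   # placeholder outcome measure

# Cross-validated LASSO: features whose coefficients shrink exactly to zero
# are dropped from the model.
model = make_pipeline(StandardScaler(), LassoCV(cv=5))
model.fit(df.values, learning_rate)
coefs = dict(zip(df.columns, model.named_steps["lassocv"].coef_))
print(coefs)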
NMC4 Short Talk: Novel population of synchronously active pyramidal cells in hippocampal area CA1
Hippocampal pyramidal cells have been widely studied during locomotion, when theta oscillations are present, and during sharp-wave ripples at rest, when replay takes place. However, we find a subset of pyramidal cells that are preferentially active during rest, in the absence of theta oscillations and sharp-wave ripples. We recorded these cells using two-photon imaging in dorsal CA1 of the mouse hippocampus during a virtual reality object-location recognition task. During locomotion, the cells show a similar level of activity as control cells, but their activity increases during rest, when this population of cells shows highly synchronous, oscillatory activity at a low frequency (0.1-0.4 Hz). In addition, during both locomotion and rest these cells show place coding, suggesting they may play a role in maintaining a representation of the current location even when the animal is not moving. We performed simultaneous electrophysiological and calcium recordings, which showed a higher correlation between the low-frequency oscillation (LFO) of the local field potential and the activity of these hippocampal cells in the 0.1-0.4 Hz band during rest than during locomotion. However, the relationship between the LFO and the calcium signals varied between electrodes, suggesting a localized effect. We used the Allen Brain Observatory Neuropixels Visual Coding dataset to explore this further. These data revealed localised low-frequency oscillations in CA1 and DG during rest. Overall, we describe a novel population of hippocampal cells and a novel oscillatory band of activity in the hippocampus during rest.
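The band-limited correlation analysis can be illustrated with a short sketch: band-pass both signals to 0.1-0.4 Hz and compare correlations across rest and locomotion epochs. The signals and epoch labels below are placeholders, not the recorded data.

import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(x, fs, lo=0.1, hi=0.4, order=2):
    """Zero-phase Butterworth band-pass, here isolating the 0.1-0.4 Hz band."""
    b, a = butter(order, [lo, hi], btype="bandpass", fs=fs)
    return filtfilt(b, a, x)

fs = 30.0                                  # sampling rate (Hz), placeholder
t = np.arange(0, 600, 1 / fs)
lfp = np.random.randn(t.size)              # stand-in for the electrophysiology signal
calcium = np.random.randn(t.size)          # stand-in for a cell's calcium trace

lfp_slow = bandpass(lfp, fs)
ca_slow = bandpass(calcium, fs)

# Compare band-limited correlation in rest vs. locomotion epochs (placeholder
# boolean mask); the abstract reports higher correlations during rest.
is_rest = np.zeros(t.size, dtype=bool); is_rest[: t.size // 2] = True
r_rest = np.corrcoef(lfp_slow[is_rest], ca_slow[is_rest])[0, 1]
r_run = np.corrcoef(lfp_slow[~is_rest], ca_slow[~is_rest])[0, 1]
print(r_rest, r_run)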
Targeted Activation of Hippocampal Place Cells Drives Memory-Guided Spatial Behaviour
The hippocampus is crucial for spatial navigation and episodic memory formation. Hippocampal place cells exhibit spatially selective activity within an environment and have been proposed to form the neural basis of a cognitive map of space that supports these mnemonic functions. However, the direct influence of place cell activity on spatial navigation behaviour has not yet been demonstrated. Using an ‘all-optical’ combination of simultaneous two-photon calcium imaging and two-photon holographically targeted optogenetics, we identified and selectively activated place cells that encoded behaviourally relevant locations in a virtual reality environment. Targeted stimulation of a small number of place cells was sufficient to bias the behaviour of animals during a spatial memory task, providing causal evidence that hippocampal place cells actively support spatial navigation and memory. Time permitting, I will also describe new experiments aimed at understanding the fundamental encoding mechanism that supports episodic memory, focussing on the role of hippocampal sequences across multiple timescales and behaviours.
Neural representations of space in the hippocampus of a food-caching bird
Spatial memory in vertebrates requires brain regions homologous to the mammalian hippocampus. Between vertebrate clades, however, these regions are anatomically distinct and appear to produce different spatial patterns of neural activity. We asked whether hippocampal activity is fundamentally different even between distant vertebrates that share a strong dependence on spatial memory. We studied tufted titmice – food-caching birds capable of remembering many concealed food locations. We found mammalian-like neural activity in the titmouse hippocampus, including sharp-wave ripples and anatomically organized place cells. In a non-food-caching bird species, spatial firing was less informative and was exhibited by fewer neurons. These findings suggest that hippocampal circuit mechanisms are similar between birds and mammals, but that the resulting patterns of activity may vary quantitatively with species-specific ethological needs.
Design principles of adaptable neural codes
Behavior relies on the ability of sensory systems to infer changing properties of the environment from incoming sensory stimuli. However, the demands that detecting and adjusting to changes in the environment place on a sensory system often differ from the demands associated with performing a specific behavioral task. This necessitates neural coding strategies that can dynamically balance these conflicting needs. I will discuss our ongoing theoretical work to understand how this balance can best be achieved. We connect ideas from efficient coding and Bayesian inference to ask how sensory systems should dynamically allocate limited resources when the goal is to optimally infer changing latent states of the environment, rather than reconstruct incoming stimuli. We use these ideas to explore dynamic tradeoffs between the efficiency and speed of sensory adaptation schemes, and the downstream computations that these schemes might support. Finally, we derive families of codes that balance these competing objectives, and we demonstrate their close match to experimentally-observed neural dynamics during sensory adaptation. These results provide a unifying perspective on adaptive neural dynamics across a range of sensory systems, environments, and sensory tasks.
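One way to state the trade-off explicitly (in my notation, not necessarily the authors'): classical efficient coding allocates limited resources to reconstruct the incoming stimulus, whereas the inference-oriented objective allocates them to track the latent state that generated it, both under the same resource constraint:

\text{reconstruction:}\quad \max_{p(r\mid s)}\; I(r_t ; s_t) \quad \text{s.t.}\quad \mathbb{E}[\mathrm{cost}(r_t)] \le C

\text{inference:}\quad \max_{p(r\mid s)}\; I(r_t ; \theta_t) \quad \text{s.t.}\quad \mathbb{E}[\mathrm{cost}(r_t)] \le C, \qquad \theta_t \sim p(\theta_t \mid \theta_{t-1}), \quad s_t \sim p(s_t \mid \theta_t)

Because the latent state \theta_t changes over time, the second objective induces a tension between allocating resources to the currently inferred state and retaining sensitivity to evidence that the state has changed, which is the speed-versus-efficiency trade-off discussed in the talk.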
Phase precession in the human hippocampus and entorhinal cortex
Knowing where we are, where we have been, and where we are going is critical to many behaviors, including navigation and memory. One potential neuronal mechanism underlying this ability is phase precession, in which spatially tuned neurons represent sequences of positions by activating at progressively earlier phases of local network theta oscillations. Based on studies in rodents, researchers have hypothesized that phase precession may be a general neural pattern for representing sequential events for learning and memory. By recording human single-neuron activity during spatial navigation, we show that spatially tuned neurons in the human hippocampus and entorhinal cortex exhibit phase precession. Furthermore, beyond the neural representation of locations, we show evidence for phase precession related to specific goal states. Our findings thus extend theta phase precession to humans and suggest that this phenomenon has a broad functional role for the neural representation of both spatial and non-spatial information.
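For readers unfamiliar with how phase precession is quantified, a standard approach is circular-linear regression of spike theta phase against position: choose the slope that maximizes the mean resultant length of the phase residuals, and test whether that slope is reliably negative. The sketch below is generic and uses toy data, not the human recordings.

import numpy as np

def phase_precession_slope(position, phase, max_abs_slope=2.0, n_grid=2001):
    """
    Circular-linear fit of spike theta phase against position within a field:
    pick the slope (cycles per unit position) that maximizes the mean resultant
    length of (phase - 2*pi*slope*position). A reliably negative slope is the
    usual signature of phase precession.
    """
    slopes = np.linspace(-max_abs_slope, max_abs_slope, n_grid)
    resid = phase[None, :] - 2 * np.pi * slopes[:, None] * position[None, :]
    R = np.abs(np.mean(np.exp(1j * resid), axis=1))
    best = slopes[np.argmax(R)]
    offset = np.angle(np.mean(np.exp(1j * (phase - 2 * np.pi * best * position))))
    return best, offset, R.max()

# Toy example: spikes that precess by ~0.8 cycles across a normalized field.
rng = np.random.default_rng(1)
pos = rng.uniform(0, 1, 300)
ph = (np.pi - 2 * np.pi * 0.8 * pos + rng.normal(0, 0.6, 300)) % (2 * np.pi)
print(phase_precession_slope(pos, ph))   # recovers a slope near -0.8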
Adaptive bottleneck to pallium for sequence memory, path integration and mixed selectivity representation
Spike-driven adaptation involves intracellular mechanisms that are initiated by neural firing and lead to a subsequent reduction in spiking rate, followed by a recovery back to baseline. We report long (>0.5 second) recovery times from adaptation in a thalamic-like structure in weakly electric fish. This adaptation process is shown, via modeling and experiment, to encode in a spatially invariant manner the time intervals between event encounters, e.g. with landmarks as the animal learns the location of food. These cells also come in two varieties: ones that care only about the time since the last encounter, and others that care about the history of encounters. We discuss how the two populations can share the task of representing sequences of events, supporting path integration and the conversion from egocentric to allocentric representations. The heterogeneity of the population parameters enables the representation and Bayesian decoding of time sequences of events, which may be put to good use in path integration and in hilus neuron function in the hippocampus. Finally, we discuss how all the cells of this gateway to the pallium exhibit mixed selectivity for social features of their environment. The data and computational modeling further reveal that, in contrast to a long-held belief, these gymnotiform fish are endowed with a corollary discharge, albeit only for social signalling.
Metachronal waves in swarms of nematode Turbatrix aceti
There has been a recent surge of interest in the behavior of active particles that can simultaneously align their direction of movement and synchronize their oscillations, known as swarmalators. While analytical and numerical models of such systems are now abundant, no real-life examples have been demonstrated to date. I will present an experimental investigation of the collective motion of the nematode Turbatrix aceti, which self-propels by body undulation. I will show that under favorable conditions these nematodes can synchronize their body oscillations, forming striking traveling metachronal waves which, similar to the case of beating cilia, produce strong fluid flows. I will demonstrate that the location and strength of this collective state can be controlled through the shape of the confining structure, in our case the contact angle of a droplet. This opens a way to produce controlled work, such as on-demand flows or the displacement of objects. I will illustrate this with a practical example, showing that the force generated by the collectively moving nematodes is sufficient to change the mode of evaporation of fluid droplets by counteracting the surface-tension force, which allows us to estimate its strength.
Space wrapped onto a grid cell torus
Entorhinal grid cells, so-called because of their hexagonally tiled spatial receptive fields, are organized in modules which, collectively, are believed to form a population code for the animal’s position. Here, we apply topological data analysis to simultaneous recordings of hundreds of grid cells and show that joint activity of grid cells within a module lies on a toroidal manifold. Each position of the animal in its physical environment corresponds to a single location on the torus, and each grid cell is preferentially active within a single “field” on the torus. Toroidal firing positions persist between environments, and between wakefulness and sleep, in agreement with continuous attractor models of grid cells.
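The topological analysis can be illustrated with a small persistent-homology sketch using the ripser package: a toroidal manifold announces itself through one long-lived H0 bar, two long-lived H1 bars, and one long-lived H2 bar. The point cloud below is a synthetic torus standing in for reduced grid-cell population activity, not the recorded data.

import numpy as np
from ripser import ripser

# Toy point cloud on a flat torus embedded in R^4 (stand-in for a population
# activity manifold after dimensionality reduction).
rng = np.random.default_rng(0)
theta, phi = rng.uniform(0, 2 * np.pi, (2, 300))
X = np.column_stack([np.cos(theta), np.sin(theta), np.cos(phi), np.sin(phi)])
X += 0.05 * rng.normal(size=X.shape)

# Persistent homology up to dimension 2; long-lived bars reveal the torus.
dgms = ripser(X, maxdim=2)["dgms"]
for dim, dgm in enumerate(dgms):
    lifetimes = np.sort(dgm[:, 1] - dgm[:, 0])[::-1]
    print(f"H{dim}: longest lifetimes {lifetimes[:3]}")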
Rastermap: Extracting structure from high dimensional neural data
Large-scale neural recordings contain high-dimensional structure that cannot be easily captured by existing data visualization methods. We therefore developed an embedding algorithm called Rastermap, which captures highly nonlinear relationships between neurons and provides useful visualizations by assigning each neuron to a location in the embedding space. Compared to standard algorithms such as t-SNE and UMAP, Rastermap finds finer and higher-dimensional patterns of neural variability, as measured by quantitative benchmarks. We applied Rastermap to a variety of datasets, including spontaneous neural activity, neural activity during a virtual reality task, widefield neural imaging data during a 2AFC task, artificial neural activity from an agent playing Atari games, and neural responses to visual textures. Within these datasets we found unique subpopulations of neurons encoding abstract properties of the environment.
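In practice, running Rastermap on a neurons-by-timepoints activity matrix looks roughly like the sketch below; constructor options differ between package versions, so defaults are used here and the input file name is a placeholder.

import numpy as np
from rastermap import Rastermap

spks = np.load("spks.npy")        # placeholder: neurons x timepoints activity matrix
model = Rastermap().fit(spks)
order = model.isort               # ordering of neurons along the 1D embedding

# Re-ordering the rows of the activity matrix by the embedding exposes the
# clustered / sequential structure that Rastermap finds.
sorted_activity = spks[order]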
Physical Computation in Insect Swarms
Our world is full of living creatures that must share information to survive and reproduce. As humans, we easily forget how hard it is to communicate within natural environments. So how do organisms solve this challenge, using only natural resources? Ideas from computer science, physics and mathematics, such as energetic cost, compression, and detectability, define universal criteria that almost all communication systems must meet. We use insect swarms as a model system for identifying how organisms harness the dynamics of communication signals, perform spatiotemporal integration of these signals, and propagate those signals to neighboring organisms. In this talk I will focus on two types of communication in insect swarms: visual communication, in which fireflies communicate over long distances using light signals, and chemical communication, in which bees serve as signal amplifiers to propagate pheromone-based information about the queen’s location.
“Wasn’t there food around here?”: An Agent-based Model for Local Search in Drosophila
The ability to keep track of one's location in space is a critical behavior for animals navigating to and from a salient location, and its computational basis is now beginning to be unraveled. Here, we tracked flies in a ring-shaped channel as they executed bouts of search triggered by optogenetic activation of sugar receptors. Unlike experiments in open-field arenas, which produce highly tortuous search trajectories, our geometrically constrained paradigm enabled us to monitor flies' decisions to move toward or away from the fictive food. Our results suggest that flies use path integration to remember the location of a food site even after it has disappeared, and that flies can remember the location of a former food site even after walking around the arena one or more times. To determine the behavioral algorithms underlying Drosophila search, we developed multiple state-transition models and found that flies likely accomplish path integration by combining odometry and compass navigation to keep track of their position relative to the fictive food. Our results indicate that whereas flies re-zero their path integrator at food when only one feeding site is present, they adjust their path integrator to a central location between sites when experiencing food at two or more locations. Together, this work provides a simple experimental paradigm and theoretical framework to advance investigations of the neural basis of path integration.
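As a loose illustration of the odometry-plus-compass idea (not the authors' fitted state-transition model), an agent on a ring can integrate signed displacement relative to the fictive food and switch to a return run whenever the integrated distance exceeds a threshold; the resulting trajectory oscillates back and forth around the remembered location.

import numpy as np

rng = np.random.default_rng(2)
circumference = 100.0          # ring channel length (arbitrary units)
food_pos = 0.0                 # fictive food location (path integrator zeroed here)
threshold = 20.0               # integrated distance at which the agent turns back

pos, heading = food_pos, +1    # heading: +1 or -1 around the ring
integrator = 0.0               # odometry x compass sign: signed distance from food
trajectory = []
for step in range(2000):
    move = heading * abs(rng.normal(0.5, 0.2))   # step length from odometry
    pos = (pos + move) % circumference
    integrator += move                           # accumulate signed displacement
    if abs(integrator) > threshold:
        heading = -int(np.sign(integrator))      # switch to a return run
    elif rng.random() < 0.01:
        heading *= -1                            # occasional spontaneous reversal
    trajectory.append(pos)

# Turning points cluster near +/- threshold from the fictive food, producing
# back-and-forth runs centered on the remembered location.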
The role of motion in localizing objects
Everything we see has a location. We know where things are before we know what they are. But how do we know where things are? Receptive fields in the visual system specify location, but neural delays lead to serious errors whenever targets or the eyes are moving. Motion may be the problem here, but motion can also be the solution, correcting for the effects of delays and eye movements. To demonstrate this, I will present results from three motion illusions in which perceived location differs radically from physical location. These illusions help us understand how and where position is coded. We first look at the effects of a target's simple forward motion on its perceived location. Second, we look at the perceived location of a target that has internal motion as well as forward motion. The two directions combine to produce an illusory path. This “double-drift” illusion strongly affects perceived position but, surprisingly, not eye movements or attention. Even more surprisingly, fMRI shows that the shifted percept does not emerge in the visual cortex but is seen instead in the frontal lobes. Finally, we report that a moving frame also shifts the perceived positions of dots flashed within it. Participants report the dot positions relative to the frame, as if the frame were not moving. These frame-induced position effects suggest a link to visual stability, whereby we see a steady world despite massive displacements during saccades. These motion-based effects on perceived location lead to new insights concerning how and where position is coded in the brain.
Reverse engineering recurrent network models reveals mechanisms for location memory
Bernstein Conference 2024
Imagining what was there: looking at an absent offer location modulates neural responses in OFC
COSYNE 2022
Mice identify subgoal locations through an action-driven mapping process
COSYNE 2022
Optimal dynamic allocation of finite resources for many-alternatives decision-making
COSYNE 2022
Place Cells are Clustered by Field Location in CA1 Hippocampus
COSYNE 2023
A pre-cerebellar brainstem integrator implements self-location memory and enables positional homeostasis
COSYNE 2023
The tilt illusion arises from an efficient reallocation of neural coding resources at the contextual boundary
COSYNE 2025
An attempt to elucidate how people perceive self-location
FENS Forum 2024
Fatigue behavior as an optimal allocation strategy for limited resources
FENS Forum 2024
An interneuron lineage comprising somatotopically organized subpopulations that elicit grooming of different locations on the head
FENS Forum 2024
Purkinje cell vulnerability depends on zebrin molecular identity and cerebellar location in a Christianson syndrome mouse model
FENS Forum 2024
Reverse engineering recurrent network models reveals mechanisms for location memory
FENS Forum 2024
Role of ionotropic glutamate receptors in multimodal learning of pheromone locations
FENS Forum 2024
Shedding light on object location recall: Optogenetic priming of the HIP-mPFC pathway for object-location memory
FENS Forum 2024
Temporal dynamics of neuronal excitability in the lateral amygdala mediates allocation to an engram supporting conditioned fear memory
FENS Forum 2024
Touching what you see: Multisensory location coding in mouse posterior parietal cortex
FENS Forum 2024