Memories
Memory Decoding Journal Club: Neocortical synaptic engrams for remote contextual memories
Neural circuits underlying sleep structure and functions
Sleep is an active state critical for processing emotional memories encoded during waking in both humans and animals. There is a remarkable overlap between the brain structures and circuits active during sleep, particularly rapid eye-movement (REM) sleep, and those encoding emotions. Accordingly, disruptions in sleep quality or quantity, including REM sleep, are often associated with, and precede the onset of, nearly all affective psychiatric and mood disorders. In this context, a major biomedical challenge is to better understand the mechanisms underlying the relationship between REM sleep and emotion encoding in order to improve treatments for mental health. This lecture will summarize our investigation of the cellular and circuit mechanisms underlying sleep architecture, sleep oscillations, and local brain dynamics across sleep-wake states using electrophysiological recordings combined with single-cell calcium imaging or optogenetics. The presentation will detail the discovery of a 'somato-dendritic decoupling' in prefrontal cortex pyramidal neurons underlying REM sleep-dependent stabilization of optimal emotional memory traces. This decoupling reflects a tonic inhibition at the somas of pyramidal cells, occurring simultaneously with a selective disinhibition of their dendritic arbors during REM sleep. Recent findings on REM sleep-dependent subcortical inputs and neuromodulation of this decoupling will be discussed in the context of synaptic plasticity and the optimization of emotional responses in the maintenance of mental health.
Single-neuron correlates of perception and memory in the human medial temporal lobe
The human medial temporal lobe contains neurons that respond selectively to the semantic contents of a presented stimulus. These "concept cells" may respond to very different pictures of a given person and even to their written or spoken name. Their response latency is far longer than necessary for object recognition, they follow subjective, conscious perception, and they are found in brain regions that are crucial for declarative memory formation. It has thus been hypothesized that they may represent the semantic "building blocks" of episodic memories. In this talk I will present data from single unit recordings in the hippocampus, entorhinal cortex, parahippocampal cortex, and amygdala during paradigms involving object recognition and conscious perception as well as encoding of episodic memories in order to characterize the role of concept cells in these cognitive functions.
Cognitive maps as expectations learned across episodes – a model of the two dentate gyrus blades
How can the hippocampal system transition from episodic one-shot learning to a multi-shot learning regime, and what is the utility of the resultant neural representations? This talk will explore the role of dentate gyrus (DG) anatomy in this context. The canonical DG model suggests it performs pattern separation. More recent experimental results challenge this standard model, suggesting DG function is more complex and also supports the precise binding of objects and events to space and the integration of information across episodes. Very recent studies attribute pattern separation and pattern integration to anatomically distinct parts of the DG (the suprapyramidal blade vs the infrapyramidal blade). We propose a computational model that investigates this distinction. In the model, the two processing streams (potentially localized in separate blades) contribute to the storage of distinct episodic memories and the integration of information across episodes, respectively. The latter forms generalized expectations across episodes, eventually forming a cognitive map. We train the model with two data sets: MNIST and plausible entorhinal cortex inputs. The comparison between the two streams allows for the calculation of a prediction error, which can drive the storage of poorly predicted memories and the forgetting of well-predicted memories. We suggest that differential processing across the DG aids in the iterative construction of spatial cognitive maps to serve the generation of location-dependent expectations, while at the same time preserving episodic memory traces of idiosyncratic events.
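As an illustration of the separation between the two streams described above, the toy sketch below contrasts a slowly integrated expectation with an episodic store gated by prediction error; the input statistics, threshold, and learning rate are invented for illustration and do not reproduce the model from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

n_features = 50          # dimensionality of an entorhinal-like input pattern
n_events = 200           # number of episodes presented
store_threshold = 0.6    # relative prediction error above which an episode is stored

# Stream 1: a slowly updated expectation (prototype) integrated across episodes.
expectation = np.zeros(n_features)
lr = 0.05                # slow integration rate

# Stream 2: an episodic store holding poorly predicted events verbatim.
episodic_store = []

for t in range(n_events):
    # Most events are small variations on a common structure; a few are idiosyncratic.
    base = np.sin(np.linspace(0, np.pi, n_features))
    event = base + 0.1 * rng.standard_normal(n_features)
    if t % 37 == 0:
        event = rng.standard_normal(n_features)   # occasional surprising episode

    # Prediction error: how poorly the generalized expectation predicts this event.
    err = np.linalg.norm(event - expectation) / np.linalg.norm(event)

    # Integration stream: always refine the expectation a little (multi-shot learning).
    expectation += lr * (event - expectation)

    # Episodic stream: store poorly predicted events, forget well-predicted ones.
    if err > store_threshold:
        episodic_store.append(event)
    episodic_store = [m for m in episodic_store
                      if np.linalg.norm(m - expectation) / np.linalg.norm(m) > store_threshold]

print(f"episodes retained in the episodic store: {len(episodic_store)}")
```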
Circuit Mechanisms of Remote Memory
Memories of emotionally salient events are long-lasting, guiding behavior from minutes to years after learning. The prelimbic cortex (PL) is required for fear memory retrieval across time and is densely interconnected with many subcortical and cortical areas involved in recent and remote memory recall, including the temporal association area (TeA). While the behavioral expression of a memory may remain constant over time, the neural activity mediating memory-guided behavior is dynamic. In PL, different neurons underlie recent and remote memory retrieval, and remote memory-encoding neurons have preferential functional connectivity with cortical association areas, including TeA. TeA plays a preferential role in remote compared to recent memory retrieval, yet how TeA circuits drive remote memory retrieval remains poorly understood. Here we used a combination of activity-dependent neuronal tagging, viral circuit mapping and miniscope imaging to investigate the role of the PL-TeA circuit in fear memory retrieval across time in mice. We show that PL memory ensembles recruit PL-TeA neurons across time, and that PL-TeA neurons have enhanced encoding of salient cues and behaviors at remote timepoints. This recruitment depends upon ongoing synaptic activity in the learning-activated PL ensemble. Our results reveal a novel circuit encoding remote memory and provide insight into the principles of memory circuit reorganization across time.
Memory formation in hippocampal microcircuit
The medial temporal lobe (MTL), and especially the hippocampus, is central to memory. In our research, we upgraded a flexible, brain-inspired computational microcircuit of the CA1 region of the mammalian hippocampus and used it to examine how information retrieval is affected under different conditions. Six models were created by modulating different excitatory and inhibitory pathways. The results showed that increasing the strength of feedforward excitation was the most effective way to improve recall; in other words, it allows the system to access stored memories more accurately.
Brain circuits for spatial navigation
In this webinar on spatial navigation circuits, three researchers—Ann Hermundstad, Ila Fiete, and Barbara Webb—discussed how diverse species solve navigation problems using specialized yet evolutionarily conserved brain structures. Hermundstad illustrated the fruit fly’s central complex, focusing on how hardwired circuit motifs (e.g., sinusoidal steering curves) enable rapid, flexible learning of goal-directed navigation. This framework combines internal heading representations with modifiable goal signals, leveraging activity-dependent plasticity to adapt to new environments. Fiete explored the mammalian head-direction system, demonstrating how population recordings reveal a one-dimensional ring attractor underlying continuous integration of angular velocity. She showed that key theoretical predictions—low-dimensional manifold structure, isometry, uniform stability—are experimentally validated, underscoring parallels to insect circuits. Finally, Webb described honeybee navigation, featuring path integration, vector memories, route optimization, and the famous waggle dance. She proposed that allocentric velocity signals and vector manipulation within the central complex can encode and transmit distances and directions, enabling both sophisticated foraging and inter-bee communication via dance-based cues.
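To make the ring-attractor idea from the mammalian head-direction discussion concrete, here is a minimal rate-model sketch of a one-dimensional ring that integrates an angular-velocity signal via an asymmetric connectivity component. It is a generic textbook-style toy, not the specific fly or mammalian circuit models presented, and the normalization step is an ad hoc stabilizer.

```python
import numpy as np

n = 60                                  # neurons tiling heading angles on a ring
theta = np.linspace(0, 2 * np.pi, n, endpoint=False)

# Symmetric (bump-sustaining) and asymmetric (bump-shifting) connectivity components.
w_sym = np.cos(theta[:, None] - theta[None, :])
w_asym = np.sin(theta[:, None] - theta[None, :])

r = np.exp(np.cos(theta - np.pi))       # initial activity bump centred at pi
r /= r.max()

dt, tau = 0.01, 0.1                     # integration step and neural time constant (s)
ang_vel = 2.0                           # angular-velocity input (arbitrary units)

for _ in range(500):
    drive = (w_sym + ang_vel * tau * w_asym) @ r / n
    r += dt / tau * (-r + np.maximum(drive, 0.0))
    r = np.maximum(r, 0.0)
    r /= r.max() + 1e-9                 # ad hoc normalization keeps the bump alive

# Decode the heading estimate as the population-vector angle.
decoded = np.angle(np.sum(r * np.exp(1j * theta)))
print(f"bump position after integrating velocity: {decoded:.2f} rad (started at {np.pi:.2f})")
```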
Hippocampal sharp wave ripples for selection and consolidation of memories
Navigating semantic spaces: recycling the brain GPS for higher-level cognition
Humans share with other animals a complex neuronal machinery that evolved to support navigation in physical space, supporting wayfinding and path integration. In my talk I will present a series of recent neuroimaging studies in humans, performed in my lab, aimed at investigating the idea that this same neural navigation system (the “brain GPS”) is also used to organize and navigate concepts and memories, and that abstract and spatial representations rely on a common neural fabric. I will argue that this might represent a novel example of “cortical recycling”, whereby the neuronal machinery that primarily evolved, in lower-level animals, to represent relationships between spatial locations and to navigate space is reused in humans to encode relationships between concepts in an internal, abstract representational space of meaning.
Unifying the mechanisms of hippocampal episodic memory and prefrontal working memory
Remembering events in the past is crucial to intelligent behaviour. Flexible memory retrieval, beyond simple recall, requires a model of how events relate to one another. Two key brain systems are implicated in this process: the hippocampal episodic memory (EM) system and the prefrontal working memory (WM) system. While an understanding of the hippocampal system, from computation to algorithm and representation, is emerging, less is understood about how the prefrontal WM system can give rise to flexible computations beyond simple memory retrieval, and even less is understood about how the two systems relate to each other. Here we develop a mathematical theory relating the algorithms and representations of EM and WM by showing a duality between storing memories in synapses versus neural activity. In doing so, we develop a formal theory of the algorithm and representation of prefrontal WM as structured, and controllable, neural subspaces (termed activity slots). By building models using this formalism, we elucidate the differences, similarities, and trade-offs between the hippocampal and prefrontal algorithms. Lastly, we show that several prefrontal representations in tasks ranging from list learning to cue-dependent recall are unified as controllable activity slots. Our results unify frontal and temporal representations of memory, and offer a new basis for understanding the prefrontal representation of WM.
Targeting Maladaptive Emotional Memories to Treat Mental Health Disorders: Insights from Rodent Models
Maladaptive emotional memories contribute to the persistence of numerous mental health disorders, including post-traumatic stress disorder (PTSD), drug addiction and obsessive-compulsive disorder (OCD). Using rodent behavioural models of the psychological processes relevant to these disorders, it is possible to identify potential treatment targets for the development of new therapies, including those based upon disrupting the reconsolidation of maladaptive emotional memories. Using examples from rodent models relevant to multiple mental health disorders, this talk will consider some of the opportunities and challenges that this approach provides.
Are place cells just memory cells? Probably yes
Neurons in the rodent hippocampus appear to encode the position of the animal in physical space during movement. Individual "place cells" fire in restricted sub-regions of an environment, a feature often taken as evidence that the hippocampus encodes a map of space that subserves navigation. But these same neurons exhibit complex responses to many other variables that defy explanation by position alone, and the hippocampus is known to be more broadly critical for memory formation. Here we elaborate and test a theory of hippocampal coding which produces place cells as a general consequence of efficient memory coding. We constructed neural networks that actively exploit the correlations between memories in order to learn compressed representations of experience. Place cells readily emerged in the trained model, due to the correlations in sensory input between experiences at nearby locations. Notably, these properties were highly sensitive to the compressibility of the sensory environment, with place field size and population coding level in dynamic opposition to optimally encode the correlations between experiences. The effects of learning were also strongly biphasic: nearby locations are represented more similarly following training, while locations with intermediate similarity become increasingly decorrelated, both distance-dependent effects that scaled with the compressibility of the input features. Using virtual reality and 2-photon functional calcium imaging in head-fixed mice, we recorded the simultaneous activity of thousands of hippocampal neurons during virtual exploration to test these predictions. Varying the compressibility of sensory information in the environment produced systematic changes in place cell properties that reflected the changing input statistics, consistent with the theory. We similarly identified representational plasticity during learning, which produced a distance-dependent exchange between compression and pattern separation. These results motivate a more domain-general interpretation of hippocampal computation, one that is naturally compatible with earlier theories on the circuit's importance for episodic memory formation. Work done in collaboration with James Priestley, Lorenzo Posani, Marcus Benna, Attila Losonczy.
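The flavor of the compression argument can be sketched with a tiny autoencoder trained on spatially correlated "sensory" inputs along a 1D track. The architecture, input statistics, and the crude field-size readout below are illustrative assumptions only, not the networks used in the study, and localized fields are not guaranteed to emerge without further constraints.

```python
import numpy as np

rng = np.random.default_rng(1)

# "Sensory" inputs along a 1D track: features vary smoothly with position,
# so inputs at nearby locations are correlated (a compressible structure).
n_pos, n_feat, n_hidden = 100, 40, 12
positions = np.linspace(0, 1, n_pos)
centers = rng.uniform(0, 1, n_feat)
width = 0.15                                   # smoothness of the sensory world
X = np.exp(-(positions[:, None] - centers[None, :]) ** 2 / (2 * width ** 2))
X += 0.05 * rng.standard_normal(X.shape)

# A tiny ReLU autoencoder compresses the input through a narrow hidden layer.
W1 = 0.1 * rng.standard_normal((n_feat, n_hidden))
W2 = 0.1 * rng.standard_normal((n_hidden, n_feat))
lr = 0.02
for _ in range(3000):
    H = np.maximum(X @ W1, 0.0)                # hidden activity at each position
    err = H @ W2 - X                           # reconstruction error
    dW2 = H.T @ err / n_pos
    dH = (err @ W2.T) * (H > 0)
    dW1 = X.T @ dH / n_pos
    W1 -= lr * dW1
    W2 -= lr * dW2

# Spatial tuning of the compressed code: fraction of the track where each
# hidden unit exceeds half of its own maximum activity ("field size").
H = np.maximum(X @ W1, 0.0)
field_size = (H > 0.5 * H.max(axis=0)).mean(axis=0)
print("reconstruction error:", round(float(np.mean((H @ W2 - X) ** 2)), 4))
print("mean fraction of track covered per hidden unit:", round(float(field_size.mean()), 2))
```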
Central place foraging: how insects anchor spatial information
Many insect species maintain a nest around which their foraging behaviour is centered, and can use path integration to maintain an accurate estimate of their distance and direction (a vector) to their nest. Some species, such as bees and ants, can also store the vector information for multiple salient locations in the world, such as food sources, in a common coordinate system. They can also use remembered views of the terrain around salient locations or along travelled routes to guide return. Recent modelling of these abilities shows convergence on a small set of algorithms and assumptions that appear sufficient to account for a wide range of behavioural data, and which can be mapped to specific insect brain circuits. Notably, this does not include any significant topological knowledge: the insect does not need to recover the information (implicit in their vector memory) about the relationships between salient places; nor to maintain any connectedness or ordering information between view memories; nor to form any associations between views and vectors. However, there remains some experimental evidence not fully explained by these algorithms that may point towards the existence of a more complex or integrated mental map in insects.
Molecular Memories
Associative memory of structured knowledge
A long-standing challenge in biological and artificial intelligence is to understand how new knowledge can be constructed from known building blocks in a way that is amenable to computation by neuronal circuits. Here we focus on the task of storage and recall of structured knowledge in long-term memory. Specifically, we ask how recurrent neuronal networks can store and retrieve multiple knowledge structures. We model each structure as a set of binary relations between events and attributes (attributes may represent, e.g., temporal order, spatial location, or role in a semantic structure), and map each structure to a distributed neuronal activity pattern using a vector symbolic architecture (VSA) scheme. We then use associative memory plasticity rules to store the binarized patterns as fixed points in a recurrent network. By a combination of signal-to-noise analysis and numerical simulations, we demonstrate that our model allows for efficient storage of these knowledge structures, such that the memorized structures as well as their individual building blocks (e.g., events and attributes) can subsequently be retrieved from partial retrieval cues. We show that long-term memory of structured knowledge relies on a new principle of computation beyond memory basins. Finally, we show that our model can be extended to store sequences of memories as single attractors.
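A minimal sketch of the general recipe follows, assuming Hadamard (element-wise) binding as the VSA operation and a standard Hopfield-style Hebbian store; the encoding and plasticity rules used in the talk may differ.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 1000                                  # dimensionality of the distributed codes

def rand_vec():
    return rng.choice([-1, 1], size=d).astype(float)   # dense bipolar vector

def bind(a, b):
    return a * b                          # element-wise binding (self-inverse)

# Roles (attributes) and fillers (events / contents).
roles = {name: rand_vec() for name in ["when", "where", "who"]}
fillers = {name: rand_vec() for name in ["morning", "kitchen", "alice",
                                         "evening", "garden", "bob"]}

def encode(structure):
    # Superpose all role-filler bindings of one structure, then binarize.
    s = sum(bind(roles[r], fillers[f]) for r, f in structure.items())
    return np.sign(s + 0.5)

patterns = [encode({"when": "morning", "where": "kitchen", "who": "alice"}),
            encode({"when": "evening", "where": "garden", "who": "bob"})]

# Hebbian (Hopfield-style) storage of the structure codes as network fixed points.
W = sum(np.outer(p, p) for p in patterns) / d
np.fill_diagonal(W, 0.0)

# Retrieval from a partial cue: only one binding of the first structure is given.
x = bind(roles["where"], fillers["kitchen"])
for _ in range(10):
    x = np.sign(W @ x + 1e-9)             # recurrent pattern completion

print("overlap with structure 0 after completion:", int(x @ patterns[0]), "/", d)

# Unbinding recovers individual building blocks from the retrieved structure.
who_code = bind(x, roles["who"])
similarities = {name: int(who_code @ v) for name, v in fillers.items()}
print("decoded 'who' filler:", max(similarities, key=similarities.get))
```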
Flexible codes and loci of visual working memory
Neural correlates of visual working memory have been found in early visual, parietal, and prefrontal regions. These findings have spurred fruitful debate over how and where in the brain memories might be represented. Here, I will present data from multiple experiments to demonstrate how a focus on behavioral requirements can unveil a more comprehensive understanding of the visual working memory system. Specifically, items in working memory must be maintained in a highly robust manner, resilient to interference. At the same time, storage mechanisms must preserve a high degree of flexibility in case of changing behavioral goals. Several examples will be explored in which visual memory representations are shown to undergo transformations, and even shift their cortical locus alongside their coding format based on specifics of the task.
Drifting assemblies for persistent memory: Neuron transitions and unsupervised compensation
Change is ubiquitous in living beings. In particular, the connectome and neural representations can change. Nevertheless, behaviors and memories often persist over long times. In a standard model, associative memories are represented by assemblies of strongly interconnected neurons. For faithful storage these assemblies are assumed to consist of the same neurons over time. We propose a contrasting memory model with complete temporal remodeling of assemblies, based on experimentally observed changes of synapses and neural representations. The assemblies drift freely as noisy autonomous network activity or spontaneous synaptic turnover induce neuron exchange. The exchange can be described analytically by reduced, random walk models derived from spiking neural network dynamics or from first principles. The gradual exchange allows activity-dependent and homeostatic plasticity to conserve the representational structure and keep inputs, outputs and assemblies consistent. This leads to persistent memory. Our findings explain recent experimental results on the temporal evolution of fear memory representations and suggest that memory systems need to be understood in their completeness, as individual parts may constantly change.
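The reduced random-walk picture can be caricatured as follows: a single assembly of fixed size whose membership is exchanged one neuron at a time, so that overlap with the original membership decays to zero while the assembly itself persists. The rates and sizes are arbitrary illustration values, and the compensating plasticity is only described in a comment, not simulated.

```python
import numpy as np

rng = np.random.default_rng(3)

n_neurons, assembly_size, n_steps = 500, 50, 4000
exchange_prob = 0.5            # chance per step that one membership exchange occurs

members = set(rng.choice(n_neurons, size=assembly_size, replace=False).tolist())
initial = set(members)
overlap = []

for _ in range(n_steps):
    if rng.random() < exchange_prob:
        # One member drops out and one outside neuron is recruited: the assembly
        # drifts while its size stays constant; in the full model, compensating
        # activity-dependent and homeostatic plasticity keeps inputs and outputs
        # consistent with the current membership.
        leaving = int(rng.choice(sorted(members)))
        members.remove(leaving)
        outside = [i for i in range(n_neurons) if i not in members]
        members.add(int(rng.choice(outside)))
    overlap.append(len(members & initial) / assembly_size)

print(f"overlap with the initial assembly after {n_steps} steps: {overlap[-1]:.2f}")
print(f"assembly size is unchanged: {len(members)} neurons")
```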
The 15th David Smith Lecture in Anatomical Neuropharmacology: Professor Tim Bliss, "Memories of long-term potentiation"
The David Smith Lectures in Anatomical Neuropharmacology, part of the 'Pharmacology, Anatomical Neuropharmacology and Drug Discovery Seminars Series', Department of Pharmacology, University of Oxford. The 15th David Smith Award Lecture in Anatomical Neuropharmacology will be delivered by Professor Tim Bliss, Visiting Professor at UCL and the Frontier Institutes of Science and Technology, Xi’an Jiaotong University, China, and is hosted by Professor Nigel Emptage. This award lecture was set up to celebrate the vision of Professor A David Smith, namely, that explanations of the action of drugs on the brain require the definition of neuronal circuits and of the location and interactions of molecules. Tim Bliss gained his PhD at McGill University in Canada. He joined the MRC National Institute for Medical Research in Mill Hill, London in 1967, where he remained throughout his career. His work with Terje Lømo in the late 1960s established the phenomenon of long-term potentiation (LTP) as the dominant synaptic model of how the mammalian brain stores memories. He was elected a Fellow of the Royal Society in 1994 and is a founding fellow of the Academy of Medical Sciences. He shared the Bristol Myers Squibb award for Neuroscience with Eric Kandel in 1991 and the Ipsen Prize for Neural Plasticity with Richard Morris and Yadin Dudai in 2013. In May 2012 he gave the annual Croonian Lecture at the Royal Society on ‘The Mechanics of Memory’. In 2016 Tim, with Graham Collingridge and Richard Morris, shared the Brain Prize, one of the world's most coveted science prizes. Abstract: In 1966 there appeared in Acta Physiologica Scandinavica an abstract of a talk given by Terje Lømo, a PhD student in Per Andersen’s laboratory at the University of Oslo. In it Lømo described the long-lasting potentiation of synaptic responses in the dentate gyrus of the anaesthetised rabbit that followed repeated episodes of 10–20 Hz stimulation of the perforant path. Thus heralded, and almost entirely unnoticed, one of the most consequential discoveries of 20th-century neuroscience was ushered into the world. Two years later I arrived in Oslo as a visiting post-doc from the National Institute for Medical Research in Mill Hill, London. In this talk I recall the events that led us to embark on a systematic reinvestigation of the phenomenon now known as long-term potentiation (LTP), and will then go on to describe the discoveries and controversies that enlivened the early decades of research into synaptic plasticity in the mammalian brain. I will end with an observer’s view of the current state of research in the field, and what we might expect from it in the future.
Meta-learning synaptic plasticity and memory addressing for continual familiarity detection
Over the course of a lifetime, we process a continual stream of information. Extracted from this stream, memories must be efficiently encoded and stored in an addressable manner for retrieval. To explore potential mechanisms, we consider a familiarity detection task where a subject reports whether an image has been previously encountered. We design a feedforward network endowed with synaptic plasticity and an addressing matrix, meta-learned to optimize familiarity detection over long intervals. We find that anti-Hebbian plasticity leads to better performance than Hebbian plasticity and replicates experimental results such as repetition suppression. A combinatorial addressing function emerges, selecting a unique neuron as an index into the synaptic memory matrix for storage or retrieval. Unlike previous models, this network operates continuously and generalizes to intervals it has not been trained on. Our work suggests a biologically plausible mechanism for continual learning, and demonstrates an effective application of machine learning for neuroscience discovery.
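For intuition, the sketch below implements a hand-crafted anti-Hebbian familiarity detector rather than the meta-learned network from the talk: one anti-Hebbian update per studied item suppresses the network's later response to that item, and a simple threshold separates old from new. All sizes and the readout are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

n = 200                                # input units (binarized "image" features)
n_seen = 100                           # images presented once each
eta = 1.0 / n                          # learning rate

W = np.zeros((n, n))
seen = rng.choice([-1.0, 1.0], size=(n_seen, n))

# Study phase: each presentation makes an anti-Hebbian change, which selectively
# suppresses the network's later response to that same pattern (cf. repetition
# suppression).
for x in seen:
    W -= eta * np.outer(x, x)

def response(x):
    return float(x @ W @ x)            # simple energy-like readout

novel = rng.choice([-1.0, 1.0], size=(n_seen, n))
r_seen = np.array([response(x) for x in seen])
r_novel = np.array([response(x) for x in novel])

# Studied items give a lower (more negative) response than novel ones; a threshold
# between the two response distributions implements old/new familiarity judgements.
threshold = (r_seen.mean() + r_novel.mean()) / 2
accuracy = ((r_seen < threshold).mean() + (r_novel >= threshold).mean()) / 2
print(f"familiarity discrimination accuracy: {accuracy:.2f}")
```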
Co-allocation to overlapping dendritic branches in the retrosplenial cortex integrates memories across time
Events occurring close in time are often linked in memory, providing an episodic timeline and a framework for those memories. Recent studies suggest that memories acquired close in time are encoded by overlapping neuronal ensembles, but whether dendritic plasticity plays a role in linking memories is unknown. Using activity-dependent labeling and manipulation, as well as longitudinal one- and two-photon imaging of somatic and dendritic compartments in the retrosplenial cortex (RSC), we show that memory linking is dependent not only on ensemble overlap in the RSC, but also on branch-specific dendritic allocation mechanisms. These results demonstrate a causal role for dendritic mechanisms in memory integration and reveal a novel set of rules that govern how linked and independent memories are allocated to dendritic compartments.
Brain-body interactions that modulate fear
In most animals, including humans, emotions occur together with changes in the body, such as variations in breathing or heart rate, sweaty palms, or facial expressions. It has been suggested that this interoceptive information acts as a feedback signal to the brain, enabling adaptive modulation of emotions that is essential for survival. As such, fear, one of our basic emotions, must be kept in a functional balance to minimize risk-taking while allowing for the pursuit of essential needs. However, the neural mechanisms underlying this adaptive modulation of fear remain poorly understood. In this talk, I will present and discuss data from my PhD work, in which we uncover a crucial role for the interoceptive insular cortex in detecting changes in heart rate to maintain an equilibrium between the extinction and maintenance of fear memories in mice.
A nonlinear shot noise model for calcium-based synaptic plasticity
Activity-dependent synaptic plasticity is considered to be a primary mechanism underlying learning and memory. Yet it is unclear whether plasticity rules such as STDP measured in vitro apply in vivo. Network models with STDP predict that activity patterns (e.g., place-cell spatial selectivity) should change much faster than observed experimentally. We address this gap by investigating a nonlinear calcium-based plasticity rule fit to experiments done in physiological conditions. In this model, LTP and LTD result from intracellular calcium transients arising almost exclusively from synchronous coactivation of pre- and postsynaptic neurons. We analytically approximate the full distribution of nonlinear calcium transients as a function of pre- and postsynaptic firing rates and temporal correlations. This analysis directly relates activity statistics that can be measured in vivo to the changes in synaptic efficacy they cause. Our results highlight that both high firing rates and temporal correlations can lead to significant changes in synaptic efficacy. Using a mean-field theory, we show that the nonlinear plasticity rule, without any fine-tuning, gives a stable, unimodal synaptic weight distribution characterized by many strong synapses which remain stable over long periods of time, consistent with electrophysiological and behavioral studies. Moreover, our theory explains how memories encoded by strong synapses can be preferentially stabilized by the plasticity rule. We confirmed our analytical results in a spiking recurrent network. Interestingly, although most synapses are weak and undergo rapid turnover, the fraction of strong synapses is sufficient to support realistic spiking dynamics and serves to maintain the network’s cluster structure. Our results provide a mechanistic understanding of how stable memories may emerge at the behavioral level from an STDP rule measured in physiological conditions. Furthermore, the plasticity rule we investigate is mathematically equivalent to other learning rules which rely on the statistics of coincidences, so we expect that our formalism will be useful for studying other learning processes beyond the calcium-based plasticity rule.
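A schematic, event-driven toy of a calcium-threshold plasticity rule is sketched below; the thresholds, influx amplitudes, and drift rates are made-up illustrative values, not the fitted parameters of the model in the abstract.

```python
import numpy as np

rng = np.random.default_rng(5)

dt, T = 1e-3, 200.0                 # time step and simulated duration (s)
tau_ca = 0.02                       # calcium decay time constant (s)
c_pre, c_post = 0.6, 0.6            # calcium influx per pre / post spike
theta_d, theta_p = 0.8, 1.1         # depression / potentiation thresholds
gamma_d, gamma_p = 0.1, 1.0         # drift rates while calcium exceeds each threshold

def run(rate_pre, rate_post, coincidence=0.0):
    """Simulate one synapse driven by Poisson pre/post spikes.

    `coincidence` is the probability that a presynaptic spike is accompanied by
    a postsynaptic spike in the same time bin (an artificial correlation knob).
    """
    w, ca = 0.5, 0.0
    for _ in range(int(T / dt)):
        pre = rng.random() < rate_pre * dt
        post = rng.random() < rate_post * dt
        if pre and rng.random() < coincidence:
            post = True
        ca += c_pre * pre + c_post * post
        ca -= dt / tau_ca * ca
        # Single spikes stay below both thresholds; near-coincident pre and post
        # spikes sum to a large transient that drives efficacy changes.
        dw = gamma_p * (ca > theta_p) - gamma_d * (ca > theta_d)
        w = float(np.clip(w + dt * dw, 0.0, 1.0))
    return w

print("2 Hz / 2 Hz, uncorrelated  :", round(run(2.0, 2.0, coincidence=0.0), 2))
print("2 Hz / 2 Hz, correlated    :", round(run(2.0, 2.0, coincidence=0.5), 2))
print("20 Hz / 20 Hz, uncorrelated:", round(run(20.0, 20.0, coincidence=0.0), 2))
```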
NMC4 Short Talk: Multiscale and extended retrieval of associative memory structures in a cortical model of local-global inhibition balance
Inhibitory neurons take on many forms and functions. How this diversity contributes to memory function is not completely known. Previous formal studies indicate that inhibition differentiated by local and global connectivity in associative memory networks functions to rescale the level of retrieval of excitatory assemblies. However, such studies lack biological detail: they do not distinguish between neuron types (excitatory and inhibitory), use unrealistic connection schemas, and rely on non-sparse assemblies. In this study, we present a rate-based cortical model where neurons are distinguished (as excitatory, local inhibitory, or global inhibitory), connected more realistically, and where memory items correspond to sparse excitatory assemblies. We use this model to study how local-global inhibition balance can alter memory retrieval in associative memory structures, including naturalistic and artificial structures. Experimental studies have reported that inhibitory neurons and their sub-types respond uniquely to specific stimuli and can form sophisticated, joint excitatory-inhibitory assemblies. Our model suggests such joint assemblies, as well as a distribution and rebalancing of overall inhibition between two inhibitory sub-populations – one connected to excitatory assemblies locally and the other connected globally – can quadruple the range of retrieval across related memories. We identify a possible functional role for local-global inhibitory balance: in the context of choice or preference among relationships, it permits and maintains a broader range of memory items when local inhibition is dominant and, conversely, consolidates and strengthens a smaller range of memory items when global inhibition is dominant. This model therefore highlights a biologically plausible and behaviourally useful function of inhibitory diversity in memory.
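The local-global trade-off can be illustrated with a reduced rate model in which each memory's activity is damped by its own local inhibition and by inhibition driven by total activity. The connectivity and numbers below are invented for illustration and are far simpler than the cortical model described above.

```python
import numpy as np

def retrieve(cue, w_local, w_global, w_rec=1.2, steps=400, dt=0.05):
    """Relax a reduced rate model: each entry is one excitatory memory assembly,
    damped by local inhibition (its own activity) and global inhibition (summed
    activity of all assemblies)."""
    r = cue.copy()
    for _ in range(steps):
        inhibition = w_local * r + w_global * r.sum()
        drive = w_rec * r + cue - inhibition
        r = r + dt * (-r + np.maximum(drive, 0.0))
    return r

# One strongly cued memory plus several partially cued, related memories.
cue = np.array([1.0, 0.6, 0.5, 0.4, 0.3])

local_dominant = retrieve(cue, w_local=0.8, w_global=0.05)
global_dominant = retrieve(cue, w_local=0.05, w_global=0.8)

# Local-dominant inhibition keeps a broad range of related memories active;
# global-dominant inhibition concentrates retrieval on the strongest item.
print("memories above threshold, local-dominant :", int((local_dominant > 0.1).sum()))
print("memories above threshold, global-dominant:", int((global_dominant > 0.1).sum()))
```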
Learning and updating structured knowledge
During our everyday lives, much of what we experience is familiar and predictable. We typically follow the same morning routine, take the same route to work, and encounter the same colleagues. However, every once in a while, we encounter a surprising event that violates our expectations. When we encounter such violations of our expectations, it is adaptive to update our internal model of the world in order to make better predictions in the future. The hippocampus is thought to support both the learning of the predictable structure of our environment and the detection and encoding of violations. However, the hippocampus is a complex and heterogeneous structure, composed of different subfields that are thought to subserve different functions. As such, it is not yet known how the hippocampus accomplishes the learning and updating of structured knowledge. Using behavioral methods and high-resolution fMRI, I'll show that during learning of repeated and predicted events, hippocampal subfields differentially integrate and separate event representations, thus learning the structure of ongoing experience. I will then discuss how, when events violate our predictions, there is a shift in communication between hippocampal subfields, potentially allowing for efficient encoding of the novel and surprising information. If time permits, I'll present an additional behavioral study showing that violations of predictions promote detailed memories. Together, these studies advance our understanding of how we adaptively learn and update our knowledge.
Memory for Latent Representations: An Account of Working Memory that Builds on Visual Knowledge for Efficient and Detailed Visual Representations
Visual knowledge obtained from our lifelong experience of the world plays a critical role in our ability to build short-term memories. We propose a mechanistic explanation of how working memory (WM) representations are built from the latent representations of visual knowledge and can then be reconstructed. The proposed model, Memory for Latent Representations (MLR), features a variational autoencoder with an architecture that corresponds broadly to the human visual system and an activation-based binding pool of neurons that binds items’ attributes to tokenized representations. The simulation results revealed that shape information for stimuli that the model was trained on can be encoded and retrieved efficiently from latents in higher levels of the visual hierarchy. On the other hand, novel patterns that are completely outside the training set can be stored from a single exposure using only latents from early layers of the visual system. Moreover, the representation of a given stimulus can have multiple codes, representing specific visual features such as shape or color, in addition to categorical information. Finally, we validated our model by testing a series of predictions against behavioral results acquired from WM tasks. The model provides a compelling demonstration of visual knowledge yielding the formation of compact visual representations for efficient memory encoding.
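A toy version of the token-based binding-pool idea is sketched below, assuming random linear binding matrices and stand-in latent vectors (no actual variational autoencoder): attributes of several items are superimposed in one shared pool and read back through each item's token. This is not the MLR architecture itself, only an illustration of token-addressed storage in a shared pool.

```python
import numpy as np

rng = np.random.default_rng(6)

d_latent, d_pool, n_items = 20, 2000, 3   # latent size, binding-pool size, items held

# Each token (item placeholder) owns a random binding matrix into the shared pool.
tokens = [rng.standard_normal((d_pool, d_latent)) / np.sqrt(d_latent)
          for _ in range(n_items)]

# Latent attribute vectors for each item (stand-ins for autoencoder latents).
latents = [rng.standard_normal(d_latent) for _ in range(n_items)]

# Encoding: superimpose all token-bound latents in a single pool activity pattern.
pool = sum(T @ z for T, z in zip(tokens, latents))

# Retrieval: project the pool back through one token's binding matrix; crosstalk
# from the other items appears as noise on the reconstructed latent.
for i, (T, z) in enumerate(zip(tokens, latents)):
    z_hat = (d_latent / d_pool) * (T.T @ pool)
    corr = float(np.corrcoef(z_hat, z)[0, 1])
    print(f"item {i}: correlation between retrieved and stored latent = {corr:.2f}")
```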
Making memories in mice
Understanding how the brain uses information is a fundamental goal of neuroscience. Several human disorders (ranging from autism spectrum disorder to PTSD to Alzheimer’s disease) may stem from disrupted information processing. Therefore, this basic knowledge is not only critical for understanding normal brain function, but also vital for the development of new treatment strategies for these disorders. Memory may be defined as the retention over time of internal representations gained through experience, and the capacity to reconstruct these representations at later times. Long-lasting physical brain changes (‘engrams’) are thought to encode these internal representations. The concept of a physical memory trace likely originated in ancient Greece, although it wasn’t until 1904 that Richard Semon first coined the term ‘engram’. Despite its long history, finding a specific engram has been challenging, likely because an engram is encoded at multiple levels (epigenetic, synaptic, cell assembly). My lab is interested in understanding how specific neurons are recruited or allocated to an engram, and how neuronal membership in an engram may change over time or with new experience. Here I will describe both older and new unpublished data in our efforts to understand memories in mice.
Imaging memory consolidation in wakefulness and sleep
New memories are initially labile and have to be consolidated into stable long-term representations. Current theories assume that this is supported by a shift in the neural substrate that supports the memory, away from rapidly plastic hippocampal networks towards more stable representations in the neocortex. Rehearsal, i.e. repeated activation of the neural circuits that store a memory, is thought to crucially contribute to the formation of neocortical long-term memory representations. This may either be achieved by repeated study during wakefulness or by a covert reactivation of memory traces during offline periods, such as quiet rest or sleep. My research investigates memory consolidation in the human brain with multivariate decoding of neural processing and non-invasive in-vivo imaging of microstructural plasticity. Using pattern classification on recordings of electrical brain activity, I show that we spontaneously reprocess memories during offline periods in both sleep and wakefulness, and that this reactivation benefits memory retention. In related work, we demonstrate that active rehearsal of learning material during wakefulness can facilitate rapid systems consolidation, leading to an immediate formation of lasting memory engrams in the neocortex. These representations satisfy general mnemonic criteria and can not only be imaged with fMRI while memories are actively processed but also observed with diffusion-weighted imaging when the traces lie dormant. Importantly, sleep seems to hold a crucial role in stabilizing the changes in the contribution of memory systems initiated by rehearsal during wakefulness, indicating that online and offline reactivation might jointly contribute to forming long-term memories. Characterizing the covert processes that decide whether, and in which ways, our brains store new information is crucial to our understanding of memory formation. Directly imaging consolidation thus opens great opportunities for memory research.
Handling multiple memories in the hippocampus network
Neural mechanisms for memory and emotional processing during sleep
The hippocampus and the amygdala are two structures required for emotional memory. While the hippocampus encodes the contextual part of the memory, the amygdala processes its emotional valence. During Non-REM sleep, the hippocampus displays high-frequency oscillations called “ripples”. Our early work shows that the suppression of ripples during sleep impairs performance on a spatial task, underscoring their crucial role in memory consolidation. We more recently showed that the joint amygdala-hippocampus activity linked to aversive learning is reinstated during the following Non-REM sleep epochs, specifically during ripples. This mechanism potentially sustains the consolidation of aversive associative memories during Non-REM sleep. On the other hand, REM sleep is associated with regular 8 Hz theta oscillations and is believed to play a role in emotional processing. A crucial first step in understanding this role is to unravel sleep dynamics related to REM sleep in the hippocampus-amygdala network.
Co-tuned, balanced excitation and inhibition in olfactory memory networks
Odor memories are exceptionally robust and essential for the survival of many species. In rodents, the olfactory cortex shows features of an autoassociative memory network and plays a key role in the retrieval of olfactory memories (Meissner-Bernard et al., 2019). Interestingly, the telencephalic area Dp, the zebrafish homolog of olfactory cortex, transiently enters a state of precise balance during the presentation of an odor (Rupprecht and Friedrich, 2018). This state is characterized by large synaptic conductances (relative to the resting conductance) and by co-tuning of excitation and inhibition in odor space and in time at the level of individual neurons. Our aim is to understand how this precise synaptic balance affects memory function. For this purpose, we build a simplified, yet biologically plausible spiking neural network model of Dp using experimental observations as constraints: besides precise balance, key features of Dp dynamics include low firing rates, odor-specific population activity and a dominance of recurrent inputs from Dp neurons relative to afferent inputs from neurons in the olfactory bulb. To achieve co-tuning of excitation and inhibition, we introduce structured connectivity by increasing connection probabilities and/or strength among ensembles of excitatory and inhibitory neurons. These ensembles are therefore structural memories of activity patterns representing specific odors. They form functional inhibitory-stabilized subnetworks, as identified by the “paradoxical effect” signature (Tsodyks et al., 1997): inhibition of inhibitory “memory” neurons leads to an increase of their activity. We investigate the benefits of co-tuning for olfactory and memory processing, by comparing inhibitory-stabilized networks with and without co-tuning. We find that co-tuned excitation and inhibition improves robustness to noise, pattern completion and pattern separation. In other words, retrieval of stored information from partial or degraded sensory inputs is enhanced, which is relevant in light of the instability of the olfactory environment. Furthermore, in co-tuned networks, odor-evoked activation of stored patterns does not persist after removal of the stimulus and may therefore subserve fast pattern classification. These findings provide valuable insights into the computations performed by the olfactory cortex, and into general effects of balanced state dynamics in associative memory networks.
A Changing View of Vision: From Molecules to Behavior in Zebrafish
All sensory perception and every coordinated movement, as well as feelings, memories and motivation, arise from the bustling activity of many millions of interconnected cells in the brain. The ultimate function of this elaborate network is to generate behavior. We use zebrafish as our experimental model, employing a diverse array of molecular, genetic, optical, connectomic, behavioral and computational approaches. The goal of our research is to understand how neuronal circuits integrate sensory inputs and internal state and convert this information into behavioral responses.
Herbert Jasper Lecture
There is a long-standing tension between the notion that the hippocampal formation is essentially a spatial mapping system, and the notion that it plays an essential role in the establishment of episodic memory and the consolidation of such memory into structured knowledge about the world. One theory that resolves this tension is the notion that the hippocampus generates rather arbitrary 'index' codes that serve initially to link attributes of episodic memories that are stored in widely dispersed and only weakly connected neocortical modules. I will show how an essentially 'spatial' coding mechanism, with some tweaks, provides an ideal indexing system and discuss the neural coding strategies that the hippocampus apparently uses to overcome some biological constraints affecting the possibility of shipping the index code out widely to the neocortex. Finally, I will present new data suggesting that the hippocampal index code is indeed transferred to layer II-III of the neocortex.
Exploring the neural landscape of imagination and abstract spaces
External cues imbued with significance can enhance the motivational state of an organism, trigger related memories and influence future planning and goal-directed behavior. At the same time, internal thought and imaginings can moderate and counteract the impact of external motivational cues. The neural underpinnings of imagination have been largely opaque, due to the inherent inaccessibility of mental actions. The talk will describe studies utilizing imagination and tracking how its neural correlates bidirectionally interact with external motivational cues. Stimulus-response associative learning is only one form of memory organization. A more comprehensive and efficient organizational principle is the cognitive map. In the last part of the talk we will examine this concept in the case of abstract memories and social space. Social encounters provide opportunities to become intimate or estranged from others and to gain or lose power over them. The locations of others on the axes of power and affiliation can serve as reference points for our own position in the social space. Research is beginning to uncover the spatial-like neural representation of these social coordinates. We will discuss recent and growing evidence on utilizing the principles of the cognitive map across multiple domains, providing a systematic way of organizing memories to navigate life.
Astrocytes contribute to remote memory formation by modulating hippocampal-cortical communication during learning
How is it that some memories fade in a day while others last forever? The formation of long-lasting (remote) memories depends on the coordinated activity between the hippocampus and frontal cortices, but the timeline of these interactions is debated. Astrocytes, star-shaped glial cells, sense and modify neuronal activity, but their role in remote memory is scarcely explored. We manipulated the activity of hippocampal astrocytes during memory acquisition and discovered that it impaired remote, but not recent, memory retrieval. We also revealed a massive recruitment of cortical-projecting hippocampal neurons during memory acquisition, a process that is specifically inhibited by astrocytic manipulation. Finally, we directly inhibited this projection during memory acquisition to prove its necessity for the formation of remote memory. Our findings reveal that the foundation of remote memory can be established during acquisition through a projection-specific effect of astrocytes.
Anatomical decision-making by cellular collectives: bioelectrical pattern memories, regeneration, and synthetic living organisms
A key question for basic biology and regenerative medicine concerns the way in which evolution exploits physics toward adaptive form and function. While genomes specify the molecular hardware of cells, what algorithms enable cellular collectives to reliably build specific, complex, target morphologies? Our lab studies the way in which all cells, not just neurons, communicate as electrical networks that enable scaling of single-cell properties into collective intelligences that solve problems in anatomical feature space. By learning to read, interpret, and write bioelectrical information in vivo, we have identified some novel controls of growth and form that enable incredible plasticity and robustness in anatomical homeostasis. In this talk, I will describe the fundamental knowledge gaps with respect to anatomical plasticity and pattern control beyond emergence, and discuss our efforts to understand large-scale morphological control circuits. I will show examples in embryogenesis, regeneration, cancer, and synthetic living machines. I will also discuss the implications of this work for not only regenerative medicine, but also for fundamental understanding of the origin of bodyplans and the relationship between genomes and functional anatomy.
Exploring Memories of Scenes
State-of-the-art machine vision models can predict human recognition memory for complex scenes with astonishing accuracy. In this talk I present work that investigated how memorable scenes are actually remembered and experienced by human observers. We found that memorable scenes were recognized largely based on recollection of specific episodic details but also based on familiarity for an entire scene. I thus highlight current limitations in machine vision models emulating human recognition memory, with promising opportunities for future research. Moreover, we were interested in what observers specifically remember about complex scenes. We thus considered the functional role of eye-movements as a window into the content of memories, particularly when observers recollected specific information about a scene. We found that when observers formed a memory representation that they later recollected (compared to scenes that only felt familiar), the overall extent of exploration was broader, with a specific subset of fixations clustered around later to-be-recollected scene content, irrespective of the memorability of a scene. I discuss the critical role that our viewing behavior plays in visual memory formation and retrieval and point to potential implications for machine vision models predicting the content of human memories.
A distinct subcircuit in medial entorhinal cortex mediates learning of interval timing behavior during immobility
Over 60 years of research has established that medial temporal lobe structures, including the hippocampus and entorhinal cortex, are necessary for the formation of episodic memories (i.e. memories of specific personal events that occur in a spatial and temporal context). While prior work to establish the neural mechanisms underlying episodic memory has largely focused on questions related to spatial context, recently we have begun to investigate how these brain structures could be involved in encoding aspects of temporal context. In particular, we have focused on how medial entorhinal cortex, a structure well known for its role in spatial memory, may also be involved in encoding interval time. To answer this question, we have developed an instrumental paradigm for head-fixed mice that requires both immobile interval timing and locomotion-dependent navigation behavior. By combining this behavioral paradigm with large-scale cellular resolution functional imaging and optogenetic-mediated inactivation, our results suggest that MEC is required for learning of interval timing behavior and that interval timing could be mediated through regular, sequential neural activity of a distinct subpopulation of neurons in MEC that encode elapsed time during periods of immobility (Heys and Dombeck, 2018; Heys et al., 2020; Issa et al., 2020). In this talk, I will discuss these findings and our ongoing work to investigate the principles underlying the role of medial temporal lobe structures in timing behavior and episodic memory.
Interacting synapses stabilise both learning and neuronal dynamics in biological networks
Distinct synapses influence one another when they undergo changes, with unclear consequences for neuronal dynamics and function. Here we show that synapses can interact such that excitatory currents are naturally normalised and balanced by inhibitory inputs. This happens when classical spike-timing dependent synaptic plasticity rules are extended by additional mechanisms that incorporate the influence of neighbouring synaptic currents and regulate the amplitude of efficacy changes accordingly. The resulting control of excitatory plasticity by inhibitory activation, and vice versa, gives rise to quick and long-lasting memories as seen experimentally in receptive field plasticity paradigms. In models with additional dendritic structure, we observe experimentally reported clustering of co-active synapses that depends on initial connectivity and morphology. Finally, in recurrent neural networks, rich and stable dynamics with high input sensitivity emerge, providing transient activity that resembles recordings from the motor cortex. Our model provides a general framework for codependent plasticity that frames individual synaptic modifications in the context of population-wide changes, allowing us to connect micro-level physiology with behavioural phenomena.
Conflict or complement: Parallel memories control behaviour in Drosophila
Drosophila can learn to associate odours with reward or punishment and the resulting memories direct odour-specific approach or avoidance behaviours. Recent progress has revealed a straightforward model for learning in which reinforcing dopaminergic neurons assign valence to odour representations in the neural ensemble of the mushroom bodies. Dopamine directed synaptic depression alters the route of odour-driven activity through the mushroom body output network. This circuit configuration and influence of internal state guide the expression of appropriate behaviour. Importantly, learned behaviour is flexible and can be updated as the fly accumulates additional experience. Our latest studies demonstrate that well-informed behaviour is guided by combining parallel conflicting and complementary memories of opposite valence.
Making Memories in Mice
Microglia, memories, and the extracellular space
Microglia are the immune cells of the brain, and play increasingly appreciated roles in synapse formation, brain plasticity, and cognition. A growing appreciation that the immune system is involved in diseases like schizophrenia, epilepsy, and neurodegenerative diseases has led to renewed interest in how microglia regulate synaptic connectivity. Our group previously identified the IL-1 family cytokine Interleukin-33 (IL-33) as a novel regulator of microglial activation and function. I will discuss a mechanism by which microglia regulate synaptic plasticity and long-term memories by engulfing brain extracellular matrix (ECM) proteins. These studies raise the question of how these pathways may be altered or could be modified in the context of disease.
The Cognitive Map Theory – 40 Years On
John O’Keefe is a Professor of Cognitive Neuroscience at UCL and he received the Nobel Prize in Physiology or Medicine in 2014 for his “discoveries of cells that constitute a positioning system in the brain". His revolutionary research on hippocampal place cells provided deeper insight into the neural processes underlying the sense of space. His lab in Sainsbury Wellcome Centre applies a wide range of methods to facilitate our understanding of the role of the entorhinal cortex and hippocampus in spatial memory and the neural mechanisms underlying short-term memories in the amygdala.
The When, Where and What of visual memory formation
The eyes send a continuous stream of information along about two million nerve fibers to the brain, but only a fraction of this information is stored as visual memories. This talk will detail three neurocomputational models that attempt to explain how the visual system makes on-the-fly decisions about how to encode that information. First, the STST family of models (Bowman & Wyble 2007; Wyble, Potter, Bowman & Nieuwenstein 2011) proposes mechanisms for temporal segmentation of continuous input. The conclusion of this work is that the visual system has mechanisms for rapidly creating brief episodes of attention that highlight important moments in time, and also separates each episode from temporally adjacent neighbors to benefit learning. Next, the RAGNAROC model (Wyble et al. 2019) describes a decision process for determining the spatial focus (or foci) of attention in a spatiotopic field and the neural mechanisms that provide enhancement of targets and suppression of highly distracting information. This work highlights the importance of integrating behavioral and electrophysiological data to provide empirical constraints on a neurally plausible model of spatial attention. The model also highlights how a neural circuit can make decisions in a continuous space, rather than among discrete alternatives. Finally, the binding pool (Swan & Wyble 2014; Hedayati, O’Donnell, Wyble in Prep) provides a mechanism for selectively encoding specific attributes (e.g. color, shape, category) of a visual object to be stored in a consolidated memory representation. The binding pool is akin to a holographic memory system that layers representations of select latent representations corresponding to different attributes of a given object. Moreover, it can bind features into distinct objects by linking them to token placeholders. Future work looks toward combining these models into a coherent framework for understanding the full measure of on-the-fly attentional mechanisms and how they improve learning.
Cellular mechanisms of conscious perception
Arguably one of the biggest mysteries in neuroscience is how the brain stores long-term memories. The major challenge for investigating the neural circuit underlying memory formation in the neocortex is the distributed nature of the resulting memory trace throughout the cortex. Here, we used a new behavioral paradigm that enabled us to generate memory traces in a specific cortical location and to specifically examine the mechanisms of memory formation in that region. We found that medial-temporal inputs arrive in neocortical layer 1 where the apical dendrites of cortical pyramidal neurons predominate. These dendrites have active properties that make them sensitive to contextual inputs from other areas that also send axons to layer 1 around the cortex. Blocking the influence of these medial-temporal inputs prevented learning and suppressed resulting dendritic activity. We conclude that layer 1 is the locus for hippocampal-dependent memory formation in the neocortex and propose that this process enhances the sensitivity of the tuft dendrites to contextual inputs.
How Memory Guides Value-Based Decisions
From robots to humans, the ability to learn from experience turns a rigid response system into a flexible, adaptive one. In this talk, I will discuss emerging findings regarding the neural and cognitive mechanisms by which learning shapes decisions. The lecture will focus on how multiple brain regions interact to support learning, what this means for how memories are built, and the consequences for how decisions are made. Results emerging from this work challenge the traditional view of separate learning systems and advance understanding of how memory biases decisions in both adaptive and maladaptive ways.
From oscillations to laminar responses - characterising the neural circuitry of autobiographical memories
Autobiographical memories are the ghosts of our past. Through them we visit places long departed, see faces once familiar, and hear voices now silent. These, often decades-old, personal experiences can be recalled on a whim or come unbidden into our everyday consciousness. Autobiographical memories are crucial to cognition because they facilitate almost everything we do, endow us with a sense of self and underwrite our capacity for autonomy. They are often compromised by common neurological and psychiatric pathologies with devastating effects. Despite autobiographical memories being central to everyday mental life, there is no agreed model of autobiographical memory retrieval, and we lack an understanding of the neural mechanisms involved. This precludes principled interventions to manage or alleviate memory deficits, and to test the efficacy of treatment regimens. This knowledge gap exists because autobiographical memories are challenging to study – they are immersive, multi-faceted, multi-modal, can stretch over long timescales and are grounded in the real world. One missing piece of the puzzle concerns the millisecond neural dynamics of autobiographical memory retrieval. Surprisingly, there are very few magnetoencephalography (MEG) studies examining such recall, despite the important insights this could offer into the activity and interactions of key brain regions such as the hippocampus and ventromedial prefrontal cortex. In this talk I will describe a series of MEG studies aimed at uncovering the neural circuitry underpinning the recollection of autobiographical memories, and how this changes as memories age. I will end by describing our progress on leveraging an exciting new technology – optically pumped MEG (OP-MEG) which, when combined with virtual reality, offers the opportunity to examine millisecond neural responses from the whole brain, including deep structures, while participants move within a virtual environment, with the attendant head motion and vestibular inputs.
The Role of Hippocampal Replay in Memory Consolidation
The hippocampus lies at the centre of a network of brain regions thought to support spatial and episodic memory. Place cells, the principal cells of the hippocampus, represent information about an animal’s spatial location. Yet, during rest and awake quiescence, place cells spontaneously recapitulate past trajectories (‘replay’). Replay has been hypothesised to support systems consolidation – the stabilisation of new memories via maturation of complementary cortical memory traces. Indeed, in recent work we found that place cells and grid cells from the deep medial entorhinal cortex (dMEC, the principal cortical output region of the hippocampus) replayed coherently during rest periods. Importantly, dMEC grid cells lagged place cells by ~11 ms, suggesting that this coordination may reflect consolidation. Moreover, preliminary data show that the dMEC-hippocampal coordination strengthens as an animal becomes familiar with a task, and that it may be led by directionally modulated cells. Finally, ongoing work in my recently established lab shows that replay may represent the mechanism underlying the maturation of episodic/spatial memory in pre-weanling pups. Together, these results indicate that replay may play a central role in ensuring the permanency of memories.
The Gist of False Memory
It has long been known that when viewing a set of images, we misjudge individual elements as being closer to the mean than they really are (Hollingworth, 1910) and recall seeing the (absent) set mean (Deese, 1959; Roediger & McDermott, 1995). Recent studies found that viewing sets of images, simultaneously or sequentially, leads to perception of set statistics (mean, range) with poor memory for individual elements. Ensemble perception has been found for sets of simple images (e.g. circles varying in size or brightness; lines of varying orientation), for complex objects (e.g. faces of varying emotion), and for objects belonging to the same category. Even when the viewed set does not include its mean or prototype, observers nevertheless report and act as if they have seen this central image or object – a form of false memory. Physiologically, detailed sensory information at cortical input levels is processed hierarchically to form an integrated scene gist at higher levels; yet we are aware of the gist before the details. We propose that images and objects belonging to a set or category are represented as their gist, mean or prototype, plus individual differences from that gist. Under constrained viewing conditions, only the gist is perceived and remembered. This theory also provides a basis for compressed neural representation. Extending this theory to scenes and episodes supplies a generalized basis for false memories: they seem right and match generalized expectations, so they are believed without close examination. The theory could be tested by analyzing the typicality of false memories compared with that of rejected alternatives.
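To make the proposed representation concrete, the following is a minimal sketch in Python of the "gist plus deviations" idea described above. The feature vectors, array names, and numbers are all illustrative assumptions, not the authors' stimuli or model: each set member is stored as the set mean plus a residual, and discarding the residuals under constrained viewing leaves only the (possibly never-presented) prototype in memory.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "images": each row is a feature vector for one set member.
items = rng.normal(loc=5.0, scale=1.0, size=(8, 16))

# Encode the set as its gist (the mean) plus per-item residuals.
gist = items.mean(axis=0)          # prototype / set mean
residuals = items - gist           # idiosyncratic differences from the gist

# Full recall: gist + residual reconstructs each item exactly.
recalled_full = gist + residuals
assert np.allclose(recalled_full, items)

# Constrained viewing: only the gist survives; the residuals are lost.
recalled_constrained = np.tile(gist, (len(items), 1))

# Remembered items collapse onto the set mean, i.e. a gist-based
# false memory of the absent prototype.
print("mean distance to gist (full recall):",
      np.linalg.norm(recalled_full - gist, axis=1).mean())
print("mean distance to gist (constrained):",
      np.linalg.norm(recalled_constrained - gist, axis=1).mean())
```

The same scheme illustrates the claimed compression: storing one gist vector plus low-amplitude residuals is cheaper than storing every item at full fidelity, and dropping residuals degrades item memory while leaving the gist intact.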
Multiple maps for navigation
Over the last several decades, the tractable response properties of parahippocampal neurons have provided a key entry point for understanding the cognitive process of self-localization: the ability to know where you are currently located in space. Defined by functionally discrete response properties, neurons in the medial entorhinal cortex and hippocampus are proposed to provide the basis for an internal neural map of space, which enables animals to perform path-integration-based spatial navigation and supports the formation of spatial memories. My lab focuses on understanding the mechanisms that generate this neural map of space and how this map is used to support behavior. In this talk, I’ll discuss how learning and experience shape our internal neural maps of space to guide behavior.
Contextual inference underlies the learning of sensorimotor repertoires
Humans spend a lifetime learning, storing and refining a repertoire of motor memories. However, it is unknown what principle determines how our continuous stream of sensorimotor experience is segmented into separate memories, and how we adapt and use this growing repertoire. Here we develop a principled theory of motor learning based on the key insight that memory creation, updating, and expression are all controlled by a single computation – contextual inference. Unlike dominant theories of single-context learning, our repertoire-learning model accounts for key features of motor learning that previously had no unified explanation and predicts novel phenomena, which we confirm experimentally. These results suggest that contextual inference is the key principle underlying how a diverse set of experiences is reflected in motor behavior.
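For illustration only, and not as the authors' actual model, the short Python sketch below shows how a single contextual-inference step could govern both expression and updating of a motor repertoire: the motor output is a responsibility-weighted blend of stored memories, and the same responsibilities gate how much each memory is updated by the resulting error. The contexts, responsibilities, learning rate, and target are assumed values.

```python
import numpy as np

def contextual_update(memories, responsibilities, error, lr=0.5):
    """Update every stored motor memory in proportion to the inferred
    probability (responsibility) that its context is currently active."""
    return memories + lr * responsibilities * error

# Two hypothetical contexts with stored compensations (e.g. force-field gains).
memories = np.array([0.0, 0.0])

# Inferred probability of each context given the current sensory cues.
responsibilities = np.array([0.9, 0.1])

# Expression: motor output is a responsibility-weighted blend of the repertoire.
output = responsibilities @ memories

# Updating: the error drives learning in each memory in proportion to its
# responsibility, so credit is assigned by contextual inference rather than
# to a single monolithic memory.
target = 1.0
error = target - output
memories = contextual_update(memories, responsibilities, error)
print(memories)   # the more likely context absorbs most of the adaptation
```

In such a scheme, memory creation would correspond to introducing a new entry when no existing context explains the observations well, which is the sense in which a single inference computation can control creation, updating, and expression.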
Brain dynamics underlying memory for continuous natural events
The world confronts our senses with a continuous stream of rapidly changing information. Yet, we experience life as a series of episodes or events, and in memory these pieces seem to become even further organized. How do we recall and give structure to this complex information? Recent studies have begun to examine these questions using naturalistic stimuli and behavior: subjects view audiovisual movies and then freely recount aloud their memories of the events. We find brain activity patterns that are unique to individual episodes, and which reappear during verbal recollection; robust generalization of these patterns across people; and memory effects driven by the structure of links between events in a narrative. These findings construct a picture of how we comprehend and recall real-world events that unfold continuously across time.
How sleep remodels the brain
Fifty years ago it was found that sleep somehow made memories better and more permanent, but neither sleep nor memory researchers knew enough about sleep and memory to devise robust, effective tests. Today the fields of sleep and memory have grown, and what is now understood is astounding. Still, great mysteries remain. What is the functional difference between the subtly different slow oscillation and slow wave of sleep, and do they really have opposite effects on memory consolidation? How do short spindles (e.g. <0.5 s, as in schizophrenia) differ in function from longer ones, and are longer spindles key to integrating new memories with old? Is the nesting of slow oscillations together with sleep spindles and hippocampal ripples necessary? What happens if all else is fine but the neurochemical environment is altered? Does sleep become maladaptive and “cement” memories into the hippocampal warehouse where they are assembled, together with all of their emotional baggage? Does maladaptive sleep underlie post-traumatic stress disorder and other stress-related disorders? How do we optimize sleep characteristics for top emotional and cognitive function? State-of-the-art findings and current hypotheses will be presented.
Agency in the Stream of Consciousness: Perspectives from Cognitive Science and Buddhist Psychology
The stream of consciousness refers to the ideas, images, and memories that meander across the mind when we are otherwise unoccupied. The standard view is that these thoughts are associationistic in character and that they arise from subpersonal processes: we are, for the most part, passive observers of them. Drawing on a series of laboratory studies we have conducted, as well as Buddhist models of mind, I argue that these views are importantly incorrect. On the alternative view I put forward, these thoughts arise from minimal decision processes, which lie in a grey zone: they are both manifestations of agency and obstacles to it.
The recruitment of spatial cells in large-scale space & an AI approach to neural discovery
Prof Caswell Barry is a Professorial Research Fellow in Cell & Developmental Biology, Division of Biosciences, University College London. He and his team are trying to understand how the brain works – how it creates the experience of being human and, more specifically, how it creates, stores, and updates memories for places and events. They approach this by studying brain areas linked to memory – the hippocampus and associated regions of cortex – recording the activity of neurons in these areas to visualise and, hopefully, understand the processes that trigger memory formation and retrieval.
Code reversal between stimulus processing and fading memories in primate V1
Bernstein Conference 2024
Cortico-cortical feedback to visual areas can explain reactivation of latent memories during working memory retention
Bernstein Conference 2024
Neural code for episodic memories in a food-caching bird
Bernstein Conference 2024
Multiscale encodings of memories in hippocampal and artificial networks
COSYNE 2022
Stable memories without reactivation
COSYNE 2022
Dopamine projections to the basolateral amygdala drive the encoding of identity-specific reward memories
COSYNE 2023
Cascading memory search as a bridge between episodic memories and semantic knowledge
FENS Forum 2024
CRISPR-based epigenetic editing of engram cells in fear memories
FENS Forum 2024
Disentangling emotional memories in ventral hippocampal circuits
FENS Forum 2024
Drifting memories: Sleep stages play opposite roles in reshaping memory representations
FENS Forum 2024
Indirect modulation of negative visual memories
FENS Forum 2024
Inhibitory plasticity supports consolidation of generalizable memories
FENS Forum 2024
How many short-term memories become long-term? Unveiling the answer through the study of sex differences
FENS Forum 2024
Memories by a thousand rules: Meta-learning plasticity rules for memory formation and recall in large spiking networks
FENS Forum 2024
Network mechanisms for the reinstatement of infant latent memories
FENS Forum 2024
A neural network model that learns to encode and retrieve memories for spatial navigation
FENS Forum 2024
Neuronal ensembles of alcohol memories in the nucleus accumbens express a unique transcriptional fingerprint
FENS Forum 2024
NT3-TrkC signaling in the brain fear network underlies inter-individual differences in the formation and maintenance of contextual fear extinction memories
FENS Forum 2024
PSD-95-dependent synaptic transmission in the dorsal CA1 area (dCA1) of the hippocampus is required for updating, but not formation, of contextual memories
FENS Forum 2024
Restoring 'lost' memories: Efficacy of vardenafil to reverse amnesia following sleep deprivation
FENS Forum 2024
Retrieving fear memories through the basal amygdala-accumbens pathway
FENS Forum 2024
The role of local and long-range mPFC connections in the consolidation of memories
FENS Forum 2024
Semantic memories are not generated from multiple episodic memories
FENS Forum 2024
Silent synapse-based mechanism to manipulate drug-associated memories
FENS Forum 2024