
Memory Encoding

Discover seminars, jobs, and research tagged with memory encoding.
14 curated items: 7 seminars · 7 ePosters
Updated 25 days ago
Seminar · Neuroscience

Top-down control of neocortical threat memory

Prof. Dr. Johannes Letzkus
Universität Freiburg, Germany
Nov 11, 2025

Accurate perception of the environment is a constructive process that requires integration of external bottom-up sensory signals with internally generated top-down information reflecting past experiences and current aims. Decades of work have elucidated how sensory neocortex processes physical stimulus features. In contrast, examining how memory-related top-down information is encoded and integrated with bottom-up signals has long been challenging. Here, I will discuss our recent work pinpointing the outermost layer 1 of neocortex as a central hotspot for processing experience-dependent top-down information during threat perception, one of the most fundamentally important forms of sensation.

Seminar · Neuroscience

Circuit Mechanisms of Remote Memory

Lauren DeNardo, PhD
Department of Physiology, David Geffen School of Medicine, UCLA
Feb 10, 2025

Memories of emotionally salient events are long-lasting, guiding behavior from minutes to years after learning. The prelimbic cortex (PL) is required for fear memory retrieval across time and is densely interconnected with many subcortical and cortical areas involved in recent and remote memory recall, including the temporal association area (TeA). While the behavioral expression of a memory may remain constant over time, the neural activity mediating memory-guided behavior is dynamic. In PL, different neurons underlie recent and remote memory retrieval, and remote memory-encoding neurons have preferential functional connectivity with cortical association areas, including TeA. TeA plays a preferential role in remote compared to recent memory retrieval, yet how TeA circuits drive remote memory retrieval remains poorly understood. Here we used a combination of activity-dependent neuronal tagging, viral circuit mapping, and miniscope imaging to investigate the role of the PL-TeA circuit in fear memory retrieval across time in mice. We show that PL memory ensembles recruit PL-TeA neurons across time, and that PL-TeA neurons have enhanced encoding of salient cues and behaviors at remote timepoints. This recruitment depends upon ongoing synaptic activity in the learning-activated PL ensemble. Our results reveal a novel circuit encoding remote memory and provide insight into the principles of memory circuit reorganization across time.

Seminar · Neuroscience · Recording

Meta-learning synaptic plasticity and memory addressing for continual familiarity detection

Danil Tyulmankov
Columbia University
May 17, 2022

Over the course of a lifetime, we process a continual stream of information. Extracted from this stream, memories must be efficiently encoded and stored in an addressable manner for retrieval. To explore potential mechanisms, we consider a familiarity detection task where a subject reports whether an image has been previously encountered. We design a feedforward network endowed with synaptic plasticity and an addressing matrix, meta-learned to optimize familiarity detection over long intervals. We find that anti-Hebbian plasticity leads to better performance than Hebbian and replicates experimental results such as repetition suppression. A combinatorial addressing function emerges, selecting a unique neuron as an index into the synaptic memory matrix for storage or retrieval. Unlike previous models, this network operates continuously, and generalizes to intervals it has not been trained on. Our work suggests a biologically plausible mechanism for continual learning, and demonstrates an effective application of machine learning for neuroscience discovery.
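The core mechanism in this abstract, an anti-Hebbian synaptic update that suppresses responses to repeated inputs, can be sketched in a few lines. This is an illustrative toy, not the authors' meta-learned network; the single readout neuron, learning rate, and thresholding-on-response-magnitude readout are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in = 100
w = rng.normal(0.0, 1.0 / np.sqrt(n_in), n_in)  # one readout neuron
eta = 0.5                                        # assumed learning rate

def present(x, w, eta):
    """Respond to a stimulus, then apply an anti-Hebbian weight update."""
    y = w @ x
    w_new = w - eta * y * x  # anti-Hebbian: weaken co-active synapses
    return y, w_new

x = rng.normal(size=n_in)
x /= np.linalg.norm(x)

y1, w = present(x, w, eta)  # first (novel) presentation
y2, w = present(x, w, eta)  # repeated presentation

# Repetition suppression: the response to the repeated input shrinks,
# so a simple threshold on |response| reads out familiarity.
print(abs(y2) < abs(y1))
```

Because the stimulus is unit-normalized, the second response is exactly (1 − eta) times the first, so familiar inputs evoke systematically weaker responses, mirroring the repetition suppression the abstract reports.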

Seminar · Neuroscience · Recording

Learning and updating structured knowledge

Oded Bein
Niv lab, Princeton University
Oct 5, 2021

During our everyday lives, much of what we experience is familiar and predictable. We typically follow the same morning routine, take the same route to work, and encounter the same colleagues. However, every once in a while, we encounter a surprising event that violates our expectations. When we encounter such violations of our expectations, it is adaptive to update our internal model of the world in order to make better predictions in the future. The hippocampus is thought to support both the learning of the predictable structure of our environment and the detection and encoding of violations. However, the hippocampus is a complex and heterogeneous structure, composed of different subfields that are thought to subserve different functions. As such, it is not yet known how the hippocampus accomplishes the learning and updating of structured knowledge. Using behavioral methods and high-resolution fMRI, I'll show that during learning of repeated and predicted events, hippocampal subfields differentially integrate and separate event representations, thus learning the structure of ongoing experience. I'll then discuss how, when events violate our predictions, there is a shift in communication between hippocampal subfields, potentially allowing for efficient encoding of the novel and surprising information. If time permits, I'll present an additional behavioral study showing that violations of predictions promote detailed memories. Together, these studies advance our understanding of how we adaptively learn and update our knowledge.

Seminar · Psychology

Memory for Latent Representations: An Account of Working Memory that Builds on Visual Knowledge for Efficient and Detailed Visual Representations

Brad Wyble
Penn State University
Jul 6, 2021

Visual knowledge obtained from our lifelong experience of the world plays a critical role in our ability to build short-term memories. We propose a mechanistic explanation of how working memory (WM) representations are built from the latent representations of visual knowledge and can then be reconstructed. The proposed model, Memory for Latent Representations (MLR), features a variational autoencoder with an architecture that corresponds broadly to the human visual system and an activation-based binding pool of neurons that binds items' attributes to tokenized representations. The simulation results revealed that shape information for stimuli that the model was trained on can be encoded and retrieved efficiently from latents in higher levels of the visual hierarchy. On the other hand, novel patterns that are completely outside the training set can be stored from a single exposure using only latents from early layers of the visual system. Moreover, the representation of a given stimulus can have multiple codes, representing specific visual features such as shape or color, in addition to categorical information. Finally, we validated our model by testing a series of predictions against behavioral results acquired from WM tasks. The model provides a compelling demonstration of visual knowledge yielding the formation of compact visual representations for efficient memory encoding.
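The binding-pool idea, item attributes bound to tokens by superimposing projections in a shared pool of neurons, can be illustrated with random binding matrices. This is a minimal sketch under assumed dimensions, not the MLR implementation; `store`, `retrieve`, and all sizes here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
d_latent, d_pool, n_tokens = 8, 500, 4

# One random binding matrix per token: it projects a latent attribute
# vector into the shared binding pool.
B = rng.normal(0.0, 1.0 / np.sqrt(d_pool), (n_tokens, d_pool, d_latent))
pool = np.zeros(d_pool)  # shared pool activity (all items superimposed)

def store(pool, token, z):
    """Bind latent z to a token by adding its projection to the pool."""
    return pool + B[token] @ z

def retrieve(pool, token):
    """Reconstruct a token's latent by projecting the pool back down."""
    return B[token].T @ pool

z0, z1 = rng.normal(size=d_latent), rng.normal(size=d_latent)
pool = store(pool, 0, z0)
pool = store(pool, 1, z1)  # a second item shares the same pool

z_hat = retrieve(pool, 0)
cos = z0 @ z_hat / (np.linalg.norm(z0) * np.linalg.norm(z_hat))
# The reconstruction stays close to the stored latent despite
# interference from the other item bound into the same pool.
```

Because the binding matrices are near-orthogonal for a large pool, `B[0].T @ B[0]` is approximately the identity while the cross-term from the other item contributes only small noise, which is what makes a single shared pool usable for multiple tokenized items.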

ePoster

Modeling competitive memory encoding using a Hopfield network

Julia Pronoza, Sen Cheng

Bernstein Conference 2024

ePoster

Theta-modulated memory encoding and retrieval in recurrent hippocampal circuits

Samuel Eckmann, Yashar Ahmadian, Máté Lengyel

Bernstein Conference 2024

ePoster

Deciphering the dynamics of memory encoding and recall in the hippocampus using two-photon calcium imaging and information theory

Jess Yu, Mary Ann Go, Yujie Lu, Simon R Schultz

FENS Forum 2024

ePoster

Global and local functional connectivity hubs for verbal memory encoding

Barbora Matouskova, Petr Klimes, Jan Cimbalnik, Michal Kucewicz

FENS Forum 2024

ePoster

Human single neurons lock to theta phases during memory encoding and retrieval

Tim Guth, Armin Brandt, Peter Reinacher, Andreas Schulze-Bonhage, Joshua Jacobs, Lukas Kunz

FENS Forum 2024

ePoster

Presynaptic plasticity and memory encoding in hippocampal circuits

Catherine Marneffe, Noelle Grosjean, Kyrian Nicolay-Kritter, Evan Harrell, Ashley Kees, Christophe Mulle

FENS Forum 2024

ePoster

Revealing hidden targets in memory assemblies: The minimal engram for contextual memory encoding

Raquel Garcia Hernandez, Luis Álvarez-García, Alejandro Trouvé-Carpena, Hernan A. Makse, Santiago Canals

FENS Forum 2024