Retention
Dissociating learning-induced effects of meaning and familiarity in visual working memory for Chinese characters
Visual working memory (VWM) is limited in capacity, but memorizing meaningful objects may alleviate this limitation. However, meaningless and meaningful stimuli usually differ perceptually, and an object’s association with meaning is typically established before the actual experiment. We applied strict control over these potential confounds by asking observers (N=45) to actively learn associations for (initially) meaningless objects. To this end, a change detection task presented Chinese characters, which were meaningless to our observers. Subsequently, half of the characters were consistently paired with pictures of animals. The initial change detection task was then repeated. The results revealed enhanced VWM performance after learning, in particular for meaning-associated characters (though not quite reaching the accuracy attained by N=20 native Chinese observers). These results thus provide direct experimental evidence that the short-term retention of objects benefits from actively learning an object’s association with meaning in long-term memory.
Memory-enriched computation and learning in spiking neural networks through Hebbian plasticity
Memory is a key component of biological neural systems that enables the retention of information over a huge range of temporal scales, from hundreds of milliseconds up to years. While Hebbian plasticity is believed to play a pivotal role in biological memory, it has so far been analyzed mostly in the context of pattern completion and unsupervised learning. Here, we propose that Hebbian plasticity is fundamental for computations in biological neural systems. We introduce a novel spiking neural network (SNN) architecture that is enriched by Hebbian synaptic plasticity. We experimentally show that our memory-equipped SNN model outperforms state-of-the-art deep learning mechanisms in a sequential pattern-memorization task and demonstrates superior out-of-distribution generalization compared to these models. We further show that our model can be successfully applied to one-shot learning and classification of handwritten characters, improving over the state-of-the-art SNN model. We also demonstrate the capability of our model to learn associations for audio-to-image synthesis from spoken and handwritten digits. Our SNN model further presents a novel solution to a variety of cognitive question-answering tasks from a standard benchmark, achieving performance comparable to both memory-augmented ANN and SNN-based state-of-the-art solutions to this problem. Finally, we demonstrate that our model is able to learn from rewards on an episodic reinforcement learning task and attain a near-optimal strategy on a memory-based card game. Hence, our results show that Hebbian enrichment renders spiking neural networks surprisingly versatile in terms of their computational as well as learning capabilities. Since local Hebbian plasticity can easily be implemented in neuromorphic hardware, this also suggests that powerful cognitive neuromorphic systems can be built based on this principle.
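The core idea, storing associations through local Hebbian weight changes, can be illustrated with a minimal sketch. The following is a generic outer-product Hebbian association memory, not the authors’ SNN architecture; the dimensionality, learning rate, and decay factor are illustrative assumptions.

```python
import numpy as np

# Generic Hebbian association memory (illustrative sketch, not the
# authors' SNN model). A key-value pair is written in one shot with the
# local outer-product rule  W <- lam * W + eta * value key^T,
# and read out by multiplying W with the (normalized) key.

rng = np.random.default_rng(0)
dim = 64
eta, lam = 1.0, 0.9  # learning rate and decay (forgetting) factor

W = np.zeros((dim, dim))
key = rng.standard_normal(dim)
key /= np.linalg.norm(key)           # unit-norm cue
value = rng.standard_normal(dim)

# One-shot Hebbian write: strengthen co-active pre/post pairs.
W = lam * W + eta * np.outer(value, key)

# Read-out: the stored value is recovered from the key.
recalled = W @ key
similarity = recalled @ value / (np.linalg.norm(recalled) * np.linalg.norm(value))
print(round(similarity, 3))
```

Because the write and read are purely local matrix operations, the same rule maps naturally onto neuromorphic substrates, which is the point the abstract closes on.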
Human memory: mathematical models and experiments
I will present my recent work on the mathematical modeling of human memory. I will argue that memory recall of random lists of items is governed by a universal algorithm, resulting in an analytical relation between the number of items in memory and the number of items that can be successfully recalled. The retention of items in memory, on the other hand, is not universal and differs across the types of items being remembered; in particular, retention curves for words and sketches differ even when sketches are made to carry only information about the object being drawn. I will discuss putative reasons for these observations and introduce a phenomenological model predicting retention curves.
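One common way to formalize recall of a random list is as a deterministic walk over item similarities: recall jumps from the current item to its most similar neighbor (excluding the item just left), and the walk eventually enters a cycle, so only some items are ever visited. The sketch below simulates such a walk; the representation, transition rule, and parameters are my own illustrative assumptions and may differ from the model presented in the talk.

```python
import numpy as np

# Illustrative similarity-based recall walk (assumptions mine, not
# necessarily the talk's model). Items are random sparse binary vectors;
# recall moves deterministically to the most similar item, never bouncing
# straight back. The walk cycles, and the number of distinct items
# visited gives the number of successfully recalled items R(N).

rng = np.random.default_rng(1)

def recalled_count(n_items, dim=256, sparsity=0.1):
    items = (rng.random((n_items, dim)) < sparsity).astype(float)
    sim = items @ items.T
    np.fill_diagonal(sim, -np.inf)      # an item is not its own neighbor
    current, previous = 0, None
    visited = {0}
    transitions = set()
    while True:
        row = sim[current].copy()
        if previous is not None:
            row[previous] = -np.inf     # avoid immediate back-tracking
        nxt = int(np.argmax(row))
        if (current, nxt) in transitions:   # deterministic walk has cycled
            return len(visited)
        transitions.add((current, nxt))
        visited.add(nxt)
        previous, current = current, nxt

for n in (16, 64, 256):
    print(n, recalled_count(n))
```

In such models the number of recalled items grows sublinearly with list length, which is what makes an analytical relation between list size and recall size possible.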
What are the consequences of directing attention within working memory?
The role of attention in working memory remains controversial, but there is some agreement that the focus of attention holds mnemonic representations in a privileged state of heightened accessibility in working memory, resulting in better memory performance for items that receive focused attention during retention. Relatedly, representations held in the focus of attention are often observed to be robust and protected from degradation caused by either perceptual interference (e.g., Makovski & Jiang, 2007; van Moorselaar et al., 2015) or decay (e.g., Barrouillet et al., 2007). Recent findings indicate, however, that representations held in the focus of attention are particularly vulnerable to degradation and thus appear to be fragile rather than robust (e.g., Hitch et al., 2018; Hu et al., 2014). The present set of experiments aims at understanding the apparent paradox of information in the focus of attention having a protected vs. vulnerable status in working memory. To that end, we examined the effect of perceptual interference on memory performance for information held within vs. outside the focus of attention, across different ways of bringing items into the focus of attention and across different time scales.
Categories, language, and visual working memory: how verbal labels change capacity limitations
The limited capacity of visual working memory constrains the quantity and quality of the information we can store in mind for ongoing processing. Research from our lab has demonstrated that verbal labeling/categorization of visual inputs increases their retention and fidelity in visual working memory. In this talk, I will outline the hypotheses that explain the interaction between visual and verbal inputs in working memory, leading to the boosts we observed. I will further show how manipulations of the categorical distinctiveness of the labels, the timing of their occurrence, which items they are applied to, and their validity modulate the benefits one can draw from combining visual and verbal inputs to alleviate capacity limitations. Finally, I will discuss the implications of these results for our understanding of working memory and its interaction with prior knowledge.
Making memories in mice
Understanding how the brain uses information is a fundamental goal of neuroscience. Several human disorders (ranging from autism spectrum disorder to PTSD to Alzheimer’s disease) may stem from disrupted information processing. Therefore, this basic knowledge is not only critical for understanding normal brain function, but also vital for the development of new treatment strategies for these disorders. Memory may be defined as the retention over time of internal representations gained through experience, and the capacity to reconstruct these representations at later times. Long-lasting physical brain changes (‘engrams’) are thought to encode these internal representations. The concept of a physical memory trace likely originated in ancient Greece, although it wasn’t until 1904 that Richard Semon first coined the term ‘engram’. Despite its long history, finding a specific engram has been challenging, likely because an engram is encoded at multiple levels (epigenetic, synaptic, cell assembly). My lab is interested in understanding how specific neurons are recruited or allocated to an engram, and how neuronal membership in an engram may change over time or with new experience. Here I will describe both older and new unpublished data from our efforts to understand memories in mice.
Imaging memory consolidation in wakefulness and sleep
New memories are initially labile and have to be consolidated into stable long-term representations. Current theories assume that this is supported by a shift in the neural substrate that supports the memory, away from rapidly plastic hippocampal networks towards more stable representations in the neocortex. Rehearsal, i.e. repeated activation of the neural circuits that store a memory, is thought to contribute crucially to the formation of neocortical long-term memory representations. This may be achieved either by repeated study during wakefulness or by a covert reactivation of memory traces during offline periods, such as quiet rest or sleep. My research investigates memory consolidation in the human brain with multivariate decoding of neural processing and non-invasive in-vivo imaging of microstructural plasticity. Using pattern classification on recordings of electrical brain activity, I show that we spontaneously reprocess memories during offline periods in both sleep and wakefulness, and that this reactivation benefits memory retention. In related work, we demonstrate that active rehearsal of learning material during wakefulness can facilitate rapid systems consolidation, leading to an immediate formation of lasting memory engrams in the neocortex. These representations satisfy general mnemonic criteria and can not only be imaged with fMRI while memories are actively processed but can also be observed with diffusion-weighted imaging when the traces lie dormant. Importantly, sleep seems to hold a crucial role in stabilizing the changes in the contribution of memory systems initiated by rehearsal during wakefulness, indicating that online and offline reactivation might jointly contribute to forming long-term memories. Characterizing the covert processes that decide whether, and in which ways, our brains store new information is crucial to our understanding of memory formation. Directly imaging consolidation thus opens great opportunities for memory research.
Neuronal morphology imposes a tradeoff between stability, accuracy and efficiency of synaptic scaling
Synaptic scaling is a homeostatic normalization mechanism that preserves relative synaptic strengths by adjusting them with a common factor. This multiplicative change is believed to be critical, since synaptic strengths are involved in learning and memory retention. Further, this homeostatic process is thought to be crucial for neuronal stability, playing a stabilizing role in otherwise runaway Hebbian plasticity [1-3]. Synaptic scaling requires a mechanism to sense total neuron activity and globally adjust synapses to achieve some activity set-point [4]. This process is relatively slow, which places limits on its ability to stabilize network activity [5]. Here we show that this slow response is inevitable in realistic neuronal morphologies. Furthermore, we reveal that global scaling can in fact be a source of instability unless responsiveness or scaling accuracy is sacrificed.
A neuron with tens of thousands of synapses must regulate its own excitability to compensate for changes in input. The time requirement for global feedback can introduce critical phase lags in a neuron’s response to perturbation. The severity of the phase lag increases with neuron size. Further, a more expansive morphology worsens cell responsiveness and scaling accuracy, especially in distal regions of the neuron. Local pools of reserve receptors improve efficiency, potentiation, and scaling, but this comes at a cost: trafficking large quantities of receptors requires time, exacerbating the phase lag and instability. Local homeostatic feedback mitigates instability, but this too comes at the cost of reduced scaling accuracy.
Realization of the phase-lag instability requires a unified model of synaptic scaling, regulation, and transport. We present such a model with global and local feedback in realistic neuron morphologies (Fig. 1). This combined model shows that neurons face a tradeoff between stability, accuracy, and efficiency.
Global feedback is required for synaptic scaling but favors either system stability or efficiency. Large receptor pools improve scaling accuracy in large morphologies but worsen both stability and efficiency. Local feedback improves the stability-efficiency tradeoff at the cost of scaling accuracy. This project introduces unexplored constraints on neuron size, morphology, and synaptic scaling that are weakened by an interplay between global and local feedback.
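The two defining ingredients, a common multiplicative factor and a slow global activity sensor, can be sketched in a few lines. This is a toy illustration under my own assumptions (scalar activity, a fixed sensing delay, arbitrary parameter values), not the paper’s morphologically detailed model; it shows only that scaling by a shared factor preserves relative synaptic strengths while a delayed sensor still steers total activity to the set-point.

```python
import numpy as np
from collections import deque

# Toy multiplicative synaptic scaling with a delayed global sensor
# (illustrative assumptions, not the paper's model). Each step the
# neuron compares a *delayed* estimate of its activity to a set-point
# and rescales all weights by one common factor.

rng = np.random.default_rng(2)
n_syn = 1000
w = rng.random(n_syn)        # synaptic strengths
inputs = rng.random(n_syn)   # fixed presynaptic drive
set_point = 100.0            # target total activity
gain = 0.01                  # scaling responsiveness
delay = 20                   # sensing/trafficking delay, in steps

ratios_before = w / w.sum()
buffer = deque([w @ inputs] * delay, maxlen=delay)

for _ in range(2000):
    sensed = buffer[0]                    # neuron sees delayed activity
    factor = 1.0 + gain * (set_point - sensed) / set_point
    w *= factor                           # one common multiplicative factor
    buffer.append(w @ inputs)

print(round(w @ inputs, 1))               # total activity near the set-point
assert np.allclose(w / w.sum(), ratios_before)  # relative strengths preserved
```

Raising `gain` or `delay` in this sketch makes the delayed feedback loop overshoot and oscillate, which is the intuition behind the phase-lag instability the abstract describes: responsiveness and stability trade off once sensing and receptor trafficking take time.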
Cortico-cortical feedback to visual areas can explain reactivation of latent memories during working memory retention
Bernstein Conference 2024
Auditory cortex activity during sound memory retention in an auditory working memory task
FENS Forum 2024
Rabphilin 3A: From NMDA receptor synaptic retention to neurodevelopmental disorders
FENS Forum 2024
Retrieval inhibition during sensorimotor consolidation modulates memory retention
FENS Forum 2024