Working Memory
Prof. Edmund Wascher / Dr. Laura-Isabelle Klatt
We are seeking to fill a fully funded PhD position (75%, TV-L 13 salary scheme for state employees) in cognitive neuroscience. The successful applicant will contribute to a project investigating selective attention and working memory processes in a multisensory context. In particular, we are interested in how the auditory and the visual system interact during the deployment of attention in multisensory environments and how audio-visual information is integrated. To answer these research questions, we primarily use EEG in combination with cutting-edge analysis methods (e.g., multivariate pattern classification). Beyond that, the application of eye-tracking or (functional) MRI is possible within the project. Your responsibilities will include conducting (EEG) experiments, data analysis, preparation of manuscripts for publication in peer-reviewed journals, and presentation of scientific results at national and international conferences. Official job ad: https://www.ifado.de/ifadoen/careers/current-job-offers/#job3
Dr. Gaen Plancher
The postdoctoral position is part of a project funded by the French National Research Agency (ANR). The objective of this proposal is to examine the cognitive and neuronal mechanisms of information storage in memory from the very beginning, when information is present in working memory, until the late stage of sleep-dependent long-term consolidation of this information. One feature of the project is to investigate these mechanisms in humans and in animals (rats), the animal model offering a more direct measurement of the cognitive and neuronal mechanisms of memory. The project brings together specialists in the neurocognitive mechanisms of memory in humans and specialists in the neuronal mechanisms of memory in rats. The postdoctoral project per se is focused on humans. It is well acknowledged that the content of working memory is erased and reset after a short time, to prevent irrelevant information from proactively interfering with newly stored information. Gaël Malleret, Paul Salin and their colleagues (2017) recently explored these interference phenomena in rats. Surprisingly, they observed that under certain conditions (a task with a high level of proactive interference), these interferences could be consolidated in long-term memory. A 24-hour gap involving sleep, known to allow consolidation processes to unfold, was a necessary and sufficient condition for the long-term proactive interference effect to occur. The objective of the postdoctoral project is to better understand the impact of these interference phenomena on human memory. Behavioral and neuronal (EEG) data will be collected at various delays: immediately, after a delay, and after an interval of sleep.
Dr. Jennifer Luebke
We are seeking a highly motivated and detail-oriented postdoctoral fellow with a track record in research applying magnetic resonance imaging and image processing. The central goal of the binational (Boston and Barcelona) CRCNS project is to advance our understanding of the computational and neural mechanisms underlying working memory as well as age-related changes to executive function. The successful candidate will lead analyses of structural MRI (T1 TFE scans), resting-state fMRI, and diffusion spectrum imaging (DSI) brain scans with an overall focus on brain morphometry and connectivity. Data generated from the MRI scans will be correlated with behavioral, physiological, and other anatomical data across the adult life span. This work will be part of a highly interdisciplinary project (psychophysics, imaging, anatomy, electrophysiological experiments, innovative data analysis and computational modeling) and includes the possibility for extended research stays at the Center for Mathematical Research in Barcelona, Spain (Wimmer lab). The appointment as a Boston University School of Medicine postdoctoral fellow (supervised by Jennifer Luebke and Ronald J. Killiany) will be for at least 1 year with the possibility for extension based on performance.
OpenNeuro FitLins GLM: An Accessible, Semi-Automated Pipeline for OpenNeuro Task fMRI Analysis
In this talk, I will discuss the OpenNeuro FitLins GLM package and provide an illustration of the analytic workflow. OpenNeuro FitLins GLM is a semi-automated pipeline that reduces barriers to analyzing task-based fMRI data from OpenNeuro's 600+ task datasets. Created for psychology, psychiatry and cognitive neuroscience researchers without extensive computational expertise, this tool automates what is largely a manual process, built on compilations of in-house scripts for data retrieval, validation, quality control, statistical modeling and reporting, that may in some cases require weeks of effort. The workflow abides by open-science practices, enhances reproducibility, and incorporates community feedback for model improvement. The pipeline integrates BIDS-compliant datasets and fMRIPrep preprocessed derivatives, and dynamically creates BIDS Stats Models specifications (with FitLins) to perform common mass-univariate (GLM) analyses. To enhance and standardize reporting, it generates comprehensive reports that include design matrices, statistical maps and COBIDAS-aligned reporting that is fully reproducible from the model specifications and derivatives. OpenNeuro FitLins GLM has been tested on over 30 datasets spanning 50+ unique fMRI tasks (e.g., working memory, social processing, emotion regulation, decision-making, motor paradigms), reducing analysis times from weeks to hours when using high-performance computing, thereby enabling researchers to conduct robust single-study, meta- and mega-analyses of task fMRI data with significantly improved accessibility, standardized reporting and reproducibility.
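As an illustration of the kind of specification the pipeline assembles, here is a minimal sketch of a BIDS Stats Models file of the sort FitLins consumes; it is not taken from the package's documentation, and the task name, conditions, contrast and filename are hypothetical.

```python
import json

# Minimal, illustrative BIDS Stats Models specification (hypothetical task/conditions).
model_spec = {
    "Name": "nback_example",                      # hypothetical model name
    "BIDSModelVersion": "1.0.0",
    "Input": {"task": ["nback"]},                 # restrict the model to one task
    "Nodes": [
        {
            "Level": "Run",
            "Name": "run_level",
            "GroupBy": ["run", "subject"],
            "Model": {"X": ["trial_type.twoback", "trial_type.zeroback", 1],
                      "Type": "glm"},
            "Contrasts": [
                {"Name": "twoback_gt_zeroback",
                 "ConditionList": ["trial_type.twoback", "trial_type.zeroback"],
                 "Weights": [1, -1],
                 "Test": "t"}
            ],
        },
        {
            "Level": "Subject",
            "Name": "subject_level",
            "GroupBy": ["subject", "contrast"],
            "Model": {"X": [1], "Type": "meta"},
            "DummyContrasts": {"Test": "t"},
        },
    ],
}

# Hypothetical filename following the model-<label>_smdl.json convention.
with open("model-nback_smdl.json", "w") as f:
    json.dump(model_spec, f, indent=2)
```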
The Brain Prize winners' webinar
This webinar brings together three leaders in theoretical and computational neuroscience—Larry Abbott, Haim Sompolinsky, and Terry Sejnowski—to discuss how neural circuits generate fundamental aspects of the mind. Abbott illustrates mechanisms in electric fish that differentiate self-generated electric signals from external sensory cues, showing how predictive plasticity and two-stage signal cancellation mediate a sense of self. Sompolinsky explores attractor networks, revealing how discrete and continuous attractors can stabilize activity patterns, enable working memory, and incorporate chaotic dynamics underlying spontaneous behaviors. He further highlights the concept of object manifolds in high-level sensory representations and raises open questions on integrating connectomics with theoretical frameworks. Sejnowski bridges these motifs with modern artificial intelligence, demonstrating how large-scale neural networks capture language structures through distributed representations that parallel biological coding. Together, their presentations emphasize the synergy between empirical data, computational modeling, and connectomics in explaining the neural basis of cognition—offering insights into perception, memory, language, and the emergence of mind-like processes.
Exploring the cerebral mechanisms of acoustically-challenging speech comprehension - successes, failures and hope
Comprehending speech under acoustically challenging conditions is an everyday task that we can often execute with ease. However, accomplishing this requires the engagement of cognitive resources, such as auditory attention and working memory. The mechanisms that contribute to the robustness of speech comprehension are of substantial interest in the context of mild to moderate hearing impairment, in which affected individuals typically report specific difficulties in understanding speech in background noise. Although hearing aids can help to mitigate this, they do not represent a universal solution; thus, finding alternative interventions is necessary. Given that age-related hearing loss (“presbycusis”) is inevitable, developing new approaches is all the more important in the context of aging populations. Moreover, untreated hearing loss in middle age has been identified as the most significant potentially modifiable predictor of dementia in later life. I will present research that has used a multi-methodological approach (fMRI, EEG, MEG and non-invasive brain stimulation) to try to elucidate the mechanisms that comprise the cognitive “last mile” in acoustically challenging speech comprehension and to find ways to enhance them.
Executive functions in the brain of deaf individuals – sensory and language effects
Executive functions are cognitive processes that allow us to plan, monitor and execute our goals. Using fMRI, we investigated how early deafness influences crossmodal plasticity and the organisation of executive functions in the adult human brain. Results from a range of visual executive function tasks (working memory, task switching, planning, inhibition) show that deaf individuals specifically recruit superior temporal “auditory” regions during task switching. Neural activity in auditory regions predicts behavioural performance during task switching in deaf individuals, highlighting the functional relevance of the observed cortical reorganisation. Furthermore, language grammatical skills were correlated with the level of activation and functional connectivity of fronto-parietal networks. Together, these findings show the interplay between sensory and language experience in the organisation of executive processing in the brain.
Unifying the mechanisms of hippocampal episodic memory and prefrontal working memory
Remembering events in the past is crucial to intelligent behaviour. Flexible memory retrieval, beyond simple recall, requires a model of how events relate to one another. Two key brain systems are implicated in this process: the hippocampal episodic memory (EM) system and the prefrontal working memory (WM) system. While an understanding of the hippocampal system, from computation to algorithm and representation, is emerging, less is understood about how the prefrontal WM system can give rise to flexible computations beyond simple memory retrieval, and even less is understood about how the two systems relate to each other. Here we develop a mathematical theory relating the algorithms and representations of EM and WM by showing a duality between storing memories in synapses versus neural activity. In doing so, we develop a formal theory of the algorithm and representation of prefrontal WM as structured, and controllable, neural subspaces (termed activity slots). By building models using this formalism, we elucidate the differences, similarities, and trade-offs between the hippocampal and prefrontal algorithms. Lastly, we show that several prefrontal representations in tasks ranging from list learning to cue-dependent recall are unified as controllable activity slots. Our results unify frontal and temporal representations of memory, and offer a new basis for understanding the prefrontal representation of WM.
Prefrontal mechanisms involved in learning distractor-resistant working memory in a dual task
Working memory (WM) is a cognitive function that allows the short-term maintenance and manipulation of information when no longer accessible to the senses. It relies on temporarily storing stimulus features in the activity of neuronal populations. To protect these dynamics from distraction, it has been proposed that pre- and post-distraction population activity decomposes into orthogonal subspaces. If orthogonalization is necessary to avoid WM distraction, it should emerge as performance in the task improves. We sought evidence of WM orthogonalization learning and the underlying mechanisms by analyzing calcium imaging data from the prelimbic (PrL) and anterior cingulate (ACC) cortices of mice as they learned to perform an olfactory dual task. The dual task combines an outer Delayed Paired-Association task (DPA) with an inner Go-NoGo task. We examined how neuronal activity reflected the process of protecting the DPA sample information against Go/NoGo distractors. As mice learned the task, we measured the overlap of the neural activity with the low-dimensional subspaces that encode sample or distractor odors. Early in the training, pre-distraction activity overlapped with both sample and distractor subspaces. Later in the training, pre-distraction activity was strictly confined to the sample subspace, resulting in a more robust sample code. To gain mechanistic insight into how these low-dimensional WM representations evolve with learning, we built a recurrent spiking network model of excitatory and inhibitory neurons with low-rank connections. The model links learning to (1) the orthogonalization of sample and distractor WM subspaces and (2) the orthogonalization of each subspace with irrelevant inputs. We validated (1) by measuring the angular distance between the sample and distractor subspaces through learning in the data. Prediction (2) was validated in PrL through the photoinhibition of ACC to PrL inputs, which induced early-training neural dynamics in well-trained animals. In the model, learning drives the network from a double-well attractor toward a more continuous ring attractor regime. We tested signatures for this dynamical evolution in the experimental data by estimating the energy landscape of the dynamics on a one-dimensional ring. In sum, our study defines network dynamics underlying the process of learning to shield WM representations from distracting tasks.
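To make the subspace-overlap analysis concrete, here is a minimal numpy sketch, with simulated data standing in for the calcium-imaging recordings; all dimensions and variable names are assumptions. It estimates sample and distractor subspaces by SVD and measures their alignment via principal angles.

```python
import numpy as np

rng = np.random.default_rng(0)

def encoding_subspace(trials, n_dims=2):
    """Estimate the low-dimensional subspace encoding a stimulus.

    trials: (n_trials, n_neurons) activity vectors for one condition.
    Returns an orthonormal basis of shape (n_neurons, n_dims).
    """
    X = trials - trials.mean(axis=0, keepdims=True)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)   # dominant neural directions
    return Vt[:n_dims].T

def principal_angles_deg(A, B):
    """Principal angles (degrees) between two orthonormal subspaces."""
    s = np.linalg.svd(A.T @ B, compute_uv=False)
    return np.degrees(np.arccos(np.clip(s, -1.0, 1.0)))

# Simulated stand-in data: 100 neurons, 200 trials per condition.
n_neurons = 100
sample_axes = rng.standard_normal((n_neurons, 2))
distractor_axes = rng.standard_normal((n_neurons, 2))
sample_trials = rng.standard_normal((200, 2)) @ sample_axes.T \
    + 0.1 * rng.standard_normal((200, n_neurons))
distractor_trials = rng.standard_normal((200, 2)) @ distractor_axes.T \
    + 0.1 * rng.standard_normal((200, n_neurons))

S = encoding_subspace(sample_trials)
D = encoding_subspace(distractor_trials)
print("principal angles (deg):", principal_angles_deg(S, D))
# Angles near 90 deg would indicate orthogonalized sample/distractor codes.
```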
Anticipating behaviour through working memory (BACN Early Career Prize Lecture 2023)
Working memory is about the past but for the future. Adopting such a future-focused perspective shifts the narrative of working memory as a limited-capacity storage system to working memory as an anticipatory buffer that helps us prepare for potential and sequential upcoming behaviour. In my talk, I will present a series of our recent studies that have started to reveal emerging principles of a working memory that looks forward – highlighting, amongst others, how selective attention plays a vital role in prioritising internal contents for behaviour, and the bi-directional links between visual working memory and action. These studies show how studying the dynamics of working memory, selective attention, and action together paves the way for an integrated understanding of how the mind serves behaviour.
Sleep deprivation and the human brain: from brain physiology to cognition
Sleep strongly affects synaptic strength, making it critical for cognition, especially learning and memory formation. Whether and how sleep deprivation modulates human brain physiology and cognition is poorly understood. Here we examined how overnight sleep deprivation vs. overnight sufficient sleep affects (a) cortical excitability, measured by transcranial magnetic stimulation, (b) inducibility of long-term potentiation (LTP)- and long-term depression (LTD)-like plasticity via transcranial direct current stimulation (tDCS), and (c) learning, memory, and attention. We found that sleep deprivation increases cortical excitability due to enhanced glutamate-related cortical facilitation and decreases and/or reverses GABAergic cortical inhibition. Furthermore, tDCS-induced LTP-like plasticity (anodal) is abolished, while inhibitory LTD-like plasticity (cathodal) converts to excitatory LTP-like plasticity, under sleep deprivation. This is associated with increased EEG theta oscillations due to sleep pressure. Motor learning, behavioral counterparts of plasticity, and working memory and attention, which rely on cortical excitability, are also impaired during sleep deprivation. Our study indicates that upscaled brain excitability and altered plasticity, due to sleep deprivation, are associated with impaired cognitive performance. Besides showing how brain physiology and cognition undergo changes (from neurophysiology to higher-order cognition) under sleep pressure, the findings have implications for the variability and optimal application of noninvasive brain stimulation.
Distinct contributions of different anterior frontal regions to rule-guided decision-making in primates: complementary evidence from lesions, electrophysiology, and neurostimulation
Different prefrontal areas contribute in distinctly different ways to rule-guided behaviour in the context of a Wisconsin Card Sorting Test (WCST) analog for macaques. For example, causal evidence from circumscribed lesions in NHPs reveals that dorsolateral prefrontal cortex (dlPFC) is necessary to maintain a reinforced abstract rule in working memory, orbitofrontal cortex (OFC) is needed to rapidly update representations of rule value, and the anterior cingulate cortex (ACC) plays a key role in cognitive control and in integrating information across correct and incorrect trials over recent outcomes. Moreover, recent lesion studies of frontopolar cortex (FPC) suggest it contributes to representing the relative value of unchosen alternatives, including rules. Yet we do not understand how these functional specializations relate to intrinsic neuronal activities, nor the extent to which these neuronal activities differ between different prefrontal regions. After reviewing the aforementioned causal evidence, I will present our new data from studies using multi-area multi-electrode recording techniques in NHPs to simultaneously record from four different prefrontal regions implicated in rule-guided behaviour. Multi-electrode micro-arrays (‘Utah arrays’) were chronically implanted in dlPFC, vlPFC, OFC, and FPC of two macaques, allowing us to simultaneously record single and multiunit activity, and local field potential (LFP), from all regions while the monkeys perform the WCST analog. Rule-related neuronal activity was widespread in all areas recorded, but it differed in degree and in timing between areas. I will also present preliminary results from decoding analyses applied to rule-related neuronal activities, both from individual clusters and from population measures. These results confirm and help quantify dynamic task-related activities that differ between prefrontal regions. We also found task-related modulation of LFPs within beta and gamma bands in FPC. By combining these correlational recording methods with trial-specific causal interventions (electrical microstimulation) in FPC, we could significantly enhance and impair the animals' performance in distinct task epochs in functionally relevant ways, further consistent with an emerging picture of regional functional specialization within a distributed framework of interacting and interconnected cortical regions.
Dissociating learning-induced effects of meaning and familiarity in visual working memory for Chinese characters
Visual working memory (VWM) is limited in capacity, but memorizing meaningful objects may attenuate this limitation. However, meaningless and meaningful stimuli usually differ perceptually, and an object’s association with meaning is typically already established before the actual experiment. We applied strict control over these potential confounds by asking observers (N=45) to actively learn associations of (initially) meaningless objects. To this end, a change detection task presented Chinese characters, which were meaningless to our observers. Subsequently, half of the characters were consistently paired with pictures of animals. Then, the initial change detection task was repeated. The results revealed enhanced VWM performance after learning, in particular for meaning-associated characters (though not quite reaching the accuracy level attained by N=20 native Chinese observers). These results thus provide direct experimental evidence that the short-term retention of objects benefits from active learning of an object’s association with meaning in long-term memory.
Effect of Different Influences on Temporal Error Monitoring
Metacognition has long been defined as “cognition about cognition”. One of its aspects is error monitoring, the ability to be aware of one’s own errors without external feedback. This ability is mostly investigated in two-alternative forced-choice tasks, where performance is all-or-none in terms of accuracy. The previous literature documents the effects of different influences on error monitoring, such as working memory, feedback and sensorimotor involvement. However, these demonstrations fall short of generalizing to real-life scenarios, where errors often have a magnitude and a direction. To bridge this gap, recent studies showed that humans can keep track of the magnitude and the direction of their errors in the temporal, spatial and numerical domains using two metrics: confidence and short-long/few-more judgements. This talk will cover how the effects documented in two-alternative forced-choice tasks apply to temporal error monitoring. Finally, I will discuss how magnitude and direction monitoring (i.e., confidence and short-long judgements) can be differentiated as two indices of temporal error monitoring.
Working memory tasks for functional mapping of the prefrontal cortex in common marmosets
Hippocampal network dynamics during impaired working memory in epileptic mice
Memory impairment is a common cognitive deficit in temporal lobe epilepsy (TLE). The hippocampus is severely altered in TLE, exhibiting multiple anatomical changes that lead to a hyperexcitable network capable of generating frequent epileptic discharges and seizures. In this study, we investigated whether hippocampal involvement in epileptic activity drives working memory deficits, using bilateral LFP recordings from CA1 during task performance. We discovered that epileptic mice experienced focal rhythmic discharges (FRDs) while they performed the spatial working memory task. Spatial correlation analysis revealed that FRDs were often spatially stable on the maze and were most common around reward zones (25%) and delay zones (50%). Memory performance was correlated with the stability of FRDs, suggesting that spatially unstable FRDs interfere with working memory codes in real time.
Mechanisms of relational structure mapping across analogy tasks
Following the seminal structure mapping theory of Dedre Gentner, the process of mapping the corresponding structures of relations defining two analogs has been understood as a key component of analogy making. However, not without merit, in recent years some semantic, pragmatic, and perceptual aspects of analogy mapping have attracted the primary attention of analogy researchers. For almost a decade, our team has been re-focusing on relational structure mapping, investigating its potential mechanisms across various analogy tasks, both abstract (semantically lean) and more concrete (semantically rich), using diverse methods (behavioral, correlational, eye-tracking, EEG). I will present an overview of our main findings. They suggest that structure mapping (1) consists of an incremental construction of the ultimate mental representation, (2) strongly depends on working memory resources and reasoning ability, (3) even if as little as a single trivial relation needs to be represented mentally. Effective mapping (4) is related to the slowest brain rhythm – the delta band (around 2-3 Hz) – suggesting its highly integrative nature. Finally, we have developed a new task – Graph Mapping – which involves pure mapping of two explicit relational structures. This task allows for precise investigation and manipulation of the mapping process in experiments, and is one of the best proxies of individual differences in reasoning ability. Structure mapping is as crucial to analogy as Gentner advocated, and perhaps it is crucial to cognition in general.
Neural Dynamics of Cognitive Control
Cognitive control guides behavior by controlling what, where, and how information is represented in the brain. Perhaps the most well-studied form of cognitive control has been ‘attention’, which controls how external sensory stimuli are represented in the brain. In contrast, the neural mechanisms controlling the selection of representations held ‘in mind’, in working memory, are unknown. In this talk, I will present evidence that the prefrontal cortex controls working memory by selectively enhancing and transforming the contents of working memory. In particular, I will show how the neural representation of the content of working memory changes over time, moving between different ‘subspaces’ of the neural population. These dynamics may play a critical role in controlling what and how neural representations are acted upon.
Navigating Increasing Levels of Relational Complexity: Perceptual, Analogical, and System Mappings
Relational thinking involves comparing abstract relationships between mental representations that vary in complexity; however, this complexity is rarely made explicit during everyday comparisons. This study explored how people naturally navigate relational complexity and interference using a novel relational match-to-sample (RMTS) task with both minimal and relationally directed instruction to observe changes in performance across three levels of relational complexity: perceptual, analogical, and system mappings. Individual working memory and relational abilities were examined to understand RMTS performance and susceptibility to interfering relational structures. Trials were presented without practice across four blocks, and participants received feedback after each attempt to guide learning. Experiment 1 instructed participants to select the target that best matched the sample, while Experiment 2 additionally directed participants’ attention to same and different relations. Participants in Experiment 2 demonstrated improved performance when solving analogical mappings, suggesting that directing attention to relational characteristics affected behavior. Higher-performing participants—those above chance performance on the final block of system mappings—solved more analogical RMTS problems and had greater visuospatial working memory, abstraction, verbal analogy, and scene analogy scores compared to lower performers. Lower performers were less dynamic in their performance across blocks and demonstrated negative relationships between analogy and system mapping accuracy, suggesting increased interference between these relational structures. Participant performance on RMTS problems did not change monotonically with relational complexity, suggesting that increases in relational complexity place nonlinear demands on working memory. We argue that competing relational information causes additional interference, especially in individuals with lower executive function abilities.
Flexible codes and loci of visual working memory
Neural correlates of visual working memory have been found in early visual, parietal, and prefrontal regions. These findings have spurred fruitful debate over how and where in the brain memories might be represented. Here, I will present data from multiple experiments to demonstrate how a focus on behavioral requirements can unveil a more comprehensive understanding of the visual working memory system. Specifically, items in working memory must be maintained in a highly robust manner, resilient to interference. At the same time, storage mechanisms must preserve a high degree of flexibility in case of changing behavioral goals. Several examples will be explored in which visual memory representations are shown to undergo transformations, and even shift their cortical locus alongside their coding format based on specifics of the task.
Online Training of Spiking Recurrent Neural Networks With Memristive Synapses
Spiking recurrent neural networks (RNNs) are a promising tool for solving a wide variety of complex cognitive and motor tasks, due to their rich temporal dynamics and sparse processing. However, training spiking RNNs on dedicated neuromorphic hardware is still an open challenge. This is due mainly to the lack of local, hardware-friendly learning mechanisms that can solve the temporal credit assignment problem and ensure stable network dynamics, even when the weight resolution is limited. These challenges are further accentuated if one resorts to using memristive devices for in-memory computing to resolve the von Neumann bottleneck problem, at the expense of a substantial increase in variability in both the computation and the working memory of the spiking RNNs. In this talk, I will present our recent work, in which we introduced a PyTorch simulation framework of memristive crossbar arrays that enables accurate investigation of such challenges. I will show that the recently proposed e-prop learning rule can be used to train spiking RNNs whose weights are emulated in the presented simulation framework. Although e-prop locally approximates the ideal synaptic updates, it is difficult to implement the updates on the memristive substrate due to substantial device non-idealities. I will mention several widely adopted weight-update schemes that primarily aim to cope with these device non-idealities and demonstrate that accumulating gradients can enable online and efficient training of spiking RNNs on memristive substrates.
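The gradient-accumulation idea can be sketched as follows; this is a toy illustration under assumed device parameters (step size, write noise), not the implementation used in the work described.

```python
import torch

class AccumulatedMemristiveUpdate:
    """Toy sketch: accumulate high-precision gradients, commit coarse device updates.

    The 'device' weight only changes in discrete steps of `step`, and each
    programming operation is corrupted by write noise -- stand-ins for
    memristive non-idealities (all values are illustrative only).
    """

    def __init__(self, shape, step=0.05, write_noise=0.2, lr=1e-3):
        self.w_device = torch.zeros(shape)    # low-resolution on-chip weights
        self.residual = torch.zeros(shape)    # high-precision accumulator
        self.step, self.write_noise, self.lr = step, write_noise, lr

    def apply(self, grad):
        self.residual += -self.lr * grad                   # accumulate desired change
        n_pulses = torch.round(self.residual / self.step)  # whole programming pulses
        commit = n_pulses * self.step
        noise = (self.write_noise * self.step
                 * torch.randn_like(commit) * (n_pulses != 0).float())
        self.w_device += commit + noise                    # noisy discrete write
        self.residual -= commit                            # keep the un-committed remainder
        return self.w_device


# Usage sketch: feed it e-prop-style (eligibility-trace) gradient estimates per batch.
opt = AccumulatedMemristiveUpdate(shape=(128, 128))
for _ in range(10):
    fake_grad = torch.randn(128, 128)   # placeholder for real gradient estimates
    w = opt.apply(fake_grad)
```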
Optimal information loading into working memory in prefrontal cortex
Working memory involves the short-term maintenance of information and is critical in many tasks. The neural circuit dynamics underlying working memory remain poorly understood, with different aspects of prefrontal cortical (PFC) responses explained by different putative mechanisms. Using mathematical analysis, numerical simulations, and recordings from monkey PFC, we investigate a critical but hitherto ignored aspect of working memory dynamics: information loading. We find that, contrary to common assumptions, optimal information loading involves inputs that are largely orthogonal, rather than similar, to the persistent activities observed during memory maintenance. Using a novel, theoretically principled metric, we show that PFC exhibits the hallmarks of optimal information loading, and we find that such dynamics emerge naturally as a dynamical strategy in task-optimized recurrent neural networks. Our theory unifies previous, seemingly conflicting theories of memory maintenance based on attractor or purely sequential dynamics, and reveals a normative principle underlying the widely observed phenomenon of dynamic coding in PFC.
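A toy sketch of the loading argument (not the study's actual model; the two-unit network and all parameters are invented for illustration): in a non-normal linear network, an input orthogonal to the persistent mode can leave more activity in that mode after a delay than an input aligned with it.

```python
import numpy as np

# Toy 2-unit linear network with non-normal dynamics: dx/dt = A @ x.
# The slow ("persistent") mode lies along unit 1; unit 2 decays quickly
# but feeds strongly into unit 1. All numbers are illustrative.
A = np.array([[-0.05, 5.0],
              [0.00, -1.0]])

persistent_mode = np.array([1.0, 0.0])     # slow eigenvector of A
orthogonal_input = np.array([0.0, 1.0])    # loading direction orthogonal to it

def persistent_activity_after_delay(x0, T=5.0, dt=0.001):
    """Euler-integrate dx/dt = A x; return |overlap with persistent mode| at time T."""
    x = x0.astype(float)
    for _ in range(int(T / dt)):
        x = x + dt * (A @ x)
    return abs(x @ persistent_mode)

print("load along persistent mode :", persistent_activity_after_delay(persistent_mode))
print("load along orthogonal input:", persistent_activity_after_delay(orthogonal_input))
# For equal-norm initial conditions, the orthogonal input leaves far more
# activity in the persistent mode, mirroring the claim that optimal loading
# inputs need not resemble the persistent delay activity itself.
```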
Chemistry of the adaptive mind: lessons from dopamine
The human brain faces a variety of computational dilemmas, including the flexibility/stability, speed/accuracy and labor/leisure tradeoffs. I will argue that striatal dopamine is particularly well suited to dynamically regulate these computational tradeoffs depending on constantly changing task demands. This working hypothesis is grounded in evidence from recent studies on learning, motivation and cognitive control in human volunteers, using chemical PET, psychopharmacology, and/or fMRI. These studies also begin to elucidate the mechanisms underlying the huge variability in catecholaminergic drug effects across different individuals and across different task contexts. For example, I will demonstrate how the effects of the most commonly used psychostimulant, methylphenidate, on learning, Pavlovian control and effortful instrumental control depend on fluctuations in current environmental volatility, on individual differences in working memory capacity and on opportunity cost, respectively.
Neural circuits of visuospatial working memory
One elementary brain function that underlies many of our cognitive behaviors is the ability to maintain parametric information briefly in mind, on the time scale of seconds, to span delays between sensory information and actions. This component of working memory is fragile and quickly degrades with delay length. Under the assumption that behavioral delay-dependencies mark core functions of the working memory system, our goal is to find a neural circuit model that represents their neural mechanisms and apply it to research on working memory deficits in neuropsychiatric disorders. We have constrained computational models of spatial working memory with delay-dependent behavioral effects and with neural recordings in the prefrontal cortex during visuospatial working memory. I will show that a simple bump attractor model with weak inhomogeneities and short-term plasticity mechanisms can link neural data with fine-grained behavioral output on a trial-by-trial basis and account for the main delay-dependent limitations of working memory: precision, cardinal repulsion biases and serial dependence. I will finally present data from participants with neuropsychiatric disorders suggesting that serial dependence in working memory is specifically altered, and I will use the model to infer the possible neural mechanisms affected.
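As a minimal sketch of the diffusing-bump intuition behind delay-dependent precision (not the fitted model from the talk; the diffusion constant and trial counts are arbitrary), the remembered location can be treated as a random walk during the delay:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_reports(delay_s, n_trials=5000, diffusion_deg2_per_s=20.0, dt=0.05):
    """Toy bump-drift model: the remembered angle random-walks during the delay."""
    err = np.zeros(n_trials)
    for _ in range(int(delay_s / dt)):
        err += rng.normal(0.0, np.sqrt(diffusion_deg2_per_s * dt), size=n_trials)
    return err  # report error in degrees, relative to the cued location

for delay in (0.5, 1.0, 3.0, 6.0):
    sd = simulate_reports(delay).std()
    print(f"delay {delay} s -> report s.d. ~ {sd:.1f} deg")
# Error s.d. grows roughly with sqrt(delay), a signature of bump diffusion.
```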
Geometry of sequence working memory in macaque prefrontal cortex
How the brain stores a sequence in memory remains largely unknown. We investigated the neural code underlying sequence working memory using two-photon calcium imaging to record thousands of neurons in the prefrontal cortex of macaque monkeys memorizing and then reproducing a sequence of locations after a delay. We discovered a regular geometrical organization: The high-dimensional neural state space during the delay could be decomposed into a sum of low-dimensional subspaces, each storing the spatial location at a given ordinal rank, which could be generalized to novel sequences and explain monkey behavior. The rank subspaces were distributed across large overlapping neural groups, and the integration of ordinal and spatial information occurred at the collective level rather than within single neurons. Thus, a simple representational geometry underlies sequence working memory.
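A toy numpy sketch of the representational scheme described above (not the actual analysis; neuron counts and subspaces are randomly generated for illustration): the population state for a sequence is a sum of rank-specific subspaces, and the location at each rank is read out by projecting into that rank's subspace.

```python
import numpy as np

rng = np.random.default_rng(2)
n_neurons, n_locations, n_ranks = 200, 6, 3

# One random (near-orthogonal) subspace per ordinal rank; each maps a
# one-hot spatial location into population space. Purely illustrative.
rank_subspaces = [rng.standard_normal((n_neurons, n_locations)) / np.sqrt(n_neurons)
                  for _ in range(n_ranks)]

def encode(sequence):
    """Population vector for a sequence of location indices (one per rank)."""
    x = np.zeros(n_neurons)
    for k, loc in enumerate(sequence):
        x += rank_subspaces[k] @ np.eye(n_locations)[loc]
    return x

def decode(x):
    """Recover the location at each rank by projecting into that rank's subspace."""
    return [int(np.argmax(np.linalg.pinv(W) @ x)) for W in rank_subspaces]

seq = [4, 1, 5]
print("encoded then decoded:", decode(encode(seq)))   # -> [4, 1, 5] with high probability
```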
NMC4 Keynote: Formation and update of sensory priors in working memory and perceptual decision making tasks
The world around us is complex, but at the same time full of meaningful regularities. We can detect, learn and exploit these regularities automatically in an unsupervised manner, i.e. without any direct instruction or explicit reward. For example, we effortlessly estimate the average height of people in a room, or the boundaries between words in a language. These regularities and prior knowledge, once learned, can affect the way we acquire and interpret new information to build and update our internal model of the world for future decision-making processes. Despite the ubiquity of passively learning from the structured information in the environment, the mechanisms that support learning from real-world experience are largely unknown. By combining sophisticated cognitive tasks in humans and rats, neuronal measurements and perturbations in rats, and network modelling, we aim to build a multi-level description of how sensory history is utilised in inferring regularities in temporally extended tasks. In this talk, I will specifically focus on a comparative rat and human model, in combination with neural network models, to study how past sensory experiences are utilized to impact working memory and decision-making behaviours.
Computational Models of Fine-Detail and Categorical Information in Visual Working Memory: Unified or Separable Representations?
When we remember a stimulus, we rarely maintain a full-fidelity representation of the observed item. Our working memory instead maintains a mixture of the observed feature values and categorical/gist information. I will discuss evidence from computational models supporting a mix of categorical and fine-detail information in working memory. Having established the need for two memory formats in working memory, I will discuss whether categorical and fine-detailed information for a stimulus are represented separately or as a single unified representation. Computational models of these two potential cognitive structures make differing predictions about the pattern of responses in visual working memory recall tests. The present study required participants to remember the orientation of stimuli for later reproduction. The pattern of responses is used to test the competing representational structures and to quantify the relative amounts of fine-detailed and categorical information maintained. The effects of set size, encoding time, serial order, and response order on memory precision, categorical information, and guessing rates are also explored. (This is a 60-minute talk.)
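To make the modelling approach concrete, here is a small sketch (not the authors' exact model; all parameter values are placeholders) of a three-component mixture likelihood for recall errors, combining a fine-detail component centred on the true value, a categorical component centred on the nearest prototype, and uniform guessing:

```python
import numpy as np
from scipy.stats import vonmises

def mixture_loglik(errors, cat_offsets, p_target=0.6, p_cat=0.3,
                   kappa_target=15.0, kappa_cat=8.0):
    """Log-likelihood of recall errors under a target + categorical + guess mixture.

    errors      : response minus true value, in radians on (-pi, pi]
    cat_offsets : nearest category prototype minus true value, in radians
    The remaining mass (1 - p_target - p_cat) is uniform guessing.
    All parameter values are placeholders, not fitted estimates.
    """
    p_guess = 1.0 - p_target - p_cat
    like = (p_target * vonmises.pdf(errors, kappa_target, loc=0.0)
            + p_cat * vonmises.pdf(errors, kappa_cat, loc=cat_offsets)
            + p_guess / (2.0 * np.pi))
    return np.sum(np.log(like))

# Toy usage: simulated errors clustered near zero, prototype 22.5 deg away (toy value).
rng = np.random.default_rng(3)
err = rng.vonmises(0.0, 10.0, size=500)
cat = np.full(500, np.pi / 8)
print("log-likelihood:", mixture_loglik(err, cat))
```

In practice the mixture weights and concentrations would be fitted to each participant's responses, and the competing "unified" vs. "separable" structures compared by how well their predicted response distributions match the data.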
Cognition is Rhythm
Working memory is the sketchpad of consciousness, the fundamental mechanism the brain uses to gain volitional control over its thoughts and actions. For the past 50 years, working memory has been thought to rely on cortical neurons that fire continuous impulses that keep thoughts “online”. However, new work from our lab has revealed more complex dynamics. The impulses fire sparsely and interact with brain rhythms of different frequencies. Higher frequency gamma (>35 Hz) rhythms help carry the contents of working memory while lower frequency alpha/beta (~8-30 Hz) rhythms act as control signals that gate access to and clear out working memory. In other words, a rhythmic dance between brain rhythms may underlie your ability to control your own thoughts.
How do we find what we are looking for? The Guided Search 6.0 model
The talk will give a tour of Guided Search 6.0 (GS6), the latest evolution of the Guided Search model of visual search. Part 1 describes The Mechanics of Search. Because we cannot recognize more than a few items at a time, selective attention is used to prioritize items for processing. Selective attention to an item allows its features to be bound together into a representation that can be matched to a target template in memory or rejected as a distractor. The binding and recognition of an attended object is modeled as a diffusion process taking > 150 msec/item. Since selection occurs more frequently than that, it follows that multiple items are undergoing recognition at the same time, though asynchronously, making GS6 a hybrid serial and parallel model. If a target is not found, search terminates when an accumulating quitting signal reaches a threshold. Part 2 elaborates on the five sources of Guidance that are combined into a spatial “priority map” to guide the deployment of attention (hence “guided search”). These are (1) top-down and (2) bottom-up feature guidance, (3) prior history (e.g. priming), (4) reward, and (5) scene syntax and semantics. Finally, in Part 3, we will consider the internal representation of what we are searching for; what is often called “the search template”. That search template is really two templates: a guiding template (probably in working memory) and a target template (in long term memory). Put these pieces together and you have GS6.
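A schematic sketch of how guidance sources might be combined into a priority map (an illustration of the idea, not the GS6 implementation; the maps, weights, and noise level are invented):

```python
import numpy as np

rng = np.random.default_rng(4)
H, W = 32, 32   # toy visual-field grid

# Five guidance maps (random fields standing in for the real sources).
guidance = {
    "top_down": rng.random((H, W)),
    "bottom_up": rng.random((H, W)),
    "history": rng.random((H, W)),    # e.g. priming
    "reward": rng.random((H, W)),
    "scene": rng.random((H, W)),      # scene syntax and semantics
}
weights = {"top_down": 2.0, "bottom_up": 1.0, "history": 0.5,
           "reward": 0.5, "scene": 1.5}   # illustrative weights only

def priority_map(guidance, weights, noise_sd=0.1):
    """Weighted sum of normalized guidance maps plus internal noise."""
    pm = np.zeros((H, W))
    for name, g in guidance.items():
        g = (g - g.min()) / (g.max() - g.min() + 1e-12)   # normalize each source
        pm += weights[name] * g
    return pm + noise_sd * rng.standard_normal((H, W))

pm = priority_map(guidance, weights)
next_attended = np.unravel_index(np.argmax(pm), pm.shape)
print("next attended location (row, col):", next_attended)
```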
Interactions between visual cortical neurons that give rise to conscious perception
I will discuss the mechanisms that determine whether a weak visual stimulus will reach consciousness or not. If the stimulus is simple, early visual cortex acts as a relay station that sends the information to higher visual areas. If the stimulus reaches a minimal strength, it will be stored in working memory and can be reported. However, during more complex visual perceptions, which for example depend on the segregation of a figure from the background, the role of early visual cortex goes beyond that of a simple relay. It now acts as a cognitive blackboard, and conscious perception depends on it. Our results inspire new approaches to creating a visual prosthesis for the blind by creating a direct interface with the visual brain. I will discuss how high-channel-count interfaces with the visual cortex might be used to restore a rudimentary form of vision in blind individuals.
The diachronic account of attentional selectivity
Many models of attention assume that attentional selection takes place at a specific moment in time which demarcates the critical transition from pre-attentive to attentive processing of sensory input. We argue that this intuitively appealing account is not only inaccurate, but has led to substantial conceptual confusion (to the point where some attention researchers offer to abandon the term ‘attention’ altogether). As an alternative, we offer a “diachronic” framework that describes attentional selectivity as a process that unfolds over time. Key to this view is the concept of attentional episodes, brief periods of intense attentional amplification of sensory representations that regulate access to working memory and response-related processes. We describe how attentional episodes are linked to earlier attentional mechanisms and to recurrent processing at the neural level. We present data showing that multiple sequential events can be involuntarily encoded in working memory when they appear during the same attentional episode, whether they are relevant or not. We also discuss the costs associated with processing multiple events within a single episode. Finally, we argue that breaking down the dichotomy between pre-attentive and attentive (as well as early vs. late selection) offers new solutions to old problems in attention research that have never been resolved. It can provide a unified and conceptually coherent account of the network of cognitive and neural processes that produce the goal-directed selectivity in perceptual processing that is commonly referred to as “attention”.
What are the consequences of directing attention within working memory?
The role of attention in working memory remains controversial, but there is some agreement on the notion that the focus of attention holds mnemonic representations in a privileged state of heightened accessibility in working memory, resulting in better memory performance for items that receive focused attention during retention. Closely related, representations held in the focus of attention are often observed to be robust and protected from degradation caused by either perceptual interference (e.g., Makovski & Jiang, 2007; van Moorselaar et al., 2015) or decay (e.g., Barrouillet et al., 2007). Recent findings indicate, however, that representations held in the focus of attention are particularly vulnerable to degradation, and thus, appear to be particularly fragile rather than robust (e.g., Hitch et al., 2018; Hu et al., 2014). The present set of experiments aims at understanding the apparent paradox of information in the focus of attention having a protected vs. vulnerable status in working memory. To that end, we examined the effect of perceptual interference on memory performance for information that was held within vs. outside the focus of attention, across different ways of bringing items in the focus of attention and across different time scales.
Psychological essentialism in working memory research
Psychological essentialism is ubiquitous. It is one of the primary bases of thought and behaviour throughout our entire lifetime. This human tendency to posit an unseen, hidden entity behind observable phenomena or exemplars, however, leads us to somewhat biased thinking and reasoning, even in the realm of science, including psychology. For example, a latent variable extracted from various measurements is just a statistical property calculated in structural equation modeling and is therefore not necessarily a fundamental reality. Yet we occasionally feel that the essential nature of such a psychological construct exists a priori. This talk will demonstrate examples of psychological essentialism in psychology and examine its influence on working memory-related issues, e.g., working memory training. Such demonstration, examination, and the subsequent discussion of these topics will provide us with an opportunity to reconsider the concept of working memory.
Removing information from working memory
Holding information in working memory is essential for cognition, but removing unwanted thoughts is equally important. There is great flexibility in how we can manipulate information in working memory, but the processes and consequences of these operations are poorly understood. In this talk I will discuss our recent findings using multivariate pattern analyses of fMRI brain data to demonstrate the successful removal of information from working memory using three different strategies: suppressing a specific thought, replacing a thought with a different one, and clearing the mind of all thought. These strategies are supported by distinct brain regions and have differential consequences on the encoding of new information. I will discuss implications of these results on theories of memory and I will highlight some new directions involving the use of real-time neurofeedback to investigate causal links between brain and behavior.
Categories, language, and visual working memory: how verbal labels change capacity limitations
The limited capacity of visual working memory constrains the quantity and quality of the information we can store in mind for ongoing processing. Research from our lab has demonstrated that verbal labeling/categorization of visual inputs increases their retention and fidelity in visual working memory. In this talk, I will outline the hypotheses that explain the interaction between visual and verbal inputs in working memory, leading to the boosts we observed. I will further show how manipulations of the categorical distinctiveness of the labels, the timing of their occurrence, the items to which labels are applied, and their validity modulate the benefits one can draw from combining visual and verbal inputs to alleviate capacity limitations. Finally, I will discuss the implications of these results for our understanding of working memory and its interaction with prior knowledge.
Differential working memory functioning
The integrated conflict monitoring theory of Botvinick introduced cognitive demand into conflict monitoring research. We investigated the effects of individual differences in cognitive demand, and in another determinant of conflict monitoring termed reinforcement sensitivity, on conflict monitoring. We showed evidence of differential variability in conflict monitoring intensity using electroencephalography (EEG), functional magnetic resonance imaging (fMRI) and behavioral data. Our data suggest that individual differences in anxiety and reasoning ability are differentially related to the recruitment of proactive and reactive cognitive control (cf. Braver). Based on previous findings, the team of the Leue-Lab investigated new psychometric data on conflict monitoring and proactive-reactive cognitive control. Moreover, data from the Leue-Lab suggest the relevance of individual differences in conflict monitoring for the context of deception. In this respect, we plan new studies highlighting individual differences in the functioning of the anterior cingulate cortex (ACC). Disentangling the role of individual differences in working memory-related cognitive demand, mental effort, and reinforcement-related processes opens new insights for cognitive-motivational approaches to information processing.
The Challenge and Opportunities of Mapping Cortical Layer Activity and Connectivity with fMRI
In this talk I outline the technical challenges and current solutions to layer fMRI. Specifically, I describe our acquisition strategies for maximizing resolution, spatial coverage, and time efficiency as well as, perhaps most importantly, vascular specificity. Novel applications from our group are shown, including mapping feedforward and feedback connections to M1 during task and sensory input modulation and to S1 during a sensory prediction task. Layer-specific activity in dorsolateral prefrontal cortex during a working memory task is also demonstrated. Additionally, I’ll show preliminary work on mapping whole-brain layer-specific resting-state connectivity and hierarchy.
Memory for Latent Representations: An Account of Working Memory that Builds on Visual Knowledge for Efficient and Detailed Visual Representations
Visual knowledge obtained from our lifelong experience of the world plays a critical role in our ability to build short-term memories. We propose a mechanistic explanation of how working memory (WM) representations are built from the latent representations of visual knowledge and can then be reconstructed. The proposed model, Memory for Latent Representations (MLR), features a variational autoencoder with an architecture that corresponds broadly to the human visual system and an activation-based binding pool of neurons that binds items’ attributes to tokenized representations. The simulation results revealed that shape information for stimuli that the model was trained on can be encoded and retrieved efficiently from latents in higher levels of the visual hierarchy. On the other hand, novel patterns that are completely outside the training set can be stored from a single exposure using only latents from early layers of the visual system. Moreover, the representation of a given stimulus can have multiple codes, representing specific visual features such as shape or color, in addition to categorical information. Finally, we validated our model by testing a series of predictions against behavioral results acquired from WM tasks. The model provides a compelling demonstration of visual knowledge yielding the formation of compact visual representations for efficient memory encoding.
Multi-scale synaptic analysis for psychiatric/emotional disorders
Dysregulation of emotional processing and its integration with cognitive functions are central features of many mental/emotional disorders associated both with externalizing problems (aggressive, antisocial behaviors) and internalizing problems (anxiety, depression). As Dr. Joseph LeDoux, our invited speaker in this program, wrote in his famous book “Synaptic Self: How Our Brains Become Who We Are”, the brain’s synapses are the channels through which we think, act, imagine, feel, and remember. Synapses encode the essence of personality, enabling each of us to function as a distinctive, integrated individual from moment to moment. Thus, exploring the functioning of synapses leads to an understanding of the mechanisms of (patho)physiological brain function. In this context, we have investigated the pathophysiology of psychiatric disorders, with particular emphasis on synaptic function in mouse models of various psychiatric disorders such as schizophrenia, autism, depression, and PTSD. Our current interest is how synaptic inputs are integrated to generate the action potential. The spatiotemporal organization of neuronal firing is crucial for information processing, but how thousands of inputs to the dendritic spines drive firing remains a central question in neuroscience. We identified a distinct pattern of synaptic integration in the disease-related models, in which extra-large (XL) spines generate NMDA spikes that are sufficient to drive neuronal firing. We observed experimentally and theoretically that XL spines correlated negatively with working memory. Our work offers a new concept of dendritic computation and network dynamics and calls for a substantial reconsideration of psychiatric research. The second half of my talk concerns the development of a novel synaptic tool. No matter how beautifully we can visualize spine morphology and how accurately we can quantify synaptic integration, the links between synapse and brain function remain correlational. To test the causal relationship between synapse and brain function, we established AS-PaRac1, which can specifically label and manipulate recently potentiated dendritic spines (Hayashi-Takagi et al., 2015, Nature). Using AS-PaRac1, we developed activity-dependent simultaneous labeling of presynaptic boutons and potentiated spines to establish “functional connectomics” at synaptic resolution. Applying this new imaging method to PTSD model mice, we identified a completely new functional neural circuit, brain region A→B→C, with a very strong signal-to-noise ratio in the PTSD model mice. This novel “functional connectomics” tool and its photo-manipulation could open up new areas of emotional/psychiatric research and, by extension, shed light on the neural networks that determine who we are.
Visual working memory representations are distorted by their use in perceptual comparisons
Visual working memory (VWM) allows us to maintain a small amount of task-relevant information in mind so that we can use it to guide our behavior. Although past studies have successfully characterized its capacity limit and representational quality during maintenance, the consequences of its use for task-relevant behaviors have been largely unknown. In this talk, I will demonstrate that VWM representations become distorted when they are used for perceptual comparisons with new visual inputs, especially when the inputs are subjectively similar to the VWM representations. Furthermore, I will show that this similarity-induced memory bias (SIMB) occurs for both simple (e.g., color, shape) and complex stimuli (e.g., real-world objects, faces) that are perceptually encoded and retrieved from long-term memory. Given the observed versatility of the SIMB, its implications for other memory distortion phenomena (e.g., distractor-induced distortion, misinformation effect) will be discussed.
Perception, attention, visual working memory, and decision making: The complete consort dancing together
Our research investigates how processes of attention, visual working memory (VWM), and decision-making combine to translate perception into action. Within this framework, the role of VWM is to form stable representations of transient stimulus events that allow them to be identified by a decision process, which we model as a diffusion process. In psychophysical tasks, we find the capacity of VWM is well described by a sample-size model, which attributes changes in VWM precision with set size to differences in the number of evidence samples recruited to represent stimuli. In the first part of the talk, I review evidence for the sample-size model and highlight the model's strengths: it provides a parameter-free characterization of the set-size effect; it has plausible neural and cognitive interpretations; an attention-weighted version of the model accounts for the power law of VWM; and it accounts for the selective updating of VWM in multiple-look experiments. In the second part of the talk, I provide a characterization of the theoretical relationship between two-choice and continuous-outcome decision tasks using the circular diffusion model, highlighting their common features. I describe recent work characterizing the joint distributions of decision outcomes and response times in continuous-outcome tasks using the circular diffusion model and show that the model can clearly distinguish variable-precision and simple mixture models of the evidence entering the decision process. The ability to distinguish these kinds of processes is central to current VWM studies.
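The core prediction of the sample-size model can be sketched in a few lines (parameter values illustrative): if a fixed pool of samples is shared among n items, each item's precision scales as 1/n, so the standard deviation of report errors grows as the square root of set size.

```python
import numpy as np

def predicted_sd(set_size, total_samples=64, sd_per_sample=12.0):
    """Sample-size model sketch: each of n items receives total_samples / n samples,
    and averaging k samples reduces the report s.d. by sqrt(k). Values are illustrative."""
    samples_per_item = total_samples / set_size
    return sd_per_sample / np.sqrt(samples_per_item)

for n in (1, 2, 4, 8):
    print(f"set size {n}: predicted report s.d. ~ {predicted_sd(n):.2f} deg")
# The s.d. grows as sqrt(n): once the single-item s.d. is known, the whole
# set-size function follows with no free parameters.
```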
Deciding to stop deciding: A cortical-subcortical circuit for forming and terminating a decision
The neurobiology of decision-making is informed by neurons capable of representing information over time scales of seconds. Such neurons were initially characterized in studies of spatial working memory, motor planning (e.g., the Richard Andersen lab) and spatial attention. For decision-making, such neurons emit graded spike rates that represent the accumulated evidence for or against a choice. They establish the conduit between the formation of the decision and its completion, usually in the form of a commitment to an action, even if provisional. Indeed, many decisions appear to arise through an accumulation of noisy samples of evidence to a terminating threshold, or bound. Previous studies show that single neurons in the lateral intraparietal area (LIP) represent the accumulation of evidence when monkeys make decisions about the direction of random dot motion (RDM) and express their decision with a saccade to the neuron’s preferred target. The mechanism of termination (the bound) is elusive. LIP is interconnected with other brain regions that also display decision-related activity. Whether these areas play roles in the decision process that are similar to or fundamentally different from that of LIP is unclear. I will present new unpublished experiments that begin to resolve these issues by recording from populations of neurons simultaneously in LIP and one of its primary targets, the superior colliculus (SC), while monkeys make difficult perceptual decisions.
Neural correlates of cognitive control across the adult lifespan
Cognitive control involves the flexible allocation of mental resources during goal-directed behaviour and comprises three correlated but distinct domains: inhibition, task shifting, and working memory. Healthy ageing is characterised by reduced cognitive control. Professor Cheryl Grady and her team have been studying the influence of age differences in large-scale brain networks on the three control processes in a sample of adults from 20 to 86 years of age. In this webinar, Professor Grady will describe three aspects of this work: 1) age-related dedifferentiation and reconfiguration of brain networks across the sub-domains; 2) individual differences in the relation of task-related activity to age, structural integrity, and task performance for each sub-domain; and 3) modulation of brain signal variability as a function of cognitive load and age during working memory. This research highlights the reduction in the dynamic range of network activity that occurs with ageing and how this contributes to age differences in cognitive control. Cheryl Grady is a senior scientist at the Rotman Research Institute at Baycrest and Professor in the departments of Psychiatry and Psychology at the University of Toronto. She held the Canada Research Chair in Neurocognitive Aging from 2005 to 2018 and was elected a Fellow of the Royal Society of Canada in 2019. Her research uses MRI to determine the role of brain network connectivity in cognitive ageing.
Networks for multi-sensory attention and working memory
Converging evidence from fMRI and EEG shows that auditory spatial attention engages the same fronto-parietal network associated with visuo-spatial attention. This network is distinct from an auditory-biased processing network that includes other frontal regions; this second network can be recruited when observers extract rhythmic information from visual inputs. We recently used a dual-task paradigm to examine whether this "division of labor" between a visuo-spatial network and an auditory-rhythmic network can be observed in a working memory paradigm. We varied the sensory modality (visual vs. auditory) and information domain (spatial vs. rhythmic) that observers had to store in working memory while also performing an intervening task. Behavioral, pupillometry, and EEG results show a complex interaction between the working memory and intervening tasks, consistent with two cognitive control networks managing auditory and visual inputs based on the kind of information being processed.
Analogical reasoning and metaphor processing in autism - Similarities & differences
In this talk, I will present the results of two recent systematic reviews and meta-analyses on analogical reasoning and metaphor processing in autism, together with the results of a study that investigated verbal analogical reasoning and metaphor processing in the same sample of participants. Both metaphors and analogies rely on exploiting similarities, and both require contextual processing. Nevertheless, our findings on metaphor processing and analogical reasoning showed distinct patterns: whereas analogical reasoning emerged as a relative strength in autism, metaphor processing was found to be a relative weakness. Additionally, both meta-analytic studies examined the relation between the intelligence level of the participants included in the studies and the effect size of the group differences between the autistic and typically developing (TD) samples. In the case of analogical reasoning, these analyses suggested that the relative advantage of ASD participants might only be present in individuals with lower levels of intelligence. By contrast, impairments in metaphor processing appeared to be more pronounced in individuals with relatively lower levels of (verbal) intelligence. In our experimental study, we administered both verbal analogies and metaphors to the same sample of high-functioning autistic participants and TD controls, matched on age, verbal IQ, working memory, and educational background. Our aim was to better understand the similarities and differences between processing analogies and metaphors, and to see whether the advantage in analogical reasoning and the disadvantage in metaphor processing are universal in autism.
Recurrent network dynamics lead to interference in sequential learning
Learning in real life is often sequential: a learner first learns task A, then task B. If the tasks are related, the learner may adapt the previously learned representation instead of generating a new one from scratch. Adaptation may ease learning task B but may also decrease performance on task A. Such interference has been observed in experimental and machine learning studies. In the latter case, it is mediated by correlations between the weight updates for the different tasks. In typical applications, like image classification with feed-forward networks, these correlated weight updates can be traced back to input correlations. For many neuroscience tasks, however, networks need not only to transform the input but also to generate substantial internal dynamics. Here we illuminate the role of internal dynamics for interference in recurrent neural networks (RNNs). We analyze RNNs trained sequentially on neuroscience tasks with gradient descent and observe forgetting even for orthogonal tasks. We find that the degree of interference changes systematically with task properties, especially with the emphasis on input-driven over autonomously generated dynamics. To better understand our numerical observations, we thoroughly analyze a simple model of working memory: for task A, a network is presented with an input pattern and trained to generate a fixed point aligned with this pattern; for task B, the network has to memorize a second, orthogonal pattern. Adapting an existing representation corresponds to rotation of the fixed point in phase space, as opposed to the emergence of a new one. We show that the two modes of learning – rotation vs. new formation – are directly linked to recurrent vs. input-driven dynamics. We make this notion precise in a further simplified, analytically tractable model, where learning is restricted to a 2x2 matrix. In our analysis of trained RNNs, we also make the surprising observation that, across different tasks, larger random initial connectivity reduces interference. Analyzing the fixed-point task reveals the underlying mechanism: the random connectivity strongly accelerates the learning mode of new formation and has less effect on rotation. New formation thus wins the race to zero loss, and interference is reduced. Altogether, our work offers a new perspective on sequential learning in recurrent networks, and the emphasis on internally generated dynamics allows us to take the history of individual learners into account.
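To give a concrete flavour of sequential learning with a small recurrent weight matrix, the sketch below trains a 2x2 matrix by gradient descent so that a pattern becomes an approximate fixed point of discrete tanh dynamics, first for pattern A and then for an orthogonal pattern B, and then re-measures the task-A loss. The specific dynamics, loss, patterns, and learning-rate choices are my own illustrative assumptions, not the authors' model; because the loss depends on the weights through a multi-step rollout, training on the orthogonal pattern can still perturb the first solution.

```python
# Toy sequential-learning demo: a 2x2 recurrent matrix W is trained by gradient
# descent so that a pattern is (approximately) a fixed point of x_{t+1} = tanh(W x_t),
# first for p1 (task A), then for the orthogonal p2 (task B). All choices below
# (patterns, horizon T, learning rate) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
p1 = np.array([0.8, 0.0])   # pattern for task A
p2 = np.array([0.0, 0.8])   # orthogonal pattern for task B
T = 5                       # rollout length: the loss depends on W through T steps

def rollout(W, p):
    xs = [p]
    for _ in range(T):
        xs.append(np.tanh(W @ xs[-1]))
    return xs               # trajectory x_0 ... x_T

def loss_and_grad(W, p):
    """Loss sum_t ||x_t - p||^2 over the rollout and its gradient w.r.t. W (manual backprop)."""
    xs = rollout(W, p)
    loss = sum(float((x - p) @ (x - p)) for x in xs[1:])
    grad = np.zeros_like(W)
    delta = np.zeros_like(p)
    for t in range(T, 0, -1):
        delta = delta + 2.0 * (xs[t] - p)     # direct loss term at step t
        delta = delta * (1.0 - xs[t] ** 2)    # through tanh (xs[t] = tanh(W @ xs[t-1]))
        grad += np.outer(delta, xs[t - 1])
        delta = W.T @ delta                   # propagate to the previous state
    return loss, grad

def train(W, p, lr=0.05, steps=3000):
    for _ in range(steps):
        W -= lr * loss_and_grad(W, p)[1]
    return W

W = 0.1 * rng.normal(size=(2, 2))             # small random initial connectivity
W = train(W, p1)
print("task A loss after learning A:", loss_and_grad(W, p1)[0])
W = train(W, p2)
print("task A loss after learning B:", loss_and_grad(W, p1)[0], "(> 0 indicates forgetting)")
```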
Applications of Multisensory Facilitation of Learning
In this talk I’ll discuss the translation of findings on multisensory facilitation of learning into cognitive training. I’ll first review some early findings of multisensory facilitation of learning and then discuss how we have been translating these basic-science approaches into gamified training interventions to improve cognitive functions. I’ll touch on approaches to training vision, hearing, and working memory that we are developing at the UCR Brain Game Center for Mental Fitness and Well-being. I look forward to discussing both the basic science and the complexities of translating approaches from basic science into the more complex frameworks often used in interventions.
Human Single-Neuron recordings reveal neuronal mechanisms of Working Memory
Working memory (WM) is a fundamental human cognitive capacity that allows us to maintain and manipulate information in an active form for a short period of time. Thanks to a unique opportunity to record the activity of neurons in humans during epilepsy monitoring, we could test the neuronal mechanisms of this cognitive capacity. We showed that the firing rate of image-selective neurons in the medial temporal lobe persists through the maintenance period of a working memory task. This activity was behaviorally relevant and formed attractors in state space. Furthermore, we showed that these neurons phase-lock to ongoing slow-frequency oscillations, and that the properties of this phase locking depend on memory content and load. During high memory loads, the phase of the oscillation to which neurons lock provides information about memory content that is not available in the neurons' firing rates.
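As a generic illustration of this kind of spike-phase analysis (a standard Hilbert-transform approach, not necessarily the exact pipeline used in this work), the sketch below builds a synthetic slow oscillation and a phase-modulated spike train, then quantifies locking with the resultant vector length (phase-locking value). All signal parameters are arbitrary assumptions.

```python
# Generic spike-to-oscillation phase-locking illustration on synthetic data.
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(4)
fs, dur, f_osc = 1000, 30.0, 5.0                 # sampling rate (Hz), duration (s), oscillation (Hz)
t = np.arange(0.0, dur, 1.0 / fs)
osc = np.sin(2 * np.pi * f_osc * t)              # idealized slow oscillation (real LFP would be band-pass filtered first)
phase = np.angle(hilbert(osc))                   # instantaneous phase

rate = 5.0 * (1.0 + 0.8 * np.cos(phase))         # firing rate (spikes/s), modulated by oscillation phase
spikes = rng.random(t.size) < rate / fs          # Bernoulli approximation of a Poisson spike train
spike_phases = phase[spikes]

plv = np.abs(np.mean(np.exp(1j * spike_phases))) # resultant vector length (phase-locking value)
pref_phase = np.angle(np.mean(np.exp(1j * spike_phases)))
print(f"{spikes.sum()} spikes, PLV = {plv:.2f}, preferred phase = {pref_phase:.2f} rad")
```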
Interactions between neurons during visual perception and restoring them in blindness
I will discuss the mechanisms that determine whether a weak visual stimulus reaches consciousness or not. If the stimulus is simple, early visual cortex acts as a relay station that sends the information to higher visual areas. If the stimulus reaches a minimal strength, it will be stored in working memory. During more complex visual perception, however, which for example depends on the segregation of a figure from its background, the role of early visual cortex goes beyond that of a simple relay: it acts as a cognitive blackboard, and conscious perception depends on it. Our results also inspire new approaches to creating a visual prosthesis for the blind by interfacing directly with the visual cortex. I will discuss how high-channel-count interfaces with the visual cortex might be used to restore a rudimentary form of vision in blind individuals.
Study of sensory "prior distributions" in rodent models of working memory and perceptual decision making
Unique Molecular Regulation of Prefrontal Cortex Confers Vulnerability to Cognitive Disorders
The Arnsten lab studies molecular influences on the higher cognitive circuits of the dorsolateral prefrontal cortex (dlPFC) in order to understand the mechanisms affecting working memory at the cellular and behavioral levels, with the overarching aim of identifying the actions that render the dlPFC so vulnerable in cognitive disorders. Her lab has shown that the dlPFC has unique neurotransmission and neuromodulation compared to the classic actions found in the primary visual cortex, including mechanisms that rapidly weaken PFC connections during uncontrollable stress. Reduced regulation of these stress pathways due to genetic or environmental insults contributes to dlPFC dysfunction in cognitive disorders, including calcium dysregulation and tau phosphorylation in the aging association cortex. Understanding these unique mechanisms has led to the development of a new treatment, Intuniv™, for a variety of PFC disorders.
Using Developmental Trajectories to Understand Change in Children’s Analogical Reasoning
Analogical reasoning is a complex ‘high-level’ cognitive process characterised by making inferences based on analogical comparisons. As with other high-level processes, its development takes place over a protracted time period and is believed to result from changes in multiple ‘lower-level’ systems. In the case of analogical reasoning, changes in the systems responsible for conceptual knowledge, task knowledge, inhibition, and working memory have all been causally implicated in development. Whilst there is evidence that each of these systems contributes to development, their relative contributions across development, and how they interact with each other, remain largely unanswered questions. In this presentation, I will describe how cross-sectional trajectory analysis can be used as a complementary method to shed light on these questions.
Working memory transforms goals into rewards
Humans continuously need to learn to make good choices – be it using a new video-conferencing setup, figuring out what questions to ask to successfully secure a reliable babysitter, or just selecting which location in the house is least likely to be interrupted by toddlers during work calls. However, the goals we seek to attain – such as using Zoom successfully – are often vaguely defined and previously unexperienced, and in that sense cannot be known to us as rewarding. We hypothesized that learning to make good choices in such situations nevertheless leverages reinforcement learning processes, and that executive functions in general, and working memory in particular, play a crucial role in defining the reward function for arbitrary outcomes in such a way that they become reinforcing. I will show results from a novel behavioral protocol, as well as preliminary computational and imaging evidence supporting our hypothesis.
The consequences and constraints of functional organization on behavior
In many ways, cognitive neuroscience is the attempt to use physiological observation to clarify the mechanisms that shape behavior. Over the past 25 years, fMRI has provided a system-wide and yet reasonably spatially precise view of the responses in human cortex evoked by a wide variety of stimuli and task contexts. The current talk focuses on the other direction of inference: the implications of this observed functional organization for behavior. To begin, we must interrogate the methodological and empirical frameworks underlying our derivation of this organization, in part by exploring its relationship to, and predictability from, gross neuroanatomy. Next, across a series of studies, the behavioral implications of two properties of functional organization will be explored: 1) the co-localization of visual working memory and perceptual processing, and 2) implicit learning in the context of distributed responses. In sum, these results highlight the limitations of our current approach and hint at a new general mechanism for explaining observed behavior in the context of the neural substrate.
Geometry of Neural Computation Unifies Working Memory and Planning
Cognitive tasks typically require working memory, contextual processing, and planning to be carried out in close coordination. Within neuroscience, however, these computations are usually studied as independent modular processes in the brain. In this talk I will present an alternative view: that neural representations of mappings between expected stimuli and contingent goal actions can unify working memory and planning computations. We term these stored maps contingency representations. We developed a "conditional delayed logic" task capable of disambiguating the types of representations used during the performance of delay tasks. Human behaviour in this task is consistent with the contingency representation and not with traditional sensory models of working memory. In task-optimized artificial recurrent neural network models, we investigated the representational geometry and dynamical circuit mechanisms supporting contingency-based computation, and show how contingency representations explain salient observations of neuronal tuning properties in prefrontal cortex. Finally, our theory generates novel and falsifiable predictions for single-unit and population neural recordings.
Working Memory 2.0
Working memory is the sketchpad of consciousness, the fundamental mechanism the brain uses to gain volitional control over its thoughts and actions. For the past 50 years, working memory has been thought to rely on cortical neurons that fire continuous impulses to keep thoughts “online”. However, new work from our lab has revealed more complex dynamics: neurons fire sparsely, and their spiking interacts with brain rhythms of different frequencies. Higher-frequency gamma (>35 Hz) rhythms help carry the contents of working memory, while lower-frequency alpha/beta (~8-30 Hz) rhythms act as control signals that gate access to and clear out working memory. In other words, a rhythmic dance between brain rhythms may underlie your ability to control your own thoughts.
Computational Models of Large-Scale Brain Networks - Dynamics & Function
Theoretical and computational models of neural systems have traditionally focused on small neural circuits, given the lack of reliable data on large-scale brain structures. The situation has started to change in recent years, with novel recording technologies and large, organized efforts to describe the brain at larger scales. In this talk, Professor Mejias from the University of Amsterdam will review his recent work on developing anatomically constrained computational models of large-scale cortical networks of monkeys, and how this approach can help to answer important questions in large-scale neuroscience. He will focus on three main aspects: (i) the emergence of functional interactions in different frequency regimes, (ii) the role of balance for efficient large-scale communication, and (iii) new paradigms of brain function, such as working memory, in large-scale networks.
Inferring Brain Rhythm Circuitry and Burstiness
Bursts in gamma and other frequency ranges are thought to contribute to the efficiency of working memory and communication tasks. Abnormalities in bursts have also been associated with motor and psychiatric disorders. The determinants of burst generation are not well understood, in particular how single-cell and connectivity parameters influence burst statistics and the corresponding brain states. We first present a generic mathematical model for burst generation in an excitatory-inhibitory (EI) network with self-couplings. The resulting equations for the stochastic phase and envelope of the rhythm’s fluctuations are shown to depend on only two meta-parameters that combine all the network parameters. They allow us to identify different regimes of amplitude excursions and to highlight the supportive role that network finite-size effects and noisy inputs to the EI network can have. We discuss how burst attributes, such as their durations and peak frequency content, depend on the network parameters. In practice, the problem above rests on the prior challenge of fitting such E-I spiking networks to single-neuron or population data. Thus, the second part of the talk will discuss a novel method to fit mesoscale dynamics using single-neuron data along with a low-dimensional, and hence statistically tractable, single-neuron model. The mesoscopic representation is obtained by approximating a population of neurons as multiple homogeneous ‘pools’ of neurons and modelling the dynamics of the aggregate population activity within each pool. We derive the likelihood of both single-neuron and connectivity parameters given this activity, which can then be used either to optimize parameters by gradient ascent on the log-likelihood or to perform Bayesian inference using Markov chain Monte Carlo (MCMC) sampling. We illustrate this approach using an E-I network of generalized integrate-and-fire neurons for which mesoscopic dynamics have been previously derived. We show that both single-neuron and connectivity parameters can be adequately recovered from simulated data.
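As a generic illustration of how burst statistics such as durations can be read off a rhythm's envelope (a standard analysis step, not the authors' model or fitting method), here is a sketch on a synthetic "bursty" 40 Hz signal; the amplitude process and the 75th-percentile threshold are arbitrary assumptions.

```python
# Generic burst-duration extraction from the Hilbert envelope of a synthetic
# "bursty" 40 Hz signal (illustration only; parameters and threshold are arbitrary).
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(5)
fs, dur, f_gamma = 1000.0, 60.0, 40.0
dt = 1.0 / fs
t = np.arange(0.0, dur, dt)

# Slowly fluctuating amplitude (rectified Ornstein-Uhlenbeck process) on a 40 Hz carrier.
tau, sigma = 0.2, 1.0
a = np.zeros(t.size)
for i in range(1, t.size):
    a[i] = a[i - 1] - (a[i - 1] / tau) * dt + sigma * np.sqrt(dt) * rng.normal()
signal = np.abs(a) * np.sin(2 * np.pi * f_gamma * t)

# Burst detection: envelope via the Hilbert transform, thresholded at the 75th percentile.
envelope = np.abs(hilbert(signal))
above = envelope > np.percentile(envelope, 75)
edges = np.diff(above.astype(int))
starts, ends = np.where(edges == 1)[0], np.where(edges == -1)[0]
if above[0]:
    starts = np.insert(starts, 0, 0)
if above[-1]:
    ends = np.append(ends, t.size - 1)
durations = (ends - starts) * dt
print(f"{durations.size} bursts, mean duration = {durations.mean() * 1000:.0f} ms")
```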
Cortico-cortical feedback to visual areas can explain reactivation of latent memories during working memory retention
Bernstein Conference 2024
Decision making: describing the dynamics of working memory
Bernstein Conference 2024
Excitatory and inhibitory neurons exhibit distinct roles for task learning, temporal scaling, and working memory in recurrent spiking neural network models of neocortex.
Bernstein Conference 2024
Slow Manifold Dynamics for Working Memory are near Continuous Attractors
Bernstein Conference 2024
Synergistic short-term synaptic plasticity mechanisms for working memory
Bernstein Conference 2024
Dissecting emergent network noise compensation mechanisms in working memory tasks
COSYNE 2022
You don’t always forget: Mechanisms underlying working memory lapses.
COSYNE 2022
Dynamic organization of global cell assembly for working memory
COSYNE 2022
Dynamics of interhemispheric prefrontal coordination underlying serial dependence in working memory
COSYNE 2022
The neurocognitive role of working memory load when motivation affects instrumental learning
COSYNE 2022
Neuronal implementation of the representational geometry in prefrontal working memory
COSYNE 2022
Selection from working memory can lead to catastrophic misbinding errors
COSYNE 2022
An attractor model explains space-specific distractor biases in visual working memory
COSYNE 2023
Brain-wide Hierarchical Encoding of Working Memory
COSYNE 2023
Causal role of the visual cortex in working memory and perceptual bias
COSYNE 2023
Connectome-constrained cortical circuits optimized for visual function and working memory tasks
COSYNE 2023
Drift dynamics interact with a confirmation bias in visual working memory
COSYNE 2023
From recency to central tendency biases in working memory: a unifying network model
COSYNE 2023
Human-like capacity limits in working memory models result from naturalistic sensory constraints
COSYNE 2023
Learning orthogonal working memory representations protects from interference in a dual task
COSYNE 2023
Learning representations of environmental priors in visual working memory
COSYNE 2023
Maintenance of the timing information in olfactory working memory by global activity waves
COSYNE 2023
Phase remembers: trained RNNs develop phase-locked limit cycles in a working memory task
COSYNE 2023
Timing-dependent modulation of working memory by dopaminergic release in the prefrontal cortex
COSYNE 2023
Flexible reconfiguration of visual working memory across gaze shifts
COSYNE 2025
Harnessing cortical space for generalization in a spiking neural network of working memory
COSYNE 2025
Hierarchical Working Memory and a New Magic Number
COSYNE 2025
Inhibitory circuit synchronization drives working memory computation
COSYNE 2025
Volatile working memory representations crystallize with practice
COSYNE 2025
Analyzing error patterns in primate visuospatial working memory
FENS Forum 2024
Acute bouts of exercise in preschool children do not affect working memory capacity but accelerate the execution of the task
FENS Forum 2024
Alpha-band synchronization supports the integration of feature information in visual working memory
FENS Forum 2024
Assessing the role of transcranial direct current stimulation (tDCS) in rescuing stress-induced working memory (WM) deficits – an EEG-based study
FENS Forum 2024
Auditory cortex activity during sound memory retention in an auditory working memory task
FENS Forum 2024
Behavioral mechanisms of cognitive control in jackdaws (Corvus monedula): Investigating attention and working memory
FENS Forum 2024
Cross-cultural cognition: Working memory in South African young adults
FENS Forum 2024
Differential effects of working memory load during motor decision-making on planning and execution of goal-directed pointing movements
FENS Forum 2024