Interference
Dr. Gaën Plancher
The postdoctoral position is part of a project funded by the French National Research Agency (ANR). The objective of this proposal is to examine the cognitive and neuronal mechanisms of information storage in memory from the very beginning, when information is held in working memory, to the late stage of sleep-dependent long-term consolidation of this information. One feature of the project is that it investigates these mechanisms in both humans and animals (rats), the animal model offering a more direct measurement of the cognitive and neuronal mechanisms of memory. The project brings together specialists in the neurocognitive mechanisms of memory in humans and specialists in the neuronal mechanisms of memory in rats. The postdoctoral project itself focuses on humans. It is well acknowledged that the content of working memory is erased and reset after a short time, to prevent irrelevant information from proactively interfering with newly stored information. Gaël Malleret, Paul Salin and their colleagues (2017) recently explored these interference phenomena in rats. Surprisingly, they observed that under certain conditions (a task with a high level of proactive interference), these interferences could be consolidated in long-term memory. A 24-hour gap involving sleep, known to allow consolidation processes to unfold, was a necessary and sufficient condition for the long-term proactive interference effect to occur. The objective of the postdoctoral project is to better understand the impact of these interference phenomena on human memory. Behavioral and neuronal (EEG) data will be collected at various delays: immediately, after a delay, and after an interval of sleep.
How the presynapse forms and functions
Nervous system function relies on the polarized architecture of neurons, established by directional transport of pre- and postsynaptic cargoes. While delivery of postsynaptic components depends on the secretory pathway, the identity of the membrane compartment(s) that supply presynaptic active zone (AZ) and synaptic vesicle (SV) proteins is largely unknown. I will discuss recent advances in our understanding of how key components of the presynaptic machinery for neurotransmitter release are transported and assembled, focusing on our studies in genome-engineered human induced pluripotent stem cell-derived neurons. Specifically, I will focus on the composition and cell biological identity of the axonal transport vesicles that shuttle key components of neurotransmission to nascent synapses, and on the machinery for axonal transport and its control by signaling lipids. Our studies identify a crucial mechanism mediating the delivery of SV and active zone proteins to developing synapses and reveal connections to neurological disorders. In the second part of my talk, I will discuss how exocytosis and endocytosis are coupled to maintain presynaptic membrane homeostasis. I will present unpublished data regarding the role of membrane tension in the coupling of exocytosis and endocytosis at synapses. We have identified an endocytic BAR domain protein that is capable of sensing alterations in membrane tension caused by the exocytotic fusion of SVs, thereby initiating compensatory endocytosis to restore plasma membrane area. Interference with this mechanism results in defects in the coupling of presynaptic exocytosis and SV recycling at human synapses.
An inconvenient truth: pathophysiological remodeling of the inner retina in photoreceptor degeneration
Photoreceptor loss is the primary cause of vision impairment and blindness in diseases such as retinitis pigmentosa and age-related macular degeneration. The death of rods and cones allows retinoids to permeate the inner retina, causing retinal ganglion cells to become spontaneously hyperactive, severely reducing the signal-to-noise ratio and creating interference in the communication between the surviving retina and the brain. Treatments aimed at blocking or reducing this hyperactivity improve vision initiated by surviving photoreceptors and could enhance the fidelity of signals generated by vision-restoration methodologies.
Learning representations of specifics and generalities over time
There is a fundamental tension between storing discrete traces of individual experiences, which allows recall of particular moments in our past without interference, and extracting regularities across these experiences, which supports generalization and prediction in similar situations in the future. One influential proposal for how the brain resolves this tension is that it separates the processes anatomically into Complementary Learning Systems, with the hippocampus rapidly encoding individual episodes and the neocortex slowly extracting regularities over days, months, and years. But this does not explain our ability to learn and generalize from new regularities in our environment quickly, often within minutes. We have put forward a neural network model of the hippocampus that suggests that the hippocampus itself may contain complementary learning systems, with one pathway specializing in the rapid learning of regularities and a separate pathway handling the region’s classic episodic memory functions. This proposal has broad implications for how we learn and represent novel information of specific and generalized types, which we test across statistical learning, inference, and category learning paradigms. We also explore how this system interacts with slower-learning neocortical memory systems, with empirical and modeling investigations into how the hippocampus shapes neocortical representations during sleep. Together, the work helps us understand how structured information in our environment is initially encoded and how it then transforms over time.
Network inference via process motifs for lagged correlation in linear stochastic processes
A major challenge for causal inference from time-series data is the trade-off between computational feasibility and accuracy. Motivated by process motifs for lagged covariance in an autoregressive model with slow mean-reversion, we propose to infer networks of causal relations via pairwise edge measures (PEMs) that one can easily compute from lagged correlation matrices. Motivated by contributions of process motifs to covariance and lagged variance, we formulate two PEMs that correct for confounding factors and for reverse causation. To demonstrate the performance of our PEMs, we consider network inference from simulations of linear stochastic processes, and we show that our proposed PEMs can infer networks accurately and efficiently. Specifically, for slightly autocorrelated time-series data, our approach achieves accuracies higher than or similar to Granger causality, transfer entropy, and convergent cross mapping -- but with much shorter computation time than possible with any of these methods. Our fast and accurate PEMs are easy-to-implement methods for network inference with a clear theoretical underpinning. They provide promising alternatives to current paradigms for the inference of linear models from time-series data, including Granger causality, vector autoregression, and sparse inverse covariance estimation.
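The core idea -- scoring candidate edges directly from a lagged correlation matrix -- can be sketched in a few lines. The example below is a minimal illustration under assumed parameters (a hypothetical 5-node directed ring with slow mean reversion), and it uses the raw lagged correlation as a naive edge score; the PEMs proposed in the talk additionally correct for confounding factors and reverse causation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground-truth network: a directed ring of 5 nodes
n = 5
A = np.zeros((n, n))
for i in range(n):
    A[(i + 1) % n, i] = 0.3   # edge i -> i+1 with coupling 0.3
A += 0.5 * np.eye(n)          # strong diagonal: slow mean reversion

# Simulate the linear stochastic process x_{t+1} = A x_t + noise
T = 20000
x = np.zeros((T, n))
for t in range(T - 1):
    x[t + 1] = A @ x[t] + rng.standard_normal(n)

# Lagged correlation matrix: C1n[i, j] ~ corr(x_j(t), x_i(t+1))
xc = x - x.mean(axis=0)
C0 = (xc[:-1].T @ xc[:-1]) / (T - 1)
C1 = (xc[1:].T @ xc[:-1]) / (T - 1)
d = np.sqrt(np.diag(C0))
C1n = C1 / np.outer(d, d)

# Naive pairwise edge score for j -> i: the lagged correlation itself,
# with self-loops excluded; the top-k scores are the inferred edges
scores = C1n - np.diag(np.diag(C1n))
k = n  # number of true edges in the ring
threshold = np.sort(scores.ravel())[-k]
inferred = set(map(tuple, np.argwhere(scores >= threshold)))
true_edges = {((i + 1) % n, i) for i in range(n)}
print(inferred == true_edges)
```

Even this uncorrected score recovers the ring here because the direct edges dominate the lagged correlations; the talk's PEMs are designed for the harder cases where confounding and reverse causation make the naive score fail.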
Navigating Increasing Levels of Relational Complexity: Perceptual, Analogical, and System Mappings
Relational thinking involves comparing abstract relationships between mental representations that vary in complexity; however, this complexity is rarely made explicit during everyday comparisons. This study explored how people naturally navigate relational complexity and interference using a novel relational match-to-sample (RMTS) task with both minimal and relationally directed instruction to observe changes in performance across three levels of relational complexity: perceptual, analogy, and system mappings. Individual working memory and relational abilities were examined to understand RMTS performance and susceptibility to interfering relational structures. Trials were presented without practice across four blocks, and participants received feedback after each attempt to guide learning. Experiment 1 instructed participants to select the target that best matched the sample, while Experiment 2 additionally directed participants’ attention to same and different relations. Participants in Experiment 2 demonstrated improved performance when solving analogical mappings, suggesting that directing attention to relational characteristics affected behavior. Higher-performing participants—those above chance performance on the final block of system mappings—solved more analogical RMTS problems and had greater visuospatial working memory, abstraction, verbal analogy, and scene analogy scores compared to lower performers. Lower performers were less dynamic in their performance across blocks and demonstrated negative relationships between analogy and system mapping accuracy, suggesting increased interference between these relational structures. Participant performance on RMTS problems did not change monotonically with relational complexity, suggesting that increases in relational complexity place nonlinear demands on working memory. We argue that competing relational information causes additional interference, especially in individuals with lower executive function abilities.
Flexible codes and loci of visual working memory
Neural correlates of visual working memory have been found in early visual, parietal, and prefrontal regions. These findings have spurred fruitful debate over how and where in the brain memories might be represented. Here, I will present data from multiple experiments to demonstrate how a focus on behavioral requirements can unveil a more comprehensive understanding of the visual working memory system. Specifically, items in working memory must be maintained in a highly robust manner, resilient to interference. At the same time, storage mechanisms must preserve a high degree of flexibility in case of changing behavioral goals. Several examples will be explored in which visual memory representations are shown to undergo transformations, and even shift their cortical locus alongside their coding format based on specifics of the task.
Spatial alignment supports visual comparisons
Visual comparisons are ubiquitous, and they can also be an important source for learning (e.g., Gentner et al., 2016; Kok et al., 2013). In science, technology, engineering, and math (STEM), key information is often conveyed through figures, graphs, and diagrams (Mayer, 1993). Comparing within and across visuals is critical for gleaning insight into the underlying concepts, structures, and processes that they represent. This talk addresses how people make visual comparisons and how visual comparisons can be best supported to improve learning. In particular, the talk will present a series of studies exploring the Spatial Alignment Principle (Matlen et al., 2020), derived from Structure-Mapping Theory (Gentner, 1983). Structure-mapping theory proposes that comparisons involve a process of finding correspondences between elements based on structured relationships. The Spatial Alignment Principle suggests that spatially arranging compared figures directly – to support correct correspondences and minimize interference from incorrect correspondences – will facilitate visual comparisons. We find that direct placement can facilitate visual comparison in educationally relevant stimuli, and that it may be especially important when figures are less familiar. We also present complementary evidence illustrating the preponderance of visual comparisons in 7th grade science textbooks.
Adapt or Die: Transgenerational Inheritance of Pathogen Avoidance (or, How getting food poisoning might save your species)
Caenorhabditis elegans must distinguish pathogens from nutritious food sources among the many bacteria to which it is exposed in its environment. Here we show that a single exposure to purified small RNAs isolated from pathogenic Pseudomonas aeruginosa (PA14) is sufficient to induce pathogen avoidance in the treated worms and in four subsequent generations of progeny. The RNA interference (RNAi) and PIWI-interacting RNA (piRNA) pathways, the germline and the ASI neuron are all required for avoidance behaviour induced by bacterial small RNAs, and for the transgenerational inheritance of this behaviour. A single P. aeruginosa non-coding RNA, P11, is both necessary and sufficient to convey learned avoidance of PA14, and its C. elegans target, maco-1, is required for avoidance. Our results suggest that this non-coding-RNA-dependent mechanism evolved to survey the microbial environment of the worm, use this information to make appropriate behavioural decisions and pass this information on to its progeny.
What are the consequences of directing attention within working memory?
The role of attention in working memory remains controversial, but there is some agreement on the notion that the focus of attention holds mnemonic representations in a privileged state of heightened accessibility in working memory, resulting in better memory performance for items that receive focused attention during retention. Closely related, representations held in the focus of attention are often observed to be robust and protected from degradation caused by either perceptual interference (e.g., Makovski & Jiang, 2007; van Moorselaar et al., 2015) or decay (e.g., Barrouillet et al., 2007). Recent findings indicate, however, that representations held in the focus of attention are particularly vulnerable to degradation, and thus, appear to be particularly fragile rather than robust (e.g., Hitch et al., 2018; Hu et al., 2014). The present set of experiments aims at understanding the apparent paradox of information in the focus of attention having a protected vs. vulnerable status in working memory. To that end, we examined the effect of perceptual interference on memory performance for information that was held within vs. outside the focus of attention, across different ways of bringing items in the focus of attention and across different time scales.
What the fluctuating impact of memory load on decision speed tells us about thinking
Previous work with complex memory span tasks, in which simple choice decisions are imposed between presentations of to-be-remembered items, shows that these secondary tasks reduce memory span. It is less clear how reconfiguring and maintaining various amounts of information affects decision speed. We documented and replicated a non-linear effect of accumulating memory items on concurrent processing judgments, showing that this pattern could be made linear by introducing "lead-in" processing judgments prior to the start of the memory list. With lead-in judgments, there was a large and consistent cost to processing response times with the introduction of the first item in the memory list, which increased gradually per item as the list accumulated. However, once presentation of the list was complete, decision responses sped up rapidly: within a few seconds, decisions were at least as fast as when remembering a single item. This pattern of findings is inconsistent with the idea that merely holding information in mind conflicts with attention-demanding decision tasks. Instead, it is possible that reconfiguring memory items for responding provokes conflict between memory and processing in complex span tasks.
Workshop: Spatial Brain Dynamics
Traditionally, the term dynamics means changes in a system evolving over time. However, in the brain, action potentials propagate along axons to induce postsynaptic currents with different delays at many sites simultaneously. This fundamental computational mechanism evolves spatially to engage the neuron populations involved in brain functions. To identify and understand spatial processing in brains, this workshop will focus on the spatial principles of brain dynamics that determine how action potentials and membrane currents propagate in the networks of neurons that brains are made of. We will focus on non-artificial dynamics, which excludes in vitro dynamics as well as interference from electrical and optogenetic stimulation of brains in vivo. Recent non-artificial studies of spatial brain dynamics can explain how sensory, motor and internal brain functions evolve. The purpose of this workshop is to discuss these recent results and identify common principles of spatial brain dynamics.
Recurrent network dynamics lead to interference in sequential learning
Learning in real life is often sequential: A learner first learns task A, then task B. If the tasks are related, the learner may adapt the previously learned representation instead of generating a new one from scratch. Adaptation may ease learning task B but may also decrease performance on task A. Such interference has been observed in experimental and machine learning studies. In the latter case, it is mediated by correlations between weight updates for the different tasks. In typical applications, like image classification with feed-forward networks, these correlated weight updates can be traced back to input correlations. For many neuroscience tasks, however, networks need to not only transform the input, but also generate substantial internal dynamics. Here we illuminate the role of internal dynamics for interference in recurrent neural networks (RNNs). We analyze RNNs trained sequentially on neuroscience tasks with gradient descent and observe forgetting even for orthogonal tasks. We find that the degree of interference changes systematically with task properties, especially with the emphasis on input-driven over autonomously generated dynamics. To better understand our numerical observations, we thoroughly analyze a simple model of working memory: For task A, a network is presented with an input pattern and trained to generate a fixed point aligned with this pattern. For task B, the network has to memorize a second, orthogonal pattern. Adapting an existing representation corresponds to the rotation of the fixed point in phase space, as opposed to the emergence of a new one. We show that the two modes of learning – rotation vs. new formation – are directly linked to recurrent vs. input-driven dynamics. We make this notion precise in a further simplified, analytically tractable model, where learning is restricted to a 2x2 matrix.
In our analysis of trained RNNs, we also make the surprising observation that, across different tasks, larger random initial connectivity reduces interference. Analyzing the fixed point task reveals the underlying mechanism: The random connectivity strongly accelerates the learning mode of new formation, and has less effect on rotation. New formation thus wins the race to zero loss, and interference is reduced. Altogether, our work offers a new perspective on sequential learning in recurrent networks, and the emphasis on internally generated dynamics allows us to take the history of individual learners into account.
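The role of orthogonality can be illustrated in an even simpler setting than the 2x2 model described above. The sketch below (a hypothetical toy, not the authors' model) trains a linear 2x2 map by gradient descent to hold each pattern as a fixed point. Because task B's weight updates are rank-one outer products with the orthogonal pattern b, they leave the map's action on pattern a untouched: a purely input-driven linear system shows no interference, which is the baseline against which the forgetting observed in recurrent networks stands out.

```python
import numpy as np

# Two orthogonal patterns, one per task (illustrative choice)
a = np.array([1.0, 0.0])  # pattern for task A
b = np.array([0.0, 1.0])  # orthogonal pattern for task B

rng = np.random.default_rng(1)
W = 0.1 * rng.standard_normal((2, 2))  # small random initial map
lr = 0.1

def train(W, p, steps=200):
    """Gradient descent on 0.5 * ||W p - p||^2 with respect to W."""
    for _ in range(steps):
        err = W @ p - p                 # fixed-point error for pattern p
        W = W - lr * np.outer(err, p)   # gradient step (rank-one update)
    return W

W = train(W, a)                          # learn task A
loss_A_before = np.sum((W @ a - a) ** 2)
W = train(W, b)                          # then learn task B
loss_A_after = np.sum((W @ a - a) ** 2)  # task A is preserved exactly
print(loss_A_before, loss_A_after)
```

Each update during task B has the form err·bᵀ, and (err·bᵀ)a = (b·a)·err = 0, so W @ a never changes; forgetting of orthogonal tasks therefore requires the recurrent, internally generated dynamics emphasized in the abstract.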
The structure of behavior entrained to long intervals
Interpretation of interval timing data generated from animal models is complicated by ostensible motivational effects which arise from the delay-to-reward imposed by interval timing tasks, as well as by overlap between timed and non-timed responses. These factors become increasingly prevalent at longer intervals. To address these concerns, two adjustments to long interval timing tasks are proposed. First, subjects should be afforded reinforced non-timing behaviors concurrent with timing. Second, subjects should initiate the onset of timed stimuli. Under these conditions, interference by extraneous behavior would be detected in the rate of concurrent non-timing behaviors, and changes in motivation would be detected in the rate at which timed stimuli are initiated. In a task with these characteristics, rats initiated a concurrent fixed-interval (FI) random-ratio (RR) schedule of reinforcement. This design facilitated response-initiated timing behavior, even at increasingly long delays. Pre-feeding manipulations revealed an effect on the number of initiated trials, but not on the timing peak function.
Mechanisms of cortical communication during decision-making
Regulation of information flow in the brain is critical for many forms of behavior. In the process of sensory based decision-making, decisions about future actions are held in memory until enacted, making them potentially vulnerable to distracting sensory input. Therefore, gating of information flow from sensory to motor areas could protect memory from interference during decision-making, but the underlying network mechanisms are not understood. I will present our recent experimental and modeling work describing how information flow from the sensory cortex can be gated by state-dependent frontal cortex dynamics during decision-making in mice. Our results show that communication between brain regions can be regulated via attractor dynamics, which control the degree of commitment to an action, and reveal a novel mechanism of gating of neural information.
Crowding and the Architecture of the Visual System
Classically, vision is seen as a cascade of local, feedforward computations. This framework has been tremendously successful, inspiring a wide range of ground-breaking findings in neuroscience and computer vision. Recently, feedforward Convolutional Neural Networks (ffCNNs), inspired by this classic framework, have revolutionized computer vision and been adopted as tools in neuroscience. However, despite these successes, there is much more to vision. I will present our work using visual crowding and related psychophysical effects as probes into visual processes that go beyond the classic framework. In crowding, perception of a target deteriorates in clutter. We focus on global aspects of crowding, in which perception of a small target is strongly modulated by the global configuration of elements across the visual field. We show that models based on the classic framework, including ffCNNs, cannot explain these effects for principled reasons and identify recurrent grouping and segmentation as a key missing ingredient. Then, we show that capsule networks, a recent kind of deep learning architecture combining the power of ffCNNs with recurrent grouping and segmentation, naturally explain these effects. We provide psychophysical evidence that humans indeed use a similar recurrent grouping and segmentation strategy in global crowding effects. In crowding, visual elements interfere across space. To study how elements interfere over time, we use the Sequential Metacontrast psychophysical paradigm, in which perception of visual elements depends on elements presented hundreds of milliseconds later. We psychophysically characterize the temporal structure of this interference and propose a simple computational model. Our results support the idea that perception is a discrete process. Together, the results presented here provide stepping-stones towards a fuller understanding of the visual system by suggesting architectural changes needed for more human-like neural computations.
CRISPR-based functional genomics in iPSC-based models of brain disease
Human genes associated with brain-related diseases are being discovered at an accelerating pace. A major challenge is the identification of the mechanisms through which these genes act, and of potential therapeutic strategies. To elucidate such mechanisms in human cells, we established a CRISPR-based platform for genetic screening in human iPSC-derived neurons, astrocytes and microglia. Our approach relies on CRISPR interference (CRISPRi) and CRISPR activation (CRISPRa), in which a catalytically dead version of the bacterial Cas9 protein recruits transcriptional repressors or activators, respectively, to endogenous genes to control their expression, as directed by a single guide RNA (sgRNA). Complex libraries of sgRNAs enable us to conduct genome-wide or focused loss-of-function and gain-of-function screens. Such screens uncover molecular players for phenotypes based on survival, stress resistance, fluorescent phenotypes, high-content imaging and single-cell RNA-Seq. To uncover disease mechanisms and therapeutic targets, we are conducting genetic modifier screens for disease-relevant cellular phenotypes in patient-derived neurons and glia with familial mutations and isogenic controls. In a genome-wide screen, we have uncovered genes that modulate the formation of disease-associated aggregates of tau in neurons with a tauopathy-linked mutation (MAPT V337M). CRISPRi/a can also be used to model and functionally evaluate disease-associated changes in gene expression, such as those caused by eQTLs, haploinsufficiency, or disease states of brain cells. We will discuss an application to Alzheimer’s Disease-associated genes in microglia.
Cholinergic regulation of learning in the olfactory system
In the olfactory system, cholinergic modulation has been associated with contrast modulation and changes in receptive fields in the olfactory bulb, as well as the learning of odor associations in the olfactory cortex. Computational modeling and behavioral studies suggest that cholinergic modulation could improve sensory processing and learning while preventing proactive interference when task demands are high. However, how sensory inputs and/or learning regulate incoming modulation has not yet been elucidated. We here use a computational model of the olfactory bulb, piriform cortex (PC) and horizontal limb of the diagonal band of Broca (HDB) to explore how olfactory learning could regulate cholinergic inputs to the system in a closed feedback loop. In our model, the novelty of an odor is reflected in the firing rates and sparseness of cortical neurons in response to that odor, and these firing rates can directly regulate learning in the system by modifying cholinergic inputs to the system.
Flexible motor sequencing through thalamic control of cortical dynamics
The mechanisms by which neural circuits generate an extensible library of motor motifs and flexibly string them into arbitrary sequences are unclear. We developed a model in which inhibitory basal ganglia output neurons project to thalamic units that are themselves bidirectionally connected to a recurrent cortical network. During movement sequences, electrophysiological recordings of basal ganglia output neurons show sustained activity patterns that switch at the boundaries between motifs. Thus, we model these inhibitory patterns as silencing some thalamic neurons while leaving others disinhibited and free to interact with cortex during specific motifs. We show that a small number of disinhibited thalamic neurons can control cortical dynamics to generate specific motor output in a noise robust way. If the thalamic units associated with each motif are segregated, many motor outputs can be learned without interference and then combined in arbitrary orders for the flexible production of long and complex motor sequences.
Learning orthogonal working memory representations protects from interference in a dual task
COSYNE 2023
Task interference underlies the task switch cost in perceptual decision-making
COSYNE 2023
Does interference inhibition in memory entail inhibitory neurotransmission?
FENS Forum 2024
A novel interaction pattern between consecutive training experiences: long-term memory augmentation rather than interference in motor sequence learning
FENS Forum 2024