Experimental Results
Cognitive maps as expectations learned across episodes – a model of the two dentate gyrus blades
How can the hippocampal system transition from episodic one-shot learning to a multi-shot learning regime, and what is the utility of the resulting neural representations? This talk will explore the role of dentate gyrus (DG) anatomy in this context. The canonical DG model holds that it performs pattern separation. More recent experimental results challenge this standard model, suggesting that DG function is more complex: it also supports the precise binding of objects and events to space and the integration of information across episodes. Very recent studies attribute pattern separation and pattern integration to anatomically distinct parts of the DG (the suprapyramidal blade vs. the infrapyramidal blade). We propose a computational model that investigates this distinction. In the model, two processing streams (potentially localized in separate blades) contribute to the storage of distinct episodic memories and the integration of information across episodes, respectively. The latter forms generalized expectations across episodes, eventually yielding a cognitive map. We train the model on two data sets: MNIST and plausible entorhinal cortex inputs. Comparing the two streams yields a prediction error, which can drive the storage of poorly predicted memories and the forgetting of well-predicted ones. We suggest that differential processing across the DG aids the iterative construction of spatial cognitive maps that serve the generation of location-dependent expectations, while at the same time preserving episodic memory traces of idiosyncratic events.
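As an illustration of the storage-and-forgetting mechanism described above, here is a minimal sketch in which one stream keeps discrete episodic traces while the other integrates a running expectation, and their mismatch gates storage. The architecture, error measure, and all parameter values (D, THETA, the learning rate) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 64          # input dimensionality (entorhinal-like feature vector)
THETA = 0.6     # storage threshold on the prediction error (assumed)

episodic_store = []         # "pattern separation" stream: discrete traces
map_weights = np.zeros(D)   # "pattern integration" stream: running expectation

def process_episode(x, lr=0.1):
    """Compare the input against the integrated expectation; store it as an
    episodic trace only if poorly predicted, and always update the slow map."""
    global map_weights
    error = np.linalg.norm(x - map_weights) / np.sqrt(D)  # per-dim RMS error
    if error > THETA:                        # poorly predicted -> store
        episodic_store.append(x.copy())
    map_weights += lr * (x - map_weights)    # integrate across episodes
    return error

for t in range(300):
    # typical episodes around a common statistic, with rare idiosyncratic ones
    x = rng.normal(1.0, 0.3, D)
    if rng.random() < 0.05:
        x += rng.normal(2.0, 0.5, D)
    process_episode(x)

# forget stored episodes that the cognitive map now predicts well
episodic_store = [m for m in episodic_store
                  if np.linalg.norm(m - map_weights) / np.sqrt(D) > THETA]
print(f"{len(episodic_store)} idiosyncratic episodes retained")
```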
Silences, Spikes and Bursts: Three-Part Knot of the Neural Code
When a neuron breaks silence, it can emit action potentials in a number of patterns. Some responses are so sudden and intense that electrophysiologists felt the need to single them out, labeling action potentials emitted at a particularly high frequency with a metonym – bursts. Is there more to bursts than a figure of speech? After all, sudden bouts of high-frequency firing are expected to occur whenever inputs surge. In this talk, I will discuss the implications of seeing the neural code as having three syllables: silences, spikes and bursts. In particular, I will describe recent theoretical and experimental results that implicate bursting in the implementation of top-down attention and the coordination of learning.
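To make the three-syllable picture concrete, the sketch below segments a spike train into silences, isolated spikes, and bursts using a simple inter-spike-interval criterion; the threshold values are assumptions chosen for illustration, not a definitive burst detector.

```python
import numpy as np

def parse_spike_train(spike_times, burst_isi=0.01, silence_gap=0.5):
    """Label each spike as part of a burst (an inter-spike interval below
    burst_isi, in s) or as isolated; gaps longer than silence_gap (s)
    count as silences."""
    spike_times = np.asarray(spike_times)
    isis = np.diff(spike_times)
    in_burst = np.zeros(len(spike_times), dtype=bool)
    # a spike belongs to a burst if the ISI to either neighbor is short
    in_burst[:-1] |= isis < burst_isi
    in_burst[1:] |= isis < burst_isi
    n_silences = int(np.sum(isis > silence_gap))
    return in_burst, n_silences

spikes = [0.10, 0.102, 0.104, 0.9, 1.8, 1.802, 1.805]  # seconds
burst_mask, silences = parse_spike_train(spikes)
print("burst spikes:", burst_mask.sum(), "| isolated:", (~burst_mask).sum(),
      "| silences:", silences)
```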
Behavioral Timescale Synaptic Plasticity (BTSP) for biologically plausible credit assignment across multiple layers via top-down gating of dendritic plasticity
A central problem in biological learning is how information about the outcome of a decision or behavior can be used to reliably guide learning across distributed neural circuits while obeying biological constraints. This “credit assignment” problem is commonly solved in artificial neural networks through supervised gradient descent and the backpropagation algorithm. In contrast, biological learning is typically modelled using unsupervised Hebbian learning rules. While these rules only use local information to update synaptic weights, and are sometimes combined with weight constraints to reflect a diversity of excitatory (only positive weights) and inhibitory (only negative weights) cell types, they do not prescribe a clear mechanism for how to coordinate learning across multiple layers and propagate error information accurately across the network. In recent years, several groups have drawn inspiration from the known dendritic non-linearities of pyramidal neurons to propose new learning rules and network architectures that enable biologically plausible multi-layer learning by processing error information in segregated dendrites. Meanwhile, recent experimental results from the hippocampus have revealed a new form of plasticity—Behavioral Timescale Synaptic Plasticity (BTSP)—in which large dendritic depolarizations rapidly reshape synaptic weights and stimulus selectivity with as little as a single stimulus presentation (“one-shot learning”). Here we explore the implications of this new learning rule through a biologically plausible implementation in a rate neuron network. We demonstrate that regulation of dendritic spiking and BTSP by top-down feedback signals can effectively coordinate plasticity across multiple network layers in a simple pattern recognition task. By analyzing hidden feature representations and weight trajectories during learning, we show the differences between networks trained with standard backpropagation, Hebbian learning rules, and BTSP.
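The gating idea can be sketched in a few lines: a top-down error signal selects which hidden units emit a dendritic "plateau", and only those units undergo a rapid, BTSP-like weight change toward recent presynaptic activity. The network sizes, the quantile gating rule, and the normalized readout update below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hid, n_out = 20, 10, 2
W_in = rng.normal(0.0, 0.1, (n_hid, n_in))
W_out = rng.normal(0.0, 0.1, (n_out, n_hid))

def forward(x):
    h = np.maximum(0.0, W_in @ x)   # hidden rates (ReLU dendritic output)
    return h, W_out @ h

def btsp_step(x, target, eta=0.5):
    """One-shot, plateau-gated update: top-down feedback selects which hidden
    units emit a dendritic plateau; gated units rapidly move their input
    weights toward the recent presynaptic activity (eligibility trace)."""
    global W_in, W_out
    h, y = forward(x)
    err = target - y                            # top-down error signal
    drive = W_out.T @ err                       # feedback onto dendrites
    plateau = drive > np.quantile(drive, 0.8)   # sparse gating (assumed)
    W_in[plateau] += eta * (x - W_in[plateau])  # BTSP-like one-shot change
    W_out += (0.5 / (h @ h + 1e-9)) * np.outer(err, h)  # normalized readout

x = rng.normal(0.0, 1.0, n_in)
target = np.array([1.0, 0.0])
for _ in range(5):
    btsp_step(x, target)
print("output after a few shots:", np.round(forward(x)[1], 2))
```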
Building System Models of Brain-Like Visual Intelligence with Brain-Score
Research in the brain and cognitive sciences attempts to uncover the neural mechanisms underlying intelligent behavior in domains such as vision. Due to the complexity of brain processing, studies have necessarily started with a narrow scope of experimental investigation and computational modeling. I argue that it is time for our field to take the next step: build system models that capture a range of visual intelligence behaviors along with the underlying neural mechanisms. To make progress on system models, we propose integrative benchmarking: integrating experimental results from many laboratories into suites of benchmarks that guide and constrain those models at multiple stages and scales. We showcase this approach by developing Brain-Score benchmark suites for neural (spike rate) and behavioral experiments in the primate visual ventral stream. By systematically evaluating a wide variety of model candidates, we not only identify models beginning to match a range of brain data (~50% explained variance), but also discover that models' brain scores are predicted by their object categorization performance (up to 70% ImageNet accuracy). Using the integrative benchmarks, we develop improved state-of-the-art system models that more closely match shallow recurrent neuroanatomy and early visual processing, better predict primate temporal processing, are more robust, and require fewer supervised synaptic updates. Taken together, these integrative benchmarks and system models are first steps toward modeling the complexities of brain processing in an entire domain of intelligence.
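A minimal sketch of the kind of neural benchmark underlying such scores: fit a cross-validated linear mapping from model features to recorded responses and report held-out explained variance. The synthetic data and closed-form ridge regression here are assumptions for illustration; the actual Brain-Score pipeline differs in detail.

```python
import numpy as np

rng = np.random.default_rng(0)
n_stim, n_feat, n_neurons = 200, 50, 30

# stand-ins for model activations and recorded spike rates (synthetic)
X = rng.normal(size=(n_stim, n_feat))
true_map = rng.normal(size=(n_feat, n_neurons))
Y = X @ true_map + rng.normal(scale=2.0, size=(n_stim, n_neurons))

def ridge_fit(X, Y, lam=1.0):
    """Closed-form ridge regression from features to neural responses."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

# split-half cross-validation: fit the mapping on train, score on held-out
train, test = np.arange(0, 100), np.arange(100, 200)
B = ridge_fit(X[train], Y[train])
resid = Y[test] - X[test] @ B
ev = 1 - resid.var(axis=0) / Y[test].var(axis=0)  # per-neuron expl. variance
print(f"median explained variance: {np.median(ev):.2f}")
```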
Nonlinear neural network dynamics accounts for human confidence in a sequence of perceptual decisions
Electrophysiological recordings during perceptual decision tasks in monkeys suggest that the degree of confidence in a decision is based on a simple neural signal produced by the neural decision process. Attractor neural networks provide an appropriate biophysical modeling framework and account for these experimental results very well. However, it remains unclear whether attractor neural networks can account for confidence reports in humans. We present the results of an experiment in which participants performed an orientation discrimination task followed by a confidence judgment. Here we show that an attractor neural network model quantitatively reproduces, for each participant, the relations between accuracy, response times, and confidence. We show that the attractor neural network also accounts for confidence-specific sequential effects observed in the experiment (participants are faster on trials following high-confidence trials), as well as non-confidence-specific sequential effects. Remarkably, this is obtained as an inevitable outcome of the network dynamics, without any feedback specific to the previous decision (such as a change in the model parameters before the onset of the next trial). Our results thus suggest that a metacognitive process such as confidence in one's decision is linked to the intrinsically nonlinear dynamics of the decision-making neural network.
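A toy two-population attractor model conveys the key point: because the network state at the end of one trial biases the start of the next, sequential effects arise with no explicit feedback about the previous decision. All parameters and the carry-over factor below are illustrative assumptions, not the fitted model from the study.

```python
import numpy as np

rng = np.random.default_rng(2)

def trial(r, coherence, dt=1e-3, T=1.0):
    """Two mutually inhibiting populations race to an attractor; the rate
    difference at decision time serves as the confidence signal. The state
    `r` carries over between trials, producing sequential effects."""
    for step in range(int(T / dt)):
        inp = np.array([0.5 + coherence, 0.5 - coherence])
        drive = inp + 0.7 * r - 0.6 * r[::-1]   # self-excitation, cross-inhibition
        r = r + dt / 0.02 * (-r + np.maximum(0.0, np.tanh(drive)))
        if abs(r[0] - r[1]) > 0.5:              # decision threshold (assumed)
            break
    choice = int(r[1] > r[0])
    confidence = abs(r[0] - r[1])
    rt = (step + 1) * dt
    return choice, confidence, rt, 0.3 * r      # partial state decay (assumed)

r = np.zeros(2)
for i in range(5):
    coh = rng.choice([-0.1, 0.1])
    choice, conf, rt, r = trial(r, coh)
    print(f"trial {i}: choice={choice} conf={conf:.2f} rt={rt*1000:.0f} ms")
```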
Drifting assemblies for persistent memory: Neuron transitions and unsupervised compensation
Change is ubiquitous in living beings. In particular, the connectome and neural representations can change. Nevertheless, behaviors and memories often persist over long times. In a standard model, associative memories are represented by assemblies of strongly interconnected neurons. For faithful storage, these assemblies are assumed to consist of the same neurons over time. We propose a contrasting memory model with complete temporal remodeling of assemblies, based on experimentally observed changes of synapses and neural representations. The assemblies drift freely as noisy autonomous network activity or spontaneous synaptic turnover induces neuron exchange. The exchange can be described analytically by reduced random-walk models derived from spiking neural network dynamics or from first principles. The gradual exchange allows activity-dependent and homeostatic plasticity to conserve the representational structure and to keep inputs, outputs, and assemblies consistent. This leads to persistent memory. Our findings explain recent experimental results on the temporal evolution of fear-memory representations and suggest that memory systems need to be understood in their entirety, as individual parts may constantly change.
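A reduced sketch of the drift process described above: neurons are exchanged between assemblies in pairs by a random-walk-like process, so individual membership decays toward chance while the representational structure (the number and sizes of assemblies) is exactly conserved. The exchange rate and assembly sizes are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n_neurons, n_assemblies = 120, 3
labels = np.repeat(np.arange(n_assemblies), n_neurons // n_assemblies)

def drift_step(labels, p_swap=0.02):
    """Exchange neurons between assemblies in pairs, so assembly sizes
    (the representational structure) are exactly conserved."""
    labels = labels.copy()
    for a in range(n_assemblies):
        for b in range(a + 1, n_assemblies):
            ia = rng.permutation(np.where(labels == a)[0])
            ib = rng.permutation(np.where(labels == b)[0])
            k = rng.binomial(min(len(ia), len(ib)), p_swap)
            labels[ia[:k]], labels[ib[:k]] = b, a   # pairwise exchange
    return labels

initial = labels.copy()
for t in range(500):
    labels = drift_step(labels)

overlap = np.mean(labels == initial)   # decays toward chance (1/3)
print(f"overlap with initial membership: {overlap:.2f}; "
      f"assembly sizes conserved: {np.bincount(labels)}")
```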
Learning in/about/from the basal ganglia
The basal ganglia are a collection of brain areas that are connected by a variety of synaptic pathways and are a site of significant reward-related dopamine release. These properties suggest a possible role for the basal ganglia in action selection, guided by reinforcement learning. In this talk, I will discuss a framework for how this function might be performed, together with computational results using an upward mapping to identify putative low-dimensional control ensembles that may be involved in tuning decision policy. I will also present some recent experimental results and theory, related to the effects of extracellular ion dynamics, that run counter to the classical view of basal ganglia pathways and suggest a new interpretation of certain aspects of this framework. For those not so interested in the basal ganglia, I hope that the upward-mapping approach and the impact of extracellular ion dynamics will nonetheless be of interest!
Meta-learning synaptic plasticity and memory addressing for continual familiarity detection
Over the course of a lifetime, we process a continual stream of information. Extracted from this stream, memories must be efficiently encoded and stored in an addressable manner for retrieval. To explore potential mechanisms, we consider a familiarity detection task in which a subject reports whether an image has been previously encountered. We design a feedforward network endowed with synaptic plasticity and an addressing matrix, meta-learned to optimize familiarity detection over long intervals. We find that anti-Hebbian plasticity leads to better performance than Hebbian plasticity and replicates experimental results such as repetition suppression. A combinatorial addressing function emerges, selecting a unique neuron as an index into the synaptic memory matrix for storage or retrieval. Unlike previous models, this network operates continuously and generalizes to intervals on which it has not been trained. Our work suggests a biologically plausible mechanism for continual learning and demonstrates an effective application of machine learning for neuroscience discovery.
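A minimal sketch of the anti-Hebbian ingredient (omitting the meta-learned addressing matrix): each presentation depresses the weights driven by the pattern, so a familiar input evokes a weaker readout response, mirroring repetition suppression; a familiarity report would simply threshold this response. Dimensions, sparsity, and learning rate are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
d = 500
w = np.ones(d)   # weights onto a single familiarity readout neuron

def present(x, learn=True, eta=0.6):
    """Anti-Hebbian: responding to x depresses the active weights, so a
    repeated pattern evokes a weaker response (repetition suppression)."""
    global w
    response = w @ x / x.sum()      # mean weight over active inputs
    if learn:
        w -= eta * x * response      # anti-Hebbian depression
    return response

patterns = [(rng.random(d) < 0.05).astype(float) for _ in range(5)]
for p in patterns:
    present(p)

novel = (rng.random(d) < 0.05).astype(float)
print("familiar:", round(present(patterns[0], learn=False), 2),
      "| novel:", round(present(novel, learn=False), 2))
```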
Orbitofrontal cortex and the integrative approach to functional neuroanatomy
The project of functional neuroanatomy typically considers single brain areas as the core functional units of the brain. Functional neuroanatomists typically use specialized tasks designed to isolate hypothesized functions from other cognitive processes. Our lab takes a broader view: we consider brain regions as parts of larger circuits, and we take cognitive processes to be part of more complex behavioral repertoires. In my talk, I will discuss the ramifications of this perspective for thinking about the role of the orbitofrontal cortex (OFC). I will discuss results of recent experiments from my lab that tackle the question of OFC function within the context of larger brain networks and in freely moving foraging tasks. I will argue that this perspective challenges conventional accounts of the role of OFC and invites new ones. I will conclude by speculating on implications for the practice of functional neuroanatomy.
The wonders and complexities of brain microstructure: Enabling biomedical engineering studies combining imaging and models
Brain microstructure plays a key role in driving the transport of drug molecules administered directly to brain tissue, as in Convection-Enhanced Delivery procedures. This study reports the first systematic attempt to characterize the cytoarchitecture of commissural, long association, and projection fibers, namely the corpus callosum, the fornix, and the corona radiata. Ovine samples from three different subjects were imaged using a scanning electron microscope combined with focused ion beam milling, with particular focus on the axons. For each tract, a 3D reconstruction of relatively large volumes (including a significant number of axons) was performed, and outer axonal ellipticity, outer axonal cross-sectional area, and its relative perimeter were measured. This study [1] provides useful insight into the fibrous organization of the tissue, which can be described as a composite material presenting elliptical, tortuous, tubular fibers, leading to a workflow that enables accurate simulations of drug delivery including well-resolved microstructural features. As a demonstration of the use of these imaging and reconstruction techniques, our research analyses the hydraulic permeability of two white matter (WM) areas (corpus callosum and fornix) whose three-dimensional microstructure was reconstructed from the acquired electron microscopy images. Considering that white matter is mainly composed of elongated, parallel axons, we computed the permeability along the parallel and perpendicular directions using computational fluid dynamics [2]. The results show a statistically significant difference between parallel and perpendicular permeability, with a ratio of about 2 in both white matter structures analysed, demonstrating their anisotropic behaviour. This is in line with experimental results obtained using perfusion of brain matter [3]. Moreover, we find a significant difference between the permeability of the corpus callosum and that of the fornix, which suggests that white matter heterogeneity should also be considered when modelling drug transport in the brain. Our findings, which demonstrate and quantify the anisotropic and heterogeneous character of white matter, represent a fundamental contribution not only to drug delivery modelling but also to shedding light on interstitial transport mechanisms in the extracellular space. These and many other findings will be discussed during the talk.

References:
1. https://www.researchsquare.com/article/rs-686577/v1
2. https://www.pnas.org/content/118/36/e2105328118
3. https://ieeexplore.ieee.org/abstract/document/9198110
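For readers unfamiliar with how a permeability value is reported, the toy calculation below applies Darcy's law to hypothetical flow measurements to recover parallel and perpendicular permeabilities and their anisotropy ratio; it is not the CFD pipeline of [2], and all numbers are made up for illustration.

```python
import numpy as np

def darcy_permeability(Q, mu, L, A, dP):
    """Darcy's law: k = Q * mu * L / (A * dP), with Q the volumetric flow
    rate (m^3/s), mu the fluid viscosity (Pa s), L the sample length (m),
    A the cross-section (m^2), and dP the pressure drop (Pa). Returns m^2."""
    return Q * mu * L / (A * dP)

mu = 1e-3                     # water-like interstitial fluid, Pa s (assumed)
L, A = 50e-6, (50e-6) ** 2    # micro-sample dimensions, m and m^2 (assumed)
dP = 100.0                    # applied pressure drop, Pa (assumed)

# hypothetical flow rates along vs. across the axon direction
Q_parallel, Q_perpendicular = 4.0e-16, 2.0e-16   # m^3/s (made-up values)

k_par = darcy_permeability(Q_parallel, mu, L, A, dP)
k_perp = darcy_permeability(Q_perpendicular, mu, L, A, dP)
print(f"k_parallel = {k_par:.2e} m^2, k_perp = {k_perp:.2e} m^2, "
      f"anisotropy ratio = {k_par / k_perp:.1f}")
```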
Network dynamics in the basal ganglia and possible implications for Parkinson’s disease
The basal ganglia are a collection of brain areas that are connected by a variety of synaptic pathways and are a site of significant reward-related dopamine release. These properties suggest a possible role for the basal ganglia in action selection, guided by reinforcement learning. In this talk, I will discuss a framework for how this function might be performed. I will also present some recent experimental results and theory that call for a re-evaluation of certain aspects of this framework. Next, I will turn to the changes in basal ganglia activity observed to occur with the dopamine depletion associated with Parkinson's disease. I will discuss potential functional implications of these changes and, if time permits, will conclude with some new results on delta oscillations under dopamine depletion.
A theory for Hebbian learning in recurrent E-I networks
The Stabilized Supralinear Network is a model of recurrently connected excitatory (E) and inhibitory (I) neurons with a supralinear input-output relation. It can explain cortical computations such as response normalization and inhibitory stabilization. However, the network's connectivity is designed by hand, based on experimental measurements. How the recurrent synaptic weights can be learned from the sensory input statistics in a biologically plausible way is unknown. Earlier theoretical work on plasticity focused on single neurons and the balance of excitation and inhibition but did not consider the simultaneous plasticity of recurrent synapses and the formation of receptive fields. Here we present a recurrent E-I network model in which all synaptic connections are simultaneously plastic and E neurons self-stabilize by recruiting co-tuned inhibition. Motivated by experimental results, we employ a local Hebbian plasticity rule with multiplicative normalization for E and I synapses. We develop a theoretical framework that explains how plasticity enables inhibition-balanced excitatory receptive fields that match experimental results. We show analytically that sufficiently strong inhibition allows neurons' receptive fields to decorrelate and distribute themselves across the stimulus space. For strong recurrent excitation, the network becomes stabilized by inhibition, which prevents unconstrained self-excitation. In this regime, external inputs integrate sublinearly. As in the Stabilized Supralinear Network, this results in response normalization and winner-takes-all dynamics: when two competing stimuli are presented, the network response is dominated by the stronger stimulus while the weaker stimulus is suppressed. In summary, we present a biologically plausible theoretical framework for modeling plasticity in fully plastic recurrent E-I networks. While the connectivity is derived from the sensory input statistics, the circuit performs meaningful computations. Our work provides a mathematical framework for plasticity in recurrent networks, which has previously been studied only numerically, and can serve as the basis for a new generation of brain-inspired unsupervised machine learning algorithms.
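A minimal sketch of the plasticity rule at the core of the model: a local Hebbian update followed by multiplicative normalization, with a single global inhibitory signal standing in for the co-tuned inhibitory population. With this competition, neurons tend to specialize on different stimuli. Network sizes, rates, and the inhibition stand-in are illustrative assumptions, not the full E-I model.

```python
import numpy as np

rng = np.random.default_rng(5)
n_in, n_e = 50, 10
W = rng.random((n_e, n_in))
W /= W.sum(axis=1, keepdims=True)   # unit total input weight per neuron

def hebb_norm_step(x, eta=0.05):
    """Local Hebbian update followed by multiplicative normalization, which
    holds each neuron's summed input weight constant so that receptive
    fields compete for the stimulus space."""
    global W
    drive = W @ x
    r = np.maximum(0.0, drive - drive.mean())  # global inhibition stand-in
    W += eta * np.outer(r, x)                  # Hebbian: post x pre
    W /= W.sum(axis=1, keepdims=True)          # multiplicative normalization

# inputs alternate between two stimulus patterns; neurons tend to specialize
basis = np.abs(rng.normal(size=(2, n_in)))
for t in range(2000):
    x = basis[rng.integers(2)] + 0.1 * np.abs(rng.normal(size=n_in))
    hebb_norm_step(x)

pref = np.argmax(W @ basis.T, axis=1)
print("neurons preferring stimulus 0 vs 1:", np.bincount(pref, minlength=2))
```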
Neural circuit parameter variability, robustness, and homeostasis
Neurons and neural circuits can produce stereotyped and reliable output activity on the basis of highly variable cellular, synaptic, and circuit properties. This is crucial for proper nervous system function throughout an animal's life in the face of growth, perturbations, and molecular turnover. But how can reliable output arise from neurons and synapses whose parameters vary between individuals in a population, and within an individual over time? I will review how a combination of experimental and computational methods can be used to examine how neuron and network function depends on the underlying parameters, such as neuronal membrane conductances and synaptic strengths. Within the high-dimensional parameter space of a neural system, the subset of parameter combinations that produce biologically functional neuron or circuit activity is captured by the notion of a ‘solution space’. I will describe solution space structures determined from electrophysiology data, ion channel expression levels across populations of neurons and animals, and computational parameter space explorations. A key finding centers on experimental and computational evidence for parameter correlations that give structure to solution spaces. Computational modeling suggests that such parameter correlations can be beneficial for constraining neuron and circuit properties to functional regimes, while experimental results indicate that neural circuits may have evolved to implement some of these beneficial parameter correlations at the cellular level. Finally, I will review modeling work and experiments that seek to illuminate how neural systems can homeostatically navigate their parameter spaces to stably remain within their solution space and reliably produce functional output, or to return to their solution space after perturbations that temporarily disrupt proper neuron or network function.
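The notion of a structured solution space can be illustrated with a toy parameter screen: sample conductance-like parameters at random, keep the "functional" sets under a stand-in criterion, and observe that the surviving sets are strongly correlated. The criterion below is invented for illustration and is not a real conductance-based model.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 100_000
# toy maximal conductances, sampled uniformly over a wide range (assumed)
g_Na, g_K, g_leak = rng.uniform(0, 10, (3, n))

# stand-in functional criterion for "produces the target activity":
# depolarizing and repolarizing drive must be balanced (ratio in a band)
functional = (g_Na / g_K > 0.8) & (g_Na / g_K < 1.25) & (g_leak < 5)

# within the solution space, g_Na and g_K are strongly correlated even
# though they were sampled independently
sols = np.stack([g_Na[functional], g_K[functional], g_leak[functional]])
corr = np.corrcoef(sols)
print(f"{functional.mean():.1%} of sampled parameter sets are functional")
print("g_Na/g_K correlation within the solution space:", round(corr[0, 1], 2))
```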
Vector addition in the navigational circuits of the fly
In a crosswind, the direction a fly moves through the air may differ from its heading direction, the direction defined by its body axis. I will present a model, based on experimental results, that reveals how a heading-direction “compass” signal is combined with optic flow to compute and represent the direction in which a fly is traveling. This provides a general framework for understanding how flies perform vector computations.
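The underlying computation is plain 2-D vector addition, as the toy example below shows: the fly's self-motion along its heading is summed with the wind vector, and the angle of the resultant is the travel direction. The angle convention and numbers are illustrative; the circuit itself is thought to represent such vectors as sinusoidal activity patterns rather than Cartesian coordinates.

```python
import numpy as np

def travel_direction(heading_deg, airspeed, wind_deg, wind_speed):
    """Add the fly's own velocity (along its heading) to the wind vector;
    the angle of the sum is the direction of travel over the ground.
    Angles in degrees, 0 = east, 90 = north (assumed convention)."""
    h = np.deg2rad(heading_deg)
    w = np.deg2rad(wind_deg)
    v = airspeed * np.array([np.cos(h), np.sin(h)]) \
        + wind_speed * np.array([np.cos(w), np.sin(w)])
    return np.rad2deg(np.arctan2(v[1], v[0])), np.linalg.norm(v)

# heading north at 1 m/s with a 0.5 m/s wind blowing toward the east
angle, speed = travel_direction(90.0, 1.0, 0.0, 0.5)
print(f"travel direction: {angle:.0f} deg at {speed:.2f} m/s")
```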
Analogies, Games and the Learning of Mathematics
Research on analogical processing and reasoning has provided strong evidence that adequate educational analogies have positive effects on the learning of mathematics. In this talk I will show some experimental results suggesting that analogies based on spatial representations might be particularly effective for improving mathematics learning. Since fostering mathematics learning also involves addressing psychosocial factors, such as the development of mathematical anxiety, providing social incentives to learn, and fostering engagement and motivation, I will argue that one area with great potential is the application of analogical research to the development of learning games for mathematics. Finally, I will show some early prototypes from an educational project devoted to developing games designed to foster the learning of early mathematics in kindergarten children.
Differential Resilience of Neurons and Networks with Similar Behavior to Perturbation
Both computational and experimental results in single neurons and small networks demonstrate that very similar network function can result from quite disparate sets of neuronal and network parameters. Using the crustacean stomatogastric nervous system, we study the influence of these differences in underlying structure on the differential resilience of individuals to a variety of environmental perturbations, including changes in temperature, pH, potassium concentration, and neuromodulation. We show that neurons with many different kinds of ion channels can smoothly move through different mechanisms in generating their activity patterns, thus extending their dynamic range.
Differential Resilience of Neurons and Networks with Similar Behavior to Perturbation. (Simultaneous translation to Spanish)
Both computational and experimental results in single neurons and small networks demonstrate that very similar network function can result from quite disparate sets of neuronal and network parameters. Using the crustacean stomatogastric nervous system, we study the influence of these differences in underlying structure on the differential resilience of individuals to a variety of environmental perturbations, including changes in temperature, pH, potassium concentration, and neuromodulation. We show that neurons with many different kinds of ion channels can smoothly move through different mechanisms in generating their activity patterns, thus extending their dynamic range. The talk will be simultaneously translated into Spanish by the interpreter Liliana Viera, MSc.