Synaptic Strength
Motor learning selectively strengthens cortical and striatal synapses of motor engram neurons
Join Us for the Memory Decoding Journal Club! A collaboration of the Carboncopies Foundation and BPF Aspirational Neuroscience. This time, we're diving into a groundbreaking paper: "Motor learning selectively strengthens cortical and striatal synapses of motor engram neurons".
Sleep deprivation and the human brain: from brain physiology to cognition
Sleep strongly affects synaptic strength, making it critical for cognition, especially learning and memory formation. Whether and how sleep deprivation modulates human brain physiology and cognition is poorly understood. Here we examined how overnight sleep deprivation vs overnight sufficient sleep affects (a) cortical excitability, measured by transcranial magnetic stimulation, (b) inducibility of long-term potentiation (LTP)- and long-term depression (LTD)-like plasticity via transcranial direct current stimulation (tDCS), and (c) learning, memory, and attention. We found that sleep deprivation increases cortical excitability due to enhanced glutamate-related cortical facilitation and decreases and/or reverses GABAergic cortical inhibition. Furthermore, under sleep deprivation, tDCS-induced LTP-like plasticity (anodal) is abolished, while the inhibitory LTD-like plasticity (cathodal) converts to excitatory LTP-like plasticity. This is associated with increased EEG theta oscillations due to sleep pressure. Motor learning (the behavioral counterpart of plasticity), as well as working memory and attention, which rely on cortical excitability, are also impaired during sleep deprivation. Our study indicates that upscaled brain excitability and altered plasticity due to sleep deprivation are associated with impaired cognitive performance. Besides showing how brain physiology and cognition change under sleep pressure (from neurophysiology to higher-order cognition), the findings have implications for the variability and optimal application of noninvasive brain stimulation.
A Game Theoretical Framework for Quantifying Causes in Neural Networks
Which nodes in a brain network causally influence one another, and how do such interactions use the underlying structural connectivity? One of the fundamental goals of neuroscience is to pinpoint such causal relations. Conventionally, these relationships are established by manipulating a node while tracking changes in another node: a causal role is assigned to the first node if the intervention leads to a significant change in the state of the tracked node. In this presentation, I use a series of intuitive thought experiments to demonstrate the methodological shortcomings of this 'causation via manipulation' framework. A node might causally influence another node, but by how much, and through which mechanistic interactions? Establishing a causal relationship, however reliable, therefore does not by itself provide a proper causal understanding of the system, because there is often a wide range of causal influences that need to be adequately decomposed. To do so, I introduce a game-theoretical framework called Multi-perturbation Shapley value Analysis (MSA). I then present our work in which we employed MSA on an Echo State Network (ESN), quantified how much its nodes influence one another, and compared these measures with the underlying synaptic strengths. We found that: (1) even though the network itself was sparse, every node could causally influence other nodes, so a mere elucidation of causal relationships did not provide useful information; and (2) full knowledge of the structural connectome did not provide a complete causal picture of the system either, since nodes frequently influenced each other indirectly, that is, via intermediate nodes. Our results show that merely elucidating causal contributions in complex networks such as the brain is not sufficient to draw mechanistic conclusions; quantifying causal interactions requires a systematic and extensive manipulation framework. The framework put forward here benefits from employing neural network models and, in turn, provides explainability for them.
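Concretely, MSA assigns each node a Shapley value: its average marginal effect on a readout, taken over orderings in which nodes are restored to a lesioned network. Below is a minimal sketch (not the authors' code) on a toy recurrent network; the network, dynamics, readout, and permutation-sampling scheme are all illustrative assumptions.

```python
# Minimal MSA sketch (not the authors' code): Shapley-value contributions of
# nodes 1..4 to the activity of node 0 in a toy recurrent network, estimated
# by lesioning nodes over random orderings.
import numpy as np

rng = np.random.default_rng(0)
n = 5
W = rng.normal(0.0, 0.5, (n, n))        # toy "synaptic" weights
drive = rng.normal(0.0, 1.0, n)         # external input to each node

def readout(active):
    """Activity of node 0 after a few update steps, with every node
    outside `active` lesioned (clamped to zero)."""
    mask = np.zeros(n)
    mask[list(active)] = 1.0
    x = np.zeros(n)
    for _ in range(20):
        x = np.tanh(W @ x + drive) * mask
    return x[0]

players = list(range(1, n))
shapley = {p: 0.0 for p in players}
n_perms = 500                           # sampled orderings (exact needs only 4! here)
for _ in range(n_perms):
    coalition = {0}                     # the target node is always intact
    v_prev = readout(coalition)
    for p in rng.permutation(players):
        coalition.add(int(p))
        v_new = readout(coalition)
        shapley[int(p)] += (v_new - v_prev) / n_perms
        v_prev = v_new

for p, phi in shapley.items():
    print(f"node {p}: estimated causal contribution to node 0 = {phi:+.3f}")
```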
Input and target-selective plasticity in sensory neocortex during learning
Behavioral experience shapes neural circuits, adding and subtracting connections between neurons that will ultimately control sensation and perception. We are using natural sensory experience to uncover basic principles of information processing in the cerebral cortex, with a focus on how sensory learning can selectively alter synaptic strength. I will discuss recent findings that differentiate reinforcement learning from sensory experience, showing rapid and selective plasticity of thalamic and inhibitory synapses within primary sensory cortex.
NMC4 Short Talk: Systematic exploration of neuron type differences in standard plasticity protocols employing a novel pathway-based plasticity rule
Spike-timing-dependent plasticity (STDP) is thought to modulate synaptic strength depending on the timing of pre- and postsynaptic spikes. Physiological experiments have identified a variety of temporal kernels: Hebbian, anti-Hebbian, and symmetrical LTP/LTD. In this work we present a novel plasticity model, the Voltage-Dependent Pathway Model (VDP), which is able to replicate those distinct kernel types and intermediate versions with varying LTP/LTD ratios and symmetry features. In addition, unlike previous models, it retains these characteristics across different neuron models, which allows for comparison of plasticity in different neuron types. The plastic updates depend on the relative strength and activation of separately modeled LTP and LTD pathways, which are modulated by glutamate release and postsynaptic voltage. We used the 15 neuron-type parametrizations of the GLIF5 model presented by Teeter et al. (2018) in combination with the VDP to simulate a range of standard plasticity protocols, including standard STDP experiments, frequency-dependence experiments, and low-frequency stimulation protocols. Slight variations in kernel stability and frequency effects can be identified between the neuron types, suggesting that the neuron type may affect the effective learning rule. This plasticity model occupies a middle ground between biophysical and phenomenological models: it can be combined with more complex, biophysical neuron models, yet it is computationally efficient enough to be used in network simulations. It therefore offers the possibility to explore the functional role of the different kernel types and electrophysiological differences in heterogeneous networks in future work.
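For orientation, the familiar pair-based STDP kernel that these protocols map out can be sketched in a few lines; the amplitudes and time constants are illustrative assumptions, and the VDP's pathway dynamics themselves are not reproduced here.

```python
# Minimal pair-based STDP kernel sketch; time constants and amplitudes are
# illustrative assumptions. The VDP model in the abstract replaces this
# phenomenological kernel with LTP/LTD pathways gated by glutamate release
# and postsynaptic voltage.
import numpy as np

def stdp_dw(dt_ms, A_plus=0.01, A_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change for one pre/post spike pair.
    dt_ms = t_post - t_pre. Positive dt (pre before post) -> LTP (Hebbian)."""
    if dt_ms >= 0:
        return A_plus * np.exp(-dt_ms / tau_plus)    # LTP branch
    return -A_minus * np.exp(dt_ms / tau_minus)      # LTD branch

# Flipping the signs of A_plus/A_minus gives an anti-Hebbian kernel; giving
# both branches the same sign gives a symmetric LTP- or LTD-only window.
for dt in (-40, -10, 0, 10, 40):
    print(f"dt = {dt:+d} ms -> dw = {stdp_dw(dt):+.5f}")
```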
Synaptic plasticity controls the emergence of population-wide invariant representations in balanced network models
The intensity and features of sensory stimuli are encoded in the activity of neurons in the cortex. In the visual and piriform cortices, the stimulus intensity re-scales the activity of the population without changing its selectivity for the stimulus features; the cortical representation of the stimulus is therefore intensity-invariant. The emergence of such invariant network representations appears robust to local changes in synaptic strength induced by synaptic plasticity, even though: i) synaptic plasticity can potentiate or depress connections between neurons in a feature-dependent manner, and ii) in networks with balanced excitation and inhibition, synaptic plasticity determines the non-linear network behavior. In this study, we investigate the consistency of invariant representations with a variety of synaptic states in balanced networks. Using mean-field models and spiking network simulations, we show how the synaptic state controls the emergence of intensity-invariant or intensity-dependent selectivity by inducing changes in the network response to intensity. In particular, we demonstrate how facilitating synaptic states can sharpen the network selectivity while depressing states broaden it. We also show how power-law-type synapses permit the emergence of invariant network selectivity and how this plasticity can be generated by a mix of different plasticity rules. Our results explain how the physiology of individual synapses is linked to the emergence of invariant representations of sensory stimuli at the network level.
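As a concrete statement of the criterion at stake, the short sketch below (our illustration, with toy response models as assumptions) checks whether stimulus intensity merely rescales a population response vector or also rotates it; only the former counts as intensity-invariant.

```python
# Small sketch (our illustration) of the invariance criterion: intensity
# should rescale the population response vector without rotating it.
# The two toy response models below are assumptions.
import numpy as np

rng = np.random.default_rng(5)
pattern = rng.uniform(0.0, 1.0, 200)     # feature selectivity of 200 neurons

def response(intensity, invariant):
    if invariant:
        return intensity * pattern       # pure rescaling: selectivity preserved
    return pattern ** intensity          # intensity warps the selectivity

for invariant in (True, False):
    r_lo, r_hi = response(1.0, invariant), response(3.0, invariant)
    cos = r_lo @ r_hi / (np.linalg.norm(r_lo) * np.linalg.norm(r_hi))
    print(f"invariant={invariant}: cosine similarity across intensities = {cos:.3f}")
```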
Habenular synaptic strength and neuronal dynamics for approach-avoidance behaviours
Hebbian learning, its inference, and brain oscillation
Despite the recent success of deep learning in artificial intelligence, the lack of biological plausibility and of labeled data in natural learning still poses a challenge to understanding biological learning. At the other extreme lies Hebbian learning, the simplest local and unsupervised rule, yet one considered computationally less efficient. In this talk, I will introduce a novel method to infer the form of Hebbian learning from in vivo data. Applying the method to data obtained from the monkey inferior temporal cortex during a recognition task indicates how Hebbian learning changes the dynamic properties of the circuits and may promote brain oscillation. Notably, recent electrophysiological data from rodent V1 showed that the effect of visual experience on direction selectivity was similar to that observed in the monkey data, providing strong validation of the asymmetric changes in feedforward and recurrent synaptic strengths inferred from the monkey data. This may suggest a general learning principle underlying the same computation, such as familiarity detection, across different features represented in different brain regions.
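For readers unfamiliar with the baseline rule being inferred, here is a minimal sketch of plain Hebbian learning; the stimuli, learning rate, and normalization are illustrative assumptions, and this does not reproduce the inference method or the monkey IT data.

```python
# Plain Hebbian rule sketch: repeated presentation of one stimulus strengthens
# the weights onto the neurons it drives, so responses become selectively
# amplified for the familiar input. All parameters are assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out = 50, 10
W = rng.normal(0, 0.1, (n_out, n_in))        # feedforward weights
familiar = rng.normal(0, 1, n_in)            # repeatedly presented stimulus
novel = rng.normal(0, 1, n_in)               # unseen control stimulus

eta = 0.01
for _ in range(200):
    post = np.maximum(W @ familiar, 0)       # rectified responses
    W += eta * np.outer(post, familiar)      # Hebb: co-activity strengthens weights
    W /= max(1.0, np.linalg.norm(W))         # crude normalization keeps W bounded

print("summed response to familiar:", np.maximum(W @ familiar, 0).sum().round(2))
print("summed response to novel:   ", np.maximum(W @ novel, 0).sum().round(2))
```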
How the immune system shapes synaptic functions
The synapse is the core component of the nervous system, and synapse formation is the critical step in the assembly of neuronal circuits. The assembly and maturation of synapses require the contribution of secreted and membrane-associated proteins, with neuronal activity playing crucial roles in regulating synaptic strength, neuronal membrane properties, and neural circuit refinement. The molecular mechanisms of synapse assembly and refinement have so far been examined largely on a gene-by-gene basis and with a perspective fully centered on neuronal cells. In recent years, however, the involvement of non-neuronal cells has emerged. Among these, microglia, the resident immune cells of the central nervous system, have been shown to play a key role in synapse formation and elimination. Contacts of microglia with dendrites in the somatosensory cortex were found to induce filopodia and dendritic spines via Ca2+- and actin-dependent processes, while microglia-derived BDNF was shown to promote learning-dependent synapse formation. Microglia are also recognized to have a central role in the widespread elimination (or pruning) of exuberant synaptic connections during development. Clarifying the processes by which microglia control synapse homeostasis is essential to advance our current understanding of brain functions. Clear answers to these questions will have important implications for our understanding of brain diseases, as it is now widely recognized that many psychiatric and neurological disorders are synaptopathies (i.e., diseases of the synapse). In recent years, my group has identified TREM2, an innate immune receptor with phagocytic and anti-inflammatory properties expressed in the brain exclusively by microglia, as essential for microglia-mediated synaptic refinement during the early stages of brain development. The talk will describe the role of TREM2 in synapse elimination and introduce the molecular actors involved. I will also describe additional pathways by which the immune system may affect the formation and homeostasis of synaptic contacts.
Neural circuit parameter variability, robustness, and homeostasis
Neurons and neural circuits can produce stereotyped and reliable output activity on the basis of highly variable cellular, synaptic, and circuit properties. This is crucial for proper nervous system function throughout an animal's life in the face of growth, perturbations, and molecular turnover. But how can reliable output arise from neurons and synapses whose parameters vary between individuals in a population, and within an individual over time? I will review how a combination of experimental and computational methods can be used to examine how neuron and network function depends on the underlying parameters, such as neuronal membrane conductances and synaptic strengths. Within the high-dimensional parameter space of a neural system, the subset of parameter combinations that produce biologically functional neuron or circuit activity is captured by the notion of a 'solution space'. I will describe solution space structures determined from electrophysiology data, ion channel expression levels across populations of neurons and animals, and computational parameter space explorations. A key finding centers on experimental and computational evidence for parameter correlations that give structure to solution spaces. Computational modeling suggests that such parameter correlations can be beneficial for constraining neuron and circuit properties to functional regimes, while experimental results indicate that neural circuits may have evolved to implement some of these beneficial parameter correlations at the cellular level. Finally, I will review modeling work and experiments that seek to illuminate how neural systems can homeostatically navigate their parameter spaces to stably remain within their solution space and reliably produce functional output, or to return to their solution space after perturbations that temporarily disrupt proper neuron or network function.
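A stripped-down version of such a parameter space exploration can be sketched with an analytically solvable leaky integrate-and-fire neuron: sample parameters, keep the combinations that produce "functional" output, and inspect the structure of the surviving solution space. The model, units, sampled ranges, and target band below are illustrative assumptions.

```python
# Sketch of a parameter-space exploration with an analytically solvable
# leaky integrate-and-fire neuron. Units are arbitrary; the sampled ranges
# and the 1-2 spikes/unit-time "functional" band are assumptions.
import numpy as np

rng = np.random.default_rng(2)
C, V_th = 1.0, 1.0                          # capacitance, spike threshold (a.u.)

def lif_rate(g_leak, I):
    """Firing rate of a LIF neuron under constant drive I (reset to 0)."""
    if I <= g_leak * V_th:
        return 0.0                          # subthreshold: never reaches threshold
    tau = C / g_leak
    return 1.0 / (tau * np.log(I / (I - g_leak * V_th)))

samples = rng.uniform([0.05, 0.5], [0.5, 3.0], size=(20000, 2))
solutions = np.array([s for s in samples if 1.0 < lif_rate(*s) < 2.0])

print(f"{len(solutions)} of {len(samples)} sampled (g_leak, I) pairs are functional")
# The solution space is structured: inside it, leak and drive co-vary,
# echoing the parameter correlations discussed in the talk.
print("corr(g_leak, I) within the solution space:",
      round(float(np.corrcoef(solutions[:, 0], solutions[:, 1])[0, 1]), 2))
```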
The emergence of contrast invariance in cortical circuits
Neurons in the primary visual cortex (V1) encode the orientation and contrast of visual stimuli through changes in firing rate (Hubel and Wiesel, 1962). Their activity typically peaks at a preferred orientation and decays to zero at the orientations orthogonal to the preferred one. This activity pattern is re-scaled by contrast but its shape is preserved, a phenomenon known as contrast invariance. Contrast-invariant selectivity is also observed at the population level in V1 (Carandini and Sengpiel, 2004). The mechanisms supporting the emergence of contrast invariance at the population level remain unclear. How does the activity of different neurons with diverse orientation selectivity and non-linear contrast sensitivity combine to give rise to contrast-invariant population selectivity? Theoretical studies have shown that in the balanced limit, the properties of single neurons do not determine the population activity (van Vreeswijk and Sompolinsky, 1996). Instead, the synaptic dynamics (Mongillo et al., 2012) as well as the intracortical connectivity (Rosenbaum and Doiron, 2014) shape the population activity in balanced networks. We report that short-term plasticity can change the synaptic strength between neurons as a function of the presynaptic activity, which in turn modifies the population response to a stimulus. Thus, the same circuit can process a stimulus in different ways (linearly, sublinearly, supralinearly) depending on the properties of the synapses. We found that balanced networks with excitatory-to-excitatory short-term synaptic plasticity cannot be contrast-invariant. Instead, short-term plasticity modifies the network selectivity such that the tuning curves are narrower (broader) for increasing contrast if synapses are facilitating (depressing). Based on these results, we wondered whether balanced networks with plastic synapses (other than short-term) can support the emergence of contrast-invariant selectivity. Mathematically, we found that the only synaptic transformation that supports perfect contrast invariance in balanced networks is a power-law release of neurotransmitter as a function of the presynaptic firing rate (at excitatory-to-excitatory and excitatory-to-inhibitory synapses). We validated this finding using spiking network simulations, in which we observe contrast-invariant tuning curves when synapses release neurotransmitter following a power-law function of the presynaptic firing rate. In summary, we show that synaptic plasticity controls the type of non-linear network response to stimulus contrast and that it can be a potential mechanism mediating the emergence of contrast invariance in balanced networks with orientation-dependent connectivity. Our results therefore connect the physiology of individual synapses to the network level and may help understand the establishment of contrast-invariant selectivity.
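The power-law claim can be checked numerically with a toy mean-field calculation (our sketch, not the authors' code): in the balanced limit the recurrent input must cancel the feedforward drive, so the synaptic output f(r) inherits the contrast scaling, and only f(r) = r^a turns that into a purely multiplicative rescaling of the rates. The connectivity, feedforward tuning, and exponent below are assumptions.

```python
# Numeric check (our sketch): in the balanced limit, W f(r) + c*g = 0, so
# the synaptic output must satisfy f(r) = c*v with v = -W^{-1} g. Only
# f(r) = r**a then yields a multiplicative rescaling of the rates, i.e.
# contrast-invariant tuning. W, g, and a are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
n = 90
theta = np.linspace(-np.pi / 2, np.pi / 2, n)
g = 1.5 + np.cos(2 * theta)                         # feedforward orientation tuning
W = -(2.0 * np.eye(n) + 0.01 * rng.random((n, n)))  # inhibition-dominated recurrence
v = np.linalg.solve(W, -g)                          # balance condition: f(r) = c * v

def rates(c, f_inv):
    return f_inv(np.clip(c * v, 1e-12, None))

a = 2.0
power_inv = lambda u: u ** (1.0 / a)                # inverse of f(r) = r**a
other_inv = lambda u: np.log1p(u)                   # inverse of f(r) = exp(r) - 1

for name, f_inv in [("power-law synapse", power_inv),
                    ("non-power-law synapse", other_inv)]:
    r1, r4 = rates(1.0, f_inv), rates(4.0, f_inv)
    shape_change = np.max(np.abs(r1 / r1.max() - r4 / r4.max()))
    print(f"{name}: max change in normalized tuning across contrasts = {shape_change:.4f}")
```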
Biophysics of Structural Plasticity in Postsynaptic Spines
The ability of the brain to encode and store information depends on the plastic nature of individual synapses. The increase and decrease in synaptic strength, mediated through the structural plasticity of the spine, are important for learning, memory, and cognitive function. Dendritic spines are small structures protruding from the dendrite that contain the synapse. They come in a variety of shapes (stubby, thin, or mushroom-shaped) and a wide range of sizes. These spines are the regions where the postsynaptic biochemical machinery responds to the neurotransmitters. Spines are dynamic structures, changing in size, shape, and number during development and aging. While spines and synapses have inspired neuromorphic engineering, the biophysical events underlying synaptic and structural plasticity of single spines remain poorly understood. Our current focus is on understanding the biophysical events underlying structural plasticity. I will discuss recent efforts from my group: first, a systems biology approach to construct a mathematical model of biochemical signaling and actin-mediated transient spine expansion in response to calcium influx caused by NMDA receptor activation, along with a series of spatial models to study the role of spine geometry and organelle location within the spine for calcium and cyclic AMP signaling. Second, I will discuss how the mechanics of membrane-cytoskeleton interactions can give insight into spine shape. Third, I will describe new efforts in using reconstructions from electron microscopy to inform computational domains. I will conclude with how geometry and mechanics play an important role in our understanding of fundamental biological phenomena, and with some general ideas on bio-inspired engineering.
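To give a flavor of this modeling approach, here is a deliberately minimal ODE sketch in which an NMDAR-like calcium pulse drives a downstream effector that transiently expands spine volume; the two-variable reduction and all rate constants are illustrative assumptions, not the group's published model.

```python
# Deliberately minimal ODE sketch (not the group's model): an NMDAR-like
# calcium pulse drives a downstream effector that transiently expands
# spine volume before it relaxes back to baseline.
import numpy as np

dt, T = 1e-3, 5.0                               # time step and duration (s)
ca, eff, vol = 0.0, 0.0, 1.0                    # calcium, effector, relative volume
peak = vol
for step in range(int(T / dt)):
    t = step * dt
    influx = 50.0 if 0.1 < t < 0.2 else 0.0     # brief NMDAR-like Ca2+ influx
    ca += dt * (influx - ca / 0.05)             # fast Ca2+ clearance (tau = 50 ms)
    eff += dt * (ca - eff / 0.5)                # slower effector (tau = 500 ms)
    vol += dt * (0.5 * eff - (vol - 1.0) / 2.0) # expansion relaxing to baseline
    peak = max(peak, vol)

print(f"peak relative spine volume: {peak:.2f} (baseline 1.0)")
```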
Dynamic computation in the retina by retuning of neurons and synapses
How does a circuit of neurons process sensory information? And how are transformations of neural signals altered by changes in synaptic strength? We investigate these questions in the context of the visual system and the lateral line of fish. A distinguishing feature of our approach is the imaging of activity across populations of synapses, the fundamental elements of signal transfer within all brain circuits. A guiding hypothesis is that the plasticity of neurotransmission plays a major part in controlling the input-output relation of sensory circuits, regulating the tuning and sensitivity of neurons to allow adaptation or sensitization to particular features of the input. Sensory systems continuously adjust their input-output relation according to the recent history of the stimulus. A common alteration is a decrease in the gain of the response to a constant feature of the input, termed adaptation. For instance, in the retina, many of the ganglion cells (RGCs) providing the output produce their strongest responses just after the temporal contrast of the stimulus increases, but the response declines if this input is maintained. The advantage of adaptation is that it prevents saturation of the response to strong stimuli and allows for continued signaling of future increases in stimulus strength. But adaptation comes at a cost: a reduced sensitivity to a future decrease in stimulus strength. The retina compensates for this loss of information through an intriguing strategy: while some RGCs adapt following a strong stimulus, a second population gradually becomes sensitized. We found that the underlying circuit mechanisms involve two opposing forms of synaptic plasticity in bipolar cells: synaptic depression causes adaptation, and facilitation causes sensitization. Facilitation is in turn caused by depression in inhibitory synapses providing negative feedback. These opposing forms of plasticity can cause simultaneous increases and decreases in the contrast-sensitivity of different RGCs, which suggests a general framework for understanding the function of sensory circuits: plasticity of both excitatory and inhibitory synapses controls dynamic changes in tuning and gain.
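The two opposing forms of short-term plasticity can be illustrated with a standard Tsodyks-Markram-style synapse model (our choice for illustration; the abstract does not specify this model), where depression depletes available vesicles under sustained drive and facilitation transiently boosts release probability.

```python
# Tsodyks-Markram-style synapse sketch (illustrative parameters):
# x = fraction of available vesicles, u = release probability.
import numpy as np

def run_synapse(spike_times, U, tau_rec, tau_fac, T=1.0, dt=1e-4):
    """Return the per-spike release (u*x) for one parameter set."""
    x, u = 1.0, U
    releases, next_spike = [], 0
    for step in range(int(T / dt)):
        t = step * dt
        x += dt * (1.0 - x) / tau_rec            # vesicle recovery
        u += dt * (U - u) / tau_fac              # facilitation decay
        if next_spike < len(spike_times) and t >= spike_times[next_spike]:
            u += U * (1.0 - u)                   # facilitation jump at a spike
            releases.append(u * x)               # transmitter released
            x -= u * x                           # vesicle depletion
            next_spike += 1
    return releases

spikes = [0.1 + 0.02 * k for k in range(10)]     # 50 Hz train
dep = run_synapse(spikes, U=0.5, tau_rec=0.8, tau_fac=0.01)   # depressing regime
fac = run_synapse(spikes, U=0.1, tau_rec=0.1, tau_fac=0.5)    # facilitating regime
print("depressing synapse, 1st vs last release: ", round(dep[0], 3), round(dep[-1], 3))
print("facilitating synapse, 1st vs last release:", round(fac[0], 3), round(fac[-1], 3))
```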
Neuronal morphology imposes a tradeoff between stability, accuracy and efficiency of synaptic scaling
Synaptic scaling is a homeostatic normalization mechanism that preserves relative synaptic strengths by adjusting them with a common factor. This multiplicative change is believed to be critical, since synaptic strengths are involved in learning and memory retention. Further, this homeostatic process is thought to be crucial for neuronal stability, playing a stabilizing role in otherwise runaway Hebbian plasticity [1-3]. Synaptic scaling requires a mechanism to sense total neuron activity and globally adjust synapses to achieve some activity set-point [4]. This process is relatively slow, which places limits on its ability to stabilize network activity [5]. Here we show that this slow response is inevitable in realistic neuronal morphologies. Furthermore, we reveal that global scaling can in fact be a source of instability unless responsiveness or scaling accuracy are sacrificed. A neuron with tens of thousands of synapses must regulate its own excitability to compensate for changes in input. The time requirement for global feedback can introduce critical phase lags in a neuron's response to perturbation. The severity of the phase lag increases with neuron size. Further, a more expansive morphology worsens cell responsiveness and scaling accuracy, especially in distal regions of the neuron. Local pools of reserve receptors improve efficiency, potentiation, and scaling, but this comes at a cost: trafficking large quantities of receptors requires time, exacerbating the phase lag and instability. Local homeostatic feedback mitigates instability, but this too comes at the cost of reduced scaling accuracy. Realization of the phase-lag instability requires a unified model of synaptic scaling, regulation, and transport. We present such a model with global and local feedback in realistic neuron morphologies (Fig. 1). This combined model shows that neurons face a tradeoff between stability, accuracy, and efficiency. Global feedback is required for synaptic scaling but favors either system stability or efficiency. Large receptor pools improve scaling accuracy in large morphologies but worsen both stability and efficiency. Local feedback improves the stability-efficiency tradeoff at the cost of scaling accuracy. This project introduces unexplored constraints on neuron size, morphology, and synaptic scaling that are weakened by an interplay between global and local feedback.
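The phase-lag argument can be caricatured in a few lines (a toy sketch, not the authors' unified model): multiplicative scaling driven by a delayed activity sensor, where the delay stands in for slow somatic sensing and receptor trafficking. The gain, delay, and clipping bounds are assumptions; with delay = 0 the loop settles smoothly, while a long lag makes it overshoot and oscillate.

```python
# Toy sketch of the phase-lag instability: multiplicative synaptic scaling
# driven by a *delayed* global activity sensor. All parameters are assumptions.
import numpy as np

rng = np.random.default_rng(4)
w = rng.uniform(0.5, 1.5, 1000)             # synaptic weights
drive = rng.uniform(0.0, 1.0, 1000)         # presynaptic input (fixed)
target = 100.0                              # activity set-point
w *= target / (w @ drive)                   # start exactly at the set-point
gain, delay = 0.3, 20                       # feedback gain, sensing lag (steps)
ratio0 = w[0] / w[1]                        # a relative strength to track

history = []
for step in range(400):
    activity = float(w @ drive)
    history.append(activity)
    sensed = history[max(0, step - delay)]  # the sensor reports stale activity
    factor = np.clip(1.0 + gain * (target - sensed) / target, 0.5, 1.5)
    w *= factor                             # one multiplicative scaling step
    if step == 100:
        drive *= 0.5                        # perturbation: inputs drop by half

print("activity every 25 steps after the perturbation:",
      [round(a) for a in history[100::25]])
print("relative strengths preserved by scaling:", np.isclose(w[0] / w[1], ratio0))
```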
Synaptic, cellular, and circuit mechanisms for learning: insights from electric fish
Understanding learning in neural circuits requires answering a number of difficult questions: (1) What is the computation being performed, and what is its behavioral significance? (2) What are the inputs required for the computation, and how are they represented at the level of spikes? (3) What are the sites and rules governing plasticity, i.e., how do pre- and postsynaptic activity patterns produce persistent changes in synaptic strength? (4) How do network connectivity and dynamics shape the computation being performed? I will discuss joint experimental and theoretical work addressing these questions in the context of the electrosensory lobe (ELL) of weakly electric mormyrid fish.
Antiepileptic medication is associated with excitatory synaptic strengthening in pyramidal neurons of the adult human neocortex
FENS Forum 2024