Synaptic Weights

Topic spotlight · World Wide

Discover seminars, jobs, and research tagged with synaptic weights across World Wide.
13 curated items · 10 Seminars · 3 ePosters
Updated about 3 years ago
Seminar · Neuroscience · Recording

Behavioral Timescale Synaptic Plasticity (BTSP) for biologically plausible credit assignment across multiple layers via top-down gating of dendritic plasticity

A. Galloni
Rutgers
Nov 8, 2022

A central problem in biological learning is how information about the outcome of a decision or behavior can be used to reliably guide learning across distributed neural circuits while obeying biological constraints. This “credit assignment” problem is commonly solved in artificial neural networks through supervised gradient descent and the backpropagation algorithm. In contrast, biological learning is typically modelled using unsupervised Hebbian learning rules. While these rules only use local information to update synaptic weights, and are sometimes combined with weight constraints to reflect a diversity of excitatory (only positive weights) and inhibitory (only negative weights) cell types, they do not prescribe a clear mechanism for how to coordinate learning across multiple layers and propagate error information accurately across the network. In recent years, several groups have drawn inspiration from the known dendritic non-linearities of pyramidal neurons to propose new learning rules and network architectures that enable biologically plausible multi-layer learning by processing error information in segregated dendrites. Meanwhile, recent experimental results from the hippocampus have revealed a new form of plasticity—Behavioral Timescale Synaptic Plasticity (BTSP)—in which large dendritic depolarizations rapidly reshape synaptic weights and stimulus selectivity with as little as a single stimulus presentation (“one-shot learning”). Here we explore the implications of this new learning rule through a biologically plausible implementation in a rate neuron network. We demonstrate that regulation of dendritic spiking and BTSP by top-down feedback signals can effectively coordinate plasticity across multiple network layers in a simple pattern recognition task. By analyzing hidden feature representations and weight trajectories during learning, we show the differences between networks trained with standard backpropagation, Hebbian learning rules, and BTSP.
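
A minimal sketch of the gating idea described above, in plain NumPy: hidden units whose dendrites receive a top-down plateau signal copy the current input pattern into their weights in a single step. The network size, the input normalization, and the exact update rule are illustrative assumptions, not the model used in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 20, 5
W = rng.normal(scale=0.1, size=(n_hidden, n_in))        # input -> hidden weights

def btsp_update(W, x, plateau, eta=1.0):
    """Gated one-shot update: units flagged by the top-down plateau signal
    move their weights toward the current (normalized) input pattern."""
    target = x / (np.linalg.norm(x) + 1e-12)
    W = W.copy()
    W[plateau] += eta * (target - W[plateau])           # local, feedback-gated update
    return W

x = rng.random(n_in)                                    # a single stimulus presentation
plateau = np.array([True, False, False, True, False])   # top-down dendritic gate
W = btsp_update(W, x, plateau)
print(np.round(W[0], 2))                                # unit 0 now encodes the stimulus
```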

Seminar · Neuroscience · Recording

Robustness in spiking networks: a geometric perspective

Christian Machens
Champalimaud Center, Lisboa
Feb 15, 2022

Neural systems are remarkably robust against various perturbations, a phenomenon that still requires a clear explanation. Here, we graphically illustrate how neural networks can become robust. We study spiking networks that generate low-dimensional representations, and we show that the neurons’ subthreshold voltages are confined to a convex region in a lower-dimensional voltage subspace, which we call a ‘bounding box.’ Any changes in network parameters (such as number of neurons, dimensionality of inputs, firing thresholds, synaptic weights, or transmission delays) can all be understood as deformations of this bounding box. Using these insights, we show that functionality is preserved as long as perturbations do not destroy the integrity of the bounding box. We suggest that the principles underlying robustness in these networks—low-dimensional representations, heterogeneity of tuning, and precise negative feedback—may be key to understanding the robustness of neural systems at the circuit level.
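
A rough sketch of the bounding-box picture under simplifying assumptions (discrete time, a greedy one-spike-per-step rule, hand-picked decoder scale; not the talk's model): each neuron's voltage is the decoder-projected readout error, and spiking whenever that projection crosses threshold keeps the error, and hence the voltages, confined.

```python
import numpy as np

rng = np.random.default_rng(1)
n, dim = 20, 2                         # neurons, signal dimensions
D = rng.normal(size=(dim, n))
D *= 0.1 / np.linalg.norm(D, axis=0)   # small, unit-direction decoding vectors
T = 0.5 * np.sum(D**2, axis=0)         # thresholds = half-widths of the box

dt, lam = 1e-3, 10.0
r = np.zeros(n)                        # filtered spike trains (readout)
max_err = 0.0

for k in range(5000):
    x = np.array([np.sin(2*np.pi*k*dt), np.cos(2*np.pi*k*dt)])  # target signal
    V = D.T @ (x - D @ r)              # voltages = projected readout error
    r += dt * (-lam * r)               # readout leak between spikes
    i = int(np.argmax(V - T))
    if V[i] > T[i]:                    # spike pushes the error back into the box
        r[i] += 1.0
    if k >= 1000:                      # ignore the initial transient
        max_err = max(max_err, float(np.linalg.norm(x - D @ r)))

print("signal amplitude 1.0, max readout error:", round(max_err, 3))
```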

Seminar · Neuroscience

Rules for distributing synaptic weights in hippocampal neurons

Yukiko Goda
Center for Brain Science, RIKEN, Japan
Jun 23, 2021

Seminar · Neuroscience · Recording

Data-driven reduction of dendritic morphologies with preserved dendro-somatic responses

Willem Wybo
Morrison lab, Forschungszentrum Jülich, Germany
Jun 9, 2021

There is little consensus on the level of spatial complexity at which dendrites operate. On the one hand, emerging evidence indicates that synapses cluster at micrometer spatial scales. On the other hand, most modelling and network studies ignore dendrites altogether. This dichotomy raises an urgent question: what is the smallest relevant spatial scale for understanding dendritic computation? We have developed a method to construct compartmental models at any level of spatial complexity. Through carefully chosen parameter fits, solvable in the least-squares sense, we obtain accurate reduced compartmental models. Thus, we are able to systematically construct passive as well as active dendrite models at varying degrees of spatial complexity. We evaluate which elements of the dendritic computational repertoire are captured by these models. We show that many canonical elements of this repertoire can be reproduced with few compartments. For instance, for a model to behave as a two-layer network, it is sufficient to fit a reduced model at the soma and at locations at the dendritic tips. In the basal dendrites of an L2/3 pyramidal model, we reproduce the backpropagation of somatic action potentials (APs) with a single dendritic compartment at the tip. Further, we obtain the well-known Ca-spike coincidence detection mechanism in L5 pyramidal cells with as few as eleven compartments, the requirement being that their spacing along the apical trunk supports AP backpropagation. We also investigate whether afferent spatial connectivity motifs admit simplification by ablating targeted branches and grouping affected synapses onto the next proximal dendrite. We find that voltage in the remaining branches is reproduced if temporal conductance fluctuations stay below a limit that depends on the average difference in input resistance between the ablated branches and the next proximal dendrite. Consequently, when the average conductance load on distal synapses is constant, the dendritic tree can be simplified while appropriately decreasing synaptic weights. When the conductance level fluctuates strongly, for instance through a priori unpredictable fluctuations in NMDA activation, a constant weight rescale factor cannot be found, and the dendrite cannot be simplified. We have created an open-source Python toolbox (NEAT - https://neatdend.readthedocs.io/en/latest/) that automatises the simplification process. A NEST implementation of the reduced models, currently under construction, will enable the simulation of few-compartment models in large-scale networks, thus bridging the gap between cellular and network-level neuroscience.
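
One possible reading of the weight-rescaling argument, as a toy calculation (this is not the NEAT toolbox API, and the impedance numbers are invented): when synapses on an ablated distal branch are regrouped onto the next proximal compartment, their weights are scaled by the ratio of transfer impedances to the soma so that the somatic PSP is approximately preserved; driving-force and conductance-load effects are ignored here.

```python
def rescale_weight(w, z_old_to_soma, z_new_to_soma):
    """Keep (synaptic weight x transfer impedance to soma) constant."""
    return w * z_old_to_soma / z_new_to_soma

w_distal = 0.5       # nS, synapse on the ablated distal branch (hypothetical value)
z_distal = 15.0      # MOhm, distal-site -> soma transfer impedance (hypothetical value)
z_proximal = 40.0    # MOhm, proximal-site -> soma transfer impedance (hypothetical value)

print(rescale_weight(w_distal, z_distal, z_proximal))   # ~0.19 nS: the weight decreases
```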

Seminar · Neuroscience · Recording

A theory for Hebbian learning in recurrent E-I networks

Samuel Eckmann
Gjorgjieva lab, Max Planck Institute for Brain Research, Frankfurt, Germany
May 19, 2021

The Stabilized Supralinear Network is a model of recurrently connected excitatory (E) and inhibitory (I) neurons with a supralinear input-output relation. It can explain cortical computations such as response normalization and inhibitory stabilization. However, the network's connectivity is designed by hand, based on experimental measurements. How the recurrent synaptic weights can be learned from the sensory input statistics in a biologically plausible way is unknown. Earlier theoretical work on plasticity focused on single neurons and the balance of excitation and inhibition but did not consider the simultaneous plasticity of recurrent synapses and the formation of receptive fields. Here we present a recurrent E-I network model where all synaptic connections are simultaneously plastic, and E neurons self-stabilize by recruiting co-tuned inhibition. Motivated by experimental results, we employ a local Hebbian plasticity rule with multiplicative normalization for E and I synapses. We develop a theoretical framework that explains how plasticity enables inhibition-balanced excitatory receptive fields that match experimental results. We show analytically that sufficiently strong inhibition allows neurons' receptive fields to decorrelate and distribute themselves across the stimulus space. For strong recurrent excitation, the network becomes stabilized by inhibition, which prevents unconstrained self-excitation. In this regime, external inputs integrate sublinearly. As in the Stabilized Supralinear Network, this results in response normalization and winner-takes-all dynamics: when two competing stimuli are presented, the network response is dominated by the stronger stimulus while the weaker stimulus is suppressed. In summary, we present a biologically plausible theoretical framework to model plasticity in fully plastic recurrent E-I networks. Although the connectivity is derived purely from the sensory input statistics, the circuit performs meaningful computations. Our work provides a mathematical framework for plasticity in recurrent networks, a problem that has previously only been studied numerically, and can serve as the basis for a new generation of brain-inspired unsupervised machine learning algorithms.
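
A minimal sketch of the plasticity rule itself, leaving out the recurrent E-I circuit: a local Hebbian update applied separately to E and I synapses, followed by multiplicative normalization of each neuron's summed E and I weights. All sizes, input statistics, and the rectified-linear transfer function are assumptions for illustration, not the authors' exact model.

```python
import numpy as np

rng = np.random.default_rng(2)
n_e, n_i, n_post = 40, 10, 8
W_e = rng.random((n_post, n_e))                    # excitatory weights (kept positive)
W_i = rng.random((n_post, n_i))                    # inhibitory weights (positive, subtracted)
sum_e = W_e.sum(axis=1, keepdims=True)             # fixed per-neuron weight budgets
sum_i = W_i.sum(axis=1, keepdims=True)

patterns = rng.random((3, n_e))                    # latent input patterns (assumed)
eta = 0.01
for _ in range(2000):
    x_e = patterns[rng.integers(3)] + 0.1 * rng.random(n_e)   # correlated E input rates
    x_i = rng.random(n_i)                                     # I input rates
    r = np.maximum(W_e @ x_e - W_i @ x_i, 0.0)                # rectified postsynaptic rates
    W_e += eta * np.outer(r, x_e)                             # local Hebbian updates
    W_i += eta * np.outer(r, x_i)
    W_e *= sum_e / W_e.sum(axis=1, keepdims=True)             # multiplicative normalization
    W_i *= sum_i / W_i.sum(axis=1, keepdims=True)

print(np.round(W_e[0, :5], 3))                     # weights reorganized toward the patterns
```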

Seminar · Neuroscience

Generalizing theories of cerebellum-like learning

Ashok Litwin-Kumar
Columbia University
Mar 18, 2021

Since the theories of Marr, Ito, and Albus, the cerebellum has provided an attractive, well-characterized model system to investigate biological mechanisms of learning. In recent years, theories have been developed that provide a normative account for many features of the anatomy and function of cerebellar cortex and cerebellum-like systems, including the distribution of parallel fiber-Purkinje cell synaptic weights, the expansion in neuron number of the granule cell layer and their synaptic in-degree, and sparse coding by granule cells. Typically, these theories focus on the learning of random mappings between uncorrelated inputs and binary outputs, an assumption that may be reasonable for certain forms of associative conditioning but is also quite far from accounting for the important role the cerebellum plays in the control of smooth movements. I will discuss in-progress work with Marjorie Xie, Samuel Muscinelli, and Kameron Decker Harris generalizing these learning theories to correlated inputs and general classes of smooth input-output mappings. Our studies build on earlier work in theoretical neuroscience as well as recent advances in the kernel theory of wide neural networks. They illuminate the role of pre-expansion structures in processing input stimuli and the significance of sparse granule cell activity. If there is time, I will also discuss preliminary work with Jack Lindsey extending these theories beyond cerebellum-like structures to recurrent networks.
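
A toy version of the classical setting that the talk generalizes, with invented sizes and thresholds: a random, low-in-degree expansion from "mossy fibers" to many "granule cells" with a sparsifying threshold, followed by a ridge-regression readout trained on random binary associations. The expansion typically makes the random labels easier to read out linearly.

```python
import numpy as np

rng = np.random.default_rng(3)
n_mf, n_gc, in_degree, n_patterns = 50, 2000, 4, 400   # assumed sizes

X = rng.normal(size=(n_patterns, n_mf))            # mossy-fiber input patterns
y = rng.choice([-1.0, 1.0], size=n_patterns)       # random binary associations

J = np.zeros((n_gc, n_mf))                         # sparse expansion: in-degree 4
for g in range(n_gc):
    J[g, rng.choice(n_mf, size=in_degree, replace=False)] = rng.normal(size=in_degree)

H = np.maximum(X @ J.T - 1.0, 0.0)                 # thresholding -> sparse granule code

def readout_accuracy(F, y, ridge=1e-2):
    """Train a ridge-regression readout and report classification accuracy."""
    w = np.linalg.solve(F.T @ F + ridge * np.eye(F.shape[1]), F.T @ y)
    return float(np.mean(np.sign(F @ w) == y))

print("mossy-fiber readout :", readout_accuracy(X, y))
print("granule-cell readout:", readout_accuracy(H, y))
```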

Seminar · Neuroscience · Recording

A geometric framework to predict structure from function in neural networks

James Fitzgerald
Janelia Research Campus
Feb 2, 2021

The structural connectivity matrix of synaptic weights between neurons is a critical determinant of overall network function. However, quantitative links between neural network structure and function are complex and subtle. For example, many networks can give rise to similar functional responses, and the same network can function differently depending on context. Whether certain patterns of synaptic connectivity are required to generate specific network-level computations is largely unknown. Here we introduce a geometric framework for identifying synaptic connections required by steady-state responses in recurrent networks of rectified-linear neurons. Assuming that the number of specified response patterns does not exceed the number of input synapses, we analytically calculate all feedforward and recurrent connectivity matrices that can generate the specified responses from the network inputs. We then use this analytical characterization to rigorously analyze the solution space geometry and derive certainty conditions guaranteeing a non-zero synapse between neurons.
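
A simplified sketch of the underlying linear-algebraic step, under the assumption that all specified responses are strictly positive so the rectification is inactive at the fixed points: each neuron's incoming feedforward and recurrent weights then satisfy a linear system that can be solved directly. The actual framework characterizes the entire solution space, handles inactive neurons, and derives the certainty conditions; this sketch only recovers one minimum-norm solution.

```python
import numpy as np

rng = np.random.default_rng(4)
n_neurons, n_inputs, n_patterns = 6, 8, 5                   # patterns <= input synapses

X = rng.normal(size=(n_inputs, n_patterns))                 # specified network inputs
R = rng.uniform(0.5, 1.5, size=(n_neurons, n_patterns))     # specified positive responses

# For strictly positive responses, the fixed point r = relu(W r + F x) reduces to
# r = W r + F x for every pattern.  Stacking responses and inputs as regressors
# gives one linear system per neuron (self-connections are allowed in this sketch).
A = np.vstack([R, X]).T                            # (patterns, neurons + inputs)
sol, *_ = np.linalg.lstsq(A, R.T, rcond=None)      # minimum-norm rows [W_i; F_i]
W, F = sol[:n_neurons].T, sol[n_neurons:].T

R_hat = np.maximum(W @ R + F @ X, 0.0)             # check the specified fixed points
print("responses reproduced:", bool(np.allclose(R_hat, R)))
```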

Seminar · Neuroscience

Who can turn faster? Comparison of the head direction circuit of two species

Ioannis Pisokas
University of Edinburgh
Jul 19, 2020

Ants, bees and other insects have the ability to return to their nest or hive using a navigation strategy known as path integration. Similarly, fruit flies employ path integration to return to a previously visited food source. An important component of path integration is the ability of the insect to keep track of its heading relative to salient visual cues. A highly conserved brain region known as the central complex has been identified as being of key importance for the computations required for an insect to keep track of its heading. However, the similarities or differences of the underlying heading tracking circuit between species are not well understood. We sought to address this shortcoming by using reverse engineering techniques to derive the effective underlying neural circuits of two evolutionarily distant species, the fruit fly and the locust. Our analysis revealed that, regardless of the anatomical differences between the two species, the essential circuit structure has not changed. Both effective neural circuits have the structural topology of a ring attractor with an eight-fold radial symmetry. However, despite the strong similarities between the two ring attractors, there remain differences. Using computational modelling we found that two apparently small anatomical differences have a significant functional effect on the ability of the two circuits to track fast rotational movements and to maintain a stable heading signal. In particular, the fruit fly circuit responds faster to abrupt heading changes of the animal, while the locust circuit maintains a heading signal that is more robust to inhomogeneities in cell membrane properties and synaptic weights. We suggest that the effects of these differences are consistent with the behavioural ecology of the two species. On the one hand, the faster response of the ring attractor circuit in the fruit fly accommodates the fast body saccades that fruit flies are known to perform. On the other hand, the locust is a migratory species, so its behaviour demands maintenance of a defined heading for a long period of time. Our results highlight that even seemingly small differences in the distribution of dendritic fibres can have a significant effect on the dynamics of the effective ring attractor circuit, with consequences for the behavioural capabilities of each species. These differences, emerging from morphologically distinct single neurons, highlight the importance of a comparative approach to neuroscience.
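
A generic eight-neuron ring attractor sketch (cosine connectivity, rate dynamics), not the reconstructed fruit fly or locust circuits: local excitation plus broad inhibition sustains a bump of activity that an asymmetric input can rotate, which is the basic heading-tracking mechanism the abstract refers to. All parameter values are illustrative assumptions.

```python
import numpy as np

n = 8                                                      # eight-fold radial symmetry
theta = 2 * np.pi * np.arange(n) / n
W = 1.2 * np.cos(theta[:, None] - theta[None, :]) - 0.5    # local excitation, global inhibition

dt, tau = 0.01, 0.1
r = np.zeros(n)
r[0] = 1.0                                                 # seed a bump at neuron 0

def step(r, ext):
    return r + dt / tau * (-r + np.maximum(W @ r + ext, 0.0))

for _ in range(500):                                       # the bump persists without input
    r = step(r, 0.0)
for _ in range(500):                                       # an asymmetric drive rotates it
    r = step(r, 0.3 * np.cos(theta - theta[2]))

print("bump peak at neuron", int(np.argmax(r)))
```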

ePoster

Neuromodulation of synaptic plasticity rules avoids homeostatic reset of synaptic weights during switches in brain states

COSYNE 2022

ePoster

Slow transition to chaos and robust reservoir computing in recurrent neural networks with heavy-tailed distributed synaptic weights

Yi Xie, Stefan Mihalas, Lukasz Kusmierz

COSYNE 2025