Plasticity Rules

Topic spotlight · World Wide

Discover seminars, jobs, and research tagged with plasticity rules across World Wide.
29 curated items · 14 Seminars · 14 ePosters · 1 Position
Updated about 19 hours ago

29 results
Seminar · Neuroscience

Learning and Memory

Nicolas Brunel, Ashok Litwin-Kumar, Julijana Gjorgjieva
Duke University; Columbia University; Technical University of Munich
Nov 28, 2024

This webinar on learning and memory features three experts—Nicolas Brunel, Ashok Litwin-Kumar, and Julijana Gjorgjieva—who present theoretical and computational approaches to understanding how neural circuits acquire and store information across different scales. Brunel discusses calcium-based plasticity and how standard “Hebbian-like” plasticity rules inferred from in vitro or in vivo datasets constrain synaptic dynamics, aligning with classical observations (e.g., STDP) and explaining how synaptic connectivity shapes memory. Litwin-Kumar explores insights from the fruit fly connectome, emphasizing how the mushroom body—a key site for associative learning—implements a high-dimensional, random representation of sensory features. Convergent dopaminergic inputs gate plasticity, reflecting a high-dimensional “critic” that refines behavior. Feedback loops within the mushroom body further reveal sophisticated interactions between learning signals and action selection. Gjorgjieva examines how activity-dependent plasticity rules shape circuitry from the subcellular (e.g., synaptic clustering on dendrites) to the cortical network level. She demonstrates how spontaneous activity during development, Hebbian competition, and inhibitory-excitatory balance collectively establish connectivity motifs responsible for key computations such as response normalization.
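As a concrete illustration of the mushroom-body motif Litwin-Kumar describes, here is a toy sketch (our own construction with invented sizes and parameters, not the speaker's code): odors are expanded into a sparse, high-dimensional random code, and a dopamine-gated depression rule associates a punished odor with reduced approach drive.

```python
# Toy mushroom-body sketch: random expansion + dopamine-gated depression.
import numpy as np

rng = np.random.default_rng(5)
n_odor, n_kc = 20, 500
J = (rng.random((n_kc, n_odor)) < 0.1).astype(float)  # sparse random expansion

def kenyon(odor):
    """Sparse high-dimensional code: only the top 5% of cells fire."""
    h = J @ odor
    return (h >= np.quantile(h, 0.95)).astype(float)

w = np.ones(n_kc)                        # KC -> approach-MBON synapses

punished = rng.normal(size=n_odor)       # odor paired with punishment
neutral = rng.normal(size=n_odor)        # control odor
for _ in range(5):
    # dopamine-gated depression of the synapses active during punishment
    w -= 0.2 * kenyon(punished)
w = np.clip(w, 0.0, 1.0)

print("approach drive, punished odor:", float(w @ kenyon(punished)))
print("approach drive, neutral odor: ", float(w @ kenyon(neutral)))
```

Because the sparse codes of the two odors barely overlap, depression of the punished odor's synapses leaves the neutral odor's drive almost untouched.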

Seminar · Neuroscience

Meta-learning functional plasticity rules in neural networks

Tim Vogels
Institute of Science and Technology (IST), Klosterneuburg, Austria
Jan 17, 2023

Synaptic plasticity is known to be a key player in the brain’s life-long learning abilities. However, due to experimental limitations, the nature of the local changes at individual synapses and their link with emerging network-level computations remain unclear. I will present a numerical, meta-learning approach to deduce plasticity rules from neuronal activity data, prior knowledge about the network's computation, or both. I will first show how to recover known rules, given a human-designed loss function in rate networks, or directly from data, using an adversarial approach. Then I will present how to scale up this approach to recurrent spiking networks using simulation-based inference.
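The meta-learning recipe can be illustrated with a toy example. In the sketch below (our own construction; the polynomial parameterization and random search stand in for the gradient-based and simulation-based inference methods mentioned above), an outer loop searches over coefficients of a local rule Δw = A·pre·post + B·pre + C·post + D so that a one-layer rate network learns a target mapping.

```python
# Minimal meta-learning sketch: search over plasticity-rule coefficients.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))          # input patterns
W_teacher = rng.normal(size=(10, 5))   # defines the target mapping
Y = np.tanh(X @ W_teacher)             # target outputs

def inner_loop(theta, steps=200, eta=0.01):
    """Train a student network with the candidate rule; return final loss."""
    A, B, C, D = theta
    W = np.zeros((10, 5))
    for _ in range(steps):
        i = rng.integers(len(X))
        pre, post = X[i], np.tanh(X[i] @ W)
        # local, activity-dependent update applied at every synapse
        dW = (A * np.outer(pre, post)
              + B * pre[:, None]
              + C * post[None, :]
              + D)
        W += eta * dW
    return np.mean((np.tanh(X @ W) - Y) ** 2)

# Outer loop: simple random search over rule coefficients.
best_theta, best_loss = None, np.inf
for _ in range(300):
    theta = rng.normal(scale=0.5, size=4)
    loss = inner_loop(theta)
    if loss < best_loss:
        best_theta, best_loss = theta, loss

print("best coefficients (A,B,C,D):", np.round(best_theta, 3))
print("loss:", best_loss)
```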

Seminar · Neuroscience

Brian2CUDA: Generating Efficient CUDA Code for Spiking Neural Networks

Denis Alevi
Berlin Institute of Technology (TU Berlin)
Nov 2, 2022

Graphics processing units (GPUs) are widely available and have been used with great success to accelerate scientific computing in the last decade. These advances, however, are often not available to researchers interested in simulating spiking neural networks who lack the technical knowledge to write the necessary low-level code. Writing low-level code is not necessary when using the popular Brian simulator, which provides a framework to generate efficient CPU code from high-level model definitions in Python. Here, we present Brian2CUDA, open-source software that extends the Brian simulator with a GPU backend. Our implementation generates efficient code for the numerical integration of neuronal states and for the propagation of synaptic events on GPUs, making use of their massively parallel arithmetic capabilities. We benchmark the performance improvements of our software for several model types and find that it can accelerate simulations by up to three orders of magnitude compared to Brian’s CPU backend. Currently, Brian2CUDA is the only package that supports Brian’s full feature set on GPUs, including arbitrary neuron and synapse models, plasticity rules, and heterogeneous delays. When comparing its performance with Brian2GeNN, another GPU-based backend for the Brian simulator with fewer features, we find that Brian2CUDA gives comparable speedups, while being typically slower for small networks and faster for large ones. By combining the flexibility of the Brian simulator with the simulation speed of GPUs, Brian2CUDA enables researchers to efficiently simulate spiking neural networks with minimal effort and thereby makes the advancements of GPU computing available to a larger audience of neuroscientists.
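For readers who want to try this, switching an existing Brian 2 script to Brian2CUDA is a one-line device change, per the project's documentation. A minimal sketch (the model and parameters are illustrative, adapted from Brian's standard STDP example):

```python
from brian2 import *
import brian2cuda                  # registers the CUDA backend with Brian 2
set_device("cuda_standalone")      # the only change: generate CUDA code

# Ordinary Brian 2 model: LIF neurons driven by Poisson input through
# plastic (STDP) synapses.
G = NeuronGroup(1000, "dv/dt = -v / (10*ms) : 1",
                threshold="v > 1", reset="v = 0", method="exact")
P = PoissonGroup(1000, rates=15*Hz)
S = Synapses(P, G,
             model="""w : 1
                      dapre/dt  = -apre  / (20*ms) : 1 (event-driven)
                      dapost/dt = -apost / (20*ms) : 1 (event-driven)""",
             on_pre="""v_post += w
                       apre += 0.01
                       w = clip(w + apost, 0, 0.1)""",
             on_post="""apost -= 0.0105
                        w = clip(w + apre, 0, 0.1)""")
S.connect(p=0.1)
S.w = "0.05 * rand()"

run(1*second)                      # CUDA code is generated, compiled and run
```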

Seminar · Neuroscience · Recording

Associative memory of structured knowledge

Julia Steinberg
Princeton University
Oct 25, 2022

A long-standing challenge in biological and artificial intelligence is to understand how new knowledge can be constructed from known building blocks in a way that is amenable to computation by neuronal circuits. Here we focus on the task of storage and recall of structured knowledge in long-term memory. Specifically, we ask how recurrent neuronal networks can store and retrieve multiple knowledge structures. We model each structure as a set of binary relations between events and attributes (attributes may represent, e.g., temporal order, spatial location, or role in semantic structure), and map each structure to a distributed neuronal activity pattern using a vector symbolic architecture (VSA) scheme. We then use associative memory plasticity rules to store the binarized patterns as fixed points in a recurrent network. By a combination of signal-to-noise analysis and numerical simulations, we demonstrate that our model allows for efficient storage of these knowledge structures, such that the memorized structures as well as their individual building blocks (e.g., events and attributes) can subsequently be retrieved from partial cues. We show that long-term memory of structured knowledge relies on a new principle of computation beyond the memory basins. Finally, we show that our model can be extended to store sequences of memories as single attractors.
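The building blocks described above can be illustrated in a few lines. A minimal sketch (our own simplification, with assumed sizes: bipolar role/filler vectors, multiplicative binding, and one-pattern Hopfield storage):

```python
# VSA binding/bundling + Hebbian (Hopfield) storage and recall.
import numpy as np

rng = np.random.default_rng(1)
N = 2000                                   # neurons / vector dimension
rand_vec = lambda: rng.choice([-1, 1], size=N)

# Codebook: random vectors for roles (attributes) and fillers (events)
roles = {r: rand_vec() for r in ["agent", "action", "patient"]}
fills = {f: rand_vec() for f in ["cat", "chases", "mouse"]}

# One knowledge structure = bundled set of role (x) filler bindings
bound = [roles["agent"] * fills["cat"],
         roles["action"] * fills["chases"],
         roles["patient"] * fills["mouse"]]
pattern = np.sign(np.sum(bound, axis=0))   # binarize the bundle

# Hebbian storage of the pattern as a fixed point of a recurrent network
W = np.outer(pattern, pattern) / N
np.fill_diagonal(W, 0.0)

# Recall from a partial cue: half the entries zeroed out
cue = pattern.copy()
cue[: N // 2] = 0
for _ in range(5):                         # recurrent dynamics to fixed point
    cue = np.sign(W @ cue)
print("pattern recovered:", np.array_equal(cue, pattern))

# Unbinding: multiplying the recalled pattern by a role vector yields a
# noisy filler, decoded by highest correlation with the codebook.
noisy = cue * roles["agent"]
print("agent decoded as:", max(fills, key=lambda f: noisy @ fills[f]))
```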

Seminar · Neuroscience · Recording

Dynamics of networks with plasticity rules inferred from data

Nicolas Brunel
Duke University, Durham
Apr 24, 2022

Seminar · Neuroscience

A nonlinear shot noise model for calcium-based synaptic plasticity

Bin Wang
Aljadeff lab, University of California San Diego, USA
Dec 8, 2021

Activity-dependent synaptic plasticity is considered to be a primary mechanism underlying learning and memory. Yet it is unclear whether plasticity rules such as STDP measured in vitro apply in vivo. Network models with STDP predict that activity patterns (e.g., place-cell spatial selectivity) should change much faster than observed experimentally. We address this gap by investigating a nonlinear calcium-based plasticity rule fit to experiments done in physiological conditions. In this model, LTP and LTD result from intracellular calcium transients arising almost exclusively from synchronous coactivation of pre- and postsynaptic neurons. We analytically approximate the full distribution of nonlinear calcium transients as a function of pre- and postsynaptic firing rates and temporal correlations. This analysis directly relates activity statistics that can be measured in vivo to the changes in synaptic efficacy they cause. Our results highlight that both high firing rates and temporal correlations can lead to significant changes in synaptic efficacy. Using a mean-field theory, we show that the nonlinear plasticity rule, without any fine-tuning, gives a stable, unimodal synaptic weight distribution characterized by many strong synapses which remain stable over long periods of time, consistent with electrophysiological and behavioral studies. Moreover, our theory explains how memories encoded by strong synapses can be preferentially stabilized by the plasticity rule. We confirmed our analytical results in a spiking recurrent network. Interestingly, although most synapses are weak and undergo rapid turnover, the fraction of strong synapses is sufficient to support realistic spiking dynamics and serves to maintain the network’s cluster structure. Our results provide a mechanistic understanding of how stable memories may emerge on the behavioral level from an STDP rule measured in physiological conditions. Furthermore, the plasticity rule we investigate is mathematically equivalent to other learning rules which rely on the statistics of coincidences, so we expect that our formalism will be useful for studying other learning processes beyond the calcium-based plasticity rule.
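A simplified, illustrative version of such a calcium-based rule is sketched below (all parameters invented; the full model's bistable weight dynamics are reduced here to threshold-gated drift in the style of Graupner and Brunel). Pre- and postsynaptic spikes drive a calcium trace, a large nonlinear jump occurs on coincidence, and the weight changes only while calcium exceeds a threshold.

```python
# Calcium-threshold plasticity sketch with a nonlinear coincidence term.
import numpy as np

rng = np.random.default_rng(2)
dt, T = 1e-3, 50.0                      # 1 ms steps, 50 s simulation
tau_ca = 0.02                           # calcium decay time constant (s)
theta_d, theta_p = 1.0, 1.3             # depression / potentiation thresholds
gamma_d, gamma_p = 0.1, 0.2             # drift rates above each threshold

r_pre, r_post = 10.0, 10.0              # pre/post firing rates (Hz)
c, w = 0.0, 0.5                         # calcium trace; efficacy in [0, 1]

for _ in range(int(T / dt)):
    pre = rng.random() < r_pre * dt     # Poisson pre/post spikes
    post = rng.random() < r_post * dt
    # small linear jumps per spike, large nonlinear jump on coincidence
    c += 0.3 * pre + 0.3 * post + 2.0 * (pre and post)
    c -= dt * c / tau_ca                # exponential decay
    # threshold-gated weight drift: depression above theta_d,
    # net potentiation above theta_p
    w += dt * (gamma_p * (c > theta_p) - gamma_d * (c > theta_d))
    w = min(max(w, 0.0), 1.0)

print(f"final efficacy after {T:.0f}s at {r_pre:.0f}/{r_post:.0f} Hz: {w:.3f}")
```

Single spikes never cross the thresholds here; only (near-)coincident activity moves the weight, mirroring the abstract's point that transients arise almost exclusively from synchronous coactivation.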

Seminar · Neuroscience

Synaptic plasticity controls the emergence of population-wide invariant representations in balanced network models

Tatjana Tchumatchenko
University of Bonn
Nov 9, 2021

The intensity and features of sensory stimuli are encoded in the activity of neurons in the cortex. In the visual and piriform cortices, the stimulus intensity re-scales the activity of the population without changing its selectivity for the stimulus features. The cortical representation of the stimulus is therefore intensity-invariant. This emergence of network invariant representations appears robust to local changes in synaptic strength induced by synaptic plasticity, even though: i) synaptic plasticity can potentiate or depress connections between neurons in a feature-dependent manner, and ii) in networks with balanced excitation and inhibition, synaptic plasticity determines the non-linear network behavior. In this study, we investigate the consistency of invariant representations with a variety of synaptic states in balanced networks. By using mean-field models and spiking network simulations, we show how the synaptic state controls the emergence of intensity-invariant or intensity-dependent selectivity by inducing changes in the network response to intensity. In particular, we demonstrate how facilitating synaptic states can sharpen the network selectivity while depressing states broaden it. We also show how power-law-type synapses permit the emergence of invariant network selectivity and how this plasticity can be generated by a mix of different plasticity rules. Our results explain how the physiology of individual synapses is linked to the emergence of invariant representations of sensory stimuli at the network level.
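The role of power-law behavior in intensity invariance can be seen in a toy calculation (illustrative only, not the study's balanced-network model): if the population response is a power law of the input, scaling the stimulus intensity multiplies all rates by a constant and leaves the tuning width unchanged.

```python
# Power-law transfer preserves tuning shape under intensity scaling.
import numpy as np

theta = np.linspace(-np.pi, np.pi, 181)     # feature (e.g., orientation)
f = np.exp(-theta**2 / (2 * 0.5**2))        # input tuning curve
alpha = 2.0                                 # supralinear power-law transfer

for I in [0.5, 1.0, 2.0]:                   # three stimulus intensities
    r = (I * f) ** alpha                    # population response
    width = np.ptp(theta[r > r.max() / 2])  # width at half maximum
    print(f"I={I}: peak rate {r.max():6.3f}, half-max width {width:.3f} rad")
```

The peak rate scales as I^alpha while the half-max width is identical for all intensities, i.e., selectivity is intensity-invariant.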

Seminar · Neuroscience · Recording

Norse: A library for gradient-based learning in Spiking Neural Networks

Jens Egholm Pedersen
KTH Royal Institute of Technology
Nov 2, 2021

We introduce Norse: an open-source library for gradient-based training of spiking neural networks. In contrast to neuron simulators, which mainly target computational neuroscientists, our library seamlessly integrates with the existing PyTorch ecosystem using abstractions familiar to the machine learning community. This has immediate benefits in that it provides a familiar interface, hardware accelerator support and, most importantly, the ability to use gradient-based optimization. While many parallel efforts in this direction exist, Norse emphasizes flexibility and usability in three ways. Users can conveniently specify feed-forward (convolutional) architectures, as well as arbitrarily connected recurrent networks. We strictly adhere to a functional and class-based API such that neuron primitives and, for example, plasticity rules compose. Finally, the functional core API ensures compatibility with the PyTorch JIT and ONNX infrastructure. We have made progress toward supporting network execution on the SpiNNaker platform and plan to support other neuromorphic architectures in the future. While the library is useful in its present state, it also has limitations we will address in ongoing work. In particular, we aim to implement event-based gradient computation, using the EventProp algorithm, which will allow us to support sparse event-based data efficiently, as well as work towards support of more complex neuron models. With this library, we hope to contribute to a joint future of computational neuroscience and neuromorphic computing.
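To illustrate the core technique such libraries build on, here is a generic surrogate-gradient example in plain PyTorch (this is not Norse's API; the SuperSpike-style surrogate and all parameters are our own choices): the spike nonlinearity uses a Heaviside forward pass and a smooth backward pass, which makes the spiking network trainable with ordinary autograd.

```python
import torch

class SuperSpike(torch.autograd.Function):
    """Heaviside spike forward, fast-sigmoid surrogate gradient backward."""
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        return grad_out / (1.0 + 10.0 * v.abs()) ** 2

def lif_readout(x_seq, w_in, w_out, beta=0.9, v_th=1.0):
    """Unroll one layer of leaky integrate-and-fire neurons over time."""
    v = torch.zeros(x_seq.shape[1], w_in.shape[1])
    out = torch.zeros(x_seq.shape[1], w_out.shape[1])
    for x_t in x_seq:                   # time loop (BPTT via autograd)
        v = beta * v + x_t @ w_in       # leaky integration of input
        s = SuperSpike.apply(v - v_th)  # differentiable spike
        v = v - s * v_th                # soft reset after spikes
        out = out + s @ w_out           # accumulate readout
    return out

# Toy training step: map random spike trains to random targets.
torch.manual_seed(0)
x = (torch.rand(100, 8, 30) < 0.1).float()       # (time, batch, inputs)
y = torch.randn(8, 2)                            # regression targets
w_in = (0.3 * torch.randn(30, 64)).requires_grad_()
w_out = (0.1 * torch.randn(64, 2)).requires_grad_()

opt = torch.optim.Adam([w_in, w_out], lr=1e-2)
for _ in range(100):
    loss = ((lif_readout(x, w_in, w_out) - y) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
print("final loss:", float(loss))
```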

Seminar · Neuroscience · Recording

Deriving local synaptic learning rules for efficient representations in networks of spiking neurons

Viola Priesemann
Max Planck Institute for Dynamics and Self-Organization
Nov 1, 2021

How can neural networks learn to efficiently represent complex and high-dimensional inputs via local plasticity mechanisms? Classical models of representation learning assume that input weights are learned via pairwise Hebbian-like plasticity. Here, we show that pairwise Hebbian-like plasticity only works under specific requirements on neural dynamics and input statistics. To overcome these limitations, we derive from first principles a learning scheme based on voltage-dependent synaptic plasticity rules. In this scheme, inhibition learns to locally balance excitatory input in individual dendritic compartments, and can thereby modulate excitatory synaptic plasticity to learn efficient representations. We demonstrate in simulations that this learning scheme works robustly even for complex, high-dimensional and correlated inputs. It also works in the presence of inhibitory transmission delays, where Hebbian-like plasticity typically fails. Our results draw a direct connection between dendritic excitatory-inhibitory balance and voltage-dependent synaptic plasticity as observed in vivo, and suggest that both are crucial for representation learning.
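A sketch of the first ingredient (our own simplification with invented parameters, not the authors' derivation): a plastic inhibitory weight learns to cancel the excitatory drive to a dendritic compartment, driving its voltage toward zero. In the full scheme, this residual voltage then gates excitatory plasticity.

```python
# Inhibitory plasticity learns dendritic E/I balance.
import numpy as np

rng = np.random.default_rng(4)
n_in = 50
w_e = rng.uniform(0.0, 0.2, n_in)          # fixed excitatory weights
w_i, eta_i = 0.0, 1e-2                     # plastic inhibitory weight

def dendritic_voltage(x):
    return w_e @ x - w_i * x.mean()         # E drive minus local inhibition

x0 = rng.poisson(2.0, n_in).astype(float)
print("initial |v|:", round(abs(dendritic_voltage(x0)), 2))

for _ in range(5000):
    x = rng.poisson(2.0, n_in).astype(float)  # presynaptic rates
    v = dendritic_voltage(x)
    # signed update pushes inhibition up or down until v hovers near zero
    w_i += eta_i * v * x.mean()

x1 = rng.poisson(2.0, n_in).astype(float)
print("after learning |v|:", round(abs(dendritic_voltage(x1)), 2))
```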

Seminar · Neuroscience · Recording

Interacting synapses stabilise both learning and neuronal dynamics in biological networks

Tim Vogels
IST Austria
Mar 2, 2021

Distinct synapses influence one another when they undergo changes, with unclear consequences for neuronal dynamics and function. Here we show that synapses can interact such that excitatory currents are naturally normalised and balanced by inhibitory inputs. This happens when classical spike-timing dependent synaptic plasticity rules are extended by additional mechanisms that incorporate the influence of neighbouring synaptic currents and regulate the amplitude of efficacy changes accordingly. The resulting control of excitatory plasticity by inhibitory activation, and vice versa, gives rise to quick and long-lasting memories as seen experimentally in receptive field plasticity paradigms. In models with additional dendritic structure, we observe experimentally reported clustering of co-active synapses that depends on initial connectivity and morphology. Finally, in recurrent neural networks, rich and stable dynamics with high input sensitivity emerge, providing transient activity that resembles recordings from the motor cortex. Our model provides a general framework for codependent plasticity that frames individual synaptic modifications in the context of population-wide changes, allowing us to connect micro-level physiology with behavioural phenomena.
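A toy sketch of the codependence idea (assumed functional form and parameters, not the paper's model): a pair-based STDP update whose amplitude shrinks when the local excitatory current outruns the inhibitory current, so excitation cannot grow unchecked.

```python
# Codependent STDP sketch: update amplitude gated by neighboring currents.
import numpy as np

rng = np.random.default_rng(3)
T, dt = 100.0, 1e-3
n_e, n_i = 20, 5
w_e = rng.uniform(0.2, 0.4, n_e)        # excitatory efficacies
w_i = rng.uniform(0.2, 0.4, n_i)        # inhibitory efficacies (fixed here)
x_e = np.zeros(n_e)                     # presynaptic eligibility traces
post_trace = 0.0                        # postsynaptic trace

print("initial mean excitatory weight:", w_e.mean().round(3))
for _ in range(int(T / dt)):
    pre_e = rng.random(n_e) < 10 * dt   # 10 Hz Poisson excitatory inputs
    pre_i = rng.random(n_i) < 10 * dt   # 10 Hz Poisson inhibitory inputs
    post = rng.random() < 5 * dt        # 5 Hz postsynaptic spikes
    # currents produced by this step's spikes
    I_e, I_i = w_e @ pre_e, w_i @ pre_i
    # codependent factor: plasticity shrinks as excitation exceeds inhibition
    k = 1.0 / (1.0 + 5.0 * max(I_e - I_i, 0.0))
    x_e = x_e * np.exp(-dt / 0.02) + pre_e
    post_trace = post_trace * np.exp(-dt / 0.02) + post
    # classical pair-based STDP, gated by the codependent factor k
    w_e += k * (0.01 * x_e * post - 0.012 * post_trace * pre_e)
    w_e = np.clip(w_e, 0.0, 1.0)
print("final mean excitatory weight:  ", w_e.mean().round(3))
```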

Seminar · Neuroscience

A meta-learning approach to (re)discover plasticity rules that carve a desired function into a neural network

Tim Vogels
Institute of Science and Technology Austria
Jan 24, 2021

Seminar · Neuroscience · Recording

Distinct synaptic plasticity mechanisms determine the diversity of cortical responses during behavior

Michele Insanally
University of Pittsburgh School of Medicine
Jan 14, 2021

Spike trains recorded from the cortex of behaving animals can be complex, highly variable from trial to trial, and therefore challenging to interpret. A fraction of cells exhibit trial-averaged responses with obvious task-related features such as pure-tone frequency tuning in auditory cortex. However, a substantial number of cells (including cells in primary sensory cortex) do not appear to fire in a task-related manner and are often omitted from analysis. We recently used a novel single-trial, spike-timing-based analysis to show that both classically responsive and non-classically responsive cortical neurons contain significant information about sensory stimuli and behavioral decisions, suggesting that non-classically responsive cells may play an underappreciated role in perception and behavior. We now expand this investigation to explore the synaptic origins and potential contribution of these cells to network function. To do so, we trained a novel spiking recurrent neural network model that incorporates spike-timing-dependent plasticity (STDP) mechanisms to perform the same task as behaving animals. By leveraging excitatory and inhibitory plasticity rules, this model reproduces neurons with response profiles that are consistent with previously published experimental data, including classically responsive and non-classically responsive neurons. We found that both classically responsive and non-classically responsive neurons encode behavioral variables in their spike times, as seen in vivo. Interestingly, plasticity in excitatory-to-excitatory synapses increased the proportion of non-classically responsive neurons and may play a significant role in determining response profiles. Finally, our model also makes predictions about the synaptic origins of classically and non-classically responsive neurons, which we can compare to in vivo whole-cell recordings taken from the auditory cortex of behaving animals. This approach successfully recapitulates heterogeneous response profiles measured from behaving animals and provides a powerful lens for exploring large-scale neuronal dynamics and the plasticity rules that shape them.

ePoster

Knocking out co-active plasticity rules in neural networks reveals synapse type-specific contributions for learning and memory

Zoe Harrington, Basile Confavreux, Pedro Gonçalves, Jakob Macke, Tim Vogels

Bernstein Conference 2024

ePoster

A family of synaptic plasticity rules based on spike times produces a diversity of triplet motifs in recurrent networks

Claudia Cusseddu, Dylan Festa, Christoph Miehl, Julijana Gjorgjieva

Bernstein Conference 2024

ePoster

Adversarial learning of plasticity rules

COSYNE 2022

ePoster

Neuromodulation of synaptic plasticity rules avoids homeostatic reset of synaptic weights during switches in brain states

COSYNE 2022

ePoster

Supervised learning and interpretation of plasticity rules in spiking neural networks

COSYNE 2022

ePoster

Paradoxical self-sustained dynamics emerge from orchestrated excitatory and inhibitory homeostatic plasticity rules

Saray Soldado-Magraner, Michael J. Seay, Rodrigo Laje, Dean Buonomano

COSYNE 2023

ePoster

A combination of plasticity rules underlies learning of flexible goal-directed behaviors

Shiva Azizpour Lindi, Vatsalya Chaubey, Arseny Finkelstein, Johnatan Aljadeff

COSYNE 2025

ePoster

Discovering plasticity rules that organize and maintain neural circuits

David Bell, Alison Duffy, Adrienne Fairhall

COSYNE 2025

ePoster

A family of synaptic plasticity rules shapes triplet motifs in recurrent networks

Claudia Cusseddu, Dylan Festa, Christoph Miehl, Julijana Gjorgjieva

COSYNE 2025

ePoster

Memory as a byproduct of stability through hysteresis: Distilling meta-learned plasticity rules

Basile Confavreux, Tim Vogels, Andrew Saxe

COSYNE 2025

ePoster

Systematic analysis of meta-learned synaptic plasticity rules reveals degeneracy and fragility

Jan-Erik Huehne, Nikos Malakasis, Dylan Festa, Julijana Gjorgjieva

COSYNE 2025

ePoster

Memories by a thousand rules: Meta-learning plasticity rules for memory formation and recall in large spiking networks

Basile Confavreux, Poornima Ramesh, Pedro J. Gonçalves, Jakob H. Macke, Tim P. Vogels

FENS Forum 2024