
Seminar · Recording Available · Neuroscience

Online Training of Spiking Recurrent Neural Networks With Memristive Synapses

Yigit Demirag

Institute of Neuroinformatics

Schedule

Wednesday, July 6, 2022

5:00 PM Europe/Berlin

Watch recording
Host: SNUFA

Access Seminar

Meeting Password

$Em4HF

Use this password when joining the live session

Watch the seminar

Recording provided by the organiser.

Event Information

Domain

Neuroscience

Original Event

View source

Host

SNUFA

Duration

30 minutes

Abstract

Spiking recurrent neural networks (RNNs) are a promising tool for solving a wide variety of complex cognitive and motor tasks, thanks to their rich temporal dynamics and sparse processing. However, training spiking RNNs on dedicated neuromorphic hardware remains an open challenge, mainly due to the lack of local, hardware-friendly learning mechanisms that can solve the temporal credit assignment problem and ensure stable network dynamics even when the weight resolution is limited. These challenges are further accentuated if one resorts to memristive devices for in-memory computing to resolve the von Neumann bottleneck, at the expense of a substantial increase in variability in both the computation and the working memory of the spiking RNNs. In this talk, I will present our recent work introducing a PyTorch simulation framework for memristive crossbar arrays that enables accurate investigation of these challenges. I will show that the recently proposed e-prop learning rule can be used to train spiking RNNs whose weights are emulated in this framework. Although e-prop locally approximates the ideal synaptic updates, the updates are difficult to implement on the memristive substrate due to substantial device non-idealities. I will discuss several widely adopted weight update schemes that aim to cope with these non-idealities, and demonstrate that accumulating gradients enables online and efficient training of spiking RNNs on memristive substrates.
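The gradient-accumulation idea mentioned at the end of the abstract can be sketched in a few lines. The following is a minimal toy model, not the speaker's actual framework: the device parameters (4-bit resolution, write-noise magnitude) and the function names are illustrative assumptions. The point it demonstrates is that per-step updates far smaller than one programmable step would be lost to quantisation, but summing them in a high-precision software accumulator and flushing whole pulses to the device lets training progress anyway.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical device model (all numbers are illustrative assumptions):
# weights are programmable only in discrete steps, and every programming
# pulse adds cycle-to-cycle write noise.
N_LEVELS = 16                                # assumed 4-bit weight resolution
W_MIN, W_MAX = -1.0, 1.0
STEP = (W_MAX - W_MIN) / (N_LEVELS - 1)      # smallest programmable change
WRITE_NOISE = 0.3 * STEP                     # assumed write variability

def accumulated_update(w, acc, grad, lr=0.1):
    """One training step with gradient accumulation.

    Small gradient updates are summed in a high-precision software
    accumulator and only flushed to the device as whole pulses once
    they reach one programmable step, so they are not lost to the
    device's coarse quantisation.
    """
    acc = acc + lr * grad                            # accumulate in software
    n_pulses = np.trunc(acc / STEP)                  # whole pulses available
    applied = n_pulses * STEP
    noise = WRITE_NOISE * rng.standard_normal(w.shape) * (n_pulses != 0)
    w = np.clip(w - applied + noise, W_MIN, W_MAX)   # gradient-descent pulse
    acc = acc - applied                              # keep sub-pulse residue
    return w, acc

w = np.zeros((4, 4))
acc = np.zeros_like(w)
for _ in range(100):
    grad = 0.02 * np.ones_like(w)                    # toy constant gradient
    w, acc = accumulated_update(w, acc, grad)
```

For comparison, a naive per-step write of `lr * grad = 0.002` is far below `STEP ≈ 0.133`, so every update would truncate to zero pulses and the weights would never move; with accumulation, the device receives a full pulse roughly every 67 steps.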

Topics

PyTorch simulation, bio-plausible learning, device non-idealities, e-prop learning rule, in-memory computing, memristive synapses, memristor, neuromorphic, neuromorphic hardware, temporal dynamics, weight update schemes

About the Speaker

Yigit Demirag

Institute of Neuroinformatics

Contact & Resources

@yigitdemirag

Follow on Twitter/X

twitter.com/yigitdemirag

Related Seminars

Knight ADRC Seminar
Neuroscience · Jan 20, 2025 · Washington University in St. Louis, Neurology

TBD
Neuroscience · Jan 20, 2025 · King's College London

Guiding Visual Attention in Dynamic Scenes
Neuroscience · Jan 20, 2025 · Haifa U