Seminar • Recording Available • Neuroscience

Efficient GPU training of SNNs using approximate RTRL

Dr James Knight

University of Sussex

Schedule

Wednesday, November 3, 2021
6:15 PM Europe/Berlin

Watch the seminar

Recording provided by the organiser.

Event Information

Format: Recorded Seminar
Recording: Available
Host: SNUFA
Duration: 70 minutes
Seminar location: Not provided

Abstract

Last year’s SNUFA workshop report concluded “Moving toward neuron numbers comparable with biology and applying these networks to real-world data-sets will require the development of novel algorithms, software libraries, and dedicated hardware accelerators that perform well with the specifics of spiking neural networks” [1]. Taking inspiration from machine learning libraries — where techniques such as parallel batch training minimise latency and maximise GPU occupancy — as well as our previous research on efficiently simulating SNNs on GPUs for computational neuroscience [2,3], we are extending our GeNN SNN simulator to pursue this vision. To explore GeNN’s potential, we use the eProp learning rule [4] — which approximates RTRL — to train SNN classifiers on the Spiking Heidelberg Digits and the Spiking Sequential MNIST datasets. We find that the performance of these classifiers is comparable to that of classifiers trained using BPTT [5] and verify that the theoretical advantages of neuron models with adaptation dynamics [5] translate to improved classification performance. We then measure execution times and find that training an SNN classifier using GeNN and eProp becomes faster than SpyTorch and BPTT after fewer than 685 timesteps, and that much larger models can be trained on the same GPU when using GeNN. Furthermore, we demonstrate that our implementation of parallel batch training improves training performance by over 4× and enables near-perfect scaling across multiple GPUs. Finally, we show that performing inference with a recurrent SNN using GeNN uses less energy and has lower latency than a comparable LSTM simulated with TensorFlow [6].
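
As background to these results, eProp approximates the full RTRL gradient with synapse-local eligibility traces that can be updated online at every timestep, which is what makes it cheap to evaluate in parallel on a GPU. The sketch below is a minimal NumPy illustration of that update for a plain LIF layer without adaptation; the shapes, constants, variable names and toy learning signal are assumptions made for this example and are not GeNN, SpyTorch or TensorFlow code.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_rec, T = 20, 50, 100
alpha, v_th, gamma, lr = 0.9, 1.0, 0.3, 1e-3

W_in = rng.normal(0.0, 0.1, (n_in, n_rec))
W_rec = rng.normal(0.0, 0.1, (n_rec, n_rec))
B = rng.normal(0.0, 0.1, n_rec)   # fixed random feedback (broadcast) weights

v = np.zeros(n_rec)               # membrane potentials
z = np.zeros(n_rec)               # spikes emitted at the previous timestep
eps_in = np.zeros(n_in)           # low-pass filtered presynaptic input activity
eps_rec = np.zeros(n_rec)         # low-pass filtered presynaptic recurrent spikes
g_in = np.zeros_like(W_in)        # accumulated gradient estimates
g_rec = np.zeros_like(W_rec)

x_seq = rng.binomial(1, 0.05, (T, n_in)).astype(float)   # toy input spike trains
target_rate = 0.1                                        # toy target firing rate

for t in range(T):
    x = x_seq[t]
    # LIF membrane update with reset-by-subtraction after a spike
    v = alpha * v + x @ W_in + z @ W_rec - z * v_th
    z_new = (v >= v_th).astype(float)

    # Synapse-local low-pass traces of presynaptic activity
    eps_in = alpha * eps_in + x
    eps_rec = alpha * eps_rec + z

    # Surrogate (pseudo-)derivative of the spike w.r.t. the membrane potential
    psi = gamma * np.maximum(0.0, 1.0 - np.abs((v - v_th) / v_th))

    # Toy per-neuron learning signal; in a classifier this would be the
    # output error broadcast through fixed random weights
    L = B * (z_new.mean() - target_rate)

    # eProp gradient: eligibility trace (psi times filtered presynaptic trace)
    # multiplied by the learning signal, accumulated online instead of via BPTT
    g_in += np.outer(eps_in, psi * L)
    g_rec += np.outer(eps_rec, psi * L)
    z = z_new

# One gradient-descent step on the accumulated eProp estimates
W_in -= lr * g_in
W_rec -= lr * g_rec
```

In a real classifier the learning signal would come from the output error at each timestep; because every quantity above is local to a synapse or a neuron, the updates map naturally onto the kind of parallel, batched GPU execution described in the abstract.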

Topics

BPTT, GPU, GPU training, GeNN, GeNN simulator, RTRL, classification performance, eProp, eProp learning rule, energy consumption, energy efficiency, inference, parallel batch training, spiking neural networks

About the Speaker

Dr James Knight

University of Sussex

Contact & Resources

Personal Website: scholar.google.co.uk/citations
Twitter/X: @neworderofjamie (twitter.com/neworderofjamie)

Related Seminars

Rethinking Attention: Dynamic Prioritization • George Washington University • Jan 6, 2025

The Cognitive Roots of the Problem of Free Will • Bielefeld & Amsterdam • Jan 7, 2025

Memory Colloquium Lecture • Keio University, Tokyo • Jan 8, 2025