Authors & Affiliations
Catherine Schöfmann, Jan Finkbeiner, Susanne Kunkel
Abstract
Since the popularization of GPUs for machine learning (ML) workloads, several dedicated accelerator chips have emerged, offering architectures optimized for common ML operations and requirements. However, the types of tasks targeted by such platforms, often sparse and highly parallel, are not confined to the realm of traditional layer-based learning. Established simulators for large-scale spiking networks with biologically plausible connectivity and synaptic density have historically targeted CPUs, with GPU support and ports being a relatively recent development [1][2].
Here, we present a working solution for simulating current-based leaky integrate-and-fire spiking dynamics, inspired by the simulator NEST (www.nest-simulator.org), on Graphcore's IPU (www.graphcore.ai), and analyse its performance. More generally, we implement a communication algorithm suited to distributed and memory-constrained environments, capable of reaching competitive acceleration while allowing a controlled trade-off between performance and memory usage. We benchmark established milestone models such as the cortical microcircuit [3], a model integrating experimental connectivity data for 1 mm² of cortical tissue, spanning around 300 million synapses, and considered a building block of brain function. Results are validated by comparing spike timings for a fixed connection mapping against a NEST simulation of the same model.
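To make the simulated dynamics concrete, the following is a minimal sketch of a current-based leaky integrate-and-fire update step with exponential postsynaptic currents, in the spirit of NEST's iaf_psc_exp model. All parameter values and names are illustrative placeholders, not the settings used in our benchmarks.

```python
import numpy as np

# Minimal sketch of current-based LIF dynamics with exponential
# postsynaptic currents (in the spirit of NEST's iaf_psc_exp).
# Parameter values are illustrative placeholders.

dt      = 0.1    # simulation resolution (ms)
tau_m   = 10.0   # membrane time constant (ms)
tau_syn = 0.5    # synaptic time constant (ms)
C_m     = 250.0  # membrane capacitance (pF)
E_L     = -65.0  # resting potential (mV)
V_th    = -50.0  # spike threshold (mV)
V_reset = -65.0  # reset potential (mV)
t_ref   = 2.0    # refractory period (ms)

# Propagators for exact integration of the linear subthreshold dynamics.
P_syn = np.exp(-dt / tau_syn)
P_mem = np.exp(-dt / tau_m)
P_cur = (tau_m * tau_syn) / (C_m * (tau_m - tau_syn)) * (P_mem - P_syn)

def step(V, I_syn, refr, spike_input):
    """Advance all neurons by one timestep.

    V, I_syn, refr are float arrays of per-neuron state;
    spike_input holds the summed synaptic weights (pA) arriving
    at each neuron in this step. Returns updated state and a
    boolean array marking neurons that fired.
    """
    V = E_L + P_mem * (V - E_L) + P_cur * I_syn  # integrate membrane
    I_syn = P_syn * I_syn + spike_input          # decay + incoming spikes
    in_ref = refr > 0.0
    V[in_ref] = V_reset                          # clamp while refractory
    refr[in_ref] -= dt
    fired = V >= V_th
    V[fired] = V_reset                           # reset, enter refractory
    refr[fired] = t_ref
    return V, I_syn, refr, fired
```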
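The communication algorithm is described here only at a high level; the following hypothetical sketch illustrates the underlying idea of trading memory for exchange rounds via a fixed-size spike buffer. Function and variable names are our own illustration, not Graphcore's Poplar API or the actual implementation.

```python
def exchange_spikes(local_spikes, n_parts, buffer_size):
    """All-to-all broadcast of spike ids in fixed-size rounds.

    local_spikes: list of per-partition lists of global neuron ids
                  that spiked in this timestep.
    A smaller buffer_size lowers memory usage but may require more
    exchange rounds per timestep; a larger buffer does the opposite.
    Returns the list of spike ids every partition receives.
    """
    received = []
    offsets = [0] * n_parts
    done = False
    while not done:
        done = True
        # Each partition fills its slot of the round's exchange buffer.
        for p in range(n_parts):
            chunk = local_spikes[p][offsets[p]:offsets[p] + buffer_size]
            offsets[p] += len(chunk)
            if offsets[p] < len(local_spikes[p]):
                done = False  # this partition needs another round
            received.extend(chunk)
    return received

# Example: 4 partitions, at most 2 spike ids per partition per round.
spikes = [[0, 5, 9], [12], [], [20, 21, 22, 23]]
print(exchange_spikes(spikes, n_parts=4, buffer_size=2))
```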
Applying accelerators to bio-plausible simulations of spiking networks and understanding the limitations stands to advance research in both fields: neuroscience and brain-inspired computing. Increasing the efficiency of large-scale network simulation pushes the boundaries of the scale at which models of neuronal systems can reasonably be studied, in turn enabling novel insights in neuroscience. Similarly, addressing today's technological challenges of network simulations with bio-plausible connectivity and synaptic density may pave the way for efficient solutions in future brain-inspired computing. Finally, we believe that our communication pattern can generalize to spiking simulations on different target platforms and inform future hardware design.