Topic spotlight

CUDA

Discover seminars, jobs, and research tagged with CUDA worldwide.
3 curated items: 2 seminars, 1 position
Updated 1 day ago
Position

N/A

Istituto Italiano di Tecnologia
Genoa, Via Enrico Melen 83
Dec 5, 2025

You will be working in the Pattern Analysis and Computer Vision (PAVIS) Research Line, a multidisciplinary and multicultural group where people with different backgrounds collaborate, each contributing their own expertise, to carry out research on Computer Vision and Artificial Intelligence. The PAVIS research line is coordinated by Dr. Alessio Del Bue. Within the team, your main responsibilities will be:

- Hardware and software prototyping of computational systems based on Computer Vision and Machine Learning technology
- Supporting PAVIS facility maintenance and organization
- Supporting PAVIS technology transfer initiatives (external projects)
- Supporting PAVIS researchers' activities
- Supporting PAVIS operations (procurement, ICT services, troubleshooting, data management, logistics, equipment management and maintenance)

Seminar · Neuroscience

Brian2CUDA: Generating Efficient CUDA Code for Spiking Neural Networks

Denis Alevi
Berlin Institute of Technology
Nov 2, 2022

Graphics processing units (GPUs) are widely available and have been used with great success to accelerate scientific computing in the last decade. These advances, however, are often not available to researchers interested in simulating spiking neural networks, but lacking the technical knowledge to write the necessary low-level code. Writing low-level code is not necessary when using the popular Brian simulator, which provides a framework to generate efficient CPU code from high-level model definitions in Python. Here, we present Brian2CUDA, an open-source software that extends the Brian simulator with a GPU backend. Our implementation generates efficient code for the numerical integration of neuronal states and for the propagation of synaptic events on GPUs, making use of their massively parallel arithmetic capabilities. We benchmark the performance improvements of our software for several model types and find that it can accelerate simulations by up to three orders of magnitude compared to Brian’s CPU backend. Currently, Brian2CUDA is the only package that supports Brian’s full feature set on GPUs, including arbitrary neuron and synapse models, plasticity rules, and heterogeneous delays. When comparing its performance with Brian2GeNN, another GPU-based backend for the Brian simulator with fewer features, we find that Brian2CUDA gives comparable speedups, while being typically slower for small and faster for large networks. By combining the flexibility of the Brian simulator with the simulation speed of GPUs, Brian2CUDA enables researchers to efficiently simulate spiking neural networks with minimal effort and thereby makes the advancements of GPU computing available to a larger audience of neuroscientists.
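The abstract describes generating code for the numerical integration of neuronal states, where the same update is applied to every neuron in parallel. As a rough illustration (not Brian2CUDA itself, and with all parameter values chosen purely for demonstration), the kind of state-update loop such simulators generate for leaky integrate-and-fire neurons can be sketched in plain NumPy:

```python
import numpy as np

# Illustrative sketch of the per-timestep state update a Brian-style code
# generator emits for a population of leaky integrate-and-fire neurons.
# The elementwise membrane update is what maps naturally onto GPU threads.
# All names and parameter values here are assumptions for demonstration only.

def simulate_lif(n_neurons=100, t_sim=0.1, dt=1e-4,
                 tau=0.01, v_rest=0.0, v_thresh=1.0, i_ext=1.2):
    rng = np.random.default_rng(0)
    # Random initial membrane potentials between rest and threshold
    v = rng.uniform(v_rest, v_thresh, n_neurons)
    spike_counts = np.zeros(n_neurons, dtype=int)
    for _ in range(int(t_sim / dt)):
        # Forward-Euler step of dv/dt = (v_rest - v + i_ext) / tau,
        # applied to all neurons at once
        v += dt * (v_rest - v + i_ext) / tau
        spiked = v >= v_thresh          # detect threshold crossings
        spike_counts += spiked
        v[spiked] = v_rest              # reset neurons that spiked
    return spike_counts

counts = simulate_lif()
print(counts.min(), counts.max())
```

In Brian itself the model would instead be written as a high-level equation string, and a backend such as Brian2CUDA would generate the equivalent (CUDA) update code automatically.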