Topic spotlight
Topic · Worldwide

GPUs

Discover seminars, jobs, and research tagged with GPUs worldwide.
8 curated items · 4 positions · 4 seminars
Updated 1 day ago
8 results
Position

N/A

Istituto Italiano di Tecnologia
Genoa, Via Enrico Melen 83
Dec 5, 2025

You will be working in the Pattern Analysis and Computer Vision (PAVIS) Research Line, a multi-disciplinary and multi-cultural group where people with different backgrounds collaborate, each contributing their own expertise, to carry out research on Computer Vision and Artificial Intelligence. The PAVIS Research Line is coordinated by Dr. Alessio Del Bue. Within the team, your main responsibilities will be: hardware and software prototyping of computational systems based on Computer Vision and Machine Learning technology; supporting PAVIS facility maintenance and organization; supporting PAVIS Technology Transfer initiatives (external projects); supporting PAVIS researcher activities; and supporting PAVIS operations (procurement, ICT services, troubleshooting, data management, logistics, equipment management and maintenance).

Position

Dr. Robert McDougal

McDougal lab, Yale University
Yale University
Dec 5, 2025

The McDougal lab at Yale University seeks one or two highly motivated Postdoctoral Associates to advance the frontiers of neuroscience simulation. This work will be shared with the community in the form of enhancements to the NEURON simulator. The Postdoctoral Associate will lead one or more of these projects and will present this work at conferences, in publications, and in the form of open-source software. Mentorship and career development opportunities will be tailored to the Postdoctoral Associate’s interests.

Position

Prof. Dr. Yee Lee Shing, Prof. Dr. Gemma Roig

Goethe University Frankfurt
Goethe University Frankfurt, Theodor-W.-Adorno-Platz 6, 60323 Frankfurt am Main; Robert-Mayer-Straße 11-15, 60325 Frankfurt
Dec 5, 2025

The DFG-funded project Learning From Environment Through the Eyes of Children, within SPP 2431 "New Data Spaces for the Social Sciences" and situated at Goethe University Frankfurt, is looking for candidates for two positions: one PostDoc position in Psychology and one PhD or PostDoc position in Computer Science. The project aims to establish a new mode of data acquisition that captures young children's first-person experience in naturalistic settings and to develop AI systems that characterize the nature and complexity of these experiences. This interdisciplinary project involves collaboration between the psychology and computer science departments, contributing to the SPP programme's goals of establishing a new multimodal data approach in social science studies.

Position · Computer Science

N/A

University of Rochester
University of Rochester
Dec 5, 2025

The University of Rochester’s Department of Computer Science seeks to hire an outstanding early-career candidate in the area of Artificial Intelligence. Specifically, we are looking to hire a tenure-track Assistant Professor in any of the following areas: Learning Theory, especially as related to deep learning; Machine Learning Systems (MLOps, memory-efficient training techniques, distributed model training methods with GPUs/accelerators, etc.); or Deep Reinforcement Learning. We are especially interested in applications of these areas to large language models. Exceptional candidates at the associate or full professor level, or in other AI research areas such as foundational research in natural language processing (NLP), are also encouraged to apply. Candidates must have (or be about to receive) a doctorate in computer science or a related discipline.

Seminar · Neuroscience

Brian2CUDA: Generating Efficient CUDA Code for Spiking Neural Networks

Denis Alevi
Berlin Institute of Technology
Nov 2, 2022

Graphics processing units (GPUs) are widely available and have been used with great success to accelerate scientific computing over the last decade. These advances, however, are often not available to researchers who are interested in simulating spiking neural networks but lack the technical knowledge to write the necessary low-level code. Writing low-level code is not necessary when using the popular Brian simulator, which provides a framework to generate efficient CPU code from high-level model definitions in Python. Here, we present Brian2CUDA, an open-source package that extends the Brian simulator with a GPU backend. Our implementation generates efficient code for the numerical integration of neuronal states and for the propagation of synaptic events on GPUs, making use of their massively parallel arithmetic capabilities. We benchmark the performance improvements of our software for several model types and find that it can accelerate simulations by up to three orders of magnitude compared to Brian’s CPU backend. Currently, Brian2CUDA is the only package that supports Brian’s full feature set on GPUs, including arbitrary neuron and synapse models, plasticity rules, and heterogeneous delays. When comparing its performance with Brian2GeNN, another GPU-based backend for the Brian simulator with fewer features, we find that Brian2CUDA gives comparable speedups, being typically slower for small networks and faster for large ones. By combining the flexibility of the Brian simulator with the simulation speed of GPUs, Brian2CUDA enables researchers to efficiently simulate spiking neural networks with minimal effort, thereby making the advancements of GPU computing available to a larger audience of neuroscientists.
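To make the workflow concrete, here is a minimal sketch of how a high-level Brian 2 model can be switched to the Brian2CUDA GPU backend. The LIF population below is an illustrative placeholder, not one of the benchmark models from the talk, and the snippet assumes brian2, brian2cuda, and a working CUDA toolchain are installed.

```python
# Minimal sketch: running a high-level Brian 2 model on the GPU via Brian2CUDA.
# The neuron model is an illustrative LIF group, not a benchmark from the talk.
from brian2 import *
import brian2cuda  # registers the "cuda_standalone" device with Brian 2

set_device("cuda_standalone")  # generate and run CUDA code instead of CPU code

# The model is defined at the usual high level; no GPU code is written by hand.
eqs = "dv/dt = (1.5 - v) / (10*ms) : 1"
group = NeuronGroup(10000, eqs, threshold="v > 1", reset="v = 0", method="euler")
spikes = SpikeMonitor(group)

run(1*second)  # code generation, compilation, and GPU execution happen here
print(f"Total spikes: {spikes.num_spikes}")
```

Apart from the device selection, the script is identical to a CPU-only Brian 2 script, which is the point the abstract makes about lowering the barrier to GPU simulation.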

Seminar · Neuroscience · Recording

Efficient GPU training of SNNs using approximate RTRL

James Knight
University of Sussex
Nov 2, 2021

Last year’s SNUFA workshop report concluded “Moving toward neuron numbers comparable with biology and applying these networks to real-world data-sets will require the development of novel algorithms, software libraries, and dedicated hardware accelerators that perform well with the specifics of spiking neural networks” [1]. Taking inspiration from machine learning libraries — where techniques such as parallel batch training minimise latency and maximise GPU occupancy — as well as our previous research on efficiently simulating SNNs on GPUs for computational neuroscience [2,3], we are extending our GeNN SNN simulator to pursue this vision. To explore GeNN’s potential, we use the eProp learning rule [4] — which approximates RTRL — to train SNN classifiers on the Spiking Heidelberg Digits and the Spiking Sequential MNIST datasets. We find that the performance of these classifiers is comparable to those trained using BPTT [5] and verify that the theoretical advantages of neuron models with adaptation dynamics [5] translate to improved classification performance. We then measured execution times and found that training an SNN classifier using GeNN and eProp becomes faster than SpyTorch and BPTT after fewer than 685 timesteps, and that much larger models can be trained on the same GPU when using GeNN. Furthermore, we demonstrate that our implementation of parallel batch training improves training performance by over 4× and enables near-perfect scaling across multiple GPUs. Finally, we show that performing inference with a recurrent SNN using GeNN uses less energy and has lower latency than a comparable LSTM simulated with TensorFlow [6].
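As a rough illustration of the eligibility-trace idea behind eProp, which the abstract contrasts with BPTT and full RTRL, the NumPy sketch below accumulates per-synapse eligibility traces forward in time and combines them with a learning signal. The network size, constants, and the random placeholder learning signal are assumptions for illustration; this is not GeNN’s implementation.

```python
# Schematic eProp sketch for a feedforward LIF layer: eligibility traces are
# computed forward in time, so no backward pass through time is needed.
# All shapes and constants are illustrative assumptions.
import numpy as np

n_in, n_rec, T = 100, 50, 200
alpha, v_th, gamma, lr = 0.9, 1.0, 0.3, 1e-3
rng = np.random.default_rng(0)

w_in = rng.normal(0, 0.1, (n_rec, n_in))
v = np.zeros(n_rec)            # membrane potentials
z_bar = np.zeros(n_in)         # low-pass filtered presynaptic spikes
grad = np.zeros_like(w_in)     # accumulated eProp gradient estimate

for t in range(T):
    x = (rng.random(n_in) < 0.05).astype(float)   # random input spikes
    v = alpha * v + w_in @ x                      # leaky membrane update
    z = (v > v_th).astype(float)                  # output spikes
    v -= z * v_th                                 # soft reset after a spike

    z_bar = alpha * z_bar + x                     # presynaptic eligibility trace
    h = gamma * np.maximum(0.0, 1.0 - np.abs((v - v_th) / v_th))  # pseudo-derivative
    e = np.outer(h, z_bar)                        # per-synapse eligibility traces
    L = rng.normal(0, 0.1, n_rec)                 # placeholder learning signal (e.g. broadcast error)
    grad += L[:, None] * e                        # combine signal and traces online

w_in -= lr * grad                                 # apply the accumulated weight update
```

Because each step only needs the current traces, this style of update maps naturally onto the parallel, batched GPU execution that the abstract describes.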