ePoster

cuBNM: GPU-Accelerated Biophysical Network Modeling

Amin Saberi, Kevin Wischnewski, Kyesam Jung, Leonard Sasse, Felix Hoffstaedter, Oleksandr Popovych, Boris Bernhardt, Simon Eickhoff, Sofie Valk
Bernstein Conference 2024 (2024)
Goethe University, Frankfurt, Germany

Abstract

Background: Biophysical network modeling (BNM) of the brain is a promising technique for bridging macro- and microscale levels of investigation, enabling inferences about latent features of brain activity such as excitation-inhibition balance. This approach allows personalized models of the brain to be fitted to individual subjects’ imaging data through parameter optimization [1, 2]. However, the process typically requires several thousand simulations per subject, making it computationally expensive and limiting its scalability to larger subject pools and more complex models. To address this, we present cuBNM (https://cubnm.readthedocs.io), a toolbox designed for efficient simulation and optimization of BNMs using GPUs and CPUs.

Methods and Results: The core of the toolbox runs highly parallelized simulations using C++/CUDA, while the user interface, written in Python, is intuitive and gives users control over simulation configurations (Figure 1a). The toolbox includes parameter optimization algorithms, including grid search and evolutionary optimizers such as the covariance matrix adaptation evolution strategy (CMA-ES). Within the grid, or within each sequential iteration of CMA-ES, parallelization is done at the level of simulations (across GPU ‘blocks’) and nodes (across each block’s ‘threads’). The models can incorporate global or regional free parameters that are fit to empirical data using the provided optimization algorithms. Regional parameters can be homogeneous, vary across nodes based on a parameterized combination of fixed maps, or be independent free parameters for each node or group of nodes. We performed benchmark tests by running N = {1, …, $2^{15}$} parallel homogeneous simulations of the reduced Wong-Wang model [3] (duration 60 s, BOLD TR 1 s) on three types of Nvidia GPUs (A100, GTX 1080 Ti, and T4), as well as on a multi-CPU supercomputer node (128 cores) and on a single CPU core (Figure 1b). On the A100 GPU, running 32,768 parallel simulations took 14 min 41 s; the same simulations on a single CPU core were estimated to take more than 7 days.

Conclusion: BNM simulations and parameter optimization can be performed considerably more efficiently on GPUs than on CPUs. Our GPU implementation of BNMs enables scaling of this approach to larger numbers of subjects as well as to more complex and biologically realistic models, which can ultimately increase model performance and validity.
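
As a point of reference for the model named above (not taken from the poster itself), the reduced Wong-Wang (dynamic mean-field) model of [3] is commonly written per node $i$ with a single synaptic gating variable $S_i$:

$$\frac{dS_i}{dt} = -\frac{S_i}{\tau_S} + \gamma\,(1 - S_i)\,H(x_i) + \sigma\,\nu_i(t), \qquad H(x_i) = \frac{a x_i - b}{1 - \exp\!\left(-d\,(a x_i - b)\right)},$$

$$x_i = w\,J_N\,S_i + G\,J_N \sum_{j} C_{ij}\,S_j + I_0,$$

where $C_{ij}$ is the structural connectivity, $G$ the global coupling, $H$ the population firing-rate transfer function, and $\sigma\,\nu_i(t)$ Gaussian noise; the simulated activity is then passed through a hemodynamic model to obtain BOLD signals. One common way to write a "parameterized combination of fixed maps" for a regional parameter $p_i$ is

$$p_i = p_0 + \sum_{k} \beta_k\, M_{ik},$$

where $M_{ik}$ are fixed biological maps (e.g., cortical gradients or myelin maps) and $p_0$, $\beta_k$ are free parameters; whether cuBNM uses exactly this form is an assumption here.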
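
To illustrate the parallelization scheme described above (simulations mapped to GPU blocks, nodes mapped to threads), the following is a minimal, self-contained CUDA sketch. It is not the actual cuBNM kernel: the kernel name, the plain Euler integration of the reduced Wong-Wang gating variable, the omission of noise and of the hemodynamic (BOLD) stage, and the array layouts are all simplifications assumed for illustration; parameter values are typical ones from the literature.

    // Minimal sketch of block/thread parallelization for BNM simulations
    // (illustrative only; not the cuBNM implementation).
    #include <cuda_runtime.h>
    #include <cstdio>

    #define N_NODES 100     // assumed number of brain regions
    #define DT      0.1f    // Euler step in ms (assumed)
    #define N_STEPS 600000  // 60 s of simulated activity at DT = 0.1 ms

    // One GPU block integrates one simulation; each thread integrates one node.
    __global__ void integrate_rww(const float* sc,   // [N_NODES x N_NODES] structural connectivity
                                  const float* G,    // [n_sims] global coupling per simulation
                                  float* S_out)      // [n_sims x N_NODES] final gating variables
    {
        const int sim  = blockIdx.x;   // simulation index -> block
        const int node = threadIdx.x;  // node index       -> thread

        __shared__ float S[N_NODES];   // current state of all nodes in this simulation
        S[node] = 0.001f;              // initial condition
        __syncthreads();

        // Typical reduced Wong-Wang constants (time in ms, rates in Hz).
        const float a = 270.0f, b = 108.0f, d = 0.154f;
        const float tau_s = 100.0f, gamma = 0.641e-3f;
        const float w = 0.9f, J_N = 0.2609f, I_0 = 0.3f;

        for (int t = 0; t < N_STEPS; ++t) {
            // Network input: weighted sum of the other nodes' gating variables.
            float coupling = 0.0f;
            for (int j = 0; j < N_NODES; ++j)
                coupling += sc[node * N_NODES + j] * S[j];

            const float x  = w * J_N * S[node] + G[sim] * J_N * coupling + I_0;
            const float r  = (a * x - b) / (1.0f - expf(-d * (a * x - b)));
            const float dS = -S[node] / tau_s + gamma * (1.0f - S[node]) * r;

            __syncthreads();             // all reads of S are done ...
            S[node] += DT * dS;          // ... before any thread updates its node
            __syncthreads();             // state is consistent for the next step
            // (Noise and the hemodynamic/BOLD stage are omitted for brevity.)
        }
        S_out[sim * N_NODES + node] = S[node];
    }

    int main() {
        const int n_sims = 4;            // small demo; in practice thousands run in parallel
        static float sc_h[N_NODES * N_NODES];
        for (int i = 0; i < N_NODES * N_NODES; ++i)
            sc_h[i] = 1.0f / N_NODES;    // toy uniform connectivity, for illustration only
        float G_h[n_sims] = {0.5f, 1.0f, 1.5f, 2.0f};

        float *sc_d, *G_d, *S_d;
        cudaMalloc(&sc_d, sizeof(sc_h));
        cudaMalloc(&G_d, sizeof(G_h));
        cudaMalloc(&S_d, n_sims * N_NODES * sizeof(float));
        cudaMemcpy(sc_d, sc_h, sizeof(sc_h), cudaMemcpyHostToDevice);
        cudaMemcpy(G_d, G_h, sizeof(G_h), cudaMemcpyHostToDevice);

        integrate_rww<<<n_sims, N_NODES>>>(sc_d, G_d, S_d);  // blocks = simulations, threads = nodes
        cudaDeviceSynchronize();

        float S_h[n_sims * N_NODES];
        cudaMemcpy(S_h, S_d, sizeof(S_h), cudaMemcpyDeviceToHost);
        printf("S[sim=0, node=0] = %f\n", S_h[0]);

        cudaFree(sc_d); cudaFree(G_d); cudaFree(S_d);
        return 0;
    }

A full implementation would add stochastic noise, compute BOLD time series via a hemodynamic model, and handle regional parameter maps, but the block/thread mapping shown here is the point of the sketch.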

Unique ID: bernstein-24/cubnm-gpu-accelerated-biophysical-3eea5d57