Self-supervised learning using Geometric Assessment-driven Topological Smoothing (GATS) for neuron tracing and Active Learning Environment (NeuroTrALE)

Nina Shamsi and 10 co-authors
FENS Forum 2024
Messe Wien Exhibition & Congress Center, Vienna, Austria

Presentation

Date TBA

Abstract

Automated axon tracing via fully supervised learning requires annotated 3D brain imagery, which takes expertise and labor to produce. Reducing the amount of annotated training data needed by supervised axon-tracing algorithms would expedite critical brain research. In this work, we developed a self-supervised learning (SSL) algorithm trained on large unannotated data volumes and integrated it into NeuroTrALE (pictured), a custom AI-driven pipeline for automatic annotation and visualization of 3D brain imagery. Our light-sheet microscopy dataset consisted of stained parvalbumin-positive neurons from the globus pallidus externus (PVGPe) in 3× expanded mouse brain tissue. 3D Residual U-Nets were pretrained on a novel edge-reconstruction SSL task using topology-preserving loss functions: Geometric-Assessment-driven Topological Smoothing (GATS) and Geometric-Assessment-driven Skeletonization (GASK). The pretrained model was subsequently trained for the target task using either GATS or mean squared error (MSE). Initial results showed that larger SSL training crop sizes degrade topological connectivity, so a training strategy was needed to preserve connectivity at larger crop sizes. We compared applying attention during task-only training versus during both SSL and task training, and using either the encoder or decoder weight embeddings for downstream task training. We found that using decoder weights for downstream task training, a larger SSL training crop size, and training with the GATS loss function during both SSL and task training increased the topological connectivity of tracings by 4.2%. For GATS models, we found that attention during task-only training performed best.

© 2024 Massachusetts Institute of Technology
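As an illustration of the pretext task, edge-reconstruction SSL pairs each unannotated crop with an edge map that the network learns to reconstruct, so no manual annotations are needed for pretraining. Below is a minimal NumPy sketch of one plausible target; the poster does not specify the edge operator, so the gradient-magnitude choice and the name `edge_reconstruction_target` are assumptions for illustration only.

```python
import numpy as np

def edge_reconstruction_target(crop: np.ndarray) -> np.ndarray:
    # Hypothetical SSL pretext target: a normalized gradient-magnitude
    # edge map of the raw crop, which the network is pretrained to
    # reconstruct (the actual edge operator used is not specified).
    gz, gy, gx = np.gradient(crop.astype(np.float64))
    mag = np.sqrt(gz ** 2 + gy ** 2 + gx ** 2)
    peak = mag.max()
    # Normalize to [0, 1] so the reconstruction loss is scale-free.
    return mag / peak if peak > 0 else mag

# Toy 3D crop: a bright cube inside an otherwise empty volume.
crop = np.zeros((16, 16, 16))
crop[4:12, 4:12, 4:12] = 1.0
target = edge_reconstruction_target(crop)  # same shape as the crop
```

In this sketch the target is nonzero only near intensity boundaries, which is consistent with the abstract's emphasis on preserving topological structure: the pretrained network must attend to thin, connected edge geometry before fine-tuning on the tracing task.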