Authors & Affiliations
Nina Shamsi, Alec Xu, Michael Snyder, David Chavez, Aaron Rodriguez, Benjamin Roop, Adam Michaleas, Webster Guan, Kwanghun Chung, Laura Brattain, Lars Gjesteby
Abstract
Automated axon tracing via fully supervised learning requires annotated 3D brain imagery, which demands expertise and labor to produce. Reducing the amount of annotated training data that supervised axon-tracing algorithms need would expedite critical brain research. In this work, we developed a self-supervised learning (SSL) algorithm trained on large unannotated data volumes and integrated it into NeuroTrALE, a custom AI-driven pipeline for automatic annotation and visualization of 3D brain imagery. Our light sheet microscopy dataset consisted of stained parvalbumin-positive neurons from the globus pallidus externus (PVGPe) in 3× expanded mouse brain tissue. 3D Residual U-Nets were pretrained using a novel edge-reconstruction SSL task and topology-preserving loss functions: Geometric-Assessment-driven Topological Smoothing (GATS) and Geometric-Assessment-driven Skeletonization (GASK). The pretrained model was subsequently trained for the target task using either GATS or Mean Squared Error (MSE) loss. Initial results showed that increasing the SSL training crop size degrades topological connectivity, so a training strategy was required to maintain connectivity at larger crop sizes. We tested applying attention either during task training only or during both SSL and task training, and using either the encoder or the decoder weight embeddings for downstream task training. We found that using decoder weights for downstream task training, a larger SSL training crop size, and training with the GATS loss function during both SSL and task training improved the topological connectivity of tracings by 4.2%. For GATS models, we found that applying attention during task training only performed best.
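As a rough illustration of the two-stage scheme the abstract describes, the sketch below pretrains a toy 3D residual U-Net on an edge-reconstruction task over unannotated crops and then transfers only the decoder weights into the downstream model, the configuration reported as best. This is a minimal sketch, not the authors' implementation: the network depth, the gradient-magnitude edge target, and all hyperparameters are illustrative assumptions, and the paper's GATS/GASK losses are stood in for by MSE, which the abstract also lists as a downstream option.

```python
# Minimal sketch (not the authors' code) of: (1) SSL pretraining on an
# edge-reconstruction task, (2) transferring pretrained decoder weights
# into the downstream axon-tracing model. All names and settings here
# are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ResBlock3D(nn.Module):
    """3D residual block: two 3x3x3 convs with a skip connection."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv1 = nn.Conv3d(in_ch, out_ch, 3, padding=1)
        self.conv2 = nn.Conv3d(out_ch, out_ch, 3, padding=1)
        self.skip = nn.Conv3d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()

    def forward(self, x):
        h = F.relu(self.conv1(x))
        h = self.conv2(h)
        return F.relu(h + self.skip(x))


class ResidualUNet3D(nn.Module):
    """Toy 3D residual U-Net with one down/up level (the paper's is deeper)."""
    def __init__(self, in_ch=1, out_ch=1, base=16):
        super().__init__()
        self.enc1 = ResBlock3D(in_ch, base)
        self.enc2 = ResBlock3D(base, base * 2)
        self.up = nn.ConvTranspose3d(base * 2, base, 2, stride=2)
        self.dec1 = ResBlock3D(base * 2, base)
        self.head = nn.Conv3d(base, out_ch, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(F.max_pool3d(e1, 2))
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return self.head(d1)


def edge_target(vol):
    # Stand-in SSL target: a crude 3D gradient-magnitude "edge map" of the
    # input crop; the paper's edge-reconstruction target may differ.
    gz = vol.diff(dim=2, prepend=vol[:, :, :1]).abs()
    gy = vol.diff(dim=3, prepend=vol[:, :, :, :1]).abs()
    gx = vol.diff(dim=4, prepend=vol[:, :, :, :, :1]).abs()
    return gz + gy + gx


# --- Stage 1: self-supervised pretraining on unannotated crops ---
ssl_model = ResidualUNet3D()
opt = torch.optim.Adam(ssl_model.parameters(), lr=1e-4)
for _ in range(2):  # a couple of steps on random data, for illustration
    crop = torch.rand(2, 1, 32, 32, 32)  # unannotated 3D crop
    loss = F.mse_loss(ssl_model(crop), edge_target(crop))
    opt.zero_grad()
    loss.backward()
    opt.step()

# --- Stage 2: transfer decoder weights, then train on the annotated task ---
task_model = ResidualUNet3D()
decoder_state = {k: v for k, v in ssl_model.state_dict().items()
                 if k.startswith(("up", "dec1", "head"))}
task_model.load_state_dict(decoder_state, strict=False)  # decoder-only init

labels = torch.rand(2, 1, 32, 32, 32)  # stand-in annotated axon mask
task_loss = F.mse_loss(task_model(torch.rand(2, 1, 32, 32, 32)), labels)
# In the paper's best configuration, the GATS loss replaces MSE above.
```

The experimental knobs the abstract varies (attention placement, crop-size schedule, encoder vs. decoder transfer) are collapsed here to the simplest choices; only the decoder-weight transfer is shown explicitly.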