Authors & Affiliations
Matthew Creamer, Kevin Chen, Andrew M. Leifer, Jonathan Pillow
Abstract
A fundamental goal in neuroscience is to connect an animal's behavior to its neural activity. However, imaging neural activity in a behaving animal presents unique challenges because the animal's movements create motion artifacts that, in the worst cases, cannot be distinguished from the neural signals of interest. One approach to mitigating motion artifacts is to image two channels simultaneously: one that captures a calcium-sensitive fluorophore, such as GCaMP, and another that captures a calcium-insensitive fluorophore, such as RFP. In principle, because the calcium-insensitive channel contains the same motion artifacts as the calcium-sensitive channel but no neural signals, it can be used to correct for motion artifacts. In practice, existing approaches, such as taking the ratio of the two channels, do not satisfactorily mitigate all motion artifacts. Moreover, no systematic comparison has been made of the existing approaches that use two-channel signals. Here, we construct a generative model of the fluorescence in the two channels as a function of motion, neural activity, and noise. We then use Bayesian inference to infer the latent neural activity, uncontaminated by motion artifacts. We further present a novel method for evaluating ground-truth performance by attempting to decode behavior from calcium recordings in moving animals. We compare recordings of freely moving C. elegans that express GCaMP with those of control animals that lack GCaMP. Our insight is that a successful method should not only decode neural signals well but should also eliminate decodable motion artifacts from recordings that contain no neural activity. We use this evaluation to systematically compare five models for removing motion artifacts and find that our model decodes locomotion from a GCaMP-expressing animal 12x more accurately than from controls, outperforming all other methods tested by a factor of 4.
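To make the two-channel setup concrete, the following is a minimal, hypothetical simulation, not the paper's model or data. All signal shapes, noise levels, and variable names (`motion`, `activity`, `gcamp`, `rfp`) are illustrative assumptions: the GCaMP-like channel carries neural activity multiplied by a shared motion artifact, the RFP-like channel carries the motion artifact alone, and the ratiometric baseline divides the artifact out.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 2000  # number of imaging frames

# Hypothetical shared multiplicative motion artifact (oscillation + noise)
motion = 1.0 + 0.5 * np.sin(np.linspace(0, 20 * np.pi, T)) + 0.1 * rng.standard_normal(T)

# Hypothetical slowly varying latent neural activity (kept positive)
activity = np.clip(1.0 + 0.02 * np.cumsum(rng.standard_normal(T)), 0.1, None)

# Two-channel observation model:
# GCaMP channel = activity x motion + sensor noise; RFP channel = motion + sensor noise
gcamp = activity * motion + 0.02 * rng.standard_normal(T)
rfp = motion + 0.02 * rng.standard_normal(T)

# Ratiometric correction: divide out the shared motion artifact
ratio = gcamp / rfp

def corr(x, y):
    """Pearson correlation between two 1-D traces."""
    return np.corrcoef(x, y)[0, 1]

# The ratio should track the latent activity better than the raw GCaMP trace
print(f"raw GCaMP vs. activity:  r = {corr(gcamp, activity):.3f}")
print(f"ratio     vs. activity:  r = {corr(ratio, activity):.3f}")
```

In this toy setting the ratio recovers the latent activity well because the artifact is purely multiplicative and identical across channels; the abstract's point is that real recordings violate these assumptions, which motivates modeling the two channels generatively instead.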