Authors & Affiliations
Joel Bauer, Troy W. Margrie, Claudia Clopath
Abstract
The ability to reconstruct imagery represented by the brain has the potential to give us an intuitive understanding of what the brain “sees”. Reconstruction of visual input from human fMRI data has garnered significant attention in recent years. Comparatively little focus has been directed towards vision reconstruction from single-cell recordings, despite their potential to provide a more direct measure of the information represented by the brain. Here, we achieve high-quality reconstructions of videos presented to mice from the activity of neurons in their visual cortex. Using our method, which optimizes videos through a pre-trained state-of-the-art dynamic neural encoding model, we reliably reconstruct 10-second natural movies at 30 Hz from two-photon calcium imaging data. We achieve an approximately 2-fold increase in pixel-by-pixel correlation compared to previous state-of-the-art reconstructions of static images from mouse V1, while also capturing temporal dynamics. We find that both the size of the neuronal population used for reconstruction and the averaging of reconstructions from multiple models are critical for good performance. This paves the way for movie reconstruction to be used as a tool to investigate a variety of visual processing phenomena, such as predictive coding and selective feature attention.
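The core idea described in the abstract is to optimize the pixels of a candidate video so that a pre-trained encoding model's predicted neural responses match the recorded responses, and then average the reconstructions obtained from several models. The sketch below illustrates this idea in PyTorch with a toy encoder standing in for the actual dynamic neural encoding model; all names, shapes, and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of reconstruction-by-optimization, assuming a differentiable
# encoding model mapping a video (T x H x W) to predicted responses of N
# neurons. The toy model, shapes, and hyperparameters are placeholders.
import torch
import torch.nn as nn

T, H, W, N = 300, 36, 64, 500  # 10 s at 30 Hz; frame size and neuron count are assumed


class ToyEncoder(nn.Module):
    """Stand-in for a pre-trained dynamic neural encoding model."""

    def __init__(self):
        super().__init__()
        self.conv = nn.Conv3d(1, 8, kernel_size=(5, 7, 7), padding=(2, 3, 3))
        self.readout = nn.Linear(8 * H * W, N)

    def forward(self, video):                 # video: (T, H, W)
        x = video[None, None]                 # -> (1, 1, T, H, W)
        x = torch.relu(self.conv(x))          # -> (1, 8, T, H, W)
        x = x.squeeze(0).permute(1, 0, 2, 3)  # -> (T, 8, H, W)
        return self.readout(x.flatten(1))     # -> (T, N) predicted activity


def reconstruct(models, recorded, steps=100, lr=0.05):
    """Optimize video pixels so each model's predictions match the recorded
    responses; return the reconstruction averaged across models."""
    videos = []
    for model in models:
        model.eval()
        for p in model.parameters():          # keep the encoder fixed
            p.requires_grad_(False)
        video = torch.zeros(T, H, W, requires_grad=True)  # start from a grey video
        opt = torch.optim.Adam([video], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            loss = torch.mean((model(video) - recorded) ** 2)
            loss.backward()
            opt.step()
        videos.append(video.detach().clamp(0, 1))
    return torch.stack(videos).mean(0)        # average reconstructions across models


# Usage with random stand-in data and an ensemble of (untrained) toy encoders.
recorded = torch.randn(T, N)
ensemble = [ToyEncoder() for _ in range(3)]
reconstruction = reconstruct(ensemble, recorded)
print(reconstruction.shape)  # torch.Size([300, 36, 64])
```

Averaging over an ensemble, as noted in the abstract, would tend to suppress model-specific artifacts in the optimized videos.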