Authors & Affiliations
Luca Baroni, Martin Picek, Saumil Patel, Andreas S. Tolias, Ján Antolík
Abstract
Cortical visual prostheses are designed to restore vision by stimulating neurons, inducing visual percepts in people with impaired vision. For these percepts to be useful, stimulation must induce activity that closely mimics that of natural vision. Existing stimulation strategies for V1 prostheses rely primarily on retinotopy, overlooking other important encoding properties and thereby failing to replicate the complex neural activity patterns of natural vision. Our work introduces a novel stimulation protocol that targets cortical tissue according to both retinotopy and orientation columns. To derive this protocol, we implemented a specialized artificial neural network, a bottlenecked rotationally equivariant convolutional neural network, that learns to predict neural responses to arbitrary stimuli solely from each neuron's receptive-field position and orientation preference. Our model outperforms classical models of V1 cells such as energy models, and the high correlation between target and predicted responses suggests that position and orientation alone can explain a large portion of V1 neural response variability. We tested our stimulation protocol against current retinotopy-based strategies within a previously published simulation framework comprising a large-scale spiking V1 model and a model of optogenetic prosthetic stimulation delivered via an LED array placed on the cortical surface. Across different types of stimuli, our retinotopy-and-orientation-based strategy outperformed retinotopy-only strategies, recruiting neural activity patterns that more accurately mimic natural vision processing.
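To make the core idea concrete, the following is a minimal sketch (not the authors' implementation) of a rotation-equivariant feature bank with a bottlenecked readout: an image is convolved with rotated copies of a single base filter (here an analytically generated Gabor, a hypothetical choice), producing an orientation-indexed feature stack, and each model neuron's predicted response is read out purely from the channel matching its orientation preference at its receptive-field position. All sizes, frequencies, and function names are illustrative assumptions.

```python
import numpy as np

def gabor(size, theta, freq=0.2, sigma=None):
    """Oriented Gabor filter; rotated copies form the orientation bank.
    (Illustrative base filter, not the one used in the paper.)"""
    sigma = sigma or size / 6
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    xr = x * np.cos(theta) + y * np.sin(theta)  # rotated coordinate
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * freq * xr)

def equivariant_features(img, n_orient=8, ksize=15):
    """Convolve the image with n_orient rotated copies of one base filter.
    Returns a (n_orient, H, W) stack whose channel axis indexes orientation,
    the defining property of a rotation-equivariant convolutional layer."""
    H, W = img.shape
    pad = ksize // 2
    padded = np.pad(img, pad, mode="constant")
    feats = np.zeros((n_orient, H, W))
    for k in range(n_orient):
        kern = gabor(ksize, theta=k * np.pi / n_orient)
        # direct 2-D correlation (slow loop, kept simple for clarity)
        for i in range(H):
            for j in range(W):
                feats[k, i, j] = np.sum(padded[i:i + ksize, j:j + ksize] * kern)
    return feats

def readout(feats, neurons, n_orient=8):
    """Bottlenecked readout: a neuron's predicted response is the feature at
    its receptive-field position, in the channel nearest its preferred
    orientation -- nothing else about the neuron is used."""
    preds = []
    for (row, col, pref) in neurons:  # pref in radians
        k = int(round(pref / (np.pi / n_orient))) % n_orient
        preds.append(feats[k, row, col])
    return np.array(preds)
```

For example, a vertical grating drives the orientation-matched channel far more strongly than the orthogonal one, so two co-located model neurons with orthogonal preferences receive very different predicted responses, which is exactly the structure a retinotopy-only readout cannot capture.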