Authors & Affiliations
Yongrong Qiu, David Klindt, Klaudia Szatko, Laura Busse, Matthias Bethge, Thomas Euler
Abstract
Neural system identification aims to learn the response function of neurons to arbitrary stimuli by incorporating the right assumptions into the model, i.e., those that facilitate generalization beyond the particular stimuli used during training [1]. Here, we present normative network regularization, a novel regularization tool that allows prior assumptions to be flexibly imposed on model training. In particular, we use this approach to incorporate the efficient coding hypothesis, which states that the response properties of sensory representations are strongly shaped by the need to preserve most of the stimulus information with limited resources [2], as a regularizer. Using this approach, we explore whether natural input statistics can help improve the predictive performance of models of neural responses. To this end, we regularized the filters of a system identification model, trained to predict the responses of retinal neurons to noise stimuli, with a normative efficient coding model. By forcing both models to share convolutional filters, we found a synergy between neural system identification and efficient coding. As a result, the normative regularization approach not only yielded higher predictive performance than the “stand-alone” system identification model, it also produced more biologically plausible filters. We found these results to be consistent across different stimuli and model architectures. Moreover, our normatively regularized models performed particularly well in predicting the responses of direction-of-motion-sensitive retinal neurons. In summary, our results demonstrate how the efficient coding hypothesis can be successfully leveraged as a normative regularizer for the identification of neural response properties.
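To make the shared-filter idea concrete, the sketch below (a minimal illustration, not the authors' implementation) shows one plausible way to couple the two models: a convolutional filter bank feeds both a response-prediction readout (the system identification branch) and a stimulus-reconstruction decoder standing in for the efficient coding objective, with a weighted sum of the two losses acting as the normative regularizer. All names, dimensions, and the choice of a reconstruction-based coding term are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedFilterModel(nn.Module):
    """Illustrative sketch: one convolutional filter bank shared between a
    system identification branch (predicting neural responses) and an
    efficient-coding-style branch (reconstructing the stimulus)."""

    def __init__(self, n_filters=16, kernel_size=9, stim_size=36, n_neurons=50):
        super().__init__()
        # Shared convolutional filters (padding preserves spatial size)
        self.conv = nn.Conv2d(1, n_filters, kernel_size, padding=kernel_size // 2)
        # System identification readout: feature maps -> neuron responses
        self.readout = nn.Linear(n_filters * stim_size * stim_size, n_neurons)
        # Normative branch: decode the stimulus back from the shared features
        self.decoder = nn.ConvTranspose2d(n_filters, 1, kernel_size,
                                          padding=kernel_size // 2)

    def forward(self, x):
        z = F.relu(self.conv(x))                             # shared representation
        responses = F.softplus(self.readout(z.flatten(1)))   # non-negative rates
        recon = self.decoder(z)                              # stimulus reconstruction
        return responses, recon

def joint_loss(responses, targets, recon, stimuli, weight=0.1):
    # Prediction term (Poisson NLL is a common choice for spike counts)
    pred_loss = F.poisson_nll_loss(responses, targets, log_input=False)
    # Reconstruction term standing in for the efficient coding objective;
    # `weight` sets the strength of the normative regularization
    coding_loss = F.mse_loss(recon, stimuli)
    return pred_loss + weight * coding_loss
```

Because both losses backpropagate into the same `self.conv` filters, the normative term biases the learned filters toward stimulus-preserving (efficient) codes while the prediction term fits the recorded responses, which is the synergy the abstract describes.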