Authors & Affiliations
Domonkos Martos, Josefina Catoni, Ferenc Csikor, Balázs Meszéna, Enzo Ferrante, Diego Milone, Gergő Orbán, Rodrigo Echeveste
Abstract
Divisive normalization (DN), the division of a neuron’s activity by the summed activity of a pool of connected neurons, has been recognized as a general computational motif in the cortex. It has been argued that in the visual cortex DN implements a canonical computation of generative models, delivering contextual effects and contributing to the shaping of response variability. Deep generative models (DGMs) have recently been adopted by neuroscience from machine vision to capture nonlinear computations in the visual cortex. Yet, it remains unclear whether DGMs effectively integrate DN as a core computational element. We investigated Variational Autoencoders (VAEs), a powerful family of DGMs, to understand the conditions under which DN, this hallmark neural computation, emerges in a model of the primary visual cortex (V1). VAEs are trained by jointly learning a pair of models: a generative model, which describes how latent factors combine to produce an image, and a recognition model, which approximates probabilistic inference in the generative model. When we trained a standard VAE on natural images, we found limited evidence of DN. Inspired by computer vision, we introduced a subtle inductive bias into the generative component of the VAE that designates one latent variable as a scaling variable, which collectively scales the contributions of the remaining latents. We found that this Scale-Mixture VAE learned a representation in which the scaling variable correlated with image contrast, rendering the rest of the latent space more invariant to contrast changes. More importantly, the learned recognition model displayed multiple signatures of DN. Moreover, recent arguments about the concomitant increase in DN and reduction in response variability (known as quenching) in neural activity were confirmed in the Scale-Mixture VAE but not in the standard VAE. Interestingly, the lack of quenching in the standard VAE corresponded to flawed inference. Thus, our results highlight that DN contributes to accurate inference in VAEs, indicating a potentially more general role in machine learning applications.
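For readers unfamiliar with the operation, the following is a minimal sketch of the canonical DN equation in the form popularized by Carandini and Heeger; the exponent, semi-saturation constant, and uniform pool weights are illustrative defaults, not values used or fitted in this paper.

```python
import numpy as np

def divisive_normalization(drives, n=2.0, sigma=0.5, pool_weights=None):
    """Canonical divisive normalization (Carandini & Heeger form).

    Each unit's driving input is raised to an exponent and divided by a
    weighted sum of the exponentiated drives of a normalization pool.
    All parameter values here are illustrative, not fitted.
    """
    drives = np.asarray(drives, dtype=float)
    if pool_weights is None:
        # Assume, for illustration, a uniform pool over all units.
        pool_weights = np.ones((drives.size, drives.size)) / drives.size
    powered = np.abs(drives) ** n
    return powered / (sigma ** n + pool_weights @ powered)
```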
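The sentence on training refers to the standard variational objective for the model pair: the generative model p_theta(x|z)p(z) and the recognition model q_phi(z|x) are learned jointly by maximizing the evidence lower bound, written here in the usual VAE notation (the paper's exact parameterization may differ):

```latex
\mathcal{L}(\theta, \phi; x)
  = \mathbb{E}_{q_\phi(z \mid x)}\!\left[ \log p_\theta(x \mid z) \right]
  - \mathrm{KL}\!\left( q_\phi(z \mid x) \,\|\, p(z) \right)
```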
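The inductive bias described above can be read as a scale-mixture generative model: one designated latent multiplicatively gates the image generated from the remaining latents. The sketch below is one plausible PyTorch rendering under that reading; the module name, the softplus link for the scale, and the layer sizes are assumptions for illustration, not the authors' architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaleMixtureDecoder(nn.Module):
    """Generative model in which one latent collectively scales the
    output produced from the remaining latents (a scale-mixture form).
    Layer sizes and the softplus link are illustrative assumptions."""

    def __init__(self, n_latents=64, n_pixels=1024):
        super().__init__()
        # All but one latent describe image content.
        self.content = nn.Linear(n_latents - 1, n_pixels)

    def forward(self, z):
        z_content, z_scale = z[:, :-1], z[:, -1]
        mean_image = self.content(z_content)
        # The designated latent multiplicatively scales the whole
        # output, so it can absorb global contrast changes.
        gain = F.softplus(z_scale).unsqueeze(-1)
        return gain * mean_image
```

Under such a parameterization the scaling latent can absorb global contrast, leaving the content latents comparatively contrast-invariant, which matches the representation the abstract reports for the Scale-Mixture VAE.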