Authors & Affiliations
Nikita Pospelov, Andrei Chertkov, Maxim Beketov, Ivan Oseledets, Konstantin Anokhin
Abstract
Elements of neural networks (NNs), both biological and artificial, can be described by their selectivity to specific cognitive features. Revealing such features is important for understanding the inner working mechanisms of NNs. For living systems, where the neural response to a stimulus is not a known function, one needs to build a feedback loop that exposes the system to a set of different stimuli. The properties of these stimuli should be iteratively varied, steering them toward the stimulus that maximizes the neuronal response. To utilize such a loop for a biological network, one should run it quickly and efficiently, reaching the stimuli that maximize the activation of certain neurons in the least possible number of iterations. Here we present a framework with an effective design of such a loop and successfully test it on an artificial spiking neural network (SNN), a model that mimics neurons in living brains. Our optimization method for activation maximization (AM) is based on the low-rank tensor-train (TT) decomposition of the discretized activation function. The optimization domain is the discretized latent space of a generative NN (VQ-VAE or SN-GAN), which is used to generate the optimal stimuli. We also tracked changes in the preferred stimuli of SNN neurons during network training and showed that they can gradually change the concepts to which they are selective. To our knowledge, the present work is the first attempt to perform effective AM for SNNs.

Funding
This work was supported by the Non-Commercial Foundation for Support of Science and Education "INTELLECT" and by the Brain Program of the IDEAS Research Center.
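To make the pipeline in the abstract concrete, below is a minimal sketch of the closed AM loop: a discrete code on the discretized latent grid is decoded by a generative model into a stimulus, the stimulus is scored by the activation of one target neuron, and a discrete optimizer searches the grid for the maximizer. The names generator_decode and snn_activation, the grid sizes, and both placeholder functions are hypothetical stand-ins for the paper's VQ-VAE/SN-GAN decoder and SNN readout. The paper's actual optimizer is TT-based; this sketch uses plain alternating coordinate ascent over the same discretized grid only to keep the example self-contained and runnable.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 8                               # latent dimensionality (assumed)
N = 64                              # grid points per latent coordinate
grid = np.linspace(-3.0, 3.0, N)    # per-coordinate discretization

def generator_decode(z):
    """Hypothetical generator: maps a latent vector to a stimulus."""
    # Placeholder fixed linear decoder standing in for VQ-VAE / SN-GAN.
    W = np.sin(np.outer(np.arange(1, 17), np.arange(1, D + 1)))
    return W @ z

def snn_activation(stimulus):
    """Hypothetical response of one target SNN neuron (the black box)."""
    # Placeholder objective with a single smooth peak.
    target = np.cos(np.arange(stimulus.size))
    return -np.sum((stimulus - target) ** 2)

def activation(idx):
    """Objective on the discrete grid: latent indices -> neuron activation."""
    z = grid[idx]
    return snn_activation(generator_decode(z))

def maximize_am(n_sweeps=5):
    """Alternating coordinate ascent over the discretized latent space.

    Stand-in for the TT-based maximizer; uses N * D * n_sweeps evaluations,
    mimicking the paper's emphasis on a small query budget.
    """
    idx = rng.integers(0, N, size=D)
    for _ in range(n_sweeps):
        for k in range(D):              # optimize one coordinate at a time,
            cand = np.tile(idx, (N, 1))
            cand[:, k] = np.arange(N)   # scanning its full grid of values
            scores = [activation(c) for c in cand]
            idx[k] = int(np.argmax(scores))
    return idx, activation(idx)

best_idx, best_act = maximize_am()
print("best latent code:", grid[best_idx])
print("max activation:  ", best_act)
```

In the paper's setting the same loop shape holds, but the grid search inside maximize_am is replaced by a TT cross-approximation-style maximizer, which exploits the low-rank structure of the discretized activation function to locate near-optimal stimuli with far fewer queries than exhaustive or coordinate-wise search.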