Authors & Affiliations
Kensuke Yoshida, Taro Toyoizumi
Abstract
Animals make decisions based on high-dimensional sensory inputs. Obtaining low-dimensional, disentangled representations of these inputs, ideally in an unsupervised manner, is essential for straightforward downstream information processing. Although the machine learning field has developed powerful nonlinear dimensionality reduction methods such as t-distributed stochastic neighbor embedding (t-SNE) [1], how such methods could be implemented in simple biological circuits remains unclear. Here, we develop a biologically plausible dimensionality reduction algorithm compatible with t-SNE. The proposed algorithm, named Hebbian t-SNE, is implemented in a simple three-layer feedforward neural network whose large middle layer has sparse activity, inspired by the Drosophila olfactory circuit. The synaptic weights from the middle to the output layer are updated according to a three-factor Hebbian learning rule so that similar (high-dimensional) inputs evoke similar (low-dimensional) outputs and vice versa. Hebbian t-SNE effectively obtains low-dimensional representations of input clusters, comparable to those of t-SNE, on datasets such as entangled rings and MNIST. We then explore the possibility that Hebbian t-SNE is implemented in Drosophila olfactory circuits by analyzing experimental data from previous studies [2, 3]. First, odor functional groups defined by chemical structure [2] are better captured in the representation learned by Hebbian t-SNE than in that learned by principal component analysis (PCA). Second, when Hebbian t-SNE is applied to the activities of projection neurons, the resulting representation aligns more closely with the behaviorally measured valence index [3] than the PCA representation does. We further show that Hebbian t-SNE is useful for learning associations between inputs and rewards: it separates rewarded and non-rewarded patterns in a low-dimensional space while preserving the original input structure, thereby enabling good generalization to inputs that have not yet been associated with rewards. In summary, we argue that nonlinear dimensionality reduction learning, such as t-SNE, can take place in a biological circuit.
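The abstract does not spell out the update rule, so the following is only a minimal toy sketch of how a three-factor Hebbian rule could match output similarities to input similarities in a t-SNE-like way; all sizes, kernels, and names (W_in, W_out, k_active, etc.) are our illustrative assumptions, not the authors' implementation. Inputs pass through a fixed random expansion with top-k sparsification (loosely analogous to Kenyon cells), and the plastic read-out weights are updated by the product of a presynaptic factor (sparse middle-layer activity), a postsynaptic factor (output difference), and a global third factor (the mismatch between an input affinity, here an unnormalized Gaussian kernel, and an output affinity, here the Student-t kernel used in t-SNE's low-dimensional space):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy sizes, not taken from the paper
n_in, n_mid, n_out = 10, 500, 2
k_active = 25  # sparse middle layer: only the top-k units stay active

W_in = rng.standard_normal((n_mid, n_in)) / np.sqrt(n_in)  # fixed random expansion
W_out = rng.standard_normal((n_out, n_mid)) * 0.01         # plastic read-out weights


def middle(x):
    """Sparse middle-layer code: keep only the k most activated units."""
    a = W_in @ x
    h = np.zeros(n_mid)
    idx = np.argsort(a)[-k_active:]
    h[idx] = a[idx]
    return h


def p_sim(xa, xb, sigma=1.0):
    """Input affinity: unnormalized Gaussian kernel (t-SNE's high-dim similarity)."""
    return np.exp(-np.sum((xa - xb) ** 2) / (2 * sigma**2))


def q_sim(ya, yb):
    """Output affinity: Student-t kernel (t-SNE's low-dim similarity)."""
    return 1.0 / (1.0 + np.sum((ya - yb) ** 2))


X = rng.standard_normal((200, n_in))  # placeholder data
eta = 0.05
for _ in range(2000):
    a, b = rng.choice(len(X), size=2, replace=False)
    ha, hb = middle(X[a]), middle(X[b])
    ya, yb = W_out @ ha, W_out @ hb
    # Global third factor: similarity mismatch between input and output spaces
    m = p_sim(X[a], X[b]) - q_sim(ya, yb)
    # Three-factor Hebbian update (pairwise t-SNE gradient through y = W h):
    # pulls outputs of similar inputs together, pushes dissimilar ones apart.
    W_out -= eta * m * q_sim(ya, yb) * np.outer(ya - yb, ha - hb)
```

Because the modulator m is a single scalar broadcast to all synapses, each weight change otherwise depends only on its pre- and postsynaptic activities, which is the defining locality property of three-factor Hebbian rules.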