Authors & Affiliations
Takuma Sumi, Hideaki Yamamoto, Yuichi Katori, Koki Ito, Hideyuki Kato, Hayato Chiba, Shigeo Sato, Ayumi Hirano-Iwata
Abstract
Reservoir computing facilitates the integration of high-dimensional spatiotemporal dynamics with information processing [1]. Here we investigated the correlation between neuronal dynamics and machine-learning task performance using a cultured neuronal network with a modular structure. The results show that the modularity of the neuronal network enhances classification performance in tasks involving multiple speech signals [3].

Figure 1 outlines the experimental setup, which employs a modular cultured network of rat cortical neurons grown on a patterned substrate. Neural activity was measured by Ca2+ imaging. Speech signals ("zero" and "one") were converted into patterned light signals and delivered to neurons expressing photo-activatable channels. The output layer was trained by ridge regression, and modularity was quantified by Newman's method [4].

We first investigated the relationship between modularity (Q) and classification accuracy in the cultured neuronal reservoir. In the high-Q interval (Q > 0.05), most samples reached 100% accuracy. Next, we examined generalization performance and found that accuracy remained high when the speaker was switched between training and testing. This study highlights the utility of the brain's modular structure in enhancing information processing and demonstrates that the neuronal network adapts to generalized processing across speakers, with implications for both neuroscience and machine learning.

The work was partly supported by MEXT Grant-in-Aid for Transformative Research Areas (B) "Multicellular Neurobiocomputing", JSPS KAKENHI, JST CREST, and the Tohoku University RIEC Cooperative Research Project Program.

[1] Maass+, Neural Comput. 14, 2002. [2] Ju+, J. Neurosci. 35, 2015. [3] Sumi+, Proc. Natl. Acad. Sci. U.S.A. 120, 2023. [4] Newman, Phys. Rev. E 70, 2004.
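The abstract states that only the output layer is trained, by ridge regression, while the cultured network itself serves as a fixed reservoir. The sketch below illustrates this readout step in general terms; the data shapes, feature extraction from the Ca2+ recordings, and regularization strength are hypothetical placeholders, not the authors' actual pipeline.

```python
import numpy as np
from sklearn.linear_model import RidgeClassifier
from sklearn.metrics import accuracy_score

# Hypothetical reservoir states: each trial is summarized as a feature
# vector (e.g., per-neuron activity during the stimulus window).
# X: (n_trials, n_features), y: (n_trials,) with 0 = "zero", 1 = "one".
rng = np.random.default_rng(0)
X_train = rng.normal(size=(80, 50))
y_train = rng.integers(0, 2, size=80)
X_test = rng.normal(size=(20, 50))
y_test = rng.integers(0, 2, size=20)

# Linear readout trained with ridge (L2-regularized) regression;
# the reservoir itself is left untrained.
readout = RidgeClassifier(alpha=1.0)
readout.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, readout.predict(X_test)))
```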
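Modularity is quantified by Newman's Q [4]. As an illustration only, the following sketch computes Q with networkx's greedy modularity optimization on a functional-connectivity graph; the correlation matrix and the edge-thresholding rule are assumptions standing in for the actual network reconstruction from the imaging data.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

# Hypothetical functional connectivity: symmetric correlation-like matrix
# between recorded neurons (diagonal removed).
rng = np.random.default_rng(1)
corr = np.abs(rng.normal(size=(50, 50)))
corr = (corr + corr.T) / 2
np.fill_diagonal(corr, 0.0)

# Keep only the strongest links to obtain a sparse adjacency matrix
# (the 90th-percentile cutoff is an arbitrary illustrative choice).
adjacency = (corr > np.percentile(corr, 90)).astype(int)

G = nx.from_numpy_array(adjacency)
communities = greedy_modularity_communities(G)  # Newman-style greedy optimization
Q = modularity(G, communities)
print(f"modularity Q = {Q:.3f}")
```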