ePoster

Hierarchical structure of combinatorial code optimizes representation in spiking neural networks.

Trevor McPherson, Brad Thielman, Tim Gentner
COSYNE 2025
Montreal, Canada

Abstract

Relational invariance, a network's ability to capture the geometric relationships between input patterns abstracted away from explicit external variables, has the potential to enable robustness and generalizability in neural networks. Previous work from our group has shown that a combinatorial neuronal code preserves the relational geometry of acoustic stimuli in the songbird brain. Here we show that combinatorial codes in autoencoding spiking neural networks (SNNs) can maintain relational invariance and display a hierarchical structure that optimizes the network's internal representation. By fitting receptive fields (RFs) to SNN neurons and integrating them into a probabilistic population model, we reconstruct the most likely inputs from a given combinatorial "codeword", the pattern of spiking co-activation across the population at a point in time. We then show that the combinatorial distances between codewords, defined by a topological analysis of simplicial complexes generated from sets of codewords, correlate positively with the distances between their respective reconstructed inputs, thereby capturing the relational geometry of the input patterns. We examine the hierarchical structure of combinatorial codes in SNNs by embedding codeword distance matrices with Bayesian hyperbolic multidimensional scaling (BHMDS). BHMDS learns the optimal embedding curvature, where greater curvature indicates more hierarchical structure within the code. We trained and tested SNNs on inputs correlated to varying degrees, allowing us to test how learned relationships between inputs influence the structure of the combinatorial code. Training organizes the codes hierarchically for the degree of input correlation seen during training, while testing on more (or less) correlated inputs produces less hierarchical structure. Hierarchical codes contain fewer spikes, demonstrating how training optimizes SNNs to efficiently represent inputs with an expected amount of correlation. We posit that the hierarchical organization of combinatorial codes is a signature of optimal relational representation in the brain, linking emergent coding schemes in biological and artificial SNNs.
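
The analysis pipeline sketched in the abstract (binary codewords from population spiking, pairwise codeword distances compared against input distances, and a measure of how hierarchical the resulting code geometry is) can be illustrated with a minimal, self-contained example. This is a sketch under stated assumptions, not the authors' implementation: Jaccard distance stands in for the paper's simplicial-complex codeword distance, Gromov delta-hyperbolicity stands in for the curvature learned by BHMDS, and the spike raster and input patterns are synthetic.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

def codewords_from_raster(raster):
    """One binary codeword per time bin: the set of co-active neurons."""
    return (raster > 0).T  # (n_bins, n_neurons), boolean

def delta_hyperbolicity(D, base=0):
    """Gromov delta-hyperbolicity of a finite metric space.

    A small delta (relative to the diameter of D) means the geometry is
    close to a tree, i.e. more hierarchical -- a crude stand-in for the
    learned curvature of a BHMDS embedding.
    """
    row = D[base]
    G = 0.5 * (row[:, None] + row[None, :] - D)  # Gromov products (x.y)_base
    delta = 0.0
    for z in range(D.shape[0]):
        # four-point condition: (x.y) >= min((x.z), (z.y)) - delta
        viol = np.minimum(G[:, z, None], G[None, z, :]) - G
        delta = max(delta, float(viol.max()))
    return delta

# --- toy demo: random spike raster and random input patterns, one per bin ---
n_neurons, n_bins = 60, 40
raster = rng.poisson(0.2, size=(n_neurons, n_bins))  # synthetic spike counts
inputs = rng.normal(size=(n_bins, 8))                # synthetic input patterns

cw = codewords_from_raster(raster)
d_code = pdist(cw, metric="jaccard")         # stand-in combinatorial distance
d_input = pdist(inputs, metric="euclidean")  # distance between input patterns

rho, p = spearmanr(d_code, d_input)
print(f"codeword/input distance correlation: rho = {rho:.3f} (p = {p:.2g})")

D = squareform(d_code)
print(f"delta-hyperbolicity of the code: {delta_hyperbolicity(D):.3f} "
      f"(diameter {D.max():.3f})")
```

In this toy setting the correlation is near zero because the raster and inputs are independent; in a trained autoencoding SNN, a positive rho would indicate that codeword geometry tracks input geometry, and a small delta relative to the code's diameter would indicate a more tree-like, hierarchical code.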

Unique ID: cosyne-25/hierarchical-structure-combinatorial-320c76d4