Data Spaces
Data spaces: category (sheaf) theory and phenomenology
In this talk, I’ll introduce the formal concept of a (pre)sheaf as data attached to a topological space. Sheaves capture the notion of patching local sources of information into a global whole, e.g., the binding of visual features such as colour and shape. The formal theory appears to be closely related to the foundational properties asserted by Integrated Information Theory (IIT) for phenomenology. The comparison is intended to engender discussion on ways that phenomenology may benefit from a sheaf-theoretic, or more generally category-theoretic, approach.
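The patching idea can be made concrete in a toy model. Below is a minimal sketch (not from the talk; all names and the set-based model of "open regions" are illustrative assumptions): local sections assign values to points of overlapping regions, and they glue into a global section exactly when they agree on every overlap.

```python
# Toy sketch of the sheaf gluing condition.
# Assumptions: "open sets" are frozensets of points, and a local section
# is a dict mapping each point of its domain to a value. The function
# names (agree_on_overlaps, glue) are illustrative, not standard API.

def agree_on_overlaps(sections):
    """Check that local sections agree wherever their domains overlap."""
    items = list(sections.items())
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            (u, s), (v, t) = items[i], items[j]
            for x in u & v:          # points in the overlap
                if s[x] != t[x]:
                    return False
    return True

def glue(sections):
    """Patch compatible local sections into one global section."""
    if not agree_on_overlaps(sections):
        raise ValueError("sections are incompatible on overlaps")
    glued = {}
    for _, s in sections.items():
        glued.update(s)              # consistent by the check above
    return glued

# Local 'colour' data on two overlapping regions of a visual field:
U = frozenset({1, 2, 3})
V = frozenset({3, 4, 5})
local = {U: {1: "red", 2: "red", 3: "blue"},
         V: {3: "blue", 4: "green", 5: "green"}}
global_section = glue(local)  # one consistent assignment on U ∪ V
```

In sheaf-theoretic terms, `agree_on_overlaps` is the compatibility condition on restrictions, and `glue` produces the unique global section; binding colour and shape into one percept is the intended analogy.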
Fast and deep neuromorphic learning with time-to-first-spike coding
Engineered pattern-recognition systems strive for short time-to-solution and low energy-to-solution; this is one of the main driving forces behind the development of neuromorphic devices. For both these devices and their biological archetypes, it corresponds to using as few spikes as early as possible. Few and early spikes form the founding principle of the time-to-first-spike coding scheme. Within this framework, we have developed a spike-timing-based learning algorithm, which we used to train neuronal networks on the mixed-signal neuromorphic platform BrainScaleS-2. We derive, from first principles, error-backpropagation-based learning in networks of leaky integrate-and-fire (LIF) neurons that relies only on spike times, for specific configurations of neuronal and synaptic time constants. We explicitly examine applicability to neuromorphic substrates by studying the effects of reduced weight precision and range, as well as of parameter noise. We demonstrate the feasibility of our approach on continuous and discrete data spaces, both in software simulations and on BrainScaleS-2. This narrows the gap between previous models of first-spike-time learning and biological neuronal dynamics, and paves the way for fast and energy-efficient neuromorphic applications.
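The core of time-to-first-spike coding can be illustrated with a single LIF neuron driven by a constant current. This is a minimal sketch under assumed textbook dynamics; the parameter values and function name are illustrative and are not those of the talk's algorithm or of BrainScaleS-2. For the membrane equation τ·dV/dt = −V + I with V(0) = 0, the voltage is V(t) = I·(1 − exp(−t/τ)), so the threshold θ is first crossed at t = τ·ln(I / (I − θ)).

```python
import math

def time_to_first_spike(current, tau=10.0, theta=1.0):
    """First spike time of a LIF neuron under constant input current.

    Closed form from V(t) = I * (1 - exp(-t/tau)) crossing theta:
        t = tau * ln(I / (I - theta))
    Returns None for subthreshold input (no spike is ever emitted).
    Parameters tau/theta are illustrative defaults, not hardware values.
    """
    if current <= theta:
        return None
    return tau * math.log(current / (current - theta))

# Stronger inputs spike earlier -- the essence of TTFS coding:
# the stimulus intensity is encoded in how soon the first spike arrives.
for I in (1.1, 2.0, 5.0):
    print(f"I={I:.1f} -> t_spike={time_to_first_spike(I):.2f}")
```

Because information is carried by the timing of a single early spike rather than by a rate averaged over many spikes, a classifier can commit to a decision as soon as the first output spike arrives, which is what yields the short time-to-solution and low energy-to-solution the abstract targets.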