Computational Capacity
Heterogeneity and non-random connectivity in reservoir computing
Reservoir computing is a promising framework for studying cortical computation, as it is based on continuous, online processing, and its requirements and operating principles are compatible with cortical circuit dynamics. However, the framework has issues that limit its scope as a generic model for cortical processing. The most obvious of these is that, in traditional models, learning is restricted to the output projections and takes place in a fully supervised manner. If such an output layer is interpreted at face value as downstream computation, this is biologically questionable. If it is interpreted merely as a demonstration that the network can accurately represent the information, this immediately raises the question of what biologically plausible mechanisms would transmit the information represented by a reservoir and incorporate it into downstream computations. Another major issue is that we still have only modest insight into how the structural and dynamical features of a network influence its computational capacity; such insight is necessary not only for understanding those features in biological brains, but also for exploiting reservoir computing in neuromorphic applications. In this talk, I will first demonstrate a method for quantifying the representational capacity of reservoirs without training them on tasks. Based on this technique, which allows systematic comparison of systems, I will then present our recent work towards understanding the roles of heterogeneity and connectivity patterns in enhancing both the computational properties of a network and its ability to reliably transmit information to downstream networks. Finally, I will give a brief taster of our current efforts to apply the reservoir computing framework to magnetic systems as an approach to neuromorphic computing.
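To make the traditional setup concrete, here is a minimal sketch in Python of the paradigm the abstract refers to: a random, fixed recurrent network whose only trained component is a linear readout. It is scored with the classic linear memory capacity (how well delayed inputs can be linearly reconstructed from the current state), which is one simple instance of a task-free capacity measure. All parameter values and the choice of measure are illustrative assumptions, not the speaker's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- A minimal echo state network (illustrative parameters) ---
N, T, washout = 200, 5000, 200          # reservoir size, timesteps, discarded transient
u = rng.uniform(-1, 1, T)               # scalar random input stream

W_in = rng.uniform(-0.5, 0.5, N)        # fixed, untrained input weights
W = rng.normal(0, 1, (N, N))            # fixed, untrained recurrent weights,
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # rescaled to spectral radius 0.9

x = np.zeros(N)
states = np.empty((T, N))
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])    # standard tanh state update
    states[t] = x

X = states[washout:]
u_eff = u[washout:]

# --- Linear memory capacity: how well can a linear readout recover u(t - k)? ---
# Capacity at delay k is the squared correlation between the delayed input and
# its optimal (ridge-regularised) linear reconstruction from the reservoir
# state; summing over delays gives the total memory capacity.
def capacity_at_delay(X, u_eff, k, ridge=1e-6):
    Xd, yd = X[k:], u_eff[:len(u_eff) - k]   # align states with delayed inputs
    w = np.linalg.solve(Xd.T @ Xd + ridge * np.eye(X.shape[1]), Xd.T @ yd)
    return np.corrcoef(Xd @ w, yd)[0, 1] ** 2

MC = sum(capacity_at_delay(X, u_eff, k) for k in range(1, 100))
print(f"memory capacity ≈ {MC:.1f} (bounded above by N = {N})")
```

Note that only the readout weights `w` are fitted, and fitting them for a family of probe signals (here, delayed inputs) characterises what the reservoir represents without committing to any particular downstream task.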
The butterfly strikes back: neurons doing 'network' computation
We live in the age of the network: Internet, social, neural, ecosystems. This has become one of the main metaphors for how we think about complex systems. This view also dominates the account of brain function. The role of neurons, described by Cajal as the "butterflies of the soul", has become diminished to leaky integrate-and-fire point objects in many models of neural network computation. It is perhaps not surprising that network explanations of neural phenomena use neurons as elementary particles and ascribe all their wonderful capabilities to their interactions in a network. In the network view, the connectome defines the brain and the butterflies have no role. In this talk I'd like to reclaim some key computations from the network and return them to their rightful place at the cellular and subcellular level. I'll start with a provocative look at the potential computational capacity of different kinds of brain computation: network vs. subcellular. I'll then consider different levels of pattern and sequence computation, with a glimpse of the efficiency of the subcellular solutions. Finally, I propose that there is a suggestive mapping of entire nodes of deep networks onto individual neurons. This, in my view, is how we can walk around with 1.3 litres and 20 watts of installed computational capacity and still do far more than giant AI server farms.