Representational Capacity
Heterogeneity and non-random connectivity in reservoir computing
Reservoir computing is a promising framework for studying cortical computation, as it is based on continuous, online processing, and its requirements and operating principles are compatible with cortical circuit dynamics. However, the framework has issues that limit its scope as a generic model for cortical processing. The most obvious of these is that, in traditional models, learning is restricted to the output projections and takes place in a fully supervised manner. If such an output layer is interpreted at face value as downstream computation, this is biologically questionable. If it is interpreted merely as a demonstration that the network can accurately represent the information, this immediately raises the question of which biologically plausible mechanisms could transmit the information represented by a reservoir and incorporate it into downstream computations. Another major issue is that we have as yet only modest insight into how the structural and dynamical features of a network influence its computational capacity. Such insight is necessary not only for understanding those features in biological brains, but also for exploiting reservoir computing in neuromorphic applications. In this talk, I will first demonstrate a method for quantifying the representational capacity of reservoirs without training them on tasks. Based on this technique, which allows systematic comparison of systems, I then present our recent work towards understanding the roles of heterogeneity and connectivity patterns in enhancing both the computational properties of a network and its ability to reliably transmit information to downstream networks. Finally, I will give a brief taster of our current efforts to apply the reservoir computing framework to magnetic systems as an approach to neuromorphic computing.
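To make the idea of quantifying representational capacity without task training concrete, the following is a minimal sketch (an assumed illustration, not the speaker's actual method) of one standard capacity measure: the linear short-term memory capacity of a random echo state reservoir, obtained by fitting linear readouts to delayed copies of the input and summing the squared reconstruction correlations. All parameter values (reservoir size, spectral radius, delays) are arbitrary choices for illustration.

```python
# Hypothetical sketch: linear memory capacity of a random reservoir,
# estimated without training the network on any task.
import numpy as np

rng = np.random.default_rng(0)
N, T, max_delay = 100, 5000, 40  # illustrative values, not from the talk

# Random recurrent weights, rescaled to spectral radius 0.9 (echo state regime).
W = rng.normal(size=(N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))
w_in = rng.uniform(-1, 1, size=N)

# Drive the reservoir with i.i.d. input and collect its states.
u = rng.uniform(-1, 1, size=T)
x = np.zeros(N)
states = np.empty((T, N))
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])
    states[t] = x

# For each delay d, fit a linear readout to reconstruct u[t-d], then score it
# on held-out data; the squared correlation measures how well the delayed
# input is linearly represented in the reservoir state.
washout, split = 100, 3000
capacity = 0.0
for d in range(1, max_delay + 1):
    X_tr, y_tr = states[washout:split], u[washout - d:split - d]
    X_te, y_te = states[split:], u[split - d:T - d]
    w = np.linalg.lstsq(X_tr, y_tr, rcond=None)[0]
    r = np.corrcoef(X_te @ w, y_te)[0, 1]
    capacity += r ** 2

print(f"memory capacity = {capacity:.1f}")  # bounded above by N
```

Because the readouts are fitted in closed form and never fed back into the network, the resulting capacity is a property of the reservoir itself, which is what makes systematic comparisons across heterogeneity and connectivity variants possible.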
Receptor Costs Determine Retinal Design
Our group is interested in discovering design principles that govern the structure and function of neurons and neural circuits. We record from well-defined neurons, mainly in flies’ visual systems, to measure the molecular and cellular factors that determine relevant measures of performance, such as representational capacity, dynamic range and accuracy. We combine this empirical approach with modelling to see how the basic elements of neural systems (ion channels, second messenger systems, membranes, synapses, neurons, circuits and codes) combine to determine performance. We are investigating four general problems. How are circuits designed to integrate information efficiently? How do sensory adaptation and synaptic plasticity contribute to efficiency? How do the sizes of neurons and networks relate to energy consumption and representational capacity? To what extent have energy costs shaped neurons, sense organs and brain regions during evolution?