Sample Efficiency
Stefan Mihalas
Biological systems learn differently from current machine learning systems, generally with higher sample efficiency but also stronger inductive biases. The speaker will explore the effects that bio-realistic neurons, plasticity rules, and architectures have on learning in artificial neural networks. This will be done by constructing artificial neural networks with bio-inspired constraints.
A function approximation perspective on neural representations
Activity patterns of neural populations in natural and artificial neural networks constitute representations of data. The nature of these representations and how they are learned are key questions in both neuroscience and deep learning. In this talk, I will describe my group's efforts to build a theory of representations as feature maps that lead to sample-efficient function approximation. Kernel methods are at the heart of these developments. I will present applications to deep learning and to neuronal data.
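To make the kernel-methods connection concrete, here is a minimal, illustrative sketch (not the speaker's actual method) of sample-efficient function approximation via kernel ridge regression: a fixed feature map, expressed implicitly through an RBF kernel, lets a smooth target function be fit well from relatively few samples. All names and parameter choices below are assumptions for illustration.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise squared distances, then the Gaussian (RBF) kernel matrix.
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(50, 1))   # 50 training inputs (few samples)
y = np.sin(X).ravel()                      # smooth target function

# Kernel ridge regression: solve (K + lambda*I) alpha = y.
K = rbf_kernel(X, X)
alpha = np.linalg.solve(K + 1e-3 * np.eye(len(X)), y)

# Predict on a dense grid and measure worst-case error against the target.
X_test = np.linspace(-3.0, 3.0, 100)[:, None]
y_pred = rbf_kernel(X_test, X) @ alpha
err = np.max(np.abs(y_pred - np.sin(X_test).ravel()))
```

With only 50 samples, the worst-case error over the interval is small, illustrating how a well-chosen feature map (kernel) yields sample-efficient approximation of functions in its associated function space.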