Schedule
Thursday, November 25, 2021, 1:00 PM (Europe/London)
Host: Sheffield ML
Duration: 70 minutes
Recording provided by the organiser.
Deep neural networks (DNNs) with the flexibility to learn good top-layer representations have eclipsed shallow kernel methods without that flexibility. Here, we take inspiration from deep neural networks to develop a new family of deep kernel methods. In a deep kernel method, there is a kernel at every layer, and the kernels are jointly optimized to improve performance (with strong regularisation). We establish the representational power of deep kernel methods by showing that they perform exact inference in an infinitely wide Bayesian neural network or deep Gaussian process. Next, we conjecture that the deep kernel machine objective is unimodal, and give a proof of unimodality for linear kernels. Finally, we exploit the simplicity of the deep kernel machine loss to develop a new family of optimizers, based on a matrix equation from control theory, that converges in around 10 steps.
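As a rough illustration of the setup the abstract describes, here is a minimal sketch of a deep-kernel-machine-style objective: one Gram matrix per layer as the free parameters, a Gaussian-process log marginal likelihood as the fit term, and a regulariser pulling each layer's Gram matrix towards the kernel of the layer below. The choice of a linear kernel, the Gaussian KL regulariser, and the strength `nu` are illustrative assumptions, not the speaker's exact formulation, and the control-theory-based solver mentioned in the abstract is not sketched here.

```python
# Hypothetical sketch of a deep kernel machine objective, based only on the
# abstract above: a kernel (Gram matrix) at every layer, jointly optimised
# with strong regularisation. Details are illustrative assumptions.
import numpy as np

def linear_kernel(G):
    # Kernel applied to a layer's Gram matrix; for a linear kernel
    # k(x, x') = x . x', the induced Gram matrix is G itself.
    return G

def gauss_kl(G, K, jitter=1e-6):
    # KL( N(0, G) || N(0, K) ) between zero-mean Gaussians with N x N
    # covariances: one natural "stay close to the previous layer" penalty.
    N = G.shape[0]
    K = K + jitter * np.eye(N)
    Kinv_G = np.linalg.solve(K, G)          # K^{-1} G
    return 0.5 * (np.trace(Kinv_G) - N
                  + np.linalg.slogdet(K)[1] - np.linalg.slogdet(G)[1])

def dkm_objective(Gs, X, y, nu=10.0, noise=0.1):
    # Gs: list of per-layer Gram matrices G_1..G_L (the free parameters).
    # X:  (N, D) inputs; y: (N,) regression targets.
    # Fit term: GP log marginal likelihood of y under the top-layer kernel.
    # Regulariser: nu * KL between each layer and the kernel of the one below.
    N = X.shape[0]
    K_prev = X @ X.T / X.shape[1]           # input-layer Gram matrix
    reg = 0.0
    for G in Gs:
        reg += gauss_kl(G, linear_kernel(K_prev))
        K_prev = G
    K_top = Gs[-1] + noise * np.eye(N)
    alpha = np.linalg.solve(K_top, y)
    log_lik = -0.5 * (y @ alpha + np.linalg.slogdet(K_top)[1]
                      + N * np.log(2 * np.pi))
    return log_lik - nu * reg
```

In practice the Gram matrices would need to stay positive semi-definite during optimisation, e.g. by parameterising each as G = LLᵀ and optimising L with a gradient method; the fast fixed-point solver the abstract mentions would replace that generic loop.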
Speaker
Dr Laurence Aitchison
University of Bristol