Optimizers
Deep kernel methods
Deep neural networks (DNNs), with the flexibility to learn good top-layer representations, have eclipsed shallow kernel methods that lack that flexibility. Here, we take inspiration from deep neural networks to develop a new family of deep kernel methods. In a deep kernel method, there is a kernel at every layer, and the kernels are jointly optimized to improve performance (with strong regularisation). We establish the representational power of deep kernel methods by showing that they perform exact inference in an infinitely wide Bayesian neural network or deep Gaussian process. Next, we conjecture that the deep kernel machine objective is unimodal, and we give a proof of unimodality for linear kernels. Finally, we exploit the simplicity of the deep kernel machine loss to develop a new family of optimizers, based on a matrix equation from control theory, that converges in around 10 steps.
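
As a concrete illustration (a toy sketch, not the method or code from the talk), the snippet below sets up one plausible deep-kernel-machine-style objective in PyTorch: a learnable Gram matrix per layer, a GP regression likelihood on the top-layer kernel, and a KL-style regulariser pulling each layer's Gram matrix toward the kernel of the layer below. The choice of an arccosine (ReLU) kernel, the regularisation weight nu, and the use of Adam (in place of the control-theory-based optimizer mentioned above) are all illustrative assumptions.

    # Toy sketch of a "deep kernel machine"-style objective (assumptions noted above).
    import math
    import torch

    def relu_kernel(G):
        """Arccosine (ReLU) kernel applied to a Gram matrix G (assumed kernel choice)."""
        d = torch.sqrt(torch.diag(G))
        c = torch.clamp(G / (d[:, None] * d[None, :]), -1.0 + 1e-6, 1.0 - 1e-6)
        theta = torch.acos(c)
        J = torch.sin(theta) + (math.pi - theta) * c
        return (d[:, None] * d[None, :]) * J / (2 * math.pi)

    def gauss_kl(G, K, jitter=1e-5):
        """KL( N(0, G) || N(0, K) ), used as a per-layer regulariser."""
        n = G.shape[0]
        I = jitter * torch.eye(n)
        Kinv_G = torch.linalg.solve(K + I, G + I)
        return 0.5 * (torch.trace(Kinv_G) - n - torch.logdet(G + I) + torch.logdet(K + I))

    def dkm_objective(Ls, X, y, nu=1.0, noise=0.1):
        """Negative objective: GP log marginal likelihood at the top layer
        minus KL regularisation at every layer (toy version)."""
        n = X.shape[0]
        G_prev = X @ X.T / X.shape[1]            # input-layer Gram matrix
        Gs = [L @ L.T for L in Ls]               # learnable PSD Gram matrices, one per layer
        reg = 0.0
        for G in Gs:
            K = relu_kernel(G_prev)
            reg = reg + gauss_kl(G, K)           # pull layer's Gram toward kernel of layer below
            G_prev = G
        K_top = relu_kernel(G_prev) + noise * torch.eye(n)
        dist = torch.distributions.MultivariateNormal(torch.zeros(n), covariance_matrix=K_top)
        return -(dist.log_prob(y) - nu * reg)    # minimise the negative objective

    # Joint optimisation of all layers' Gram matrices; Adam is a stand-in optimizer.
    torch.manual_seed(0)
    n, d, layers = 32, 5, 3
    X = torch.randn(n, d)
    y = torch.sin(X[:, 0]) + 0.1 * torch.randn(n)
    Ls = [torch.nn.Parameter(torch.eye(n) + 0.01 * torch.randn(n, n)) for _ in range(layers)]
    opt = torch.optim.Adam(Ls, lr=1e-2)
    for step in range(200):
        opt.zero_grad()
        loss = dkm_objective(Ls, X, y)
        loss.backward()
        opt.step()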
Fundamentals of PyTorch: Building a Model Step-by-Step
In this workshop you'll learn the fundamentals of PyTorch using an incremental, from-first-principles approach. We'll start with tensors, autograd, and the dynamic computation graph, and then move on to developing and training a simple model using PyTorch's model classes, datasets, data loaders, optimizers, and more. You should be comfortable using Python, Jupyter notebooks, Google Colab, NumPy, and, preferably, object-oriented programming.
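
To give a flavour of where the workshop ends up, here is a minimal sketch of the full flow: a Dataset/DataLoader pair, an nn.Module model class, an optimizer, and a training loop driven by autograd. The class names and the synthetic data are illustrative, not the workshop's actual materials.

    # Minimal end-to-end PyTorch example: dataset, data loader, model, optimizer, training loop.
    import torch
    from torch import nn
    from torch.utils.data import Dataset, DataLoader

    class ToyDataset(Dataset):
        """Synthetic regression data: y = 2x + 1 + noise (illustrative)."""
        def __init__(self, n=256):
            self.x = torch.rand(n, 1)
            self.y = 2.0 * self.x + 1.0 + 0.05 * torch.randn(n, 1)
        def __len__(self):
            return len(self.x)
        def __getitem__(self, idx):
            return self.x[idx], self.y[idx]

    class SimpleRegression(nn.Module):
        """A one-layer linear model built as a PyTorch model class."""
        def __init__(self):
            super().__init__()
            self.linear = nn.Linear(1, 1)
        def forward(self, x):
            return self.linear(x)

    loader = DataLoader(ToyDataset(), batch_size=32, shuffle=True)
    model = SimpleRegression()
    loss_fn = nn.MSELoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

    for epoch in range(20):
        for xb, yb in loader:
            optimizer.zero_grad()          # clear gradients from the previous step
            loss = loss_fn(model(xb), yb)  # forward pass builds the dynamic computation graph
            loss.backward()                # autograd backpropagates through that graph
            optimizer.step()               # update the model parameters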