Search Results for author: Ben Anson

Found 4 papers, 1 paper with code

Flexible infinite-width graph convolutional networks and the importance of representation learning

no code implementations · 9 Feb 2024 · Ben Anson, Edward Milsom, Laurence Aitchison

A common theoretical approach to understanding neural networks is to take an infinite-width limit, at which point the outputs become Gaussian process (GP) distributed.
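The infinite-width claim can be checked empirically: for a one-hidden-layer network with i.i.d. Gaussian weights and NNGP-style scaling, the output at a fixed input (over random draws of the weights) approaches a Gaussian as width grows. A minimal sketch, with illustrative widths and activation choices not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_net_output(x, width, rng):
    # One-hidden-layer network with i.i.d. standard-Gaussian weights
    # and a 1/sqrt(width) scaling on the readout layer, as in the
    # standard NNGP construction.
    W1 = rng.standard_normal((width, x.shape[0]))
    b1 = rng.standard_normal(width)
    h = np.tanh(W1 @ x + b1)
    w2 = rng.standard_normal(width)
    return w2 @ h / np.sqrt(width)

x = np.array([0.5, -1.0, 2.0])
samples = np.array([random_net_output(x, 2048, rng) for _ in range(5000)])

# For a Gaussian, excess kurtosis is 0; at large width the sampled
# outputs should be close to that.
excess_kurtosis = np.mean((samples - samples.mean())**4) / samples.var()**2 - 3.0
print(abs(excess_kurtosis) < 0.5)
```

Each hidden unit contributes an i.i.d. term to the output, so the central limit theorem drives the output towards a Gaussian as the width increases; the same argument, applied layer-wise, underlies the GP limit the abstract refers to.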

Graph Classification Node Classification +1

Convolutional Deep Kernel Machines

1 code implementation · 18 Sep 2023 · Edward Milsom, Ben Anson, Laurence Aitchison

Recent work (A theory of representation learning gives a deep generalisation of kernel methods, Yang et al. 2023) modified the Neural Network Gaussian Process (NNGP) limit of Bayesian neural networks so that representation learning is retained.

Gaussian Processes Representation Learning

An Improved Variational Approximate Posterior for the Deep Wishart Process

no code implementations · 23 May 2023 · Sebastian Ober, Ben Anson, Edward Milsom, Laurence Aitchison

When the distribution over each layer's Gram matrix is chosen to be Wishart, the model is called a deep Wishart process (DWP).
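As background for the snippet above, a Wishart draw is by construction a symmetric positive (semi-)definite matrix, which is what makes it a natural prior over Gram/covariance matrices. A minimal sketch using the sum-of-outer-products definition (parameter names are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_wishart(dof, scale_chol, rng):
    # Draw K ~ Wishart(dof, S), where S = scale_chol @ scale_chol.T,
    # as the sum of `dof` outer products of N(0, S) vectors.
    d = scale_chol.shape[0]
    A = scale_chol @ rng.standard_normal((d, dof))
    return A @ A.T

K = sample_wishart(dof=10, scale_chol=np.eye(3), rng=rng)

# The draw is symmetric and (with dof >= d) almost surely positive
# definite, i.e. a valid Gram/covariance matrix.
print(np.allclose(K, K.T), bool(np.all(np.linalg.eigvalsh(K) > 0)))
```

Stacking such priors layer by layer, with each layer's scale matrix depending on the previous layer's draw, gives the deep-kernel-process structure the DWP belongs to.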

A theory of representation learning gives a deep generalisation of kernel methods

no code implementations · 30 Aug 2021 · Adam X. Yang, Maxime Robeyns, Edward Milsom, Ben Anson, Nandi Schoots, Laurence Aitchison

In particular, we show that deep Gaussian processes (DGPs) in the Bayesian representation learning limit have exactly multivariate Gaussian posteriors. The posterior covariances can be obtained by optimizing an interpretable objective that combines a log-likelihood term, which improves performance, with a series of KL divergences that keep the posteriors close to the prior.
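The general shape of such an objective (a fit term minus KL terms pulling the posterior towards the prior) can be illustrated in one dimension with closed-form Gaussians. This is a toy sketch of that structure only, not the paper's objective, whose KL terms are over DGP layer covariances:

```python
import numpy as np

def gauss_kl(mu_q, var_q, mu_p, var_p):
    # KL( N(mu_q, var_q) || N(mu_p, var_p) ) for scalar Gaussians,
    # in closed form.
    return 0.5 * (np.log(var_p / var_q)
                  + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

def objective(y, mu, var, prior_mu=0.0, prior_var=1.0, noise_var=0.1):
    # Illustrative only: a Gaussian log-likelihood that rewards fitting
    # the observation y, minus a KL that keeps the posterior N(mu, var)
    # close to the prior, as in standard variational objectives.
    loglik = -0.5 * (np.log(2 * np.pi * noise_var) + (y - mu) ** 2 / noise_var)
    return loglik - gauss_kl(mu, var, prior_mu, prior_var)

print(round(float(objective(y=1.0, mu=0.9, var=0.5)), 3))
```

Moving `mu` towards `y` increases the log-likelihood term but also increases the KL penalty once it leaves the prior mean, so the optimum trades the two off, which is the interpretability the abstract emphasises.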

Bayesian Inference Gaussian Processes +1
