no code implementations • 9 Feb 2024 • Ben Anson, Edward Milsom, Laurence Aitchison
A common theoretical approach to understanding neural networks is to take an infinite-width limit, at which point the outputs become Gaussian process (GP) distributed.
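As a rough numerical illustration of this limit (a minimal sketch of my own, not code from the paper), the output of a randomly initialised one-hidden-layer ReLU network at a fixed input looks increasingly Gaussian as the width grows; excess kurtosis near zero is one symptom of Gaussianity:

```python
import numpy as np

def random_relu_net_output(x, width, rng):
    """One forward pass through a randomly initialised one-hidden-layer
    network, with 1/sqrt(width) scaling on the readout weights."""
    W1 = rng.standard_normal((width, x.shape[0]))  # input -> hidden weights
    w2 = rng.standard_normal(width)                # hidden -> scalar readout
    h = np.maximum(0.0, W1 @ x)                    # ReLU activations
    return w2 @ h / np.sqrt(width)                 # variance-preserving scaling

rng = np.random.default_rng(0)
x = rng.standard_normal(5)                         # one fixed input
for width in (10, 100, 10_000):
    samples = np.array([random_relu_net_output(x, width, rng)
                        for _ in range(2000)])
    # Excess kurtosis tends to 0 for a Gaussian as the width grows.
    k = np.mean((samples - samples.mean())**4) / samples.var()**2 - 3.0
    print(f"width={width:>6}: excess kurtosis = {k:+.3f}")
```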
1 code implementation • 18 Sep 2023 • Edward Milsom, Ben Anson, Laurence Aitchison
Recent work ("A theory of representation learning gives a deep generalisation of kernel methods", Yang et al. 2023) modified the Neural Network Gaussian Process (NNGP) limit of Bayesian neural networks so that representation learning is retained.
no code implementations • 23 May 2023 • Sebastian Ober, Ben Anson, Edward Milsom, Laurence Aitchison
When the distribution over Gram matrices is chosen to be Wishart, the model is called a deep Wishart process (DWP).
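As a schematic sketch of a single DWP layer (my own illustration, not the authors' code; the `dwp_layer` name and the arc-cosine kernel as the deterministic transformation are choices made for this example):

```python
import numpy as np
from scipy.stats import wishart

def dwp_layer(G_prev, nu, rng):
    """Sample G ~ Wishart(df=nu, scale=G_prev/nu), so E[G] = G_prev,
    then apply a deterministic kernel transformation."""
    G = wishart.rvs(df=nu, scale=G_prev / nu, random_state=rng)
    # Deterministic step: degree-1 arc-cosine (ReLU) kernel, one common choice.
    d = np.sqrt(np.diag(G))
    C = np.clip(G / np.outer(d, d), -1.0, 1.0)     # correlations
    theta = np.arccos(C)
    return np.outer(d, d) * (np.sin(theta) + (np.pi - theta) * np.cos(theta)) / np.pi

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 5))                    # 4 data points, 5 features
G = X @ X.T / X.shape[1]                           # input Gram matrix
for _ in range(2):                                 # two DWP layers
    G = dwp_layer(G, nu=10.0, rng=rng)
print(G.shape)  # (4, 4): Gram matrices, not features, flow through the model
```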
no code implementations • 30 Aug 2021 • Adam X. Yang, Maxime Robeyns, Edward Milsom, Ben Anson, Nandi Schoots, Laurence Aitchison
In particular, we show that deep Gaussian processes (DGPs) in the Bayesian representation learning limit have exactly multivariate Gaussian posteriors. The posterior covariances can be obtained by optimizing an interpretable objective that combines a log-likelihood term, which improves performance, with a series of KL divergences that keep the posteriors close to the prior.
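A schematic of that kind of objective (a sketch under my own naming, not the paper's code): a log-likelihood term minus weighted KL divergences between each layer's zero-mean Gaussian posterior and its prior.

```python
import numpy as np

def kl_gauss(Sq, Sp):
    """KL( N(0, Sq) || N(0, Sp) ) between zero-mean multivariate Gaussians."""
    n = Sq.shape[0]
    A = np.linalg.solve(Sp, Sq)                    # Sp^{-1} Sq
    _, logdet = np.linalg.slogdet(A)
    return 0.5 * (np.trace(A) - n - logdet)

def objective(log_lik, post_covs, prior_covs, weights):
    """Log-likelihood minus weighted KLs keeping each posterior near its prior."""
    kls = [w * kl_gauss(Sq, Sp)
           for w, Sq, Sp in zip(weights, post_covs, prior_covs)]
    return log_lik - sum(kls)

# With posterior == prior at every layer, all KL terms vanish:
I = np.eye(3)
print(objective(log_lik=-1.2, post_covs=[I, I],
                prior_covs=[I, I], weights=[1.0, 1.0]))  # -1.2
```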