no code implementations • 15 Feb 2024 • Sebastian W. Ober, Artem Artemev, Marcel Wagenländer, Rudolfs Grobins, Mark van der Wilk
To address this, we make recommendations for comparing GP approximations based on a specification of what a user should expect from a method.
no code implementations • 23 Jan 2024 • Sebastian W. Ober
We therefore explore three aspects of Bayesian learning for deep models: 1) we ask whether it is necessary to perform inference over as many parameters as possible, or whether it is reasonable to treat many of them as optimizable hyperparameters; 2) we propose a variational posterior that provides a unified view of inference in Bayesian neural networks and deep Gaussian processes; 3) we demonstrate how VI can be improved in certain deep Gaussian process models by analytically removing symmetries from the posterior, and performing inference on Gram matrices instead of features.
2 code implementations • 16 Feb 2023 • Victor Picheny, Joel Berkeley, Henry B. Moss, Hrvoje Stojic, Uri Granta, Sebastian W. Ober, Artem Artemev, Khurram Ghani, Alexander Goodall, Andrei Paleyes, Sattar Vakili, Sergio Pascual-Diaz, Stratis Markou, Jixiang Qing, Nasrulloh R. B. S Loka, Ivo Couckuyt
We present Trieste, an open-source Python package for Bayesian optimization and active learning benefiting from the scalability and efficiency of TensorFlow.
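As an illustration of the package's intended workflow, here is a minimal optimization loop in the style of the Trieste README; the toy objective and its settings are assumptions for this sketch, and exact builder names may differ between Trieste versions.

```python
import tensorflow as tf
from trieste.bayesian_optimizer import BayesianOptimizer
from trieste.models.gpflow import GaussianProcessRegression, build_gpr
from trieste.objectives.utils import mk_observer
from trieste.space import Box

# Toy objective to minimise (illustrative only).
def objective(x):
    return tf.reduce_sum(tf.sin(3.0 * x) + x ** 2, axis=-1, keepdims=True)

search_space = Box([-1.0, -1.0], [1.0, 1.0])
observer = mk_observer(objective)

# A few initial observations to fit the first surrogate model.
initial_data = observer(search_space.sample_sobol(5))

# GPflow GP regression surrogate with library-chosen defaults.
model = GaussianProcessRegression(build_gpr(initial_data, search_space))

# Run 15 steps of BO with the default acquisition rule.
result = BayesianOptimizer(observer, search_space).optimize(15, initial_data, model)
print(result.try_get_final_dataset().query_points)
```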
no code implementations • 24 Jan 2023 • Henry B. Moss, Sebastian W. Ober, Victor Picheny
Sparse Gaussian Processes are a key component of high-throughput Bayesian Optimisation (BO) loops; however, we show that existing methods for allocating their inducing points severely hamper optimisation performance.
no code implementations • 6 Jun 2022 • Henry B. Moss, Sebastian W. Ober, Victor Picheny
By choosing inducing points to maximally reduce both global uncertainty and uncertainty in the maximum value of the objective function, we build surrogate models able to support high-precision high-throughput BO.
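For context, the baseline these papers improve on is a sparse GP whose inducing points are allocated generically, e.g. at k-means centroids of the inputs; the GPflow sketch below shows that baseline (the synthetic data and all settings are assumptions), whereas the proposed allocation instead targets uncertainty in the objective's maximum.

```python
import numpy as np
import gpflow
from sklearn.cluster import KMeans

# Synthetic observations standing in for a BO loop's data (illustrative).
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(500, 2))
y = np.sin(3.0 * X[:, :1]) + 0.1 * rng.standard_normal((500, 1))

# Generic allocation: inducing points at k-means centroids. The papers above
# argue such space-filling choices can hamper optimisation performance.
Z = KMeans(n_clusters=20, n_init=10).fit(X).cluster_centers_

model = gpflow.models.SGPR((X, y), kernel=gpflow.kernels.Matern52(), inducing_variable=Z)
gpflow.optimizers.Scipy().minimize(model.training_loss, model.trainable_variables)
mean, var = model.predict_f(X[:5])  # posterior mean and variance at test inputs
```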
1 code implementation • NeurIPS 2021 • Sebastian W. Ober, Laurence Aitchison
We develop a doubly-stochastic inducing-point inference scheme for the DWP and show experimentally that inference in the DWP can improve performance over doing inference in a DGP with the equivalent prior.
1 code implementation • 14 Jun 2021 • Pola Schwöbel, Martin Jørgensen, Sebastian W. Ober, Mark van der Wilk
Computing the marginal likelihood is hard for neural networks, but the success of tractable approaches that compute it only for the last layer raises the question of whether this convenient approach might also be employed for learning invariances.
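The tractability of the last-layer route comes from treating the final layer as Bayesian linear regression on fixed features, for which the marginal likelihood is closed-form and differentiable with respect to hyperparameters. A minimal NumPy sketch, assuming a zero-mean isotropic prior on the last-layer weights (all names here are illustrative):

```python
import numpy as np

def last_layer_log_marginal(Phi, y, alpha=1.0, noise=0.1):
    """Log marginal likelihood of Bayesian linear regression on fixed
    last-layer features Phi (N x D), with prior w ~ N(0, alpha * I) and
    Gaussian noise of variance `noise`; y has shape (N,)."""
    N = Phi.shape[0]
    K = alpha * Phi @ Phi.T + noise * np.eye(N)   # marginal covariance of y
    L = np.linalg.cholesky(K)
    v = np.linalg.solve(L, y)
    # log N(y; 0, K) evaluated via the Cholesky factor.
    return -0.5 * (v @ v) - np.log(np.diag(L)).sum() - 0.5 * N * np.log(2 * np.pi)
```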
no code implementations • 24 Feb 2021 • Sebastian W. Ober, Carl E. Rasmussen, Mark van der Wilk
Through careful experimentation on the UCI, CIFAR-10, and UTKFace datasets, we find that the overfitting from overparameterized maximum marginal likelihood, in which the model is "somewhat Bayesian", can in certain scenarios be worse than that from not being Bayesian at all.
1 code implementation • NeurIPS Workshop ICBINB 2020 • Vincent Fortuin, Adrià Garriga-Alonso, Sebastian W. Ober, Florian Wenzel, Gunnar Rätsch, Richard E. Turner, Mark van der Wilk, Laurence Aitchison
Isotropic Gaussian priors are the de facto standard for modern Bayesian neural network inference.
2 code implementations • AABI Symposium 2021 • David R. Burt, Sebastian W. Ober, Adrià Garriga-Alonso, Mark van der Wilk
Then, we propose (featurized) Bayesian linear regression as a benchmark for "function-space" inference methods that directly measures approximation quality.
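The appeal of this benchmark is that featurized Bayesian linear regression admits an exact posterior, so approximate function-space methods can be scored against ground truth. A minimal sketch with an assumed zero-mean isotropic prior:

```python
import numpy as np

def blr_posterior_predictive(Phi, y, Phi_star, alpha=1.0, noise=0.1):
    """Exact predictive of f(x) = phi(x)^T w with prior w ~ N(0, alpha * I):
    the ground truth an approximate method can be compared against."""
    D = Phi.shape[1]
    S = np.linalg.inv(Phi.T @ Phi / noise + np.eye(D) / alpha)  # posterior cov of w
    mu = S @ Phi.T @ y / noise                                  # posterior mean of w
    f_mean = Phi_star @ mu
    f_var = np.einsum("nd,dk,nk->n", Phi_star, S, Phi_star)     # pointwise variance
    return f_mean, f_var
```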
no code implementations • 4 Oct 2020 • Laurence Aitchison, Adam X. Yang, Sebastian W. Ober
We show that the deep inverse Wishart process outperforms DGPs and infinite BNNs on standard fully-connected baselines.
1 code implementation • 17 May 2020 • Sebastian W. Ober, Laurence Aitchison
We consider the optimal approximate posterior over the top-layer weights in a Bayesian neural network for regression, and show that it exhibits strong dependencies on the lower-layer weights.
no code implementations • AABI Symposium 2019 • Sebastian W. Ober, Carl Edward Rasmussen
The neural linear model is a simple adaptive Bayesian linear regression method that has recently been used in a number of problems ranging from Bayesian optimization to reinforcement learning.
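As a sketch of the two-stage recipe (fit a network, then do Bayesian linear regression on its last-layer features), here is one possible scikit-learn instantiation; the data, architecture, and use of BayesianRidge as the adaptive regression head are assumptions of this sketch rather than the paper's exact setup.

```python
import numpy as np
from sklearn.linear_model import BayesianRidge
from sklearn.neural_network import MLPRegressor

# Toy regression data (illustrative).
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 1))
y = np.sin(3.0 * X).ravel() + 0.1 * rng.standard_normal(200)

# Stage 1: fit a network and reuse its hidden layer as a feature map.
net = MLPRegressor(hidden_layer_sizes=(50,), max_iter=2000).fit(X, y)
phi = lambda A: np.maximum(A @ net.coefs_[0] + net.intercepts_[0], 0.0)  # ReLU features

# Stage 2: Bayesian linear regression on those features; BayesianRidge adapts
# the noise and prior scales by evidence maximization.
blr = BayesianRidge().fit(phi(X), y)
mean, std = blr.predict(phi(X), return_std=True)  # predictive mean and std
```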