Search Results for author: Sebastian W. Ober

Found 13 papers, 6 papers with code

Recommendations for Baselines and Benchmarking Approximate Gaussian Processes

no code implementations · 15 Feb 2024 · Sebastian W. Ober, Artem Artemev, Marcel Wagenländer, Rudolfs Grobins, Mark van der Wilk

To address this, we make recommendations for comparing GP approximations based on a specification of what a user should expect from a method.

Benchmarking · Gaussian Processes

Towards Improved Variational Inference for Deep Bayesian Models

no code implementations · 23 Jan 2024 · Sebastian W. Ober

We therefore explore three aspects of Bayesian learning for deep models: 1) we ask whether it is necessary to perform inference over as many parameters as possible, or whether it is reasonable to treat many of them as optimizable hyperparameters; 2) we propose a variational posterior that provides a unified view of inference in Bayesian neural networks and deep Gaussian processes; 3) we demonstrate how VI can be improved in certain deep Gaussian process models by analytically removing symmetries from the posterior, and performing inference on Gram matrices instead of features.

Gaussian Processes · Model Selection +1

Inducing Point Allocation for Sparse Gaussian Processes in High-Throughput Bayesian Optimisation

no code implementations · 24 Jan 2023 · Henry B. Moss, Sebastian W. Ober, Victor Picheny

Sparse Gaussian Processes are a key component of high-throughput Bayesian Optimisation (BO) loops; however, we show that existing methods for allocating their inducing points severely hamper optimisation performance.

Bayesian Optimisation · Decision Making +2

Information-theoretic Inducing Point Placement for High-throughput Bayesian Optimisation

no code implementations · 6 Jun 2022 · Henry B. Moss, Sebastian W. Ober, Victor Picheny

By choosing inducing points to maximally reduce both global uncertainty and uncertainty in the maximum value of the objective function, we build surrogate models able to support high-precision high-throughput BO.
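As a toy illustration of uncertainty-driven inducing point placement, the sketch below greedily selects the candidate with the largest remaining conditional prior variance. This is a plain variance-reduction heuristic, not the paper's information-theoretic criterion; the RBF kernel, unit lengthscale, and selection rule are all illustrative assumptions.

```python
import numpy as np

def rbf(X, Z, ls=1.0):
    """Squared-exponential kernel matrix between rows of X and Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def greedy_inducing_points(X, m, ls=1.0, jitter=1e-8):
    """Greedily pick m inducing points from candidate set X, always taking
    the point with the largest conditional prior variance given the points
    chosen so far (a simple global-uncertainty-reduction heuristic)."""
    n = X.shape[0]
    chosen = []
    var = np.ones(n)  # prior variance k(x, x) = 1 for the RBF kernel
    for _ in range(m):
        i = int(np.argmax(var))
        chosen.append(i)
        Z = X[chosen]
        Kzz = rbf(Z, Z, ls) + jitter * np.eye(len(chosen))
        Kxz = rbf(X, Z, ls)
        # Nystrom residual variance of each candidate given chosen points
        var = 1.0 - np.einsum("ij,ij->i", Kxz @ np.linalg.inv(Kzz), Kxz)
        var[chosen] = -np.inf  # never re-select a chosen point
    return np.array(chosen)
```

Because each step conditions on all previously chosen points, the selected set spreads out to cover the candidate space rather than clustering.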

Bayesian Optimisation · Gaussian Processes +1

A variational approximate posterior for the deep Wishart process

1 code implementation · NeurIPS 2021 · Sebastian W. Ober, Laurence Aitchison

We develop a doubly-stochastic inducing-point inference scheme for the DWP and show experimentally that inference in the DWP can improve performance over doing inference in a DGP with the equivalent prior.

Last Layer Marginal Likelihood for Invariance Learning

1 code implementation · 14 Jun 2021 · Pola Schwöbel, Martin Jørgensen, Sebastian W. Ober, Mark van der Wilk

Computing the marginal likelihood is hard for neural networks, but the success of tractable approaches that compute it for the last layer only raises the question of whether this convenient approach might also be used for learning invariances.

Data Augmentation · Gaussian Processes +1

The Promises and Pitfalls of Deep Kernel Learning

no code implementations · 24 Feb 2021 · Sebastian W. Ober, Carl E. Rasmussen, Mark van der Wilk

Through careful experimentation on the UCI, CIFAR-10, and the UTKFace datasets, we find that the overfitting from overparameterized maximum marginal likelihood, in which the model is "somewhat Bayesian", can in certain scenarios be worse than that from not being Bayesian at all.

Gaussian Processes

Understanding Variational Inference in Function-Space

2 code implementations · AABI Symposium 2021 · David R. Burt, Sebastian W. Ober, Adrià Garriga-Alonso, Mark van der Wilk

Then, we propose (featurized) Bayesian linear regression as a benchmark for `function-space' inference methods that directly measures approximation quality.
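A minimal sketch of why featurized Bayesian linear regression makes a useful benchmark: with a fixed feature map, the posterior over weights is Gaussian and available in closed form, so an approximate function-space method can be scored against the exact answer. The random-Fourier feature map, dimensions, and noise/prior variances below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def blr_posterior(Phi, y, noise_var=0.1, prior_var=1.0):
    """Exact Gaussian posterior over weights for y = Phi @ w + eps,
    with w ~ N(0, prior_var * I) and eps ~ N(0, noise_var)."""
    d = Phi.shape[1]
    A = Phi.T @ Phi / noise_var + np.eye(d) / prior_var  # posterior precision
    cov = np.linalg.inv(A)
    mean = cov @ Phi.T @ y / noise_var
    return mean, cov

# hypothetical fixed feature map: random Fourier features for an RBF kernel
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(40, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(40)
W = rng.standard_normal((1, 50))
b = rng.uniform(0, 2 * np.pi, 50)
Phi = np.sqrt(2 / 50) * np.cos(X @ W + b)

mean, cov = blr_posterior(Phi, y)  # the ground truth a VI method must match
```

Any approximate posterior over functions built on the same features can then be compared to `mean` and `cov` directly, for example via KL divergence, which is what makes the benchmark's notion of approximation quality exact rather than estimated.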

Bayesian Inference · Variational Inference

Deep kernel processes

no code implementations · 4 Oct 2020 · Laurence Aitchison, Adam X. Yang, Sebastian W. Ober

We show that the deep inverse Wishart process gives superior performance to DGPs and infinite BNNs on standard fully-connected baselines.

Gaussian Processes · Variational Inference

Global inducing point variational posteriors for Bayesian neural networks and deep Gaussian processes

1 code implementation · 17 May 2020 · Sebastian W. Ober, Laurence Aitchison

We consider the optimal approximate posterior over the top-layer weights in a Bayesian neural network for regression, and show that it exhibits strong dependencies on the lower-layer weights.

Data Augmentation · Gaussian Processes

Benchmarking the Neural Linear Model for Regression

no code implementations · AABI Symposium 2019 · Sebastian W. Ober, Carl Edward Rasmussen

The neural linear model is a simple adaptive Bayesian linear regression method that has recently been used in a number of problems ranging from Bayesian optimization to reinforcement learning.
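A hedged sketch of the neural linear idea: train a deterministic network as a feature extractor, then place an exact Bayesian linear regression posterior on its last-layer features. The tiny tanh network, hand-rolled training loop, and the noise/prior variances are illustrative choices, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, (100, 1))
y = np.sin(2 * X[:, 0]) + 0.1 * rng.standard_normal(100)

# 1) deterministically train a small one-hidden-layer network (illustrative sizes)
H, lr = 32, 1e-2
W1 = rng.standard_normal((1, H)) * 0.5; b1 = np.zeros(H)
w2 = rng.standard_normal(H) * 0.1; b2 = 0.0
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)            # hidden-layer features
    err = h @ w2 + b2 - y               # residuals of the point predictions
    gw2 = h.T @ err / len(y); gb2 = err.mean()
    gh = np.outer(err, w2) * (1 - h**2) # backprop through tanh
    gW1 = X.T @ gh / len(y); gb1 = gh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; w2 -= lr * gw2; b2 -= lr * gb2

# 2) exact Bayesian linear regression on the learned features
noise_var, prior_var = 0.01, 1.0
Phi = np.tanh(X @ W1 + b1)
A = Phi.T @ Phi / noise_var + np.eye(H) / prior_var  # posterior precision
cov = np.linalg.inv(A)
mean = cov @ Phi.T @ y / noise_var

# 3) predictive mean and variance at test inputs
Xs = np.linspace(-3, 3, 5)[:, None]
Ps = np.tanh(Xs @ W1 + b1)
mu = Ps @ mean
var = noise_var + np.einsum("ij,jk,ik->i", Ps, cov, Ps)
```

The appeal in settings like Bayesian optimisation is that step 2 is closed-form, so calibrated predictive uncertainty comes almost for free once the network is trained.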

Bayesian Optimization · Benchmarking +3
