Search Results for author: David R. Burt

Found 15 papers, 9 papers with code

Consistent Validation for Predictive Methods in Spatial Settings

1 code implementation 5 Feb 2024 David R. Burt, Yunyi Shen, Tamara Broderick

Unfortunately, classical approaches for validation fail to handle mismatch between locations available for validation and (test) locations where we want to make predictions.

Weather Forecasting
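
One classical remedy for a mismatch between validation and test locations is to importance-weight the validation losses toward the test distribution. The sketch below illustrates only that generic covariate-shift idea, with hypothetical density inputs; it is not the estimator proposed in this paper.

```python
# A minimal sketch of importance-weighted validation under a location shift.
# This illustrates the general covariate-shift idea only; see the paper and
# its code for the estimators it actually proposes. The densities p_val and
# p_test below are hypothetical inputs.
import numpy as np

def weighted_validation_error(losses, val_locs, p_val, p_test):
    """Reweight per-point validation losses so locations drawn from p_val
    mimic test locations drawn from p_test."""
    w = p_test(val_locs) / p_val(val_locs)   # importance weights
    w = w / w.mean()                         # self-normalise for stability
    return float(np.mean(w * losses))

# Toy usage: validation sites cluster near 0, test sites near 1.
rng = np.random.default_rng(0)
val_locs = rng.normal(0.0, 1.0, size=200)
losses = (val_locs - 1.0) ** 2               # pretend per-point squared errors
p_val = lambda s: np.exp(-0.5 * s**2) / np.sqrt(2 * np.pi)
p_test = lambda s: np.exp(-0.5 * (s - 1.0)**2) / np.sqrt(2 * np.pi)
print(weighted_validation_error(losses, val_locs, p_val, p_test))
```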

Gaussian processes at the Helm(holtz): A more fluid model for ocean currents

1 code implementation 20 Feb 2023 Renato Berlinghieri, Brian L. Trippe, David R. Burt, Ryan Giordano, Kaushik Srinivasan, Tamay Özgökmen, Junfei Xia, Tamara Broderick

Given sparse observations of buoy velocities, oceanographers are interested in reconstructing ocean currents away from the buoys and identifying divergences in a current vector field.

Gaussian Processes
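
The construction the title alludes to places independent GP priors on a scalar potential and a stream function, so that the 2-D velocity field (gradient plus rotated gradient) is itself a GP whose matrix-valued kernel is built from second derivatives of the scalar kernels. The sketch below uses RBF kernels and placeholder hyperparameters purely for illustration; see the paper and its code for the actual model.

```python
# A minimal sketch of a Helmholtz-decomposition velocity kernel in 2-D:
# independent GP priors with RBF kernels on a potential Phi and a stream
# function Psi make the velocity F = grad(Phi) + rot(Psi) a GP whose 2x2
# block covariance comes from second derivatives of the scalar kernels.
# Hyperparameter values here are placeholders, not the paper's settings.
import numpy as np

def rbf_hessian(x, xp, var, ell):
    """d^2 k / dx_i dx'_j for an RBF kernel k at a single pair (x, x')."""
    r = x - xp
    k = var * np.exp(-0.5 * np.dot(r, r) / ell**2)
    return (k / ell**2) * (np.eye(2) - np.outer(r, r) / ell**2)

def helmholtz_block(x, xp, var_phi=1.0, ell_phi=1.0, var_psi=1.0, ell_psi=1.0):
    """2x2 covariance Cov(F(x), F(x')) of the velocity field."""
    Hphi = rbf_hessian(x, xp, var_phi, ell_phi)   # gradient (curl-free) part
    Hpsi = rbf_hessian(x, xp, var_psi, ell_psi)   # rotational (div-free) part
    # u1 = dPhi/dx1 + dPsi/dx2,  u2 = dPhi/dx2 - dPsi/dx1
    rot = np.array([[ Hpsi[1, 1], -Hpsi[1, 0]],
                    [-Hpsi[0, 1],  Hpsi[0, 0]]])
    return Hphi + rot

x, xp = np.array([0.0, 0.0]), np.array([0.3, -0.2])
print(helmholtz_block(x, xp))
```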

Sparse Gaussian Process Hyperparameters: Optimize or Integrate?

no code implementations 4 Nov 2022 Vidhi Lalchand, Wessel P. Bruinsma, David R. Burt, Carl E. Rasmussen

In this work we propose an algorithm for sparse Gaussian process regression which leverages MCMC to sample from the hyperparameter posterior within the variational inducing point framework of Titsias (2009).

Model Selection
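
A rough NumPy sketch of the combination described above: the Titsias (2009) collapsed ELBO serves as the log-likelihood term inside a random-walk Metropolis sampler over log-hyperparameters. The proposal scale, prior, and parameterisation are illustrative choices rather than the paper's, and the N x N matrices are formed explicitly for clarity (sensible only for small N).

```python
# Minimal sketch: sample GP hyperparameters with random-walk Metropolis,
# using Titsias' collapsed ELBO as the (unnormalised) log-likelihood term.
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(0, 5, 80)[:, None]
y = np.sin(2 * X[:, 0]) + 0.2 * rng.standard_normal(80)
Z = np.linspace(0, 5, 10)[:, None]                 # fixed inducing inputs

def rbf(A, B, var, ell):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-0.5 * d2 / ell**2)

def collapsed_elbo(theta):
    var, ell, noise = np.exp(theta)                # log-parameterisation
    Kmm = rbf(Z, Z, var, ell) + 1e-8 * np.eye(len(Z))
    Knm = rbf(X, Z, var, ell)
    Qnn = Knm @ np.linalg.solve(Kmm, Knm.T)
    S = Qnn + noise * np.eye(len(X))
    _, logdet = np.linalg.slogdet(S)
    quad = y @ np.linalg.solve(S, y)
    trace_term = (len(X) * var - np.trace(Qnn)) / (2 * noise)
    return -0.5 * (logdet + quad + len(X) * np.log(2 * np.pi)) - trace_term

def log_target(theta):
    return collapsed_elbo(theta) - 0.5 * np.sum(theta**2)   # N(0,1) prior on logs

theta, samples = np.zeros(3), []
for _ in range(2000):
    prop = theta + 0.1 * rng.standard_normal(3)
    if np.log(rng.uniform()) < log_target(prop) - log_target(theta):
        theta = prop
    samples.append(theta.copy())
print(np.exp(np.mean(samples[1000:], axis=0)))     # posterior-mean hyperparameters
```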

Numerically Stable Sparse Gaussian Processes via Minimum Separation using Cover Trees

1 code implementation 14 Oct 2022 Alexander Terenin, David R. Burt, Artem Artemev, Seth Flaxman, Mark van der Wilk, Carl Edward Rasmussen, Hong Ge

For low-dimensional tasks such as geospatial modeling, we propose an automated method for computing inducing points satisfying these conditions.

Bayesian Optimization, Decision Making, +1
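
The stability condition in question is a minimum pairwise separation between inducing points. The naive greedy loop below enforces that condition on a small dataset; the paper's contribution is doing this efficiently (and with covering guarantees) via cover trees, which is not reproduced here.

```python
# A minimal sketch of inducing points with a guaranteed minimum pairwise
# separation: greedily keep any data point farther than `spacing` from all
# points kept so far (an epsilon-net). This naive O(N * M) loop only
# illustrates the condition itself.
import numpy as np

def min_separation_inducing_points(X, spacing):
    Z = [X[0]]
    for x in X[1:]:
        if min(np.linalg.norm(x - z) for z in Z) > spacing:
            Z.append(x)
    return np.stack(Z)

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 2))            # e.g. 2-D geospatial locations
Z = min_separation_inducing_points(X, spacing=0.15)
print(len(Z), "inducing points with pairwise distance > 0.15")
```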

A Note on the Chernoff Bound for Random Variables in the Unit Interval

no code implementations 15 May 2022 Andrew Y. K. Foong, Wessel P. Bruinsma, David R. Burt

The Chernoff bound is a well-known tool for obtaining a high probability bound on the expectation of a Bernoulli random variable in terms of its sample average.

Learning Theory
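
For context, the Bernoulli form of the bound referred to above can be stated as follows; this is standard background rather than the note's contribution.

```latex
% Chernoff (relative-entropy) bound for the sample mean of i.i.d.
% Bernoulli(p) variables X_1, ..., X_n, with \hat{p} = \frac{1}{n}\sum_i X_i and
% kl(q \| p) = q \ln\frac{q}{p} + (1 - q)\ln\frac{1 - q}{1 - p}:
\Pr\Bigl( \mathrm{kl}(\hat{p} \,\|\, p) \ge \varepsilon \ \text{ and } \ \hat{p} \le p \Bigr) \le e^{-n\varepsilon},
% so, inverting, with probability at least 1 - \delta,
p \le \sup\Bigl\{\, p' \ge \hat{p} \;:\; \mathrm{kl}(\hat{p} \,\|\, p') \le \tfrac{1}{n}\ln\tfrac{1}{\delta} \,\Bigr\}.
```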

Wide Mean-Field Bayesian Neural Networks Ignore the Data

1 code implementation 23 Feb 2022 Beau Coker, Wessel P. Bruinsma, David R. Burt, Weiwei Pan, Finale Doshi-Velez

Finally, we show that the optimal approximate posterior need not tend to the prior if the activation function is not odd, demonstrating that our statements cannot be generalized arbitrarily.

Variational Inference

Barely Biased Learning for Gaussian Process Regression

no code implementations NeurIPS Workshop ICBINB 2021 David R. Burt, Artem Artemev, Mark van der Wilk

We suggest a method that adaptively selects the amount of computation to use when estimating the log marginal likelihood so that the bias of the objective function is guaranteed to be small.

regression

How Tight Can PAC-Bayes be in the Small Data Regime?

1 code implementation NeurIPS 2021 Andrew Y. K. Foong, Wessel P. Bruinsma, David R. Burt, Richard E. Turner

Interestingly, this lower bound recovers the Chernoff test set bound if the posterior is equal to the prior.
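
For context, the standard PAC-Bayes-kl bound (Seeger/Maurer) reads as follows; this is background, not the paper's new lower bound. Setting the posterior equal to the prior removes the KL term and leaves a Chernoff-style test set bound, up to the $\ln 2\sqrt{n}$ term.

```latex
% Standard PAC-Bayes-kl bound (Seeger / Maurer) for losses in [0, 1]:
% with probability at least 1 - \delta over an i.i.d. sample S of size n,
% simultaneously for all posteriors Q over hypotheses,
\mathrm{kl}\bigl( \hat{R}_S(Q) \,\|\, R(Q) \bigr)
  \le \frac{\mathrm{KL}(Q \,\|\, P) + \ln \frac{2\sqrt{n}}{\delta}}{n},
% where \hat{R}_S(Q) and R(Q) are the Q-averaged empirical and population
% risks and P is a data-independent prior. With Q = P the KL term vanishes.
```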

Tighter Bounds on the Log Marginal Likelihood of Gaussian Process Regression Using Conjugate Gradients

no code implementations 16 Feb 2021 Artem Artemev, David R. Burt, Mark van der Wilk

We propose a lower bound on the log marginal likelihood of Gaussian process regression models that can be computed without matrix factorisation of the full kernel matrix.

Gaussian Processes, regression
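
The computational primitive behind this line of work is solving linear systems in the kernel matrix with conjugate gradients, which needs only matrix-vector products and no Cholesky factorisation. The sketch below shows that primitive with illustrative hyperparameters; it does not reproduce how the paper converts CG quantities into a guaranteed lower bound on the log marginal likelihood.

```python
# Minimal sketch: solve (K + sigma^2 I) v = y with conjugate gradients,
# avoiding any factorisation of the full kernel matrix. Kernel and noise
# values are placeholders.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(0)
X = rng.uniform(0, 5, size=300)
y = np.sin(2 * X) + 0.1 * rng.standard_normal(300)
noise = 0.1

K = np.exp(-0.5 * (X[:, None] - X[None, :]) ** 2)   # RBF kernel, unit hyperparameters
A = LinearOperator((300, 300), matvec=lambda v: K @ v + noise * v)

v, info = cg(A, y)
print("CG", "converged" if info == 0 else "did not converge",
      "| quadratic term y^T (K + sigma^2 I)^{-1} y =", float(y @ v))
```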

Understanding Variational Inference in Function-Space

2 code implementations AABI Symposium 2021 David R. Burt, Sebastian W. Ober, Adrià Garriga-Alonso, Mark van der Wilk

Then, we propose (featurized) Bayesian linear regression as a benchmark for "function-space" inference methods that directly measures approximation quality.

Bayesian Inference, Variational Inference
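
Featurized Bayesian linear regression admits an exact Gaussian posterior, which is what makes it usable as a ground-truth benchmark. The sketch below uses random Fourier features and placeholder prior/noise values purely for illustration, not the paper's exact setup.

```python
# Minimal sketch of exact inference in featurized Bayesian linear regression:
# y = Phi(x) w + eps, with w ~ N(0, alpha I) and eps ~ N(0, sigma^2).
import numpy as np

rng = np.random.default_rng(0)
N, D, alpha, sigma2 = 50, 100, 1.0, 0.01
X = rng.uniform(-3, 3, size=(N, 1))
y = np.sin(X[:, 0]) + np.sqrt(sigma2) * rng.standard_normal(N)

W = rng.standard_normal((1, D))                     # random Fourier frequencies
b = rng.uniform(0, 2 * np.pi, D)
phi = lambda Xq: np.sqrt(2.0 / D) * np.cos(Xq @ W + b)

Phi = phi(X)
# Exact Gaussian posterior over weights: Sigma = (Phi^T Phi / sigma^2 + I/alpha)^{-1}
Sigma = np.linalg.inv(Phi.T @ Phi / sigma2 + np.eye(D) / alpha)
mu = Sigma @ Phi.T @ y / sigma2

Xs = np.linspace(-3, 3, 5)[:, None]                 # test inputs
Ps = phi(Xs)
f_mean = Ps @ mu
f_var = np.einsum("nd,de,ne->n", Ps, Sigma, Ps) + sigma2
print(np.c_[f_mean, f_var])                         # predictive mean and variance
```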

Convergence of Sparse Variational Inference in Gaussian Processes Regression

1 code implementation 1 Aug 2020 David R. Burt, Carl Edward Rasmussen, Mark van der Wilk

Gaussian processes are distributions over functions that are versatile and mathematically convenient priors in Bayesian modelling.

Gaussian Processes, regression, +1

Variational Orthogonal Features

no code implementations 23 Jun 2020 David R. Burt, Carl Edward Rasmussen, Mark van der Wilk

We present a construction of features for any stationary prior kernel that allow for computation of an unbiased estimator of the ELBO using $T$ Monte Carlo samples in $\mathcal{O}(\tilde{N}T+M^2T)$ and in $\mathcal{O}(\tilde{N}T+MT)$ with an additional approximation.

Variational Inference

Bandit optimisation of functions in the Matérn kernel RKHS

no code implementations 28 Jan 2020 David Janz, David R. Burt, Javier González

We consider the problem of optimising functions in the reproducing kernel Hilbert space (RKHS) of a Matérn kernel with smoothness parameter $\nu$ over the domain $[0, 1]^d$ under noisy bandit feedback.
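
For reference, the Matérn kernel with smoothness parameter $\nu$ mentioned above has the standard form below; this is general background, not something specific to the paper.

```latex
% Standard Mat\'ern kernel with smoothness \nu, lengthscale \ell, and
% variance \sigma^2, where r = \|x - x'\| and K_\nu is the modified Bessel
% function of the second kind:
k_\nu(x, x') = \sigma^2 \, \frac{2^{1-\nu}}{\Gamma(\nu)}
  \left(\frac{\sqrt{2\nu}\, r}{\ell}\right)^{\!\nu}
  K_\nu\!\left(\frac{\sqrt{2\nu}\, r}{\ell}\right).
```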

On the Expressiveness of Approximate Inference in Bayesian Neural Networks

2 code implementations NeurIPS 2020 Andrew Y. K. Foong, David R. Burt, Yingzhen Li, Richard E. Turner

While Bayesian neural networks (BNNs) hold the promise of being flexible, well-calibrated statistical models, inference often requires approximations whose consequences are poorly understood.

Active Learning, Bayesian Inference, +3

Rates of Convergence for Sparse Variational Gaussian Process Regression

1 code implementation 8 Mar 2019 David R. Burt, Carl E. Rasmussen, Mark van der Wilk

Excellent variational approximations to Gaussian process posteriors have been developed which avoid the $\mathcal{O}\left(N^3\right)$ scaling with dataset size $N$.

Continual Learning, regression
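
For context, the per-evaluation costs being compared are the standard figures below (background, not the paper's new results), where $M$ is the number of inducing points.

```latex
% Standard cost of the sparse variational (inducing point) objective versus
% exact GP regression, per objective evaluation:
\text{sparse variational:}\quad \mathcal{O}(NM^2 + M^3)\ \text{time},\ \ \mathcal{O}(NM + M^2)\ \text{memory};
\qquad
\text{exact:}\quad \mathcal{O}(N^3)\ \text{time},\ \ \mathcal{O}(N^2)\ \text{memory}.
```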
