no code implementations • 30 Mar 2024 • Anna Vaughan, Stratis Markou, Will Tebbutt, James Requeima, Wessel P. Bruinsma, Tom R. Andersson, Michael Herzog, Nicholas D. Lane, J. Scott Hosking, Richard E. Turner
Machine learning is revolutionising medium-range weather prediction.
1 code implementation • 25 Mar 2023 • Wessel P. Bruinsma, Stratis Markou, James Requeima, Andrew Y. K. Foong, Tom R. Andersson, Anna Vaughan, Anthony Buonomo, J. Scott Hosking, Richard E. Turner
Our work provides an example of how ideas from neural distribution estimation can benefit neural processes, and motivates research into the AR deployment of other neural process models.
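The AR idea can be sketched in a few lines: instead of predicting all targets jointly in one pass, predict them one at a time, sample each, and feed the sample back into the context before predicting the next. This is a minimal illustrative sketch only; `predict_gaussian` below is a hypothetical toy stand-in for a trained neural process's predictive, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_gaussian(ctx_x, ctx_y, x):
    """Toy stand-in for a CNP predictive: distance-weighted mean,
    variance that shrinks as nearby context accumulates."""
    if len(ctx_x) == 0:
        return 0.0, 1.0
    w = np.exp(-np.abs(np.asarray(ctx_x) - x))
    mean = float(np.sum(w * np.asarray(ctx_y)) / np.sum(w))
    var = float(1.0 / (1.0 + np.sum(w)))
    return mean, var

def ar_sample(ctx_x, ctx_y, target_x):
    """Draw one joint sample over target_x by autoregressive rollout."""
    ctx_x, ctx_y = list(ctx_x), list(ctx_y)
    sample = []
    for x in target_x:
        mean, var = predict_gaussian(ctx_x, ctx_y, x)
        y = rng.normal(mean, np.sqrt(var))
        sample.append(y)
        # Key AR step: the sampled value becomes context for later targets,
        # which is what induces dependencies between target outputs.
        ctx_x.append(x)
        ctx_y.append(y)
    return np.array(sample)

draw = ar_sample([0.0, 1.0], [0.0, 1.0], np.linspace(0.0, 1.0, 5))
```

Because each sample conditions on the previous ones, the joint draw exhibits output dependencies even when the underlying per-point predictive is a factorised Gaussian.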
1 code implementation • 18 Nov 2022 • Tom R. Andersson, Wessel P. Bruinsma, Stratis Markou, James Requeima, Alejandro Coca-Castro, Anna Vaughan, Anna-Louise Ellis, Matthew A. Lazzara, Dani Jones, J. Scott Hosking, Richard E. Turner
This paper proposes using a convolutional Gaussian neural process (ConvGNP) to address these issues.
no code implementations • 4 Nov 2022 • Vidhi Lalchand, Wessel P. Bruinsma, David R. Burt, Carl E. Rasmussen
In this work we propose an algorithm for sparse Gaussian process regression which leverages MCMC to sample from the hyperparameter posterior within the variational inducing point framework of Titsias (2009).
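The core idea, sketched very roughly below, is to run MCMC on the hyperparameters using Titsias's collapsed variational bound in place of the exact (expensive) log marginal likelihood. This is a 1-D toy with an RBF kernel, fixed noise, fixed inducing inputs, a flat prior on the log-lengthscale, and random-walk Metropolis; none of these choices are claimed to match the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(x1, x2, ls):
    return np.exp(-0.5 * ((x1[:, None] - x2[None, :]) / ls) ** 2)

def titsias_elbo(y, x, z, ls, noise):
    """Collapsed variational lower bound of Titsias (2009), 1-D sketch:
    log N(y | 0, Qnn + noise*I) - tr(Knn - Qnn) / (2*noise)."""
    Kmm = rbf(z, z, ls) + 1e-8 * np.eye(len(z))
    Kmn = rbf(z, x, ls)
    Qnn = Kmn.T @ np.linalg.solve(Kmm, Kmn)
    S = Qnn + noise * np.eye(len(x))
    _, logdet = np.linalg.slogdet(S)
    quad = y @ np.linalg.solve(S, y)
    trace = len(x) - np.trace(Qnn)  # diag of RBF Knn is all ones
    return -0.5 * (logdet + quad + len(x) * np.log(2 * np.pi)) - trace / (2 * noise)

x = np.sort(rng.uniform(-2, 2, 40))
y = np.sin(2 * x) + 0.1 * rng.normal(size=40)
z = np.linspace(-2, 2, 8)  # fixed inducing inputs

# Random-walk Metropolis on the log-lengthscale, scoring proposals with the
# ELBO rather than the exact marginal likelihood.
log_ls = 0.0
cur = titsias_elbo(y, x, z, np.exp(log_ls), 0.01)
samples = []
for _ in range(200):
    prop = log_ls + 0.3 * rng.normal()
    new = titsias_elbo(y, x, z, np.exp(prop), 0.01)
    if np.log(rng.uniform()) < new - cur:
        log_ls, cur = prop, new
    samples.append(np.exp(log_ls))
```

The bound costs O(nm²) per evaluation rather than O(n³), which is what makes hyperparameter sampling affordable at scale.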
no code implementations • 15 May 2022 • Andrew Y. K. Foong, Wessel P. Bruinsma, David R. Burt
The Chernoff bound is a well-known tool for obtaining a high probability bound on the expectation of a Bernoulli random variable in terms of its sample average.
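Concretely, the "kl inversion" form of the Chernoff test-set bound says: with probability at least 1 − δ over n Bernoulli draws, the true mean p is at most the largest value whose Bernoulli KL divergence from the sample average stays below log(1/δ)/n. A small self-contained sketch (function names here are illustrative):

```python
import math

def kl_bernoulli(q, p):
    """KL divergence between Bernoulli(q) and Bernoulli(p)."""
    eps = 1e-12
    q = min(max(q, eps), 1 - eps)
    p = min(max(p, eps), 1 - eps)
    return q * math.log(q / p) + (1 - q) * math.log((1 - q) / (1 - p))

def chernoff_upper_bound(sample_mean, n, delta):
    """Largest p >= sample_mean with n * kl(sample_mean || p) <= log(1/delta).

    With probability >= 1 - delta over the n draws, the true mean is at
    most this value. Found by bisection, since p -> kl(sample_mean || p)
    is increasing for p >= sample_mean.
    """
    target = math.log(1 / delta) / n
    lo, hi = sample_mean, 1.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if kl_bernoulli(sample_mean, mid) > target:
            hi = mid
        else:
            lo = mid
    return hi

# e.g. 5 errors on 100 held-out test points, at 95% confidence:
bound = chernoff_upper_bound(0.05, 100, 0.05)
```

For this example the bound comes out a little above 0.12, noticeably tighter than what a Hoeffding-style bound gives at the same confidence level.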
no code implementations • 16 Mar 2022 • Stratis Markou, James Requeima, Wessel P. Bruinsma, Anna Vaughan, Richard E. Turner
Existing approaches which model output dependencies, such as Neural Processes (NPs; Garnelo et al., 2018b) or the FullConvGNP (Bruinsma et al., 2021), are either complicated to train or prohibitively expensive.
1 code implementation • 14 Mar 2022 • Wessel P. Bruinsma, Martin Tegnér, Richard E. Turner
The Gaussian Process Convolution Model (GPCM; Tobar et al., 2015a) is a model for signals with complex spectral structure.
1 code implementation • 23 Feb 2022 • Beau Coker, Wessel P. Bruinsma, David R. Burt, Weiwei Pan, Finale Doshi-Velez
Finally, we show that the optimal approximate posterior need not tend to the prior if the activation function is not odd, which demonstrates that our statements cannot be generalized arbitrarily.
1 code implementation • NeurIPS 2021 • Andrew Y. K. Foong, Wessel P. Bruinsma, David R. Burt, Richard E. Turner
Interestingly, this lower bound recovers the Chernoff test set bound if the posterior is equal to the prior.
1 code implementation • AABI Symposium 2021 • Wessel P. Bruinsma, James Requeima, Andrew Y. K. Foong, Jonathan Gordon, Richard E. Turner
Neural Processes (NPs; Garnelo et al., 2018a, b) are a rich class of models for meta-learning that map data sets directly to predictive stochastic processes.
2 code implementations • NeurIPS 2020 • Andrew Y. K. Foong, Wessel P. Bruinsma, Jonathan Gordon, Yann Dubois, James Requeima, Richard E. Turner
Stationary stochastic processes (SPs) are a key component of many probabilistic models, such as those for off-the-grid spatio-temporal data.
1 code implementation • ICML 2020 • Wessel P. Bruinsma, Eric Perim, Will Tebbutt, J. Scott Hosking, Arno Solin, Richard E. Turner
Multi-output Gaussian processes (MOGPs) leverage the flexibility and interpretability of GPs while capturing structure across outputs, which is desirable, for example, in spatio-temporal modelling.
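One simple way to build such structure, sketched below, is an instantaneous linear mixing model: p observed outputs are a fixed linear mix of m independent latent GPs, so outputs become correlated through the mixing matrix even though the latents are independent. This is a generic prior-sampling illustration, not the paper's specific model or API.

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf_kernel(t, lengthscale=0.5):
    d = t[:, None] - t[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

t = np.linspace(0, 3, 50)
m, p = 2, 3  # number of latent processes, number of observed outputs

# Sample m independent latent GP draws on the grid t.
K = rbf_kernel(t) + 1e-8 * np.eye(len(t))
L = np.linalg.cholesky(K)
f = L @ rng.normal(size=(len(t), m))

# Mix them instantaneously into p correlated output channels:
# y(t) = H f(t), so cov(y(t), y(t')) = k(t, t') * H @ H.T.
H = rng.normal(size=(p, m))
y = f @ H.T
```

Keeping the latents independent is what makes inference cheap: computation factorises over the m latent processes instead of coupling all p outputs.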
3 code implementations • ICLR 2020 • Jonathan Gordon, Wessel P. Bruinsma, Andrew Y. K. Foong, James Requeima, Yann Dubois, Richard E. Turner
We introduce the Convolutional Conditional Neural Process (ConvCNP), a new member of the Neural Process family that models translation equivariance in the data.
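Translation equivariance means that shifting the input data shifts the predictions correspondingly. The property is easy to demonstrate with an ordinary discrete convolution standing in for the ConvCNP's CNN component; this is an illustrative sketch only (the real model uses a set-convolution encoder followed by a deep CNN):

```python
import numpy as np

def conv1d(signal, kernel):
    """A fixed 1-D convolution, standing in for one CNN layer."""
    return np.convolve(signal, kernel, mode="same")

rng = np.random.default_rng(0)
signal = rng.normal(size=64)
kernel = np.array([0.25, 0.5, 0.25])

shift = 5
shifted_in = np.roll(signal, shift)

out = conv1d(signal, kernel)
shifted_out = conv1d(shifted_in, kernel)

# Equivariance: convolving a shifted input equals shifting the output.
# np.roll is circular while "same" padding is not, so compare away from
# the array edges where the two boundary conventions differ.
assert np.allclose(np.roll(out, shift)[8:-8], shifted_out[8:-8])
```

Building this symmetry into the model is what lets the ConvCNP generalise to target locations outside the range seen during training, rather than having to learn shift invariance from data.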