1 code implementation • 8 Jun 2022 • Vincent Dutordoir, Alan Saul, Zoubin Ghahramani, Fergus Simpson
Neural network approaches for meta-learning distributions over functions have desirable properties such as increased flexibility and reduced inference complexity.
no code implementations • 27 Oct 2021 • John McLeod, Fergus Simpson
Gaussian process priors are a popular choice for Bayesian analysis of regression problems.
1 code implementation • NeurIPS 2021 • Fergus Simpson, Ian Davies, Vidhi Lalchand, Alessandro Vullo, Nicolas Durrande, Carl Rasmussen
Kernel selection plays a central role in determining the performance of Gaussian Process (GP) models, as the chosen kernel determines both the inductive biases and prior support of functions under the GP prior.
no code implementations • 11 Mar 2021 • Fergus Simpson, Alexis Boukouvalas, Vaclav Cadek, Elvijs Sarkans, Nicolas Durrande
In the univariate setting, using the kernel spectral representation is an appealing approach for generating stationary covariance functions.
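The spectral approach rests on Bochner's theorem: any stationary covariance function is the Fourier transform of a non-negative spectral density, so sampling frequencies from a density and averaging cosines yields a valid stationary kernel. A minimal numpy sketch (an illustration of the general idea, not the paper's method): a Gaussian spectral density recovers the squared-exponential kernel.

```python
import numpy as np

rng = np.random.default_rng(0)
lengthscale = 1.0

# Bochner's theorem: k(tau) = E_omega[cos(omega * tau)] for omega drawn from
# the spectral density. A Gaussian density over frequencies corresponds to
# the squared-exponential (RBF) covariance function.
omegas = rng.normal(0.0, 1.0 / lengthscale, size=5000)

def k_approx(tau):
    # Monte Carlo estimate of the stationary covariance at lag tau.
    return np.mean(np.cos(omegas * tau))

def k_exact(tau):
    # Closed-form squared-exponential kernel for comparison.
    return np.exp(-0.5 * (tau / lengthscale) ** 2)
```

Swapping in a different spectral density (e.g. a mixture of Gaussians, as in spectral-mixture kernels) generates a different stationary covariance by the same recipe.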
no code implementations • Approximate Inference (AABI) Symposium 2021 • Fergus Simpson, Vidhi Lalchand, Carl Edward Rasmussen
Gaussian process (GP) models define a rich distribution over functions, with inductive biases controlled by a kernel function.
1 code implementation • NeurIPS 2021 • Fergus Simpson, Vidhi Lalchand, Carl Edward Rasmussen
Learning occurs through the optimisation of kernel hyperparameters using the marginal likelihood as the objective.
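The log marginal likelihood balances data fit against model complexity, which is why maximising it selects sensible hyperparameters. A minimal numpy sketch of this standard procedure (a toy grid search, not the paper's inference scheme): compute the GP log marginal likelihood under a squared-exponential kernel and pick the best lengthscale.

```python
import numpy as np

def rbf_kernel(x, lengthscale, variance=1.0, noise=1e-2):
    # Squared-exponential kernel matrix with a noise term on the diagonal.
    d = x[:, None] - x[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2) + noise * np.eye(len(x))

def log_marginal_likelihood(x, y, lengthscale):
    # log p(y | x, theta) = -1/2 y^T K^{-1} y - 1/2 log|K| - n/2 log(2 pi),
    # computed stably via a Cholesky factorisation of K.
    K = rbf_kernel(x, lengthscale)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha
            - np.sum(np.log(np.diag(L)))
            - 0.5 * len(x) * np.log(2 * np.pi))

# Toy data from a smooth function; grid-search the lengthscale hyperparameter.
x = np.linspace(0.0, 5.0, 30)
y = np.sin(x)
grid = np.linspace(0.1, 3.0, 30)
best = max(grid, key=lambda ls: log_marginal_likelihood(x, y, ls))
```

In practice the same objective is maximised with gradient-based optimisers rather than a grid, but the trade-off it encodes is identical.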