no code implementations • 27 Aug 2023 • Talay M Cheema, Carl Edward Rasmussen
Sparse variational approximations are popular methods for scaling up inference and learning in Gaussian processes to larger datasets.
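As background for this line of work, here is a minimal NumPy sketch of the collapsed sparse variational bound of Titsias (2009), a standard instance of such approximations; the kernel, data, and inducing-point locations are illustrative, not the paper's.

```python
# Minimal sketch of the collapsed sparse variational bound (Titsias, 2009).
# All names, data, and settings are illustrative.
import numpy as np

def rbf(A, B, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel matrix between row-vector inputs A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def collapsed_elbo(X, y, Z, noise=0.1):
    """Evidence lower bound with inducing inputs Z (M x D), data X (N x D), y (N,)."""
    N = X.shape[0]
    Kmm = rbf(Z, Z) + 1e-8 * np.eye(Z.shape[0])
    Kmn = rbf(Z, X)
    knn = np.full(N, rbf(X[:1], X[:1])[0, 0])   # diagonal of Knn (stationary kernel)
    L = np.linalg.cholesky(Kmm)
    A = np.linalg.solve(L, Kmn)                 # A = L^{-1} Kmn, so Qnn = A^T A
    Qnn = A.T @ A
    cov = Qnn + noise * np.eye(N)
    _, logdet = np.linalg.slogdet(cov)
    quad = y @ np.linalg.solve(cov, y)
    log_gauss = -0.5 * (N * np.log(2 * np.pi) + logdet + quad)
    trace_term = -0.5 / noise * (knn.sum() - np.trace(Qnn))  # penalises poor Z placement
    return log_gauss + trace_term

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, (200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
Z = np.linspace(0, 10, 15)[:, None]             # 15 inducing inputs
print(collapsed_elbo(X, y, Z))
```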
1 code implementation • 14 Oct 2022 • Alexander Terenin, David R. Burt, Artem Artemev, Seth Flaxman, Mark van der Wilk, Carl Edward Rasmussen, Hong Ge
For low-dimensional tasks such as geospatial modeling, we propose an automated method for computing inducing points satisfying these conditions.
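The paper's construction is based on cover trees with explicit separation and covering guarantees; purely as a loose stand-in for automated inducing-point placement, the common k-means heuristic looks like this (illustrative, not the authors' method).

```python
# k-means centroids as a simple heuristic for spreading inducing inputs over
# low-dimensional (e.g. geospatial) data; NOT the paper's cover-tree method.
import numpy as np
from scipy.cluster.vq import kmeans

rng = np.random.default_rng(1)
locations = rng.uniform(0, 1, (5000, 2))   # toy 2-D geospatial coordinates
Z, _ = kmeans(locations, 50)               # 50 well-spread inducing inputs
print(Z.shape)
```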
no code implementations • NAACL 2021 • Yen-chen Wu, Carl Edward Rasmussen
Second, in advantage clipping, we estimate and clip the advantages of useless responses and normal ones separately.
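The snippet does not give the grouping rule or clipping thresholds; purely as an illustration of clipping advantage estimates separately per group, with placeholder bounds:

```python
# Illustrative only: per-group advantage clipping with placeholder thresholds
# and a placeholder "useless response" mask; not the paper's actual values.
import numpy as np

rng = np.random.default_rng(11)
advantages = rng.standard_normal(8)
useless = np.array([True, False, True, False, False, True, False, False])

clipped = advantages.copy()
clipped[useless]  = np.clip(advantages[useless],  -0.5, 0.0)   # placeholder bounds
clipped[~useless] = np.clip(advantages[~useless], -1.0, 1.0)   # placeholder bounds
print(clipped)
```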
no code implementations • Approximate Inference (AABI) Symposium 2021 • Fergus Simpson, Vidhi Lalchand, Carl Edward Rasmussen
Gaussian processes (GPs) are rich distributions over functions, with inductive biases controlled by a kernel function.
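A small sketch of what "inductive biases controlled by a kernel" means in practice: prior draws under a squared-exponential kernel are smooth and aperiodic, while draws under a periodic kernel repeat exactly (all settings illustrative).

```python
# Prior draws from a GP under two kernels, showing different inductive biases.
import numpy as np

x = np.linspace(0, 10, 200)[:, None]
d = x - x.T                                      # pairwise input differences
K_se  = np.exp(-0.5 * d**2)                      # squared-exponential: smooth draws
K_per = np.exp(-2 * np.sin(np.pi * d / 3.0)**2)  # periodic (period 3): repeating draws

rng = np.random.default_rng(0)
jitter = 1e-6 * np.eye(len(x))                   # numerical stabiliser for Cholesky
f_se  = np.linalg.cholesky(K_se + jitter)  @ rng.standard_normal(len(x))
f_per = np.linalg.cholesky(K_per + jitter) @ rng.standard_normal(len(x))
print(f_se[:5], f_per[:5])                       # two prior draws, different biases
```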
1 code implementation • NeurIPS 2021 • Fergus Simpson, Vidhi Lalchand, Carl Edward Rasmussen
Learning occurs through the optimisation of kernel hyperparameters using the marginal likelihood as the objective.
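A hedged sketch of this standard procedure (type-II maximum likelihood): minimise the negative log marginal likelihood of a GP over log-hyperparameters; the data and parameterisation here are illustrative.

```python
# GP hyperparameter learning by maximising the log marginal likelihood.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
X = rng.uniform(0, 5, (60, 1))
y = np.sin(2 * X[:, 0]) + 0.1 * rng.standard_normal(60)

def neg_log_marginal(params):
    log_ls, log_sf, log_sn = params            # log lengthscale, signal std, noise std
    d2 = (X - X.T) ** 2
    K = np.exp(2 * log_sf) * np.exp(-0.5 * d2 / np.exp(2 * log_ls))
    K += np.exp(2 * log_sn) * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    # -log p(y) = 0.5 y^T K^{-1} y + 0.5 log|K| + (N/2) log 2*pi
    return 0.5 * y @ alpha + np.log(np.diag(L)).sum() + 0.5 * len(X) * np.log(2 * np.pi)

res = minimize(neg_log_marginal, x0=np.zeros(3), method="L-BFGS-B")
print(np.exp(res.x))                           # learned lengthscale, signal, noise
```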
1 code implementation • NeurIPS 2020 • Ushnish Sengupta, Matt Amos, J. Scott Hosking, Carl Edward Rasmussen, Matthew Juniper, Paul J. Young
Ensembles of geophysical models improve projection accuracy and express uncertainties.
1 code implementation • 1 Aug 2020 • David R. Burt, Carl Edward Rasmussen, Mark van der Wilk
Gaussian processes are distributions over functions that are versatile and mathematically convenient priors in Bayesian modelling.
no code implementations • 23 Jun 2020 • David R. Burt, Carl Edward Rasmussen, Mark van der Wilk
We present a construction of features for any stationary prior kernel that allow for computation of an unbiased estimator of the ELBO using $T$ Monte Carlo samples in $\mathcal{O}(\tilde{N}T+M^2T)$, and in $\mathcal{O}(\tilde{N}T+MT)$ with an additional approximation.
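The paper's features are specially constructed inter-domain features; as related background only, the classic random Fourier feature approximation of a stationary kernel (Rahimi and Recht, 2007) can be sketched as follows.

```python
# Random Fourier features for a unit-lengthscale RBF kernel (background sketch,
# not the paper's construction).
import numpy as np

rng = np.random.default_rng(3)
D, M = 2, 500                                  # input dimension, number of features
W = rng.standard_normal((M, D))                # spectral samples for the RBF kernel
b = rng.uniform(0, 2 * np.pi, M)

def phi(X):
    return np.sqrt(2.0 / M) * np.cos(X @ W.T + b)

X = rng.standard_normal((5, D))
K_approx = phi(X) @ phi(X).T                   # approximates exp(-0.5 ||x - x'||^2)
K_exact = np.exp(-0.5 * ((X[:, None] - X[None]) ** 2).sum(-1))
print(np.abs(K_approx - K_exact).max())        # small approximation error
```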
1 code implementation • Approximate Inference (AABI) Symposium 2019 • Vidhi Lalchand, Carl Edward Rasmussen
An alternative learning procedure is to infer the posterior over hyperparameters in a hierarchical specification of GPs, which we call Fully Bayesian Gaussian Process Regression (GPR).
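The paper itself uses HMC and variational schemes; as the simplest illustration of the hierarchical treatment, here is a random-walk Metropolis sampler over log-hyperparameters whose target combines the GP log marginal likelihood with a prior (all settings illustrative).

```python
# Sampling the hyperparameter posterior p(theta | y) instead of optimising it.
import numpy as np

rng = np.random.default_rng(4)
X = rng.uniform(0, 5, (40, 1))
y = np.sin(2 * X[:, 0]) + 0.1 * rng.standard_normal(40)

def log_marginal(theta):                       # theta = log(lengthscale, signal, noise)
    ls, sf, sn = np.exp(theta)
    K = sf**2 * np.exp(-0.5 * (X - X.T)**2 / ls**2) + sn**2 * np.eye(len(X))
    L = np.linalg.cholesky(K)
    a = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return -0.5 * y @ a - np.log(np.diag(L)).sum() - 0.5 * len(X) * np.log(2 * np.pi)

def log_target(theta):                         # standard normal prior on log-params
    return log_marginal(theta) - 0.5 * theta @ theta

theta, samples = np.zeros(3), []
lp = log_target(theta)
for _ in range(5000):
    prop = theta + 0.1 * rng.standard_normal(3)
    lp_prop = log_target(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
        theta, lp = prop, lp_prop
    samples.append(theta)
print(np.exp(np.mean(samples[1000:], axis=0)))  # posterior-mean hyperparameters
```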
no code implementations • Approximate Inference (AABI) Symposium 2019 • Sebastian W. Ober, Carl Edward Rasmussen
The neural linear model is a simple adaptive Bayesian linear regression method that has recently been used in a number of problems ranging from Bayesian optimization to reinforcement learning.
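A hedged sketch of the Bayesian linear layer at the core of the neural linear model, with random tanh features standing in for the trained network's penultimate layer; all names and settings are illustrative.

```python
# Exact conjugate Bayesian linear regression on fixed features.
import numpy as np

rng = np.random.default_rng(5)
X = rng.uniform(-3, 3, (100, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(100)

W, b = rng.standard_normal((50, 1)), rng.uniform(0, 2 * np.pi, 50)
feats = lambda X: np.tanh(X @ W.T + b)         # stand-in for learned NN features

alpha, noise = 1.0, 0.1**2                     # weight prior precision, noise variance
Phi = feats(X)
A = alpha * np.eye(50) + Phi.T @ Phi / noise   # posterior precision over weights
mean = np.linalg.solve(A, Phi.T @ y) / noise   # posterior mean weights

Xs = np.linspace(-3, 3, 5)[:, None]
Ps = feats(Xs)
pred_mean = Ps @ mean
pred_var = noise + np.einsum('ij,ij->i', Ps, np.linalg.solve(A, Ps.T).T)
print(pred_mean, np.sqrt(pred_var))            # predictive mean and std
```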
1 code implementation • 13 Jun 2019 • Alessandro Davide Ialongo, Mark van der Wilk, James Hensman, Carl Edward Rasmussen
As we demonstrate in our experiments, the factorisation between latent system states and transition function can lead to a miscalibrated posterior and to learning unnecessarily large noise terms.
3 code implementations • ICML 2018 • Paavo Parmas, Carl Edward Rasmussen, Jan Peters, Kenji Doya
Previously, the exploding gradient problem has been argued to be central in deep learning and model-based reinforcement learning, because it causes numerical issues and instability in optimization.
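The two gradient estimators the paper contrasts can be illustrated on a one-dimensional expectation: the reparameterisation (RP) estimator differentiates through the samples, while the likelihood-ratio (LR) estimator only reweights them (a toy example, not the paper's setup).

```python
# RP vs LR estimators of d/dmu E_{x ~ N(mu, 1)}[f(x)], with f(x) = sin(x).
import numpy as np

rng = np.random.default_rng(6)
mu, n = 1.0, 100000
eps = rng.standard_normal(n)
x = mu + eps                                   # reparameterised samples

grad_rp = np.cos(x).mean()                     # RP: push the derivative through f
grad_lr = (np.sin(x) * (x - mu)).mean()        # LR: score-function estimator
print(grad_rp, grad_lr)                        # both approximate exp(-0.5) * cos(mu)
```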
no code implementations • 14 Dec 2018 • Alessandro Davide Ialongo, Mark van der Wilk, James Hensman, Carl Edward Rasmussen
We focus on variational inference in dynamical systems where the discrete time transition function (or evolution rule) is modelled by a Gaussian process.
no code implementations • 10 Dec 2018 • Alessandro Davide Ialongo, Mark van der Wilk, Carl Edward Rasmussen
We examine an analytic variational inference scheme for the Gaussian Process State Space Model (GPSSM), a probabilistic model for system identification and time-series modelling.
3 code implementations • ICLR 2019 • Adrià Garriga-Alonso, Carl Edward Rasmussen, Laurence Aitchison
For a CNN, the equivalent kernel can be computed exactly and, unlike "deep kernels", has very few parameters: only the hyperparameters of the original CNN.
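The paper derives the exact equivalent kernel for CNNs; as a simpler, hedged analogue, the recursive kernel of an infinitely wide fully-connected ReLU network can be computed as follows (weight/bias variances illustrative).

```python
# NNGP-style recursion for a deep ReLU MLP (fully-connected analogue of the
# paper's CNN kernel; illustrative settings).
import numpy as np

def nngp_relu_kernel(X, depth=3, sw2=1.0, sb2=0.1):
    """Kernel of an infinitely wide ReLU MLP with 'depth' hidden layers."""
    K = sb2 + sw2 * (X @ X.T) / X.shape[1]     # layer-0 (linear) kernel
    for _ in range(depth):
        d = np.sqrt(np.diag(K))
        c = np.clip(K / np.outer(d, d), -1.0, 1.0)
        theta = np.arccos(c)
        # ReLU arc-cosine formula: E[relu(u) relu(v)] for jointly Gaussian (u, v)
        K = sb2 + sw2 * np.outer(d, d) * (np.sin(theta)
                                          + (np.pi - theta) * np.cos(theta)) / (2 * np.pi)
    return K

X = np.random.default_rng(7).standard_normal((5, 10))
print(nngp_relu_kernel(X))
```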
no code implementations • NeurIPS 2017 • Rowan Mcallister, Carl Edward Rasmussen
This enables data-efficient learning under significant observation noise, outperforming more naive methods such as post-hoc application of a filter to policies optimised by the original (unfiltered) PILCO algorithm.
4 code implementations • NeurIPS 2017 • Mark van der Wilk, Carl Edward Rasmussen, James Hensman
We present a practical way of introducing convolutional structure into Gaussian processes, making them more suited to high-dimensional inputs like images.
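A hedged sketch of the patch-based idea behind this construction: a kernel between images formed by averaging a base RBF kernel over all pairs of patches (patch size, base kernel, and images are illustrative).

```python
# Image kernel built from patch responses: average a base kernel over patches.
import numpy as np

def patches(img, w=3):
    """All w x w patches of a 2-D image, flattened to rows."""
    H, W = img.shape
    return np.array([img[i:i + w, j:j + w].ravel()
                     for i in range(H - w + 1) for j in range(W - w + 1)])

def conv_kernel(img1, img2, w=3, ls=1.0):
    P, Q = patches(img1, w), patches(img2, w)
    d2 = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2).mean()   # average base kernel over patch pairs

rng = np.random.default_rng(8)
a, b = rng.standard_normal((8, 8)), rng.standard_normal((8, 8))
print(conv_kernel(a, b), conv_kernel(a, a))
```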
no code implementations • NeurIPS 2016 • Matthias Bauer, Mark van der Wilk, Carl Edward Rasmussen
Good sparse approximations are essential for practical inference in Gaussian processes, as the computational cost of exact methods is prohibitive for large datasets.
no code implementations • 8 Feb 2016 • Rowan McAllister, Carl Edward Rasmussen
We present a data-efficient reinforcement learning algorithm resistant to observation noise.
1 code implementation • 10 Feb 2015 • Marc Peter Deisenroth, Dieter Fox, Carl Edward Rasmussen
Autonomous learning has been a promising direction in control and robotics for more than a decade, since data-driven learning allows one to reduce the amount of engineering knowledge otherwise required.
1 code implementation • 24 Feb 2014 • Roberto Calandra, Jan Peters, Carl Edward Rasmussen, Marc Peter Deisenroth
This feature space is often learned in an unsupervised way, which might lead to data representations that are not useful for the overall regression task.
no code implementations • 12 Mar 2013 • Roger Frigola, Carl Edward Rasmussen
We introduce GP-FNARX: a new model for nonlinear system identification based on a nonlinear autoregressive exogenous model (NARX) with filtered regressors (F), where the nonlinear regression problem is tackled using sparse Gaussian processes (GP).
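A hedged sketch of the regressor construction the acronym describes: build lagged input/output (NARX) regressors, optionally after pre-filtering the signals, then fit any (sparse) GP regression to them; the toy system and lag are illustrative.

```python
# Building NARX regressors for GP-based system identification.
import numpy as np

def narx_regressors(u, y, lag=3):
    """Rows [y_{t-1..t-lag}, u_{t-1..t-lag}] as inputs for predicting y_t."""
    T = len(y)
    X = np.array([np.concatenate([y[t - lag:t][::-1], u[t - lag:t][::-1]])
                  for t in range(lag, T)])
    return X, y[lag:]

rng = np.random.default_rng(9)
u = rng.standard_normal(200)
y = np.zeros(200)
for t in range(1, 200):                        # a toy nonlinear dynamical system
    y[t] = 0.8 * np.tanh(y[t - 1]) + 0.3 * u[t - 1] + 0.05 * rng.standard_normal()

X, targets = narx_regressors(u, y, lag=3)
print(X.shape, targets.shape)                  # (197, 6), (197,)
```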
no code implementations • 20 Mar 2012 • Marc Peter Deisenroth, Ryan Turner, Marco F. Huber, Uwe D. Hanebeck, Carl Edward Rasmussen
We propose a principled algorithm for robust Bayesian filtering and smoothing in nonlinear stochastic dynamic systems when both the transition function and the measurement function are described by non-parametric Gaussian process (GP) models.
1 code implementation • NeurIPS 2011 • David Duvenaud, Hannes Nickisch, Carl Edward Rasmussen
We introduce a Gaussian process model of functions which are additive.
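A hedged sketch of the simplest (first-order) additive kernel, under which sampled functions decompose as f(x) = f_1(x_1) + ... + f_D(x_D); the paper itself also handles higher interaction orders.

```python
# First-order additive kernel: a sum of one-dimensional RBF kernels.
import numpy as np

def additive_rbf(X1, X2, ls=1.0):
    """Sum of per-dimension RBF kernels over D input dimensions."""
    K = np.zeros((X1.shape[0], X2.shape[0]))
    for d in range(X1.shape[1]):
        diff = X1[:, d:d + 1] - X2[:, d:d + 1].T
        K += np.exp(-0.5 * diff**2 / ls**2)
    return K

rng = np.random.default_rng(10)
X = rng.standard_normal((4, 3))
print(additive_rbf(X, X))
```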