Search Results for author: Carl Edward Rasmussen

Found 24 papers, 12 papers with code

Integrated Variational Fourier Features for Fast Spatial Modelling with Gaussian Processes

no code implementations27 Aug 2023 Talay M Cheema, Carl Edward Rasmussen

Sparse variational approximations are popular methods for scaling up inference and learning in Gaussian processes to larger datasets.

Gaussian Processes
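As context for the sparse variational idea above: such methods replace the full N x N kernel matrix with quantities built from a small set of M inducing inputs. The snippet below is a minimal numpy sketch of that inducing-point construction (the Nystrom-style approximation underlying these methods); the function `rbf` and the inputs `x`, `z` are illustrative choices, not taken from the paper.

```python
import numpy as np

def rbf(a, b, lengthscale=1.0):
    """Squared-exponential kernel matrix between 1-D point sets a and b."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-0.5 * d2 / lengthscale**2)

# Hypothetical 1-D dataset and a small set of inducing inputs.
rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=200)   # N = 200 training inputs
z = np.linspace(-3, 3, 10)         # M = 10 inducing inputs, M << N

Kzz = rbf(z, z) + 1e-8 * np.eye(len(z))   # M x M, with jitter for stability
Kxz = rbf(x, z)                           # N x M cross-covariance
# Low-rank stand-in for the full N x N kernel: Kxx ~ Kxz Kzz^{-1} Kzx.
Kxx_approx = Kxz @ np.linalg.solve(Kzz, Kxz.T)
```

Working with `Kzz` and `Kxz` instead of the full `Kxx` is what reduces the cost from cubic in N to linear in N (for fixed M), which is the scaling benefit the abstract refers to.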

Numerically Stable Sparse Gaussian Processes via Minimum Separation using Cover Trees

1 code implementation14 Oct 2022 Alexander Terenin, David R. Burt, Artem Artemev, Seth Flaxman, Mark van der Wilk, Carl Edward Rasmussen, Hong Ge

For low-dimensional tasks such as geospatial modeling, we propose an automated method for computing inducing points satisfying these conditions.

Bayesian Optimization Decision Making +1

Clipping Loops for Sample-Efficient Dialogue Policy Optimisation

no code implementations NAACL 2021 Yen-chen Wu, Carl Edward Rasmussen

Second, in advantage clipping, we estimate and clip the advantages of useless responses and normal ones separately.

Marginalised Gaussian Processes with Nested Sampling

1 code implementation NeurIPS 2021 Fergus Simpson, Vidhi Lalchand, Carl Edward Rasmussen

Learning occurs through the optimisation of kernel hyperparameters using the marginal likelihood as the objective.

Gaussian Processes
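The sentence above describes standard type-II maximum likelihood, which this paper contrasts with marginalising the hyperparameters via nested sampling. A minimal numpy sketch of the standard procedure, assuming a zero-mean GP with an RBF kernel and a fixed noise level (all names here are illustrative):

```python
import numpy as np

def rbf(a, b, ell):
    """Squared-exponential kernel with lengthscale ell."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

def log_marginal_likelihood(x, y, ell, noise):
    """log p(y | x, ell, noise) for a zero-mean GP, via a Cholesky solve."""
    n = len(x)
    K = rbf(x, x, ell) + noise**2 * np.eye(n)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha
            - np.sum(np.log(np.diag(L)))
            - 0.5 * n * np.log(2 * np.pi))

rng = np.random.default_rng(1)
x = np.linspace(0, 5, 40)
y = np.sin(x) + 0.1 * rng.standard_normal(40)

# Type-II ML: pick the lengthscale that maximises the evidence.
candidate_ells = [0.1, 0.5, 1.0, 2.0]
best_ell = max(candidate_ells,
               key=lambda e: log_marginal_likelihood(x, y, e, 0.1))
```

In practice the maximisation is done with a gradient-based optimiser rather than a grid; the paper's point is that a single optimised value can miss posterior mass that full marginalisation captures.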

Convergence of Sparse Variational Inference in Gaussian Processes Regression

1 code implementation1 Aug 2020 David R. Burt, Carl Edward Rasmussen, Mark van der Wilk

Gaussian processes are distributions over functions that are versatile and mathematically convenient priors in Bayesian modelling.

Gaussian Processes regression +1
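The phrase "distributions over functions" above can be made concrete by drawing prior samples on a finite grid; the following numpy sketch (grid size, kernel, and jitter are illustrative choices) does exactly that for a zero-mean GP with a squared-exponential kernel:

```python
import numpy as np

# Evaluate a zero-mean GP prior with an RBF kernel on a finite grid.
xs = np.linspace(-3, 3, 100)
K = np.exp(-0.5 * (xs[:, None] - xs[None, :]) ** 2)  # squared-exponential
L = np.linalg.cholesky(K + 1e-6 * np.eye(100))       # jitter keeps K PSD

# Each column is one sample function from the prior, evaluated on xs.
rng = np.random.default_rng(0)
samples = L @ rng.standard_normal((100, 3))
```

Plotting the columns of `samples` against `xs` gives the familiar picture of smooth random functions drawn from the prior.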

Variational Orthogonal Features

no code implementations23 Jun 2020 David R. Burt, Carl Edward Rasmussen, Mark van der Wilk

We present a construction of features for any stationary prior kernel that allow for computation of an unbiased estimator to the ELBO using $T$ Monte Carlo samples in $\mathcal{O}(\tilde{N}T+M^2T)$ and in $\mathcal{O}(\tilde{N}T+MT)$ with an additional approximation.

Variational Inference

Approximate Inference for Fully Bayesian Gaussian Process Regression

1 code implementation AABI Symposium 2019 Vidhi Lalchand, Carl Edward Rasmussen

An alternative learning procedure is to infer the posterior over hyperparameters in a hierarchical specification of GPs we call \textit{Fully Bayesian Gaussian Process Regression} (GPR).

GPR regression +1

Benchmarking the Neural Linear Model for Regression

no code implementations AABI Symposium 2019 Sebastian W. Ober, Carl Edward Rasmussen

The neural linear model is a simple adaptive Bayesian linear regression method that has recently been used in a number of problems ranging from Bayesian optimization to reinforcement learning.

Bayesian Optimization Benchmarking +3

Overcoming Mean-Field Approximations in Recurrent Gaussian Process Models

1 code implementation13 Jun 2019 Alessandro Davide Ialongo, Mark van der Wilk, James Hensman, Carl Edward Rasmussen

As we demonstrate in our experiments, the factorisation between latent system states and transition function can lead to a miscalibrated posterior and to learning unnecessarily large noise terms.

Variational Inference

PIPPS: Flexible Model-Based Policy Search Robust to the Curse of Chaos

3 code implementations ICML 2018 Paavo Parmas, Carl Edward Rasmussen, Jan Peters, Kenji Doya

The exploding gradient problem has previously been identified as central to deep learning and model-based reinforcement learning, because it causes numerical issues and instability in optimization.

Model-based Reinforcement Learning reinforcement-learning +1

Non-Factorised Variational Inference in Dynamical Systems

no code implementations14 Dec 2018 Alessandro Davide Ialongo, Mark van der Wilk, James Hensman, Carl Edward Rasmussen

We focus on variational inference in dynamical systems where the discrete time transition function (or evolution rule) is modelled by a Gaussian process.

Variational Inference

Closed-form Inference and Prediction in Gaussian Process State-Space Models

no code implementations10 Dec 2018 Alessandro Davide Ialongo, Mark van der Wilk, Carl Edward Rasmussen

We examine an analytic variational inference scheme for the Gaussian Process State Space Model (GPSSM) - a probabilistic model for system identification and time-series modelling.

Time Series Time Series Analysis +1

Deep Convolutional Networks as shallow Gaussian Processes

3 code implementations ICLR 2019 Adrià Garriga-Alonso, Carl Edward Rasmussen, Laurence Aitchison

For a CNN, the equivalent kernel can be computed exactly and, unlike "deep kernels", has very few parameters: only the hyperparameters of the original CNN.

Gaussian Processes General Classification

Data-Efficient Reinforcement Learning in Continuous State-Action Gaussian-POMDPs

no code implementations NeurIPS 2017 Rowan Mcallister, Carl Edward Rasmussen

This enables data-efficient learning under significant observation noise, outperforming more naive methods such as post-hoc application of a filter to policies optimised by the original (unfiltered) PILCO algorithm.

reinforcement-learning Reinforcement Learning (RL)

Convolutional Gaussian Processes

4 code implementations NeurIPS 2017 Mark van der Wilk, Carl Edward Rasmussen, James Hensman

We present a practical way of introducing convolutional structure into Gaussian processes, making them more suited to high-dimensional inputs like images.

Gaussian Processes

Understanding Probabilistic Sparse Gaussian Process Approximations

no code implementations NeurIPS 2016 Matthias Bauer, Mark van der Wilk, Carl Edward Rasmussen

Good sparse approximations are essential for practical inference in Gaussian Processes as the computational cost of exact methods is prohibitive for large datasets.

Gaussian Processes regression
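The "prohibitive cost of exact methods" mentioned above comes from factorising the full N x N kernel matrix, an O(N^3) operation. A minimal numpy sketch of exact GP regression makes the bottleneck visible (the dataset and kernel here are illustrative, not from the paper):

```python
import numpy as np

def rbf(a, b):
    """Squared-exponential kernel with unit lengthscale."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2)

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(-3, 3, 50))   # N = 50 training inputs
y = np.sin(x)                         # noiseless targets for illustration
xstar = np.linspace(-3, 3, 5)         # test inputs

# The Cholesky factorisation of the N x N kernel matrix is the O(N^3)
# step that makes exact inference prohibitive for large N.
K = rbf(x, x) + 0.01 * np.eye(50)
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
mean = rbf(xstar, x) @ alpha          # posterior predictive mean
```

Sparse approximations such as those analysed in this paper sidestep the cubic factorisation by working through a small set of inducing points instead.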

Data-Efficient Reinforcement Learning in Continuous-State POMDPs

no code implementations8 Feb 2016 Rowan McAllister, Carl Edward Rasmussen

We present a data-efficient reinforcement learning algorithm resistant to observation noise.

reinforcement-learning Reinforcement Learning (RL)

Gaussian Processes for Data-Efficient Learning in Robotics and Control

1 code implementation10 Feb 2015 Marc Peter Deisenroth, Dieter Fox, Carl Edward Rasmussen

Autonomous learning has been a promising direction in control and robotics for more than a decade, since data-driven learning reduces the amount of engineering knowledge otherwise required.

Gaussian Processes Reinforcement Learning (RL)

Manifold Gaussian Processes for Regression

1 code implementation24 Feb 2014 Roberto Calandra, Jan Peters, Carl Edward Rasmussen, Marc Peter Deisenroth

This feature space is often learned in an unsupervised way, which might lead to data representations that are not useful for the overall regression task.

Gaussian Processes regression

Integrated Pre-Processing for Bayesian Nonlinear System Identification with Gaussian Processes

no code implementations12 Mar 2013 Roger Frigola, Carl Edward Rasmussen

We introduce GP-FNARX: a new model for nonlinear system identification based on a nonlinear autoregressive exogenous model (NARX) with filtered regressors (F) where the nonlinear regression problem is tackled using sparse Gaussian processes (GP).

Gaussian Processes regression

Robust Filtering and Smoothing with Gaussian Processes

no code implementations20 Mar 2012 Marc Peter Deisenroth, Ryan Turner, Marco F. Huber, Uwe D. Hanebeck, Carl Edward Rasmussen

We propose a principled algorithm for robust Bayesian filtering and smoothing in nonlinear stochastic dynamic systems when both the transition function and the measurement function are described by non-parametric Gaussian process (GP) models.

Gaussian Processes
