1 code implementation • 3 Jun 2023 • Prakhar Verma, Vincent Adam, Arno Solin
Diffusion processes are a class of stochastic differential equations (SDEs) providing a rich family of expressive models that arise naturally in dynamic modelling tasks.
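The paper's variational machinery is not reproduced here, but the basic object it works with, a diffusion-process sample path, can be sketched with the standard Euler–Maruyama scheme. The drift and diffusion functions below (an Ornstein–Uhlenbeck process) are illustrative choices, not taken from the paper:

```python
# Euler-Maruyama simulation of a 1-D diffusion process (SDE):
#   dx_t = f(x_t) dt + g(x_t) dW_t
# Illustrative drift/diffusion: Ornstein-Uhlenbeck, f(x) = -theta * x, g = sigma.
import numpy as np

def euler_maruyama(x0, drift, diffusion, dt, n_steps, rng):
    """Simulate one sample path of dx = drift(x) dt + diffusion(x) dW."""
    path = np.empty(n_steps + 1)
    path[0] = x0
    for k in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))  # Brownian increment ~ N(0, dt)
        path[k + 1] = path[k] + drift(path[k]) * dt + diffusion(path[k]) * dw
    return path

theta, sigma = 2.0, 0.5
rng = np.random.default_rng(0)
path = euler_maruyama(1.0, lambda x: -theta * x, lambda x: sigma,
                      dt=0.01, n_steps=500, rng=rng)
```

Inference over SDEs of this form typically treats such latent paths as unobserved and conditions on noisy measurements of them.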
1 code implementation • NeurIPS 2021 • Vincent Adam, Paul E. Chang, Mohammad Emtiyaz Khan, Arno Solin
Sparse variational Gaussian process (SVGP) methods are a common choice for non-conjugate Gaussian process inference because of their computational benefits.
no code implementations • NeurIPS Workshop DLDE 2021 • Prakhar Verma, Vincent Adam, Arno Solin
We frame the problem of learning stochastic differential equations (SDEs) from noisy observations as an inference problem and aim to maximize the marginal likelihood of the observations in a joint model of the latent paths and the noisy observations.
2 code implementations • 26 Mar 2021 • John McLeod, Hrvoje Stojic, Vincent Adam, Dongho Kim, Jordi Grau-Moya, Peter Vrancx, Felix Leibfried
This paves the way for new research directions, e.g. investigating uncertainty-aware environment models that are not necessarily neural-network-based, or developing algorithms to solve industrially motivated benchmarks that share characteristics with real-world problems.
Tasks: Model-based Reinforcement Learning, Reinforcement Learning, +1
1 code implementation • 19 Mar 2021 • William J. Wilkinson, Arno Solin, Vincent Adam
Approximate Bayesian inference methods that scale to very large datasets are crucial in leveraging probabilistic models for real-world time series.
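One standard route to scalable time-series inference, which this line of work builds on, is recasting a GP prior as a linear-Gaussian state-space model and filtering in O(n). The sketch below is a generic scalar Kalman filter with an Ornstein–Uhlenbeck (Matérn-1/2) kernel; all parameter values are illustrative, not from the paper:

```python
import numpy as np

def kalman_filter(y, A, Q, H, R, m0, P0):
    """O(n) sequential inference for the scalar linear-Gaussian model
    x_{k+1} = A x_k + q_k, q_k ~ N(0, Q);  y_k = H x_k + r_k, r_k ~ N(0, R)."""
    m, P = m0, P0
    means = np.empty(len(y))
    for k, yk in enumerate(y):
        m, P = A * m, A * P * A + Q        # predict one step ahead
        S = H * P * H + R                  # innovation variance
        K = P * H / S                      # Kalman gain
        m, P = m + K * (yk - H * m), (1.0 - K * H) * P  # measurement update
        means[k] = m
    return means

# Matern-1/2 (Ornstein-Uhlenbeck) GP as a state-space model: the kernel
# k(t, t') = s2 * exp(-|t - t'| / ell) has transition A = exp(-dt / ell)
# and process noise Q = s2 * (1 - A**2) on a regular grid with spacing dt.
dt, ell, s2, noise = 0.1, 1.0, 1.0, 0.1
A = np.exp(-dt / ell)
rng = np.random.default_rng(0)
y = np.sin(np.arange(100) * dt) + rng.normal(0, np.sqrt(noise), 100)
means = kalman_filter(y, A=A, Q=s2 * (1 - A**2), H=1.0, R=noise, m0=0.0, P0=s2)
```

Each observation costs O(1) state-dimension-wise, so n observations cost O(n) rather than the O(n³) of dense GP inference.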
1 code implementation • 2 Mar 2020 • Mark van der Wilk, Vincent Dutordoir, ST John, Artem Artemev, Vincent Adam, James Hensman
One obstacle to the use of Gaussian processes (GPs) in large-scale problems, and as components in deep learning systems, is the need for bespoke derivations and implementations for small variations in the model or inference.
no code implementations • 15 Jan 2020 • Vincent Adam, Stefanos Eleftheriadis, Nicolas Durrande, Artem Artemev, James Hensman
The use of Gaussian process models is typically limited to datasets with a few tens of thousands of observations due to their computational cost and memory footprint.
no code implementations • 21 Jun 2019 • Janith C. Petangoda, Sergio Pascual-Diaz, Vincent Adam, Peter Vrancx, Jordi Grau-Moya
We propose a novel framework for multi-task reinforcement learning (MTRL).
Tasks: Hierarchical Reinforcement Learning, Reinforcement Learning, +2
no code implementations • 26 Feb 2019 • Nicolas Durrande, Vincent Adam, Lucas Bordeaux, Stefanos Eleftheriadis, James Hensman
Banded matrices can be used as precision matrices in several models including linear state-space models, some Gaussian processes, and Gaussian Markov random fields.
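The computational payoff of banded precision matrices is that solves and factorisations cost O(n) instead of O(n³). A minimal sketch, assuming a tridiagonal precision (e.g. from a Gaussian random-walk prior) and using SciPy's banded solver rather than the paper's own operators:

```python
import numpy as np
from scipy.linalg import solveh_banded

n = 200
# Tridiagonal precision matrix in SciPy's lower banded storage:
# ab[0] holds the main diagonal, ab[1] the subdiagonal (last entry unused).
main = np.full(n, 3.0)
main[0] = main[-1] = 2.0
off = np.full(n, -1.0)
ab = np.vstack([main, off])

rng = np.random.default_rng(1)
y = rng.normal(size=n)
# Solving (precision) m = y exploits bandedness and runs in O(n).
m = solveh_banded(ab, y, lower=True)
```

The same banded structure lets Cholesky factors, determinants, and sparse inverses needed for Gaussian inference be computed in linear time.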
no code implementations • 28 Dec 2018 • Vincent Adam, Nicolas Durrande, ST John
Generalized additive models (GAMs) are a widely used class of models, popular among statisticians because they provide a flexible, interpretable way to model data beyond linear models.
no code implementations • ICLR 2019 • Laurence Aitchison, Vincent Adam, Srinivas C. Turaga
Each training step for a variational autoencoder (VAE) requires us to sample from the approximate posterior, so we usually choose simple (e.g. factorised) approximate posteriors in which sampling is an efficient computation that fully exploits GPU parallelism.
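Why a factorised posterior makes sampling cheap: every latent coordinate is independent, so a reparameterised draw is one vectorised operation. A minimal sketch in numpy (the variable names and dimensions are illustrative, not from the paper):

```python
import numpy as np

def sample_factorised_posterior(mu, log_var, rng, n_samples=1):
    """Reparameterised draw z = mu + sigma * eps, eps ~ N(0, I).
    Coordinates are independent, so all samples come from one vectorised op."""
    eps = rng.standard_normal((n_samples,) + mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

rng = np.random.default_rng(0)
mu = np.zeros(8)
log_var = np.full(8, np.log(0.25))  # sigma = 0.5 in every dimension
z = sample_factorised_posterior(mu, log_var, rng, n_samples=10000)
```

Richer, correlated posteriors lose this property, which is the trade-off the paper addresses.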
no code implementations • 3 Nov 2017 • Vincent Adam
Here, we extend previous sparse GP approximations and propose a novel parameterization of variational posteriors in the multi-GP setting, allowing for fast and scalable inference that captures posterior dependencies.