2 code implementations • 10 Jul 2024 • Peter Mostowsky, Vincent Dutordoir, Iskander Azangulov, Noémie Jaquier, Michael John Hutchinson, Aditya Ravuri, Leonel Rozo, Alexander Terenin, Viacheslav Borovitskiy
To address this difficulty, we present GeometricKernels, a software package implementing the geometric analogs of the classical Euclidean squared exponential (also known as heat) and Matérn kernels, which are widely used in settings where uncertainty is of key interest.
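For illustration, here is a minimal sketch of evaluating a geometric Matérn kernel on the two-sphere with the package's NumPy backend; the class and parameter names follow the GeometricKernels documentation and may differ across versions.

```python
import numpy as np
import geometric_kernels  # importing the package sets up the NumPy backend
from geometric_kernels.spaces import Hypersphere
from geometric_kernels.kernels import MaternGeometricKernel

sphere = Hypersphere(dim=2)               # the 2-sphere embedded in R^3
kernel = MaternGeometricKernel(sphere)

params = kernel.init_params()
params["nu"] = np.array([2.5])            # smoothness; np.inf recovers the heat kernel
params["lengthscale"] = np.array([1.0])

xs = np.array([[0.0, 0.0, 1.0], [0.0, 1.0, 0.0]])  # unit-norm points on the sphere
K = kernel.K(params, xs, xs)              # 2 x 2 kernel matrix
```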
1 code implementation • 3 Jun 2024 • Alexander Denker, Francisco Vargas, Shreyas Padhy, Kieran Didi, Simon Mathis, Vincent Dutordoir, Riccardo Barbano, Emile Mathieu, Urszula Julia Komorowska, Pietro Lio
In this work, we unify conditional training and sampling using the mathematically well-understood Doob's h-transform.
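In practice, conditioning then enters the reverse-time dynamics as an additive correction to the learned score. Below is a hypothetical sketch of one guided reverse step for a variance-exploding diffusion; score_model and grad_log_h (the gradient of the h-transform term, e.g. a conditional log-likelihood) are placeholders, not the paper's code.

```python
import numpy as np

def guided_reverse_step(x_t, t, dt, score_model, grad_log_h, sigma):
    """One Euler-Maruyama step of a reverse-time diffusion guided by
    Doob's h-transform: the conditional score is the unconditional
    score plus the gradient of log h (a sketch, assuming a VE SDE)."""
    score = score_model(x_t, t) + grad_log_h(x_t, t)
    z = np.random.randn(*x_t.shape)
    return x_t + sigma**2 * score * dt + sigma * np.sqrt(dt) * z
```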
no code implementations • 14 Dec 2023 • Kieran Didi, Francisco Vargas, Simon V Mathis, Vincent Dutordoir, Emile Mathieu, Urszula J Komorowska, Pietro Lio
Many protein design applications, such as binder or enzyme design, require scaffolding a structural motif with high precision.
1 code implementation • NeurIPS 2023 • Emile Mathieu, Vincent Dutordoir, Michael J. Hutchinson, Valentin De Bortoli, Yee Whye Teh, Richard E. Turner
In this work, we extend the framework of diffusion models to incorporate a series of geometric priors in infinite-dimensional modelling.
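One way to read this (my paraphrase of the construction, not the authors' code): choose the forward noising process so that its limiting distribution is a Gaussian process with a structured kernel, rather than white noise. A sketch of the resulting forward marginal, assuming a variance-preserving schedule and a Gram matrix K of a geometric kernel at the evaluation points:

```python
import numpy as np

def forward_marginal(f0, K, alpha_bar_t):
    """Sample f_t | f_0 when the diffusion converges to a GP prior
    N(0, K) instead of N(0, I) (a sketch of the idea only)."""
    L = np.linalg.cholesky(K + 1e-8 * np.eye(len(K)))  # jitter for stability
    eps = L @ np.random.randn(len(K))                  # correlated GP noise
    return np.sqrt(alpha_bar_t) * f0 + np.sqrt(1.0 - alpha_bar_t) * eps
```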
no code implementations • 27 Apr 2023 • Louis C. Tiao, Vincent Dutordoir, Victor Picheny
Despite their many desirable properties, Gaussian processes (GPs) are often compared unfavorably to deep neural networks (NNs) for lacking the ability to learn representations.
1 code implementation • 6 Feb 2023 • Tim Genewein, Grégoire Delétang, Anian Ruoss, Li Kevin Wenliang, Elliot Catt, Vincent Dutordoir, Jordi Grau-Moya, Laurent Orseau, Marcus Hutter, Joel Veness
Memory-based meta-learning is a technique for approximating Bayes-optimal predictors.
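Concretely, minimising the sequential log-loss averaged over tasks drawn from the prior yields, in expectation, the Bayes-optimal predictor p(x_t | x_{<t}). A hypothetical sketch of that meta-training objective, where sample_task, sample_sequence and log_prob_sequence are placeholders:

```python
def meta_log_loss(model, task_prior, n_tasks=32, seq_len=100):
    """Sequential log-loss over tasks; its minimiser in expectation is
    the Bayes-optimal predictor (a sketch, not the paper's code)."""
    total = 0.0
    for _ in range(n_tasks):
        task = task_prior.sample_task()        # draw latent task parameters
        xs = task.sample_sequence(seq_len)     # observations from that task
        total -= model.log_prob_sequence(xs)   # memory-based predictor's log-prob
    return total / n_tasks
```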
1 code implementation • 8 Jun 2022 • Vincent Dutordoir, Alan Saul, Zoubin Ghahramani, Fergus Simpson
Neural network approaches for meta-learning distributions over functions have desirable properties such as increased flexibility and reduced complexity of inference.
no code implementations • NeurIPS 2021 • Vincent Dutordoir, James Hensman, Mark van der Wilk, Carl Henrik Ek, Zoubin Ghahramani, Nicolas Durrande
This results in models that can either be seen as neural networks with improved uncertainty prediction or deep Gaussian processes with increased prediction accuracy.
1 code implementation • 12 Apr 2021 • Vincent Dutordoir, Hugh Salimbeni, Eric Hambro, John McLeod, Felix Leibfried, Artem Artemev, Mark van der Wilk, James Hensman, Marc P. Deisenroth, ST John
GPflux is compatible with and built on top of the Keras deep learning ecosystem.
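A minimal single-layer example in the Keras style, following the GPflux documentation (exact signatures may differ between versions):

```python
import numpy as np
import tensorflow as tf
import gpflow
import gpflux

# toy 1-d regression data
X = np.linspace(0, 1, 100).reshape(-1, 1)
Y = np.sin(10 * X) + 0.1 * np.random.randn(100, 1)

kernel = gpflow.kernels.SquaredExponential()
inducing = gpflow.inducing_variables.InducingPoints(X[::5].copy())
gp_layer = gpflux.layers.GPLayer(kernel, inducing, num_data=len(X), num_latent_gps=1)
likelihood_layer = gpflux.layers.LikelihoodLayer(gpflow.likelihoods.Gaussian())

# because GPflux builds on Keras, training uses the familiar compile/fit loop
deep_gp = gpflux.models.DeepGP([gp_layer], likelihood_layer)
training_model = deep_gp.as_training_model()
training_model.compile(optimizer=tf.optimizers.Adam(0.01))
training_model.fit({"inputs": X, "targets": Y}, epochs=100, verbose=0)
```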
no code implementations • 27 Dec 2020 • Felix Leibfried, Vincent Dutordoir, ST John, Nicolas Durrande
In this context, a convenient choice for approximate inference is variational inference (VI), where the problem of Bayesian inference is cast as an optimization problem -- namely, to maximize a lower bound of the log marginal likelihood.
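For concreteness, a generic Monte Carlo estimate of that lower bound (the ELBO); q_dist, prior and log_likelihood are hypothetical placeholders rather than any particular library's API:

```python
def elbo(q_dist, prior, log_likelihood, y, n_samples=16):
    """Monte Carlo ELBO: E_q[log p(y | f)] - KL[q || p] <= log p(y)."""
    f = q_dist.sample(n_samples)                      # f ~ q(f)
    expected_ll = log_likelihood(y, f).mean(axis=0)   # E_q[log p(y | f)]
    kl = q_dist.kl_divergence(prior)                  # KL[q(u) || p(u)]
    return expected_ll.sum() - kl
```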
no code implementations • ICML 2020 • Vincent Dutordoir, Nicolas Durrande, James Hensman
We introduce a new class of inter-domain variational Gaussian processes (GP) where data is mapped onto the unit hypersphere in order to use spherical harmonic representations.
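The mapping itself is simple; a sketch, assuming the paper's bias-augmentation construction in which a constant coordinate is appended before normalising:

```python
import numpy as np

def map_to_hypersphere(X, bias=1.0):
    """Append a bias coordinate and project onto the unit hypersphere,
    where spherical harmonics provide a natural basis (a sketch)."""
    Xb = np.concatenate([X, np.full((len(X), 1), bias)], axis=1)
    return Xb / np.linalg.norm(Xb, axis=1, keepdims=True)
```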
no code implementations • 25 Jun 2020 • Victor Picheny, Vincent Dutordoir, Artem Artemev, Nicolas Durrande
Many machine learning models require a training procedure based on stochastic gradient descent.
no code implementations • NeurIPS 2021 • Sattar Vakili, Henry Moss, Artem Artemev, Vincent Dutordoir, Victor Picheny
We provide theoretical guarantees and show that the drastic reduction in the computational complexity of scalable TS can be enjoyed without loss in regret performance relative to standard TS.
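Schematically, each round of scalable Thompson sampling (TS) draws one cheap approximate posterior sample from a sparse GP and maximises it; a hypothetical sketch, with sparse_gp, posterior_sample and update as placeholders rather than a specific library's API:

```python
import numpy as np

def thompson_sampling_loop(sparse_gp, objective, candidates, budget):
    """Scalable TS: optimise one approximate posterior draw per round
    instead of an exact, cubically expensive GP sample (a sketch)."""
    for _ in range(budget):
        f_sample = sparse_gp.posterior_sample()          # cheap function draw
        x_next = candidates[np.argmax(f_sample(candidates))]
        sparse_gp.update(x_next, objective(x_next))      # condition on new data
    return sparse_gp
```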
1 code implementation • 2 Mar 2020 • Mark van der Wilk, Vincent Dutordoir, ST John, Artem Artemev, Vincent Adam, James Hensman
One obstacle to the use of Gaussian processes (GPs) in large-scale problems, and as a component in deep learning systems, is the need for bespoke derivations and implementations for small variations in the model or inference.
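In GPflow, which this framework underpins, swapping the kernel, likelihood, or inducing-variable class requires no new derivations; a minimal example using GPflow's public API (a sketch, version details may vary):

```python
import numpy as np
import gpflow

X = np.random.rand(100, 2)
Y = np.sin(X[:, :1]) + 0.1 * np.random.randn(100, 1)

# the framework dispatches the appropriate covariance computations for
# whichever kernel / inducing-variable combination is supplied
model = gpflow.models.SVGP(
    kernel=gpflow.kernels.Matern52(),
    likelihood=gpflow.likelihoods.Gaussian(),
    inducing_variable=gpflow.inducing_variables.InducingPoints(X[:20].copy()),
    num_data=len(X),
)
loss = model.training_loss((X, Y))  # ELBO-based objective to minimise
```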
1 code implementation • 14 May 2019 • Hugh Salimbeni, Vincent Dutordoir, James Hensman, Marc Peter Deisenroth
Deep Gaussian processes (DGPs) can model complex marginal densities as well as complex mappings.
no code implementations • 15 Feb 2019 • Vincent Dutordoir, Mark van der Wilk, Artem Artemev, James Hensman
We also demonstrate that our fully Bayesian approach improves on dropout-based Bayesian deep learning methods in terms of uncertainty and marginal likelihood estimates.
no code implementations • NeurIPS 2018 • Vincent Dutordoir, Hugh Salimbeni, Marc Deisenroth, James Hensman
Conditional Density Estimation (CDE) models estimate the distribution of a target variable conditioned on the inputs.