no code implementations • 6 Feb 2024 • Mickael Binois, Victor Picheny
Gaussian processes are a widely embraced technique for regression and classification due to their good prediction accuracy, analytical tractability and built-in capabilities for uncertainty quantification.
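For reference, the analytical tractability and built-in uncertainty quantification mentioned here come from the closed-form Gaussian posterior of standard GP regression (a textbook result, not specific to this paper):

    % Standard GP regression posterior. Given data (X, y) with y = f(X) + eps,
    % eps ~ N(0, sigma^2 I), and kernel k, the prediction at a test point x_* is Gaussian:
    \begin{align}
      \mu(x_*)      &= k_*^\top \left(K + \sigma^2 I\right)^{-1} y, \\
      \sigma^2(x_*) &= k(x_*, x_*) - k_*^\top \left(K + \sigma^2 I\right)^{-1} k_*,
    \end{align}
    % where K_{ij} = k(x_i, x_j) and (k_*)_i = k(x_i, x_*).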
no code implementations • 27 Apr 2023 • Louis C. Tiao, Vincent Dutordoir, Victor Picheny
Despite their many desirable properties, Gaussian processes (GPs) are often compared unfavorably to deep neural networks (NNs) for lacking the ability to learn representations.
2 code implementations • 16 Feb 2023 • Victor Picheny, Joel Berkeley, Henry B. Moss, Hrvoje Stojic, Uri Granta, Sebastian W. Ober, Artem Artemev, Khurram Ghani, Alexander Goodall, Andrei Paleyes, Sattar Vakili, Sergio Pascual-Diaz, Stratis Markou, Jixiang Qing, Nasrulloh R. B. S Loka, Ivo Couckuyt
We present Trieste, an open-source Python package for Bayesian optimization and active learning benefiting from the scalability and efficiency of TensorFlow.
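Below is a minimal usage sketch modelled on the package's documented quickstart; the exact module paths and names (e.g. Branin, mk_observer, build_gpr) are assumptions that may vary between Trieste versions and should be checked against the documentation.

    # Hypothetical quickstart-style sketch; names may differ across Trieste versions.
    import trieste
    from trieste.objectives import Branin
    from trieste.objectives.utils import mk_observer
    from trieste.models.gpflow import GaussianProcessRegression, build_gpr

    search_space = Branin.search_space            # box over the Branin domain
    observer = mk_observer(Branin.objective)      # wraps the black-box function
    initial_data = observer(search_space.sample(5))

    # Build a GPflow surrogate and wrap it for Trieste
    model = GaussianProcessRegression(build_gpr(initial_data, search_space))

    # Run 15 steps of Bayesian optimisation with the default acquisition rule
    bo = trieste.bayesian_optimizer.BayesianOptimizer(observer, search_space)
    result = bo.optimize(15, initial_data, model)
    print(result.try_get_final_dataset())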
no code implementations • 24 Jan 2023 • Henry B. Moss, Sebastian W. Ober, Victor Picheny
Sparse Gaussian Processes are a key component of high-throughput Bayesian Optimisation (BO) loops; however, we show that existing methods for allocating their inducing points severely hamper optimisation performance.
no code implementations • 2 Nov 2022 • Paul E. Chang, Prakhar Verma, ST John, Victor Picheny, Henry Moss, Arno Solin
Gaussian processes (GPs) are the main surrogate models used in sequential modelling tasks such as Bayesian optimization and active learning.
1 code implementation • 27 Jun 2022 • Andrei Paleyes, Henry B. Moss, Victor Picheny, Piotr Zulawski, Felix Newman
We present HIghly Parallelisable Pareto Optimisation (HIPPO) -- a batch acquisition function that enables multi-objective Bayesian optimisation methods to efficiently exploit parallel processing resources.
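HIPPO's acquisition function itself is not reproduced here; as a minimal, self-contained illustration of the multi-objective setting it targets, the hypothetical helper below extracts the non-dominated (Pareto-optimal) points from a batch of objective evaluations.

    import numpy as np

    def pareto_mask(objectives: np.ndarray) -> np.ndarray:
        """Boolean mask of non-dominated rows, assuming all objectives are minimised.

        A generic multi-objective utility, not HIPPO's acquisition function.
        """
        n = objectives.shape[0]
        mask = np.ones(n, dtype=bool)
        for i in range(n):
            if not mask[i]:
                continue
            # A point is dominated if another point is no worse in every objective
            # and strictly better in at least one.
            dominated = np.all(objectives <= objectives[i], axis=1) & np.any(
                objectives < objectives[i], axis=1
            )
            if dominated.any():
                mask[i] = False
        return mask

    # Example: two conflicting objectives evaluated on a batch of four candidates
    Y = np.array([[1.0, 4.0], [2.0, 2.0], [3.0, 3.0], [4.0, 1.0]])
    print(pareto_mask(Y))  # [ True  True False  True]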
no code implementations • 6 Jun 2022 • Henry B. Moss, Sebastian W. Ober, Victor Picheny
By choosing inducing points to maximally reduce both global uncertainty and uncertainty in the maximum value of the objective function, we build surrogate models able to support high-precision high-throughput BO.
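The paper's inducing-point selection rule is not reproduced here; the sketch below only shows the sparse-GP building block it acts on (GPflow's SGPR), with inducing points placed by a simple k-means heuristic as a placeholder rather than the proposed uncertainty-reduction criterion.

    # Sparse-GP component only; inducing points come from k-means, NOT the
    # uncertainty-reduction rule proposed in the paper.
    import numpy as np
    import gpflow
    from sklearn.cluster import KMeans

    X = np.random.rand(500, 2)
    y = np.sin(3 * X[:, :1]) + 0.1 * np.random.randn(500, 1)

    Z = KMeans(n_clusters=20).fit(X).cluster_centers_   # 20 inducing points
    model = gpflow.models.SGPR(
        data=(X, y),
        kernel=gpflow.kernels.Matern52(),
        inducing_variable=Z,
    )
    gpflow.optimizers.Scipy().minimize(model.training_loss, model.trainable_variables)
    mean, var = model.predict_f(np.random.rand(10, 2))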
1 code implementation • 30 Mar 2021 • Rodolphe Le Riche, Victor Picheny
It is commonly believed that Bayesian optimization (BO) algorithms are highly efficient for optimizing numerically costly functions.
no code implementations • 18 Jan 2021 • Youssef Diouane, Victor Picheny, Rodolphe Le Riche, Alexandre Scotto Di Perrotolo
By following a classical scheme for the trust region (based on a sufficient decrease condition), the proposed algorithm enjoys global convergence properties, while departing from EGO only for a subset of optimization steps.
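The following is a schematic sketch of the classical trust-region mechanism the paper builds on, not its exact algorithm: suggest_in stands for any inner solver (e.g. one EGO/expected-improvement step restricted to the region), and the sufficient-decrease threshold used here (a fraction of the radius) is a simplification of the paper's rule.

    # Schematic trust-region step with a sufficient-decrease test (illustrative only).
    def trust_region_step(f, x_best, f_best, radius, suggest_in,
                          eta=1e-4, shrink=0.5, grow=2.0):
        """Try one candidate in the current region, then accept/reject and resize."""
        x_new = suggest_in(center=x_best, radius=radius)   # candidate inside the region
        f_new = f(x_new)
        if f_new <= f_best - eta * radius:        # sufficient decrease achieved
            return x_new, f_new, radius * grow    # move the centre, enlarge the region
        return x_best, f_best, radius * shrink    # keep the centre, shrink the region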
no code implementations • 15 Sep 2020 • Sattar Vakili, Kia Khezeli, Victor Picheny
For the Matérn family of kernels, where lower bounds on $\gamma_T$ and on the regret in the frequentist setting are known, our results close a gap between the upper and lower bounds that is polynomial in $T$ (up to factors logarithmic in $T$).
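Schematically, the quantities involved take the following form for a Matérn-$\nu$ kernel in input dimension $d$ (constants and exact logarithmic factors as stated in the paper; the regret relation shown is the standard frequentist GP-UCB-type bound, not a new result):

    \gamma_T = \mathcal{O}\!\left(T^{\frac{d}{2\nu+d}} (\log T)^{\frac{2\nu}{2\nu+d}}\right),
    \qquad
    R_T = \tilde{\mathcal{O}}\!\left(\gamma_T \sqrt{T}\right).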
no code implementations • 25 Jun 2020 • Victor Picheny, Vincent Dutordoir, Artem Artemev, Nicolas Durrande
Many machine learning models are trained by running stochastic gradient descent.
no code implementations • NeurIPS 2021 • Sattar Vakili, Henry Moss, Artem Artemev, Vincent Dutordoir, Victor Picheny
We provide theoretical guarantees and show that the drastic reduction in computational complexity of scalable TS can be enjoyed without loss in the regret performance over the standard TS.
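For context, one standard (cubic-cost) Thompson sampling step over a discrete candidate set is sketched below, assuming a GPflow-style predict_f interface; the scalable variant analysed in the paper replaces the exact joint posterior sample with a cheap approximate one.

    import numpy as np

    def thompson_sampling_step(model, candidates, rng):
        """One standard TS step: sample the posterior jointly over the candidates
        and return the candidate minimising the sampled function."""
        mean, cov = model.predict_f(candidates, full_cov=True)   # GPflow-style call
        mean = np.asarray(mean).ravel()
        cov = np.squeeze(np.asarray(cov))          # (N, N); drop any leading output axis
        sample = rng.multivariate_normal(mean, cov)
        return candidates[np.argmin(sample)]       # next point to evaluate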
no code implementations • 12 Jan 2020 • Victor Picheny, Henry Moss, Léonard Torossian, Nicolas Durrande
In this paper, we propose new variational models for Bayesian quantile and expectile regression that are well-suited for heteroscedastic noise settings.
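For reference, the standard quantile (pinball) and expectile losses that such models target are, at level $\tau$ (textbook definitions, given here for orientation; the paper's variational likelihoods build on them):

    \ell_\tau(y, q) =
    \begin{cases}
      \tau\,(y - q)        & \text{if } y \ge q,\\
      (1 - \tau)\,(q - y)  & \text{if } y < q,
    \end{cases}
    \qquad
    \ell^{\mathrm{exp}}_\tau(y, m) =
    \begin{cases}
      \tau\,(y - m)^2       & \text{if } y \ge m,\\
      (1 - \tau)\,(y - m)^2 & \text{if } y < m.
    \end{cases}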
no code implementations • 5 Dec 2019 • Victor Picheny, Sattar Vakili, Artem Artemev
Bayesian optimisation is a powerful tool for solving expensive black-box problems, but it fails when the stationarity assumption made on the objective function is strongly violated, as is the case in particular for ill-conditioned or discontinuous objectives.
no code implementations • 29 Aug 2019 • David Gaudrie, Rodolphe Le Riche, Victor Picheny, Benoit Enaux, Vincent Herbert
Parametric shape optimization aims at minimizing an objective function f(x), where x is a vector of CAD parameters.
no code implementations • 17 Apr 2019 • Léonard Torossian, Aurélien Garivier, Victor Picheny
Finally, we present numerical experiments showing the dramatic impact of tight bounds on the optimization of quantiles and CVaR.
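For reference, the risk measures being optimised are defined, for a continuous loss-type random variable $Y$ and level $\alpha$, as (standard definitions, not taken from the paper):

    \mathrm{VaR}_\alpha(Y) = \inf\{\, y : \mathbb{P}(Y \le y) \ge \alpha \,\},
    \qquad
    \mathrm{CVaR}_\alpha(Y) = \mathbb{E}\!\left[\, Y \mid Y \ge \mathrm{VaR}_\alpha(Y) \,\right].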
no code implementations • 18 Feb 2019 • Mickaël Binois, Victor Picheny, Patrick Taillandier, Abderrahmane Habbal
An ongoing aim of research in multiobjective Bayesian optimization is to extend its applicability to a large number of objectives.
no code implementations • 23 Jan 2019 • Léonard Torossian, Victor Picheny, Robert Faivre, Aurélien Garivier
We report on an empirical study of the main strategies for quantile regression in the context of stochastic computer experiments.
no code implementations • 9 Nov 2018 • David Gaudrie, Rodolphe Le Riche, Victor Picheny, Benoit Enaux, Vincent Herbert
Multi-objective optimization aims at finding trade-off solutions to conflicting objectives.
no code implementations • 27 Sep 2018 • David Gaudrie, Rodolphe Le Riche, Victor Picheny, Benoit Enaux, Vincent Herbert
When the number of experiments is severely restricted and/or when the number of objectives increases, uncovering the whole set of Pareto optimal solutions is out of reach, even for surrogate-based approaches: the proposed solutions are sub-optimal or do not cover the front well.
no code implementations • 8 Nov 2016 • Victor Picheny, Mickael Binois, Abderrahmane Habbal
Game theory nowadays finds a broad range of applications in engineering and machine learning.
no code implementations • NeurIPS 2016 • Victor Picheny, Robert B. Gramacy, Stefan M. Wild, Sebastien Le Digabel
An augmented Lagrangian (AL) can convert a constrained optimization problem into a sequence of simpler (e.g., unconstrained) problems, which are then usually solved with local solvers.
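One common form of such an augmented Lagrangian for inequality constraints $c(x) \le 0$ is the following textbook construction (the paper's exact variant and update rules may differ):

    % Augmented Lagrangian for min f(x) s.t. c(x) <= 0, with multipliers lambda and penalty rho:
    L_A(x; \lambda, \rho) = f(x) + \lambda^\top c(x)
      + \frac{1}{2\rho} \sum_{j} \max\!\left(0,\, c_j(x)\right)^2 .
    % At each outer iteration the unconstrained subproblem min_x L_A is solved, then a typical
    % update is \lambda_j \leftarrow \max\!\left(0,\, \lambda_j + c_j(x^*)/\rho\right), with rho
    % decreased when the constraints remain violated.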