Bayesian Optimisation
84 papers with code • 0 benchmarks • 0 datasets
Expensive black-box functions arise in many disciplines, including tuning the hyperparameters of machine learning algorithms, robotics, and other engineering design problems. Bayesian Optimisation is a principled and sample-efficient technique for the global optimisation of such functions. The idea behind Bayesian Optimisation is to place a prior distribution over the target function and then update that prior with each (expensive) observation of the target function to produce a posterior predictive distribution. The posterior then informs where to make the next observation through an acquisition function, which balances exploiting regions known to perform well against exploring regions where there is little information about the function's response.
Source: A Bayesian Approach for the Robust Optimisation of Expensive-to-Evaluate Functions
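The prior–posterior–acquisition loop described above can be sketched in plain Python. This is a minimal illustration under stated assumptions, not any library's implementation: a zero-mean Gaussian process with an RBF kernel serves as the surrogate, expected improvement is the acquisition function, and a hypothetical quadratic stands in for the expensive black-box target.

```python
import math

def rbf(x1, x2, ls=0.3):
    # squared-exponential kernel on scalars
    return math.exp(-0.5 * (x1 - x2) ** 2 / ls ** 2)

def solve(A, b):
    # Gaussian elimination with partial pivoting (small systems only)
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def gp_posterior(X, y, xq, noise=1e-5):
    # posterior mean and variance of a zero-mean GP at query point xq
    K = [[rbf(a, b) + (noise if i == j else 0.0)
          for j, b in enumerate(X)] for i, a in enumerate(X)]
    k = [rbf(a, xq) for a in X]
    alpha = solve(K, y)   # K^{-1} y
    v = solve(K, k)       # K^{-1} k
    mu = sum(ki * ai for ki, ai in zip(k, alpha))
    var = max(1e-12, rbf(xq, xq) - sum(ki * vi for ki, vi in zip(k, v)))
    return mu, var

def expected_improvement(mu, var, best):
    # EI for minimisation: E[max(best - f, 0)] under the posterior
    s = math.sqrt(var)
    z = (best - mu) / s
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2)))
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
    return (best - mu) * cdf + s * pdf

def target(x):
    # hypothetical expensive black-box stand-in, minimum at x = 0.6
    return (x - 0.6) ** 2

X = [0.0, 0.5, 1.0]
y = [target(x) for x in X]
for _ in range(5):  # BO loop: fit posterior, maximise acquisition, evaluate
    best = min(y)
    grid = [i / 200 for i in range(201)]
    xn = max(grid, key=lambda x: expected_improvement(*gp_posterior(X, y, x), best))
    X.append(xn)
    y.append(target(xn))
print(min(y))  # best value found so far
```

Each iteration spends one "expensive" evaluation where expected improvement is highest, so the loop naturally alternates between refining around the incumbent and probing uncertain regions.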
Benchmarks
These leaderboards are used to track progress in Bayesian Optimisation
Libraries
Use these libraries to find Bayesian Optimisation models and implementations
Most implemented papers
End-to-End Meta-Bayesian Optimisation with Transformer Neural Processes
We enable this end-to-end framework with reinforcement learning (RL) to tackle the lack of labelled acquisition data.
Cheetah: Bridging the Gap Between Machine Learning and Particle Accelerator Physics with High-Speed, Differentiable Simulations
Machine learning has emerged as a powerful solution to the modern challenges in accelerator physics.
Theoretical Analysis of Bayesian Optimisation with Unknown Gaussian Process Hyper-Parameters
Bayesian optimisation has gained great popularity as a tool for optimising the parameters of machine learning algorithms and models.
Batch Bayesian Optimization via Local Penalization
The approach assumes that the function of interest, $f$, is a Lipschitz continuous function.
Asynchronous Parallel Bayesian Optimisation via Thompson Sampling
We design and analyse variations of the classical Thompson sampling (TS) procedure for Bayesian optimisation (BO) in settings where function evaluations are expensive, but can be performed in parallel.
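As a toy illustration of the Thompson sampling idea (a sketch, not the paper's algorithm): each selection draws one sample from the posterior over each candidate's value and evaluates wherever the draw looks best, so independent asynchronous workers can make picks without coordinating. The discrete candidate set and Gaussian posteriors here are assumptions for brevity.

```python
import math
import random

random.seed(0)

# Hypothetical discrete candidate set; true mean rewards are unknown to the sampler.
true_means = [0.3, 0.5, 0.7, 0.9]
counts = [0] * 4
sums = [0.0] * 4

def thompson_pick():
    # draw one sample from each candidate's Gaussian posterior, pick the best draw
    draws = []
    for i in range(4):
        n = counts[i]
        mean = sums[i] / n if n else 0.5   # vague prior mean before any data
        sd = 1.0 / math.sqrt(n + 1)        # uncertainty shrinks with observations
        draws.append(random.gauss(mean, sd))
    return max(range(4), key=lambda i: draws[i])

# each loop iteration could be an independent worker: no shared acquisition
# maximisation is needed, randomness alone spreads evaluations out
for _ in range(500):
    i = thompson_pick()
    counts[i] += 1
    sums[i] += random.gauss(true_means[i], 0.1)  # noisy evaluation
```

Because the posterior draws are random, early picks explore broadly; as posteriors sharpen, the draws concentrate on the best candidate.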
Generalising Random Forest Parameter Optimisation to Include Stability and Cost
We argue that error reduction is only one of several metrics that must be considered when optimizing random forest parameters for commercial applications.
Fast Information-theoretic Bayesian Optimisation
Information-theoretic Bayesian optimisation techniques have demonstrated state-of-the-art performance in tackling important global optimisation problems.
GPflowOpt: A Bayesian Optimization Library using TensorFlow
A novel Python framework for Bayesian optimization known as GPflowOpt is introduced.
Neural Architecture Search with Bayesian Optimisation and Optimal Transport
A common use case for BO in machine learning is model selection, where it is not possible to analytically model the generalisation performance of a statistical model, and we resort to noisy and expensive training and validation procedures to choose the best model.
Hyperparameter Learning via Distributional Transfer
Bayesian optimisation is a popular technique for hyperparameter learning but typically requires initial exploration even in cases where similar prior tasks have been solved.