Bayesian Optimisation
88 papers with code • 0 benchmarks • 0 datasets
Expensive black-box functions are a common problem in many disciplines, including tuning the parameters of machine learning algorithms, robotics, and other engineering design problems. Bayesian Optimisation is a principled and efficient technique for the global optimisation of these functions. The idea behind Bayesian Optimisation is to place a prior distribution over the target function and then update that prior with observations obtained by (expensively) evaluating the target function, producing a posterior predictive distribution. The posterior then informs where to make the next observation of the target function through the use of an acquisition function, which balances the exploitation of regions known to have good performance with the exploration of regions where there is little information about the function’s response.
Source: A Bayesian Approach for the Robust Optimisation of Expensive-to-Evaluate Functions
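The loop described above — fit a surrogate to the observations so far, maximise an acquisition function, evaluate the objective at that point, repeat — can be sketched as follows. This is a minimal illustration, not a reference implementation: it uses a Gaussian process surrogate from scikit-learn, an upper-confidence-bound acquisition function, and a cheap stand-in objective in place of a genuinely expensive black box.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Stand-in for an expensive black-box objective (maximum at x = 0.7).
def objective(x):
    return -(x - 0.7) ** 2

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(3, 1))      # initial "true" observations
y = objective(X).ravel()

candidates = np.linspace(0.0, 1.0, 201).reshape(-1, 1)

for _ in range(10):
    # Posterior over the objective, given all observations so far.
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, y)
    mu, sigma = gp.predict(candidates, return_std=True)

    # Acquisition: upper confidence bound. The mean term exploits regions
    # known to be good; the standard-deviation term explores uncertain ones.
    ucb = mu + 2.0 * sigma
    x_next = candidates[np.argmax(ucb)].reshape(1, 1)

    # Expensive evaluation of the chosen point.
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next).item())

best_x = X[np.argmax(y)].item()
```

Swapping the acquisition function (e.g. expected improvement in place of UCB) or the surrogate model changes the exploration/exploitation trade-off but leaves the overall loop unchanged.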
Benchmarks
These leaderboards are used to track progress in Bayesian Optimisation
Libraries
Use these libraries to find Bayesian Optimisation models and implementations
Latest papers with no code
MONGOOSE: Path-wise Smooth Bayesian Optimisation via Meta-learning
In Bayesian optimisation, we often seek to minimise the black-box objective functions that arise in real-world physical systems.
Delayed Feedback in Kernel Bandits
An abstraction of the problem can be formulated as a kernel-based bandit problem (also known as Bayesian optimisation), where a learner aims at optimising a kernelized function through sequential noisy observations.
Intrinsic Bayesian Optimisation on Complex Constrained Domain
Motivated by the success of Bayesian optimisation algorithms in the Euclidean space, we propose a novel approach to construct Intrinsic Bayesian optimisation (In-BO) on manifolds with a primary focus on complex constrained domains or irregular-shaped spaces arising as submanifolds of R², R³ and beyond.
Contextual Causal Bayesian Optimisation
Causal Bayesian optimisation (CaBO) combines causality with Bayesian optimisation (BO) and shows that there are situations where the optimal reward is not achievable if causal knowledge is ignored.
Inducing Point Allocation for Sparse Gaussian Processes in High-Throughput Bayesian Optimisation
Sparse Gaussian Processes are a key component of high-throughput Bayesian Optimisation (BO) loops; however, we show that existing methods for allocating their inducing points severely hamper optimisation performance.
Cell-Free Data Power Control Via Scalable Multi-Objective Bayesian Optimisation
Cell-free multi-user multiple input multiple output networks are a promising alternative to classical cellular architectures, since they have the potential to provide uniform service quality and high resource utilisation over the entire coverage area of the network.
Bayesian learning of feature spaces for multitasks problems
This paper introduces a novel approach for multi-task regression that connects Kernel Machines (KMs) and Extreme Learning Machines (ELMs) through the exploitation of the Random Fourier Features (RFFs) approximation of the RBF kernel.
Nonstationary Continuum-Armed Bandit Strategies for Automated Trading in a Simulated Financial Market
We approach the problem of designing an automated trading strategy that can consistently profit by adapting to changing market conditions.
A Two-Stage Bayesian Optimisation for Automatic Tuning of an Unscented Kalman Filter for Vehicle Sideslip Angle Estimation
This paper presents a novel methodology to auto-tune an Unscented Kalman Filter (UKF).
Information-theoretic Inducing Point Placement for High-throughput Bayesian Optimisation
By choosing inducing points to maximally reduce both global uncertainty and uncertainty in the maximum value of the objective function, we build surrogate models able to support high-precision high-throughput BO.