We compare Dragonfly to a suite of other packages and algorithms for global optimisation and demonstrate that integrating the above methods yields significant improvements in the performance of BO.
A common use case for BO in machine learning is model selection, where it is not possible to analytically model the generalisation performance of a statistical model, and we resort to noisy and expensive training and validation procedures to choose the best model.
We consider the problem of approximate Bayesian parameter inference in non-linear state-space models with intractable likelihoods.
We argue that error reduction is only one of several metrics that must be considered when optimizing random forest parameters for commercial applications.
Batch Bayesian optimisation (BO) has been successfully applied to hyperparameter tuning with parallel computing, but synchronous batches waste resources: workers that finish their jobs early sit idle until the slowest job in the batch completes.
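The asynchronous alternative hinted at here can be sketched with a plain worker pool: the moment any evaluation finishes, its slot is refilled with a fresh suggestion, so no worker waits on the slowest job in a batch. This is a minimal stdlib-only illustration, not any particular package's API; `suggest` is a hypothetical stand-in that samples uniformly at random where a real optimiser would fit a surrogate model and maximise an acquisition function.

```python
import concurrent.futures as cf
import random
import time

def objective(x):
    # Stand-in for an expensive, noisy training/validation run;
    # jobs take different amounts of time, as in real tuning.
    time.sleep(random.uniform(0.01, 0.05))
    return (x - 0.3) ** 2

def suggest():
    # Hypothetical acquisition step: uniform random sampling here;
    # a real BO loop would propose points from a fitted surrogate.
    return random.uniform(0.0, 1.0)

def tune_async(budget=16, workers=4):
    # Asynchronous loop: refill a worker's slot as soon as it frees up,
    # rather than waiting for a whole synchronous batch to finish.
    scores = []
    with cf.ThreadPoolExecutor(max_workers=workers) as pool:
        pending = {pool.submit(objective, suggest()) for _ in range(workers)}
        launched = workers
        while pending:
            done, pending = cf.wait(pending, return_when=cf.FIRST_COMPLETED)
            for fut in done:
                scores.append(fut.result())
                if launched < budget:
                    pending.add(pool.submit(objective, suggest()))
                    launched += 1
    return scores

scores = tune_async()
best = min(scores)
```

With varying job durations, the synchronous version's wall-clock time is governed by the slowest job in every batch, whereas this loop keeps all workers busy until the evaluation budget is spent.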
Bayesian experimental design involves the optimal allocation of resources in an experiment, with the aim of optimising cost and performance.