Bayesian optimization provides sample-efficient global optimization for a broad range of applications, including automatic machine learning, molecular chemistry, and experimental design.
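To make the setting concrete, the following is a minimal, self-contained sketch of a BO loop on a 1-D toy objective, using an RBF-kernel Gaussian-process surrogate and the expected-improvement acquisition; all names are illustrative and do not come from any particular package.

```python
# Minimal Bayesian-optimization sketch: GP surrogate + expected improvement.
import numpy as np
from scipy.stats import norm

def rbf(A, B, ls=0.2):
    """Squared-exponential kernel between two sets of 1-D points."""
    return np.exp(-0.5 * ((A[:, None] - B[None, :]) / ls) ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    """GP posterior mean and standard deviation at test points Xs."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(Xs, X)
    mu = Ks @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mu, np.sqrt(np.clip(var, 1e-12, None))

def expected_improvement(mu, sd, best):
    """EI for minimization: expected amount by which we beat the incumbent."""
    z = (best - mu) / sd
    return (best - mu) * norm.cdf(z) + sd * norm.pdf(z)

def objective(x):                      # expensive black-box function (toy stand-in)
    return np.sin(3 * x) + x ** 2 - 0.7 * x

rng = np.random.default_rng(0)
X = rng.uniform(-1, 2, size=3)         # small initial design
y = objective(X)
grid = np.linspace(-1, 2, 400)         # candidate points for the acquisition
for _ in range(15):                    # BO loop: fit surrogate, pick point, evaluate
    mu, sd = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sd, y.min()))]
    X, y = np.append(X, x_next), np.append(y, objective(x_next))
print("best x:", X[np.argmin(y)], "best f:", y.min())
```

The sample efficiency comes from the surrogate: each new evaluation is chosen where the model predicts the largest expected gain, rather than by exhaustive or random search.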
We compare Dragonfly to a suite of other packages and algorithms for global optimization and demonstrate that integrating the above methods yields significant improvements in BO performance.
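As a point of reference for Dragonfly's interface, a hedged usage sketch follows; it assumes the pip-installable dragonfly-opt package and its top-level minimise_function helper, and the exact signature may differ between versions, so treat this as illustrative rather than definitive.

```python
# Hedged usage sketch of Dragonfly's top-level API (assumes dragonfly-opt).
from dragonfly import minimise_function

def objective(x):                      # toy 2-D black-box function
    return (x[0] - 1.0) ** 2 + (x[1] + 0.5) ** 2

domain = [[-5.0, 10.0], [-5.0, 15.0]]  # one [lower, upper] pair per dimension
# max_capital is the evaluation budget; Dragonfly runs the BO loop internally
min_val, min_pt, history = minimise_function(objective, domain, max_capital=30)
print("minimum value:", min_val, "at", min_pt)
```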
We introduce GPflowOpt, a novel Python framework for Bayesian optimization.
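A sketch of the intended usage pattern, reconstructed from memory of the library's documented example, is given below; the module paths and the GPflow 0.x-era model API (gpflow.gpr.GPR) are assumptions and may not match the installed version.

```python
# Sketch of GPflowOpt usage; module paths and model API are assumptions.
import numpy as np
import gpflow
from gpflowopt.domain import ContinuousParameter
from gpflowopt.design import LatinHyperCube
from gpflowopt.acquisition import ExpectedImprovement
from gpflowopt.bo import BayesianOptimizer

def fx(X):                                   # objective maps (n, d) -> (n, 1)
    X = np.atleast_2d(X)
    return np.sum(np.square(X), axis=1, keepdims=True)

# Box domain built by summing one ContinuousParameter per dimension
domain = ContinuousParameter('x1', -2, 2) + ContinuousParameter('x2', -1, 2)

X = LatinHyperCube(11, domain).generate()    # space-filling initial design
Y = fx(X)
model = gpflow.gpr.GPR(X, Y, gpflow.kernels.Matern52(2, ARD=True))
acq = ExpectedImprovement(model)             # acquisition wraps the GP surrogate
optimizer = BayesianOptimizer(domain, acq)
result = optimizer.optimize(fx, n_iter=15)
print(result)
```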
A common use case for BO in machine learning is model selection, where it is not possible to analytically model the generalization performance of a statistical model, and we resort to noisy and expensive training and validation procedures to choose the best model.
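One concrete instance of this use case, tuning an SVM's hyperparameters against noisy cross-validation scores, is sketched below using scikit-optimize's gp_minimize; this library is a stand-in chosen for illustration, not one of the packages discussed above.

```python
# BO-based model selection: cross-validation error as the black-box objective.
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from skopt import gp_minimize
from skopt.space import Real

X, y = load_digits(return_X_y=True)

def validation_error(params):
    """Noisy, expensive objective: mean CV error for one hyperparameter setting."""
    C, gamma = params
    score = cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()
    return 1.0 - score                       # gp_minimize minimizes

space = [Real(1e-3, 1e3, prior='log-uniform', name='C'),
         Real(1e-6, 1e0, prior='log-uniform', name='gamma')]
result = gp_minimize(validation_error, space, n_calls=25, random_state=0)
print("best (C, gamma):", result.x, "CV error:", result.fun)
```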
We concentrate on one such model, the variational auto-encoder, which we argue is an important building block in hierarchical probabilistic models of language.
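For reference, the variational auto-encoder pairs a decoder $p_\theta(x \mid z)$ with an amortized encoder $q_\phi(z \mid x)$ and is trained by maximizing the evidence lower bound (ELBO) on the marginal log-likelihood:
\[
\log p_\theta(x) \;\ge\; \mathbb{E}_{q_\phi(z \mid x)}\!\big[\log p_\theta(x \mid z)\big] \;-\; \mathrm{KL}\!\big(q_\phi(z \mid x)\,\big\|\,p(z)\big),
\]
where $p(z)$ is the prior over the latent code, typically a standard Gaussian.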
Dynamic paired comparison models, such as Elo and Glicko, are frequently used for sports prediction and ranking players or teams.
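As a concrete anchor for this model family, the classic Elo update moves each rating toward the observed result in proportion to how surprising that result was; the sketch below uses the conventional 400-point logistic scale and an assumed K-factor of 32 (the K-factor is a tuning choice, not a fixed constant). Glicko extends this scheme by additionally tracking a rating deviation, so uncertain ratings update faster.

```python
# Minimal Elo rating update on the conventional 400-point logistic scale.
def expected_score(r_a, r_b):
    """Probability that player A beats player B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def elo_update(r_a, r_b, score_a, k=32.0):
    """Return updated ratings after one game; score_a is 1, 0.5, or 0."""
    e_a = expected_score(r_a, r_b)
    return r_a + k * (score_a - e_a), r_b + k * ((1.0 - score_a) - (1.0 - e_a))

# Example: a 1500-rated player upsets a 1700-rated opponent
print(elo_update(1500, 1700, score_a=1.0))   # winner gains what the loser loses
```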
We consider the problem of approximate Bayesian parameter inference in non-linear state-space models with intractable likelihoods.
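For context, a state-space model with static parameters $\theta$ posits latent states $x_{0:T}$ and observations $y_{1:T}$ generated as
\[
x_0 \sim \mu_\theta(\cdot), \qquad x_t \mid x_{t-1} \sim f_\theta(\,\cdot \mid x_{t-1}), \qquad y_t \mid x_t \sim g_\theta(\,\cdot \mid x_t),
\]
so the likelihood needed for Bayesian inference over $\theta$ marginalizes the entire latent trajectory,
\[
p_\theta(y_{1:T}) \;=\; \int \mu_\theta(x_0) \prod_{t=1}^{T} f_\theta(x_t \mid x_{t-1})\, g_\theta(y_t \mid x_t)\; \mathrm{d}x_{0:T}.
\]
For non-linear $f_\theta$ and $g_\theta$ this integral has no closed form, and when the densities can only be simulated from rather than evaluated, even pointwise likelihood computation is unavailable, which is what motivates approximate (likelihood-free) inference.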