Practical Transfer Learning for Bayesian Optimization

6 Feb 2018  ·  Matthias Feurer, Benjamin Letham, Frank Hutter, Eytan Bakshy

Bayesian optimization has become a standard technique for hyperparameter optimization of machine learning algorithms. We consider the setting where previous optimization runs are available, and we wish to transfer their outcomes to a new optimization run and thereby accelerate the search. We develop a new hyperparameter-free ensemble model for Bayesian optimization, based on a linear combination of Gaussian Processes and Agnostic Bayesian Learning of Ensembles. We show that this is a generalization of two existing transfer learning extensions to Bayesian optimization and establish a worst-case bound compared to vanilla Bayesian optimization. Using a large collection of hyperparameter optimization benchmark problems, we demonstrate that our contributions substantially reduce optimization time compared to standard Gaussian process-based Bayesian optimization and improve over the current state-of-the-art for warm-starting Bayesian optimization.
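To make the ensemble idea concrete, here is a minimal sketch of a linear combination of Gaussian processes for transfer across tasks. This is not the authors' implementation: the bootstrap-based weighting below is a simplified stand-in for Agnostic Bayesian Learning of Ensembles (weight each base model by the probability it attains the lowest loss on resamples of the new task's observations), and all function names, variable names, and the toy tasks are illustrative assumptions.

```python
# Illustrative sketch only, not the paper's code: a weighted linear
# combination of per-task Gaussian processes, with ABLE-style bootstrap
# weights as a simplified stand-in for the paper's weighting scheme.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

def fit_gp(X, y):
    """Fit one Gaussian process to one task's observations."""
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, y)
    return gp

def bootstrap_weights(models, X_new, y_new, n_samples=200):
    """Weight each model by how often it has the lowest squared error
    on bootstrap resamples of the new task's observations."""
    n = len(y_new)
    wins = np.zeros(len(models))
    for _ in range(n_samples):
        idx = rng.integers(0, n, size=n)  # bootstrap resample
        losses = [np.mean((m.predict(X_new[idx]) - y_new[idx]) ** 2)
                  for m in models]
        wins[np.argmin(losses)] += 1
    return wins / wins.sum()

def ensemble_predict(models, weights, X):
    """Posterior mean of the linear combination of GPs."""
    means = np.stack([m.predict(X) for m in models])  # (n_models, n_points)
    return weights @ means

# Toy setup: two "previous" tasks and one new task, all 1-D.
X_prev1 = rng.uniform(0, 1, (20, 1)); y_prev1 = np.sin(6 * X_prev1[:, 0])
X_prev2 = rng.uniform(0, 1, (20, 1)); y_prev2 = np.cos(6 * X_prev2[:, 0])
X_new = rng.uniform(0, 1, (8, 1));    y_new = np.sin(6 * X_new[:, 0]) + 0.05

models = [fit_gp(X_prev1, y_prev1),   # GP from previous task 1
          fit_gp(X_prev2, y_prev2),   # GP from previous task 2
          fit_gp(X_new, y_new)]       # GP on the new task's few points
w = bootstrap_weights(models, X_new, y_new)
pred = ensemble_predict(models, w, X_new)
```

The weighted posterior mean can then drive a standard acquisition function, so a model fitted on a related previous task guides the search before the new task has many observations of its own.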
