Switching between Numerical Black-box Optimization Algorithms with Warm-starting Policies

13 Apr 2022  ·  Dominik Schröder, Diederick Vermetten, Hao Wang, Carola Doerr, Thomas Bäck

When solving optimization problems with black-box approaches, the algorithms gather valuable information about the problem instance during the optimization process. This information is used to adjust the distributions from which new solution candidates are sampled. Indeed, a key objective in evolutionary computation is to identify the most effective ways to collect and exploit such instance knowledge. However, while considerable work is devoted to adjusting the hyperparameters of black-box optimization algorithms on the fly or to exchanging some of their modular components, we know very little about how to effectively switch between different black-box optimization algorithms. In this work, we build on the recent study of Vermetten et al. [GECCO 2020], who presented a data-driven approach to investigate promising switches between pairs of algorithms for numerical black-box optimization. We replicate their approach with a portfolio of five algorithms and investigate whether the predicted performance gains are realized when executing the most promising switches. Our results suggest that with a single switch between two algorithms, we outperform the best static choice among the five algorithms on 48 of the 120 considered problem instances (the 24 BBOB functions in five dimensions). We also show that, for switching between BFGS and CMA-ES, proper warm-starting of the parameters is crucial to realize large performance gains. Lastly, a sensitivity analysis shows that the actual per-run performance gain is strongly affected by the switching point, and that in some cases the switching point yielding the best actual performance differs from the one predicted by the theoretical gain.
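To make the switching mechanism concrete, the following is a minimal sketch of a single BFGS-to-CMA-ES switch with a simple warm-start. It is an illustration, not the exact policy evaluated in the paper: the Rosenbrock function stands in for a BBOB instance, the switch point is a fixed iteration budget, and the warm-start reuses the BFGS incumbent as the new mean and derives per-coordinate step sizes from the diagonal of BFGS's inverse-Hessian estimate. It assumes `scipy` and the `cma` package are installed.

```python
import numpy as np
import cma
from scipy.optimize import minimize, rosen

dim = 10
rng = np.random.default_rng(42)
x0 = rng.uniform(-5, 5, dim)

# Phase 1: run BFGS up to the (here: fixed, hypothetical) switching point.
res = minimize(rosen, x0, method="BFGS", options={"maxiter": 50})

# Warm-start CMA-ES: reuse the incumbent as the initial mean and derive
# per-coordinate step sizes from the inverse-Hessian diagonal estimated by BFGS.
x_warm = res.x
stds = np.sqrt(np.clip(np.diag(res.hess_inv), 1e-12, None))
sigma0 = float(np.median(stds))

es = cma.CMAEvolutionStrategy(
    x_warm,
    sigma0,
    {"CMA_stds": (stds / sigma0).tolist(), "maxfevals": 2000, "verbose": -9},
)

# Phase 2: CMA-ES continues from the warm-started state (ask/tell loop).
while not es.stop():
    candidates = es.ask()
    es.tell(candidates, [rosen(x) for x in candidates])

print("best f-value after the switch:", es.result.fbest)
```

Deriving per-coordinate scales from the BFGS inverse-Hessian diagonal is only one plausible way to transfer information across the switch; a fixed initial step size, or a full covariance transfer, are equally reasonable starting points for experimentation.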
