In particular, we show that the number of runs used in many benchmarking studies, e.g., the default value of 15 suggested by the COCO environment, can be insufficient to reliably rank algorithms on well-known numerical optimization benchmarks.
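As a purely illustrative sketch (not the study's actual protocol), the following bootstrap estimate shows how often the ranking of two hypothetical algorithms flips when only a small number of runs is resampled; the performance data here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical per-run best-so-far values for two algorithms on one problem
# (in a real study these would come from COCO / IOHanalyzer data files).
results = {
    "alg_A": rng.lognormal(mean=-2.0, sigma=1.0, size=200),
    "alg_B": rng.lognormal(mean=-2.2, sigma=1.5, size=200),
}

def rank_flip_rate(a, b, n_runs, n_boot=2000):
    """Fraction of bootstrap samples of size n_runs in which the
    mean-based ranking of the two algorithms flips."""
    full_order = np.mean(a) < np.mean(b)
    flips = 0
    for _ in range(n_boot):
        sa = rng.choice(a, size=n_runs, replace=True)
        sb = rng.choice(b, size=n_runs, replace=True)
        flips += (np.mean(sa) < np.mean(sb)) != full_order
    return flips / n_boot

for n in (5, 15, 50, 100):
    print(n, rank_flip_rate(results["alg_A"], results["alg_B"], n))
```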
In contrast to other recent work on online per-run algorithm selection, we warm-start the second optimizer using information accumulated during the first optimization phase.
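A minimal sketch of the warm-starting idea, with stand-ins for the actual optimizers: the second phase (a simple (1+1)-ES here) starts from the incumbent of the first phase and derives its initial step size from the spread of the best first-phase samples instead of using a default value.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    return float(np.sum(x ** 2))

dim, budget = 10, 2000
switch = budget // 2

# Phase 1: random search, recording all evaluated points.
X1 = rng.uniform(-5, 5, size=(switch, dim))
y1 = np.array([sphere(x) for x in X1])

# Warm-start phase 2: mean = best point so far, step size from the spread
# of the best phase-1 points.
best = X1[np.argmin(y1)]
top = X1[np.argsort(y1)[: max(5, switch // 20)]]
sigma = float(np.mean(np.std(top, axis=0)))

x, fx = best.copy(), sphere(best)
for _ in range(budget - switch):
    cand = x + sigma * rng.standard_normal(dim)
    fc = sphere(cand)
    if fc < fx:
        x, fx, sigma = cand, fc, sigma * 1.1
    else:
        sigma *= 0.98
print("best after warm-started phase 2:", fx)
```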
In addition, we have shown that, by using classifiers that take into account the relevance of each feature to the model's accuracy, we are able to predict the status of individual modules in the CMA-ES configurations.
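A hedged illustration of this kind of pipeline on synthetic data: a feature-relevance filter (scikit-learn's SelectFromModel with impurity-based importances) is placed in front of the classifier that predicts whether a module is switched on; the feature matrix and target below are made up for the example.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)

# Hypothetical data: rows = features describing a problem, target = whether a
# given CMA-ES module is active in the best-performing configuration for it.
X = rng.normal(size=(300, 40))
y = (X[:, 3] + 0.5 * X[:, 17] + rng.normal(scale=0.5, size=300) > 0).astype(int)

# Keep only features whose impurity-based relevance exceeds the mean,
# then refit the classifier on the reduced feature set.
clf = make_pipeline(
    SelectFromModel(RandomForestClassifier(n_estimators=200, random_state=0)),
    RandomForestClassifier(n_estimators=200, random_state=0),
)
print(cross_val_score(clf, X, y, cv=5).mean())
```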
We study the quality and accuracy of performance regression and algorithm selection models in the scenario of predicting the performance of different algorithms after a fixed budget of function evaluations.
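The following sketch, on synthetic data, mirrors that fixed-budget setting: one regressor per solver predicts the best value reached after the budget, and the resulting selector picks, for each instance, the solver with the lowest predicted value.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

# Hypothetical setup: per-instance features and, for each of three solvers,
# the best function value reached after a fixed evaluation budget.
features = rng.normal(size=(500, 20))
perf = np.column_stack([
    np.exp(features[:, 0] + rng.normal(scale=0.3, size=500)),
    np.exp(features[:, 1] + rng.normal(scale=0.3, size=500)),
    np.exp(0.5 * features[:, 2] + rng.normal(scale=0.3, size=500)),
])

X_tr, X_te, y_tr, y_te = train_test_split(features, perf, random_state=0)

# One regressor per solver; the selector picks the solver with the lowest
# predicted fixed-budget value on each test instance.
models = [RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr[:, i])
          for i in range(perf.shape[1])]
pred = np.column_stack([m.predict(X_te) for m in models])
selected = pred.argmin(axis=1)
oracle = y_te.argmin(axis=1)
print("selector matches the oracle on", np.mean(selected == oracle), "of test instances")
```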
Lastly, a sensitivity analysis shows that the actual performance gain is strongly affected by the switching point, and that in some cases the switching point yielding the best actual performance differs from the one computed from the theoretical gain.
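An illustrative sweep over candidate switching points (synthetic problem and stand-in solvers; it covers only the empirical side of the analysis, not the theoretical gain computation):

```python
import numpy as np

rng = np.random.default_rng(3)

def sphere(x):
    return float(np.sum(x ** 2))

def run_pipeline(switch, budget=1000, dim=5):
    """Random search for `switch` evaluations, then a (1+1)-ES warm-started
    from the incumbent for the remaining budget; returns the final best value."""
    X = rng.uniform(-5, 5, size=(switch, dim))
    y = [sphere(x) for x in X]
    x, fx, sigma = X[int(np.argmin(y))], min(y), 1.0
    for _ in range(budget - switch):
        cand = x + sigma * rng.standard_normal(dim)
        fc = sphere(cand)
        if fc < fx:
            x, fx, sigma = cand, fc, sigma * 1.1
        else:
            sigma *= 0.98
    return fx

# Sensitivity of the realized performance to the switching point.
for switch in (50, 100, 200, 400, 800):
    vals = [run_pipeline(switch) for _ in range(10)]
    print(switch, np.median(vals))
```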
Here, we demonstrate that, at least in algorithms based on Differential Evolution, this choice induces notably different behaviours in terms of performance, disruptiveness, and population diversity.
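To make the distinction concrete, here is a self-contained comparison of the two classical crossover variants: with the same crossover rate, binomial crossover changes on average roughly cr·d components, whereas exponential crossover changes a short contiguous block, which is the kind of difference in disruptiveness referred to above.

```python
import numpy as np

rng = np.random.default_rng(4)

def binomial_crossover(x, v, cr):
    """Each component comes from the donor v independently with probability cr;
    one randomly chosen component is always taken from v."""
    d = len(x)
    mask = rng.random(d) < cr
    mask[rng.integers(d)] = True
    return np.where(mask, v, x)

def exponential_crossover(x, v, cr):
    """A contiguous (circular) block of components is copied from v;
    the block length follows a truncated geometric distribution."""
    d = len(x)
    child = x.copy()
    start = rng.integers(d)
    length = 1
    while rng.random() < cr and length < d:
        length += 1
    idx = [(start + i) % d for i in range(length)]
    child[idx] = v[idx]
    return child

# Average number of components changed by each variant ("disruptiveness").
d, cr, trials = 30, 0.5, 10000
x, v = np.zeros(d), np.ones(d)
for op in (binomial_crossover, exponential_crossover):
    changed = np.mean([np.sum(op(x, v, cr) != x) for _ in range(trials)])
    print(op.__name__, changed)
```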
IOHexperimenter can be used as a stand-alone tool or as part of a benchmarking pipeline that uses other components of IOHprofiler such as IOHanalyzer, the module for interactive performance analysis and visualization.
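A minimal usage sketch, assuming the `ioh` Python bindings of IOHexperimenter (attribute and parameter names such as `bounds`, `root`, and `folder_name` may differ slightly between versions): a random search is run on a BBOB problem while an Analyzer logger writes data that IOHanalyzer can read.

```python
# Hedged sketch: assumes the `ioh` Python bindings of IOHexperimenter.
import numpy as np
import ioh

problem = ioh.get_problem("Sphere", instance=1, dimension=5)

# Attach an Analyzer logger so IOHanalyzer can later read the produced data.
logger = ioh.logger.Analyzer(
    root="ioh_data", folder_name="random_search", algorithm_name="RandomSearch"
)
problem.attach_logger(logger)

rng = np.random.default_rng(0)
for run in range(5):
    for _ in range(1000):
        x = rng.uniform(problem.bounds.lb, problem.bounds.ub)
        problem(x)                 # each evaluation is logged automatically
    problem.reset()                # start the next independent run
```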
We find that anisotropy is very rare and that, even in cases where it is present, there are clear tests for structural bias (SB) that do not rely on any assumption of isotropy. We can therefore safely expand the suite of SB tests to encompass these kinds of deficiencies, which are not detected by the original tests.
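One common way to probe for such bias, sketched below on a stand-in optimizer: run the algorithm on an objective whose values are independent of the input, collect the final solutions over many runs, and apply a per-dimension Kolmogorov-Smirnov test against the uniform distribution; consistently small p-values indicate structural bias of the operators rather than of the problem.

```python
import numpy as np
from scipy.stats import kstest

rng = np.random.default_rng(5)

def f0(x):
    """Objective whose value is independent of x: any spatial preference in the
    final solutions can then only come from the algorithm itself."""
    return rng.random()

def run_algorithm(dim=5, budget=500):
    # Stand-in optimizer: a (1+1)-ES clipped to [0, 1]^dim.
    x, fx, sigma = rng.random(dim), f0(None), 0.2
    for _ in range(budget):
        cand = np.clip(x + sigma * rng.standard_normal(dim), 0.0, 1.0)
        fc = f0(cand)
        if fc < fx:
            x, fx = cand, fc
    return x

finals = np.array([run_algorithm() for _ in range(200)])

# Per-dimension Kolmogorov-Smirnov test against U(0, 1): small p-values in
# (almost) every dimension hint at structural bias.
for d in range(finals.shape[1]):
    print(d, kstest(finals[:, d], "uniform").pvalue)
```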
Many platforms for benchmarking optimization algorithms offer users the possibility of sharing their experimental data with the purpose of promoting reproducible and reusable research.
However, when introducing a new component into an existing algorithm, assessing its potential benefits is a challenging task.
In this short note, we describe our submission to the NeurIPS 2020 BBO challenge.
An R programming interface is provided for users who prefer finer control over the implemented functionalities.
One of the most challenging problems in evolutionary computation is to select, from its diverse family of solvers, one that performs well on a given problem.
In this work we compare sequential and integrated algorithm selection and configuration approaches for the case of selecting and tuning the best out of 4608 variants of the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) tested on the Black Box Optimization Benchmark (BBOB) suite.
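A schematic of the enumeration underlying such a study; the module names and the `run_variant` scorer below are hypothetical placeholders, not the actual modular CMA-ES interface.

```python
import itertools
import random

random.seed(0)

# Illustrative binary module switches; the real configuration space is larger.
modules = {
    "elitist": (False, True),
    "mirrored_sampling": (False, True),
    "active_update": (False, True),
    "threshold_convergence": (False, True),
}

def run_variant(config, problem_id):
    # Placeholder score: a real study would run the configured CMA-ES on the
    # BBOB problem and return e.g. the best-so-far value after a fixed budget.
    return random.random()

best = {}
for fid in range(1, 25):                      # the 24 BBOB problems
    scores = {
        values: run_variant(dict(zip(modules, values)), fid)
        for values in itertools.product(*modules.values())
    }
    best[fid] = dict(zip(modules, min(scores, key=scores.get)))
print(best[1])
```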
An analysis of module activation indicates which modules are most crucial for the different phases of optimizing each of the 24 benchmark problems.