no code implementations • 10 Dec 2024 • Diederick Vermetten, Jeroen Rook, Oliver L. Preuß, Jacob de Nobel, Carola Doerr, Manuel López-Ibáñez, Heike Trautmann, Thomas Bäck
Benchmarking is one of the key ways in which we can gain insight into the strengths and weaknesses of optimization algorithms.
no code implementations • 24 Sep 2024 • Jacob de Nobel, Diederick Vermetten, Thomas H. W. Bäck, Anna V. Kononova
For lower dimensionalities (below 10), we find that using as few as 32 unique low-discrepancy points performs similarly to, or better than, uniform sampling.
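For reference, a minimal sketch of the comparison the sentence describes, using SciPy's quasi-Monte Carlo module; the 5-dimensional setting and the centered-discrepancy measure below are illustrative choices, not the paper's experimental setup:

```python
# Sketch: 32 low-discrepancy (Sobol') points versus 32 i.i.d. uniform points.
import numpy as np
from scipy.stats import qmc

dim = 5
rng = np.random.default_rng(0)

sobol = qmc.Sobol(d=dim, scramble=True, seed=0)
low_disc = sobol.random_base2(m=5)        # 2**5 = 32 Sobol' points in [0, 1)^dim
uniform = rng.random((32, dim))           # 32 uniform random points

# Lower discrepancy means the point set covers the unit cube more evenly.
print("Sobol'  discrepancy:", qmc.discrepancy(low_disc))
print("uniform discrepancy:", qmc.discrepancy(uniform))
```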
no code implementations • 29 May 2024 • Saba Sadeghi Ahouei, Jacob de Nobel, Aneta Neumann, Thomas Bäck, Frank Neumann
Our experiments show that our approach is highly successful in resolving the instability of the performance ratios and evolves reliable sets of chance constraints on which different types of algorithms show significantly different performance.
no code implementations • 2 May 2024 • Jacob de Nobel, Diederick Vermetten, Anna V. Kononova, Ofer M. Shir, Thomas Bäck
Na\"ive restarts of global optimization solvers when operating on multimodal search landscapes may resemble the Coupon's Collector Problem, with a potential to waste significant function evaluations budget on revisiting the same basins of attractions.
1 code implementation • 10 Feb 2024 • Annie Wong, Jacob de Nobel, Thomas Bäck, Aske Plaat, Anna V. Kononova
We benchmark both deep policy networks and networks consisting of a single linear layer from observations to actions for three gradient-based methods, including Proximal Policy Optimization.
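A minimal sketch of the two policy parameterizations being compared; the layer sizes and observation/action dimensions below are placeholders, not the paper's configuration:

```python
# Deep policy network versus a single linear map from observations to actions.
import torch.nn as nn

obs_dim, act_dim = 8, 2  # placeholder sizes

deep_policy = nn.Sequential(
    nn.Linear(obs_dim, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, act_dim),
)

linear_policy = nn.Linear(obs_dim, act_dim)  # no hidden layers at all
```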
no code implementations • 29 Jun 2023 • François Clément, Diederick Vermetten, Jacob de Nobel, Alexandre D. Jesus, Luís Paquete, Carola Doerr
In this work we compare 8 popular numerical black-box optimization algorithms on the $L_{\infty}$ star discrepancy computation problem, using a wide set of instances in dimensions 2 to 15.
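To make the optimization target concrete, here is a naive sketch of the $L_{\infty}$ star discrepancy of a point set $P \subset [0,1]^d$, enumerating anchored boxes $[0, q)$ whose corners lie on the grid induced by the coordinates of $P$. This enumeration only approximates the exact value and scales exponentially with $d$, which is precisely why the computation is treated as a black-box optimization problem; it is not one of the solvers studied in the paper:

```python
import itertools
import numpy as np

def star_discrepancy_grid(points: np.ndarray) -> float:
    n, d = points.shape
    # candidate box corners: the observed coordinates in each dimension, plus 1.0
    axes = [np.unique(np.append(points[:, j], 1.0)) for j in range(d)]
    worst = 0.0
    for q in itertools.product(*axes):
        q = np.asarray(q)
        volume = float(np.prod(q))                            # volume of the box [0, q)
        fraction = float(np.all(points < q, axis=1).mean())   # empirical measure of P in the box
        worst = max(worst, abs(fraction - volume))
    return worst
```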
no code implementations • 25 Apr 2023 • André Thomaser, Jacob de Nobel, Diederick Vermetten, Furong Ye, Thomas Bäck, Anna V. Kononova
In this work, we use the notion of the resolution of continuous variables to discretize problems from the continuous domain.
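A sketch of one plausible reading of "resolution" (limiting a continuous variable to $2^k$ evenly spaced values on its domain); this interpretation and the function below are assumptions for illustration, not the paper's code:

```python
import numpy as np

def discretize(x: np.ndarray, lower: float, upper: float, bits: int) -> np.ndarray:
    """Snap continuous values to a grid of 2**bits levels on [lower, upper]."""
    levels = 2 ** bits - 1
    t = np.clip((x - lower) / (upper - lower), 0.0, 1.0)   # normalize to [0, 1]
    return lower + np.round(t * levels) / levels * (upper - lower)

x = np.array([-4.37, 0.123, 2.99])
print(discretize(x, lower=-5.0, upper=5.0, bits=4))  # snapped to a 16-level grid
```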
no code implementations • 8 Mar 2023 • Furong Ye, Frank Neumann, Jacob de Nobel, Aneta Neumann, Thomas Bäck
Parameter control has succeeded in accelerating the convergence process of evolutionary algorithms.
1 code implementation • 2 Feb 2023 • Frank Neumann, Aneta Neumann, Chao Qian, Viet Anh Do, Jacob de Nobel, Diederick Vermetten, Saba Sadeghi Ahouei, Furong Ye, Hao Wang, Thomas Bäck
Submodular functions play a key role in the area of optimization, as they allow one to model many real-world problems that exhibit diminishing returns.
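For reference, the diminishing-returns property is the standard characterization of a submodular set function $f : 2^V \to \mathbb{R}$: adding an element helps less the larger the set already is,

```latex
\[
  f(A \cup \{x\}) - f(A) \;\ge\; f(B \cup \{x\}) - f(B)
  \qquad \text{for all } A \subseteq B \subseteq V \text{ and } x \in V \setminus B.
\]
```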
no code implementations • 14 Nov 2022 • Jacob de Nobel, Anna V. Kononova, Jeroen Briaire, Johan Frijns, Thomas Bäck
In the second part of this paper, the Convolutional Neural Network surrogate model was used by an Evolutionary Algorithm to optimize the shape of the stimulus waveform in terms of energy efficiency.
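A structural sketch of surrogate-assisted evolutionary optimization as described in that sentence; the surrogate, the charge proxy, and the objective below are placeholders, not the paper's model or waveform encoding:

```python
import numpy as np

def surrogate_threshold(waveform: np.ndarray) -> float:
    """Placeholder for the trained CNN predicting the activation threshold."""
    return 1.0 + float(np.abs(waveform).max())   # purely illustrative

def charge(waveform: np.ndarray) -> float:
    """Placeholder charge/energy proxy for a candidate waveform."""
    return float(np.sum(waveform ** 2))

rng = np.random.default_rng(0)
mu, lam, n_samples = 10, 40, 32
parents = rng.normal(size=(mu, n_samples))

for _ in range(100):
    idx = rng.integers(0, mu, size=lam)
    offspring = parents[idx] + 0.1 * rng.normal(size=(lam, n_samples))  # Gaussian mutation
    # illustrative objective: charge of each waveform scaled to its predicted threshold
    fitness = np.array([charge(surrogate_threshold(w) * w) for w in offspring])
    parents = offspring[np.argsort(fitness)[:mu]]                       # (mu, lambda) selection
```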
no code implementations • 20 Apr 2022 • Ana Kostovska, Anja Jankovic, Diederick Vermetten, Jacob de Nobel, Hao Wang, Tome Eftimov, Carola Doerr
In contrast to other recent work on online per-run algorithm selection, we warm-start the second optimizer using information accumulated during the first optimization phase.
no code implementations • 13 Apr 2022 • Anja Jankovic, Diederick Vermetten, Ana Kostovska, Jacob de Nobel, Tome Eftimov, Carola Doerr
We study the quality and accuracy of performance regression and algorithm selection models in the scenario of predicting different algorithm performances after a fixed budget of function evaluations.
1 code implementation • 7 Nov 2021 • Jacob de Nobel, Furong Ye, Diederick Vermetten, Hao Wang, Carola Doerr, Thomas Bäck
IOHexperimenter can be used as a stand-alone tool or as part of a benchmarking pipeline that uses other components of IOHprofiler such as IOHanalyzer, the module for interactive performance analysis and visualization.
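A sketch of a minimal benchmarking run with IOHexperimenter's Python bindings (the `ioh` package on PyPI), logging data that IOHanalyzer can read. Class and argument names follow the documentation as recalled and may differ between versions; treat the details as assumptions rather than a verified recipe:

```python
import numpy as np
import ioh

problem = ioh.get_problem("Sphere", instance=1, dimension=5)    # BBOB test problem
logger = ioh.logger.Analyzer(root="ioh_data", folder_name="rs",
                             algorithm_name="RandomSearch")
problem.attach_logger(logger)                                   # records evaluations for IOHanalyzer

rng = np.random.default_rng(42)
for _ in range(1000):
    x = rng.uniform(problem.bounds.lb, problem.bounds.ub)       # random search baseline
    problem(x)                                                  # evaluation is logged automatically
```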
no code implementations • 16 Apr 2021 • Jacob de Nobel, Hao Wang, Thomas Bäck
From our analysis, we see that the features can classify the CMA-ES variants, as well as the function groups, reasonably well, and that they show potential for predicting the performance of those variants.
1 code implementation • 25 Feb 2021 • Jacob de Nobel, Diederick Vermetten, Hao Wang, Carola Doerr, Thomas Bäck
However, when introducing a new component into an existing algorithm, assessing its potential benefits is a challenging task.