no code implementations • 27 Jan 2025 • Marcel Chwiałkowski, Benjamin Doerr, Martin S. Krejca
For the best hypothetical population size, our result matches, up to polylogarithmic factors, the typical quadratic runtime that many randomized search heuristics exhibit on LeadingOnes.
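The quadratic baseline mentioned here is easy to observe empirically. A minimal sketch, assuming the standard (1+1) EA with mutation rate $1/n$ as the illustrative heuristic (not the algorithm analyzed in this paper):

```python
import random

def leading_ones(x):
    """LeadingOnes: the number of consecutive 1-bits at the start of x."""
    count = 0
    for bit in x:
        if bit != 1:
            break
        count += 1
    return count

def one_plus_one_ea(n, rng):
    """(1+1) EA with standard bit mutation (rate 1/n) on LeadingOnes.
    Returns the number of fitness evaluations until the optimum is found."""
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = leading_ones(x)
    evals = 1
    while fx < n:
        y = [b ^ 1 if rng.random() < 1 / n else b for b in x]
        fy = leading_ones(y)
        evals += 1
        if fy >= fx:  # elitist selection: keep the offspring unless it is worse
            x, fx = y, fy
    return evals

rng = random.Random(42)
runs = [one_plus_one_ea(32, rng) for _ in range(10)]
# the empirical average is typically near the known (e-1)/2 * n^2 evaluations
print(sum(runs) / len(runs))
```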
1 code implementation • 16 Dec 2024 • Benjamin Doerr, Tudor Ivan, Martin S. Krejca
We prove that this modified NSGA-II can optimize the three benchmarks efficiently also for many objectives, in contrast to the exponential lower runtime bound previously shown for OneMinMax with three or more objectives.
1 code implementation • 16 Dec 2024 • Benjamin Doerr, Martin S. Krejca, Günter Rudolph
Unfortunately, the vast majority of these results regard problems with binary or continuous decision variables -- the theoretical analysis of randomized search heuristics for unbounded integer domains is almost nonexistent.
no code implementations • 15 Nov 2024 • Benjamin Doerr, Dimitri Korkotashvili, Martin S. Krejca
The NSGA-II is the most prominent multi-objective evolutionary algorithm (cited more than 50,000 times).
no code implementations • 31 Aug 2024 • Denis Antipov, Benjamin Doerr
Randomized search heuristics (RSHs) are generally believed to be robust to noise.
no code implementations • 25 Jul 2024 • Weijie Zheng, Yao Gao, Benjamin Doerr
These results suggest that our truthful version of the NSGA-II has the same good performance as the classic NSGA-II in two objectives, but can resolve the drastic problems in more than two objectives.
no code implementations • 19 Jul 2024 • Benjamin Doerr, Johannes F. Lutzeyer
This is at least as good as the runtime of simple elitist EAs.
no code implementations • 2 May 2024 • Benjamin Doerr, Martin S. Krejca, Noé Weeks
For standard bit mutation, we prove an expected runtime of $O(n N \log n + n^{n/(2N)} N \log n)$ function evaluations.
no code implementations • 19 Apr 2024 • Simon Wietheger, Benjamin Doerr
This is the first time that such tight bounds are proven for many-objective uses of these MOEAs.
1 code implementation • 5 Apr 2024 • Benjamin Doerr, Martin S. Krejca, Nguyen Vu
Besides providing a superior algorithm for the TSS problem, this work shows that randomized parameter choices and elementary greedy heuristics can give better results than complex algorithms and costly parameter tuning.
no code implementations • 4 Apr 2024 • Benjamin Doerr, Joshua Knowles, Aneta Neumann, Frank Neumann
We consider whether conditions exist under which block-coordinate descent is asymptotically efficient in evolutionary multi-objective optimization, addressing an open problem.
no code implementations • 2 Apr 2024 • Denis Antipov, Benjamin Doerr, Alexandra Ivanova
The only previous result in this direction regarded the less realistic one-bit noise model, required a population size super-linear in the problem size, and proved a runtime guarantee roughly cubic in the noiseless runtime for the OneMax benchmark.
no code implementations • 13 Mar 2024 • Benjamin Doerr, Andrew James Kelley
In their recent work, C. Doerr and Krejca (Transactions on Evolutionary Computation, 2023) proved upper bounds on the expected runtime of the randomized local search heuristic on generalized Needle functions.
no code implementations • 16 Dec 2023 • Weijie Zheng, Benjamin Doerr
To this aim, we first propose a many-objective counterpart, the m-objective mOJZJ, of the bi-objective OJZJ, which is the first many-objective multimodal benchmark for runtime analysis.
1 code implementation • 6 Oct 2023 • Benjamin Doerr, Martin S. Krejca
We show that the bivariate EDA mutual-information-maximizing input clustering, without any problem-specific modification, quickly generates a model that behaves very similarly to a theoretically ideal model for EBOM, which samples each of the exponentially many optima with the same maximal probability.
no code implementations • 22 May 2023 • Sacha Cerf, Benjamin Doerr, Benjamin Hebras, Yakob Kahane, Simon Wietheger
Recently, the first mathematical runtime guarantees have been obtained for this algorithm, however only for synthetic benchmark problems.
no code implementations • 17 May 2023 • Matthieu Dinot, Benjamin Doerr, Ulysse Hennebelle, Sebastian Will
We prove that when bit-wise prior noise with rate $p \le \alpha/n$, $\alpha$ a suitable constant, is present, the \emph{simple evolutionary multi-objective optimizer} (SEMO) without any adjustments to cope with noise finds the Pareto front of the OneMinMax benchmark in time $O(n^2\log n)$, just as in the case without noise.
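The SEMO and the OneMinMax benchmark can be sketched in a few lines. This is the noise-free setting; keying the archive by the number of one-bits is a simplifying assumption that is exact for OneMinMax, where solutions with the same number of one-bits have identical objective values:

```python
import random

def semo_oneminmax(n, rng, max_iters=100000):
    """SEMO on OneMinMax, f(x) = (n - |x|_1, |x|_1): every search point is
    Pareto-optimal, so the Pareto front is covered once the archive holds one
    solution per number of one-bits (n + 1 values). Each iteration mutates a
    uniformly chosen archive member by flipping one uniformly chosen bit."""
    x = tuple(rng.randint(0, 1) for _ in range(n))
    pop = {sum(x): x}  # one representative per objective value
    iters = 0
    while len(pop) < n + 1 and iters < max_iters:
        parent = rng.choice(list(pop.values()))
        i = rng.randrange(n)
        child = parent[:i] + (parent[i] ^ 1,) + parent[i + 1:]
        pop.setdefault(sum(child), child)
        iters += 1
    return iters, len(pop)
```

The $O(n^2 \log n)$ guarantee from the entry above bounds the expected number of iterations of exactly this kind of loop (there under noise).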
no code implementations • 8 May 2023 • Alexandra Ivanova, Denis Antipov, Benjamin Doerr
Evolutionary algorithms are known to be robust to noise in the evaluation of the fitness.
1 code implementation • 21 Apr 2023 • Benjamin Doerr, Taha El Ghazi El Houssaini, Amirhossein Rajabi, Carsten Witt
Even with the optimal temperature (the only parameter of the MA), the MA optimizes most cliff functions less efficiently than simple elitist evolutionary algorithms (EAs), which can only leave the local optimum by generating a superior solution possibly far away.
no code implementations • 20 Apr 2023 • Benjamin Doerr, Arthur Dremaux, Johannes Lutzeyer, Aurélien Stumpf
In recent work, Lissovoi, Oliveto, and Warwicker (Artificial Intelligence (2023)) proved that the Move Acceptance Hyper-Heuristic (MAHH) leaves the local optimum of the multimodal cliff benchmark with remarkable efficiency.
no code implementations • 13 Mar 2023 • Benjamin Doerr, Andrei Lissovoi, Pietro S. Oliveto
Recently it has been proven that simple GP systems can efficiently evolve a conjunction of $n$ variables if they are equipped with the minimal required components.
no code implementations • 28 Feb 2023 • Firas Ben Jedidia, Benjamin Doerr, Martin S. Krejca
Roughly speaking, when the variables take $r$ different values, the time for genetic drift to become significant is $r$ times shorter than in the binary case.
no code implementations • 24 Feb 2023 • Benjamin Doerr, Aymen Echarghaoui, Mohammed Jamal, Martin S. Krejca
From this better understanding of the population diversity, we obtain stronger runtime guarantees, among them the statement that for all $c\ln(n)\le\mu \le n/\log n$, with $c$ a suitable constant, the runtime of the $(\mu+1)$ GA on $\mathrm{Jump}_k$, with $k \ge 3$, is $O(n^{k-1})$.
no code implementations • 16 Feb 2023 • Benjamin Doerr, Andrew James Kelley
We also use this method to analyze the runtime of the $(1+1)$ evolutionary algorithm on a new benchmark consisting of $n/\ell$ plateaus of effective size $2^\ell-1$ which have to be optimized sequentially in a LeadingOnes fashion.
no code implementations • 23 Nov 2022 • Weijie Zheng, Benjamin Doerr
The NSGA-II is one of the most prominent algorithms to solve multi-objective optimization problems.
1 code implementation • 15 Nov 2022 • Simon Wietheger, Benjamin Doerr
In this work, we provide the first mathematical runtime analysis of the NSGA-III, a refinement of the NSGA-II aimed at better handling more than two objectives.
no code implementations • 7 Oct 2022 • Benjamin Doerr, Omar El Hadri, Adrien Pinard
The $(1+(\lambda,\lambda))$ genetic algorithm is a recently proposed single-objective evolutionary algorithm with several interesting properties.
no code implementations • 28 Sep 2022 • Benjamin Doerr, Zhongdi Qu
Due to the more complicated population dynamics of the NSGA-II, none of the existing runtime guarantees for this algorithm is accompanied by a non-trivial lower bound.
no code implementations • 18 Aug 2022 • Benjamin Doerr, Zhongdi Qu
Very recently, the first mathematical runtime analyses for the NSGA-II, the most common multi-objective evolutionary algorithm, have been conducted.
no code implementations • 5 Jul 2022 • Benjamin Doerr, Yassine Ghannane, Marouane Ibn Brahim
We observe that it not only leads to simpler proofs, but also reduces the runtime on jump functions with odd jump size by a factor of $\Theta(n)$.
no code implementations • 22 Jun 2022 • Benjamin Doerr, Marc Dufay
We propose a general formulation of a univariate estimation-of-distribution algorithm (EDA).
no code implementations • 18 Jun 2022 • Weijie Zheng, Benjamin Doerr
Building on a recent quantitative analysis of how the population size leads to genetic drift, we design a smart-restart mechanism for EDAs.
no code implementations • 7 May 2022 • Quentin Renau, Johann Dreo, Alain Peres, Yann Semet, Carola Doerr, Benjamin Doerr
The exact modeling of these instances is complex, as the quality of the configurations depends on a large number of parameters, on internal radar processing, and on the terrains on which the radars need to be placed.
no code implementations • 28 Apr 2022 • Benjamin Doerr, Zhongdi Qu
Very recently, the first mathematical runtime analyses of the multi-objective evolutionary optimizer NSGA-II have been conducted.
no code implementations • 15 Apr 2022 • Benjamin Doerr, Yassine Ghannane, Marouane Ibn Brahim
We observe that it not only leads to simpler proofs, but also reduces the runtime on jump functions with odd jump size by a factor of $\Theta(n)$.
no code implementations • 5 Apr 2022 • Benjamin Doerr, Amirhossein Rajabi, Carsten Witt
We prove that Simulated Annealing with an appropriate cooling schedule computes arbitrarily tight constant-factor approximations to the minimum spanning tree problem in polynomial time.
no code implementations • 5 Mar 2022 • Weijie Zheng, Benjamin Doerr
In this work, we study how well it approximates the Pareto front when the population size is smaller.
no code implementations • 28 Jan 2022 • Benjamin Doerr, Amirhossein Rajabi
Two mechanisms have recently been proposed that can significantly speed up finding distant improving solutions via mutation, namely using a random mutation rate drawn from a heavy-tailed distribution ("fast mutation", Doerr et al. (2017)) and increasing the mutation strength based on stagnation detection (Rajabi and Witt (2020)).
1 code implementation • 16 Dec 2021 • Weijie Zheng, Benjamin Doerr
The non-dominated sorting genetic algorithm II (NSGA-II) is the most intensively used multi-objective evolutionary algorithm (MOEA) in real-world applications.
no code implementations • 14 Sep 2021 • Shouda Wang, Weijie Zheng, Benjamin Doerr
Our finding that the unary unbiased black-box complexity is only $O(n^2)$ suggests the Metropolis algorithm as an interesting candidate and we prove that it solves the DLB problem in quadratic time.
no code implementations • 7 May 2021 • Henry Bambury, Antoine Bultel, Benjamin Doerr
We prove that several previous results extend to this more general class: for all $k \le \frac{n^{1/3}}{\ln n}$ and $\delta < k$, the optimal mutation rate for the $(1+1)$ EA is $\frac{\delta}{n}$, and the fast $(1+1)$ EA runs faster than the classical $(1+1)$ EA by a factor super-exponential in $\delta$.
no code implementations • 14 Apr 2021 • Denis Antipov, Maxim Buzdalov, Benjamin Doerr
On the other hand, this algorithm is also very efficient on jump functions, where the best static parameters are very different from those necessary to optimize simple problems.
no code implementations • 7 Apr 2021 • Benjamin Doerr, Timo Kötzing
One of the first and easiest-to-use techniques for proving run time bounds for evolutionary algorithms is the so-called method of fitness levels by Wegener.
no code implementations • 1 Feb 2021 • Quentin Renau, Johann Dreo, Carola Doerr, Benjamin Doerr
We show that the classification accuracy transfers to settings in which several instances are involved in training and testing.
no code implementations • 14 Dec 2020 • Weijie Zheng, Benjamin Doerr
As a first step towards a deeper understanding of how evolutionary algorithms solve multimodal multiobjective problems, we propose the OJZJ problem, a bi-objective problem composed of two objectives isomorphic to the classic jump function benchmark.
no code implementations • 16 Jul 2020 • Benjamin Doerr, Martin S. Krejca
In their recent work, Lehre and Nguyen (FOGA 2019) show that the univariate marginal distribution algorithm (UMDA) needs time exponential in the parent population size to optimize the DeceptiveLeadingBlocks (DLB) problem.
no code implementations • 30 Jun 2020 • Benjamin Doerr, Frank Neumann
The theory of evolutionary computation for discrete search spaces has made significant progress in the last ten years.
no code implementations • 22 Jun 2020 • Denis Antipov, Maxim Buzdalov, Benjamin Doerr
The mathematical runtime analysis of evolutionary algorithms traditionally regards the time an algorithm needs to find a solution of a certain quality when initialized with a random population.
no code implementations • 19 Jun 2020 • Quentin Renau, Carola Doerr, Johann Dreo, Benjamin Doerr
While, not unexpectedly, increasing the number of sample points gives more robust estimates for the feature values, to our surprise we find that the feature value approximations for different sampling strategies do not converge to the same value.
no code implementations • 8 Jun 2020 • Benjamin Doerr
We use an elementary argument building on group actions to prove that the selection-free steady state genetic algorithm analyzed by Sutton and Witt (GECCO 2019) takes an expected number of $\Omega(2^n / \sqrt n)$ iterations to find any particular target search point.
no code implementations • 5 Jun 2020 • Denis Antipov, Benjamin Doerr
To obtain this performance, however, a non-standard parameter setting depending on the jump size $k$ was used.
no code implementations • 2 May 2020 • Benjamin Doerr
We discuss in more detail Lehre's (PPSN 2010) \emph{negative drift in populations} method, one of the most general tools to prove lower bounds on the runtime of non-elitist mutation-based evolutionary algorithms for discrete search spaces.
no code implementations • 20 Apr 2020 • Maxim Buzdalov, Benjamin Doerr, Carola Doerr, Dmitry Vinokurov
In this work, we conduct an in-depth study on the advantages and the limitations of fixed-target analyses.
no code implementations • 15 Apr 2020 • Benjamin Doerr, Weijie Zheng
One of the key difficulties in using estimation-of-distribution algorithms is choosing the population size(s) appropriately: Too small values lead to genetic drift, which can cause enormous difficulties.
no code implementations • 14 Apr 2020 • Denis Antipov, Benjamin Doerr, Vitalii Karavaev
In this work, we conduct the first runtime analysis of this algorithm on a multimodal problem class, the jump functions benchmark.
1 code implementation • 14 Apr 2020 • Denis Antipov, Maxim Buzdalov, Benjamin Doerr
In this first runtime analysis of a crossover-based algorithm using a heavy-tailed choice of the mutation rate, we show an even stronger impact.
no code implementations • 13 Apr 2020 • Benjamin Doerr
We argue that proven exponential upper bounds on runtimes, an established area in classic algorithms, are interesting also in heuristic search and we prove several such results.
no code implementations • 10 Apr 2020 • Benjamin Doerr, Martin Krejca
With elementary means, we prove a stronger run time guarantee for the univariate marginal distribution algorithm (UMDA) optimizing the LeadingOnes benchmark function in the desirable regime with low genetic drift.
no code implementations • 2 Apr 2020 • Benjamin Doerr
This is the first runtime result for a non-elitist algorithm on a multi-modal problem that is tight apart from lower order terms.
no code implementations • 26 Nov 2019 • Benjamin Doerr, Carola Doerr, Aneta Neumann, Frank Neumann, Andrew M. Sutton
In this paper, we investigate submodular optimization problems with chance constraints.
no code implementations • 31 Oct 2019 • Benjamin Doerr, Weijie Zheng
This paper further proves that for PBIL with parameters $\mu$, $\lambda$, and $\rho$, in an expected number of $\Theta(\mu/\rho^2)$ iterations the sampling frequency of a neutral bit leaves the interval $[\Theta(\rho/\mu), 1-\Theta(\rho/\mu)]$, after which the same value is always sampled for this bit; that is, the frequency approaches the corresponding boundary value with maximum speed.
no code implementations • 18 Aug 2019 • Benjamin Doerr
We prove that any choice of the hypothetical population size leads to a runtime that, with high probability, is at least exponential in the jump size $k$.
no code implementations • 17 Apr 2019 • Benjamin Doerr
In this work, we show that any choice of the hypothetical population size leads to a runtime that, with high probability, is at least exponential in the jump size.
no code implementations • 15 Apr 2019 • Denis Antipov, Benjamin Doerr, Quentin Yang
Understanding when evolutionary algorithms are efficient or not, and how they efficiently solve problems, is one of the central research tasks in evolutionary computation.
no code implementations • 11 Apr 2019 • Benjamin Doerr, Timo Kötzing
Drift analysis aims at translating the expected progress of an evolutionary algorithm (or more generally, a random process) into a probabilistic guarantee on its run time (hitting time).
no code implementations • 28 Mar 2019 • Benjamin Doerr, Andrei Lissovoi, Pietro S. Oliveto
Recently it has been proved that simple GP systems can efficiently evolve the conjunction of $n$ variables if they are equipped with the minimal required components.
no code implementations • 26 Mar 2019 • Benjamin Doerr
We prove that the compact genetic algorithm (cGA) with hypothetical population size $\mu = \Omega(\sqrt n \log n) \cap \text{poly}(n)$ with high probability finds the optimum of any $n$-dimensional jump function with jump size $k < \frac 1 {20} \ln n$ in $O(\mu \sqrt n)$ iterations.
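A minimal sketch of the cGA with the usual frequency borders $[1/n, 1-1/n]$; OneMax stands in as the fitness function for brevity (the paper's analysis concerns jump functions):

```python
import random

def cga(n, mu, rng, max_iters=200000):
    """Compact genetic algorithm on OneMax: instead of a population it keeps a
    frequency vector p, samples two offspring per iteration, and shifts each
    differing frequency by 1/mu toward the better sample (mu = hypothetical
    population size). Frequencies are capped to [1/n, 1 - 1/n]."""
    p = [0.5] * n
    for it in range(1, max_iters + 1):
        x = [1 if rng.random() < pi else 0 for pi in p]
        y = [1 if rng.random() < pi else 0 for pi in p]
        if sum(y) > sum(x):
            x, y = y, x  # x is now the better of the two samples
        for i in range(n):
            if x[i] != y[i]:
                step = 1 / mu if x[i] == 1 else -1 / mu
                p[i] = min(1 - 1 / n, max(1 / n, p[i] + step))
        if sum(x) == n:  # the better sample is the optimum
            return it
    return max_iters
```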
1 code implementation • 7 Feb 2019 • Benjamin Doerr, Carola Doerr, Johannes Lengler
The one-fifth success rule is one of the best-known and most widely accepted techniques to control the parameters of evolutionary algorithms.
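The mechanism itself is simple: multiply a parameter by $F$ after a success and divide it by $F^{1/4}$ after a failure, which keeps it stable exactly at a success rate of $1/5$. The paper studies this rule for the offspring population size of the $(1+(\lambda,\lambda))$ GA; using it here to control a mutation strength on OneMax is an illustrative assumption, not the paper's algorithm:

```python
import random

def one_fifth_ea(n, rng, F=1.5, max_evals=100000):
    """(1+1) EA on OneMax with self-adjusting mutation strength r: the
    mutation rate is r/n; r grows by factor F on success and shrinks by
    F**(1/4) on failure (the one-fifth success rule)."""
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = sum(x)
    r = 1.0
    evals = 1
    while fx < n and evals < max_evals:
        y = [b ^ 1 if rng.random() < r / n else b for b in x]
        fy = sum(y)
        evals += 1
        if fy > fx:
            x, fx = y, fy
            r = min(n / 2, r * F)        # success: increase the strength
        else:
            r = max(0.5, r / F ** 0.25)  # failure: decrease it slowly
    return evals, fx
```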
no code implementations • 1 Feb 2019 • Benjamin Doerr, Carola Doerr, Frank Neumann
We propose a simple diversity mechanism that prevents this behavior, thereby reducing the re-optimization time for LeadingOnes to $O(\gamma\delta n)$, where $\gamma$ is the population size used by the diversity mechanism and $\delta \le \gamma$ the Hamming distance of the new optimum from the previous solution.
no code implementations • 28 Dec 2018 • Denis Antipov, Benjamin Doerr
In this work, we analyze this long-standing problem and show the asymptotically tight result that the runtime $T$, the number of iterations until the optimum is found, satisfies \[E[T] = \Theta\bigg(\frac{n\log n}{\lambda}+\frac{n}{\lambda / \mu} + \frac{n\log^+\log^+ \lambda/ \mu}{\log^+ \lambda / \mu}\bigg),\] where $\log^+ x := \max\{1, \log x\}$ for all $x > 0$.
no code implementations • 20 Dec 2018 • Peyman Afshani, Manindra Agrawal, Benjamin Doerr, Carola Doerr, Kasper Green Larsen, Kurt Mehlhorn
We study the query complexity of a permutation-based variant of the guessing game Mastermind.
no code implementations • 9 Dec 2018 • Benjamin Doerr, Weijie Zheng
On the technical side, we observe that the strong stochastic dependencies in the random experiment describing a run of BDE prevent us from proving all desired results with the mathematical rigor that was successfully used in the analysis of other evolutionary algorithms.
no code implementations • 30 Nov 2018 • Benjamin Doerr, Carsten Witt, Jing Yang
We propose and analyze a self-adaptive version of the $(1,\lambda)$ evolutionary algorithm in which the current mutation rate is part of the individual and thus also subject to mutation.
no code implementations • 10 Jul 2018 • Benjamin Doerr, Martin Krejca
Estimation-of-distribution algorithms (EDAs) are randomized search heuristics that create a probabilistic model of the solution space, which is updated iteratively, based on the quality of the solutions sampled according to the model.
no code implementations • 9 Jul 2018 • Benjamin Doerr, Carola Doerr, Jing Yang
It has been observed that some working principles of evolutionary algorithms, in particular, the influence of the parameters, cannot be understood from results on the asymptotic order of the runtime, but only from more precise results.
no code implementations • 6 Jun 2018 • Benjamin Doerr, Timo Kötzing, J. A. Gregor Lagodzinski, Johannes Lengler
While many optimization problems work with a fixed number of decision variables and thus a fixed-length representation of possible solutions, genetic programming (GP) works on variable-length representations.
no code implementations • 4 Jun 2018 • Denis Antipov, Benjamin Doerr
To gain a better theoretical understanding of how evolutionary algorithms (EAs) cope with plateaus of constant fitness, we propose the $n$-dimensional Plateau$_k$ function as natural benchmark and analyze how different variants of the $(1 + 1)$ EA optimize it.
no code implementations • 16 Apr 2018 • Benjamin Doerr, Carola Doerr
Parameter control aims at realizing performance gains through a dynamic choice of the parameters which determine the behavior of the underlying optimization algorithm.
no code implementations • 20 Jan 2018 • Benjamin Doerr
This chapter collects several probabilistic tools that proved to be useful in the analysis of randomized search heuristics.
no code implementations • 13 Jan 2018 • Benjamin Doerr
Apart from few exceptions, the mathematical runtime analysis of evolutionary algorithms is mostly concerned with expected runtimes.
no code implementations • 1 Dec 2017 • Benjamin Doerr
We give an elementary proof of the fact that a binomial random variable $X$ with parameters $n$ and $p$, where $0.29/n \le p < 1$, strictly exceeds its expectation with probability at least $1/4$.
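The statement is easy to probe numerically; a Monte-Carlo sketch at $p = 1/n$ (which satisfies $p \ge 0.29/n$):

```python
import random

def prob_exceeds_mean(n, p, trials, rng):
    """Monte-Carlo estimate of P(X > E[X]) for X ~ Bin(n, p)."""
    mean = n * p
    hits = sum(
        1
        for _ in range(trials)
        if sum(1 for _ in range(n) if rng.random() < p) > mean
    )
    return hits / trials

# n = 20, p = 1/20: the exact value of P(X > 1) is about 0.264, above 1/4
print(prob_exceeds_mean(20, 0.05, 20000, random.Random(1)))
```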
no code implementations • 14 Apr 2017 • Maxim Buzdalov, Benjamin Doerr
We show that this problem can be overcome by equipping the self-adjusting GA with an upper limit for the population size.
no code implementations • 7 Apr 2017 • Benjamin Doerr, Christian Gießen, Carsten Witt, Jing Yang
We propose a new way to self-adjust the mutation rate in population-based evolutionary algorithms in discrete search spaces.
2 code implementations • 9 Mar 2017 • Benjamin Doerr, Huu Phuoc Le, Régis Makhmara, Ta Duy Nguyen
We prove that the $(1+1)$ EA with this heavy-tailed mutation rate optimizes any $\mathrm{Jump}_{m,n}$ function in a time that is only a small polynomial (in $m$) factor above the one stemming from the optimal rate for this $m$.
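The heavy-tailed ("fast") mutation operator itself is short: sample the rate as $\alpha/n$ with $\alpha$ drawn from a power-law distribution on $\{1, \dots, n/2\}$, then apply standard bit mutation with that rate. A sketch, using the exponent $\beta = 1.5$ recommended in the paper:

```python
import random

def sample_alpha(upper, beta, rng):
    """Sample alpha in {1, ..., upper} with P(alpha) proportional to alpha^(-beta)."""
    support = range(1, upper + 1)
    weights = [a ** -beta for a in support]
    return rng.choices(support, weights=weights)[0]

def heavy_tailed_mutation(x, rng, beta=1.5):
    """One 'fast mutation' step: draw a heavy-tailed rate alpha/n, then apply
    standard bit mutation with that rate. Large alpha (many flipped bits) is
    rare but only polynomially so, which helps to cross fitness gaps."""
    n = len(x)
    alpha = sample_alpha(n // 2, beta, rng)
    return [b ^ 1 if rng.random() < alpha / n else b for b in x]

rng = random.Random(7)
print(heavy_tailed_mutation([0] * 16, rng))
```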
no code implementations • 12 Apr 2016 • Benjamin Doerr, Carola Doerr, Timo Kötzing
The most common representation in evolutionary computation is the bit string.
no code implementations • 4 Apr 2016 • Benjamin Doerr
The $(1+(\lambda,\lambda))$ genetic algorithm is one of the few algorithms for which a super-constant speed-up through the use of crossover could be proven.
no code implementations • 19 Jun 2015 • Benjamin Doerr, Carola Doerr
We first improve the upper bound on the runtime to $O(\max\{n\log(n)/\lambda, n\lambda \log\log(\lambda)/\log(\lambda)\})$.
no code implementations • 19 Jun 2015 • Benjamin Doerr, Carola Doerr, Timo Kötzing
For their setting, in which the solution length is sampled from a geometric distribution, we provide mutation rates that yield an expected optimization time that is of the same order as that of the (1+1) EA knowing the solution length.
no code implementations • 15 Jun 2015 • Laurent Hoeltgen, Markus Mainberger, Sebastian Hoffmann, Joachim Weickert, Ching Hoo Tang, Simon Setzer, Daniel Johannsen, Frank Neumann, Benjamin Doerr
Moreover, it is more generic than other data optimisation approaches for the sparse inpainting problem, since it can also be extended to nonlinear inpainting operators such as EED.
no code implementations • 13 Apr 2015 • Benjamin Doerr, Carola Doerr
While evolutionary algorithms are known to be very successful for a broad range of applications, the algorithm designer is often left with many algorithmic choices, for example, the size of the population, the mutation rates, and the crossover rates of the algorithm.
no code implementations • 30 Mar 2014 • Benjamin Doerr, Carola Doerr, Timo Kötzing
We analyze the unbiased black-box complexity of jump functions with small, medium, and large sizes of the fitness plateau surrounding the optimal solution.
no code implementations • 29 Aug 2013 • Benjamin Doerr, Carola Doerr
Motivated by a problem in the theory of randomized search heuristics, we give a very precise analysis for the coupon collector problem where the collector starts with a random set of coupons (chosen uniformly from all sets).
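A uniformly random initial set corresponds to including each coupon independently with probability $1/2$. A quick simulation sketch (the parameter `q` generalizing the inclusion probability is an addition for illustration, not part of the paper's setting):

```python
import random

def coupon_collector_with_start(n, rng, q=0.5):
    """Draws until all n coupons are collected, starting from an initial set
    that contains each coupon independently with probability q (q = 0.5 gives
    a uniformly random initial set, chosen uniformly from all subsets)."""
    have = {i for i in range(n) if rng.random() < q}
    draws = 0
    while len(have) < n:
        have.add(rng.randrange(n))
        draws += 1
    return draws

rng = random.Random(2024)
# one sample; for q = 0.5 the expectation is roughly n * H_{n/2} draws
print(coupon_collector_with_start(100, rng))
```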