no code implementations • 15 Feb 2024 • Diederick Vermetten, Carola Doerr, Hao Wang, Anna V. Kononova, Thomas Bäck
The number of proposed iterative optimization heuristics is growing steadily, and with this growth, there have been many points of discussion within the wider community.
no code implementations • 18 Dec 2023 • Diederick Vermetten, Furong Ye, Thomas Bäck, Carola Doerr
Choosing a set of benchmark problems is often a key component of any empirical evaluation of iterative optimization heuristics.
no code implementations • 14 Oct 2023 • Ana Kostovska, Gjorgjina Cenikj, Diederick Vermetten, Anja Jankovic, Ana Nikolikj, Urban Skvorc, Peter Korosec, Carola Doerr, Tome Eftimov
Our proposed method creates algorithm behavior meta-representations, constructs a graph from a set of algorithms based on their meta-representation similarity, and applies a graph algorithm to select a final portfolio of diverse, representative, and non-redundant algorithms.
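A minimal sketch of such a graph-based portfolio selection, assuming cosine similarity over hypothetical meta-representation vectors and a greedy maximal-independent-set rule for keeping non-redundant representatives; the concrete meta-representations and graph algorithm used in the paper may differ.

```python
import numpy as np

def select_portfolio(meta_repr: np.ndarray, threshold: float = 0.9) -> list[int]:
    """Greedy portfolio selection from algorithm meta-representations.

    meta_repr: one row per algorithm (hypothetical behavior meta-representation).
    Two algorithms are considered redundant if their cosine similarity exceeds
    `threshold`; a greedy maximal independent set keeps one of them.
    """
    # Cosine similarity matrix between all pairs of algorithms.
    norms = np.linalg.norm(meta_repr, axis=1, keepdims=True)
    sim = (meta_repr @ meta_repr.T) / (norms @ norms.T)

    # Adjacency: an edge means "too similar", i.e. redundant.
    adjacency = sim > threshold
    np.fill_diagonal(adjacency, False)

    selected, excluded = [], set()
    # Visit algorithms from least to most redundant (fewest similar neighbours first).
    for idx in np.argsort(adjacency.sum(axis=1)):
        if idx in excluded:
            continue
        selected.append(int(idx))
        excluded.update(np.flatnonzero(adjacency[idx]).tolist())
    return selected

rng = np.random.default_rng(0)
portfolio = select_portfolio(rng.normal(size=(10, 16)))
print("selected algorithm indices:", portfolio)
```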
no code implementations • 29 Sep 2023 • Elena Raponi, Nathanael Rakotonirina Carraz, Jérémy Rapin, Carola Doerr, Olivier Teytaud
BO-based algorithms are popular in the ML community, as they are used for hyperparameter optimization and more generally for algorithm configuration.
no code implementations • 30 Jun 2023 • Ana Kostovska, Anja Jankovic, Diederick Vermetten, Sašo Džeroski, Tome Eftimov, Carola Doerr
Performance complementarity of solvers available to tackle black-box optimization problems gives rise to the important task of algorithm selection (AS).
no code implementations • 29 Jun 2023 • François Clément, Diederick Vermetten, Jacob de Nobel, Alexandre D. Jesus, Luís Paquete, Carola Doerr
In this work we compare 8 popular numerical black-box optimization algorithms on the $L_{\infty}$ star discrepancy computation problem, using a wide set of instances in dimensions 2 to 15.
no code implementations • 18 Jun 2023 • Diederick Vermetten, Furong Ye, Thomas Bäck, Carola Doerr
Extending a recent suggestion to generate new instances for numerical black-box optimization benchmarking by interpolating pairs of the well-established BBOB functions from the COmparing COntinuous Optimizers (COCO) platform, we propose in this work a further generalization that allows multiple affine combinations of the original instances and arbitrarily chosen locations of the global optima.
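A simplified illustration of the construction, with sphere and Rastrigin standing in for BBOB components and a convex combination in log-space after shifting both components to a chosen optimum location; the exact normalization and scaling of the proposed generalization differ in their details.

```python
import numpy as np

def sphere(x):
    return float(np.sum(x ** 2))

def rastrigin(x):
    return float(10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))

def affine_combination(components, weights, x_opt):
    """Build a new problem instance from existing ones (simplified illustration).

    Each component is shifted so that its optimum lies at the chosen location
    `x_opt`, and the log-scaled component values are combined with the given
    affine weights.
    """
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()  # affine: weights sum to one

    def combined(x):
        x = np.asarray(x, dtype=float)
        # log1p keeps the combination well-behaved when one component dominates.
        values = [np.log1p(f(x - x_opt)) for f in components]
        return float(np.expm1(np.dot(weights, values)))

    return combined

dim = 5
x_opt = np.full(dim, 0.3)  # arbitrarily chosen location of the global optimum
f_new = affine_combination([sphere, rastrigin], weights=[0.7, 0.3], x_opt=x_opt)
print(f_new(x_opt), f_new(np.zeros(dim)))  # 0.0 at the optimum, positive elsewhere
```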
1 code implementation • 8 Jun 2023 • Gjorgjina Cenikj, Gašper Petelin, Carola Doerr, Peter Korošec, Tome Eftimov
The application of machine learning (ML) models to the analysis of optimization algorithms requires the representation of optimization problems using numerical features.
1 code implementation • 7 Jun 2023 • Carolin Benjamins, Elena Raponi, Anja Jankovic, Carola Doerr, Marius Lindauer
Bayesian Optimization (BO) is a class of surrogate-based, sample-efficient algorithms for optimizing black-box problems with small evaluation budgets.
no code implementations • 1 Jun 2023 • Ana Nikolikj, Sašo Džeroski, Mario Andrés Muñoz, Carola Doerr, Peter Korošec, Tome Eftimov
In black-box optimization, it is essential to understand why an algorithm instance works on a set of problem instances while failing on others and provide explanations of its behavior.
no code implementations • 31 May 2023 • Ana Nikolikj, Gjorgjina Cenikj, Gordana Ispirova, Diederick Vermetten, Ryan Dieter Lang, Andries Petrus Engelbrecht, Carola Doerr, Peter Korošec, Tome Eftimov
A key component of automated algorithm selection and configuration, which in most cases are performed using supervised machine learning (ML) methods, is a well-performing predictive model.
no code implementations • 30 May 2023 • Ana Nikolikj, Michal Pluháček, Carola Doerr, Peter Korošec, Tome Eftimov
That is, instead of considering cosine distance in the feature space, we consider a weighted distance measure, with weights depending on the relevance of the feature for the regression model.
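A small sketch of such a relevance-weighted distance, with hypothetical feature vectors and weights (for instance, the feature importances of the trained regression model):

```python
import numpy as np

def weighted_cosine_distance(u, v, w):
    """Cosine distance in which each feature is scaled by its relevance weight w."""
    u, v, w = (np.asarray(a, dtype=float) for a in (u, v, w))
    uw, vw = u * np.sqrt(w), v * np.sqrt(w)
    cos_sim = np.dot(uw, vw) / (np.linalg.norm(uw) * np.linalg.norm(vw))
    return 1.0 - cos_sim

# Hypothetical landscape-feature vectors of two problem instances ...
instance_a = np.array([0.8, 0.1, 3.2, 0.05])
instance_b = np.array([0.7, 0.9, 3.0, 0.40])

# ... and relevance weights, e.g. feature importances of the regression model.
weights = np.array([0.5, 0.05, 0.4, 0.05])

print("unweighted:", weighted_cosine_distance(instance_a, instance_b, np.ones(4)))
print("weighted:  ", weighted_cosine_distance(instance_a, instance_b, weights))
```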
no code implementations • 8 Mar 2023 • Diederick Vermetten, Furong Ye, Carola Doerr
By analyzing performance trajectories on more function combinations, we also show that aspects such as the scaling of objective functions and placement of the optimum can greatly impact how these results are interpreted.
1 code implementation • 2 Mar 2023 • Maria Laura Santoni, Elena Raponi, Renato De Leone, Carola Doerr
Bayesian Optimization (BO) is a class of black-box, surrogate-based heuristics that can efficiently optimize problems that are expensive to evaluate, and hence admit only small evaluation budgets.
no code implementations • 23 Feb 2023 • Deyao Chen, Maxim Buzdalov, Carola Doerr, Nguyen Dang
Dynamic Algorithm Configuration (DAC) tackles the question of how to automatically learn policies to control parameters of algorithms in a data-driven fashion.
no code implementations • 23 Feb 2023 • Carola Doerr, Duri Andrea Janett, Johannes Lengler
In this paper we investigate how this result generalizes if standard bit mutation is replaced by an arbitrary unbiased mutation operator.
no code implementations • 24 Jan 2023 • Ana Kostovska, Diederick Vermetten, Sašo Džeroski, Panče Panov, Tome Eftimov, Carola Doerr
In this work, we evaluate a performance prediction model built on top of the extension of the recently proposed OPTION ontology.
no code implementations • 23 Jan 2023 • Ana Nikolikj, Carola Doerr, Tome Eftimov
Per-instance automated algorithm configuration and selection have been gaining significant momentum in evolutionary computation in recent years.
no code implementations • 21 Nov 2022 • Ana Kostovska, Carola Doerr, Sašo Džeroski, Dragi Kocev, Panče Panov, Tome Eftimov
To address this algorithm selection problem, we investigate in this work the quality of an automated approach that uses characteristics of the datasets - so-called features - and a trained algorithm selector to choose which algorithm to apply for a given task.
no code implementations • 21 Nov 2022 • Ana Kostovska, Diederick Vermetten, Carola Doerr, Saso Džeroski, Panče Panov, Tome Eftimov
Many optimization algorithm benchmarking platforms allow users to share their experimental data to promote reproducible and reusable research.
1 code implementation • 17 Nov 2022 • Carolin Benjamins, Anja Jankovic, Elena Raponi, Koen van der Blom, Marius Lindauer, Carola Doerr
Bayesian optimization (BO) algorithms form a class of surrogate-based heuristics, aimed at efficiently computing high-quality solutions for numerical black-box optimization problems.
1 code implementation • 2 Nov 2022 • Carolin Benjamins, Elena Raponi, Anja Jankovic, Koen van der Blom, Maria Laura Santoni, Marius Lindauer, Carola Doerr
We also compare this to a random schedule and round-robin selection of EI and PI.
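For illustration, the textbook definitions of EI and PI for minimization together with the two simple schedules mentioned above; the exact schedules studied in the paper may differ in their details.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    """Textbook EI for minimization, given posterior mean/std of the surrogate."""
    sigma = np.maximum(sigma, 1e-12)
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def probability_of_improvement(mu, sigma, f_best):
    """Textbook PI for minimization."""
    sigma = np.maximum(sigma, 1e-12)
    return norm.cdf((f_best - mu) / sigma)

def acquisition_schedule(n_iterations, kind="round_robin", seed=0):
    """Yield one acquisition function per BO iteration."""
    rng = np.random.default_rng(seed)
    functions = [expected_improvement, probability_of_improvement]
    for t in range(n_iterations):
        if kind == "round_robin":
            yield functions[t % 2]            # EI, PI, EI, PI, ...
        else:
            yield functions[rng.integers(2)]  # uniformly random choice per iteration

for t, acq in enumerate(acquisition_schedule(4, kind="round_robin")):
    print(t, acq.__name__)
```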
1 code implementation • 9 Sep 2022 • Nina Bulanova, Arina Buzdalova, Carola Doerr
In this work, we first show that the re-optimization approach suggested by Doerr et al. reaches a limit when the problem instances are prone to more frequent changes.
no code implementations • 9 Sep 2022 • Risto Trajanov, Ana Nikolikj, Gjorgjina Cenikj, Fabien Teytaud, Mathurin Videau, Olivier Teytaud, Tome Eftimov, Manuel López-Ibáñez, Carola Doerr
Algorithm selection wizards are effective and versatile tools that automatically select an optimization algorithm given high-level information about the problem and available computational resources, such as the number and type of decision variables, the maximal number of evaluations, the possibility to parallelize evaluations, etc.
no code implementations • 7 May 2022 • Quentin Renau, Johann Dreo, Alain Peres, Yann Semet, Carola Doerr, Benjamin Doerr
The exact modeling of these instances is complex, as the quality of the configurations depends on a large number of parameters, on internal radar processing, and on the terrains on which the radars need to be placed.
no code implementations • 28 Apr 2022 • Kirill Antonov, Elena Raponi, Hao Wang, Carola Doerr
Bayesian Optimization (BO) is a surrogate-based global optimization strategy that relies on a Gaussian Process regression (GPR) model to approximate the objective function and an acquisition function to suggest candidate points.
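A minimal BO loop in this spirit, assuming scikit-learn's GaussianProcessRegressor as the surrogate and a lower-confidence-bound acquisition optimized by plain random search; this is a generic sketch, not the specific algorithm proposed in the paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):                       # expensive black box (toy stand-in)
    return np.sum((x - 0.3) ** 2, axis=-1)

rng = np.random.default_rng(1)
dim, budget = 2, 20
X = rng.uniform(0, 1, size=(5, dim))    # initial design
y = objective(X)

for _ in range(budget - len(X)):
    # 1. Fit the GPR surrogate to all evaluated points.
    gpr = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gpr.fit(X, y)
    # 2. Suggest the next point by minimizing a lower confidence bound
    #    (here via simple random search over the domain).
    candidates = rng.uniform(0, 1, size=(2048, dim))
    mu, sigma = gpr.predict(candidates, return_std=True)
    x_next = candidates[np.argmin(mu - 2.0 * sigma)]
    # 3. Evaluate the true objective and add the point to the data set.
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next))

print("best value found:", y.min())
```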
no code implementations • 27 Apr 2022 • Carola Doerr, Martin S. Krejca
We prove upper bounds for the expected run time of random local search on this MAJORITY problem for its entire parameter spectrum.
no code implementations • 25 Apr 2022 • Gjorgjina Cenikj, Ryan Dieter Lang, Andries Petrus Engelbrecht, Carola Doerr, Peter Korošec, Tome Eftimov
Fair algorithm evaluation is conditioned on the existence of high-quality benchmark datasets that are non-redundant and are representative of typical optimization scenarios.
no code implementations • 20 Apr 2022 • Diederick Vermetten, Hao Wang, Manuel López-Ibañez, Carola Doerr, Thomas Bäck
In particular, we show that the number of runs used in many benchmarking studies, e.g., the default value of 15 suggested by the COCO environment, can be insufficient to reliably rank algorithms on well-known numerical optimization benchmarks.
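A toy resampling experiment on synthetic data illustrates why a small number of runs can be insufficient: when the performance distributions of two algorithms overlap, mean-based rankings computed from 15 runs still flip noticeably often.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic best-so-far values of two algorithms on one problem (lower is better).
# Their distributions overlap, as is common on hard benchmark instances.
algo_a = rng.lognormal(mean=0.00, sigma=0.6, size=1000)
algo_b = rng.lognormal(mean=0.15, sigma=0.6, size=1000)

def flip_probability(n_runs, n_resamples=2000):
    """How often does a mean-based ranking from `n_runs` runs contradict
    the ranking obtained from the full data?"""
    true_order = algo_a.mean() < algo_b.mean()
    flips = 0
    for _ in range(n_resamples):
        sample_a = rng.choice(algo_a, size=n_runs, replace=False)
        sample_b = rng.choice(algo_b, size=n_runs, replace=False)
        flips += (sample_a.mean() < sample_b.mean()) != true_order
    return flips / n_resamples

for n_runs in (5, 15, 50, 200):
    print(f"{n_runs:4d} runs -> ranking flipped in {flip_probability(n_runs):.1%} of resamples")
```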
no code implementations • 20 Apr 2022 • Ana Kostovska, Anja Jankovic, Diederick Vermetten, Jacob de Nobel, Hao Wang, Tome Eftimov, Carola Doerr
In contrast to other recent work on online per-run algorithm selection, we warm-start the second optimizer using information accumulated during the first optimization phase.
1 code implementation • 15 Apr 2022 • Ana Kostovska, Diederick Vermetten, Sašo Džeroski, Carola Doerr, Peter Korošec, Tome Eftimov
In addition, we have shown that by using classifiers that take into account the relevance of the features for the model accuracy, we are able to predict the status of individual modules in the CMA-ES configurations.
no code implementations • 13 Apr 2022 • Anja Jankovic, Diederick Vermetten, Ana Kostovska, Jacob de Nobel, Tome Eftimov, Carola Doerr
We study the quality and accuracy of performance regression and algorithm selection models in the scenario of predicting different algorithm performances after a fixed budget of function evaluations.
no code implementations • 13 Apr 2022 • Dominik Schröder, Diederick Vermetten, Hao Wang, Carola Doerr, Thomas Bäck
In this work, we build on the recent study of Vermetten et al. [GECCO 2020], who presented a data-driven approach to investigate promising switches between pairs of algorithms for numerical black-box optimization.
1 code implementation • 17 Mar 2022 • Furong Ye, Diederick L. Vermetten, Carola Doerr, Thomas Bäck
In addition, the obtained results indicate that non-elitist selection can obtain diverse algorithm configurations, which encourages us to explore a wider range of solutions to understand the behavior of algorithms.
1 code implementation • 7 Feb 2022 • André Biedenkapp, Nguyen Dang, Martin S. Krejca, Frank Hutter, Carola Doerr
We extend this benchmark by analyzing optimal control policies that can select the parameters only from a given portfolio of possible values.
1 code implementation • 7 Nov 2021 • Jacob de Nobel, Furong Ye, Diederick Vermetten, Hao Wang, Carola Doerr, Thomas Bäck
IOHexperimenter can be used as a stand-alone tool or as part of a benchmarking pipeline that uses other components of IOHprofiler such as IOHanalyzer, the module for interactive performance analysis and visualization.
no code implementations • 29 Sep 2021 • Elena Raponi, Nathanaël Carraz Rakotonirina, Jérémy Rapin, Olivier Teytaud, Carola Doerr
Machine learning has invaded various domains of computer science, including black-box optimization.
1 code implementation • 11 Jun 2021 • Furong Ye, Carola Doerr, Hao Wang, Thomas Bäck
Finding the best configuration of algorithms' hyperparameters for a given optimization problem is an important task in evolutionary computation.
no code implementations • 24 Apr 2021 • Ana Kostovska, Diederick Vermetten, Carola Doerr, Sašo Džeroski, Panče Panov, Tome Eftimov
Many platforms for benchmarking optimization algorithms offer users the possibility of sharing their experimental data with the purpose of promoting reproducible and reusable research.
no code implementations • 22 Apr 2021 • Tome Eftimov, Anja Jankovic, Gorjan Popovski, Carola Doerr, Peter Korošec
Accurately predicting the performance of different optimization algorithms for previously unseen problem instances is crucial for high-performing algorithm selection and configuration techniques.
no code implementations • 19 Apr 2021 • Anja Jankovic, Gorjan Popovski, Tome Eftimov, Carola Doerr
By comparing a total of 30 different models, each coupled with 2 complementary regression strategies, we derive guidelines for the tuning of the regression models and provide general recommendations for a more systematic use of classical machine learning models in landscape-aware algorithm selection.
1 code implementation • 25 Feb 2021 • Jacob de Nobel, Diederick Vermetten, Hao Wang, Carola Doerr, Thomas Bäck
However, when introducing a new component into an existing algorithm, assessing its potential benefits is a challenging task.
no code implementations • 23 Feb 2021 • Kirill Antonov, Maxim Buzdalov, Arina Buzdalova, Carola Doerr
With the goal of providing absolute lower bounds for the best possible running times that can be achieved by $(1+\lambda)$-type search heuristics on common benchmark problems, we recently suggested a dynamic programming approach that computes optimal expected running times and the regret values incurred when deviating from the optimal parameter choice.
2 code implementations • 12 Feb 2021 • Amine Aziz-Alaoui, Carola Doerr, Johann Dreo
We present a first proof-of-concept use-case that demonstrates the efficiency of interfacing the algorithm framework ParadisEO with the automated algorithm configuration tool irace and the experimental platform IOHprofiler.
no code implementations • 12 Feb 2021 • Furong Ye, Carola Doerr, Thomas Bäck
What complicates this decision further is that different algorithms may be best suited for different stages of the optimization process.
no code implementations • 10 Feb 2021 • Anja Jankovic, Tome Eftimov, Carola Doerr
The evaluation of these points is costly, and the benefit of an ELA-based algorithm selection over a default algorithm must therefore be significant in order to pay off.
no code implementations • 9 Feb 2021 • Maxim Buzdalov, Carola Doerr
However, only little is known so far about the influence of these distributions on the performance of evolutionary algorithms, and about the relationships between (dynamic) parameter control and (static) parameter sampling.
no code implementations • 1 Feb 2021 • Quentin Renau, Johann Dreo, Carola Doerr, Benjamin Doerr
We show that the classification accuracy transfers to settings in which several instances are involved in training and testing.
1 code implementation • 15 Dec 2020 • Noor Awad, Gresa Shala, Difan Deng, Neeratyoy Mallik, Matthias Feurer, Katharina Eggensperger, André Biedenkapp, Diederick Vermetten, Hao Wang, Carola Doerr, Marius Lindauer, Frank Hutter
In this short note, we describe our submission to the NeurIPS 2020 BBO challenge.
no code implementations • 8 Oct 2020 • Laurent Meunier, Herilalaina Rakotoarison, Pak Kan Wong, Baptiste Roziere, Jeremy Rapin, Olivier Teytaud, Antoine Moreau, Carola Doerr
We demonstrate the advantages of such a broad collection by deriving from it Automated Black Box Optimizer (ABBO), a general-purpose algorithm selection wizard.
no code implementations • 30 Sep 2020 • Tome Eftimov, Gorjan Popovski, Quentin Renau, Peter Korosec, Carola Doerr
Automated per-instance algorithm selection and configuration have shown promising performances for a number of classic optimization problems, including satisfiability, AI planning, and TSP.
3 code implementations • 8 Jul 2020 • Hao Wang, Diederick Vermetten, Furong Ye, Carola Doerr, Thomas Bäck
An R programming interface is provided for users who prefer finer control over the implemented functionalities.
no code implementations • 7 Jul 2020 • Thomas Bartz-Beielstein, Carola Doerr, Daan van den Berg, Jakob Bossek, Sowmya Chandrasekaran, Tome Eftimov, Andreas Fischbach, Pascal Kerschke, William La Cava, Manuel Lopez-Ibanez, Katherine M. Malan, Jason H. Moore, Boris Naujoks, Patryk Orzechowski, Vanessa Volz, Markus Wagner, Thomas Weise
This survey compiles ideas and recommendations from more than a dozen researchers with different backgrounds and from different institutes around the world.
1 code implementation • 2 Jul 2020 • Elena Raponi, Hao Wang, Mariusz Bujny, Simonetta Boria, Carola Doerr
Bayesian Optimization (BO) is a surrogate-assisted global optimization technique that has been successfully applied in various fields, e.g., automated machine learning and design optimization.
no code implementations • 20 Jun 2020 • Maxim Buzdalov, Carola Doerr
With this in hand, we compute for all population sizes $\lambda \in \{2^i \mid 0 \le i \le 18\}$ and for problem dimension $n \in \{1000, 2000, 5000\}$ which mutation rates minimize the expected running time and which ones maximize the expected progress.
no code implementations • 19 Jun 2020 • Arina Buzdalova, Carola Doerr, Anna Rodionova
We demonstrate that our HQL mechanism achieves equal or superior performance to all techniques tested in [Rodionova et al., GECCO'19] and this -- in contrast to previous parameter control methods -- simultaneously for all offspring population sizes $\lambda$.
no code implementations • 19 Jun 2020 • Quentin Renau, Carola Doerr, Johann Dreo, Benjamin Doerr
While, not unexpectedly, increasing the number of sample points gives more robust estimates for the feature values, to our surprise we find that the feature value approximations for different sampling strategies do not converge to the same value.
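A small experiment in this spirit, computing one simple landscape feature (fitness-distance correlation) under uniform, Latin hypercube, and Sobol' sampling via scipy.stats.qmc (scipy >= 1.7); the ELA features analyzed in the paper are more involved, so this is only an illustration of the setup.

```python
import numpy as np
from scipy.stats import qmc

def rastrigin(X):
    return 10 * X.shape[1] + np.sum(X ** 2 - 10 * np.cos(2 * np.pi * X), axis=1)

def fitness_distance_correlation(X, y):
    """A simple landscape feature: correlation between fitness values and
    the distance to the best point in the sample."""
    d = np.linalg.norm(X - X[np.argmin(y)], axis=1)
    return float(np.corrcoef(d, y)[0, 1])

dim, n_points, seed = 5, 256, 3
samplers = {
    "uniform": np.random.default_rng(seed).uniform(size=(n_points, dim)),
    "lhs":     qmc.LatinHypercube(d=dim, seed=seed).random(n_points),
    "sobol":   qmc.Sobol(d=dim, seed=seed).random(n_points),
}

for name, unit_sample in samplers.items():
    X = -5 + 10 * unit_sample          # scale [0,1]^d to the domain [-5,5]^d
    print(f"{name:8s} FDC = {fitness_distance_correlation(X, rastrigin(X)):.3f}")
```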
no code implementations • 17 Jun 2020 • Anja Jankovic, Carola Doerr
Automated algorithm selection promises to support the user in the decisive task of selecting a most suitable algorithm for a given problem.
1 code implementation • 11 Jun 2020 • Diederick Vermetten, Hao Wang, Carola Doerr, Thomas Bäck
One of the most challenging problems in evolutionary computation is to select from its family of diverse solvers one that performs well on a given problem.
no code implementations • 10 Jun 2020 • Furong Ye, Hao Wang, Carola Doerr, Thomas Bäck
Moreover, we observe that the ``fast'' mutation scheme with its power-law distributed mutation strengths outperforms standard bit mutation on complex optimization tasks when it is combined with crossover, but performs worse in the absence of crossover.
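For reference, a sketch of the heavy-tailed (``fast'') mutation scheme as commonly defined: a mutation strength $\alpha$ is drawn from a power-law distribution on $\{1, \ldots, n/2\}$ with exponent $\beta$ (here $\beta = 1.5$, a typical choice), and standard bit mutation is then applied with rate $\alpha/n$.

```python
import numpy as np

def fast_mutation(parent: np.ndarray, beta: float = 1.5,
                  rng: np.random.Generator = None) -> np.ndarray:
    """Heavy-tailed ("fast") mutation: draw a mutation strength alpha from a
    power-law distribution on {1, ..., n//2} with exponent beta, then apply
    standard bit mutation with rate alpha / n."""
    rng = rng or np.random.default_rng()
    n = parent.size
    support = np.arange(1, n // 2 + 1)
    probs = support ** (-beta)
    probs = probs / probs.sum()
    alpha = rng.choice(support, p=probs)      # occasionally very large
    flip_mask = rng.random(n) < alpha / n     # each bit flips with prob alpha/n
    return parent ^ flip_mask.astype(parent.dtype)

rng = np.random.default_rng(7)
parent = rng.integers(0, 2, size=100)
strengths = [int(np.sum(parent != fast_mutation(parent, rng=rng))) for _ in range(10)]
print("mutation strengths over 10 offspring:", strengths)
```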
no code implementations • 27 Apr 2020 • Mohamed El Yafrani, Marcella Scoczynski Ribeiro Martins, Inkyung Sung, Markus Wagner, Carola Doerr, Peter Nielsen
In contrast to most static (feature-independent) algorithm tuning engines such as irace and SPOT, our approach aims to derive the best parameter configuration of a given algorithm for a specific problem, exploiting the relationships between the algorithm parameters and the features of the problem.
no code implementations • 24 Apr 2020 • Laurent Meunier, Carola Doerr, Jeremy Rapin, Olivier Teytaud
Design of experiments, random search, the initialization of population-based methods, and sampling inside an epoch of an evolutionary algorithm all use a sample drawn according to some probability distribution to approximate the location of an optimum.
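A toy experiment in this spirit: a hidden optimum is drawn from a standard normal prior, and one-shot designs are sampled from normal distributions of different scales. With a small budget and moderate dimension, a narrower sampling distribution can bring the closest sample nearer to the optimum. This only illustrates the phenomenon, not the analysis of the paper.

```python
import numpy as np

def closest_sample_distance(sigma, dim=20, n_samples=30, n_trials=2000, seed=0):
    """Average distance between a hidden optimum x* ~ N(0, I) and the closest
    of `n_samples` one-shot samples drawn from N(0, sigma^2 I)."""
    rng = np.random.default_rng(seed)
    dists = []
    for _ in range(n_trials):
        x_star = rng.normal(size=dim)                        # hidden optimum
        samples = sigma * rng.normal(size=(n_samples, dim))  # one-shot design
        dists.append(np.linalg.norm(samples - x_star, axis=1).min())
    return float(np.mean(dists))

# Sampling from a narrower distribution than the prior can move the best
# sample closer to the optimum when the budget is small and the dimension high.
for sigma in (1.0, 0.8, 0.5, 0.2, 0.0):
    print(f"sigma = {sigma:.1f} -> mean distance to optimum {closest_sample_distance(sigma):.3f}")
```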
no code implementations • 20 Apr 2020 • Maxim Buzdalov, Benjamin Doerr, Carola Doerr, Dmitry Vinokurov
In this work, we conduct an in-depth study on the advantages and the limitations of fixed-target analyses.
1 code implementation • 30 Mar 2020 • Jakob Bossek, Carola Doerr, Pascal Kerschke
Most works, however, focus on the choice of the model, the acquisition function, and the strategy used to optimize the latter.
no code implementations • 19 Dec 2019 • Carola Doerr, Furong Ye, Naama Horesh, Hao Wang, Ofer M. Shir, Thomas Bäck
Automated benchmarking environments aim to support researchers in understanding how different algorithms perform on different types of optimization problems.
1 code implementation • 19 Dec 2019 • Jakob Bossek, Pascal Kerschke, Aneta Neumann, Frank Neumann, Carola Doerr
We study three different decision tasks: classic one-shot optimization (only the best sample matters), one-shot optimization with surrogates (allowing the use of surrogate models to select a design that need not be one of the evaluated samples), and one-shot regression (i.e., function approximation, with minimization of mean squared error as the objective).
no code implementations • 12 Dec 2019 • Diederick Vermetten, Hao Wang, Carola Doerr, Thomas Bäck
In this work we compare sequential and integrated algorithm selection and configuration approaches for the case of selecting and tuning the best out of 4608 variants of the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) tested on the Black Box Optimization Benchmark (BBOB) suite.
no code implementations • 26 Nov 2019 • Benjamin Doerr, Carola Doerr, Aneta Neumann, Frank Neumann, Andrew M. Sutton
In this paper, we investigate submodular optimization problems with chance constraints.
no code implementations • 17 Apr 2019 • Anna Rodionova, Kirill Antonov, Arina Buzdalova, Carola Doerr
We observe that for the 2-rate EA and the EA with multiplicative update rules the more generous bound $p_{\min}=1/n^2$ gives better results than $p_{\min}=1/n$ when $\lambda$ is small.
no code implementations • 16 Apr 2019 • Diederick Vermetten, Sander van Rijn, Thomas Bäck, Carola Doerr
An analysis of module activation indicates which modules are most crucial for the different phases of optimizing each of the 24 benchmark problems.
1 code implementation • 16 Apr 2019 • Nathan Buskulic, Carola Doerr
More precisely, we show that for most fitness levels between $n/2$ and $2n/3$ the optimal mutation strengths are larger than the drift-maximizing ones.
1 code implementation • 9 Apr 2019 • Nguyen Dang, Carola Doerr
It is known that the $(1+(\lambda,\lambda))$~Genetic Algorithm (GA) with self-adjusting parameter choices achieves a linear expected optimization time on OneMax if its hyper-parameters are suitably chosen.
1 code implementation • 7 Feb 2019 • Benjamin Doerr, Carola Doerr, Johannes Lengler
The one-fifth success rule is one of the best-known and most widely accepted techniques to control the parameters of evolutionary algorithms.
no code implementations • 1 Feb 2019 • Benjamin Doerr, Carola Doerr, Frank Neumann
We propose a simple diversity mechanism that prevents this behavior, thereby reducing the re-optimization time for LeadingOnes to $O(\gamma\delta n)$, where $\gamma$ is the population size used by the diversity mechanism and $\delta \le \gamma$ the Hamming distance of the new optimum from the previous solution.
no code implementations • 17 Jan 2019 • Furong Ye, Carola Doerr, Thomas Bäck
We introduce in this work a simple way to interpolate between the random global search of EAs and their deterministic counterparts which sample from a fixed radius only.
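One way to realize such an interpolation (assumed here for illustration, not necessarily the exact operator of the paper): draw the mutation strength from a normal distribution centered at a given radius, with a variance-scaling factor c that sweeps from deterministic local search (c = 0) to a spread comparable to standard bit mutation (c = 1).

```python
import numpy as np

def interpolated_mutation(parent: np.ndarray, radius: int, c: float,
                          rng: np.random.Generator) -> np.ndarray:
    """Mutation strength drawn from a normal distribution around `radius`,
    with variance scaled by c in [0, 1]:
      c = 0 -> always flip exactly `radius` bits (deterministic local search),
      c = 1 -> variance comparable to standard bit mutation's binomial law."""
    n = parent.size
    p = radius / n
    std = np.sqrt(c * n * p * (1 - p))
    k = int(np.rint(rng.normal(loc=radius, scale=std)))
    k = min(max(k, 1), n)                       # keep the strength feasible
    offspring = parent.copy()
    offspring[rng.choice(n, size=k, replace=False)] ^= 1
    return offspring

rng = np.random.default_rng(5)
parent = rng.integers(0, 2, size=100)
for c in (0.0, 0.5, 1.0):
    strengths = [int(np.sum(parent != interpolated_mutation(parent, radius=3, c=c, rng=rng)))
                 for _ in range(5)]
    print(f"c = {c:.1f} -> mutation strengths {strengths}")
```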
no code implementations • 20 Dec 2018 • Peyman Afshani, Manindra Agrawal, Benjamin Doerr, Carola Doerr, Kasper Green Larsen, Kurt Mehlhorn
We study the query complexity of a permutation-based variant of the guessing game Mastermind.
no code implementations • 3 Dec 2018 • Eduardo Carvalho Pinto, Carola Doerr
The predominant topic in this research domain is runtime analysis, which studies the time it takes a given EA to solve a given optimization problem.
5 code implementations • 11 Oct 2018 • Carola Doerr, Hao Wang, Furong Ye, Sander van Rijn, Thomas Bäck
Given as input algorithms and problems written in C or Python, it provides as output a statistical evaluation of the algorithms' performance by means of the distributions of fixed-target running times and fixed-budget function values.
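The two performance views can be illustrated on synthetic best-so-far trajectories; the following is not the IOHprofiler implementation, only a compact sketch of what the fixed-target and fixed-budget statistics measure.

```python
import numpy as np

rng = np.random.default_rng(11)
# Synthetic best-so-far trajectories of one algorithm: 10 runs, 100 evaluations each,
# on a minimization problem (values decrease monotonically over time).
trajectories = np.minimum.accumulate(rng.exponential(scale=1.0, size=(10, 100)), axis=1)

def fixed_target_runtimes(trajectories, target):
    """Fixed-target view: for each run, the first evaluation reaching the target
    (np.inf if the run never reaches it)."""
    hits = trajectories <= target
    first = hits.argmax(axis=1) + 1               # 1-based evaluation index
    return np.where(hits.any(axis=1), first, np.inf)

def fixed_budget_values(trajectories, budget):
    """Fixed-budget view: best function value found within the first `budget` evaluations."""
    return trajectories[:, budget - 1]

print("runtimes to reach f <= 0.05:", fixed_target_runtimes(trajectories, 0.05))
print("best values after 50 evals: ", fixed_budget_values(trajectories, 50))
```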
no code implementations • 17 Aug 2018 • Carola Doerr, Furong Ye, Sander van Rijn, Hao Wang, Thomas Bäck
Marking an important step towards filling this gap, we adjust the COCO software to pseudo-Boolean optimization problems, and obtain from this a benchmarking environment that allows a fine-grained empirical analysis of discrete black-box heuristics.
no code implementations • 9 Jul 2018 • Benjamin Doerr, Carola Doerr, Jing Yang
It has been observed that some working principles of evolutionary algorithms, in particular, the influence of the parameters, cannot be understood from results on the asymptotic order of the runtime, but only from more precise results.
no code implementations • 16 Apr 2018 • Benjamin Doerr, Carola Doerr
Parameter control aims at realizing performance gains through a dynamic choice of the parameters which determine the behavior of the underlying optimization algorithm.
no code implementations • 4 Mar 2018 • Carola Doerr, Markus Wagner
Despite significant empirical and theoretically supported evidence that non-static parameter choices can be strongly beneficial in evolutionary computation, the question of how to best adjust parameter values plays only a marginal role in contemporary research on discrete black-box optimization.
no code implementations • 15 Feb 2018 • Aneta Neumann, Wanru Gao, Carola Doerr, Frank Neumann, Markus Wagner
Diversity plays a crucial role in evolutionary computation.
no code implementations • 6 Jan 2018 • Carola Doerr
In this chapter we review the different black-box complexity models that have been proposed in the literature, survey the bounds that have been obtained for these models, and discuss how the interplay of running time analysis and black-box complexity can inspire new algorithmic solutions to well-researched problems in evolutionary computation.
no code implementations • 12 Apr 2016 • Benjamin Doerr, Carola Doerr, Timo Kötzing
The most common representation in evolutionary computation is the bit string.
no code implementations • 8 Apr 2016 • Carola Doerr, Johannes Lengler
We consider the permutation- and bit-invariant version of LeadingOnes and prove that its (1+1) elitist black-box complexity is $\Omega(n^2)$, a bound that is matched by (1+1)-type evolutionary algorithms.
no code implementations • 27 Aug 2015 • Carola Doerr, Johannes Lengler
Black-box complexity theory provides lower bounds for the runtime of black-box optimizers like evolutionary algorithms and serves as an inspiration for the design of new genetic algorithms.
no code implementations • 19 Jun 2015 • Benjamin Doerr, Carola Doerr
We first improve the upper bound on the runtime to $O(\max\{n\log(n)/\lambda, n\lambda \log\log(\lambda)/\log(\lambda)\})$.
no code implementations • 19 Jun 2015 • Benjamin Doerr, Carola Doerr, Timo Kötzing
For their setting, in which the solution length is sampled from a geometric distribution, we provide mutation rates that yield an expected optimization time that is of the same order as that of the (1+1) EA knowing the solution length.
no code implementations • 13 Apr 2015 • Benjamin Doerr, Carola Doerr
While evolutionary algorithms are known to be very successful for a broad range of applications, the algorithm designer is often left with many algorithmic choices, for example, the size of the population, the mutation rates, and the crossover rates of the algorithm.
no code implementations • 10 Apr 2015 • Carola Doerr, Johannes Lengler
Black-box complexity studies lower bounds for the efficiency of general-purpose black-box optimization algorithms such as evolutionary algorithms and other search heuristics.
no code implementations • 30 Mar 2014 • Benjamin Doerr, Carola Doerr, Timo Kötzing
We analyze the unbiased black-box complexity of jump functions with small, medium, and large sizes of the fitness plateau surrounding the optimal solution.
no code implementations • 29 Aug 2013 • Benjamin Doerr, Carola Doerr
Motivated by a problem in the theory of randomized search heuristics, we give a very precise analysis for the coupon collector problem where the collector starts with a random set of coupons (chosen uniformly from all sets).
no code implementations • 7 Apr 2013 • Carola Doerr, Francois-Michel De Rainville
It is thus the most studied discrepancy notion.