Search Results for author: Carola Doerr

Found 94 papers, 23 papers with code

Large-scale Benchmarking of Metaphor-based Optimization Heuristics

no code implementations • 15 Feb 2024 • Diederick Vermetten, Carola Doerr, Hao Wang, Anna V. Kononova, Thomas Bäck

The number of proposed iterative optimization heuristics is growing steadily, and with this growth, there have been many points of discussion within the wider community.

Benchmarking Experimental Design

MA-BBOB: A Problem Generator for Black-Box Optimization Using Affine Combinations and Shifts

no code implementations • 18 Dec 2023 • Diederick Vermetten, Furong Ye, Thomas Bäck, Carola Doerr

Choosing a set of benchmark problems is often a key component of any empirical evaluation of iterative optimization heuristics.

Benchmarking

PS-AAS: Portfolio Selection for Automated Algorithm Selection in Black-Box Optimization

no code implementations • 14 Oct 2023 • Ana Kostovska, Gjorgjina Cenikj, Diederick Vermetten, Anja Jankovic, Ana Nikolikj, Urban Skvorc, Peter Korosec, Carola Doerr, Tome Eftimov

Our proposed method creates algorithm behavior meta-representations, constructs a graph from a set of algorithms based on their meta-representation similarity, and applies a graph algorithm to select a final portfolio of diverse, representative, and non-redundant algorithms.
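The pipeline described above can be illustrated with a minimal sketch. Everything here is an assumption for illustration, not the paper's actual method: the meta-representations are plain feature vectors, similarity is cosine similarity, and the "graph algorithm" is a simple greedy independent-set pass that drops near-duplicate algorithms.

```python
import numpy as np

def select_portfolio(meta_reprs, threshold=0.9):
    """Greedy sketch: treat algorithms whose meta-representations are
    highly similar (cosine similarity above `threshold`) as redundant,
    and keep a set of mutually dissimilar algorithms."""
    X = np.asarray(meta_reprs, dtype=float)
    X = X / np.linalg.norm(X, axis=1, keepdims=True)  # unit vectors
    sim = X @ X.T                                     # cosine similarity matrix
    selected, excluded = [], set()
    for i in range(len(X)):  # greedy pass over algorithms
        if i in excluded:
            continue
        selected.append(i)
        # drop every later algorithm that is too similar to algorithm i
        excluded.update(j for j in range(i + 1, len(X)) if sim[i, j] > threshold)
    return selected

# Toy example: algorithms 0 and 1 behave almost identically.
reprs = [[1.0, 0.0], [0.99, 0.05], [0.0, 1.0]]
print(select_portfolio(reprs))  # -> [0, 2]
```

A real portfolio selector would additionally weigh algorithm performance, not only diversity; the sketch only shows the redundancy-pruning step.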

Comparing Algorithm Selection Approaches on Black-Box Optimization Problems

no code implementations • 30 Jun 2023 • Ana Kostovska, Anja Jankovic, Diederick Vermetten, Sašo Džeroski, Tome Eftimov, Carola Doerr

Performance complementarity of solvers available to tackle black-box optimization problems gives rise to the important task of algorithm selection (AS).

Computing Star Discrepancies with Numerical Black-Box Optimization Algorithms

no code implementations • 29 Jun 2023 • François Clément, Diederick Vermetten, Jacob de Nobel, Alexandre D. Jesus, Luís Paquete, Carola Doerr

In this work we compare 8 popular numerical black-box optimization algorithms on the $L_{\infty}$ star discrepancy computation problem, using a wide set of instances in dimensions 2 to 15.
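To make the target quantity concrete, here is a brute-force sketch of the $L_{\infty}$ star discrepancy itself, not of the black-box solvers the paper compares: it enumerates anchored boxes whose upper corners lie on the grid spanned by the point coordinates (plus 1), which is only tractable for tiny point sets.

```python
import itertools

def star_discrepancy_lb(points):
    """Brute-force sketch: evaluate the local discrepancy of anchored
    boxes whose upper corners q come from the coordinate grid of the
    points (plus 1), checking both open and closed point counts.
    Feasible only for very small sets in low dimension."""
    n, d = len(points), len(points[0])
    # candidate corner coordinates per dimension
    grids = [sorted({p[j] for p in points} | {1.0}) for j in range(d)]
    best = 0.0
    for q in itertools.product(*grids):
        vol = 1.0
        for qj in q:
            vol *= qj  # volume of the anchored box [0, q)
        n_open = sum(all(p[j] < q[j] for j in range(d)) for p in points)
        n_closed = sum(all(p[j] <= q[j] for j in range(d)) for p in points)
        best = max(best, vol - n_open / n, n_closed / n - vol)
    return best

# 1D example: the single point {0.5} has star discrepancy 0.5
print(star_discrepancy_lb([(0.5,)]))  # -> 0.5
```

The exponential blow-up of this grid enumeration in the dimension is exactly why the paper treats the computation as a numerical black-box optimization problem.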

Numerical Integration

MA-BBOB: Many-Affine Combinations of BBOB Functions for Evaluating AutoML Approaches in Noiseless Numerical Black-Box Optimization Contexts

no code implementations • 18 Jun 2023 • Diederick Vermetten, Furong Ye, Thomas Bäck, Carola Doerr

Extending a recent suggestion to generate new instances for numerical black-box optimization benchmarking by interpolating pairs of the well-established BBOB functions from the COmparing COntinuous Optimizers (COCO) platform, we propose in this work a further generalization that allows multiple affine combinations of the original instances and arbitrarily chosen locations of the global optima.
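The core idea of combining component problems affinely while relocating the optimum can be sketched as follows. The log-scaling, the toy component functions, and the exact mixing formula are illustrative assumptions, not the precise MA-BBOB construction.

```python
import math

def affine_combination(components, weights, shift):
    """Illustrative sketch: evaluate each component relative to a
    shifted optimum location and mix the log-scaled values with
    non-negative weights that sum to one."""
    assert abs(sum(weights) - 1.0) < 1e-9

    def combined(x):
        z = [xi - si for xi, si in zip(x, shift)]  # move the global optimum
        # mix log-scaled component values, then map back
        s = sum(w * math.log10(1.0 + f(z)) for f, w in zip(components, weights))
        return 10.0 ** s - 1.0

    return combined

# Two toy components: sphere and a separable quartic.
sphere = lambda z: sum(zi * zi for zi in z)
quartic = lambda z: sum(zi ** 4 for zi in z)

f = affine_combination([sphere, quartic], [0.5, 0.5], shift=[1.0, -2.0])
print(f([1.0, -2.0]))  # optimum relocated to (1, -2) -> 0.0
```

Sampling the weights and the shift at random yields arbitrarily many new problem instances from a fixed set of base functions, which is the generator idea the paper builds on.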

AutoML Benchmarking

DynamoRep: Trajectory-Based Population Dynamics for Classification of Black-box Optimization Problems

1 code implementation • 8 Jun 2023 • Gjorgjina Cenikj, Gašper Petelin, Carola Doerr, Peter Korošec, Tome Eftimov

The application of machine learning (ML) models to the analysis of optimization algorithms requires the representation of optimization problems using numerical features.

Benchmarking Descriptive

Self-Adjusting Weighted Expected Improvement for Bayesian Optimization

1 code implementation • 7 Jun 2023 • Carolin Benjamins, Elena Raponi, Anja Jankovic, Carola Doerr, Marius Lindauer

Bayesian Optimization (BO) is a class of surrogate-based, sample-efficient algorithms for optimizing black-box problems with small evaluation budgets.

Bayesian Optimization Benchmarking

Algorithm Instance Footprint: Separating Easily Solvable and Challenging Problem Instances

no code implementations • 1 Jun 2023 • Ana Nikolikj, Sašo Džeroski, Mario Andrés Muñoz, Carola Doerr, Peter Korošec, Tome Eftimov

In black-box optimization, it is essential to understand why an algorithm instance works on a set of problem instances while failing on others and provide explanations of its behavior.

Assessing the Generalizability of a Performance Predictive Model

no code implementations • 31 May 2023 • Ana Nikolikj, Gjorgjina Cenikj, Gordana Ispirova, Diederick Vermetten, Ryan Dieter Lang, Andries Petrus Engelbrecht, Carola Doerr, Peter Korošec, Tome Eftimov

A key component of automated algorithm selection and configuration, which in most cases are performed using supervised machine learning (ML) methods, is a well-performing predictive model.

Sensitivity Analysis of RF+clust for Leave-one-problem-out Performance Prediction

no code implementations • 30 May 2023 • Ana Nikolikj, Michal Pluháček, Carola Doerr, Peter Korošec, Tome Eftimov

That is, instead of considering cosine distance in the feature space, we consider a weighted distance measure, with weights depending on the relevance of the feature for the regression model.
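The weighted distance idea can be sketched in a few lines. The use of cosine distance follows the abstract above; treating the weights as, e.g., a random forest's feature importances is an assumption for illustration.

```python
import math

def weighted_cosine_distance(u, v, weights):
    """Scale each feature by its relevance weight (e.g., a random
    forest's feature importance) before computing the cosine distance,
    so that irrelevant features no longer dominate the similarity
    between problem instances."""
    uw = [wi * ui for wi, ui in zip(weights, u)]
    vw = [wi * vi for wi, vi in zip(weights, v)]
    dot = sum(a * b for a, b in zip(uw, vw))
    nu = math.sqrt(sum(a * a for a in uw))
    nv = math.sqrt(sum(b * b for b in vw))
    return 1.0 - dot / (nu * nv)

# Two feature vectors that differ only in one (irrelevant) feature:
u, v = [1.0, 0.0, 5.0], [1.0, 0.0, -5.0]
print(weighted_cosine_distance(u, v, [1.0, 1.0, 1.0]))   # dominated by feature 3
print(weighted_cosine_distance(u, v, [1.0, 1.0, 0.01]))  # feature 3 down-weighted
```

With uniform weights the two vectors look very dissimilar; down-weighting the irrelevant third feature makes them nearly identical, which is the effect the sensitivity analysis above studies.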

regression

Using Affine Combinations of BBOB Problems for Performance Assessment

no code implementations • 8 Mar 2023 • Diederick Vermetten, Furong Ye, Carola Doerr

By analyzing performance trajectories on more function combinations, we also show that aspects such as the scaling of objective functions and placement of the optimum can greatly impact how these results are interpreted.

Benchmarking

Comparison of High-Dimensional Bayesian Optimization Algorithms on BBOB

1 code implementation • 2 Mar 2023 • Maria Laura Santoni, Elena Raponi, Renato De Leone, Carola Doerr

Bayesian Optimization (BO) is a class of black-box, surrogate-based heuristics that can efficiently optimize problems that are expensive to evaluate, and hence admit only small evaluation budgets.

Bayesian Optimization

Using Automated Algorithm Configuration for Parameter Control

no code implementations • 23 Feb 2023 • Deyao Chen, Maxim Buzdalov, Carola Doerr, Nguyen Dang

Dynamic Algorithm Configuration (DAC) tackles the question of how to automatically learn policies to control parameters of algorithms in a data-driven fashion.

Tight Runtime Bounds for Static Unary Unbiased Evolutionary Algorithms on Linear Functions

no code implementations • 23 Feb 2023 • Carola Doerr, Duri Andrea Janett, Johannes Lengler

In this paper we investigate how this result generalizes if standard bit mutation is replaced by an arbitrary unbiased mutation operator.

Evolutionary Algorithms

RF+clust for Leave-One-Problem-Out Performance Prediction

no code implementations • 23 Jan 2023 • Ana Nikolikj, Carola Doerr, Tome Eftimov

Per-instance automated algorithm configuration and selection have gained significant momentum in evolutionary computation in recent years.

AutoML feature selection

Explainable Model-specific Algorithm Selection for Multi-Label Classification

no code implementations • 21 Nov 2022 • Ana Kostovska, Carola Doerr, Sašo Džeroski, Dragi Kocev, Panče Panov, Tome Eftimov

To address this algorithm selection problem, we investigate in this work the quality of an automated approach that uses characteristics of the datasets - so-called features - and a trained algorithm selector to choose which algorithm to apply for a given task.

Classification Multi-Label Classification

OPTION: OPTImization Algorithm Benchmarking ONtology

no code implementations • 21 Nov 2022 • Ana Kostovska, Diederick Vermetten, Carola Doerr, Saso Džeroski, Panče Panov, Tome Eftimov

Many optimization algorithm benchmarking platforms allow users to share their experimental data to promote reproducible and reusable research.

Benchmarking Data Integration

Towards Automated Design of Bayesian Optimization via Exploratory Landscape Analysis

1 code implementation • 17 Nov 2022 • Carolin Benjamins, Anja Jankovic, Elena Raponi, Koen van der Blom, Marius Lindauer, Carola Doerr

Bayesian optimization (BO) algorithms form a class of surrogate-based heuristics, aimed at efficiently computing high-quality solutions for numerical black-box optimization problems.

AutoML Bayesian Optimization

Fast Re-Optimization of LeadingOnes with Frequent Changes

1 code implementation • 9 Sep 2022 • Nina Bulanova, Arina Buzdalova, Carola Doerr

In this work, we first show that the re-optimization approach suggested by Doerr et al. reaches a limit when the problem instances are prone to more frequent changes.

Evolutionary Algorithms

Improving Nevergrad's Algorithm Selection Wizard NGOpt through Automated Algorithm Configuration

no code implementations • 9 Sep 2022 • Risto Trajanov, Ana Nikolikj, Gjorgjina Cenikj, Fabien Teytaud, Mathurin Videau, Olivier Teytaud, Tome Eftimov, Manuel López-Ibáñez, Carola Doerr

Algorithm selection wizards are effective and versatile tools that automatically select an optimization algorithm given high-level information about the problem and available computational resources, such as number and type of decision variables, maximal number of evaluations, possibility to parallelize evaluations, etc.

Automated Algorithm Selection for Radar Network Configuration

no code implementations • 7 May 2022 • Quentin Renau, Johann Dreo, Alain Peres, Yann Semet, Carola Doerr, Benjamin Doerr

The exact modeling of these instances is complex, as the quality of the configurations depends on a large number of parameters, on internal radar processing, and on the terrains on which the radars need to be placed.

High Dimensional Bayesian Optimization with Kernel Principal Component Analysis

no code implementations • 28 Apr 2022 • Kirill Antonov, Elena Raponi, Hao Wang, Carola Doerr

Bayesian Optimization (BO) is a surrogate-based global optimization strategy that relies on a Gaussian Process regression (GPR) model to approximate the objective function and an acquisition function to suggest candidate points.

Bayesian Optimization GPR

Run Time Analysis for Random Local Search on Generalized Majority Functions

no code implementations • 27 Apr 2022 • Carola Doerr, Martin S. Krejca

We prove upper bounds for the expected run time of random local search on this MAJORITY problem for its entire parameter spectrum.

Evolutionary Algorithms

SELECTOR: Selecting a Representative Benchmark Suite for Reproducible Statistical Comparison

no code implementations • 25 Apr 2022 • Gjorgjina Cenikj, Ryan Dieter Lang, Andries Petrus Engelbrecht, Carola Doerr, Peter Korošec, Tome Eftimov

Fair algorithm evaluation is conditioned on the existence of high-quality benchmark datasets that are non-redundant and are representative of typical optimization scenarios.

Analyzing the Impact of Undersampling on the Benchmarking and Configuration of Evolutionary Algorithms

no code implementations • 20 Apr 2022 • Diederick Vermetten, Hao Wang, Manuel López-Ibañez, Carola Doerr, Thomas Bäck

In particular, we show that the number of runs used in many benchmarking studies, e.g., the default value of 15 suggested by the COCO environment, can be insufficient to reliably rank algorithms on well-known numerical optimization benchmarks.

Benchmarking Evolutionary Algorithms

Per-run Algorithm Selection with Warm-starting using Trajectory-based Features

no code implementations • 20 Apr 2022 • Ana Kostovska, Anja Jankovic, Diederick Vermetten, Jacob de Nobel, Hao Wang, Tome Eftimov, Carola Doerr

In contrast to other recent work on online per-run algorithm selection, we warm-start the second optimizer using information accumulated during the first optimization phase.

Time Series Analysis

The Importance of Landscape Features for Performance Prediction of Modular CMA-ES Variants

1 code implementation • 15 Apr 2022 • Ana Kostovska, Diederick Vermetten, Sašo Džeroski, Carola Doerr, Peter Korošec, Tome Eftimov

In addition, we have shown that, by using classifiers that take into account the relevance of the features for the model accuracy, we are able to predict the status of individual modules in the CMA-ES configurations.

regression

Trajectory-based Algorithm Selection with Warm-starting

no code implementations • 13 Apr 2022 • Anja Jankovic, Diederick Vermetten, Ana Kostovska, Jacob de Nobel, Tome Eftimov, Carola Doerr

We study the quality and accuracy of performance regression and algorithm selection models in the scenario of predicting different algorithm performances after a fixed budget of function evaluations.

regression

Switching between Numerical Black-box Optimization Algorithms with Warm-starting Policies

no code implementations • 13 Apr 2022 • Dominik Schröder, Diederick Vermetten, Hao Wang, Carola Doerr, Thomas Bäck

In this work, we build on the recent study of Vermetten et al. [GECCO 2020], who presented a data-driven approach to investigate promising switches between pairs of algorithms for numerical black-box optimization.

Non-Elitist Selection Can Improve the Performance of Irace

1 code implementation • 17 Mar 2022 • Furong Ye, Diederick L. Vermetten, Carola Doerr, Thomas Bäck

In addition, the obtained results indicate that non-elitist selection can obtain diverse algorithm configurations, which encourages us to explore a wider range of solutions to understand the behavior of algorithms.

Bayesian Optimization Evolutionary Algorithms

Theory-inspired Parameter Control Benchmarks for Dynamic Algorithm Configuration

1 code implementation • 7 Feb 2022 • André Biedenkapp, Nguyen Dang, Martin S. Krejca, Frank Hutter, Carola Doerr

We extend this benchmark by analyzing optimal control policies that can select the parameters only from a given portfolio of possible values.

Benchmarking Evolutionary Algorithms

IOHexperimenter: Benchmarking Platform for Iterative Optimization Heuristics

1 code implementation • 7 Nov 2021 • Jacob de Nobel, Furong Ye, Diederick Vermetten, Hao Wang, Carola Doerr, Thomas Bäck

IOHexperimenter can be used as a stand-alone tool or as part of a benchmarking pipeline that uses other components of IOHprofiler such as IOHanalyzer, the module for interactive performance analysis and visualization.

Bayesian Optimization Benchmarking

Automated Configuration of Genetic Algorithms by Tuning for Anytime Performance

1 code implementation • 11 Jun 2021 • Furong Ye, Carola Doerr, Hao Wang, Thomas Bäck

Finding the best configuration of algorithms' hyperparameters for a given optimization problem is an important task in evolutionary computation.

OPTION: OPTImization Algorithm Benchmarking ONtology

no code implementations • 24 Apr 2021 • Ana Kostovska, Diederick Vermetten, Carola Doerr, Sašo Džeroski, Panče Panov, Tome Eftimov

Many platforms for benchmarking optimization algorithms offer users the possibility of sharing their experimental data with the purpose of promoting reproducible and reusable research.

Benchmarking Data Integration

Personalizing Performance Regression Models to Black-Box Optimization Problems

no code implementations • 22 Apr 2021 • Tome Eftimov, Anja Jankovic, Gorjan Popovski, Carola Doerr, Peter Korošec

Accurately predicting the performance of different optimization algorithms for previously unseen problem instances is crucial for high-performing algorithm selection and configuration techniques.

regression

The Impact of Hyper-Parameter Tuning for Landscape-Aware Performance Regression and Algorithm Selection

no code implementations • 19 Apr 2021 • Anja Jankovic, Gorjan Popovski, Tome Eftimov, Carola Doerr

By comparing a total of 30 different models, each coupled with 2 complementary regression strategies, we derive guidelines for the tuning of the regression models and provide general recommendations for a more systematic use of classical machine learning models in landscape-aware algorithm selection.

BIG-bench Machine Learning regression

Tuning as a Means of Assessing the Benefits of New Ideas in Interplay with Existing Algorithmic Modules

1 code implementation • 25 Feb 2021 • Jacob de Nobel, Diederick Vermetten, Hao Wang, Carola Doerr, Thomas Bäck

However, when introducing a new component into an existing algorithm, assessing its potential benefits is a challenging task.

Blending Dynamic Programming with Monte Carlo Simulation for Bounding the Running Time of Evolutionary Algorithms

no code implementations • 23 Feb 2021 • Kirill Antonov, Maxim Buzdalov, Arina Buzdalova, Carola Doerr

With the goal to provide absolute lower bounds for the best possible running times that can be achieved by $(1+\lambda)$-type search heuristics on common benchmark problems, we recently suggested a dynamic programming approach that computes optimal expected running times and the regret values inferred when deviating from the optimal parameter choice.

Evolutionary Algorithms

Towards Large Scale Automated Algorithm Design by Integrating Modular Benchmarking Frameworks

2 code implementations • 12 Feb 2021 • Amine Aziz-Alaoui, Carola Doerr, Johann Dreo

We present a first proof-of-concept use-case that demonstrates the efficiency of interfacing the algorithm framework ParadisEO with the automated algorithm configuration tool irace and the experimental platform IOHprofiler.

Benchmarking

Leveraging Benchmarking Data for Informed One-Shot Dynamic Algorithm Selection

no code implementations • 12 Feb 2021 • Furong Ye, Carola Doerr, Thomas Bäck

What complicates this decision further is that different algorithms may be best suited for different stages of the optimization process.

AutoML Benchmarking

Towards Feature-Based Performance Regression Using Trajectory Data

no code implementations • 10 Feb 2021 • Anja Jankovic, Tome Eftimov, Carola Doerr

The evaluation of these points is costly, and the benefit of an ELA-based algorithm selection over a default algorithm must therefore be significant in order to pay off.

feature selection regression

Optimal Static Mutation Strength Distributions for the $(1+λ)$ Evolutionary Algorithm on OneMax

no code implementations • 9 Feb 2021 • Maxim Buzdalov, Carola Doerr

However, only little is known so far about the influence of these distributions on the performance of evolutionary algorithms, and about the relationships between (dynamic) parameter control and (static) parameter sampling.

Evolutionary Algorithms

Black-Box Optimization Revisited: Improving Algorithm Selection Wizards through Massive Benchmarking

no code implementations • 8 Oct 2020 • Laurent Meunier, Herilalaina Rakotoarison, Pak Kan Wong, Baptiste Roziere, Jeremy Rapin, Olivier Teytaud, Antoine Moreau, Carola Doerr

We demonstrate the advantages of such a broad collection by deriving from it Automated Black Box Optimizer (ABBO), a general-purpose algorithm selection wizard.

Benchmarking

Linear Matrix Factorization Embeddings for Single-objective Optimization Landscapes

no code implementations • 30 Sep 2020 • Tome Eftimov, Gorjan Popovski, Quentin Renau, Peter Korosec, Carola Doerr

Automated per-instance algorithm selection and configuration have shown promising performances for a number of classic optimization problems, including satisfiability, AI planning, and TSP.

Dimensionality Reduction Representation Learning

IOHanalyzer: Detailed Performance Analyses for Iterative Optimization Heuristics

3 code implementations • 8 Jul 2020 • Hao Wang, Diederick Vermetten, Furong Ye, Carola Doerr, Thomas Bäck

An R programming interface is provided for users preferring to have a finer control over the implemented functionalities.

Bayesian Optimization Benchmarking

High Dimensional Bayesian Optimization Assisted by Principal Component Analysis

1 code implementation • 2 Jul 2020 • Elena Raponi, Hao Wang, Mariusz Bujny, Simonetta Boria, Carola Doerr

Bayesian Optimization (BO) is a surrogate-assisted global optimization technique that has been successfully applied in various fields, e.g., automated machine learning and design optimization.

Bayesian Optimization Computational Efficiency

Optimal Mutation Rates for the $(1+λ)$ EA on OneMax

no code implementations • 20 Jun 2020 • Maxim Buzdalov, Carola Doerr

With this in hand, we compute for all population sizes $\lambda \in \{2^i \mid 0 \le i \le 18\}$ and for problem dimension $n \in \{1000, 2000, 5000\}$ which mutation rates minimize the expected running time and which ones maximize the expected progress.
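For intuition about the object being optimized, here is a simulation sketch of a (1+λ) EA with a static mutation rate p on OneMax. The paper computes exact expected running times; this sketch merely samples one run, and all parameter values below are illustrative.

```python
import random

def one_plus_lambda_ea(n, lam, p, seed=0):
    """Simulate a (1+lambda) EA with static mutation rate p on OneMax
    (fitness = number of one-bits) and return the number of function
    evaluations until the all-ones string is found."""
    rng = random.Random(seed)
    parent = [rng.randint(0, 1) for _ in range(n)]
    fitness = sum(parent)
    evals = 0
    while fitness < n:
        best, best_fit = parent, fitness
        for _ in range(lam):  # lambda offspring per generation
            child = [1 - b if rng.random() < p else b for b in parent]
            evals += 1
            cf = sum(child)
            if cf >= best_fit:  # elitist (1+lambda) selection
                best, best_fit = child, cf
        parent, fitness = best, best_fit
    return evals

# One run with the classic rate p = 1/n; averaging many seeds would
# estimate the expected running time for this (n, lambda, p) triple.
print(one_plus_lambda_ea(n=50, lam=8, p=1.0 / 50))
```

Sweeping p over a grid and averaging over seeds gives a Monte Carlo estimate of the runtime-minimizing mutation rate, which the paper instead determines exactly.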

Hybridizing the 1/5-th Success Rule with Q-Learning for Controlling the Mutation Rate of an Evolutionary Algorithm

no code implementations • 19 Jun 2020 • Arina Buzdalova, Carola Doerr, Anna Rodionova

We demonstrate that our HQL mechanism achieves equal or superior performance to all techniques tested in [Rodionova et al., GECCO'19] and this -- in contrast to previous parameter control methods -- simultaneously for all offspring population sizes $\lambda$.

Evolutionary Algorithms Q-Learning

Exploratory Landscape Analysis is Strongly Sensitive to the Sampling Strategy

no code implementations • 19 Jun 2020 • Quentin Renau, Carola Doerr, Johann Dreo, Benjamin Doerr

While, not unexpectedly, increasing the number of sample points gives more robust estimates for the feature values, to our surprise we find that the feature value approximations for different sampling strategies do not converge to the same value.

General Classification

Landscape-Aware Fixed-Budget Performance Regression and Algorithm Selection for Modular CMA-ES Variants

no code implementations • 17 Jun 2020 • Anja Jankovic, Carola Doerr

Automated algorithm selection promises to support the user in the decisive task of selecting a most suitable algorithm for a given problem.

regression

Towards Dynamic Algorithm Selection for Numerical Black-Box Optimization: Investigating BBOB as a Use Case

1 code implementation • 11 Jun 2020 • Diederick Vermetten, Hao Wang, Carola Doerr, Thomas Bäck

One of the most challenging problems in evolutionary computation is to select from its family of diverse solvers one that performs well on a given problem.

Benchmarking a $(μ+λ)$ Genetic Algorithm with Configurable Crossover Probability

no code implementations • 10 Jun 2020 • Furong Ye, Hao Wang, Carola Doerr, Thomas Bäck

Moreover, we observe that the "fast" mutation scheme with its power-law distributed mutation strengths outperforms standard bit mutation on complex optimization tasks when it is combined with crossover, but performs worse in the absence of crossover.

Benchmarking

MATE: A Model-based Algorithm Tuning Engine

no code implementations • 27 Apr 2020 • Mohamed El Yafrani, Marcella Scoczynski Ribeiro Martins, Inkyung Sung, Markus Wagner, Carola Doerr, Peter Nielsen

In contrast to most static (feature-independent) algorithm tuning engines such as irace and SPOT, our approach aims to derive the best parameter configuration of a given algorithm for a specific problem, exploiting the relationships between the algorithm parameters and the features of the problem.

Symbolic Regression

Variance Reduction for Better Sampling in Continuous Domains

no code implementations • 24 Apr 2020 • Laurent Meunier, Carola Doerr, Jeremy Rapin, Olivier Teytaud

Design of experiments, random search, initialization of population-based methods, or sampling inside an epoch of an evolutionary algorithm use a sample drawn according to some probability distribution for approximating the location of an optimum.

Fixed-Target Runtime Analysis

no code implementations • 20 Apr 2020 • Maxim Buzdalov, Benjamin Doerr, Carola Doerr, Dmitry Vinokurov

In this work, we conduct an in-depth study on the advantages and the limitations of fixed-target analyses.

Evolutionary Algorithms

Initial Design Strategies and their Effects on Sequential Model-Based Optimization

1 code implementation • 30 Mar 2020 • Jakob Bossek, Carola Doerr, Pascal Kerschke

Most works, however, focus on the choice of the model, the acquisition function, and the strategy used to optimize the latter.

Benchmarking Discrete Optimization Heuristics with IOHprofiler

no code implementations • 19 Dec 2019 • Carola Doerr, Furong Ye, Naama Horesh, Hao Wang, Ofer M. Shir, Thomas Bäck

Automated benchmarking environments aim to support researchers in understanding how different algorithms perform on different types of optimization problems.

Benchmarking

One-Shot Decision-Making with and without Surrogates

1 code implementation • 19 Dec 2019 • Jakob Bossek, Pascal Kerschke, Aneta Neumann, Frank Neumann, Carola Doerr

We study three different decision tasks: classic one-shot optimization (only the best sample matters), one-shot optimization with surrogates (allowing the use of surrogate models for selecting a design that need not necessarily be one of the evaluated samples), and one-shot regression (i.e., function approximation, with minimization of mean squared error as objective).

Decision Making regression

Sequential vs. Integrated Algorithm Selection and Configuration: A Case Study for the Modular CMA-ES

no code implementations • 12 Dec 2019 • Diederick Vermetten, Hao Wang, Carola Doerr, Thomas Bäck

In this work we compare sequential and integrated algorithm selection and configuration approaches for the case of selecting and tuning the best out of 4608 variants of the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) tested on the Black Box Optimization Benchmark (BBOB) suite.

Hyperparameter Optimization

Optimization of Chance-Constrained Submodular Functions

no code implementations • 26 Nov 2019 • Benjamin Doerr, Carola Doerr, Aneta Neumann, Frank Neumann, Andrew M. Sutton

In this paper, we investigate submodular optimization problems with chance constraints.

Offspring Population Size Matters when Comparing Evolutionary Algorithms with Self-Adjusting Mutation Rates

no code implementations • 17 Apr 2019 • Anna Rodionova, Kirill Antonov, Arina Buzdalova, Carola Doerr

We observe that for the 2-rate EA and the EA with multiplicative update rules the more generous bound $p_{\min}=1/n^2$ gives better results than $p_{\min}=1/n$ when $\lambda$ is small.

Evolutionary Algorithms

Online Selection of CMA-ES Variants

no code implementations • 16 Apr 2019 • Diederick Vermetten, Sander van Rijn, Thomas Bäck, Carola Doerr

An analysis of module activation indicates which modules are most crucial for the different phases of optimizing each of the 24 benchmark problems.

Maximizing Drift is Not Optimal for Solving OneMax

1 code implementation • 16 Apr 2019 • Nathan Buskulic, Carola Doerr

More precisely, we show that for most fitness levels between $n/2$ and $2n/3$ the optimal mutation strengths are larger than the drift-maximizing ones.

Hyper-Parameter Tuning for the (1+(λ,λ)) GA

1 code implementation • 9 Apr 2019 • Nguyen Dang, Carola Doerr

It is known that the $(1+(\lambda,\lambda))$~Genetic Algorithm (GA) with self-adjusting parameter choices achieves a linear expected optimization time on OneMax if its hyper-parameters are suitably chosen.

Self-Adjusting Mutation Rates with Provably Optimal Success Rules

1 code implementation • 7 Feb 2019 • Benjamin Doerr, Carola Doerr, Johannes Lengler

The one-fifth success rule is one of the best-known and most widely accepted techniques to control the parameters of evolutionary algorithms.

Evolutionary Algorithms

Fast Re-Optimization via Structural Diversity

no code implementations • 1 Feb 2019 • Benjamin Doerr, Carola Doerr, Frank Neumann

We propose a simple diversity mechanism that prevents this behavior, thereby reducing the re-optimization time for LeadingOnes to $O(\gamma\delta n)$, where $\gamma$ is the population size used by the diversity mechanism and $\delta \le \gamma$ the Hamming distance of the new optimum from the previous solution.

Evolutionary Algorithms

Interpolating Local and Global Search by Controlling the Variance of Standard Bit Mutation

no code implementations • 17 Jan 2019 • Furong Ye, Carola Doerr, Thomas Bäck

We introduce in this work a simple way to interpolate between the random global search of EAs and their deterministic counterparts which sample from a fixed radius only.

Evolutionary Algorithms

Towards a More Practice-Aware Runtime Analysis of Evolutionary Algorithms

no code implementations • 3 Dec 2018 • Eduardo Carvalho Pinto, Carola Doerr

The predominant topic in this research domain is runtime analysis, which studies the time it takes a given EA to solve a given optimization problem.

Evolutionary Algorithms

IOHprofiler: A Benchmarking and Profiling Tool for Iterative Optimization Heuristics

5 code implementations • 11 Oct 2018 • Carola Doerr, Hao Wang, Furong Ye, Sander van Rijn, Thomas Bäck

Given as input algorithms and problems written in C or Python, it provides as output a statistical evaluation of the algorithms' performance by means of the distribution on the fixed-target running time and the fixed-budget function values.

Benchmarking

Towards a Theory-Guided Benchmarking Suite for Discrete Black-Box Optimization Heuristics: Profiling $(1+λ)$ EA Variants on OneMax and LeadingOnes

no code implementations • 17 Aug 2018 • Carola Doerr, Furong Ye, Sander van Rijn, Hao Wang, Thomas Bäck

Marking an important step towards filling this gap, we adjust the COCO software to pseudo-Boolean optimization problems, and obtain from this a benchmarking environment that allows a fine-grained empirical analysis of discrete black-box heuristics.

Benchmarking Evolutionary Algorithms

Optimal Parameter Choices via Precise Black-Box Analysis

no code implementations • 9 Jul 2018 • Benjamin Doerr, Carola Doerr, Jing Yang

It has been observed that some working principles of evolutionary algorithms, in particular, the influence of the parameters, cannot be understood from results on the asymptotic order of the runtime, but only from more precise results.

Evolutionary Algorithms

Theory of Parameter Control for Discrete Black-Box Optimization: Provable Performance Gains Through Dynamic Parameter Choices

no code implementations • 16 Apr 2018 • Benjamin Doerr, Carola Doerr

Parameter control aims at realizing performance gains through a dynamic choice of the parameters which determine the behavior of the underlying optimization algorithm.

Evolutionary Algorithms General Classification

On the Effectiveness of Simple Success-Based Parameter Selection Mechanisms for Two Classical Discrete Black-Box Optimization Benchmark Problems

no code implementations • 4 Mar 2018 • Carola Doerr, Markus Wagner

Despite significant empirical and theoretically supported evidence that non-static parameter choices can be strongly beneficial in evolutionary computation, the question of how best to adjust parameter values plays only a marginal role in contemporary research on discrete black-box optimization.

Complexity Theory for Discrete Black-Box Optimization Heuristics

no code implementations • 6 Jan 2018 • Carola Doerr

In this chapter we review the different black-box complexity models that have been proposed in the literature, survey the bounds that have been obtained for these models, and discuss how the interplay of running time analysis and black-box complexity can inspire new algorithmic solutions to well-researched problems in evolutionary computation.

Evolutionary Algorithms

The (1+1) Elitist Black-Box Complexity of LeadingOnes

no code implementations • 8 Apr 2016 • Carola Doerr, Johannes Lengler

We regard the permutation- and bit-invariant version of LeadingOnes and prove that its (1+1) elitist black-box complexity is $\Omega(n^2)$, a bound that is matched by (1+1)-type evolutionary algorithms.

Evolutionary Algorithms

Introducing Elitist Black-Box Models: When Does Elitist Selection Weaken the Performance of Evolutionary Algorithms?

no code implementations • 27 Aug 2015 • Carola Doerr, Johannes Lengler

Black-box complexity theory provides lower bounds for the runtime of black-box optimizers like evolutionary algorithms and serves as an inspiration for the design of new genetic algorithms.

Evolutionary Algorithms

A Tight Runtime Analysis of the $(1+(λ, λ))$ Genetic Algorithm on OneMax

no code implementations • 19 Jun 2015 • Benjamin Doerr, Carola Doerr

We first improve the upper bound on the runtime to $O(\max\{n\log(n)/\lambda, n\lambda \log\log(\lambda)/\log(\lambda)\})$.

Solving Problems with Unknown Solution Length at (Almost) No Extra Cost

no code implementations • 19 Jun 2015 • Benjamin Doerr, Carola Doerr, Timo Kötzing

For their setting, in which the solution length is sampled from a geometric distribution, we provide mutation rates that yield an expected optimization time that is of the same order as that of the (1+1) EA knowing the solution length.

Optimal Parameter Choices Through Self-Adjustment: Applying the 1/5-th Rule in Discrete Settings

no code implementations • 13 Apr 2015 • Benjamin Doerr, Carola Doerr

While evolutionary algorithms are known to be very successful for a broad range of applications, the algorithm designer is often left with many algorithmic choices, for example, the size of the population, the mutation rates, and the crossover rates of the algorithm.

Evolutionary Algorithms

OneMax in Black-Box Models with Several Restrictions

no code implementations • 10 Apr 2015 • Carola Doerr, Johannes Lengler

Black-box complexity studies lower bounds for the efficiency of general-purpose black-box optimization algorithms such as evolutionary algorithms and other search heuristics.

Evolutionary Algorithms

Unbiased Black-Box Complexities of Jump Functions

no code implementations • 30 Mar 2014 • Benjamin Doerr, Carola Doerr, Timo Kötzing

We analyze the unbiased black-box complexity of jump functions with small, medium, and large sizes of the fitness plateau surrounding the optimal solution.

Collecting Coupons with Random Initial Stake

no code implementations • 29 Aug 2013 • Benjamin Doerr, Carola Doerr

Motivated by a problem in the theory of randomized search heuristics, we give a very precise analysis for the coupon collector problem where the collector starts with a random set of coupons (chosen uniformly from all sets).
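The setting can be illustrated with a small Monte Carlo simulation. The paper gives a precise analysis; this sketch only samples the process, and the choice of n and the number of runs are illustrative.

```python
import random

def coupons_remaining_time(n, rng):
    """One run of the coupon collector who starts with a uniformly
    random subset of the n coupon types (each type held independently
    with probability 1/2): draw uniform coupons until every type has
    been seen, and return the number of draws needed."""
    owned = {c for c in range(n) if rng.random() < 0.5}  # random initial stake
    draws = 0
    while len(owned) < n:
        owned.add(rng.randrange(n))  # draw a uniformly random coupon
        draws += 1
    return draws

# Monte Carlo estimate of the expected completion time for n = 20;
# with about n/2 coupons missing, the classic analysis suggests a
# mean close to n * H_{n/2} (roughly 59 draws here).
rng = random.Random(0)
runs = [coupons_remaining_time(20, rng) for _ in range(2000)]
print(sum(runs) / len(runs))
```

Conditioned on k coupon types missing, the remaining waiting time is the usual sum of geometric phases with mean n/k + n/(k-1) + ... + n/1, which is what the simulation averages over the random initial stake.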
