Search Results for author: Furong Ye

Found 18 papers, 7 papers with code

Better Understandings and Configurations in MaxSAT Local Search Solvers via Anytime Performance Analysis

1 code implementation • 11 Mar 2024 • Furong Ye, Chuan Luo, Shaowei Cai

Though numerous solvers have been proposed for the MaxSAT problem, and benchmark environments such as the MaxSAT Evaluations provide a platform for comparing state-of-the-art solvers, existing assessments have usually been based on the quality (e.g., fitness) of the best solutions found within a given running-time budget.

Hyperparameter Optimization • SMAC+
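
The anytime perspective advocated in this paper can be illustrated with a minimal sketch: instead of reporting only the final solution quality, record the best-so-far quality at every improvement during the run. The solver step below is a hypothetical stand-in, not the authors' code.

```python
import random
import time

def track_anytime(solver_step, budget_seconds=1.0):
    """Record the best-so-far solution quality over time, not just the
    final result -- the core idea behind anytime performance analysis."""
    trajectory = []  # (elapsed_seconds, best_quality_so_far)
    best = float("inf")  # assume minimization, e.g. unsatisfied clause weight
    start = time.perf_counter()
    while (elapsed := time.perf_counter() - start) < budget_seconds:
        quality = solver_step()  # one local-search step, returns current quality
        if quality < best:
            best = quality
            trajectory.append((elapsed, best))
    return trajectory

# Toy stand-in for a MaxSAT local-search step (hypothetical):
trajectory = track_anytime(lambda: random.randint(0, 100))
```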

Impact of spatial transformations on landscape features of CEC2022 basic benchmark problems

no code implementations • 12 Feb 2024 • Haoran Yin, Diederick Vermetten, Furong Ye, Thomas H. W. Bäck, Anna V. Kononova

When benchmarking optimization heuristics, we need to take care to avoid an algorithm exploiting biases in the construction of the used problems.

Benchmarking

MA-BBOB: A Problem Generator for Black-Box Optimization Using Affine Combinations and Shifts

no code implementations • 18 Dec 2023 • Diederick Vermetten, Furong Ye, Thomas Bäck, Carola Doerr

Choosing a set of benchmark problems is often a key component of any empirical evaluation of iterative optimization heuristics.

Benchmarking

MA-BBOB: Many-Affine Combinations of BBOB Functions for Evaluating AutoML Approaches in Noiseless Numerical Black-Box Optimization Contexts

no code implementations • 18 Jun 2023 • Diederick Vermetten, Furong Ye, Thomas Bäck, Carola Doerr

A recent suggestion generates new instances for numerical black-box optimization benchmarking by interpolating pairs of the well-established BBOB functions from the COmparing COntinuous Optimizers (COCO) platform. In this work, we propose a further generalization that allows affine combinations of more than two instances and arbitrarily chosen locations of the global optima.

AutoML • Benchmarking
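
The construction can be illustrated with a simplified sketch: combine component functions with normalized weights and shift the input so the optimum lands at a chosen point. The real MA-BBOB generator additionally log-scales the component values before combining; the functions and names below are illustrative, not the paper's implementation.

```python
import numpy as np

def affine_combination(components, weights, x_opt):
    """Toy version of combining benchmark functions affinely.
    `components` are functions with their optimum at the origin;
    shifting the input by `x_opt` places the combined optimum at a
    chosen point. (MA-BBOB also log-scales values -- omitted here.)"""
    weights = np.asarray(weights) / np.sum(weights)  # convex combination
    def f(x):
        z = np.asarray(x) - x_opt
        return sum(w * g(z) for w, g in zip(weights, components))
    return f

sphere = lambda z: float(np.dot(z, z))
rastrigin = lambda z: float(10 * len(z) + np.sum(z**2 - 10 * np.cos(2 * np.pi * z)))

f = affine_combination([sphere, rastrigin], weights=[0.3, 0.7],
                       x_opt=np.array([1.0, -2.0]))
print(f([1.0, -2.0]))  # 0.0 at the constructed optimum
```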

When to be Discrete: Analyzing Algorithm Performance on Discretized Continuous Problems

no code implementations • 25 Apr 2023 • André Thomaser, Jacob de Nobel, Diederick Vermetten, Furong Ye, Thomas Bäck, Anna V. Kononova

In this work, we use the notion of the resolution of continuous variables to discretize problems from the continuous domain.
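
The resolution notion can be sketched as snapping each continuous variable onto a uniform grid with 2^b levels, so fewer bits means a coarser view of the continuous problem. The helper below is illustrative, not the authors' implementation.

```python
import numpy as np

def discretize(x, lower, upper, bits):
    """Snap each continuous variable to the nearest point of a uniform
    grid with 2**bits levels inside [lower, upper]."""
    levels = 2 ** bits - 1
    t = np.clip((np.asarray(x) - lower) / (upper - lower), 0.0, 1.0)
    return lower + np.round(t * levels) / levels * (upper - lower)

print(discretize([0.1234, -3.9], lower=-5.0, upper=5.0, bits=4))
```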

Using Affine Combinations of BBOB Problems for Performance Assessment

no code implementations • 8 Mar 2023 • Diederick Vermetten, Furong Ye, Carola Doerr

By analyzing performance trajectories on more function combinations, we also show that aspects such as the scaling of objective functions and placement of the optimum can greatly impact how these results are interpreted.

Benchmarking

Non-Elitist Selection Can Improve the Performance of Irace

1 code implementation • 17 Mar 2022 • Furong Ye, Diederick L. Vermetten, Carola Doerr, Thomas Bäck

In addition, the obtained results indicate that non-elitist selection can find diverse algorithm configurations, which encourages us to explore a wider range of solutions to understand the behavior of algorithms.

Bayesian Optimization • Evolutionary Algorithms

IOHexperimenter: Benchmarking Platform for Iterative Optimization Heuristics

1 code implementation • 7 Nov 2021 • Jacob de Nobel, Furong Ye, Diederick Vermetten, Hao Wang, Carola Doerr, Thomas Bäck

IOHexperimenter can be used as a stand-alone tool or as part of a benchmarking pipeline that uses other components of IOHprofiler such as IOHanalyzer, the module for interactive performance analysis and visualization.

Bayesian Optimization • Benchmarking
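
A minimal benchmarking loop might look like the sketch below, assuming the `ioh` Python bindings of IOHexperimenter (`pip install ioh`); exact class and attribute names may differ between versions, and the random search is only a placeholder algorithm.

```python
import numpy as np
import ioh  # Python bindings of IOHexperimenter (assumed API)

problem = ioh.get_problem("Sphere", instance=1, dimension=5)
logger = ioh.logger.Analyzer(root="data", folder_name="run",
                             algorithm_name="random_search")
problem.attach_logger(logger)  # logged data can be loaded into IOHanalyzer

for _ in range(1000):  # plain random search as a placeholder
    x = np.random.uniform(problem.bounds.lb, problem.bounds.ub)
    problem(x)  # evaluation is recorded by the attached logger
```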

Automated Configuration of Genetic Algorithms by Tuning for Anytime Performance

1 code implementation • 11 Jun 2021 • Furong Ye, Carola Doerr, Hao Wang, Thomas Bäck

Finding the best configuration of algorithms' hyperparameters for a given optimization problem is an important task in evolutionary computation.

Leveraging Benchmarking Data for Informed One-Shot Dynamic Algorithm Selection

no code implementations • 12 Feb 2021 • Furong Ye, Carola Doerr, Thomas Bäck

What complicates this decision further is that different algorithms may be best suited for different stages of the optimization process.

AutoML • Benchmarking +1

IOHanalyzer: Detailed Performance Analyses for Iterative Optimization Heuristics

3 code implementations • 8 Jul 2020 • Hao Wang, Diederick Vermetten, Furong Ye, Carola Doerr, Thomas Bäck

An R programming interface is provided for users who prefer finer control over the implemented functionalities.

Bayesian Optimization • Benchmarking +1

Benchmarking a $(\mu+\lambda)$ Genetic Algorithm with Configurable Crossover Probability

no code implementations • 10 Jun 2020 • Furong Ye, Hao Wang, Carola Doerr, Thomas Bäck

Moreover, we observe that the "fast" mutation scheme, with its power-law distributed mutation strengths, outperforms standard bit mutation on complex optimization tasks when combined with crossover, but performs worse in the absence of crossover.

Benchmarking
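
The "fast" (heavy-tailed) mutation operator mentioned above can be sketched as follows: the mutation strength k is drawn from a power-law distribution, so large jumps occur far more often than under standard bit mutation. This is an illustrative reimplementation, not the paper's code.

```python
import numpy as np

def fast_mutation(parent, beta=1.5, rng=np.random.default_rng()):
    """Draw the mutation strength k from a power-law distribution with
    exponent beta, then flip exactly k uniformly chosen bits."""
    n = len(parent)
    ks = np.arange(1, n // 2 + 1)
    probs = ks ** (-beta)
    k = rng.choice(ks, p=probs / probs.sum())
    child = parent.copy()
    flip = rng.choice(n, size=k, replace=False)
    child[flip] ^= 1
    return child

parent = np.zeros(50, dtype=int)
print(fast_mutation(parent).sum())  # number of bits flipped
```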

Benchmarking Discrete Optimization Heuristics with IOHprofiler

no code implementations • 19 Dec 2019 • Carola Doerr, Furong Ye, Naama Horesh, Hao Wang, Ofer M. Shir, Thomas Bäck

Automated benchmarking environments aim to support researchers in understanding how different algorithms perform on different types of optimization problems.

Benchmarking

Interpolating Local and Global Search by Controlling the Variance of Standard Bit Mutation

no code implementations • 17 Jan 2019 • Furong Ye, Carola Doerr, Thomas Bäck

We introduce in this work a simple way to interpolate between the random global search of EAs and their deterministic counterparts, which sample from a fixed radius only.

Evolutionary Algorithms
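
The interpolation idea can be sketched by sampling the mutation strength from a normal distribution whose variance is a free parameter: zero variance gives deterministic radius-r local search, while a binomial-like variance recovers the global behavior of standard bit mutation. The code below is illustrative, not the authors' implementation.

```python
import numpy as np

def variance_controlled_mutation(parent, r=1.0, sigma=0.0,
                                 rng=np.random.default_rng()):
    """Sample the number of flipped bits from N(r, sigma^2).
    sigma = 0 recovers deterministic radius-r local search; sigma near
    sqrt(r * (1 - r/n)) mimics the binomial spread of standard bit
    mutation, i.e. global search."""
    n = len(parent)
    k = int(np.clip(round(rng.normal(r, sigma)), 1, n))
    child = parent.copy()
    flip = rng.choice(n, size=k, replace=False)
    child[flip] ^= 1
    return child

parent = np.zeros(100, dtype=int)
print(variance_controlled_mutation(parent, r=3, sigma=1.0).sum())
```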

IOHprofiler: A Benchmarking and Profiling Tool for Iterative Optimization Heuristics

5 code implementations • 11 Oct 2018 • Carola Doerr, Hao Wang, Furong Ye, Sander van Rijn, Thomas Bäck

Given algorithms and problems written in C or Python as input, it provides as output a statistical evaluation of the algorithms' performance by means of the distributions of fixed-target running times and fixed-budget function values.

Benchmarking
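
The two performance views named here are easy to state in code: the fixed-target running time asks how many evaluations a run needed to reach a quality target, while the fixed-budget value asks how good the run was after a given number of evaluations. A toy sketch (maximization, illustrative names):

```python
def fixed_target_runtime(trajectory, target):
    """First evaluation count at which the best-so-far value reached
    the target, or None if the run never hit it."""
    for evals, best in trajectory:
        if best >= target:
            return evals
    return None

def fixed_budget_value(trajectory, budget):
    """Best-so-far value after at most `budget` evaluations."""
    best = None
    for evals, value in trajectory:
        if evals > budget:
            break
        best = value
    return best

run = [(1, 3), (10, 5), (40, 8), (200, 10)]  # (evaluations, best-so-far)
print(fixed_target_runtime(run, target=8))   # 40
print(fixed_budget_value(run, budget=100))   # 8
```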

Towards a Theory-Guided Benchmarking Suite for Discrete Black-Box Optimization Heuristics: Profiling $(1+\lambda)$ EA Variants on OneMax and LeadingOnes

no code implementations • 17 Aug 2018 • Carola Doerr, Furong Ye, Sander van Rijn, Hao Wang, Thomas Bäck

Marking an important step towards filling this gap, we adjust the COCO software to pseudo-Boolean optimization problems, and obtain from this a benchmarking environment that allows a fine-grained empirical analysis of discrete black-box heuristics.

Benchmarking • Evolutionary Algorithms
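
OneMax (the number of 1-bits) and LeadingOnes (the length of the leading all-1 prefix) are the two problems profiled here. A compact, illustrative (1+λ) EA with standard bit mutation, not tied to the paper's setup, might look like:

```python
import numpy as np

onemax = lambda x: int(x.sum())  # maximize the number of 1-bits

def leadingones(x):
    """Length of the prefix of consecutive 1-bits."""
    zeros = np.flatnonzero(x == 0)
    return int(zeros[0]) if zeros.size else len(x)

def one_plus_lambda_ea(f, n=50, lam=4, budget=10_000,
                       rng=np.random.default_rng()):
    """(1+lambda) EA sketch: lam offspring per generation via standard
    bit mutation (flip each bit with probability 1/n); the parent is
    replaced when the best offspring is at least as good."""
    x = rng.integers(0, 2, n)
    fx, evals = f(x), 1
    while evals < budget and fx < n:
        offspring = [np.where(rng.random(n) < 1 / n, 1 - x, x)
                     for _ in range(lam)]
        scored = [(f(y), y) for y in offspring]
        evals += lam
        fy, y = max(scored, key=lambda t: t[0])
        if fy >= fx:
            x, fx = y, fy
    return x, fx

print(one_plus_lambda_ea(onemax)[1])  # typically reaches the optimum 50
```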
