Search Results for author: Shuhei Watanabe

Found 9 papers, 8 papers with code

Derivation of Closed Form of Expected Improvement for Gaussian Process Trained on Log-Transformed Objective

no code implementations • 27 Nov 2024 • Shuhei Watanabe

Expected Improvement (EI) is arguably the most widely used acquisition function in Bayesian optimization.

Bayesian Optimization
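
The standard closed form of EI under a Gaussian posterior (for minimization) is well known; the short sketch below computes it with NumPy/SciPy. This is the vanilla formula, not the log-transformed variant derived in the paper, and the names (expected_improvement, mu, sigma, f_best) are illustrative only.

import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    """Standard closed-form EI for minimization under a Gaussian posterior.

    mu, sigma: posterior mean and standard deviation at the candidate point(s).
    f_best: best (lowest) objective value observed so far.
    """
    mu = np.asarray(mu, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    z = (f_best - mu) / np.maximum(sigma, 1e-12)  # guard against zero variance
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# Example: EI at a point with posterior N(0.4, 0.2**2), given best observed value 0.5
print(expected_improvement(0.4, 0.2, 0.5))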

Fast Benchmarking of Asynchronous Multi-Fidelity Optimization on Zero-Cost Benchmarks

2 code implementations • 4 Mar 2024 • Shuhei Watanabe, Neeratyoy Mallik, Edward Bergman, Frank Hutter

While deep learning has celebrated many successes, its results often hinge on the meticulous selection of hyperparameters (HPs).

Benchmarking

Python Wrapper for Simulating Multi-Fidelity Optimization on HPO Benchmarks without Any Wait

1 code implementation • 27 May 2023 • Shuhei Watanabe

However, since the actual runtime of a DL training run differs significantly from its query response time, simulators of asynchronous HPO, e.g. multi-fidelity optimization, must wait for the actual runtime at each iteration in a naïve implementation; otherwise, the evaluation order during simulation does not match that of the real experiment.
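
A minimal sketch of the general idea described above: instead of sleeping for the real training time, a simulator can advance per-worker virtual clocks using runtimes returned by a tabular or surrogate benchmark, and order results by simulated finish time. This is an illustrative toy, not the paper's actual wrapper API; suggest and benchmark_runtime are hypothetical callables supplied by the user.

import heapq

def simulate_async_hpo(suggest, benchmark_runtime, n_workers=4, n_evals=20):
    # Toy sketch: replay asynchronous HPO by advancing virtual worker clocks
    # instead of waiting for real training runs.
    workers = [(0.0, i) for i in range(n_workers)]  # (time worker becomes free, worker id)
    heapq.heapify(workers)
    history = []  # entries: (simulated finish time, config, loss)
    for _ in range(n_evals):
        free_at, worker = heapq.heappop(workers)            # earliest available worker
        visible = [h for h in history if h[0] <= free_at]   # only results finished by now
        config = suggest(visible)
        loss, runtime = benchmark_runtime(config)           # queried from a benchmark, not trained
        finish = free_at + runtime                          # advance the virtual clock only
        history.append((finish, config, loss))
        heapq.heappush(workers, (finish, worker))
    return sorted(history)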

Python Tool for Visualizing Variability of Pareto Fronts over Multiple Runs

1 code implementation • 15 May 2023 • Shuhei Watanabe

Hyperparameter optimization is crucial to achieving high performance in deep learning.

Hyperparameter Optimization

PED-ANOVA: Efficiently Quantifying Hyperparameter Importance in Arbitrary Subspaces

1 code implementation • 20 Apr 2023 • Shuhei Watanabe, Archit Bansal, Frank Hutter

The recent rise in popularity of Hyperparameter Optimization (HPO) for deep learning has highlighted the role that good hyperparameter (HP) space design can play in training strong models.

Hyperparameter Optimization

c-TPE: Tree-structured Parzen Estimator with Inequality Constraints for Expensive Hyperparameter Optimization

1 code implementation • 26 Nov 2022 • Shuhei Watanabe, Frank Hutter

In this work, we propose constrained TPE (c-TPE), an extension of the widely used and versatile Bayesian optimization method, the tree-structured Parzen estimator (TPE), to handle such inequality constraints.

Bayesian Optimization • Hyperparameter Optimization
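
For context, a minimal sketch of the vanilla TPE acquisition that c-TPE builds on: observations are split into a "good" and a "bad" group at a quantile of the objective, a density is fit to each group, and the candidate maximizing the density ratio l(x)/g(x) is selected. c-TPE additionally models the constraints, which this unconstrained one-dimensional sketch omits; all names are illustrative, and each group is assumed to contain at least two distinct observations.

import numpy as np
from scipy.stats import gaussian_kde

def tpe_suggest(xs, ys, n_candidates=64, gamma=0.15, rng=None):
    # Split observations by the gamma-quantile of the objective (minimization),
    # fit a KDE to each group, and return the candidate with the best density ratio.
    rng = rng or np.random.default_rng()
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    tau = np.quantile(ys, gamma)
    good, bad = xs[ys <= tau], xs[ys > tau]        # "good" and "bad" observations
    l, g = gaussian_kde(good), gaussian_kde(bad)   # requires >= 2 distinct points per group
    cands = rng.uniform(xs.min(), xs.max(), size=n_candidates)
    ratio = l(cands) / np.maximum(g(cands), 1e-12)
    return cands[np.argmax(ratio)]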

Warm Starting CMA-ES for Hyperparameter Optimization

2 code implementations • 13 Dec 2020 • Masahiro Nomura, Shuhei Watanabe, Youhei Akimoto, Yoshihiko Ozaki, Masaki Onishi

Hyperparameter optimization (HPO), formulated as black-box optimization (BBO), is recognized as essential for automation and high performance of machine learning approaches.

Bayesian Optimization • Hyperparameter Optimization +1
