Hyperparameter Optimization

278 papers with code • 1 benchmark • 3 datasets

Hyperparameter Optimization is the problem of choosing a set of optimal hyperparameters for a learning algorithm. How well an algorithm fits the data depends directly on these hyperparameters, which govern the balance between overfitting and underfitting. Different models, data types, and loss functions call for different assumptions, weightings, and training schedules, so suitable hyperparameter values must be searched for rather than fixed in advance.

Source: Data-driven model for fracturing design optimization: focus on building digital database and production forecast
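A minimal sketch of the problem in practice, assuming scikit-learn's RandomizedSearchCV and an SVM classifier as the tuned model; any other search strategy or estimator could be substituted.

```python
# Hyperparameter optimization via random search over an SVM's C and gamma,
# which control the over/underfitting trade-off described above.
from scipy.stats import loguniform
from sklearn.datasets import load_digits
from sklearn.model_selection import RandomizedSearchCV
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)

param_distributions = {
    "C": loguniform(1e-2, 1e3),        # regularization strength
    "gamma": loguniform(1e-4, 1e-1),   # RBF kernel width
}

search = RandomizedSearchCV(SVC(), param_distributions,
                            n_iter=20, cv=3, random_state=0)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```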

Libraries

Use these libraries to find Hyperparameter Optimization models and implementations
See all 14 libraries.

Most implemented papers

Practical Bayesian Optimization of Machine Learning Algorithms

HIPS/Spearmint NeurIPS 2012

In this work, we consider the automatic tuning problem within the framework of Bayesian optimization, in which a learning algorithm's generalization performance is modeled as a sample from a Gaussian process (GP).
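The same idea, modeling validation performance with a GP and picking the next configuration by an acquisition function, can be sketched with scikit-optimize's gp_minimize; this is an illustration of the approach, not the HIPS/Spearmint API itself.

```python
# GP-based Bayesian optimization of an SVM's hyperparameters (illustrative).
from skopt import gp_minimize
from skopt.space import Real
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)

def objective(params):
    # gp_minimize minimizes, so return negative cross-validated accuracy.
    C, gamma = params
    return -cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

space = [Real(1e-2, 1e3, prior="log-uniform", name="C"),
         Real(1e-4, 1e-1, prior="log-uniform", name="gamma")]

result = gp_minimize(objective, space, n_calls=25, random_state=0)
print(result.x, -result.fun)
```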

Scalable Bayesian Optimization Using Deep Neural Networks

automl/pybnn 19 Feb 2015

Bayesian optimization is an effective methodology for the global optimization of functions with expensive evaluations.

BOHB: Robust and Efficient Hyperparameter Optimization at Scale

automl/HpBandSter ICML 2018

Modern deep learning methods are very sensitive to many hyperparameters, and, due to the long training times of state-of-the-art models, vanilla Bayesian hyperparameter optimization is typically computationally infeasible.
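BOHB combines Hyperband-style budget allocation with Bayesian optimization. The budget-allocation idea alone can be sketched as a bare successive-halving loop in plain Python; the real HpBandSter implementation adds Hyperband brackets and a model-based sampler, and the toy objective below is an assumption for illustration.

```python
# Successive halving: evaluate many configs on a small budget, keep the best
# 1/eta fraction, and give the survivors eta times more budget.
import random

def evaluate(config, budget):
    # Stand-in for training `config` for `budget` epochs and returning a
    # validation loss; replace with a real training routine.
    return (config["lr"] - 0.1) ** 2 + random.gauss(0, 0.01) / budget

def successive_halving(n_configs=27, min_budget=1, eta=3, rounds=3):
    configs = [{"lr": 10 ** random.uniform(-4, 0)} for _ in range(n_configs)]
    budget = min_budget
    for _ in range(rounds):
        scored = sorted((evaluate(c, budget), c) for c in configs)
        configs = [c for _, c in scored[: max(1, len(configs) // eta)]]
        budget *= eta
    return configs[0]

print(successive_halving())
```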

Tune: A Research Platform for Distributed Model Selection and Training

ray-project/ray 13 Jul 2018

We show that this interface meets the requirements for a broad range of hyperparameter search algorithms, allows straightforward scaling of search to large clusters, and simplifies algorithm implementation.
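A hedged sketch of that interface using the classic ray.tune API (a function trainable passed to tune.run); exact names and defaults vary across Ray versions.

```python
# Distributed-ready hyperparameter search with Ray Tune (classic API).
from ray import tune

def trainable(config):
    # Stand-in objective: report a "loss" that depends on the sampled lr.
    loss = (config["lr"] - 0.1) ** 2
    tune.report(loss=loss)

analysis = tune.run(
    trainable,
    config={"lr": tune.loguniform(1e-4, 1e-1)},
    num_samples=20,
)
print(analysis.get_best_config(metric="loss", mode="min"))
```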

Benchmarking Automatic Machine Learning Frameworks

EpistasisLab/tpot 17 Aug 2018

AutoML serves as the bridge between varying levels of expertise when designing machine learning systems and expedites the data science process.

Random Search and Reproducibility for Neural Architecture Search

microsoft/nn-meter 20 Feb 2019

Neural architecture search (NAS) is a promising research direction that has the potential to replace expert-designed networks with learned, task-specific architectures.

Evaluation of a Tree-based Pipeline Optimization Tool for Automating Data Science

rhiever/tpot 20 Mar 2016

As the field of data science continues to grow, there will be an ever-increasing demand for tools that make machine learning accessible to non-experts.
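A minimal TPOT usage sketch, assuming the standard TPOTClassifier API; the generation and population sizes are illustrative only.

```python
# TPOT evolves a full scikit-learn pipeline (preprocessing + model +
# hyperparameters) with genetic programming.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from tpot import TPOTClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

tpot = TPOTClassifier(generations=5, population_size=20,
                      verbosity=2, random_state=0)
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))
tpot.export("tpot_pipeline.py")  # write the best pipeline as plain sklearn code
```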

Online Learning Rate Adaptation with Hypergradient Descent

gbaydin/hypergradient-descent ICLR 2018

We introduce a general method for improving the convergence rate of gradient-based optimizers that is easy to implement and works well in practice.
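The core idea for the SGD variant is to adapt the learning rate online from the dot product of consecutive gradients. A NumPy sketch on a toy quadratic objective (the objective is an assumption for illustration):

```python
# Hypergradient descent (SGD-HD style): alpha grows when successive
# gradients point the same way and shrinks when they disagree.
import numpy as np

def grad(theta):
    # Gradient of the toy objective f(theta) = 0.5 * ||theta - 1||^2.
    return theta - 1.0

theta = np.zeros(5)
alpha, beta = 0.001, 0.0001   # initial learning rate, hyper-learning-rate
prev_grad = np.zeros_like(theta)

for step in range(100):
    g = grad(theta)
    alpha += beta * g.dot(prev_grad)   # hypergradient update of the learning rate
    theta -= alpha * g
    prev_grad = g

print(alpha, np.linalg.norm(theta - 1.0))
```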

Automatic Gradient Boosting

ja-thomas/autoxgboost 10 Jul 2018

Automatic machine learning performs predictive modeling with high performing machine learning tools without human interference.

Self-Tuning Networks: Bilevel Optimization of Hyperparameters using Structured Best-Response Functions

asteroidhouse/self-tuning-networks ICLR 2019

Empirically, our approach outperforms competing hyperparameter optimization methods on large-scale deep learning problems.