Search Results for author: Frank Hutter

Found 100 papers, 67 papers with code

NAS-Bench-x11 and the Power of Learning Curves

no code implementations NeurIPS 2021 Shen Yan, Colin White, Yash Savani, Frank Hutter

While early research in neural architecture search (NAS) required extreme computational resources, the recent releases of tabular and surrogate benchmarks have greatly increased the speed and reproducibility of NAS research.

Neural Architecture Search

CARL: A Benchmark for Contextual and Adaptive Reinforcement Learning

1 code implementation 5 Oct 2021 Carolin Benjamins, Theresa Eimer, Frederik Schubert, André Biedenkapp, Bodo Rosenhahn, Frank Hutter, Marius Lindauer

While Reinforcement Learning has made great strides towards solving ever more complicated tasks, many algorithms are still brittle to even slight changes in their environment.

Physical Simulations Representation Learning

Multi-headed Neural Ensemble Search

no code implementations 9 Jul 2021 Ashwin Raaghav Narayanan, Arber Zela, Tonmoy Saikia, Thomas Brox, Frank Hutter

Ensembles of CNN models trained with different seeds (also known as Deep Ensembles) are known to achieve superior performance over a single copy of the CNN.
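
Deep Ensembles are simple to reproduce from this description alone; below is a minimal sketch (the CNN builder, seed count, and dummy batch are illustrative placeholders, not the paper's setup):

```python
import torch

def build_cnn():
    # Hypothetical stand-in for whatever CNN architecture is being ensembled.
    return torch.nn.Sequential(
        torch.nn.Conv2d(3, 16, 3, padding=1), torch.nn.ReLU(),
        torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(),
        torch.nn.Linear(16, 10),
    )

# A Deep Ensemble trains M copies of the same network, varying only the seed.
models = []
for seed in range(3):
    torch.manual_seed(seed)          # different init (and data order) per member
    model = build_cnn()
    # ... train `model` on the task as usual ...
    models.append(model)

def deep_ensemble_predict(models, x):
    # Prediction averages the members' softmax outputs.
    with torch.no_grad():
        probs = torch.stack([torch.softmax(m(x), dim=-1) for m in models])
    return probs.mean(dim=0)

ensemble_probs = deep_ensemble_predict(models, torch.randn(8, 3, 32, 32))
```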

Bag of Tricks for Neural Architecture Search

no code implementations 8 Jul 2021 Thomas Elsken, Benedikt Staffler, Arber Zela, Jan Hendrik Metzen, Frank Hutter

While neural architecture search methods have been successful in previous years and led to new state-of-the-art performance on various problems, they have also been criticized for being unstable, highly sensitive to their hyperparameters, and often no better than random search.

Neural Architecture Search

Well-tuned Simple Nets Excel on Tabular Datasets

1 code implementation NeurIPS 2021 Arlind Kadra, Marius Lindauer, Frank Hutter, Josif Grabocka

Tabular datasets are the last "unconquered castle" for deep learning, with traditional ML methods like Gradient-Boosted Decision Trees still performing strongly even against recent specialized neural architectures.

TempoRL: Learning When to Act

1 code implementation 9 Jun 2021 André Biedenkapp, Raghu Rajan, Frank Hutter, Marius Lindauer

Reinforcement learning is a powerful approach to learn behaviour through interactions with an environment.

Q-Learning

Self-Paced Context Evaluation for Contextual Reinforcement Learning

1 code implementation 9 Jun 2021 Theresa Eimer, André Biedenkapp, Frank Hutter, Marius Lindauer

Reinforcement learning (RL) has made great advances in solving individual problems in a given environment, but learning policies that generalize to unseen variations of a problem remains challenging.

DEHB: Evolutionary Hyperband for Scalable, Robust and Efficient Hyperparameter Optimization

1 code implementation 20 May 2021 Noor Awad, Neeratyoy Mallik, Frank Hutter

Modern machine learning algorithms crucially rely on several design decisions to achieve strong performance, making the problem of Hyperparameter Optimization (HPO) more important than ever.

Hyperparameter Optimization Neural Architecture Search

DACBench: A Benchmark Library for Dynamic Algorithm Configuration

1 code implementation 18 May 2021 Theresa Eimer, André Biedenkapp, Maximilian Reimer, Steven Adriaensen, Frank Hutter, Marius Lindauer

Dynamic Algorithm Configuration (DAC) aims to dynamically control a target algorithm's hyperparameters in order to improve its performance.

Bag of Baselines for Multi-objective Joint Neural Architecture Search and Hyperparameter Optimization

1 code implementation ICML Workshop AutoML 2021 Julia Guerrero-Viu, Sven Hauns, Sergio Izquierdo, Guilherme Miotto, Simon Schrodi, Andre Biedenkapp, Thomas Elsken, Difan Deng, Marius Lindauer, Frank Hutter

Neural architecture search (NAS) and hyperparameter optimization (HPO) make deep learning accessible to non-experts by automatically finding the architecture of the deep neural network to use and tuning the hyperparameters of the training pipeline.

Hyperparameter Optimization Neural Architecture Search

How Powerful are Performance Predictors in Neural Architecture Search?

1 code implementation NeurIPS 2021 Colin White, Arber Zela, Binxin Ru, Yang Liu, Frank Hutter

Early methods in the rapidly developing field of neural architecture search (NAS) required fully training thousands of neural networks.

Neural Architecture Search

TrivialAugment: Tuning-free Yet State-of-the-Art Data Augmentation

1 code implementation ICCV 2021 Samuel G. Müller, Frank Hutter

Automatic augmentation methods have recently become a crucial pillar for strong model performance in vision tasks.

Data Augmentation Image Classification

On the Importance of Hyperparameter Optimization for Model-based Reinforcement Learning

1 code implementation 26 Feb 2021 Baohe Zhang, Raghu Rajan, Luis Pineda, Nathan Lambert, André Biedenkapp, Kurtland Chua, Frank Hutter, Roberto Calandra

We demonstrate that this problem can be tackled effectively with automated HPO, which yields significantly improved performance compared to human experts.

Hyperparameter Optimization Model-based Reinforcement Learning

Learning Synthetic Environments for Reinforcement Learning with Evolution Strategies

1 code implementation 24 Jan 2021 Fabio Ferreira, Thomas Nierhoff, Frank Hutter

This work explores learning agent-agnostic synthetic environments (SEs) for Reinforcement Learning.

Acrobot

Regularization Cocktails

no code implementations1 Jan 2021 Arlind Kadra, Marius Lindauer, Frank Hutter, Josif Grabocka

The regularization of prediction models is arguably the most crucial ingredient that allows Machine Learning solutions to generalize well on unseen data.

Hyperparameter Optimization

NASLib: A Modular and Flexible Neural Architecture Search Library

no code implementations1 Jan 2021 Michael Ruchte, Arber Zela, Julien Niklas Siems, Josif Grabocka, Frank Hutter

Neural Architecture Search (NAS) is one of the focal points for the Deep Learning community, but reproducing NAS methods is extremely challenging due to numerous low-level implementation details.

Neural Architecture Search

Differential Evolution for Neural Architecture Search

1 code implementation 11 Dec 2020 Noor Awad, Neeratyoy Mallik, Frank Hutter

Neural architecture search (NAS) methods rely on a search strategy for deciding which architectures to evaluate next and a performance estimation strategy for assessing their performance (e.g., using full evaluations, multi-fidelity evaluations, or the one-shot model).
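
For readers unfamiliar with the strategy in the title: the classic DE/rand/1/bin step below shows differential mutation, crossover, and greedy selection on a continuous encoding. It is a textbook sketch, not the paper's exact NAS parameterization:

```python
import numpy as np

def de_step(pop, fitness, f=0.5, cr=0.9, rng=np.random.default_rng(0)):
    """One generation of DE/rand/1/bin over a real-valued population."""
    n, d = pop.shape
    new_pop = pop.copy()
    for i in range(n):
        r1, r2, r3 = rng.choice([j for j in range(n) if j != i], 3, replace=False)
        mutant = pop[r1] + f * (pop[r2] - pop[r3])   # differential mutation
        cross = rng.random(d) < cr                   # binomial crossover mask
        cross[rng.integers(d)] = True                # guarantee one mutated gene
        trial = np.where(cross, mutant, pop[i])
        if fitness(trial) <= fitness(pop[i]):        # greedy 1-to-1 selection
            new_pop[i] = trial
    return new_pop

# Toy usage: minimize the sphere function in 4 dimensions.
pop = np.random.default_rng(1).uniform(-5, 5, size=(20, 4))
for _ in range(50):
    pop = de_step(pop, lambda x: float(np.sum(x ** 2)))
```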

Neural Architecture Search

Convergence Analysis of Homotopy-SGD for non-convex optimization

no code implementations 20 Nov 2020 Matilde Gargiani, Andrea Zanelli, Quoc Tran-Dinh, Moritz Diehl, Frank Hutter

In this work, we present a first-order stochastic algorithm based on a combination of homotopy methods and SGD, called Homotopy-Stochastic Gradient Descent (H-SGD), which finds interesting connections with some proposed heuristics in the literature, e.g., optimization by Gaussian continuation, training by diffusion, and mollifying networks.

Hyperparameter Transfer Across Developer Adjustments

1 code implementation 25 Oct 2020 Danny Stoll, Jörg K. H. Franke, Diane Wagner, Simon Selg, Frank Hutter

After developer adjustments to a machine learning (ML) algorithm, how can the results of an old hyperparameter optimization (HPO) automatically be used to speed up a new HPO?

Hyperparameter Optimization

On the Importance of Domain Model Configuration for Automated Planning Engines

no code implementations 15 Oct 2020 Mauro Vallati, Lukas Chrpa, Thomas L. McCluskey, Frank Hutter

The development of domain-independent planners within the AI Planning community is leading to "off-the-shelf" technology that can be used in a wide range of applications.

Smooth Variational Graph Embeddings for Efficient Neural Architecture Search

2 code implementations 9 Oct 2020 Jovita Lukasik, David Friede, Arber Zela, Frank Hutter, Margret Keuper

We evaluate the proposed approach on neural architectures defined by the ENAS approach, the NAS-Bench-101 and the NAS-Bench-201 search spaces, and show that our smooth embedding space allows us to extrapolate performance predictions to architectures outside the seen domain (e.g., with more operations).

Neural Architecture Search

Neural Model-based Optimization with Right-Censored Observations

no code implementations 29 Sep 2020 Katharina Eggensperger, Kai Haase, Philipp Müller, Marius Lindauer, Frank Hutter

When fitting a regression model to predict the distribution of the outcomes, we cannot simply drop these right-censored observations, but need to properly model them.
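
The standard way to "properly model" such observations is to score censored points with the survival function instead of the density. Assuming a parametric outcome distribution with density $f_\theta$ and CDF $F_\theta$ (a generic construction, not the paper's specific neural parameterization), the log-likelihood becomes

$$\log \mathcal{L}(\theta) \;=\; \sum_{i\,\in\,\text{observed}} \log f_\theta(y_i) \;+\; \sum_{j\,\in\,\text{censored}} \log\bigl(1 - F_\theta(c_j)\bigr),$$

where $c_j$ is the censoring threshold (e.g., the runtime cutoff) at which observation $j$ was stopped.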

Prior-guided Bayesian Optimization

no code implementations 28 Sep 2020 Artur Souza, Luigi Nardi, Leonardo Oliveira, Kunle Olukotun, Marius Lindauer, Frank Hutter

While Bayesian Optimization (BO) is a very popular method for optimizing expensive black-box functions, it fails to leverage the experience of domain experts.

MDP Playground: Controlling Orthogonal Dimensions of Hardness in Toy Environments

no code implementations 28 Sep 2020 Raghu Rajan, Jessica Lizeth Borja Diaz, Suresh Guttikonda, Fabio Ferreira, André Biedenkapp, Frank Hutter

We present MDP Playground, an efficient benchmark for Reinforcement Learning (RL) algorithms with various dimensions of hardness that can be controlled independently to challenge algorithms in different ways and to obtain varying degrees of hardness in generated environments.

OpenAI Gym

Sample-Efficient Automated Deep Reinforcement Learning

1 code implementation ICLR 2021 Jörg K. H. Franke, Gregor Köhler, André Biedenkapp, Frank Hutter

Despite significant progress in challenging problems across various domains, applying state-of-the-art deep reinforcement learning (RL) algorithms remains challenging due to their sensitivity to the choice of hyperparameters.

Hyperparameter Optimization

NAS-Bench-301 and the Case for Surrogate Benchmarks for Neural Architecture Search

1 code implementation 22 Aug 2020 Julien Siems, Lucas Zimmer, Arber Zela, Jovita Lukasik, Margret Keuper, Frank Hutter

To overcome this fundamental limitation, we propose NAS-Bench-301, the first surrogate NAS benchmark, using a search space containing $10^{18}$ architectures, many orders of magnitude larger than any previous tabular NAS benchmark.

Neural Architecture Search

Auto-Sklearn 2.0: Hands-free AutoML via Meta-Learning

4 code implementations 8 Jul 2020 Matthias Feurer, Katharina Eggensperger, Stefan Falkner, Marius Lindauer, Frank Hutter

Automated Machine Learning (AutoML) supports practitioners and researchers with the tedious task of designing machine learning pipelines and has recently achieved substantial success.

AutoML Meta-Learning

Bayesian Optimization with a Prior for the Optimum

no code implementations 25 Jun 2020 Artur Souza, Luigi Nardi, Leonardo B. Oliveira, Kunle Olukotun, Marius Lindauer, Frank Hutter

We show that BOPrO is around 6.67x faster than state-of-the-art methods on a common suite of benchmarks, and achieves a new state-of-the-art performance on a real-world hardware design application.

Auto-PyTorch Tabular: Multi-Fidelity MetaLearning for Efficient and Robust AutoDL

2 code implementations 24 Jun 2020 Lucas Zimmer, Marius Lindauer, Frank Hutter

While early AutoML frameworks focused on optimizing traditional ML pipelines and their hyperparameters, a recent trend in AutoML is to focus on neural architecture search.

Neural Architecture Search

Learning Heuristic Selection with Dynamic Algorithm Configuration

1 code implementation 15 Jun 2020 David Speck, André Biedenkapp, Frank Hutter, Robert Mattmüller, Marius Lindauer

We show that dynamic algorithm configuration can be used for dynamic heuristic selection which takes into account the internal search dynamics of a planning system.

Neural Ensemble Search for Uncertainty Estimation and Dataset Shift

1 code implementation NeurIPS 2021 Sheheryar Zaidi, Arber Zela, Thomas Elsken, Chris Holmes, Frank Hutter, Yee Whye Teh

On a variety of classification tasks and modern architecture search spaces, we show that the resulting ensembles outperform deep ensembles not only in terms of accuracy but also uncertainty calibration and robustness to dataset shift.

Image Classification Neural Architecture Search

On the Promise of the Stochastic Generalized Gauss-Newton Method for Training DNNs

1 code implementation 3 Jun 2020 Matilde Gargiani, Andrea Zanelli, Moritz Diehl, Frank Hutter

This enables researchers to further study and improve this promising optimization technique and hopefully reconsider stochastic second-order methods as competitive optimization techniques for training DNNs; we also hope that the promise of SGN may lead to forward automatic differentiation being added to TensorFlow or PyTorch.

Dynamic Algorithm Configuration: Foundation of a New Meta-Algorithmic Framework

1 code implementation 1 Jun 2020 André Biedenkapp, H. Furkan Bozkurt, Theresa Eimer, Frank Hutter, Marius Lindauer

The performance of many algorithms in the fields of hard combinatorial problem solving, machine learning or AI in general depends on parameter tuning.

Transferring Optimality Across Data Distributions via Homotopy Methods

no code implementations ICLR 2020 Matilde Gargiani, Andrea Zanelli, Quoc Tran Dinh, Moritz Diehl, Frank Hutter

Homotopy methods, also known as continuation methods, are a powerful mathematical tool to efficiently solve various problems in numerical analysis, including complex non-convex optimization problems where little or no prior knowledge regarding the localization of the solutions is available.

Fine-tuning

Machine-Learning-Based Diagnostics of EEG Pathology

1 code implementation 11 Feb 2020 Lukas Alexander Wilhelm Gemein, Robin Tibor Schirrmeister, Patryk Chrabąszcz, Daniel Wilson, Joschka Boedecker, Andreas Schulze-Bonhage, Frank Hutter, Tonio Ball

The results demonstrate that the proposed feature-based decoding framework can achieve accuracies on the same level as state-of-the-art deep neural networks.

EEG

NAS-Bench-1Shot1: Benchmarking and Dissecting One-shot Neural Architecture Search

1 code implementation ICLR 2020 Arber Zela, Julien Siems, Frank Hutter

One-shot neural architecture search (NAS) has played a crucial role in making NAS methods computationally feasible in practice.

Neural Architecture Search

Meta-Learning of Neural Architectures for Few-Shot Learning

1 code implementation CVPR 2020 Thomas Elsken, Benedikt Staffler, Jan Hendrik Metzen, Frank Hutter

The recent progress in neural architecture search (NAS) has allowed scaling the automated design of neural architectures to real-world domains, such as object detection and semantic segmentation.

Few-Shot Learning Neural Architecture Search +2

OpenML-Python: an extensible Python API for OpenML

1 code implementation 6 Nov 2019 Matthias Feurer, Jan N. van Rijn, Arlind Kadra, Pieter Gijsbers, Neeratyoy Mallik, Sahithya Ravi, Andreas Müller, Joaquin Vanschoren, Frank Hutter

It also provides functionality to conduct machine learning experiments, upload the results to OpenML, and reproduce results which are stored on OpenML.

Neural Architecture Evolution in Deep Reinforcement Learning for Continuous Control

no code implementations 28 Oct 2019 Jörg K. H. Franke, Gregor Köhler, Noor Awad, Frank Hutter

Current Deep Reinforcement Learning algorithms still heavily rely on handcrafted neural network architectures.

Continuous Control

Probabilistic Rollouts for Learning Curve Extrapolation Across Hyperparameter Settings

1 code implementation 10 Oct 2019 Matilde Gargiani, Aaron Klein, Stefan Falkner, Frank Hutter

We propose probabilistic models that can extrapolate learning curves of iterative machine learning algorithms, such as stochastic gradient descent for training deep networks, based on training data with variable-length learning curves.
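
As a toy illustration of curve extrapolation (a deliberately simplified stand-in for the paper's probabilistic models), one can fit a saturating power law, a common parametric assumption in this literature, to the observed prefix of a learning curve:

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(t, a, b, c):
    # Accuracy assumed to saturate towards `a` as training proceeds.
    return a - b * np.power(t, -c)

rng = np.random.default_rng(0)
t_seen = np.arange(1, 21)                     # first 20 epochs observed
y_seen = 0.9 - 0.5 * t_seen ** -0.7 + rng.normal(0, 0.005, t_seen.size)

params, _ = curve_fit(power_law, t_seen, y_seen, p0=[0.9, 0.5, 0.5], maxfev=10000)
print("extrapolated accuracy at epoch 100:", power_law(100, *params))
```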

Hyperparameter Optimization

Understanding and Robustifying Differentiable Architecture Search

1 code implementation ICLR 2020 Arber Zela, Thomas Elsken, Tonmoy Saikia, Yassine Marrakchi, Thomas Brox, Frank Hutter

Differentiable Architecture Search (DARTS) has attracted a lot of attention due to its simplicity and small search costs achieved by a continuous relaxation and an approximation of the resulting bi-level optimization problem.

Disparity Estimation Image Classification +1

MDP Playground: A Design and Debug Testbed for Reinforcement Learning

1 code implementation 17 Sep 2019 Raghu Rajan, Jessica Lizeth Borja Diaz, Suresh Guttikonda, Fabio Ferreira, André Biedenkapp, Jan Ole von Hartz, Frank Hutter

We present MDP Playground, an efficient testbed for Reinforcement Learning (RL) agents with orthogonal dimensions that can be controlled independently to challenge agents in different ways and obtain varying degrees of hardness in generated environments.

OpenAI Gym

Best Practices for Scientific Research on Neural Architecture Search

no code implementations 5 Sep 2019 Marius Lindauer, Frank Hutter

Finding a well-performing architecture is often tedious for both DL practitioners and researchers, leading to tremendous interest in the automation of this task by means of neural architecture search (NAS).

Neural Architecture Search

BOAH: A Tool Suite for Multi-Fidelity Bayesian Optimization & Analysis of Hyperparameters

1 code implementation 16 Aug 2019 Marius Lindauer, Katharina Eggensperger, Matthias Feurer, André Biedenkapp, Joshua Marben, Philipp Müller, Frank Hutter

Hyperparameter optimization and neural architecture search can become prohibitively expensive for regular black-box Bayesian optimization because the training and evaluation of a single model can easily take several hours.

Hyperparameter Optimization Neural Architecture Search

Towards White-box Benchmarks for Algorithm Control

no code implementations 18 Jun 2019 André Biedenkapp, H. Furkan Bozkurt, Frank Hutter, Marius Lindauer

The performance of many algorithms in the fields of hard combinatorial problem solving, machine learning or AI in general depends on tuned hyperparameter configurations.

Meta-Surrogate Benchmarking for Hyperparameter Optimization

1 code implementation NeurIPS 2019 Aaron Klein, Zhenwen Dai, Frank Hutter, Neil Lawrence, Javier Gonzalez

Despite the recent progress in hyperparameter optimization (HPO), available benchmarks that resemble real-world scenarios consist of only a few, very large problem instances that are expensive to solve.

Hyperparameter Optimization

Towards Automatically-Tuned Deep Neural Networks

2 code implementations 18 May 2019 Hector Mendoza, Aaron Klein, Matthias Feurer, Jost Tobias Springenberg, Matthias Urban, Michael Burkart, Maximilian Dippel, Marius Lindauer, Frank Hutter

Recent advances in AutoML have led to automated tools that can compete with machine learning experts on supervised learning tasks.

AutoML

AutoDispNet: Improving Disparity Estimation With AutoML

1 code implementation ICCV 2019 Tonmoy Saikia, Yassine Marrakchi, Arber Zela, Frank Hutter, Thomas Brox

In this work, we show how to use and extend existing AutoML techniques to efficiently optimize large-scale U-Net-like encoder-decoder architectures.

Disparity Estimation General Classification +1

Tabular Benchmarks for Joint Architecture and Hyperparameter Optimization

1 code implementation 13 May 2019 Aaron Klein, Frank Hutter

Due to the high computational demands, executing a rigorous comparison between hyperparameter optimization (HPO) methods is often cumbersome.

Hyperparameter Optimization

NAS-Bench-101: Towards Reproducible Neural Architecture Search

4 code implementations 25 Feb 2019 Chris Ying, Aaron Klein, Esteban Real, Eric Christiansen, Kevin Murphy, Frank Hutter

Recent advances in neural architecture search (NAS) demand tremendous computational resources, which makes it difficult to reproduce experiments and imposes a barrier-to-entry to researchers without access to large-scale computation.

Neural Architecture Search

Learning to Design RNA

3 code implementations ICLR 2019 Frederic Runge, Danny Stoll, Stefan Falkner, Frank Hutter

Designing RNA molecules has garnered recent interest in medicine, synthetic biology, biotechnology and bioinformatics since many functional RNA molecules were shown to be involved in regulatory processes for transcription, epigenetics and translation.

Meta-Learning

Neural Architecture Search: A Survey

1 code implementation 16 Aug 2018 Thomas Elsken, Jan Hendrik Metzen, Frank Hutter

Deep Learning has enabled remarkable progress in recent years on a variety of tasks, such as image recognition, speech recognition, and machine translation.

Machine Translation Neural Architecture Search +2

Towards Automated Deep Learning: Efficient Joint Neural Architecture and Hyperparameter Search

3 code implementations 18 Jul 2018 Arber Zela, Aaron Klein, Stefan Falkner, Frank Hutter

While existing work on neural architecture search (NAS) tunes hyperparameters in a separate post-processing step, we demonstrate that architectural choices and other hyperparameter settings interact in a way that can render this separation suboptimal.

Neural Architecture Search

BOHB: Robust and Efficient Hyperparameter Optimization at Scale

3 code implementations ICML 2018 Stefan Falkner, Aaron Klein, Frank Hutter

Modern deep learning methods are very sensitive to many hyperparameters, and, due to the long training times of state-of-the-art models, vanilla Bayesian hyperparameter optimization is typically computationally infeasible.
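
BOHB marries model-based configuration sampling with Hyperband's successive halving to keep costs manageable; the halving subroutine alone, in a generic hedged form (the `evaluate(config, budget)` callable and the toy usage are placeholders), looks like this:

```python
def successive_halving(configs, evaluate, min_budget=1, eta=3, rounds=3):
    """Evaluate many configs at a small budget, then repeatedly keep the
    best 1/eta of them and rerun the survivors at an eta-fold larger budget."""
    budget = min_budget
    for _ in range(rounds):
        scored = sorted(configs, key=lambda c: evaluate(c, budget))  # low loss first
        configs = scored[: max(1, len(scored) // eta)]
        budget *= eta
    return configs[0]

# Toy usage: `budget` plays the role of training epochs.
best = successive_halving(
    configs=[{"lr": 10 ** -i} for i in range(1, 10)],
    evaluate=lambda cfg, b: abs(cfg["lr"] - 1e-3) / b,  # dummy validation loss
)
```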

Hyperparameter Optimization

Training Generative Reversible Networks

1 code implementation 5 Jun 2018 Robin Tibor Schirrmeister, Patryk Chrabąszcz, Frank Hutter, Tonio Ball

This first attempt to use RevNets inside the adversarial autoencoder framework slightly underperformed relative to recent advanced generative models using an autoencoder component on CelebA, but this gap may diminish with further optimization of the training setup of generative RevNets.

Maximizing acquisition functions for Bayesian optimization

1 code implementation NeurIPS 2018 James T. Wilson, Frank Hutter, Marc Peter Deisenroth

Bayesian optimization is a sample-efficient approach to global optimization that relies on theoretically motivated value heuristics (acquisition functions) to guide its search process.
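
A concrete instance of such a value heuristic is Expected Improvement, which has a closed form under a Gaussian posterior; a minimal sketch for minimization with incumbent value f_best:

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    """Closed-form EI for minimization under a Gaussian posterior (mu, sigma)."""
    sigma = np.maximum(sigma, 1e-12)            # guard against zero variance
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# The inner problem this paper studies is maximizing such a function over
# the input space; a crude random-candidate version for illustration:
mu = np.array([0.20, 0.50, 0.10])
sigma = np.array([0.30, 0.10, 0.05])
best_candidate = int(np.argmax(expected_improvement(mu, sigma, f_best=0.15)))
```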

Global Optimization

Efficient Multi-objective Neural Architecture Search via Lamarckian Evolution

no code implementations ICLR 2019 Thomas Elsken, Jan Hendrik Metzen, Frank Hutter

Neural Architecture Search aims at automatically finding neural architectures that are competitive with architectures designed by human experts.

Neural Architecture Search

Back to Basics: Benchmarking Canonical Evolution Strategies for Playing Atari

1 code implementation 24 Feb 2018 Patryk Chrabaszcz, Ilya Loshchilov, Frank Hutter

Evolution Strategies (ES) have recently been demonstrated to be a viable alternative to reinforcement learning (RL) algorithms on a set of challenging deep RL problems, including Atari games and MuJoCo humanoid locomotion benchmarks.

Atari Games

Uncertainty Estimates and Multi-Hypotheses Networks for Optical Flow

1 code implementation ECCV 2018 Eddy Ilg, Özgün Çiçek, Silvio Galesso, Aaron Klein, Osama Makansi, Frank Hutter, Thomas Brox

Optical flow estimation can be formulated as an end-to-end supervised learning problem, which yields estimates with a superior accuracy-runtime tradeoff compared to alternative methodology.

Optical Flow Estimation

Practical Transfer Learning for Bayesian Optimization

1 code implementation 6 Feb 2018 Matthias Feurer, Benjamin Letham, Frank Hutter, Eytan Bakshy

Bayesian optimization has become a standard technique for hyperparameter optimization of machine learning algorithms.

Gaussian Processes Hyperparameter Optimization +2

Fixing Weight Decay Regularization in Adam

no code implementations ICLR 2018 Ilya Loshchilov, Frank Hutter

We note that common implementations of adaptive gradient algorithms, such as Adam, limit the potential benefit of weight decay regularization, because the weights do not decay multiplicatively (as would be expected for standard weight decay) but by an additive constant factor.
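
The fix this line of work arrives at (published as AdamW) is to take the decay term out of the adaptive rescaling; a minimal single-tensor sketch of a decoupled update step, assuming numpy arrays for parameters and gradients:

```python
import numpy as np

def adamw_step(theta, grad, state, lr=1e-3, beta1=0.9, beta2=0.999,
               eps=1e-8, weight_decay=1e-2):
    """One Adam step with decoupled (multiplicative) weight decay."""
    state["t"] += 1
    state["m"] = beta1 * state["m"] + (1 - beta1) * grad
    state["v"] = beta2 * state["v"] + (1 - beta2) * grad ** 2
    m_hat = state["m"] / (1 - beta1 ** state["t"])
    v_hat = state["v"] / (1 - beta2 ** state["t"])
    # Decay is added outside the adaptive preconditioner, so it acts
    # multiplicatively on the weights instead of entering the gradient.
    return theta - lr * (m_hat / (np.sqrt(v_hat) + eps) + weight_decay * theta)

theta = np.ones(4)
state = {"t": 0, "m": np.zeros(4), "v": np.zeros(4)}
theta = adamw_step(theta, grad=0.1 * theta, state=state)
```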

Image Classification

The reparameterization trick for acquisition functions

no code implementations1 Dec 2017 James T. Wilson, Riccardo Moriconi, Frank Hutter, Marc Peter Deisenroth

Bayesian optimization is a sample-efficient approach to solving global optimization problems.

Global Optimization

Decoupled Weight Decay Regularization

20 code implementations ICLR 2019 Ilya Loshchilov, Frank Hutter

$L_2$ regularization and weight decay regularization are equivalent for standard stochastic gradient descent (when rescaled by the learning rate), but as we demonstrate, this is not the case for adaptive gradient algorithms such as Adam.
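
The SGD equivalence the abstract refers to is a one-line calculation: running SGD on the $L_2$-regularized loss gives

$$\theta_{t+1} \;=\; \theta_t - \eta\bigl(\nabla f_t(\theta_t) + \lambda\,\theta_t\bigr) \;=\; (1 - \eta\lambda)\,\theta_t - \eta\,\nabla f_t(\theta_t),$$

i.e., a multiplicative weight decay with factor $(1-\eta\lambda)$, coupled to the learning rate (hence "when rescaled by the learning rate"). Under Adam the $\lambda\theta_t$ term is additionally rescaled by the adaptive preconditioner, so the equivalence breaks down.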

Image Classification

Neural Networks for Predicting Algorithm Runtime Distributions

no code implementations 22 Sep 2017 Katharina Eggensperger, Marius Lindauer, Frank Hutter

Many state-of-the-art algorithms for solving hard combinatorial problems in artificial intelligence (AI) include elements of stochasticity that lead to high variations in runtime, even for a fixed problem instance.

Warmstarting of Model-based Algorithm Configuration

no code implementations 14 Sep 2017 Marius Lindauer, Frank Hutter

The performance of many hard combinatorial problem solvers depends strongly on their parameter settings, and since manual parameter tuning is both tedious and suboptimal, the AI community has recently developed several algorithm configuration (AC) methods to automatically address this problem.

Deep learning with convolutional neural networks for decoding and visualization of EEG pathology

2 code implementations 26 Aug 2017 Robin Tibor Schirrmeister, Lukas Gemein, Katharina Eggensperger, Frank Hutter, Tonio Ball

We apply convolutional neural networks (ConvNets) to the task of distinguishing pathological from normal EEG recordings in the Temple University Hospital EEG Abnormal Corpus.

EEG

A Downsampled Variant of ImageNet as an Alternative to the CIFAR datasets

3 code implementations 27 Jul 2017 Patryk Chrabaszcz, Ilya Loshchilov, Frank Hutter

The original ImageNet dataset is a popular large-scale benchmark for training Deep Neural Networks.

Neural Architecture Search

Pitfalls and Best Practices in Algorithm Configuration

2 code implementations 17 May 2017 Katharina Eggensperger, Marius Lindauer, Frank Hutter

Good parameter settings are crucial to achieve high performance in many areas of artificial intelligence (AI), such as propositional satisfiability solving, AI planning, scheduling, and machine learning (in particular deep learning).

Experimental Design

Efficient Benchmarking of Algorithm Configuration Procedures via Model-Based Surrogates

no code implementations 30 Mar 2017 Katharina Eggensperger, Marius Lindauer, Holger H. Hoos, Frank Hutter, Kevin Leyton-Brown

In our experiments, we construct and evaluate surrogate benchmarks for hyperparameter optimization as well as for AC problems that involve performance optimization of solvers for hard combinatorial problems, drawing training data from the runs of existing AC procedures.

Hyperparameter Optimization

Deep learning with convolutional neural networks for EEG decoding and visualization

5 code implementations 15 Mar 2017 Robin Tibor Schirrmeister, Jost Tobias Springenberg, Lukas Dominique Josef Fiederer, Martin Glasstetter, Katharina Eggensperger, Michael Tangermann, Frank Hutter, Wolfram Burgard, Tonio Ball

PLEASE READ AND CITE THE REVISED VERSION at Human Brain Mapping: http://onlinelibrary.wiley.com/doi/10.1002/hbm.23730/full Code available here: https://github.com/robintibor/braindecode

EEG Eeg Decoding

RoBO: A Flexible and Robust Bayesian Optimization Framework in Python

1 code implementation NIPS 2017 Aaron Klein, Stefan Falkner, Numair Mansur, Frank Hutter

Bayesian optimization is a powerful approach for the global derivative-free optimization of non-convex expensive functions.

Hyperparameter Optimization

Asynchronous Stochastic Gradient MCMC with Elastic Coupling

no code implementations 2 Dec 2016 Jost Tobias Springenberg, Aaron Klein, Stefan Falkner, Frank Hutter

We consider parallel asynchronous Markov Chain Monte Carlo (MCMC) sampling for problems where we can leverage (stochastic) gradients to define continuous dynamics which explore the target distribution.

Bayesian Optimization with Robust Bayesian Neural Networks

1 code implementation NeurIPS 2016 Jost Tobias Springenberg, Aaron Klein, Stefan Falkner, Frank Hutter

Bayesian optimization is a prominent method for optimizing expensive-to-evaluate black-box functions, widely applied to tuning the hyperparameters of machine learning algorithms.

Hyperparameter Optimization

SGDR: Stochastic Gradient Descent with Warm Restarts

16 code implementations 13 Aug 2016 Ilya Loshchilov, Frank Hutter

Partial warm restarts are also gaining popularity in gradient-based optimization to improve the rate of convergence in accelerated gradient schemes to deal with ill-conditioned functions.
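
The schedule SGDR proposes anneals the learning rate with a cosine inside each restart period:

$$\eta_t \;=\; \eta_{\min} + \tfrac{1}{2}\,(\eta_{\max} - \eta_{\min})\Bigl(1 + \cos\bigl(\pi\,T_{cur}/T_i\bigr)\Bigr),$$

where $T_{cur}$ counts epochs since the last warm restart and $T_i$ is the length of the current period, typically grown by a factor $T_{mult}$ after each restart.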

EEG Stochastic Optimization

Fast Bayesian Optimization of Machine Learning Hyperparameters on Large Datasets

1 code implementation 23 May 2016 Aaron Klein, Stefan Falkner, Simon Bartels, Philipp Hennig, Frank Hutter

Bayesian optimization has become a successful tool for hyperparameter optimization of machine learning algorithms, such as support vector machines or deep neural networks.

Hyperparameter Optimization

CMA-ES for Hyperparameter Optimization of Deep Neural Networks

no code implementations 25 Apr 2016 Ilya Loshchilov, Frank Hutter

Hyperparameters of deep neural networks are often optimized by grid search, random search or Bayesian optimization.

Hyperparameter Optimization

Efficient and Robust Automated Machine Learning

2 code implementations NeurIPS 2015 Matthias Feurer, Aaron Klein, Katharina Eggensperger, Jost Springenberg, Manuel Blum, Frank Hutter

The success of machine learning in a broad range of applications has led to an ever-growing demand for machine learning systems that can be used off the shelf by non-experts.

Hyperparameter Optimization

Online Batch Selection for Faster Training of Neural Networks

1 code implementation 19 Nov 2015 Ilya Loshchilov, Frank Hutter

We investigate online batch selection strategies for two state-of-the-art methods of stochastic gradient-based optimization, AdaDelta and Adam.
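
One simple strategy in this spirit, shown as an illustrative sketch (the exponential-rank parameterization below is an assumption, not necessarily the paper's exact scheme), ranks examples by their most recently observed loss and samples batches with probabilities that decay with rank:

```python
import numpy as np

def sample_batch(last_losses, batch_size, decay=100.0,
                 rng=np.random.default_rng(0)):
    """Prefer examples whose latest loss was high; `decay` is the ratio of
    selection probability between the top- and bottom-ranked example."""
    n = len(last_losses)
    order = np.argsort(-np.asarray(last_losses))         # hardest examples first
    weights = np.exp(-np.log(decay) * np.arange(n) / n)  # exponential in rank
    ranks = rng.choice(n, size=batch_size, replace=False, p=weights / weights.sum())
    return order[ranks]

batch_idx = sample_batch(np.random.rand(1000), batch_size=64)
```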

ASlib: A Benchmark Library for Algorithm Selection

2 code implementations 8 Jun 2015 Bernd Bischl, Pascal Kerschke, Lars Kotthoff, Marius Lindauer, Yuri Malitsky, Alexandre Frechette, Holger Hoos, Frank Hutter, Kevin Leyton-Brown, Kevin Tierney, Joaquin Vanschoren

To address this problem, we introduce a standardized format for representing algorithm selection scenarios and a repository that contains a growing number of data sets from the literature.

The Configurable SAT Solver Challenge (CSSC)

no code implementations 5 May 2015 Frank Hutter, Marius Lindauer, Adrian Balint, Sam Bayless, Holger Hoos, Kevin Leyton-Brown

It is well known that different solution strategies work well for different types of instances of hard combinatorial problems.

Raiders of the Lost Architecture: Kernels for Bayesian Optimization in Conditional Parameter Spaces

no code implementations 14 Sep 2014 Kevin Swersky, David Duvenaud, Jasper Snoek, Frank Hutter, Michael A. Osborne

In practical Bayesian optimization, we must often search over structures with differing numbers of parameters.

ParamILS: An Automatic Algorithm Configuration Framework

no code implementations 15 Jan 2014 Frank Hutter, Thomas Stuetzle, Kevin Leyton-Brown, Holger H. Hoos

The identification of performance-optimizing parameter settings is an important part of the development and application of algorithms.

Hyperparameter Optimization

A Kernel for Hierarchical Parameter Spaces

no code implementations 21 Oct 2013 Frank Hutter, Michael A. Osborne

We define a family of kernels for mixed continuous/discrete hierarchical parameter spaces and show that they are positive definite.

Bayesian Optimization With Censored Response Data

no code implementations 7 Oct 2013 Frank Hutter, Holger Hoos, Kevin Leyton-Brown

Bayesian optimization (BO) aims to minimize a given blackbox function using a model that is updated whenever new evidence about the function becomes available.
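
The update loop described here is the standard BO skeleton; a minimal sketch with a GP surrogate follows (the censoring-aware likelihood, which is this paper's actual contribution, is only marked, not implemented):

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def bo_minimize(f, bounds, n_init=5, n_iter=20, rng=np.random.default_rng(0)):
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(n_init, 1))
    y = np.array([f(x[0]) for x in X])  # NOTE: right-censored runs would need a
                                        # censoring-aware model here instead.
    for _ in range(n_iter):
        gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
        cand = rng.uniform(lo, hi, size=(1000, 1))
        mu, sigma = gp.predict(cand, return_std=True)
        z = (y.min() - mu) / np.maximum(sigma, 1e-12)
        ei = (y.min() - mu) * norm.cdf(z) + sigma * norm.pdf(z)  # expected improvement
        x_next = cand[np.argmax(ei)]
        X, y = np.vstack([X, x_next]), np.append(y, f(x_next[0]))
    return X[np.argmin(y)], y.min()

x_best, y_best = bo_minimize(lambda x: (x - 0.3) ** 2, bounds=(0.0, 1.0))
```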

Bayesian Optimization in a Billion Dimensions via Random Embeddings

1 code implementation 9 Jan 2013 Ziyu Wang, Frank Hutter, Masrour Zoghi, David Matheson, Nando de Freitas

Bayesian optimization techniques have been successfully applied to robotics, planning, sensor placement, recommendation, advertising, intelligent user interfaces and automatic algorithm configuration.

Algorithm Runtime Prediction: Methods & Evaluation

no code implementations 5 Nov 2012 Frank Hutter, Lin Xu, Holger H. Hoos, Kevin Leyton-Brown

We also comprehensively describe new and existing features for predicting algorithm runtime for propositional satisfiability (SAT), travelling salesperson (TSP) and mixed integer programming (MIP) problems.

Auto-WEKA: Combined Selection and Hyperparameter Optimization of Classification Algorithms

1 code implementation 18 Aug 2012 Chris Thornton, Frank Hutter, Holger H. Hoos, Kevin Leyton-Brown

Many different machine learning algorithms exist; taking into account each algorithm's hyperparameters, there is a staggeringly large number of possible alternatives overall.

Classification Feature Selection +2

Sequential Model-Based Optimization for General Algorithm Configuration

1 code implementation LION 2011 Frank Hutter, Holger H. Hoos, Kevin Leyton-Brown

State-of-the-art algorithms for hard computational problems often expose many parameters that can be modified to improve empirical performance.

Hyperparameter Optimization
