Search Results for author: Aaron Klein

Found 21 papers, 15 papers with code

Online Optimization of Stimulation Speed in an Auditory Brain-Computer Interface under Time Constraints

no code implementations26 Aug 2021 Jan Sosulski, David Hübner, Aaron Klein, Michael Tangermann

We show that for 8 out of 13 subjects, the proposed Bayesian optimization approach succeeded in selecting the individually optimal SOA out of multiple evaluated SOA values.

Dynamic Pruning of a Neural Network via Gradient Signal-to-Noise Ratio

no code implementations ICML Workshop AutoML 2021 Julien Niklas Siems, Aaron Klein, Cedric Archambeau, Maren Mahsereci

Dynamic sparsity pruning removes this limitation and allows the structure of the sparse neural network to be adapted during training.

Overfitting in Bayesian Optimization: an empirical study and early-stopping solution

no code implementations16 Apr 2021 Anastasia Makarova, Huibin Shen, Valerio Perrone, Aaron Klein, Jean Baptiste Faddoul, Andreas Krause, Matthias Seeger, Cedric Archambeau

In practice, however, an improvement of the validation metric may not translate into better predictive performance on a test set, especially when tuning models trained on small datasets.

Hyperparameter Optimization

Hyperparameter Transfer Learning with Adaptive Complexity

no code implementations25 Feb 2021 Samuel Horváth, Aaron Klein, Peter Richtárik, Cédric Archambeau

Bayesian optimization (BO) is a sample efficient approach to automatically tune the hyperparameters of machine learning models.

Decision Making, Transfer Learning

BORE: Bayesian Optimization by Density-Ratio Estimation

1 code implementation17 Feb 2021 Louis C. Tiao, Aaron Klein, Matthias Seeger, Edwin V. Bonilla, Cedric Archambeau, Fabio Ramos

Bayesian optimization (BO) is among the most effective and widely-used blackbox optimization methods.
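The core idea behind density-ratio BO can be illustrated with a minimal sketch. Instead of fitting a regression surrogate, the best gamma-fraction of observations is labeled "good" and the rest "bad"; a probabilistic classifier is trained on these labels, and the next point is chosen to maximize the predicted class probability, which is monotone in the density ratio p(x|good)/p(x|bad). The toy objective, the kernel-density classifier, and all names below are illustrative assumptions, not the authors' implementation:

```python
import math
import random

def objective(x):
    # Toy 1-D function to minimize; the optimum is at x = 0.3.
    return (x - 0.3) ** 2

def p_good(x, good, bad, bandwidth=0.1):
    # Kernel-density estimate of p(good | x) with equal class priors.
    def kde(pts):
        if not pts:
            return 0.0
        return sum(math.exp(-((x - p) / bandwidth) ** 2) for p in pts) / len(pts)
    g, b = kde(good), kde(bad)
    return g / (g + b + 1e-12)

def bore_step(xs, ys, gamma=0.25, n_candidates=64):
    # Split observations at the gamma-quantile of the objective values.
    order = sorted(range(len(ys)), key=lambda i: ys[i])
    n_good = max(1, int(gamma * len(ys)))
    good = [xs[i] for i in order[:n_good]]
    bad = [xs[i] for i in order[n_good:]]
    # Propose the random candidate the classifier rates most likely "good".
    cands = [random.random() for _ in range(n_candidates)]
    return max(cands, key=lambda c: p_good(c, good, bad))

random.seed(0)
xs = [random.random() for _ in range(5)]
ys = [objective(x) for x in xs]
for _ in range(20):
    x_next = bore_step(xs, ys)
    xs.append(x_next)
    ys.append(objective(x_next))
print(min(ys))  # best value found; approaches 0 as samples concentrate near 0.3
```

Any classifier that outputs calibrated probabilities could replace the kernel-density estimate here; that flexibility is the main appeal of the density-ratio formulation.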

Density Ratio Estimation

Model-based Asynchronous Hyperparameter and Neural Architecture Search

2 code implementations24 Mar 2020 Aaron Klein, Louis C. Tiao, Thibaut Lienart, Cedric Archambeau, Matthias Seeger

We introduce a model-based asynchronous multi-fidelity method for hyperparameter and neural architecture search that combines the strengths of asynchronous Hyperband and Gaussian process-based Bayesian optimization.

Hyperparameter Optimization, Neural Architecture Search

Probabilistic Rollouts for Learning Curve Extrapolation Across Hyperparameter Settings

1 code implementation10 Oct 2019 Matilde Gargiani, Aaron Klein, Stefan Falkner, Frank Hutter

We propose probabilistic models that can extrapolate learning curves of iterative machine learning algorithms, such as stochastic gradient descent for training deep networks, based on training data with variable-length learning curves.
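A much simpler, non-probabilistic cousin of this idea is fitting a parametric curve to the observed prefix and extrapolating it. The sketch below fits a power law y(t) = a * t**(-b) by least squares in log-log space; the model family and the synthetic curve are illustrative assumptions, not the rollout model from the paper:

```python
import math

def fit_power_law(ts, ys):
    # Linear regression of log(y) on log(t): log y = log a - b * log t.
    lx = [math.log(t) for t in ts]
    ly = [math.log(y) for y in ys]
    n = len(ts)
    mx, my = sum(lx) / n, sum(ly) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(lx, ly)) / sum(
        (x - mx) ** 2 for x in lx
    )
    a = math.exp(my - slope * mx)
    return a, -slope  # slope is -b

# Synthetic loss curve observed for the first 10 epochs.
ts = list(range(1, 11))
ys = [2.0 * t ** -0.5 for t in ts]
a, b = fit_power_law(ts, ys)
pred_100 = a * 100 ** -b  # extrapolated loss at epoch 100
print(round(a, 3), round(b, 3), round(pred_100, 3))  # → 2.0 0.5 0.2
```

In the paper's setting, extrapolations like this (made probabilistic, and conditioned on the hyperparameters) let an optimizer terminate unpromising training runs early.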

Hyperparameter Optimization

Meta-Surrogate Benchmarking for Hyperparameter Optimization

1 code implementation NeurIPS 2019 Aaron Klein, Zhenwen Dai, Frank Hutter, Neil Lawrence, Javier Gonzalez

Despite the recent progress in hyperparameter optimization (HPO), the available benchmarks that resemble real-world scenarios consist of only a few, very large problem instances that are expensive to solve.

Hyperparameter Optimization

Towards Automatically-Tuned Deep Neural Networks

2 code implementations18 May 2019 Hector Mendoza, Aaron Klein, Matthias Feurer, Jost Tobias Springenberg, Matthias Urban, Michael Burkart, Maximilian Dippel, Marius Lindauer, Frank Hutter

Recent advances in AutoML have led to automated tools that can compete with machine learning experts on supervised learning tasks.

Tabular Benchmarks for Joint Architecture and Hyperparameter Optimization

1 code implementation13 May 2019 Aaron Klein, Frank Hutter

Due to their high computational demands, executing a rigorous comparison between hyperparameter optimization (HPO) methods is often cumbersome.

Hyperparameter Optimization

NAS-Bench-101: Towards Reproducible Neural Architecture Search

4 code implementations25 Feb 2019 Chris Ying, Aaron Klein, Esteban Real, Eric Christiansen, Kevin Murphy, Frank Hutter

Recent advances in neural architecture search (NAS) demand tremendous computational resources, which makes it difficult to reproduce experiments and imposes a barrier to entry for researchers without access to large-scale computation.

Neural Architecture Search

Towards Automated Deep Learning: Efficient Joint Neural Architecture and Hyperparameter Search

3 code implementations18 Jul 2018 Arber Zela, Aaron Klein, Stefan Falkner, Frank Hutter

While existing work on neural architecture search (NAS) tunes hyperparameters in a separate post-processing step, we demonstrate that architectural choices and other hyperparameter settings interact in a way that can render this separation suboptimal.

Neural Architecture Search

BOHB: Robust and Efficient Hyperparameter Optimization at Scale

3 code implementations ICML 2018 Stefan Falkner, Aaron Klein, Frank Hutter

Modern deep learning methods are very sensitive to many hyperparameters, and, due to the long training times of state-of-the-art models, vanilla Bayesian hyperparameter optimization is typically computationally infeasible.
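The bandit half of BOHB is Hyperband, which is built on the successive-halving subroutine. The sketch below illustrates that subroutine only; the toy train function and the eta=3 schedule are assumptions, not the authors' code. Each round keeps the best third of the surviving configurations and triples their training budget:

```python
import random

def train(config, budget):
    # Stand-in for a real training run: the loss decreases with budget,
    # offset by how good the hyperparameter configuration is.
    return config + 1.0 / budget

def successive_halving(configs, min_budget=1, eta=3, rounds=3):
    budget = min_budget
    for _ in range(rounds):
        # Evaluate every surviving configuration at the current budget.
        losses = sorted((train(c, budget), c) for c in configs)
        keep = max(1, len(configs) // eta)
        configs = [c for _, c in losses[:keep]]  # keep the best 1/eta
        budget *= eta  # give survivors eta times more budget
    return configs[0]

random.seed(1)
configs = [random.random() for _ in range(27)]
best = successive_halving(configs)
# With this toy loss, rankings are preserved at every budget, so the
# survivor is exactly the best configuration in the initial pool.
print(best == min(configs))  # → True
```

BOHB's contribution is to replace the random sampling of `configs` with proposals from a TPE-style density model fitted to past evaluations, while keeping this budget schedule.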

Hyperparameter Optimization

Uncertainty Estimates and Multi-Hypotheses Networks for Optical Flow

1 code implementation ECCV 2018 Eddy Ilg, Özgün Çiçek, Silvio Galesso, Aaron Klein, Osama Makansi, Frank Hutter, Thomas Brox

Optical flow estimation can be formulated as an end-to-end supervised learning problem, which yields estimates with a superior accuracy-runtime tradeoff compared to alternative methodologies.

Optical Flow Estimation

RoBO: A Flexible and Robust Bayesian Optimization Framework in Python

1 code implementation NIPS 2017 Aaron Klein, Stefan Falkner, Numair Mansur, Frank Hutter

Bayesian optimization is a powerful approach for the global derivative-free optimization of non-convex expensive functions.

Hyperparameter Optimization

Asynchronous Stochastic Gradient MCMC with Elastic Coupling

no code implementations2 Dec 2016 Jost Tobias Springenberg, Aaron Klein, Stefan Falkner, Frank Hutter

We consider parallel asynchronous Markov Chain Monte Carlo (MCMC) sampling for problems where we can leverage (stochastic) gradients to define continuous dynamics which explore the target distribution.

Bayesian Optimization with Robust Bayesian Neural Networks

1 code implementation NeurIPS 2016 Jost Tobias Springenberg, Aaron Klein, Stefan Falkner, Frank Hutter

Bayesian optimization is a prominent method for optimizing expensive-to-evaluate black-box functions and is widely applied to tuning the hyperparameters of machine learning algorithms.

Hyperparameter Optimization

Fast Bayesian Optimization of Machine Learning Hyperparameters on Large Datasets

1 code implementation23 May 2016 Aaron Klein, Stefan Falkner, Simon Bartels, Philipp Hennig, Frank Hutter

Bayesian optimization has become a successful tool for hyperparameter optimization of machine learning algorithms, such as support vector machines or deep neural networks.

Hyperparameter Optimization

Efficient and Robust Automated Machine Learning

2 code implementations NeurIPS 2015 Matthias Feurer, Aaron Klein, Katharina Eggensperger, Jost Springenberg, Manuel Blum, Frank Hutter

The success of machine learning in a broad range of applications has led to an ever-growing demand for machine learning systems that can be used off the shelf by non-experts.

Hyperparameter Optimization