1 code implementation • 29 Jun 2023 • Sigrid Passano Hellan, Huibin Shen, François-Xavier Aubet, David Salinas, Aaron Klein
We introduce ordered transfer hyperparameter optimisation (OTHPO), a version of transfer learning for hyperparameter optimisation (HPO) where the tasks follow a sequential order.
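Since the tasks arrive in a fixed order, a natural baseline in this setting is to warm-start the search on task t with the incumbents from task t-1. The sketch below illustrates that idea with hypothetical placeholders (sample_config and evaluate stand in for a real search space and training run); it is not the paper's method.

```python
import random

# Hypothetical stand-ins for a real search space and training run.
def sample_config():
    return {"lr": 10 ** random.uniform(-4, -1)}

def evaluate(task_id, config):
    # The optimum drifts slowly as the task sequence progresses.
    return -(config["lr"] - 0.01 * (task_id + 1)) ** 2

n_tasks, n_trials, incumbents = 3, 20, []
for t in range(n_tasks):
    # Warm start: re-evaluate the previous task's best configs first.
    candidates = incumbents + [sample_config()
                               for _ in range(n_trials - len(incumbents))]
    ranked = sorted(candidates, key=lambda c: evaluate(t, c), reverse=True)
    incumbents = ranked[:3]  # carry the top 3 over to the next task
    print(f"task {t}: best lr = {incumbents[0]['lr']:.4f}")
```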
1 code implementation • 5 May 2023 • David Salinas, Jacek Golebiowski, Aaron Klein, Matthias Seeger, Cedric Archambeau
Many state-of-the-art hyperparameter optimization (HPO) algorithms rely on model-based optimizers that learn surrogate models of the target function to guide the search.
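To make the idea concrete, here is a minimal sketch of a model-based optimization loop: fit a surrogate to the evaluations so far, score candidates with a confidence-bound acquisition, and evaluate the most promising one. The objective is a hypothetical stand-in, and a random-forest surrogate is used for simplicity (many methods use Gaussian processes instead).

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def objective(x):  # hypothetical expensive black-box function
    return np.sin(3 * x) + 0.1 * x ** 2

rng = np.random.default_rng(0)
X = list(rng.uniform(-3, 3, size=3))   # initial random designs
y = [objective(x) for x in X]

for _ in range(20):
    surrogate = RandomForestRegressor(n_estimators=50).fit(
        np.array(X).reshape(-1, 1), y)
    # Score a dense grid of candidates with a lower-confidence bound
    # (we minimise): mean minus an exploration bonus from tree variance.
    cand = np.linspace(-3, 3, 500).reshape(-1, 1)
    preds = np.stack([t.predict(cand) for t in surrogate.estimators_])
    lcb = preds.mean(axis=0) - preds.std(axis=0)
    x_next = float(cand[np.argmin(lcb)])
    X.append(x_next)
    y.append(objective(x_next))

print("best x:", X[int(np.argmin(y))], "best y:", min(y))
```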
2 code implementations • 14 Sep 2021 • Katharina Eggensperger, Philipp Müller, Neeratyoy Mallik, Matthias Feurer, René Sass, Aaron Klein, Noor Awad, Marius Lindauer, Frank Hutter
To achieve peak predictive performance, hyperparameter optimization (HPO) is a crucial component of machine learning and its applications.
no code implementations • 26 Aug 2021 • Jan Sosulski, David Hübner, Aaron Klein, Michael Tangermann
We show that for 8 out of 13 subjects, the proposed approach using Bayesian optimization succeeded in selecting the individually optimal SOA from the multiple SOA values evaluated.
no code implementations • ICML Workshop AutoML 2021 • Julien Niklas Siems, Aaron Klein, Cedric Archambeau, Maren Mahsereci
Dynamic sparsity pruning removes this limitation and allows the structure of the sparse neural network to be adapted during training.
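As a toy illustration of the adaptation step (not the paper's exact scheme), a prune-and-regrow update drops the smallest-magnitude active weights and activates an equal number of inactive connections, keeping the sparsity level fixed while the structure changes:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))
mask = rng.random((8, 8)) < 0.2          # ~20% of weights active

def prune_and_regrow(W, mask, k=3):
    # Drop the k active weights with the smallest magnitude ...
    active = np.argwhere(mask)
    mags = np.abs(W[mask])
    for i, j in active[np.argsort(mags)[:k]]:
        mask[i, j] = False
    # ... and regrow k random inactive connections (random regrowth;
    # gradient-based regrowth is a common alternative).
    inactive = np.argwhere(~mask)
    for i, j in inactive[rng.choice(len(inactive), size=k, replace=False)]:
        mask[i, j] = True
        W[i, j] = 0.0                     # regrown weights start at zero
    return W, mask

W, mask = prune_and_regrow(W, mask)
print("active weights:", mask.sum())
```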
1 code implementation • 16 Apr 2021 • Anastasia Makarova, Huibin Shen, Valerio Perrone, Aaron Klein, Jean Baptiste Faddoul, Andreas Krause, Matthias Seeger, Cedric Archambeau
Across an extensive range of real-world HPO problems and baselines, we show that our termination criterion achieves a better trade-off between the test performance and optimization time.
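The paper's criterion bounds the remaining regret against the statistical error of the evaluations; a heavily simplified sketch of the same idea is to stop once the largest improvement the model still deems plausible falls below the noise level of the objective (all quantities below are hypothetical placeholders):

```python
def should_terminate(best_observed, acq_upper_bound, noise_std, kappa=1.0):
    """Stop when the largest improvement the surrogate still deems
    plausible is smaller than the evaluation noise (simplified sketch)."""
    plausible_gain = acq_upper_bound - best_observed
    return plausible_gain < kappa * noise_std

# e.g. cross-validation noise of 0.01, model sees at most 0.004 left:
print(should_terminate(best_observed=0.91, acq_upper_bound=0.914,
                       noise_std=0.01))  # -> True, safe to stop
```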
no code implementations • 25 Feb 2021 • Samuel Horváth, Aaron Klein, Peter Richtárik, Cédric Archambeau
Bayesian optimization (BO) is a sample efficient approach to automatically tune the hyperparameters of machine learning models.
1 code implementation • 17 Feb 2021 • Louis C. Tiao, Aaron Klein, Matthias Seeger, Edwin V. Bonilla, Cedric Archambeau, Fabio Ramos
Bayesian optimization (BO) is among the most effective and widely used black-box optimization methods.
3 code implementations • 24 Mar 2020 • Aaron Klein, Louis C. Tiao, Thibaut Lienart, Cedric Archambeau, Matthias Seeger
We introduce a model-based asynchronous multi-fidelity method for hyperparameter and neural architecture search that combines the strengths of asynchronous Hyperband and Gaussian process-based Bayesian optimization.
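The asynchronous-Hyperband half of such a method rests on a promotion rule like the ASHA-style sketch below: when a worker frees up, promote a configuration to the next fidelity ("rung") if it ranks in the top 1/eta at its current rung, otherwise start a new one. In the full method, new configurations are drawn from a Gaussian-process model rather than sampled at random.

```python
def promotable(rungs, rung, eta=3):
    """Return a config at `rung` that ranks in the top 1/eta and has not
    been promoted yet, or None. `rungs[rung]` maps config -> score
    (higher is better); `rungs[rung + 1]` holds already-promoted configs."""
    results = sorted(rungs[rung].items(), key=lambda kv: -kv[1])
    top = results[: max(1, len(results) // eta)]
    for cfg, _ in top:
        if cfg not in rungs.get(rung + 1, {}):
            return cfg
    return None

rungs = {0: {"a": 0.71, "b": 0.64, "c": 0.58, "d": 0.69}, 1: {}}
print(promotable(rungs, rung=0))  # -> "a" (top-1 of 4 with eta=3)
```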
1 code implementation • 10 Oct 2019 • Matilde Gargiani, Aaron Klein, Stefan Falkner, Frank Hutter
We propose probabilistic models that can extrapolate learning curves of iterative machine learning algorithms, such as stochastic gradient descent for training deep networks, based on training data with variable-length learning curves.
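A bare-bones version of the mechanics (not the paper's probabilistic model): fit a parametric saturating curve to the observed prefix of a learning curve and extrapolate it to the full budget. The data below are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(t, a, b, c):
    # Saturating accuracy curve: approaches `a` as t grows.
    return a - b * t ** (-c)

# Synthetic, partially observed validation-accuracy curve (20 epochs).
t_obs = np.arange(1, 21)
y_obs = (0.9 - 0.4 * t_obs ** (-0.7)
         + np.random.default_rng(0).normal(0, 0.005, 20))

params, _ = curve_fit(power_law, t_obs, y_obs, p0=(0.9, 0.5, 0.5),
                      maxfev=5000)
print("predicted accuracy at epoch 100:", power_law(100, *params))
```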
1 code implementation • NeurIPS 2019 • Aaron Klein, Zhenwen Dai, Frank Hutter, Neil Lawrence, Javier Gonzalez
Despite the recent progress in hyperparameter optimization (HPO), available benchmarks that resemble real-world scenarios consist of only a few very large problem instances that are expensive to solve.
2 code implementations • 18 May 2019 • Hector Mendoza, Aaron Klein, Matthias Feurer, Jost Tobias Springenberg, Matthias Urban, Michael Burkart, Maximilian Dippel, Marius Lindauer, Frank Hutter
Recent advances in AutoML have led to automated tools that can compete with machine learning experts on supervised learning tasks.
1 code implementation • 13 May 2019 • Aaron Klein, Frank Hutter
Due to their high computational demands, executing a rigorous comparison between hyperparameter optimization (HPO) methods is often cumbersome.
4 code implementations • 25 Feb 2019 • Chris Ying, Aaron Klein, Esteban Real, Eric Christiansen, Kevin Murphy, Frank Hutter
Recent advances in neural architecture search (NAS) demand tremendous computational resources, which makes it difficult to reproduce experiments and imposes a barrier to entry for researchers without access to large-scale computation.
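The resulting benchmark lets a search algorithm look up precomputed architecture statistics in milliseconds instead of training from scratch. Roughly, a query against the open-source nasbench package looks like the sketch below; the file name and result fields are recalled from that repository and may differ in detail.

```python
from nasbench import api

# Cells are DAGs: an upper-triangular adjacency matrix plus per-node ops.
matrix = [[0, 1, 1, 0, 0, 0, 1],
          [0, 0, 0, 1, 0, 0, 0],
          [0, 0, 0, 0, 1, 0, 0],
          [0, 0, 0, 0, 0, 1, 0],
          [0, 0, 0, 0, 0, 1, 0],
          [0, 0, 0, 0, 0, 0, 1],
          [0, 0, 0, 0, 0, 0, 0]]
ops = ['input', 'conv3x3-bn-relu', 'conv1x1-bn-relu', 'maxpool3x3',
       'conv3x3-bn-relu', 'conv3x3-bn-relu', 'output']

nasbench = api.NASBench('nasbench_only108.tfrecord')  # precomputed table
spec = api.ModelSpec(matrix=matrix, ops=ops)
stats = nasbench.query(spec)  # field names recalled from the repo
print(stats['validation_accuracy'], stats['training_time'])
```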
3 code implementations • 18 Jul 2018 • Arber Zela, Aaron Klein, Stefan Falkner, Frank Hutter
While existing work on neural architecture search (NAS) tunes hyperparameters in a separate post-processing step, we demonstrate that architectural choices and other hyperparameter settings interact in a way that can render this separation suboptimal.
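Concretely, treating the two jointly means placing architectural choices and training hyperparameters in a single search space explored by one optimizer, rather than fixing the architecture before tuning. A hypothetical joint space, as a minimal sketch:

```python
import random

# Hypothetical joint search space: architecture and training
# hyperparameters are sampled (and optimized) together.
joint_space = {
    "num_layers":    lambda: random.randint(4, 16),          # architectural
    "width":         lambda: random.choice([64, 128, 256]),  # architectural
    "learning_rate": lambda: 10 ** random.uniform(-4, -1),   # training
    "weight_decay":  lambda: 10 ** random.uniform(-6, -2),   # training
}

config = {name: sample() for name, sample in joint_space.items()}
print(config)
```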
4 code implementations • ICML 2018 • Stefan Falkner, Aaron Klein, Frank Hutter
Modern deep learning methods are very sensitive to many hyperparameters, and, due to the long training times of state-of-the-art models, vanilla Bayesian hyperparameter optimization is typically computationally infeasible.
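A common remedy is to combine cheap low-fidelity evaluations with a model, as in this paper's pairing of Bayesian optimization with Hyperband (BOHB). The underlying Hyperband budget schedule, for a maximum budget R and halving rate eta, is cheap to compute; a sketch following the Hyperband paper's formulas:

```python
import math

def hyperband_schedule(R=81, eta=3):
    """Yield (bracket, [(n_configs, budget), ...]) per the Hyperband paper."""
    s_max = int(math.log(R, eta) + 1e-9)  # guard against float round-off
    for s in range(s_max, -1, -1):
        n = math.ceil((s_max + 1) / (s + 1) * eta ** s)  # initial configs
        r = R * eta ** (-s)                              # initial budget
        yield s, [(max(1, n // eta ** i), int(round(r * eta ** i)))
                  for i in range(s + 1)]

for s, rounds in hyperband_schedule():
    print(f"bracket {s}: {rounds}")
```

Each bracket starts many configurations at a small budget and repeatedly keeps only the top 1/eta at eta times the budget, so most evaluations remain cheap.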
1 code implementation • ECCV 2018 • Eddy Ilg, Özgün Çiçek, Silvio Galesso, Aaron Klein, Osama Makansi, Frank Hutter, Thomas Brox
Optical flow estimation can be formulated as an end-to-end supervised learning problem, which yields estimates with a superior accuracy-runtime trade-off compared to alternative methods.
1 code implementation • NIPS 2017 • Aaron Klein, Stefan Falkner, Numair Mansur, Frank Hutter
Bayesian optimization is a powerful approach for the global, derivative-free optimization of expensive non-convex functions.
no code implementations • 2 Dec 2016 • Jost Tobias Springenberg, Aaron Klein, Stefan Falkner, Frank Hutter
We consider parallel asynchronous Markov Chain Monte Carlo (MCMC) sampling for problems where we can leverage (stochastic) gradients to define continuous dynamics which explore the target distribution.
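The canonical single-chain building block for such gradient-based samplers is stochastic gradient Langevin dynamics: a stochastic-gradient ascent step on the log posterior plus Gaussian noise scaled to the step size, so the chain explores the target rather than collapsing to a mode. A minimal sketch on a toy Gaussian target (not the paper's parallel sampler):

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_log_post(theta, minibatch):
    # Toy stochastic gradient of the log posterior; the target here is
    # (roughly) a Gaussian centred on the data mean, for illustration.
    return -(theta - minibatch.mean())

def sgld_step(theta, minibatch, step_size=1e-2):
    # Noise std sqrt(2 * step_size) matches the Langevin discretization.
    noise = rng.normal(0.0, np.sqrt(2 * step_size))
    return theta + step_size * grad_log_post(theta, minibatch) + noise

data = rng.normal(1.5, 1.0, size=1000)
theta, samples = 0.0, []
for _ in range(2000):
    theta = sgld_step(theta, rng.choice(data, size=32))
    samples.append(theta)
print("posterior mean estimate:", np.mean(samples[500:]))
```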
1 code implementation • NeurIPS 2016 • Jost Tobias Springenberg, Aaron Klein, Stefan Falkner, Frank Hutter
Bayesian optimization is a prominent method for optimizing expensive-to-evaluate black-box functions and is widely applied to tuning the hyperparameters of machine learning algorithms.
1 code implementation • 23 May 2016 • Aaron Klein, Stefan Falkner, Simon Bartels, Philipp Hennig, Frank Hutter
Bayesian optimization has become a successful tool for hyperparameter optimization of machine learning algorithms, such as support vector machines or deep neural networks.
2 code implementations • NeurIPS 2015 • Matthias Feurer, Aaron Klein, Katharina Eggensperger, Jost Springenberg, Manuel Blum, Frank Hutter
The success of machine learning in a broad range of applications has led to an ever-growing demand for machine learning systems that can be used off the shelf by non-experts.
1 code implementation • NIPS 2015 • Matthias Feurer, Aaron Klein, Katharina Eggensperger, Jost Tobias Springenberg, Manuel Blum, Frank Hutter
Supplementary Material for Efficient and Robust Automated Machine Learning