Search Results for author: Stefan Falkner

Found 14 papers, 10 papers with code

Auto-Sklearn 2.0: Hands-free AutoML via Meta-Learning

4 code implementations • 8 Jul 2020 • Matthias Feurer, Katharina Eggensperger, Stefan Falkner, Marius Lindauer, Frank Hutter

Automated Machine Learning (AutoML) supports practitioners and researchers with the tedious task of designing machine learning pipelines and has recently achieved substantial success.

AutoML BIG-bench Machine Learning +1
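
The system behind this paper ships as the auto-sklearn package with a scikit-learn-style interface. A minimal usage sketch, assuming auto-sklearn is installed and that your release exposes AutoSklearn2Classifier under the experimental namespace (the dataset and time budget here are arbitrary illustrative choices):

```python
from sklearn.datasets import load_digits
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# AutoSklearn2Classifier lives under the experimental namespace in recent releases.
from autosklearn.experimental.askl2 import AutoSklearn2Classifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The time budget (seconds) bounds the whole search; meta-learning selects the
# model selection strategy and warm-starts the search automatically.
automl = AutoSklearn2Classifier(time_left_for_this_task=300)
automl.fit(X_train, y_train)
print(accuracy_score(y_test, automl.predict(X_test)))
```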

BOHB: Robust and Efficient Hyperparameter Optimization at Scale

4 code implementations • ICML 2018 • Stefan Falkner, Aaron Klein, Frank Hutter

Modern deep learning methods are very sensitive to many hyperparameters, and, due to the long training times of state-of-the-art models, vanilla Bayesian hyperparameter optimization is typically computationally infeasible.

Bayesian Optimization Hyperparameter Optimization
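
The reference implementation is available as the hpbandster package. A minimal single-machine sketch, assuming hpbandster and a ConfigSpace release compatible with it; the toy quadratic in compute stands in for training a model at the given budget:

```python
from ConfigSpace import ConfigurationSpace, UniformFloatHyperparameter
import hpbandster.core.nameserver as hpns
from hpbandster.core.worker import Worker
from hpbandster.optimizers import BOHB

class ToyWorker(Worker):
    def compute(self, config, budget, **kwargs):
        # `budget` would normally control epochs or subset size; this toy
        # objective ignores it and just scores the configuration.
        return {'loss': (config['x'] - 0.5) ** 2, 'info': {}}

cs = ConfigurationSpace()
cs.add_hyperparameter(UniformFloatHyperparameter('x', lower=0.0, upper=1.0))

# BOHB coordinates workers through a small local nameserver.
ns = hpns.NameServer(run_id='toy', host='127.0.0.1', port=None)
ns.start()
worker = ToyWorker(nameserver='127.0.0.1', run_id='toy')
worker.run(background=True)

opt = BOHB(configspace=cs, run_id='toy', min_budget=1, max_budget=9)
result = opt.run(n_iterations=4)
opt.shutdown(shutdown_workers=True)
ns.shutdown()

best = result.get_incumbent_id()
print(result.get_id2config_mapping()[best]['config'])
```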

Fast Bayesian Optimization of Machine Learning Hyperparameters on Large Datasets

1 code implementation • 23 May 2016 • Aaron Klein, Stefan Falkner, Simon Bartels, Philipp Hennig, Frank Hutter

Bayesian optimization has become a successful tool for hyperparameter optimization of machine learning algorithms, such as support vector machines or deep neural networks.

Bayesian Optimization BIG-bench Machine Learning +1
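
FABOLAS itself additionally models validation performance as a function of dataset-subset size; for orientation only, here is a generic Gaussian-process Bayesian optimization loop with expected improvement, sketched with scikit-learn and scipy rather than the paper's implementation:

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):                       # stand-in for a validation loss
    return np.sin(3 * x) + 0.1 * x ** 2

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(3, 1))     # a few random initial evaluations
y = objective(X).ravel()

for _ in range(15):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, y)
    cand = np.linspace(-2, 2, 500).reshape(-1, 1)
    mu, sigma = gp.predict(cand, return_std=True)
    # Expected improvement over the best observation so far.
    imp = y.min() - mu
    z = imp / np.maximum(sigma, 1e-9)
    ei = imp * norm.cdf(z) + sigma * norm.pdf(z)
    x_next = cand[np.argmax(ei)]
    X = np.vstack([X, [x_next]])
    y = np.append(y, objective(x_next))

print(X[np.argmin(y)], y.min())
```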

Bayesian Optimization with Robust Bayesian Neural Networks

1 code implementation • NeurIPS 2016 • Jost Tobias Springenberg, Aaron Klein, Stefan Falkner, Frank Hutter

Bayesian optimization is a prominent method for optimizing expensive-to-evaluate black-box functions and is widely applied to tuning the hyperparameters of machine learning algorithms.

Bayesian Optimization Hyperparameter Optimization +1
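
The paper's surrogate is a Bayesian neural network trained with stochastic gradient HMC; as a much simpler stand-in for illustration, the sketch below uses a small deep ensemble (PyTorch assumed) whose disagreement provides the predictive uncertainty an acquisition function needs:

```python
import torch
import torch.nn as nn

def make_net():
    return nn.Sequential(nn.Linear(1, 50), nn.Tanh(),
                         nn.Linear(50, 50), nn.Tanh(), nn.Linear(50, 1))

def fit(net, X, y, steps=500):
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((net(X) - y) ** 2).mean()
        loss.backward()
        opt.step()

X = torch.linspace(-1, 1, 20).unsqueeze(1)
y = torch.sin(4 * X) + 0.05 * torch.randn_like(X)

# An ensemble of independently initialized networks; disagreement between
# members serves as a crude proxy for model uncertainty.
ensemble = [make_net() for _ in range(5)]
for net in ensemble:
    fit(net, X, y)

x_test = torch.linspace(-1.5, 1.5, 200).unsqueeze(1)
with torch.no_grad():
    preds = torch.stack([net(x_test) for net in ensemble])
mu, sigma = preds.mean(0), preds.std(0)   # inputs to e.g. expected improvement
```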

Learning to Design RNA

5 code implementations • ICLR 2019 • Frederic Runge, Danny Stoll, Stefan Falkner, Frank Hutter

Designing RNA molecules has garnered recent interest in medicine, synthetic biology, biotechnology and bioinformatics since many functional RNA molecules were shown to be involved in regulatory processes for transcription, epigenetics and translation.

Meta-Learning

Towards Automated Deep Learning: Efficient Joint Neural Architecture and Hyperparameter Search

3 code implementations • 18 Jul 2018 • Arber Zela, Aaron Klein, Stefan Falkner, Frank Hutter

While existing work on neural architecture search (NAS) tunes hyperparameters in a separate post-processing step, we demonstrate that architectural choices and other hyperparameter settings interact in a way that can render this separation suboptimal.

Bayesian Optimization Neural Architecture Search
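
The practical upshot is a single search space mixing architectural and training hyperparameters, so one optimizer (BOHB in the paper) samples both jointly. A hypothetical joint space in ConfigSpace (pre-1.0 API assumed) might look like:

```python
from ConfigSpace import (ConfigurationSpace, CategoricalHyperparameter,
                         UniformFloatHyperparameter, UniformIntegerHyperparameter)

cs = ConfigurationSpace()
# Architectural choices ...
cs.add_hyperparameter(UniformIntegerHyperparameter('n_layers', lower=1, upper=8))
cs.add_hyperparameter(UniformIntegerHyperparameter('n_units', lower=16, upper=512, log=True))
cs.add_hyperparameter(CategoricalHyperparameter('activation', ['relu', 'tanh', 'swish']))
# ... and training hyperparameters live in the same space, so the optimizer
# can capture interactions between the two groups.
cs.add_hyperparameter(UniformFloatHyperparameter('learning_rate', lower=1e-5, upper=1e-1, log=True))
cs.add_hyperparameter(UniformFloatHyperparameter('weight_decay', lower=1e-6, upper=1e-2, log=True))

print(cs.sample_configuration())
```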

Shape your Space: A Gaussian Mixture Regularization Approach to Deterministic Autoencoders

1 code implementation • NeurIPS 2021 • Amrutha Saseendran, Kathrin Skubch, Stefan Falkner, Margret Keuper

In this paper, we propose a simple and end-to-end trainable deterministic autoencoding framework that efficiently shapes the latent space of the model during training and utilizes the capacity of expressive multi-modal latent distributions.

Density Estimation
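
A minimal PyTorch sketch of the general idea: a plain deterministic autoencoder whose reconstruction loss is augmented with the negative log-likelihood of the latent codes under a fixed diagonal Gaussian mixture. The mixture parameters, weighting, and training details here are illustrative assumptions, not the paper's exact regularizer:

```python
import math
import torch
import torch.nn as nn

class AE(nn.Module):
    def __init__(self, d_in=784, d_lat=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_in, 256), nn.ReLU(), nn.Linear(256, d_lat))
        self.dec = nn.Sequential(nn.Linear(d_lat, 256), nn.ReLU(), nn.Linear(256, d_in))

    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z

def gmm_nll(z, means, log_std, log_weights):
    # Log-density of each code under each diagonal-Gaussian component.
    diff = z.unsqueeze(1) - means.unsqueeze(0)              # (B, K, D)
    log_comp = (-0.5 * (diff / log_std.exp()) ** 2
                - log_std - 0.5 * math.log(2 * math.pi)).sum(-1)
    return -torch.logsumexp(log_weights + log_comp, dim=1).mean()

model = AE()
K, D = 10, 8                                                # fixed mixture prior
means = torch.randn(K, D)
log_std, log_weights = torch.zeros(K, D), torch.full((K,), -math.log(K))

x = torch.rand(32, 784)
recon, z = model(x)
# Reconstruction plus a penalty that pulls latent codes toward the mixture.
loss = ((recon - x) ** 2).mean() + 0.1 * gmm_nll(z, means, log_std, log_weights)
loss.backward()
```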

Probabilistic Rollouts for Learning Curve Extrapolation Across Hyperparameter Settings

1 code implementation • 10 Oct 2019 • Matilde Gargiani, Aaron Klein, Stefan Falkner, Frank Hutter

We propose probabilistic models that can extrapolate learning curves of iterative machine learning algorithms, such as stochastic gradient descent for training deep networks, based on training data with variable-length learning curves.

BIG-bench Machine Learning Hyperparameter Optimization
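
The paper fits probabilistic models jointly across hyperparameter settings and handles variable-length curves; as a bare-bones single-curve illustration, one can fit a power law to the observed prefix of a learning curve and extrapolate (scipy assumed; the functional form is one common choice, not the paper's model):

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(t, a, b, c):
    # y(t) approaches the asymptote a as t grows; b, c shape the early decay.
    return a - b * t ** (-c)

epochs = np.arange(1, 51)
curve = 0.92 - 0.5 * epochs ** (-0.7) \
        + 0.005 * np.random.default_rng(0).standard_normal(50)

seen = 15                                # only the first epochs are observed
params, _ = curve_fit(power_law, epochs[:seen], curve[:seen],
                      p0=(0.9, 0.5, 0.5), maxfev=5000)
print('predicted final accuracy:', power_law(50, *params))
print('observed final accuracy:', curve[-1])
```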

Asynchronous Stochastic Gradient MCMC with Elastic Coupling

no code implementations • 2 Dec 2016 • Jost Tobias Springenberg, Aaron Klein, Stefan Falkner, Frank Hutter

We consider parallel asynchronous Markov Chain Monte Carlo (MCMC) sampling for problems where we can leverage (stochastic) gradients to define continuous dynamics which explore the target distribution.
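
As a loose illustration of the coupling idea rather than the paper's algorithm, the sketch below runs several SGLD chains that are elastically pulled toward a shared center variable, in the spirit of elastic averaging; it is written sequentially for clarity where the paper's setting is asynchronous:

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_log_target(theta):
    return -theta        # standard Gaussian target: grad log p(theta) = -theta

n_workers, dim, eta, rho = 4, 2, 1e-2, 0.1
thetas = rng.standard_normal((n_workers, dim))
center = thetas.mean(axis=0)

for step in range(5000):
    for i in range(n_workers):   # sequential stand-in for asynchronous workers
        noise = np.sqrt(2 * eta) * rng.standard_normal(dim)
        # Langevin step plus an elastic pull toward the center variable.
        thetas[i] += eta * grad_log_target(thetas[i]) \
                     - eta * rho * (thetas[i] - center) + noise
        # The center drifts toward the workers it is coupled to.
        center += eta * rho * (thetas[i] - center)

print('sample mean over chains:', thetas.mean(axis=0))
```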

Probabilistic Meta-Learning for Bayesian Optimization

no code implementations • 1 Jan 2021 • Felix Berkenkamp, Anna Eivazi, Lukas Grossberger, Kathrin Skubch, Jonathan Spitz, Christian Daniel, Stefan Falkner

Transfer and meta-learning algorithms leverage evaluations on related tasks in order to significantly speed up learning or optimization on a new problem.

Bayesian Optimization Meta-Learning +1
