1 code implementation • 27 Sep 2024 • Jannis Becktepe, Julian Dierkes, Carolin Benjamins, Aditya Mohan, David Salinas, Raghu Rajan, Frank Hutter, Holger Hoos, Marius Lindauer, Theresa Eimer
Thanks to the extensive, large-scale dataset of hyperparameter landscapes underlying our selection, ARLBench is an efficient, flexible, and future-oriented foundation for research on AutoRL.
1 code implementation • 18 Jul 2024 • Carolin Benjamins, Gjorgjina Cenikj, Ana Nikolikj, Aditya Mohan, Tome Eftimov, Marius Lindauer
Dynamic Algorithm Configuration (DAC) addresses the challenge of dynamically setting hyperparameters of an algorithm for a diverse set of instances rather than focusing solely on individual tasks.
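To make the DAC setting concrete, the sketch below shows a controller that re-sets a hyperparameter at every step of a toy target algorithm based on its observed state; the environment and policy here are hypothetical illustrations, not the interface of any DAC library.

```python
# Hypothetical sketch of Dynamic Algorithm Configuration (DAC): a controller
# adjusts a hyperparameter at every step of the target algorithm's run,
# based on the algorithm's observed internal state. Illustrative only.


class TargetAlgorithmEnv:
    """Toy target algorithm whose step-size hyperparameter is set per step."""

    def __init__(self):
        self.x = 10.0  # current solution of a toy 1-D minimization

    def observe(self):
        return {"incumbent": self.x}

    def step(self, step_size):
        # Progress depends on the chosen hyperparameter, so the best
        # step size changes over the course of the run.
        self.x -= step_size * self.x
        return abs(self.x)  # cost: distance to the optimum at 0


def controller(state):
    # A deliberately simple dynamic policy: large steps far from the
    # optimum, small steps close to it.
    return 0.5 if state["incumbent"] > 1.0 else 0.1


env = TargetAlgorithmEnv()
for t in range(20):
    action = controller(env.observe())  # set the hyperparameter for this step
    cost = env.step(action)
print(f"final cost: {cost:.4f}")
```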
no code implementations • 7 Jun 2024 • Difan Deng, Marius Lindauer
In this work, we propose a novel hierarchical neural architecture search approach for time series forecasting tasks.
no code implementations • 5 Jun 2024 • Marius Lindauer, Florian Karl, Anne Klier, Julia Moosbauer, Alexander Tornede, Andreas Mueller, Frank Hutter, Matthias Feurer, Bernd Bischl
Automated machine learning (AutoML) was formed around the fundamental objectives of automatically and efficiently configuring machine learning (ML) workflows, aiding the research of new ML algorithms, and contributing to the democratization of ML by making it accessible to a broader audience.
1 code implementation • 13 May 2024 • Daphne Theodorakopoulos, Frederic Stahl, Marius Lindauer
Our findings not only offer valuable guidance for hyperparameter tuning in multi-objective optimization tasks but also contribute to advancing the understanding of hyperparameter importance in complex optimization scenarios.
no code implementations • 2 Apr 2024 • Leona Hennig, Tanja Tornede, Marius Lindauer
Experimental results demonstrate the effectiveness of our approach, yielding models with over 80% accuracy at low computational cost.
1 code implementation • 13 Dec 2023 • Marc-André Zöller, Marius Lindauer, Marco F. Huber
To meet the growing demand for efficient forecasting, we introduce auto-sktime, a novel framework for automated time series forecasting.
1 code implementation • 7 Sep 2023 • Joseph Giovanelli, Alexander Tornede, Tanja Tornede, Marius Lindauer
In an experimental study targeting the environmental impact of ML, we demonstrate that our approach leads to substantially better Pareto fronts than optimizing based on a wrong indicator pre-selected by the user, and performs comparably when an advanced user knows which indicator to pick.
1 code implementation • 29 Jun 2023 • Felix Neutatz, Marius Lindauer, Ziawasch Abedjan
In this paper, we propose CAML, which uses meta-learning to automatically adapt its own AutoML parameters, such as the search strategy, the validation strategy, and the search space, for a task at hand.
no code implementations • 28 Jun 2023 • Aditya Mohan, Amy Zhang, Marius Lindauer
We amalgamate these diverse methodologies under a unified framework, shedding light on the role of structure in the learning problem, and classify these methods into distinct patterns of incorporating structure.
2 code implementations • NeurIPS 2023 • Neeratyoy Mallik, Edward Bergman, Carl Hvarfner, Danny Stoll, Maciej Janowski, Marius Lindauer, Luigi Nardi, Frank Hutter
Hyperparameters of Deep Learning (DL) pipelines are crucial for their downstream performance.
no code implementations • 21 Jun 2023 • Marc-André Zöller, Fabian Mauthe, Peter Zeiler, Marius Lindauer, Marco F. Huber
Recently, data-driven approaches to RUL prediction have become prevalent over model-based approaches, since they require no underlying physical knowledge of the engineering system.
no code implementations • 13 Jun 2023 • Alexander Tornede, Difan Deng, Theresa Eimer, Joseph Giovanelli, Aditya Mohan, Tim Ruhkopf, Sarah Segel, Daphne Theodorakopoulos, Tanja Tornede, Henning Wachsmuth, Marius Lindauer
The fields of both Natural Language Processing (NLP) and Automated Machine Learning (AutoML) have achieved remarkable results over the past years.
1 code implementation • 7 Jun 2023 • Carolin Benjamins, Elena Raponi, Anja Jankovic, Carola Doerr, Marius Lindauer
Bayesian Optimization (BO) is a class of surrogate-based, sample-efficient algorithms for optimizing black-box problems with small evaluation budgets.
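As a point of reference for this setting, here is a minimal surrogate-based BO loop for a 1-D black box, using a Gaussian-process surrogate from scikit-learn and a lower-confidence-bound acquisition; this is a generic sketch, not the method proposed in the paper.

```python
# Minimal Bayesian-optimization sketch: GP surrogate + lower confidence
# bound, minimizing a 1-D black-box function on [0, 1]. Generic
# illustration only, not the paper's method.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor


def black_box(x):  # stand-in for an expensive function
    return np.sin(12 * x) + 0.5 * x


rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(3, 1))  # small initial design
y = black_box(X).ravel()

for _ in range(15):  # small evaluation budget
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    cand = rng.uniform(0, 1, size=(500, 1))
    mu, std = gp.predict(cand, return_std=True)
    x_next = cand[np.argmin(mu - 2.0 * std)]  # lower confidence bound
    X = np.vstack([X, x_next])
    y = np.append(y, black_box(x_next))

print(f"best observed: f({X[np.argmin(y)][0]:.3f}) = {y.min():.3f}")
```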
1 code implementation • 2 Jun 2023 • Theresa Eimer, Marius Lindauer, Roberta Raileanu
In order to improve reproducibility, deep reinforcement learning (RL) has been adopting better scientific practices such as standardized evaluation metrics and reporting.
1 code implementation • 18 May 2023 • Mohammad Loni, Aditya Mohan, Mehdi Asadi, Marius Lindauer
By conducting experiments on popular DNN models (LeNet-5, VGG-16, ResNet-18, and EfficientNet-B0) trained on the MNIST, CIFAR-10, and ImageNet-16 datasets, we show that the novel combination of these two approaches, dubbed Sparse Activation Function Search (SAFS), yields up to 15.53%, 8.88%, and 6.33% absolute improvements in accuracy for LeNet-5, VGG-16, and ResNet-18 over the default training protocols, especially at high pruning ratios.
1 code implementation • 5 Apr 2023 • Aditya Mohan, Carolin Benjamins, Konrad Wienecke, Alexander Dockhorn, Marius Lindauer
Addressing an important open question on the legitimacy of such dynamic AutoRL approaches, we provide thorough empirical evidence that hyperparameter landscapes vary strongly over time across representative algorithms from the RL literature (DQN, PPO, and SAC) in different kinds of environments (Cartpole, Bipedal Walker, and Hopper). This supports the theory that hyperparameters should be dynamically adjusted during training and shows the potential for further insights into AutoRL problems to be gained through landscape analyses.
1 code implementation • 21 Dec 2022 • Theresa Eimer, Carolin Benjamins, Marius Lindauer
Although Reinforcement Learning (RL) has shown impressive results in games and simulation, real-world application of RL suffers from its instability under changing environment conditions and hyperparameters.
1 code implementation • 17 Nov 2022 • Carolin Benjamins, Anja Jankovic, Elena Raponi, Koen van der Blom, Marius Lindauer, Carola Doerr
Bayesian optimization (BO) algorithms form a class of surrogate-based heuristics, aimed at efficiently computing high-quality solutions for numerical black-box optimization problems.
1 code implementation • 2 Nov 2022 • Carolin Benjamins, Elena Raponi, Anja Jankovic, Koen van der Blom, Maria Laura Santoni, Marius Lindauer, Carola Doerr
We also compare this to a random schedule and round-robin selection of EI and PI.
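For readers unfamiliar with the two acquisition functions, the sketch below gives the standard closed forms of expected improvement (EI) and probability of improvement (PI) under a Gaussian posterior (minimization convention), together with a simple round-robin switch between them; the incumbent value y_best and the alternation scheme are illustrative assumptions.

```python
# Standard EI and PI acquisition functions under a Gaussian posterior
# (minimization), plus a round-robin schedule alternating between them.
from scipy.stats import norm


def expected_improvement(mu, std, y_best):
    z = (y_best - mu) / std
    return (y_best - mu) * norm.cdf(z) + std * norm.pdf(z)


def probability_of_improvement(mu, std, y_best):
    return norm.cdf((y_best - mu) / std)


def round_robin_acquisition(iteration):
    # Alternate EI and PI from one BO iteration to the next.
    return expected_improvement if iteration % 2 == 0 else probability_of_improvement
```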
1 code implementation • 11 Jun 2022 • Julia Moosbauer, Giuseppe Casalicchio, Marius Lindauer, Bernd Bischl
Despite all the benefits of automated hyperparameter optimization (HPO), most modern HPO algorithms are black-boxes themselves.
2 code implementations • 7 Jun 2022 • René Sass, Eddie Bergman, André Biedenkapp, Frank Hutter, Marius Lindauer
Automated Machine Learning (AutoML) is used more than ever before to support users in determining efficient hyperparameters, neural architectures, or even full machine learning pipelines.
no code implementations • 7 Jun 2022 • Aditya Mohan, Tim Ruhkopf, Marius Lindauer
Most approaches for this problem rely on pre-computed dataset meta-features and landmarking performances to capture the salient topology of the datasets and those topologies that the algorithms attend to.
1 code implementation • 27 May 2022 • Steven Adriaensen, André Biedenkapp, Gresa Shala, Noor Awad, Theresa Eimer, Marius Lindauer, Frank Hutter
The performance of an algorithm often critically depends on its parameter configuration.
no code implementations • 23 May 2022 • Frederik Schubert, Carolin Benjamins, Sebastian Döhler, Bodo Rosenhahn, Marius Lindauer
The goal of Unsupervised Reinforcement Learning (URL) is to find a reward-agnostic prior policy on a task domain, such that the sample-efficiency on supervised downstream tasks is improved.
1 code implementation • 11 May 2022 • Difan Deng, Florian Karl, Frank Hutter, Bernd Bischl, Marius Lindauer
In contrast to common NAS search spaces, we designed a novel neural architecture search space covering various state-of-the-art architectures, allowing for an efficient macro-search over different DL approaches.
1 code implementation • 23 Apr 2022 • Carl Hvarfner, Danny Stoll, Artur Souza, Marius Lindauer, Frank Hutter, Luigi Nardi
To address this issue, we propose $\pi$BO, an acquisition function generalization which incorporates prior beliefs about the location of the optimum in the form of a probability distribution, provided by the user.
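A minimal sketch of the idea in code: a standard acquisition value is multiplied by the user's prior density over the optimum's location, with the prior's influence decaying as iterations accumulate. The exact decay schedule (exponent beta / n) is an assumption of this sketch rather than a verbatim restatement of the paper.

```python
# Sketch of a prior-weighted acquisition in the spirit of piBO: the user's
# belief pi(x) about the optimum's location multiplies a standard
# acquisition function, with influence decaying over iterations n.
# The decaying exponent beta / n is an assumption of this sketch.
import numpy as np
from scipy.stats import norm


def prior_weighted_acquisition(acq_values, prior_density, n, beta=10.0):
    """acq_values, prior_density: arrays over candidate points."""
    return acq_values * prior_density ** (beta / n)


# Example: the user believes the optimum lies near x = 0.8.
cand = np.linspace(0, 1, 101)
prior = norm.pdf(cand, loc=0.8, scale=0.1)
acq = np.ones_like(cand)  # placeholder acquisition values
print(cand[np.argmax(prior_weighted_acquisition(acq, prior, n=1))])  # ~0.8
```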
no code implementations • 3 Mar 2022 • Niklas Hasebrook, Felix Morsbach, Niclas Kannengießer, Marc Zöller, Jörg Franke, Marius Lindauer, Frank Hutter, Ali Sunyaev
Advanced programmatic hyperparameter optimization (HPO) methods, such as Bayesian optimization, have high sample efficiency in reproducibly finding optimal hyperparameter values of machine learning (ML) models.
1 code implementation • 9 Feb 2022 • Carolin Benjamins, Theresa Eimer, Frederik Schubert, Aditya Mohan, Sebastian Döhler, André Biedenkapp, Bodo Rosenhahn, Frank Hutter, Marius Lindauer
While Reinforcement Learning (RL) has made great strides towards solving increasingly complicated problems, many algorithms are still brittle to even slight environmental changes.
no code implementations • 11 Jan 2022 • Jack Parker-Holder, Raghu Rajan, Xingyou Song, André Biedenkapp, Yingjie Miao, Theresa Eimer, Baohe Zhang, Vu Nguyen, Roberto Calandra, Aleksandra Faust, Frank Hutter, Marius Lindauer
The combination of Reinforcement Learning (RL) with deep learning has led to a series of impressive feats, with many believing (deep) RL provides a path towards generally capable agents.
no code implementations • 11 Jan 2022 • Zhengying Liu, Adrien Pavao, Zhen Xu, Sergio Escalera, Fabio Ferreira, Isabelle Guyon, Sirui Hong, Frank Hutter, Rongrong Ji, Julio C. S. Jacques Junior, Ge Li, Marius Lindauer, Zhipeng Luo, Meysam Madadi, Thomas Nierhoff, Kangning Niu, Chunguang Pan, Danny Stoll, Sebastien Treguer, Jin Wang, Peng Wang, Chenglin Wu, Youcheng Xiong, Arber Zela, Yang Zhang
Code submissions were executed on hidden tasks with limited time and computational resources, favoring solutions that obtain results quickly.
no code implementations • 10 Nov 2021 • Difan Deng, Marius Lindauer
Because of its sample efficiency, Bayesian optimization (BO) has become a popular approach dealing with expensive black-box optimization problems, such as hyperparameter optimization (HPO).
1 code implementation • NeurIPS 2021 • Julia Moosbauer, Julia Herbinger, Giuseppe Casalicchio, Marius Lindauer, Bernd Bischl
Automated hyperparameter optimization (HPO) can support practitioners to obtain peak performance in machine learning models.
1 code implementation • 5 Oct 2021 • Carolin Benjamins, Theresa Eimer, Frederik Schubert, André Biedenkapp, Bodo Rosenhahn, Frank Hutter, Marius Lindauer
While Reinforcement Learning has made great strides towards solving ever more complicated tasks, many algorithms are still brittle to even slight changes in their environment.
no code implementations • ICLR 2022 • Carl Hvarfner, Danny Stoll, Artur Souza, Luigi Nardi, Marius Lindauer, Frank Hutter
To address this issue, we propose $\pi$BO, an acquisition function generalization which incorporates prior beliefs about the location of the optimum in the form of a probability distribution, provided by the user.
1 code implementation • 20 Sep 2021 • Marius Lindauer, Katharina Eggensperger, Matthias Feurer, André Biedenkapp, Difan Deng, Carolin Benjamins, Tim Ruhkopf, René Sass, Frank Hutter
Algorithm parameters, in particular hyperparameters of machine learning algorithms, can substantially impact their performance.
2 code implementations • 14 Sep 2021 • Katharina Eggensperger, Philipp Müller, Neeratyoy Mallik, Matthias Feurer, René Sass, Aaron Klein, Noor Awad, Marius Lindauer, Frank Hutter
To achieve peak predictive performance, hyperparameter optimization (HPO) is a crucial component of machine learning and its applications.
no code implementations • 28 Jul 2021 • Ludwig Bothmann, Sven Strickroth, Giuseppe Casalicchio, David Rügamer, Marius Lindauer, Fabian Scheipl, Bernd Bischl
It should be openly accessible to everyone, with as few barriers as possible; even more so for key technologies such as Machine Learning (ML) and Data Science (DS).
no code implementations • 13 Jul 2021 • Bernd Bischl, Martin Binder, Michel Lang, Tobias Pielok, Jakob Richter, Stefan Coors, Janek Thomas, Theresa Ullmann, Marc Becker, Anne-Laure Boulesteix, Difan Deng, Marius Lindauer
Most machine learning algorithms are configured by one or several hyperparameters that must be carefully chosen and often considerably impact performance.
no code implementations • NeurIPS 2021 • Arlind Kadra, Marius Lindauer, Frank Hutter, Josif Grabocka
Tabular datasets are the last "unconquered castle" for deep learning, with traditional ML methods like Gradient-Boosted Decision Trees still performing strongly even against recent specialized neural architectures.
no code implementations • 11 Jun 2021 • Frederik Schubert, Theresa Eimer, Bodo Rosenhahn, Marius Lindauer
The use of Reinforcement Learning (RL) agents in practical applications requires the consideration of suboptimal outcomes, depending on the familiarity of the agent with its environment.
1 code implementation • 9 Jun 2021 • Theresa Eimer, André Biedenkapp, Frank Hutter, Marius Lindauer
Reinforcement learning (RL) has made many advances in solving a single problem in a given environment, but learning policies that generalize to unseen variations of a problem remains challenging.
1 code implementation • 9 Jun 2021 • André Biedenkapp, Raghu Rajan, Frank Hutter, Marius Lindauer
Reinforcement learning is a powerful approach to learn behaviour through interactions with an environment.
no code implementations • ICML Workshop AutoML 2021 • Julia Moosbauer, Julia Herbinger, Giuseppe Casalicchio, Marius Lindauer, Bernd Bischl
Automated hyperparameter optimization (HPO) can support practitioners to obtain peak performance in machine learning models.
1 code implementation • 18 May 2021 • Theresa Eimer, André Biedenkapp, Maximilian Reimer, Steven Adriaensen, Frank Hutter, Marius Lindauer
Dynamic Algorithm Configuration (DAC) aims to dynamically control a target algorithm's hyperparameters in order to improve its performance.
1 code implementation • ICML Workshop AutoML 2021 • Julia Guerrero-Viu, Sven Hauns, Sergio Izquierdo, Guilherme Miotto, Simon Schrodi, Andre Biedenkapp, Thomas Elsken, Difan Deng, Marius Lindauer, Frank Hutter
Neural architecture search (NAS) and hyperparameter optimization (HPO) make deep learning accessible to non-experts by automatically finding the architecture of the deep neural network to use and tuning the hyperparameters of the used training pipeline.
no code implementations • 1 Jan 2021 • Arlind Kadra, Marius Lindauer, Frank Hutter, Josif Grabocka
The regularization of prediction models is arguably the most crucial ingredient that allows Machine Learning solutions to generalize well on unseen data.
1 code implementation • 15 Dec 2020 • Noor Awad, Gresa Shala, Difan Deng, Neeratyoy Mallik, Matthias Feurer, Katharina Eggensperger, André Biedenkapp, Diederick Vermetten, Hao Wang, Carola Doerr, Marius Lindauer, Frank Hutter
In this short note, we describe our submission to the NeurIPS 2020 BBO challenge.
no code implementations • 29 Sep 2020 • Katharina Eggensperger, Kai Haase, Philipp Müller, Marius Lindauer, Frank Hutter
When fitting a regression model to predict the distribution of the outcomes, we cannot simply drop these right-censored observations, but need to properly model them.
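One standard way to properly model right-censored observations is a Tobit-style likelihood, in which uncensored runs contribute a density term and timed-out runs a survival term; the sketch below illustrates this generic approach and is not necessarily the paper's exact model.

```python
# Tobit-style handling of right-censored targets: observed runtimes enter
# via the Gaussian density, runs cut off at the timeout enter via the
# survival function P(Y > timeout). Generic sketch, not the paper's model.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

y = np.array([3.1, 4.0, 5.2, 10.0, 10.0])  # last two runs hit the cutoff
censored = np.array([False, False, False, True, True])


def neg_log_likelihood(params):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    ll_obs = norm.logpdf(y[~censored], mu, sigma).sum()
    ll_cens = norm.logsf(y[censored], mu, sigma).sum()  # P(Y > cutoff)
    return -(ll_obs + ll_cens)


mu_hat, log_sigma_hat = minimize(neg_log_likelihood, x0=[5.0, 0.0]).x
print(f"mu = {mu_hat:.2f} (naive mean would be {y.mean():.2f})")
```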
no code implementations • 28 Sep 2020 • Artur Souza, Luigi Nardi, Leonardo Oliveira, Kunle Olukotun, Marius Lindauer, Frank Hutter
While Bayesian Optimization (BO) is a very popular method for optimizing expensive black-box functions, it fails to leverage the experience of domain experts.
4 code implementations • 8 Jul 2020 • Matthias Feurer, Katharina Eggensperger, Stefan Falkner, Marius Lindauer, Frank Hutter
Automated Machine Learning (AutoML) supports practitioners and researchers with the tedious task of designing machine learning pipelines and has recently achieved substantial success.
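For orientation, a typical auto-sklearn entry point looks as follows (classification interface as of the 1.x/2.x releases; consult the project documentation for the current API).

```python
# Typical auto-sklearn usage sketch (API as of the 1.x/2.x releases;
# check the project documentation for the current interface).
from autosklearn.classification import AutoSklearnClassifier
from sklearn.datasets import load_digits
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    *load_digits(return_X_y=True), random_state=0
)
automl = AutoSklearnClassifier(time_left_for_this_task=120)  # seconds
automl.fit(X_train, y_train)  # searches pipelines and hyperparameters
print(accuracy_score(y_test, automl.predict(X_test)))
```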
no code implementations • 25 Jun 2020 • Artur Souza, Luigi Nardi, Leonardo B. Oliveira, Kunle Olukotun, Marius Lindauer, Frank Hutter
We show that BOPrO is around 6.67x faster than state-of-the-art methods on a common suite of benchmarks and achieves new state-of-the-art performance on a real-world hardware design application.
2 code implementations • 24 Jun 2020 • Lucas Zimmer, Marius Lindauer, Frank Hutter
While early AutoML frameworks focused on optimizing traditional ML pipelines and their hyperparameters, a recent trend in AutoML is to focus on neural architecture search.
1 code implementation • 15 Jun 2020 • David Speck, André Biedenkapp, Frank Hutter, Robert Mattmüller, Marius Lindauer
We show that dynamic algorithm configuration can be used for dynamic heuristic selection which takes into account the internal search dynamics of a planning system.
1 code implementation • 1 Jun 2020 • André Biedenkapp, H. Furkan Bozkurt, Theresa Eimer, Frank Hutter, Marius Lindauer
The performance of many algorithms in the fields of hard combinatorial problem solving, machine learning or AI in general depends on parameter tuning.
no code implementations • 5 Sep 2019 • Marius Lindauer, Frank Hutter
Finding a well-performing architecture is often tedious for both DL practitioners and researchers, leading to tremendous interest in the automation of this task by means of neural architecture search (NAS).
no code implementations • 19 Aug 2019 • Marius Lindauer, Matthias Feurer, Katharina Eggensperger, André Biedenkapp, Frank Hutter
Bayesian Optimization (BO) is a common approach for hyperparameter optimization (HPO) in automated machine learning.
1 code implementation • 16 Aug 2019 • Marius Lindauer, Katharina Eggensperger, Matthias Feurer, André Biedenkapp, Joshua Marben, Philipp Müller, Frank Hutter
Hyperparameter optimization and neural architecture search can become prohibitively expensive for regular black-box Bayesian optimization because the training and evaluation of a single model can easily take several hours.
no code implementations • 18 Jun 2019 • André Biedenkapp, H. Furkan Bozkurt, Frank Hutter, Marius Lindauer
The performance of many algorithms in the fields of hard combinatorial problem solving, machine learning or AI in general depends on tuned hyperparameter configurations.
2 code implementations • 18 May 2019 • Hector Mendoza, Aaron Klein, Matthias Feurer, Jost Tobias Springenberg, Matthias Urban, Michael Burkart, Maximilian Dippel, Marius Lindauer, Frank Hutter
Recent advances in AutoML have led to automated tools that can compete with machine learning experts on supervised learning tasks.
no code implementations • 3 May 2018 • Marius Lindauer, Jan N. van Rijn, Lars Kotthoff
The algorithm selection problem is to choose the most suitable algorithm for solving a given problem instance.
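A common way to operationalize per-instance algorithm selection is to learn one runtime model per algorithm from instance features and pick the predicted-fastest algorithm for each new instance; the sketch below uses synthetic data and hypothetical solver names.

```python
# Per-instance algorithm selection sketch: one runtime regressor per
# algorithm, trained on instance features; at test time, pick the
# algorithm with the lowest predicted runtime. Synthetic data for brevity.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
feats = rng.normal(size=(200, 5))  # instance features
runtimes = {  # observed runtimes per (hypothetical) algorithm
    "solver_a": np.exp(feats[:, 0]) + rng.gamma(1.0, size=200),
    "solver_b": np.exp(-feats[:, 0]) + rng.gamma(1.0, size=200),
}
models = {
    name: RandomForestRegressor(random_state=0).fit(feats, y)
    for name, y in runtimes.items()
}


def select(instance_features):
    preds = {n: m.predict(instance_features[None])[0] for n, m in models.items()}
    return min(preds, key=preds.get)


print(select(rng.normal(size=5)))
```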
no code implementations • 22 Sep 2017 • Katharina Eggensperger, Marius Lindauer, Frank Hutter
Many state-of-the-art algorithms for solving hard combinatorial problems in artificial intelligence (AI) include elements of stochasticity that lead to high variations in runtime, even for a fixed problem instance.
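Such runtime variation is often summarized by fitting a parametric runtime distribution (RTD) to repeated runs on a single instance; a lognormal fit, as sketched below with stand-in measurements, is one common choice.

```python
# Fitting a parametric runtime distribution (RTD) to repeated runs of a
# randomized solver on one instance; lognormal is a common choice.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
runtimes = rng.lognormal(mean=1.0, sigma=0.6, size=100)  # stand-in data

shape, loc, scale = stats.lognorm.fit(runtimes, floc=0)  # fix location at 0
print(f"median runtime estimate: {stats.lognorm.median(shape, loc, scale):.2f}s")
```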
no code implementations • 14 Sep 2017 • Marius Lindauer, Frank Hutter
The performance of many hard combinatorial problem solvers depends strongly on their parameter settings, and since manual parameter tuning is both tedious and suboptimal, the AI community has recently developed several algorithm configuration (AC) methods to automatically address this problem.
2 code implementations • 17 May 2017 • Katharina Eggensperger, Marius Lindauer, Frank Hutter
Good parameter settings are crucial to achieve high performance in many areas of artificial intelligence (AI), such as propositional satisfiability solving, AI planning, scheduling, and machine learning (in particular deep learning).
no code implementations • 30 Mar 2017 • Katharina Eggensperger, Marius Lindauer, Holger H. Hoos, Frank Hutter, Kevin Leyton-Brown
In our experiments, we construct and evaluate surrogate benchmarks for hyperparameter optimization as well as for AC problems that involve performance optimization of solvers for hard combinatorial problems, drawing training data from the runs of existing AC procedures.
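The core recipe behind such surrogate benchmarks: record (configuration, performance) pairs from real runs once, fit a regression model to them, and afterwards query the model instead of the expensive target. A generic sketch with synthetic logged data:

```python
# Surrogate-benchmark sketch: fit a regressor on logged
# (configuration, performance) pairs once, then use its predictions as a
# cheap stand-in for running the real solver or training job.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
configs = rng.uniform(size=(500, 3))  # logged configurations
perf = (configs ** 2).sum(axis=1) + rng.normal(0, 0.05, size=500)  # logged scores

surrogate = RandomForestRegressor(random_state=0).fit(configs, perf)


def cheap_benchmark(config):
    # milliseconds per query instead of hours per real evaluation
    return surrogate.predict(np.asarray(config)[None])[0]


print(cheap_benchmark([0.1, 0.2, 0.3]))
```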
no code implementations • 2 Sep 2016 • Markus Wagner, Marius Lindauer, Mustafa Misir, Samadhi Nallaperuma, Frank Hutter
Many real-world problems are composed of several interacting components.
2 code implementations • 8 Jun 2015 • Bernd Bischl, Pascal Kerschke, Lars Kotthoff, Marius Lindauer, Yuri Malitsky, Alexandre Frechette, Holger Hoos, Frank Hutter, Kevin Leyton-Brown, Kevin Tierney, Joaquin Vanschoren
To address this problem, we introduce a standardized format for representing algorithm selection scenarios and a repository that contains a growing number of data sets from the literature.
no code implementations • 5 May 2015 • Frank Hutter, Marius Lindauer, Adrian Balint, Sam Bayless, Holger Hoos, Kevin Leyton-Brown
It is well known that different solution strategies work well for different types of instances of hard combinatorial problems.
no code implementations • 7 May 2014 • Holger Hoos, Marius Lindauer, Torsten Schaub
The claspfolio 2 solver framework supports various feature generators, solver selection approaches, solver portfolios, as well as solver-schedule-based pre-solving techniques.
no code implementations • 6 Jan 2014 • Holger Hoos, Roland Kaminski, Marius Lindauer, Torsten Schaub
Although Boolean Constraint Technology has made tremendous progress over the last decade, the efficacy of state-of-the-art solvers is known to vary considerably across different types of problem instances and is known to depend strongly on algorithm parameters.