1 code implementation • 17 Jul 2023 • Lennart Schneider, Bernd Bischl, Janek Thomas
Efficient optimization is achieved by augmenting the learning algorithm's hyperparameter search space with feature selection, interaction, and monotonicity constraints.
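An illustrative sketch of the general idea (not the paper's implementation): a hyperparameter search space can be augmented with a per-feature selection mask and a monotonicity toggle, so that feature selection and shape constraints are optimized jointly with ordinary hyperparameters. All names below are made up.

```python
import random

# Hypothetical augmented search space: ordinary hyperparameters plus a
# boolean feature-selection mask and a monotonicity-constraint toggle.
N_FEATURES = 5

def sample_config(rng):
    return {
        "learning_rate": 10 ** rng.uniform(-3, 0),   # log-uniform in [1e-3, 1]
        "max_depth": rng.randint(1, 8),
        "monotone_increasing": rng.random() < 0.5,   # constraint on/off
        "feature_mask": [rng.random() < 0.5 for _ in range(N_FEATURES)],
    }

rng = random.Random(0)
configs = [sample_config(rng) for _ in range(10)]
```

Any black-box optimizer (random search here, or Bayesian optimization) can then search this joint space directly.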
no code implementations • 8 May 2023 • Noor Awad, Ayushi Sharma, Philipp Muller, Janek Thomas, Frank Hutter
Hyperparameter optimization (HPO) is a powerful technique for automating the tuning of machine learning (ML) models.
1 code implementation • 30 Jul 2022 • Lennart Schneider, Florian Pfisterer, Paul Kent, Juergen Branke, Bernd Bischl, Janek Thomas
Although considerable progress has been made in the field of multi-objective NAS, we argue that there is some discrepancy between the actual optimization problem of practical interest and the optimization problem that multi-objective NAS tries to solve.
2 code implementations • 25 Jul 2022 • Pieter Gijsbers, Marcos L. P. Bueno, Stefan Coors, Erin LeDell, Sébastien Poirier, Janek Thomas, Bernd Bischl, Joaquin Vanschoren
Comparing different AutoML frameworks is notoriously challenging and often done incorrectly.
no code implementations • 15 Jun 2022 • Florian Karl, Tobias Pielok, Julia Moosbauer, Florian Pfisterer, Stefan Coors, Martin Binder, Lennart Schneider, Janek Thomas, Jakob Richter, Michel Lang, Eduardo C. Garrido-Merchán, Juergen Branke, Bernd Bischl
Hyperparameter optimization constitutes a large part of typical modern machine learning workflows.
1 code implementation • 28 Apr 2022 • Lennart Schneider, Florian Pfisterer, Janek Thomas, Bernd Bischl
The goal of Quality Diversity Optimization is to generate a collection of diverse yet high-performing solutions to a given problem.
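A minimal sketch of one classic Quality Diversity algorithm, MAP-Elites, illustrates the idea: an archive is keyed by a discretized behaviour descriptor, and each cell keeps only the best ("elite") solution found for that behaviour niche. The fitness and descriptor functions below are toy stand-ins, not from the paper.

```python
import random

def fitness(x):
    # Toy objective: higher is better, peak at x = 0.3.
    return -(x - 0.3) ** 2

def descriptor(x):
    # Toy behaviour descriptor: which of 10 niches the solution falls into.
    return int(x * 10) % 10

rng = random.Random(42)
archive = {}                              # niche -> (fitness, solution)
for _ in range(500):
    x = rng.random()                      # random candidate in [0, 1)
    niche = descriptor(x)
    f = fitness(x)
    if niche not in archive or f > archive[niche][0]:
        archive[niche] = (f, x)           # keep the elite per niche
```

The resulting archive holds a set of solutions that is diverse by construction (one per niche) while each entry is locally as high-performing as the search found.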
no code implementations • 13 Jul 2021 • Bernd Bischl, Martin Binder, Michel Lang, Tobias Pielok, Jakob Richter, Stefan Coors, Janek Thomas, Theresa Ullmann, Marc Becker, Anne-Laure Boulesteix, Difan Deng, Marius Lindauer
Most machine learning algorithms are configured by one or several hyperparameters that must be carefully chosen and often considerably impact performance.
2 code implementations • 1 Apr 2021 • Florian Pargent, Florian Pfisterer, Janek Thomas, Bernd Bischl
Since most machine learning (ML) algorithms are designed for numerical inputs, efficiently encoding categorical variables is a crucial aspect of data analysis.
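One common encoding strategy in this area can be sketched briefly (an illustration of smoothed target encoding in general, not the specific regularization studied in the paper): each categorical level is replaced by a shrunken mean of the target, pulled toward the global mean for rare levels. The smoothing weight `m` is an arbitrary choice here.

```python
def target_encode(levels, targets, m=10.0):
    # Smoothed target encoding: level mean shrunk toward the global mean,
    # with shrinkage strength controlled by the pseudo-count m.
    global_mean = sum(targets) / len(targets)
    sums, counts = {}, {}
    for lvl, t in zip(levels, targets):
        sums[lvl] = sums.get(lvl, 0.0) + t
        counts[lvl] = counts.get(lvl, 0) + 1
    return {lvl: (sums[lvl] + m * global_mean) / (counts[lvl] + m)
            for lvl in counts}

enc = target_encode(["a", "a", "b", "b", "b", "c"], [1, 1, 0, 0, 1, 1])
```

Rare levels such as `"c"` above end up close to the global target mean rather than at their noisy raw mean, which is the regularizing effect that makes this family of encoders robust.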
1 code implementation • 6 Feb 2021 • Jann Goschenhofer, Rasmus Hvingelby, David Rügamer, Janek Thomas, Moritz Wagner, Bernd Bischl
Based on these adaptations, we explore the potential of deep semi-supervised learning in the context of time series classification by evaluating our methods on large public time series classification problems with varying amounts of labelled samples.
no code implementations • 30 Dec 2019 • Martin Binder, Julia Moosbauer, Janek Thomas, Bernd Bischl
While model-based optimization needs fewer objective evaluations to achieve good performance, it incurs computational overhead compared to NSGA-II, so the preferred choice depends on the cost of evaluating a model on the given data.
no code implementations • 6 Nov 2019 • Florian Pfisterer, Janek Thomas, Bernd Bischl
Building models from data is an integral part of the majority of data science workflows.
no code implementations • 28 Aug 2019 • Florian Pfisterer, Stefan Coors, Janek Thomas, Bernd Bischl
AutoML systems are currently rising in popularity, as they can build powerful models without human oversight.
no code implementations • 1 Jul 2019 • Pieter Gijsbers, Erin LeDell, Janek Thomas, Sébastien Poirier, Bernd Bischl, Joaquin Vanschoren
In recent years, an active field of research has developed around automated machine learning (AutoML).
no code implementations • 24 Apr 2019 • Jann Goschenhofer, Franz MJ Pfister, Kamer Ali Yuksel, Bernd Bischl, Urban Fietzek, Janek Thomas
To address the limited availability of high-quality training data, we propose a transfer learning technique that improves model performance substantially.
3 code implementations • 10 Jul 2018 • Janek Thomas, Stefan Coors, Bernd Bischl
Automatic machine learning performs predictive modeling with high-performing machine learning tools without human intervention.
no code implementations • 28 Jun 2018 • Daniel Kühn, Philipp Probst, Janek Thomas, Bernd Bischl
Understanding the influence of hyperparameters on the performance of a machine learning algorithm is an important scientific topic in itself and can help to improve automatic hyperparameter tuning procedures.
4 code implementations • 9 Mar 2017 • Bernd Bischl, Jakob Richter, Jakob Bossek, Daniel Horn, Janek Thomas, Michel Lang
We present mlrMBO, a flexible and comprehensive R toolbox for model-based optimization (MBO), also known as Bayesian optimization, which addresses the problem of expensive black-box optimization by approximating the given objective function through a surrogate regression model.
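The generic MBO loop that mlrMBO implements can be sketched as follows. This is a deliberately simplified Python illustration of the idea, not mlrMBO's actual R API: a cheap surrogate is fit to the evaluated points, a candidate is chosen by an acquisition criterion that trades off predicted value against exploration, the expensive objective is evaluated there, and the loop repeats. The 1-nearest-neighbour surrogate and distance-based exploration bonus are toy choices standing in for a regression model and a proper infill criterion.

```python
import math
import random

def objective(x):
    # Stand-in for an expensive black-box function.
    return (x - 2.0) ** 2 + math.sin(5 * x)

def surrogate_mean(x, evals):
    # Toy surrogate: predict the value of the nearest evaluated point.
    return min(evals, key=lambda p: abs(p[0] - x))[1]

def acquisition(x, evals):
    # Lower-confidence-bound flavour: predicted value minus an exploration
    # bonus that grows with distance to the nearest evaluated point.
    dist = min(abs(p[0] - x) for p in evals)
    return surrogate_mean(x, evals) - 2.0 * dist

rng = random.Random(0)
evals = [(x, objective(x)) for x in (0.0, 2.5, 5.0)]     # initial design
for _ in range(30):
    cands = [rng.uniform(0.0, 5.0) for _ in range(100)]
    x_next = min(cands, key=lambda x: acquisition(x, evals))
    evals.append((x_next, objective(x_next)))            # expensive call

best_x, best_y = min(evals, key=lambda p: p[1])
```

In mlrMBO the surrogate is a configurable regression learner (e.g. a Gaussian process or random forest) and the infill criterion is chosen from principled options such as expected improvement.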
no code implementations • 15 Feb 2017 • Janek Thomas, Tobias Hepp, Andreas Mayr, Bernd Bischl
We present a new variable selection method based on model-based gradient boosting and randomly permuted variables.
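The core intuition behind permuted ("shadow") variables can be illustrated in a simplified form. In this sketch a squared-correlation score stands in for the boosting-based selection used in the paper: a permuted copy of each feature carries no real signal, so any actual variable whose score does not beat the best shadow variable is dropped.

```python
import random

def corr2(xs, ys):
    # Squared Pearson correlation, used here as a toy importance score.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sxx = sum((a - mx) ** 2 for a in xs)
    syy = sum((b - my) ** 2 for b in ys)
    return sxy * sxy / (sxx * syy)

rng = random.Random(1)
n = 200
x_signal = [rng.gauss(0, 1) for _ in range(n)]          # informative
x_noise = [rng.gauss(0, 1) for _ in range(n)]           # uninformative
y = [a + rng.gauss(0, 0.1) for a in x_signal]

features = {"signal": x_signal, "noise": x_noise}
shadow_best = 0.0
for col in features.values():
    shadow = col[:]
    rng.shuffle(shadow)          # permutation destroys any relation to y
    shadow_best = max(shadow_best, corr2(shadow, y))

selected = [name for name, col in features.items()
            if corr2(col, y) > shadow_best]
```

The shadow variables thus provide a data-driven stopping threshold, removing the need to hand-tune how many variables to keep.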
1 code implementation • 30 Nov 2016 • Janek Thomas, Andreas Mayr, Bernd Bischl, Matthias Schmid, Adam Smith, Benjamin Hofner
We apply this new algorithm to a study estimating the abundance of common eiders in Massachusetts, USA, featuring excess zeros, overdispersion, non-linearity, and spatio-temporal structures.
no code implementations • 18 Sep 2016 • Julia Schiffner, Bernd Bischl, Michel Lang, Jakob Richter, Zachary M. Jones, Philipp Probst, Florian Pfisterer, Mason Gallo, Dominik Kirchhoff, Tobias Kühn, Janek Thomas, Lars Kotthoff
This document provides an in-depth introduction to the mlr framework for machine learning experiments in R.