Search Results for author: Lennart Schneider

Found 11 papers, 7 papers with code

Evaluating machine learning models in non-standard settings: An overview and new findings

no code implementations • 23 Oct 2023 • Roman Hornung, Malte Nalenz, Lennart Schneider, Andreas Bender, Ludwig Bothmann, Bernd Bischl, Thomas Augustin, Anne-Laure Boulesteix

Our findings corroborate the concern that standard resampling methods often yield biased generalization error (GE) estimates in non-standard settings, underscoring the importance of tailored GE estimation.

Q(D)O-ES: Population-based Quality (Diversity) Optimisation for Post Hoc Ensemble Selection in AutoML

no code implementations • 17 Jul 2023 • Lennart Purucker, Lennart Schneider, Marie Anastacio, Joeran Beel, Bernd Bischl, Holger Hoos

Automated machine learning (AutoML) systems commonly ensemble models post hoc to improve predictive performance, typically via greedy ensemble selection (GES).

AutoML
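The greedy ensemble selection (GES) baseline that Q(D)O-ES competes against can be sketched as a Caruana-style loop: repeatedly add, with replacement, whichever base model most reduces ensemble validation error. This is an illustrative sketch for binary classification with predicted probabilities, not the authors' implementation:

```python
import numpy as np

def greedy_ensemble_selection(val_preds, y_val, n_iters=10):
    """Greedy ensemble selection sketch.
    val_preds: (n_models, n_samples) predicted probabilities for class 1.
    Greedily adds (with replacement) the model whose inclusion
    minimizes 0/1 error of the averaged ensemble on the validation set."""
    chosen = []
    ens_sum = np.zeros(val_preds.shape[1])
    for _ in range(n_iters):
        best_m, best_err = None, np.inf
        for m in range(val_preds.shape[0]):
            avg = (ens_sum + val_preds[m]) / (len(chosen) + 1)
            err = np.mean((avg > 0.5).astype(int) != y_val)
            if err < best_err:
                best_m, best_err = m, err
        chosen.append(best_m)
        ens_sum += val_preds[best_m]
    return chosen, ens_sum / len(chosen)
```

Because models are added with replacement, the final ensemble is a weighted average where a model's weight is proportional to how often it was selected.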

Multi-Objective Optimization of Performance and Interpretability of Tabular Supervised Machine Learning Models

1 code implementation • 17 Jul 2023 • Lennart Schneider, Bernd Bischl, Janek Thomas

Efficient optimization is achieved by augmenting the learning algorithm's hyperparameter search space with feature selection as well as interaction and monotonicity constraints.

feature selection • Hyperparameter Optimization
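Multi-objective optimization of performance and interpretability returns a Pareto front of non-dominated configurations rather than a single best model. A minimal sketch of the non-dominated filter for two objectives to be minimized (illustrative only, e.g. error vs. model complexity):

```python
def pareto_front(points):
    """Return the non-dominated points of a bi-objective minimization
    problem. A point is dominated if some other point is no worse in
    both objectives. O(n^2) sketch for illustration."""
    front = []
    for p in points:
        dominated = any(
            q[0] <= p[0] and q[1] <= p[1] and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front
```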

HPO × ELA: Investigating Hyperparameter Optimization Landscapes by Means of Exploratory Landscape Analysis

1 code implementation • 30 Jul 2022 • Lennart Schneider, Lennart Schäpermeier, Raphael Patrick Prager, Bernd Bischl, Heike Trautmann, Pascal Kerschke

We identify a subset of BBOB problems that are close to the HPO problems in ELA feature space and show that optimizer performance is comparably similar on these two sets of benchmark problems.

Hyperparameter Optimization
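Exploratory landscape analysis (ELA) characterizes an optimization problem by numerical features computed from samples of the objective. As a toy illustration (not the feature set used in the paper, which relies on established ELA toolboxes), one meta-model-style feature is the R² of a linear surrogate fitted to random samples:

```python
import numpy as np

def ela_meta_linear_r2(f, dim, n_samples=200, seed=0):
    """ELA 'meta-model'-style feature sketch: R^2 of a linear model
    fitted to random samples of objective f on [0, 1]^dim.
    Values near 1 indicate a near-linear landscape."""
    rng = np.random.default_rng(seed)
    X = rng.random((n_samples, dim))
    y = np.array([f(x) for x in X])
    A = np.hstack([X, np.ones((n_samples, 1))])  # design matrix + intercept
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return 1 - np.sum(resid**2) / np.sum((y - y.mean())**2)
```

Collecting several such features for HPO problems and BBOB problems is what allows the comparison in a shared ELA feature space described above.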

Tackling Neural Architecture Search With Quality Diversity Optimization

1 code implementation • 30 Jul 2022 • Lennart Schneider, Florian Pfisterer, Paul Kent, Juergen Branke, Bernd Bischl, Janek Thomas

Although considerable progress has been made in the field of multi-objective NAS, we argue that there is some discrepancy between the actual optimization problem of practical interest and the optimization problem that multi-objective NAS tries to solve.

Neural Architecture Search
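Quality diversity (QD) optimization, as applied to NAS here, maintains an archive of the best solution per behavioral niche instead of a single optimum. A minimal MAP-Elites loop (a standard QD algorithm, heavily simplified; not the paper's implementation):

```python
import random

def map_elites(evaluate, mutate, init, iters=1000, seed=0):
    """Minimal MAP-Elites sketch: evaluate(x) -> (fitness, niche).
    Keeps the fittest solution found in each niche, then mutates a
    randomly chosen elite to generate the next candidate."""
    random.seed(seed)
    archive = {}  # niche -> (fitness, solution)
    x = init()
    for _ in range(iters):
        fit, niche = evaluate(x)
        if niche not in archive or fit > archive[niche][0]:
            archive[niche] = (fit, x)
        _, elite = random.choice(list(archive.values()))
        x = mutate(elite)
    return archive
```

In a NAS setting the niche could, for instance, be a bucket of model size or latency, so the archive yields one strong architecture per resource budget.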

Automated Benchmark-Driven Design and Explanation of Hyperparameter Optimizers

1 code implementation • 29 Nov 2021 • Julia Moosbauer, Martin Binder, Lennart Schneider, Florian Pfisterer, Marc Becker, Michel Lang, Lars Kotthoff, Bernd Bischl

Automated hyperparameter optimization (HPO) has gained great popularity and is an important ingredient of most automated machine learning frameworks.

Bayesian Optimization • Hyperparameter Optimization

YAHPO Gym – An Efficient Multi-Objective Multi-Fidelity Benchmark for Hyperparameter Optimization

1 code implementation • 8 Sep 2021 • Florian Pfisterer, Lennart Schneider, Julia Moosbauer, Martin Binder, Bernd Bischl

When developing and analyzing new hyperparameter optimization methods, it is vital to empirically evaluate and compare them on well-curated benchmark suites.

Hyperparameter Optimization

Mutation is all you need

no code implementations • ICML Workshop AutoML 2021 • Lennart Schneider, Florian Pfisterer, Martin Binder, Bernd Bischl

Neural architecture search (NAS) promises to make deep learning accessible to non-experts by automating architecture engineering of deep neural networks.

Bayesian Optimization • Neural Architecture Search
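The title refers to mutation-only search as a strong NAS baseline; its simplest form is a (1+1)-style loop that keeps a single incumbent. A generic sketch (illustrative, not the paper's NAS-specific mutation operators):

```python
import random

def one_plus_one_search(evaluate, mutate, init, iters=500, seed=0):
    """(1+1) mutation-only search sketch (maximization): keep one
    incumbent and replace it whenever a mutated child scores at
    least as well, so the incumbent's fitness never decreases."""
    random.seed(seed)
    best = init()
    best_fit = evaluate(best)
    for _ in range(iters):
        child = mutate(best)
        child_fit = evaluate(child)
        if child_fit >= best_fit:
            best, best_fit = child, child_fit
    return best, best_fit
```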

Model Selection of Nested and Non-Nested Item Response Models using Vuong Tests

1 code implementation • 10 Oct 2018 • Lennart Schneider, R. Philip Chalmers, Rudolf Debelak, Edgar C. Merkle

Vuong's approach to model selection is useful because it allows formal statistical tests of both nested and non-nested models.

Applications
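The core of Vuong's non-nested test compares the per-observation log-likelihoods of two fitted models via a normal-theory z statistic. A simplified sketch (omitting the variance pretest and the Schwarz/BIC correction treated in the paper):

```python
import numpy as np
from scipy.stats import norm

def vuong_nonnested(ll1, ll2):
    """Vuong-style test for non-nested models from per-observation
    log-likelihoods ll1, ll2. Positive z favors model 1, negative
    favors model 2; two-sided p-value from the standard normal."""
    d = np.asarray(ll1) - np.asarray(ll2)
    n = d.size
    z = np.sqrt(n) * d.mean() / d.std(ddof=1)
    p = 2 * norm.sf(abs(z))
    return z, p
```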
