1 code implementation • 7 Mar 2020 • Moritz Herrmann, Philipp Probst, Roman Hornung, Vindi Jurinovic, Anne-Laure Boulesteix
The Kaplan-Meier estimate and a Cox model using only clinical variables were used as reference methods.
no code implementations • 23 Nov 2018 • Florian Pfisterer, Jan N. van Rijn, Philipp Probst, Andreas Müller, Bernd Bischl
The performance of modern machine learning methods highly depends on their hyperparameter configurations.
no code implementations • 28 Jun 2018 • Daniel Kühn, Philipp Probst, Janek Thomas, Bernd Bischl
Understanding the influence of hyperparameters on the performance of a machine learning algorithm is an important scientific topic in itself and can help to improve automatic hyperparameter tuning procedures.
1 code implementation • 10 Apr 2018 • Philipp Probst, Marvin Wright, Anne-Laure Boulesteix
In a benchmark study on several datasets, we compare the prediction performance and runtime of tuneRanger with other tuning implementations in R and RF with default hyperparameters.
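The comparison described above — a tuned random forest against one with default hyperparameters — can be sketched in Python. This is an illustrative transposition, not the paper's code: scikit-learn's `RandomForestClassifier` and `RandomizedSearchCV` stand in for the R packages, and the parameter grid is a made-up analogue of the parameters tuneRanger tunes in R.

```python
# Hedged sketch of the benchmark idea: default-hyperparameter RF vs. a
# tuned RF, transposed to scikit-learn (the paper itself uses R tools).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV, cross_val_score

X, y = load_breast_cancer(return_X_y=True)

# Baseline: random forest with default hyperparameters.
default_rf = RandomForestClassifier(random_state=0)
default_score = cross_val_score(default_rf, X, y, cv=3).mean()

# Tuned: randomized search over an illustrative parameter space
# (roughly analogous to mtry / node size in the R implementations).
search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions={
        "max_features": [0.1, 0.3, 0.5, 0.7, 1.0],
        "min_samples_leaf": [1, 5, 10, 20],
    },
    n_iter=8, cv=3, random_state=0,
)
search.fit(X, y)
print(f"default: {default_score:.3f}  tuned: {search.best_score_:.3f}")
```

A full benchmark would repeat this over many datasets and also record runtime, as the study does.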
2 code implementations • 26 Feb 2018 • Philipp Probst, Bernd Bischl, Anne-Laure Boulesteix
Firstly, we formalize the problem of tuning from a statistical point of view, define data-based defaults and suggest general measures quantifying the tunability of hyperparameters of algorithms.
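One way to read the tunability idea above: measure how much an algorithm gains from tuning by comparing its best configuration against its defaults on a dataset. The sketch below is an illustrative Python analogue (names, dataset, and grid are assumptions, not from the paper), using a decision tree whose default configuration is included in the searched grid, so the gain is non-negative by construction.

```python
# Illustrative tunability sketch: performance gain of the best
# hyperparameter configuration over the defaults on one dataset.
from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_wine(return_X_y=True)

# Reference performance with default hyperparameters.
default_score = cross_val_score(
    DecisionTreeClassifier(random_state=0), X, y, cv=5
).mean()

# Best performance over a small grid that contains the defaults
# (max_depth=None, min_samples_leaf=1 are sklearn's defaults).
grid = [
    {"max_depth": d, "min_samples_leaf": l}
    for d in (2, 5, None)
    for l in (1, 5, 10)
]
best_score = max(
    cross_val_score(DecisionTreeClassifier(random_state=0, **cfg), X, y, cv=5).mean()
    for cfg in grid
)

tunability = best_score - default_score
print(f"tunability estimate: {tunability:.3f}")
```

The paper's formalization aggregates such gains across many datasets and also considers tunability per individual hyperparameter; this shows only the single-dataset core of the measure.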
1 code implementation • 16 May 2017 • Philipp Probst, Anne-Laure Boulesteix
The number of trees T in the random forest (RF) algorithm for supervised learning has to be set by the user.
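The question of how to set T can be made concrete with out-of-bag error curves: tracking OOB error as the number of trees grows typically shows the error stabilizing rather than passing through an interior optimum. The following is a minimal Python illustration of that behavior (dataset and tree counts are arbitrary choices, and scikit-learn stands in for the R implementation):

```python
# Illustrative sketch: out-of-bag error of a random forest for
# increasing numbers of trees T. The error typically plateaus,
# which is why T is often set "large enough" rather than tuned.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)

oob_errors = {}
for n_trees in [25, 100, 250]:
    rf = RandomForestClassifier(
        n_estimators=n_trees, oob_score=True, random_state=0
    )
    rf.fit(X, y)
    oob_errors[n_trees] = 1.0 - rf.oob_score_  # OOB error rate

print(oob_errors)
```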
1 code implementation • 27 Mar 2017 • Philipp Probst, Quay Au, Giuseppe Casalicchio, Clemens Stachl, Bernd Bischl
We implemented several multilabel classification algorithms in the machine learning package mlr.
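For readers unfamiliar with the multilabel setting, here is a minimal sketch of one of the standard approaches the paper covers, binary relevance (one binary classifier per label). The sketch uses scikit-learn rather than mlr, so all names and the synthetic dataset are assumptions of this example, not of the paper:

```python
# Minimal binary-relevance multilabel sketch: one classifier per
# label column, here via scikit-learn's MultiOutputClassifier
# (the paper's implementations live in the R package mlr).
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.multioutput import MultiOutputClassifier

# Synthetic multilabel data: each row can carry several labels at once.
X, Y = make_multilabel_classification(
    n_samples=300, n_classes=4, random_state=0
)
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)

# Binary relevance: fit one LogisticRegression per label column.
clf = MultiOutputClassifier(LogisticRegression(max_iter=1000))
clf.fit(X_tr, Y_tr)
Y_pred = clf.predict(X_te)

# Subset accuracy: fraction of rows where ALL labels are correct.
subset_accuracy = (Y_pred == Y_te).all(axis=1).mean()
print(f"subset accuracy: {subset_accuracy:.2f}")
```

Beyond binary relevance, the multilabel literature (and the mlr wrappers) also includes methods that model label dependencies, such as classifier chains.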
no code implementations • 18 Sep 2016 • Julia Schiffner, Bernd Bischl, Michel Lang, Jakob Richter, Zachary M. Jones, Philipp Probst, Florian Pfisterer, Mason Gallo, Dominik Kirchhoff, Tobias Kühn, Janek Thomas, Lars Kotthoff
This document provides an in-depth introduction to the mlr framework for machine learning experiments in R.