Search Results for author: Philipp Probst

Found 8 papers, 5 papers with code

Automatic Exploration of Machine Learning Experiments on OpenML

no code implementations • 28 Jun 2018 • Daniel Kühn, Philipp Probst, Janek Thomas, Bernd Bischl

Understanding the influence of hyperparameters on the performance of a machine learning algorithm is an important scientific topic in itself and can help to improve automatic hyperparameter tuning procedures.

BIG-bench Machine Learning
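The bot this paper describes is built in R on top of OpenML and mlr. Purely as an illustration of the idea, here is a minimal Python sketch that randomly samples random-forest configurations and records cross-validated performance — the kind of experiment data such a bot would explore and upload. The dataset and search ranges below are placeholders, not the paper's actual setup.

```python
# Hypothetical sketch (not the authors' OpenML bot, which is written in R/mlr):
# randomly sample random-forest hyperparameter configurations and record
# cross-validated performance for each one.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)

results = []
for _ in range(20):  # one record per sampled configuration
    config = {
        "n_estimators": int(rng.integers(10, 500)),
        "max_features": float(rng.uniform(0.1, 1.0)),
        "min_samples_leaf": int(rng.integers(1, 20)),
    }
    score = cross_val_score(RandomForestClassifier(**config, random_state=0),
                            X, y, cv=5).mean()
    results.append((config, score))

# Inspect the best-performing sampled configurations.
for config, score in sorted(results, key=lambda r: -r[1])[:3]:
    print(f"{score:.4f}  {config}")
```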

Hyperparameters and Tuning Strategies for Random Forest

1 code implementation • 10 Apr 2018 • Philipp Probst, Marvin Wright, Anne-Laure Boulesteix

In a benchmark study on several datasets, we compare the prediction performance and runtime of tuneRanger with other tuning implementations in R and with random forests using default hyperparameters.
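tuneRanger is an R package that tunes mtry, min.node.size, and sample.fraction of a random forest via model-based optimization on the out-of-bag error. As a rough Python analogue of the comparison the benchmark makes, the sketch below pits a default random forest against a tuned one; scikit-learn's RandomizedSearchCV stands in for tuneRanger's tuner, and the parameter ranges are illustrative assumptions.

```python
# Minimal analogue of "tuned RF vs. default RF": RandomizedSearchCV is a
# stand-in for tuneRanger's model-based optimizer, not the paper's method.
from scipy.stats import randint, uniform
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV, cross_val_score

X, y = load_breast_cancer(return_X_y=True)

default_score = cross_val_score(RandomForestClassifier(random_state=0),
                                X, y, cv=5).mean()

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions={
        "max_features": uniform(0.1, 0.9),   # analogue of mtry
        "min_samples_leaf": randint(1, 20),  # analogue of min.node.size
        "max_samples": uniform(0.3, 0.7),    # analogue of sample.fraction
    },
    n_iter=30, cv=5, random_state=0,
)
search.fit(X, y)

print(f"default CV accuracy: {default_score:.4f}")
print(f"tuned CV accuracy:   {search.best_score_:.4f}")
```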

Tunability: Importance of Hyperparameters of Machine Learning Algorithms

2 code implementations • 26 Feb 2018 • Philipp Probst, Bernd Bischl, Anne-Laure Boulesteix

Firstly, we formalize the problem of tuning from a statistical point of view, define data-based defaults and suggest general measures quantifying the tunability of hyperparameters of algorithms.

Benchmarking • BIG-bench Machine Learning
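Reading the abstract, tunability can be understood as the performance gain of the best configuration over a default, measured per dataset. Below is a minimal sketch of that measure for a single hyperparameter; the grid is illustrative, and scikit-learn's built-in default stands in for the paper's data-based default.

```python
# Sketch of a tunability measure: per dataset, the gain of the best value
# of one hyperparameter (max_features) over a default. Grid and default
# are assumptions for illustration, not the paper's search space.
from sklearn.datasets import load_breast_cancer, load_wine
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

datasets = {"breast_cancer": load_breast_cancer(return_X_y=True),
            "wine": load_wine(return_X_y=True)}
candidate_values = [0.1, 0.3, 0.5, 0.8, 1.0]  # grid for max_features
default_value = "sqrt"                         # scikit-learn's default

for name, (X, y) in datasets.items():
    def score(v):
        clf = RandomForestClassifier(max_features=v, random_state=0)
        return cross_val_score(clf, X, y, cv=5).mean()
    default = score(default_value)
    best = max(score(v) for v in candidate_values)
    print(f"{name}: tunability of max_features = {best - default:+.4f}")
```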

To tune or not to tune the number of trees in random forest?

1 code implementation • 16 May 2017 • Philipp Probst, Anne-Laure Boulesteix

The number of trees T in the random forest (RF) algorithm for supervised learning has to be set by the user.

General Classification
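The question the paper studies can be probed empirically by watching the out-of-bag error as trees are added. A small sketch, using scikit-learn's warm_start mechanism to grow one forest incrementally (the paper's own experiments are in R):

```python
# Track out-of-bag error as the number of trees T grows. warm_start lets
# us extend the same forest instead of refitting from scratch each time.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
rf = RandomForestClassifier(warm_start=True, oob_score=True, random_state=0)

for n_trees in [25, 50, 100, 250, 500, 1000]:
    rf.set_params(n_estimators=n_trees)
    rf.fit(X, y)  # adds only the new trees
    print(f"T = {n_trees:4d}  OOB error = {1 - rf.oob_score_:.4f}")
```

On most datasets the curve flattens quickly, which is the empirical pattern behind the paper's question of whether T needs tuning at all.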
