16 Sep 2019 • Dimitrios Sarigiannis, Thomas Parnell, Haris Pozidis
In this work, we propose a novel sampling distribution as an alternative to uniform sampling and prove theoretically that it has a better chance of finding the best configuration in a worst-case setting.
16 Sep 2019 • Johanna Sommer, Dimitrios Sarigiannis, Thomas Parnell
In this short paper, we investigate whether meta-learning techniques can be used to more effectively tune the hyperparameters of machine learning models using successive halving (SH).
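For context, successive halving evaluates many configurations on a small budget, repeatedly discards the worse-performing fraction, and grows the budget for the survivors. A minimal sketch of this idea follows; the function names, the `eta` elimination factor, and the toy loss function are illustrative assumptions, not the paper's implementation.

```python
def successive_halving(configs, evaluate, min_budget=1, eta=2):
    """Sketch of successive halving (SH).

    Evaluate all configurations on a small budget, keep the best
    1/eta fraction, multiply the budget by eta, and repeat until
    one configuration remains. `evaluate(config, budget)` must
    return a loss (lower is better).
    """
    budget = min_budget
    while len(configs) > 1:
        # Score every surviving configuration at the current budget.
        scored = sorted(configs, key=lambda c: evaluate(c, budget))
        # Keep the top 1/eta fraction (at least one survivor).
        configs = scored[: max(1, len(configs) // eta)]
        budget *= eta
    return configs[0]

# Toy example (hypothetical): configs are learning rates, and the
# "loss" is distance from an assumed optimum 0.01 plus budget noise.
best = successive_halving(
    configs=[0.001, 0.01, 0.1, 1.0],
    evaluate=lambda lr, b: abs(lr - 0.01) + 1.0 / b,
)
# best == 0.01
```

The paper's question is how meta-learning could inform choices SH otherwise makes blindly, such as which configurations to sample in the first place.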
NeurIPS 2018 • Celestine Dünner, Thomas Parnell, Dimitrios Sarigiannis, Nikolas Ioannou, Andreea Anghel, Gummadi Ravi, Madhusudanan Kandasamy, Haralampos Pozidis
We describe a new software framework for fast training of generalized linear models.