no code implementations • 5 Apr 2024 • Romain Egele, Felix Mohr, Tom Viering, Prasanna Balaprakash
To reach high performance with deep learning, hyperparameter optimization (HPO) is essential.
no code implementations • 25 Nov 2022 • Marco Loog, Tom Viering
Plotting a learner's generalization performance against the training set size results in a so-called learning curve.
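A learning curve as described here can be computed empirically by training the same learner on nested subsets of increasing size and evaluating on held-out data. The sketch below uses a toy 1-D dataset and a nearest-centroid rule purely for illustration; the data, classifier, and sizes are all assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D two-class data: the label is the sign of a noisy feature (assumption).
X = rng.normal(size=2000)
y = (X + rng.normal(scale=0.8, size=2000) > 0).astype(int)

def nearest_centroid_accuracy(n_train):
    """Train a nearest-centroid rule on the first n_train points,
    evaluate on the held-out remainder."""
    Xtr, ytr, Xte, yte = X[:n_train], y[:n_train], X[n_train:], y[n_train:]
    c0, c1 = Xtr[ytr == 0].mean(), Xtr[ytr == 1].mean()
    pred = (np.abs(Xte - c1) < np.abs(Xte - c0)).astype(int)
    return (pred == yte).mean()

# The learning curve: generalization performance vs. training set size.
sizes = [10, 50, 100, 500, 1000]
curve = [nearest_centroid_accuracy(n) for n in sizes]
for n, acc in zip(sizes, curve):
    print(f"n={n:4d}  accuracy={acc:.3f}")
```

Plotting `curve` against `sizes` (typically on a log scale for the x-axis) gives the learning curve in the sense used above.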
1 code implementation • 19 Mar 2021 • Tom Viering, Marco Loog
This important tool can be used for model selection, for predicting the effect of more training data, and for reducing the computational cost of model training and hyperparameter tuning.
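One way to predict the effect of more training data, as mentioned above, is to fit a parametric model to the observed curve and extrapolate; a power law is a commonly used choice. The sketch below fits err ≈ a·n^(−b) by linear regression in log-log space on synthetic, noise-free observations (the data and the zero-asymptote simplification are assumptions for illustration).

```python
import numpy as np

# Synthetic observed errors at increasing training set sizes,
# generated from an exact power law err = 0.5 * n**(-0.4) (assumption).
sizes = np.array([10, 50, 100, 500, 1000], dtype=float)
errors = 0.5 * sizes ** -0.4

# Fit err ≈ a * n**(-b) via least squares in log-log space
# (assumes the asymptotic error is ~0; a common simplification).
b_slope, log_a = np.polyfit(np.log(sizes), np.log(errors), 1)
a, b = np.exp(log_a), -b_slope

# Extrapolate: predicted error with 10x more data than observed.
pred = a * 10000 ** -b
print(f"fitted a={a:.3f}, b={b:.3f}, predicted error at n=10000: {pred:.4f}")
```

On noisy real measurements the same fit applies, but the extrapolation inherits the fit's uncertainty, which is exactly why the shape of learning curves matters for this use case.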
no code implementations • 7 Apr 2020 • Marco Loog, Tom Viering, Alexander Mey, Jesse H. Krijthe, David M. J. Tax
In their thought-provoking paper [1], Belkin et al. illustrate and discuss the shape of risk curves in the context of modern high-complexity learners.
no code implementations • 25 Jul 2019 • Tom Viering, Ziqi Wang, Marco Loog, Elmar Eisemann
This illustrates that GradCAM cannot explain the decision of every CNN and provides a proof of concept showing that it is possible to obfuscate the inner workings of a CNN.
1 code implementation • NeurIPS 2019 • Marco Loog, Tom Viering, Alexander Mey
Plotting a learner's average performance against the number of training samples results in a learning curve.
no code implementations • 14 Jun 2019 • Alexander Mey, Tom Viering, Marco Loog
Here, we derive sample complexity bounds based on the pseudo-dimension for models that add a convex, data-dependent regularization term to a supervised learning process, as is done, in particular, in manifold regularization.