no code implementations • 10 Jan 2024 • Alexander Mey, Rui Manuel Castro
We consider the task of identifying the causal parents of a target variable among a set of candidate variables from observational data.
1 code implementation • 3 Nov 2020 • Elena Congeduti, Alexander Mey, Frans A. Oliehoek
Sequential decision making techniques hold great promise to improve the performance of many real-world systems, but computational complexity hampers their principled application.
no code implementations • 6 Oct 2020 • Alexander Mey
Consequently, statements made about the performance of machine learning models have to take the sampling process into account.
no code implementations • 7 Apr 2020 • Marco Loog, Tom Viering, Alexander Mey, Jesse H. Krijthe, David M. J. Tax
In their thought-provoking paper [1], Belkin et al. illustrate and discuss the shape of risk curves in the context of modern high-complexity learners.
no code implementations • 25 Nov 2019 • Tom J. Viering, Alexander Mey, Marco Loog
Learning performance can show non-monotonic behavior.
no code implementations • 30 Aug 2019 • Alexander Mey, Marco Loog
Our main contribution is to present a way to derive finite sample L1-convergence rates of this estimator for different surrogate loss functions.
no code implementations • 26 Aug 2019 • Alexander Mey, Marco Loog
In this review we gather results about the possible gains one can achieve when using semi-supervised learning as well as results about the limits of such methods.
1 code implementation • NeurIPS 2019 • Marco Loog, Tom Viering, Alexander Mey
Plotting a learner's average performance against the number of training samples results in a learning curve.
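As a toy illustration of such a learning curve (our own sketch, not an example from the paper): the average test error of a simple nearest-class-mean classifier on a synthetic two-class problem, computed for increasing training-set sizes.

```python
import random

def nearest_mean_error(n_train, n_test=500, trials=200, seed=0):
    """Average test error of a nearest-class-mean classifier on a toy
    1-D problem: class 0 ~ N(-1, 1), class 1 ~ N(+1, 1)."""
    rng = random.Random(seed)
    errs = []
    for _ in range(trials):
        # draw a balanced training set and estimate the two class means
        m0 = sum(rng.gauss(-1, 1) for _ in range(n_train)) / n_train
        m1 = sum(rng.gauss(+1, 1) for _ in range(n_train)) / n_train
        # evaluate: predict the class whose estimated mean is closer
        wrong = 0
        for _ in range(n_test):
            y = rng.randrange(2)
            x = rng.gauss(+1 if y else -1, 1)
            pred = 0 if abs(x - m0) < abs(x - m1) else 1
            wrong += pred != y
        errs.append(wrong / n_test)
    return sum(errs) / trials

# learning curve: average error against training samples per class
curve = [(n, nearest_mean_error(n)) for n in (2, 8, 32, 128)]
```

For this well-behaved learner the curve decreases with more data; the point of the paper is that such monotonicity cannot be taken for granted in general.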
no code implementations • 14 Jun 2019 • Alexander Mey, Tom Viering, Marco Loog
Here, we derive sample complexity bounds based on pseudo-dimension for models that add a convex data-dependent regularization term to a supervised learning process, as is done, in particular, in manifold regularization.
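In the manifold-regularization case, the convex data-dependent term is the graph-Laplacian penalty. A standard form of the objective (following Belkin et al.'s usual formulation, not necessarily this paper's exact notation) is:

```latex
f^{*} \;=\; \operatorname*{arg\,min}_{f \in \mathcal{H}_K}\;
\frac{1}{l}\sum_{i=1}^{l} V\!\bigl(x_i, y_i, f\bigr)
\;+\; \gamma_A \,\lVert f \rVert_K^{2}
\;+\; \frac{\gamma_I}{(l+u)^{2}}\, \mathbf{f}^{\top} L\, \mathbf{f}
```

where $l$ and $u$ are the numbers of labelled and unlabelled points, $V$ is the loss, $\lVert f \rVert_K$ the RKHS norm, and $L$ the graph Laplacian built from all $l+u$ inputs; the last term is the data-dependent regularizer.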
1 code implementation • 28 May 2019 • Julius von Kügelgen, Alexander Mey, Marco Loog, Bernhard Schölkopf
While the success of semi-supervised learning (SSL) is still not fully understood, Schölkopf et al. (2012) have established a link to the principle of independent causal mechanisms.
1 code implementation • 20 Jul 2018 • Julius von Kügelgen, Alexander Mey, Marco Loog
Current methods for covariate-shift adaptation use unlabelled data to compute importance weights or domain-invariant features, while the final model is trained on labelled data only.
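The importance-weighting baseline this entry refers to can be sketched on a toy 1-D regression problem (all data and estimator choices here are our own illustration, not from the paper): fit simple density models to the labelled source inputs and the unlabelled target inputs, weight each labelled point by the density ratio, and use the weights when estimating a target-domain quantity.

```python
import math
import random

def gauss_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def fit_gaussian(zs):
    mu = sum(zs) / len(zs)
    var = sum((z - mu) ** 2 for z in zs) / len(zs)
    return mu, math.sqrt(var)

rng = random.Random(1)
# labelled source data: x ~ N(0, 1), y = x^2 + small noise
xs = [rng.gauss(0, 1) for _ in range(2000)]
ys = [x * x + rng.gauss(0, 0.1) for x in xs]
# unlabelled target data: x ~ N(1, 1)  (covariate shift in the inputs)
xt = [rng.gauss(1, 1) for _ in range(2000)]

# fit a Gaussian to each input marginal; importance weight = p_target / p_source
ms, ss = fit_gaussian(xs)
mt, st = fit_gaussian(xt)
w = [gauss_pdf(x, mt, st) / gauss_pdf(x, ms, ss) for x in xs]

# estimate E_target[y]: the plain source average is biased, the
# importance-weighted average corrects for the input shift
unweighted = sum(ys) / len(ys)
weighted = sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)
```

Here the true target mean of y is 2 (since E[x^2] = 2 under N(1, 1)); the unweighted source average sits near 1, while the weighted estimate moves toward 2. The paper's point of departure is that in this pipeline the unlabelled data only shapes the weights, never the model itself.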