no code implementations • 2 Dec 2023 • Wouter A. C. van Amsterdam, Nan van Geloven, Jesse H. Krijthe, Rajesh Ranganath, Giovanni Ciná
These models are harmful self-fulfilling prophecies: their deployment harms a group of patients, but the worsened outcomes of these patients do not invalidate the predictive power of the model.
1 code implementation • NeurIPS 2023 • Rickard K. A. Karlsson, Jesse H. Krijthe
Under the assumption of independent causal mechanisms underlying the data-generating process, we demonstrate a way to detect unobserved confounders given multiple observational datasets from different environments.
1 code implementation • 1 Dec 2020 • Burak Yildiz, Hayley Hung, Jesse H. Krijthe, Cynthia C. S. Liem, Marco Loog, Gosia Migut, Frans Oliehoek, Annibale Panichella, Przemyslaw Pawelczak, Stjepan Picek, Mathijs de Weerdt, Jan van Gemert
We present ReproducedPapers.org: an open online repository for teaching and structuring machine learning reproducibility.
no code implementations • 7 Apr 2020 • Marco Loog, Tom Viering, Alexander Mey, Jesse H. Krijthe, David M. J. Tax
In their thought-provoking paper [1], Belkin et al. illustrate and discuss the shape of risk curves in the context of modern high-complexity learners.
1 code implementation • 17 Oct 2017 • Wouter M. Kouw, Jesse H. Krijthe, Marco Loog
Cross-validation under sample selection bias can, in principle, be done by importance-weighting the empirical risk.
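The importance-weighting idea mentioned above can be sketched as follows: each validation loss is reweighted by the ratio of target to source density at that point, so the weighted empirical risk estimates the risk under the target distribution. This is a minimal illustration, not the paper's method; the density values are assumed to be given, whereas in practice they must be estimated.

```python
import numpy as np

def importance_weighted_risk(losses, p_target, p_source):
    """Importance-weighted empirical risk.

    losses:   per-sample validation losses under the source sample
    p_target: target density evaluated at each sample (assumed known here)
    p_source: source density evaluated at each sample (assumed known here)
    """
    # Reweight each loss by the density ratio p_target / p_source,
    # so the average is an unbiased estimate of the target-domain risk.
    w = p_target / p_source
    return np.mean(w * losses)

# Toy example: the first point is twice as likely under the target
# distribution as under the source, so its loss counts double.
losses = np.array([1.0, 2.0])
risk = importance_weighted_risk(losses,
                                p_target=np.array([0.5, 0.5]),
                                p_source=np.array([0.25, 0.5]))
```

The weighted estimate is unbiased for the target risk, but as the paper's setting suggests, its variance can be large when the density ratio takes extreme values.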
no code implementations • 13 Jul 2017 • Marco Loog, Jesse H. Krijthe, Are C. Jensen
In various approaches to learning, notably in domain adaptation, active learning, learning under covariate shift, semi-supervised learning, learning with concept drift, and the like, one often wants to compare a baseline classifier to one or more advanced (or at least different) strategies.
no code implementations • 8 Jun 2017 • Tom J. Viering, Jesse H. Krijthe, Marco Loog
In particular, we show the relation between the bound of the state-of-the-art Maximum Mean Discrepancy (MMD) active learner, the bound of the Discrepancy, and a new and looser bound that we refer to as the Nuclear Discrepancy bound.
no code implementations • NeurIPS 2018 • Jesse H. Krijthe, Marco Loog
Consider a classification problem where we have both labeled and unlabeled data available.
no code implementations • 27 Dec 2016 • Jesse H. Krijthe, Marco Loog
In this paper, we discuss the approaches we took and trade-offs involved in making a paper on a conceptual topic in pattern recognition research fully reproducible.
2 code implementations • 23 Dec 2016 • Jesse H. Krijthe
In this paper, we introduce a package for semi-supervised learning research in the R programming language called RSSL.
no code implementations • 17 Oct 2016 • Jesse H. Krijthe, Marco Loog
For the supervised least squares classifier, when the number of training objects is smaller than the dimensionality of the data, adding more data to the training set may first increase the error rate before decreasing it.
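The non-monotonic behaviour described above (the error rate first rising, then falling, as training data is added while the training set is smaller than the dimensionality) can be probed with a small simulation. This is a hedged sketch using a minimum-norm least squares fit on synthetic Gaussian data, not the paper's experimental setup; whether and where a peak appears depends on the data distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 50  # dimensionality; the smallest training sizes are below d

def least_squares_error(n_train, n_test=1000):
    # Two Gaussian classes with a small mean shift; labels in {-1, +1}.
    def sample(n):
        y = rng.choice([-1.0, 1.0], size=n)
        X = rng.standard_normal((n, d)) + 0.2 * y[:, None]
        return X, y

    Xtr, ytr = sample(n_train)
    Xte, yte = sample(n_test)
    # Minimum-norm least squares solution via the pseudoinverse;
    # for n_train < d this interpolates the training labels exactly.
    w = np.linalg.pinv(Xtr) @ ytr
    return np.mean(np.sign(Xte @ w) != yte)

# Test error as the training set grows through the n = d threshold.
errors = {n: least_squares_error(n) for n in (10, 25, 50, 100, 200)}
```

Plotting `errors` against the training size traces out the risk curve; in such simulations the error is often worst near `n_train = d`, where the fit is most sensitive to noise.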
no code implementations • 12 Oct 2016 • Jesse H. Krijthe, Marco Loog
The goal of semi-supervised learning is to improve supervised classifiers by using additional unlabeled training examples.
no code implementations • 25 Feb 2016 • Jesse H. Krijthe, Marco Loog
For semi-supervised techniques to be applied safely in practice we at least want methods to outperform their supervised counterparts.
no code implementations • 27 Dec 2015 • Jesse H. Krijthe, Marco Loog
Experimental results show that performance improvements can also be expected in the general multidimensional case, both in terms of the squared loss that is intrinsic to the classifier and in terms of the expected classification error.
no code implementations • 15 Dec 2015 • Wouter M. Kouw, Jesse H. Krijthe, Marco Loog, Laurens J. P. van der Maaten
Our empirical evaluation of FLDA focuses on problems comprising binary and count data in which the transfer can be naturally modeled via a dropout distribution, which allows the classifier to adapt to differences in the marginal probability of features in the source and the target domain.
no code implementations • 24 Jul 2015 • Jesse H. Krijthe, Marco Loog
We introduce a novel semi-supervised version of the least squares classifier.
no code implementations • 17 Nov 2014 • Jesse H. Krijthe, Marco Loog
Using any one of these methods is not guaranteed to outperform the supervised classifier which does not take the additional unlabeled data into account.