Search Results for author: Jesse H. Krijthe

Found 15 papers, 3 papers with code

A Brief Prehistory of Double Descent

no code implementations · 7 Apr 2020 · Marco Loog, Tom Viering, Alexander Mey, Jesse H. Krijthe, David M. J. Tax

In their thought-provoking paper [1], Belkin et al. illustrate and discuss the shape of risk curves in the context of modern high-complexity learners.

Robust importance-weighted cross-validation under sample selection bias

1 code implementation · 17 Oct 2017 · Wouter M. Kouw, Jesse H. Krijthe, Marco Loog

Cross-validation under sample selection bias can, in principle, be done by importance-weighting the empirical risk.

General Classification · Selection bias
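The idea summarized in the abstract can be sketched briefly: under sample selection bias, each validation loss is scaled by an importance weight w(x) = p_target(x) / p_source(x), so that the weighted empirical risk estimates the risk under the target distribution. The following minimal Python sketch illustrates the (self-normalized) weighted risk; it is an illustration of the general principle, not the authors' implementation, and the weight values are hypothetical.

```python
import numpy as np

def importance_weighted_risk(losses, weights):
    """Self-normalized importance-weighted empirical risk: each validation
    loss is scaled by an importance weight w(x) = p_target(x) / p_source(x),
    then the weighted average is taken."""
    losses = np.asarray(losses, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return np.sum(weights * losses) / np.sum(weights)

# Toy example: 0-1 losses on four validation points.
losses = [0.0, 1.0, 1.0, 0.0]
# Points over-represented under the biased sampling get weight < 1,
# under-represented points get weight > 1 (hypothetical values).
weights = [0.5, 2.0, 0.5, 1.0]
print(importance_weighted_risk(losses, weights))  # 0.625
```

Dividing by the sum of the weights (rather than by n) is one common variant; it trades a small bias for lower variance when the weights are noisy.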

On Measuring and Quantifying Performance: Error Rates, Surrogate Loss, and an Example in SSL

no code implementations · 13 Jul 2017 · Marco Loog, Jesse H. Krijthe, Are C. Jensen

In various approaches to learning, notably in domain adaptation, active learning, learning under covariate shift, semi-supervised learning, learning with concept drift, and the like, one often wants to compare a baseline classifier to one or more advanced (or at least different) strategies.

Active Learning · Domain Adaptation · +1

Nuclear Discrepancy for Active Learning

no code implementations · 8 Jun 2017 · Tom J. Viering, Jesse H. Krijthe, Marco Loog

In particular, we show the relation between the bound of the state-of-the-art Maximum Mean Discrepancy (MMD) active learner, the bound of the Discrepancy, and a new and looser bound that we refer to as the Nuclear Discrepancy bound.

Active Learning · Generalization Bounds

Reproducible Pattern Recognition Research: The Case of Optimistic SSL

no code implementations · 27 Dec 2016 · Jesse H. Krijthe, Marco Loog

In this paper, we discuss the approaches we took and trade-offs involved in making a paper on a conceptual topic in pattern recognition research fully reproducible.

RSSL: Semi-supervised Learning in R

2 code implementations · 23 Dec 2016 · Jesse H. Krijthe

In this paper, we introduce a package for semi-supervised learning research in the R programming language called RSSL.

The Peaking Phenomenon in Semi-supervised Learning

no code implementations · 17 Oct 2016 · Jesse H. Krijthe, Marco Loog

For the supervised least squares classifier, when the number of training objects is smaller than the dimensionality of the data, adding more data to the training set may first increase the error rate before decreasing it.
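The non-monotonic behaviour described in the abstract can be reproduced with a small simulation: fit a minimum-norm (pseudo-inverse) least squares classifier on two Gaussian classes in d dimensions and measure the test error as the training set grows through n = d. The sketch below is an illustration of the phenomenon under assumed toy data, not the paper's experimental setup; all parameter values are hypothetical.

```python
import numpy as np

def peaking_curve(d=50, n_values=(10, 50, 250), n_test=1000, reps=100, seed=0):
    """Mean test error of the pseudo-inverse least squares classifier as a
    function of training set size n, for two Gaussian classes (labels -1/+1)
    in d dimensions. Near n = d the error typically peaks before it
    decreases again: the peaking phenomenon."""
    rng = np.random.default_rng(seed)
    delta = np.full(d, 1.0 / np.sqrt(d))  # class-mean offset, ||delta|| = 1
    errors = []
    for n in n_values:
        errs = []
        for _ in range(reps):
            y_tr = rng.choice([-1.0, 1.0], size=n)
            X_tr = rng.standard_normal((n, d)) + np.outer(y_tr, delta)
            y_te = rng.choice([-1.0, 1.0], size=n_test)
            X_te = rng.standard_normal((n_test, d)) + np.outer(y_te, delta)
            w = np.linalg.pinv(X_tr) @ y_tr  # minimum-norm least squares fit
            errs.append(np.mean(np.sign(X_te @ w) != y_te))
        errors.append(float(np.mean(errs)))
    return errors

errs = peaking_curve()
print(errs)  # the error at n = d tends to be the largest of the three
```

With these settings the error first rises as n approaches d = 50 and then falls again for n > d, matching the abstract's description of adding data first hurting and then helping.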

Optimistic Semi-supervised Least Squares Classification

no code implementations · 12 Oct 2016 · Jesse H. Krijthe, Marco Loog

The goal of semi-supervised learning is to improve supervised classifiers by using additional unlabeled training examples.

Classification · General Classification
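The mechanism behind semi-supervised learning can be illustrated with self-training, one of the simplest semi-supervised wrappers: label the unlabeled data with the current classifier, then refit on labeled plus pseudo-labeled data. The sketch below uses a nearest-mean base classifier on toy Gaussian data; it is a generic illustration, not the optimistic least squares method of this particular paper.

```python
import numpy as np

def nearest_mean_fit(X, y):
    """Class means of a two-class nearest-mean classifier (labels -1/+1)."""
    return X[y == -1].mean(axis=0), X[y == 1].mean(axis=0)

def nearest_mean_predict(X, means):
    """Assign each row of X to the class with the nearest mean."""
    m_neg, m_pos = means
    d_neg = np.linalg.norm(X - m_neg, axis=1)
    d_pos = np.linalg.norm(X - m_pos, axis=1)
    return np.where(d_pos < d_neg, 1.0, -1.0)

def self_training(X_lab, y_lab, X_unl, n_iter=5):
    """Self-training wrapper: repeatedly pseudo-label the unlabeled data
    with the current classifier and refit on the combined set."""
    means = nearest_mean_fit(X_lab, y_lab)
    for _ in range(n_iter):
        y_pseudo = nearest_mean_predict(X_unl, means)
        X_all = np.vstack([X_lab, X_unl])
        y_all = np.concatenate([y_lab, y_pseudo])
        means = nearest_mean_fit(X_all, y_all)
    return means

# Toy data: two labeled points, 100 unlabeled points from two Gaussians.
rng = np.random.default_rng(0)
X_lab = np.array([[-2.0, 0.0], [2.0, 0.0]])
y_lab = np.array([-1.0, 1.0])
X_unl = np.vstack([
    rng.standard_normal((50, 2)) + [-2.0, 0.0],
    rng.standard_normal((50, 2)) + [2.0, 0.0],
])
means = self_training(X_lab, y_lab, X_unl)
```

The refit means are estimated from 102 points instead of 2, which is the hoped-for improvement; as the surrounding abstracts stress, such improvement is not guaranteed in general.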

Projected Estimators for Robust Semi-supervised Classification

no code implementations · 25 Feb 2016 · Jesse H. Krijthe, Marco Loog

For semi-supervised techniques to be applied safely in practice we at least want methods to outperform their supervised counterparts.

Classification · General Classification

Robust Semi-supervised Least Squares Classification by Implicit Constraints

no code implementations · 27 Dec 2015 · Jesse H. Krijthe, Marco Loog

Experimental results show that performance improvements can also be expected in the general multidimensional case, both in terms of the squared loss intrinsic to the classifier and in terms of the expected classification error.

Classification · General Classification

Feature-Level Domain Adaptation

no code implementations · 15 Dec 2015 · Wouter M. Kouw, Jesse H. Krijthe, Marco Loog, Laurens J. P. van der Maaten

Our empirical evaluation of FLDA focuses on problems comprising binary and count data in which the transfer can be naturally modeled via a dropout distribution, which allows the classifier to adapt to differences in the marginal probability of features in the source and the target domain.

Domain Adaptation

Implicitly Constrained Semi-Supervised Linear Discriminant Analysis

no code implementations · 17 Nov 2014 · Jesse H. Krijthe, Marco Loog

Using any one of these methods is not guaranteed to outperform the supervised classifier which does not take the additional unlabeled data into account.
