
1 code implementation • 28 Jul 2021 • Chirag Raman, Hayley Hung, Marco Loog

In this work, we take the first step in the direction of a bottom-up self-supervised approach in the domain.

no code implementations • 7 Jun 2021 • Silvia L. Pintea, Nergis Tomen, Stanley F. Goes, Marco Loog, Jan C. van Gemert

We use scale-space theory to obtain a self-similar parametrization of filters and make use of the N-Jet: a truncated Taylor series to approximate a filter by a learned combination of Gaussian derivative filters.
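As an illustrative sketch of this idea (not the paper's implementation), a learnable filter can be built as a weighted combination of Gaussian derivative basis filters; the 1D setting, scale, order, and radius below are arbitrary choices:

```python
import numpy as np

def gaussian_derivative_filters(sigma, order_max=2, radius=6):
    """1D Gaussian derivative filters up to a given order (minimal sketch)."""
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-x**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)
    filters = [g]
    # Gaussian derivatives satisfy the recurrence
    # g^(n)(x) = -(x/sigma^2) g^(n-1)(x) - ((n-1)/sigma^2) g^(n-2)(x)
    for n in range(1, order_max + 1):
        prev = filters[-1]
        prev2 = filters[-2] if n >= 2 else np.zeros_like(x)
        filters.append(-(x / sigma**2) * prev - ((n - 1) / sigma**2) * prev2)
    return np.stack(filters)  # shape: (order_max + 1, 2*radius + 1)

def njet_filter(weights, sigma):
    """A filter expressed as a learned combination of Gaussian derivatives."""
    basis = gaussian_derivative_filters(sigma, order_max=len(weights) - 1)
    return weights @ basis
```

In this parametrization, the learnable quantities are the combination weights (and, in the paper, the scale itself) rather than free per-pixel filter values.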

1 code implementation • 19 Mar 2021 • Tom Viering, Marco Loog

This important tool can be used for model selection, to predict the effect of more training data, and to reduce the computational complexity of model training and hyperparameter tuning.

no code implementations • 3 Feb 2021 • Marco Loog

Importance weighting is widely applicable in machine learning in general and in techniques dealing with data covariate shift problems in particular.
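A minimal sketch of importance weighting under covariate shift; the Gaussian densities and toy squared loss below are chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def gauss_pdf(x, mu):
    """Standard-width Gaussian density at x with mean mu."""
    return np.exp(-0.5 * (x - mu) ** 2) / np.sqrt(2 * np.pi)

# Training data from p_train = N(0,1); we want the risk under p_test = N(1,1).
x = rng.normal(0.0, 1.0, 100_000)
losses = x ** 2                               # toy per-sample loss
w = gauss_pdf(x, 1.0) / gauss_pdf(x, 0.0)     # importance weights p_test/p_train

plain_estimate = losses.mean()            # estimates E_{N(0,1)}[x^2] = 1
weighted_estimate = (w * losses).mean()   # estimates E_{N(1,1)}[x^2] = 2
```

Reweighting by the density ratio turns an average over the training distribution into an unbiased estimate of the test-distribution risk, at the cost of increased variance when the two distributions differ strongly.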

1 code implementation • 1 Dec 2020 • Burak Yildiz, Hayley Hung, Jesse H. Krijthe, Cynthia C. S. Liem, Marco Loog, Gosia Migut, Frans Oliehoek, Annibale Panichella, Przemyslaw Pawelczak, Stjepan Picek, Mathijs de Weerdt, Jan van Gemert

We present ReproducedPapers.org: an open online repository for teaching and structuring machine learning reproducibility.

no code implementations • 15 Oct 2020 • Ziqi Wang, Marco Loog, Jan van Gemert

In this work, we define DIRs employed by existing works in probabilistic terms and show that by learning DIRs, overly strict requirements are imposed concerning the invariance.

1 code implementation • 13 Aug 2020 • Kanav Anand, Ziqi Wang, Marco Loog, Jan van Gemert

Our study investigates the subjective human factor in comparisons of state-of-the-art results and scientific reproducibility in deep learning.

no code implementations • 7 Apr 2020 • Marco Loog, Tom Viering, Alexander Mey, Jesse H. Krijthe, David M. J. Tax

In their thought-provoking paper [1], Belkin et al. illustrate and discuss the shape of risk curves in the context of modern high-complexity learners.

no code implementations • 25 Nov 2019 • Tom J. Viering, Alexander Mey, Marco Loog

Learning performance can show non-monotonic behavior.

no code implementations • 30 Aug 2019 • Alexander Mey, Marco Loog

Our main contribution is to present a way to derive finite sample L1-convergence rates of this estimator for different surrogate loss functions.

no code implementations • 26 Aug 2019 • Alexander Mey, Marco Loog

In this review we gather results about the possible gains one can achieve when using semi-supervised learning as well as results about the limits of such methods.

no code implementations • 25 Jul 2019 • Tom Viering, Ziqi Wang, Marco Loog, Elmar Eisemann

This illustrates that GradCAM cannot explain the decision of every CNN and provides a proof of concept showing that it is possible to obfuscate the inner workings of a CNN.

1 code implementation • NeurIPS 2019 • Marco Loog, Tom Viering, Alexander Mey

Plotting a learner's average performance against the number of training samples results in a learning curve.
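Such a curve can be produced by averaging a learner's test error at increasing training-set sizes; the nearest-mean classifier and two Gaussian classes below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def nearest_mean_error(n_train, n_test=2000, d=5):
    """Test error of a nearest-mean classifier trained on n_train samples
    per class, for two spherical Gaussian classes (illustrative setup)."""
    mu0, mu1 = np.zeros(d), np.ones(d)
    m0 = rng.normal(mu0, 1.0, (n_train, d)).mean(axis=0)
    m1 = rng.normal(mu1, 1.0, (n_train, d)).mean(axis=0)
    Xt = np.vstack([rng.normal(mu0, 1.0, (n_test, d)),
                    rng.normal(mu1, 1.0, (n_test, d))])
    yt = np.r_[np.zeros(n_test), np.ones(n_test)]
    pred = np.linalg.norm(Xt - m1, axis=1) < np.linalg.norm(Xt - m0, axis=1)
    return np.mean(pred != yt)

# The learning curve: average error over repetitions at each training size.
sizes = [2, 8, 32, 128]
curve = [np.mean([nearest_mean_error(n) for _ in range(20)]) for n in sizes]
```

Plotting `curve` against `sizes` gives the learning curve; averaging over repetitions smooths out the variance of individual training draws.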

no code implementations • 14 Jun 2019 • Alexander Mey, Tom Viering, Marco Loog

Here, we derive sample complexity bounds based on pseudo-dimension for models that add a convex data dependent regularization term to a supervised learning process, as is in particular done in Manifold regularization.

1 code implementation • 28 May 2019 • Julius von Kügelgen, Alexander Mey, Marco Loog, Bernhard Schölkopf

While the success of semi-supervised learning (SSL) is still not fully understood, Schölkopf et al. (2012) have established a link to the principle of independent causal mechanisms.

1 code implementation • 16 Jan 2019 • Wouter M. Kouw, Marco Loog

Feature-based methods revolve around mapping, projecting, and representing features such that a source classifier performs well on the target domain, while inference-based methods incorporate adaptation into the parameter estimation procedure, for instance through constraints on the optimization.

no code implementations • 31 Dec 2018 • Wouter M. Kouw, Marco Loog

Domain adaptation and transfer learning are sub-fields within machine learning that are concerned with accounting for these types of changes.

1 code implementation • 17 Oct 2018 • Wouter M. Kouw, Marco Loog, Wilbert Bartels, Adriënne M. Mendrik

Generalization of voxelwise classifiers is hampered by differences between MRI scanners, e.g., different acquisition protocols and field strengths.

no code implementations • 28 Aug 2018 • Lex Razoux Schultz, Marco Loog, Peyman Mohajerin Esfahani

The performance of the proposed methodology is validated through an SC case study in which our numerical experiments suggest a significant improvement in cross-domain classification error compared with a randomly selected source domain, in both a naive and an adaptive learning setting.

1 code implementation • 20 Jul 2018 • Julius von Kügelgen, Alexander Mey, Marco Loog

Current methods for covariate-shift adaptation use unlabelled data to compute importance weights or domain-invariant features, while the final model is trained on labelled data only.

1 code implementation • 21 Jun 2018 • Wouter M. Kouw, Marco Loog

In practice, the data distribution at test time often differs, to a smaller or larger extent, from that of the original training data.

1 code implementation • 17 May 2018 • Yazhou Yang, Marco Loog

These pseudo annotators always provide uniformly random labels whenever new unlabeled samples are queried.

1 code implementation • 19 Apr 2018 • Wouter M. Kouw, Marco Loog

For sample selection bias settings, and for small sample sizes, the importance-weighted risk estimator produces overestimates for datasets in the body of the sampling distribution, i.e., the majority of cases, and large underestimates for datasets in the tail of the sampling distribution.

no code implementations • 25 Oct 2017 • Marco Loog

The original problem of supervised classification considers the task of automatically assigning objects to their respective classes on the basis of numerical measurements derived from these objects.

1 code implementation • 17 Oct 2017 • Wouter M. Kouw, Jesse H. Krijthe, Marco Loog

Cross-validation under sample selection bias can, in principle, be done by importance-weighting the empirical risk.

1 code implementation • 22 Sep 2017 • Wouter M. Kouw, Marco Loog, Lambertus W. Bartels, Adriënne M. Mendrik

Due to this acquisition related variation, classifiers trained on data from a specific scanner fail or under-perform when applied to data that was acquired differently.

no code implementations • 19 Jul 2017 • Amogh Gudi, Nicolai van Rosmalen, Marco Loog, Jan van Gemert

To facilitate this, we propose a novel global pooling technique called Spatial Pyramid Averaged Max (SPAM) pooling for training this CAM-based network for object extent localisation with only weak image-level supervision.

no code implementations • 13 Jul 2017 • Marco Loog, Jesse H. Krijthe, Are C. Jensen

In various approaches to learning, notably in domain adaptation, active learning, learning under covariate shift, semi-supervised learning, learning with concept drift, and the like, one often wants to compare a baseline classifier to one or more advanced (or at least different) strategies.

no code implementations • 10 Jul 2017 • Marco Loog, François Lauze

We start out by demonstrating that an elementary learning task, corresponding to the training of a single linear neuron in a convolutional neural network, can be solved for feature spaces of very high dimensionality.

1 code implementation • 25 Jun 2017 • Wouter M. Kouw, Marco Loog

In domain adaptation, classifiers with information from a source domain adapt to generalize to a target domain.

1 code implementation • 23 Jun 2017 • Yazhou Yang, Marco Loog

We propose a novel approach which we refer to as maximizing variance for active learning, or MVAL for short.

no code implementations • 8 Jun 2017 • Tom J. Viering, Jesse H. Krijthe, Marco Loog

In particular we show the relation between the bound of the state-of-the-art Maximum Mean Discrepancy (MMD) active learner, the bound of the Discrepancy, and a new and looser bound that we refer to as the Nuclear Discrepancy bound.

no code implementations • 15 Mar 2017 • Veronika Cheplygina, Lauge Sørensen, David M. J. Tax, Jesper Holst Pedersen, Marco Loog, Marleen de Bruijne

Chronic obstructive pulmonary disease (COPD) is a lung disease where early detection benefits the survival rate.

no code implementations • 15 Mar 2017 • Veronika Cheplygina, Lauge Sørensen, David M. J. Tax, Marleen de Bruijne, Marco Loog

We address the problem of instance label stability in multiple instance learning (MIL) classifiers.

1 code implementation • 27 Feb 2017 • Yazhou Yang, Marco Loog

Many active learning methods belong to the retraining-based approaches, which select one unlabeled instance, add it to the training set with each of its possible labels, retrain the classification model, and evaluate the criterion on which the selection is based.
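The retraining-based selection loop can be sketched generically; the nearest-mean learner and the disagreement criterion below are placeholders for whichever model and criterion a specific method uses:

```python
import numpy as np

def nm_fit(X, y):
    """Nearest-mean model: just the two class means (placeholder learner)."""
    return X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)

def nm_predict(model, X):
    m0, m1 = model
    return (np.linalg.norm(X - m1, axis=1)
            < np.linalg.norm(X - m0, axis=1)).astype(int)

def retraining_based_query(X_lab, y_lab, X_unl):
    """For each candidate, retrain once per hypothetical label and score the
    candidate by how much the retrained models disagree on the unlabeled
    pool; this disagreement criterion is one illustrative choice of many."""
    best_i, best_agreement = 0, np.inf
    for i in range(len(X_unl)):
        preds = []
        for lab in (0, 1):
            X_aug = np.vstack([X_lab, X_unl[i:i + 1]])
            y_aug = np.r_[y_lab, lab]
            preds.append(nm_predict(nm_fit(X_aug, y_aug), X_unl))
        agreement = np.mean(preds[0] == preds[1])
        if agreement < best_agreement:
            best_agreement, best_i = agreement, i
    return best_i
```

The per-candidate retraining is what makes these approaches expensive: each query costs one model fit per candidate per possible label.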

no code implementations • NeurIPS 2018 • Jesse H. Krijthe, Marco Loog

Consider a classification problem where we have both labeled and unlabeled data available.

no code implementations • 27 Dec 2016 • Jesse H. Krijthe, Marco Loog

In this paper, we discuss the approaches we took and trade-offs involved in making a paper on a conceptual topic in pattern recognition research fully reproducible.

no code implementations • 25 Nov 2016 • Yazhou Yang, Marco Loog

Logistic regression is by far the most widely used classifier in real-world applications.

no code implementations • 17 Oct 2016 • Jesse H. Krijthe, Marco Loog

For the supervised least squares classifier, when the number of training objects is smaller than the dimensionality of the data, adding more data to the training set may first increase the error rate before decreasing it.
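This peaking behavior is easy to reproduce with a minimum-norm (pseudo-inverse) least squares classifier on synthetic Gaussian classes; the dimensionality and sample sizes below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 50  # data dimensionality

def lsq_error(n_per_class, n_test=1000):
    """Test error of the minimum-norm least squares classifier."""
    mu = np.r_[1.0, np.zeros(d - 1)]
    Xtr = np.vstack([rng.normal(-mu, 1.0, (n_per_class, d)),
                     rng.normal(mu, 1.0, (n_per_class, d))])
    ytr = np.r_[-np.ones(n_per_class), np.ones(n_per_class)]
    w = np.linalg.pinv(Xtr) @ ytr              # minimum-norm solution
    Xte = np.vstack([rng.normal(-mu, 1.0, (n_test, d)),
                     rng.normal(mu, 1.0, (n_test, d))])
    yte = np.r_[-np.ones(n_test), np.ones(n_test)]
    return np.mean(np.sign(Xte @ w) != yte)

# Error can first rise, then fall: training sizes below, at, and well above
# the dimensionality (total sizes 10, 50, and 500 versus d = 50).
errors = {n: np.mean([lsq_error(n) for _ in range(20)]) for n in (5, 25, 250)}
```

The error peaks when the number of training objects is close to the dimensionality, where the fitted solution is dominated by noise, and only then decreases with further data.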

no code implementations • 12 Oct 2016 • Jesse H. Krijthe, Marco Loog

The goal of semi-supervised learning is to improve supervised classifiers by using additional unlabeled training examples.

1 code implementation • 31 Jul 2016 • Wouter M. Kouw, Marco Loog

This paper identifies a problem with the usual procedure for L2-regularization parameter estimation in a domain adaptation setting.

no code implementations • 10 Mar 2016 • Erik J. Bekkers, Marco Loog, Bart M. ter Haar Romeny, Remco Duits

We propose a template matching method for the detection of 2D image objects that are characterized by orientation patterns.
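For contrast with the proposed orientation-pattern approach, the classical baseline is exhaustive normalized cross-correlation; a minimal sketch (not the paper's method):

```python
import numpy as np

def ncc_match(image, template):
    """Exhaustive normalized cross-correlation template matching.
    Returns the top-left position of the best match and its score in [-1, 1]."""
    h, w = template.shape
    t = template - template.mean()
    tn = np.linalg.norm(t)
    best_score, best_pos = -np.inf, (0, 0)
    for i in range(image.shape[0] - h + 1):
        for j in range(image.shape[1] - w + 1):
            patch = image[i:i + h, j:j + w]
            p = patch - patch.mean()
            pn = np.linalg.norm(p)
            score = (p * t).sum() / (pn * tn) if pn > 0 else 0.0
            if score > best_score:
                best_score, best_pos = score, (i, j)
    return best_pos, best_score
```

Plain cross-correlation compares raw intensities; matching on orientation patterns, as proposed in the paper, instead compares local orientation structure, which is more robust for curvilinear objects.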

no code implementations • 25 Feb 2016 • Jesse H. Krijthe, Marco Loog

For semi-supervised techniques to be applied safely in practice we at least want methods to outperform their supervised counterparts.

no code implementations • 27 Dec 2015 • Jesse H. Krijthe, Marco Loog

Experimental results show that performance improvements can be expected in the general multidimensional case as well, both in terms of the squared loss that is intrinsic to the classifier and in terms of the expected classification error.

no code implementations • 15 Dec 2015 • Wouter M. Kouw, Jesse H. Krijthe, Marco Loog, Laurens J. P. van der Maaten

Our empirical evaluation of FLDA focuses on problems comprising binary and count data in which the transfer can be naturally modeled via a dropout distribution, which allows the classifier to adapt to differences in the marginal probability of features in the source and the target domain.

no code implementations • 24 Jul 2015 • Jesse H. Krijthe, Marco Loog

We introduce a novel semi-supervised version of the least squares classifier.

1 code implementation • 1 Mar 2015 • Marco Loog

The latter refers to the fact that our estimates are conservative and therefore resilient to whatever form the true labeling of the unlabeled data takes on.

no code implementations • 17 Nov 2014 • Jesse H. Krijthe, Marco Loog

Using any one of these methods is not guaranteed to outperform the supervised classifier which does not take the additional unlabeled data into account.

no code implementations • 2 Jun 2014 • Veronika Cheplygina, David M. J. Tax, Marco Loog

To better deal with such problems, several extensions of supervised learning have been proposed in which training and/or test objects are sets of feature vectors.

no code implementations • 6 Feb 2014 • Veronika Cheplygina, David M. J. Tax, Marco Loog

In multiple instance learning, objects are sets (bags) of feature vectors (instances) rather than individual feature vectors.

no code implementations • 6 Feb 2014 • David M. J. Tax, Veronika Cheplygina, Marco Loog

Considering one whole slide as a collection (a bag) of feature vectors, however, poses the problem of how to handle this bag.

no code implementations • 22 Sep 2013 • Veronika Cheplygina, David M. J. Tax, Marco Loog

Multiple instance learning (MIL) is concerned with learning from sets (bags) of objects (instances), where the individual instance labels are ambiguous.
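A toy MIL setup, assuming the standard witness-based bag labeling; the max-pooling bag summary below is a naive baseline for illustration, not a method from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy MIL data: only bag labels are observed; a bag is positive when it
# contains at least one instance drawn from the positive concept.
def make_bag(positive, n_inst=10):
    X = rng.normal(0.0, 1.0, (n_inst, 2))
    if positive:
        X[0] = rng.normal(2.5, 0.5, 2)   # one "witness" instance
    return X

bags = [make_bag(i % 2 == 1) for i in range(200)]
labels = np.array([i % 2 for i in range(200)])

# Naive bag-level baseline: summarize each bag by its per-feature maximum
# (sensitive to a single strong witness), then threshold the summed summary.
reps = np.array([b.max(axis=0) for b in bags])
scores = reps.sum(axis=1)
pred = (scores > np.median(scores)).astype(int)
accuracy = np.mean(pred == labels)
```

The ambiguity the abstract refers to is visible here: the bag label says only that some instance is positive, never which one, so instance-level labels must be inferred.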

Papers With Code is a free resource with all data licensed under CC-BY-SA.