Search Results for author: Marco Loog

Found 52 papers, 19 papers with code

Social Processes: Self-Supervised Forecasting of Nonverbal Cues in Social Conversations

1 code implementation · 28 Jul 2021 · Chirag Raman, Hayley Hung, Marco Loog

In this work, we take a first step in the direction of a bottom-up, self-supervised approach in this domain.

Social Cue Forecasting

Resolution learning in deep convolutional networks using scale-space theory

no code implementations · 7 Jun 2021 · Silvia L. Pintea, Nergis Tomen, Stanley F. Goes, Marco Loog, Jan C. van Gemert

We use scale-space theory to obtain a self-similar parametrization of filters and make use of the N-Jet: a truncated Taylor series to approximate a filter by a learned combination of Gaussian derivative filters.
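
The idea sketched in this abstract, representing a filter as a learned linear combination of Gaussian derivative filters, can be illustrated numerically. The following 1-D sketch is an assumption-laden toy (function names and the target filter are invented here, not taken from the paper's code):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def gaussian_derivative_basis(size, sigma, max_order):
    """Columns are Gaussian derivative filters of order 0..max_order."""
    impulse = np.zeros(size)
    impulse[size // 2] = 1.0
    return np.stack(
        [gaussian_filter1d(impulse, sigma, order=k) for k in range(max_order + 1)],
        axis=1,
    )

def fit_filter(target, sigma, max_order):
    """Least-squares weights expressing `target` in the Gaussian-derivative basis."""
    basis = gaussian_derivative_basis(len(target), sigma, max_order)
    weights, *_ = np.linalg.lstsq(basis, target, rcond=None)
    return basis @ weights, weights

# Build a target that is a known combination (0.5 * G0 - 1.5 * G2) and
# check that the learned weights recover it.
basis = gaussian_derivative_basis(41, sigma=3.0, max_order=4)
true_weights = np.array([0.5, 0.0, -1.5, 0.0, 0.0])
approx, learned = fit_filter(basis @ true_weights, sigma=3.0, max_order=4)
print(np.round(learned, 3))
```

Because the Gaussian derivatives of different orders are linearly independent, the least-squares fit recovers the combination exactly when the target lies in their span.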

The Shape of Learning Curves: a Review

1 code implementation · 19 Mar 2021 · Tom Viering, Marco Loog

This important tool can be used for model selection, to predict the effect of more training data, and to reduce the computational complexity of model training and hyperparameter tuning.

Gaussian Processes · Model Selection

Nearest Neighbor-based Importance Weighting

no code implementations · 3 Feb 2021 · Marco Loog

Importance weighting is widely applicable in machine learning in general and in techniques dealing with data covariate shift problems in particular.

Classification · General Classification
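
Nearest-neighbour importance weighting can be sketched in a few lines, assuming the common formulation in which each source (training) sample is weighted by the number of target (test) samples for which it is the nearest source neighbour; the data and names below are illustrative only:

```python
import numpy as np

def nn_importance_weights(x_source, x_target):
    """One weight per source sample: its nearest-neighbour (Voronoi cell) count."""
    # Pairwise squared distances, shape (n_target, n_source).
    d = ((x_target[:, None, :] - x_source[None, :, :]) ** 2).sum(-1)
    nearest = d.argmin(axis=1)  # nearest source index for each target point
    counts = np.bincount(nearest, minlength=len(x_source))
    return counts.astype(float)

rng = np.random.default_rng(0)
x_src = rng.normal(0.0, 1.0, size=(200, 2))  # source: centred at the origin
x_tgt = rng.normal(1.0, 1.0, size=(300, 2))  # target: covariate-shifted mean
w = nn_importance_weights(x_src, x_tgt)
print(w.sum())  # total weight mass equals the number of target samples
```

Source points sitting in regions dense with target data receive large weights; points far from the target distribution get weight zero.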

Respecting Domain Relations: Hypothesis Invariance for Domain Generalization

no code implementations · 15 Oct 2020 · Ziqi Wang, Marco Loog, Jan van Gemert

In this work, we define DIRs employed by existing works in probabilistic terms and show that by learning DIRs, overly strict requirements are imposed concerning the invariance.

Domain Generalization

Black Magic in Deep Learning: How Human Skill Impacts Network Training

1 code implementation · 13 Aug 2020 · Kanav Anand, Ziqi Wang, Marco Loog, Jan van Gemert

Our study investigates the subjective human factor in comparisons of state-of-the-art results and in scientific reproducibility in deep learning.

Hyperparameter Optimization

A Brief Prehistory of Double Descent

no code implementations · 7 Apr 2020 · Marco Loog, Tom Viering, Alexander Mey, Jesse H. Krijthe, David M. J. Tax

In their thought-provoking paper [1], Belkin et al. illustrate and discuss the shape of risk curves in the context of modern high-complexity learners.

Making Learners (More) Monotone

no code implementations · 25 Nov 2019 · Tom J. Viering, Alexander Mey, Marco Loog

Learning performance can show non-monotonic behavior.

Consistency and Finite Sample Behavior of Binary Class Probability Estimation

no code implementations · 30 Aug 2019 · Alexander Mey, Marco Loog

Our main contribution is to present a way to derive finite sample L1-convergence rates of this estimator for different surrogate loss functions.

Improvability Through Semi-Supervised Learning: A Survey of Theoretical Results

no code implementations · 26 Aug 2019 · Alexander Mey, Marco Loog

In this review we gather results about the possible gains one can achieve when using semi-supervised learning as well as results about the limits of such methods.

How to Manipulate CNNs to Make Them Lie: the GradCAM Case

no code implementations · 25 Jul 2019 · Tom Viering, Ziqi Wang, Marco Loog, Elmar Eisemann

This illustrates that GradCAM cannot explain the decision of every CNN and provides a proof of concept showing that it is possible to obfuscate the inner workings of a CNN.

Minimizers of the Empirical Risk and Risk Monotonicity

1 code implementation · NeurIPS 2019 · Marco Loog, Tom Viering, Alexander Mey

Plotting a learner's average performance against the number of training samples results in a learning curve.

Density Estimation
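
The learning curve described in this abstract can be computed empirically: train a learner at increasing training-set sizes and average its test error over repeated random draws. A toy sketch with a nearest-mean classifier on two 1-D Gaussian classes (all data and names are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample(n):
    """Two 1-D Gaussian classes with unit variance and means -1 and +1."""
    y = rng.integers(0, 2, n)
    x = rng.normal(0.0, 1.0, n) + np.where(y == 1, 1.0, -1.0)
    return x, y

def avg_error(n_train, n_test=2000, reps=50):
    """Average test error of a nearest-mean classifier at one training size."""
    errs = []
    for _ in range(reps):
        while True:  # resample until both classes are represented
            xtr, ytr = sample(n_train)
            if (ytr == 0).any() and (ytr == 1).any():
                break
        xte, yte = sample(n_test)
        m0, m1 = xtr[ytr == 0].mean(), xtr[ytr == 1].mean()
        pred = (np.abs(xte - m1) < np.abs(xte - m0)).astype(int)
        errs.append(float((pred != yte).mean()))
    return float(np.mean(errs))

sizes = [4, 16, 64, 256]
curve = [avg_error(n) for n in sizes]  # the empirical learning curve
print(curve)
```

Plotting `curve` against `sizes` typically shows the error falling toward the Bayes error as more training data arrives, though, as the paper points out, such curves need not be monotone in general.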

A Distribution Dependent and Independent Complexity Analysis of Manifold Regularization

no code implementations · 14 Jun 2019 · Alexander Mey, Tom Viering, Marco Loog

Here, we derive sample complexity bounds based on pseudo-dimension for models that add a convex data dependent regularization term to a supervised learning process, as is in particular done in Manifold regularization.

General Classification

Semi-Supervised Learning, Causality and the Conditional Cluster Assumption

1 code implementation · 28 May 2019 · Julius von Kügelgen, Alexander Mey, Marco Loog, Bernhard Schölkopf

While the success of semi-supervised learning (SSL) is still not fully understood, Schölkopf et al. (2012) have established a link to the principle of independent causal mechanisms.

A review of domain adaptation without target labels

1 code implementation · 16 Jan 2019 · Wouter M. Kouw, Marco Loog

Feature-based methods revolve around mapping, projecting, and representing features such that a source classifier performs well on the target domain; inference-based methods incorporate adaptation into the parameter estimation procedure, for instance through constraints on the optimization procedure.

Domain Generalization · Unsupervised Domain Adaptation

An introduction to domain adaptation and transfer learning

no code implementations · 31 Dec 2018 · Wouter M. Kouw, Marco Loog

Domain adaptation and transfer learning are sub-fields within machine learning that are concerned with accounting for these types of changes.

Domain Adaptation · Transfer Learning

Learning an MR acquisition-invariant representation using Siamese neural networks

1 code implementation · 17 Oct 2018 · Wouter M. Kouw, Marco Loog, Wilbert Bartels, Adriënne M. Mendrik

Generalization of voxelwise classifiers is hampered by differences between MRI scanners, e.g. different acquisition protocols and field strengths.

Distance Based Source Domain Selection for Sentiment Classification

no code implementations · 28 Aug 2018 · Lex Razoux Schultz, Marco Loog, Peyman Mohajerin Esfahani

The performance of the proposed methodology is validated through an SC case study, in which our numerical experiments suggest a significant improvement in cross-domain classification error compared with a randomly selected source domain, in both a naive and an adaptive learning setting.

Classification · General Classification · +1

Semi-Generative Modelling: Covariate-Shift Adaptation with Cause and Effect Features

1 code implementation · 20 Jul 2018 · Julius von Kügelgen, Alexander Mey, Marco Loog

Current methods for covariate-shift adaptation use unlabelled data to compute importance weights or domain-invariant features, while the final model is trained on labelled data only.

Domain Adaptation

Target Robust Discriminant Analysis

1 code implementation · 21 Jun 2018 · Wouter M. Kouw, Marco Loog

In practice, the data distribution at test time often differs, to a smaller or larger extent, from that of the original training data.

Single Shot Active Learning using Pseudo Annotators

1 code implementation · 17 May 2018 · Yazhou Yang, Marco Loog

These pseudo annotators always provide uniform and random labels whenever new unlabeled samples are queried.

Active Learning

Effects of sampling skewness of the importance-weighted risk estimator on model selection

1 code implementation · 19 Apr 2018 · Wouter M. Kouw, Marco Loog

For sample selection bias settings and small sample sizes, the importance-weighted risk estimator produces overestimates for datasets in the body of the sampling distribution, i.e. the majority of cases, and large underestimates for datasets in the tail of the sampling distribution.

Model Selection · Selection bias

Supervised Classification: Quite a Brief Overview

no code implementations · 25 Oct 2017 · Marco Loog

The original problem of supervised classification considers the task of automatically assigning objects to their respective classes on the basis of numerical measurements derived from these objects.

Classification · General Classification

Robust importance-weighted cross-validation under sample selection bias

1 code implementation · 17 Oct 2017 · Wouter M. Kouw, Jesse H. Krijthe, Marco Loog

Cross-validation under sample selection bias can, in principle, be done by importance-weighting the empirical risk.

General Classification · Selection bias
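
The importance-weighting idea behind this estimator can be sketched in a few lines: losses measured under the source distribution are reweighted by the density ratio so that the average targets the shifted distribution. The sketch below assumes known Gaussian source and target densities, a simplification; in practice the density ratio must itself be estimated:

```python
import numpy as np

def gauss_pdf(x, mu, sigma):
    """Density of a univariate Gaussian N(mu, sigma^2)."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

rng = np.random.default_rng(2)
x_src = rng.normal(0.0, 1.0, 100_000)  # samples drawn under the source density
loss = (x_src - 1.0) ** 2              # squared loss of a fixed predictor at 1.0

# Density ratio p_target(x) / p_source(x) for a unit-variance mean shift 0 -> 1.
w = gauss_pdf(x_src, 1.0, 1.0) / gauss_pdf(x_src, 0.0, 1.0)

risk_source = loss.mean()        # estimates E_source[(x - 1)^2] = 2
risk_weighted = (w * loss).mean()  # estimates E_target[(x - 1)^2] = 1
print(risk_source, risk_weighted)
```

The unweighted average stays near 2 (variance plus squared bias under the source), while the weighted average recovers the target-domain risk of 1; the skewness issue studied in the paper concerns how this weighted estimate behaves at much smaller sample sizes.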

MR Acquisition-Invariant Representation Learning

1 code implementation · 22 Sep 2017 · Wouter M. Kouw, Marco Loog, Lambertus W. Bartels, Adriënne M. Mendrik

Due to this acquisition related variation, classifiers trained on data from a specific scanner fail or under-perform when applied to data that was acquired differently.

Classification · General Classification · +1

Object-Extent Pooling for Weakly Supervised Single-Shot Localization

no code implementations · 19 Jul 2017 · Amogh Gudi, Nicolai van Rosmalen, Marco Loog, Jan van Gemert

To facilitate this, we propose a novel global pooling technique called Spatial Pyramid Averaged Max (SPAM) pooling for training this CAM-based network for object extent localisation with only weak image-level supervision.

Region Proposal · Weakly-Supervised Object Localization

On Measuring and Quantifying Performance: Error Rates, Surrogate Loss, and an Example in SSL

no code implementations · 13 Jul 2017 · Marco Loog, Jesse H. Krijthe, Are C. Jensen

In various approaches to learning, notably in domain adaptation, active learning, learning under covariate shift, semi-supervised learning, learning with concept drift, and the like, one often wants to compare a baseline classifier to one or more advanced (or at least different) strategies.

Active Learning · Domain Adaptation · +1

Scale-Regularized Filter Learning

no code implementations · 10 Jul 2017 · Marco Loog, François Lauze

We start out by demonstrating that an elementary learning task, corresponding to the training of a single linear neuron in a convolutional neural network, can be solved for feature spaces of very high dimensionality.

Target contrastive pessimistic risk for robust domain adaptation

1 code implementation · 25 Jun 2017 · Wouter M. Kouw, Marco Loog

In domain adaptation, classifiers with information from a source domain adapt to generalize to a target domain.

Domain Adaptation · Selection bias

A Variance Maximization Criterion for Active Learning

1 code implementation · 23 Jun 2017 · Yazhou Yang, Marco Loog

We propose a novel approach which we refer to as maximizing variance for active learning or MVAL for short.

Active Learning

Nuclear Discrepancy for Active Learning

no code implementations · 8 Jun 2017 · Tom J. Viering, Jesse H. Krijthe, Marco Loog

In particular we show the relation between the bound of the state-of-the-art Maximum Mean Discrepancy (MMD) active learner, the bound of the Discrepancy, and a new and looser bound that we refer to as the Nuclear Discrepancy bound.

Active Learning · Generalization Bounds

Label Stability in Multiple Instance Learning

no code implementations · 15 Mar 2017 · Veronika Cheplygina, Lauge Sørensen, David M. J. Tax, Marleen de Bruijne, Marco Loog

We address the problem of \emph{instance label stability} in multiple instance learning (MIL) classifiers.

Multiple Instance Learning

Active Learning Using Uncertainty Information

1 code implementation · 27 Feb 2017 · Yazhou Yang, Marco Loog

Many active learning methods belong to the retraining-based approaches, which select one unlabeled instance, add it to the training set with each of its possible labels in turn, retrain the classification model, and evaluate the criterion on which the selection is based.

Active Learning

Reproducible Pattern Recognition Research: The Case of Optimistic SSL

no code implementations · 27 Dec 2016 · Jesse H. Krijthe, Marco Loog

In this paper, we discuss the approaches we took and trade-offs involved in making a paper on a conceptual topic in pattern recognition research fully reproducible.

The Peaking Phenomenon in Semi-supervised Learning

no code implementations · 17 Oct 2016 · Jesse H. Krijthe, Marco Loog

For the supervised least squares classifier, when the number of training objects is smaller than the dimensionality of the data, adding more data to the training set may first increase the error rate before decreasing it.

Optimistic Semi-supervised Least Squares Classification

no code implementations · 12 Oct 2016 · Jesse H. Krijthe, Marco Loog

The goal of semi-supervised learning is to improve supervised classifiers by using additional unlabeled training examples.

Classification · General Classification

On Regularization Parameter Estimation under Covariate Shift

1 code implementation · 31 Jul 2016 · Wouter M. Kouw, Marco Loog

This paper identifies a problem with the usual procedure for L2-regularization parameter estimation in a domain adaptation setting.

Domain Adaptation · L2 Regularization

Template Matching via Densities on the Roto-Translation Group

no code implementations · 10 Mar 2016 · Erik J. Bekkers, Marco Loog, Bart M. ter Haar Romeny, Remco Duits

We propose a template matching method for the detection of 2D image objects that are characterized by orientation patterns.

Template Matching

Projected Estimators for Robust Semi-supervised Classification

no code implementations · 25 Feb 2016 · Jesse H. Krijthe, Marco Loog

For semi-supervised techniques to be applied safely in practice we at least want methods to outperform their supervised counterparts.

Classification · General Classification

Robust Semi-supervised Least Squares Classification by Implicit Constraints

no code implementations · 27 Dec 2015 · Jesse H. Krijthe, Marco Loog

Experimental results show that also in the general multidimensional case performance improvements can be expected, both in terms of the squared loss that is intrinsic to the classifier, as well as in terms of the expected classification error.

Classification · General Classification

Feature-Level Domain Adaptation

no code implementations · 15 Dec 2015 · Wouter M. Kouw, Jesse H. Krijthe, Marco Loog, Laurens J. P. van der Maaten

Our empirical evaluation of FLDA focuses on problems comprising binary and count data in which the transfer can be naturally modeled via a dropout distribution, which allows the classifier to adapt to differences in the marginal probability of features in the source and the target domain.

Domain Adaptation

Contrastive Pessimistic Likelihood Estimation for Semi-Supervised Classification

1 code implementation · 1 Mar 2015 · Marco Loog

The latter refers to the fact that our estimates are conservative and therefore resilient to whatever form the true labeling of the unlabeled data takes on.

Classification · General Classification

Implicitly Constrained Semi-Supervised Linear Discriminant Analysis

no code implementations · 17 Nov 2014 · Jesse H. Krijthe, Marco Loog

Using any one of these methods is not guaranteed to outperform the supervised classifier which does not take the additional unlabeled data into account.

On Classification with Bags, Groups and Sets

no code implementations · 2 Jun 2014 · Veronika Cheplygina, David M. J. Tax, Marco Loog

To better deal with such problems, several extensions of supervised learning have been proposed, where either training and/or test objects are sets of feature vectors.

Classification · General Classification

Dissimilarity-based Ensembles for Multiple Instance Learning

no code implementations · 6 Feb 2014 · Veronika Cheplygina, David M. J. Tax, Marco Loog

In multiple instance learning, objects are sets (bags) of feature vectors (instances) rather than individual feature vectors.

Multiple Instance Learning

Quantile Representation for Indirect Immunofluorescence Image Classification

no code implementations · 6 Feb 2014 · David M. J. Tax, Veronika Cheplygina, Marco Loog

Considering one whole slide as a collection (a bag) of feature vectors, however, poses the problem of how to handle this bag.

Classification · General Classification · +1

Multiple Instance Learning with Bag Dissimilarities

no code implementations · 22 Sep 2013 · Veronika Cheplygina, David M. J. Tax, Marco Loog

Multiple instance learning (MIL) is concerned with learning from sets (bags) of objects (instances), where the individual instance labels are ambiguous.

Multiple Instance Learning
