Search Results for author: Diego Ortego

Found 14 papers, 9 papers with code

LEA: Improving Sentence Similarity Robustness to Typos Using Lexical Attention Bias

1 code implementation • 6 Jul 2023 Mario Almagro, Emilio Almazán, Diego Ortego, David Jiménez

We show that this is also the case for sentence similarity, a fundamental task in multiple domains, e.g. matching, retrieval or paraphrasing.

Data Augmentation · Natural Language Inference +2

Domain Randomization for Object Counting

1 code implementation • 17 Feb 2022 Enric Moreu, Kevin McGuinness, Diego Ortego, Noel E. O'Connor

We introduce a domain randomization approach for object counting based on synthetic datasets that are quick and inexpensive to generate.

Object · Object Counting

How Important is Importance Sampling for Deep Budgeted Training?

1 code implementation • 27 Oct 2021 Eric Arazo, Diego Ortego, Paul Albert, Noel E. O'Connor, Kevin McGuinness

We suggest that, given a specific budget, the best course of action is to disregard importance sampling and introduce adequate data augmentation; e.g. when reducing the budget to 30% in CIFAR-10/100, RICAP data augmentation maintains accuracy, while importance sampling does not.

Data Augmentation
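RICAP (random image cropping and patching), the augmentation named in the abstract above, stitches crops of four images into one training sample and mixes the labels in proportion to patch area. A minimal NumPy sketch of that published idea; the function name, NCHW batch layout, and the Beta(0.3, 0.3) boundary prior are illustrative choices, not this paper's code:

```python
import numpy as np

def ricap(images, labels, num_classes, rng=None):
    """Sketch of RICAP: patch four randomly cropped images into one
    canvas and mix their labels by patch area (illustrative, not the
    paper's implementation)."""
    if rng is None:
        rng = np.random.default_rng()
    n, c, h, w = images.shape
    # A random boundary point splits the canvas into four regions.
    bh = int(rng.beta(0.3, 0.3) * h)
    bw = int(rng.beta(0.3, 0.3) * w)
    sizes = [(bh, bw), (bh, w - bw), (h - bh, bw), (h - bh, w - bw)]
    regions = [(slice(0, bh), slice(0, bw)),
               (slice(0, bh), slice(bw, w)),
               (slice(bh, h), slice(0, bw)),
               (slice(bh, h), slice(bw, w))]
    out = np.empty_like(images)
    mixed = np.zeros((n, num_classes))
    for (ph, pw), reg in zip(sizes, regions):
        idx = rng.permutation(n)           # which images fill this patch
        y0 = rng.integers(0, h - ph + 1)   # random crop origin
        x0 = rng.integers(0, w - pw + 1)
        out[:, :, reg[0], reg[1]] = images[idx][:, :, y0:y0 + ph, x0:x0 + pw]
        onehot = np.eye(num_classes)[labels[idx]]
        mixed += onehot * (ph * pw) / (h * w)   # area-weighted label mix
    return out, mixed
```

The four patch areas sum to the full canvas, so each mixed label vector sums to 1.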

Addressing out-of-distribution label noise in webly-labelled data

no code implementations • 26 Oct 2021 Paul Albert, Diego Ortego, Eric Arazo, Noel O'Connor, Kevin McGuinness

We propose a simple solution to bridge the gap with a fully clean dataset using Dynamic Softening of Out-of-distribution Samples (DSOS), which we design on corrupted versions of the CIFAR-100 dataset, and compare against state-of-the-art algorithms on the web-noise-perturbed MiniImageNet and Stanford datasets and on real label noise datasets: WebVision 1.0 and Clothing1M.

Image Classification

The Importance of Importance Sampling for Deep Budgeted Training

no code implementations1 Jan 2021 Eric Arazo, Diego Ortego, Paul Albert, Noel O'Connor, Kevin McGuinness

For example, when training on CIFAR-10/100 with 30% of the full training budget, a uniform sampling strategy with certain data augmentation surpasses the performance of 100%-budget models trained with standard data augmentation.

Data Augmentation

Multi-Objective Interpolation Training for Robustness to Label Noise

1 code implementation • CVPR 2021 Diego Ortego, Eric Arazo, Paul Albert, Noel E. O'Connor, Kevin McGuinness

We further propose a novel label noise detection method that exploits the robust feature representations learned via contrastive learning to estimate per-sample soft-labels whose disagreements with the original labels accurately identify noisy samples.

Contrastive Learning · Image Classification +3
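The detection idea described above (per-sample soft labels whose disagreement with the given labels flags noise) can be sketched with a hypothetical k-nearest-neighbour estimator in the learned feature space; the paper's actual estimator differs in detail, and all names here are illustrative:

```python
import numpy as np

def detect_noisy(features, labels, num_classes, k=3):
    """Hypothetical disagreement-based noise detection: a sample's
    soft label is the average of its k nearest neighbours' one-hot
    labels in feature space; flag it as noisy when the soft label's
    argmax disagrees with the given label."""
    # Cosine similarity between L2-normalised features.
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T
    np.fill_diagonal(sim, -np.inf)          # exclude the sample itself
    neighbours = np.argsort(-sim, axis=1)[:, :k]
    onehot = np.eye(num_classes)[labels]
    soft = onehot[neighbours].mean(axis=1)  # per-sample soft labels
    return soft.argmax(axis=1) != labels    # True -> flagged as noisy
```

The point of the contrastive pretraining in the paper is precisely that such neighbourhoods stay meaningful even when the labels themselves are corrupted.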

Unsupervised Contrastive Learning of Sound Event Representations

1 code implementation • 15 Nov 2020 Eduardo Fonseca, Diego Ortego, Kevin McGuinness, Noel E. O'Connor, Xavier Serra

Self-supervised representation learning can mitigate the limitations in recognition tasks with little manually labeled data but abundant unlabeled data, a common scenario in sound event research.

Contrastive Learning · Representation Learning

Reliable Label Bootstrapping for Semi-Supervised Learning

1 code implementation • 23 Jul 2020 Paul Albert, Diego Ortego, Eric Arazo, Noel E. O'Connor, Kevin McGuinness

We propose Reliable Label Bootstrapping (ReLaB), an unsupervised preprocessing algorithm which improves the performance of semi-supervised algorithms in extremely low supervision settings.

Self-Supervised Learning

Toward the Automatic Retrieval and Annotation of Outsider Art images: A Preliminary Statement

no code implementations • LREC 2020 John Roberto, Diego Ortego, Brian Davis

The aim of this position paper is to establish an initial approach to the automatic classification of digital images about the Outsider Art style of painting.

General Classification · Retrieval

Towards Robust Learning with Different Label Noise Distributions

1 code implementation • 18 Dec 2019 Diego Ortego, Eric Arazo, Paul Albert, Noel E. O'Connor, Kevin McGuinness

However, we show that different noise distributions make the application of this trick less straightforward and propose to continuously relabel all images to reveal a discriminative loss against multiple distributions.

Memorization · Representation Learning

Pseudo-Labeling and Confirmation Bias in Deep Semi-Supervised Learning

4 code implementations • 8 Aug 2019 Eric Arazo, Diego Ortego, Paul Albert, Noel E. O'Connor, Kevin McGuinness

In the context of image classification, recent advances to learn from unlabeled samples are mainly focused on consistency regularization methods that encourage invariant predictions for different perturbations of unlabeled samples.

Image Classification
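The pseudo-labeling this paper studies can be illustrated in its most generic, textbook form: keep only confidently predicted unlabeled samples and treat the argmax class as their label. This is a confidence-threshold sketch for orientation, not the paper's soft-label recipe:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the class axis."""
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def pseudo_label(logits, threshold=0.95):
    """Generic pseudo-labeling sketch: return hard argmax labels for
    unlabeled samples, plus a mask keeping only predictions whose top
    class probability clears the confidence threshold."""
    probs = softmax(logits)
    keep = probs.max(axis=1) >= threshold
    return probs.argmax(axis=1), keep
```

Training only on such self-selected labels is exactly where the confirmation bias studied in the paper arises: confident mistakes get reinforced.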

Unsupervised Label Noise Modeling and Loss Correction

2 code implementations • 25 Apr 2019 Eric Arazo, Diego Ortego, Paul Albert, Noel E. O'Connor, Kevin McGuinness

Specifically, we propose a beta mixture to estimate this probability and correct the loss by relying on the network prediction (the so-called bootstrapping loss).

Image Classification
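The bootstrapping loss mentioned above mixes the given label with the network's own prediction as the cross-entropy target. A hedged NumPy sketch of the soft variant; here the per-sample weight w is taken as an input and stands in for the clean-label probability that the proposed beta mixture would supply:

```python
import numpy as np

def bootstrap_ce(probs, onehot, w, eps=1e-12):
    """Soft bootstrapping cross-entropy sketch: the target is
    w * y + (1 - w) * p, mixing the given one-hot label y with the
    network prediction p, weighted per sample by w (in the paper,
    w would come from a beta mixture fitted to per-sample losses)."""
    target = w[:, None] * onehot + (1.0 - w[:, None]) * probs
    return -(target * np.log(probs + eps)).sum(axis=1)
```

With w = 1 this reduces to standard cross-entropy; with w = 0 the sample is supervised only by the network's own prediction.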

On guiding video object segmentation

no code implementations • 25 Apr 2019 Diego Ortego, Kevin McGuinness, Juan C. SanMiguel, Eric Arazo, José M. Martínez, Noel E. O'Connor

This guiding process relies on foreground masks from independent algorithms (i.e. state-of-the-art algorithms) to implement an attention mechanism that incorporates the spatial location of foreground and background to compute their separated representations.

Foreground Segmentation · Object +5
