Search Results for author: Paul Albert

Found 16 papers, 10 papers with code

Energy-Efficient Uncertainty-Aware Biomass Composition Prediction at the Edge

no code implementations • 17 Apr 2024 • Muhammad Zawish, Paul Albert, Flavio Esposito, Steven Davy, Lizy Abraham

We report that although pruned networks are accurate on controlled, high-quality images of the grass, they struggle to generalize to real-world smartphone images that are blurry or taken from challenging angles.
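
For context, a minimal magnitude-pruning sketch using PyTorch's built-in pruning utilities; the paper's energy- and uncertainty-aware pipeline is more involved, so treat this purely as an illustration of the kind of pruning being evaluated:

```python
import torch.nn as nn
import torch.nn.utils.prune as prune
from torchvision import models

model = models.resnet18()
for module in model.modules():
    if isinstance(module, nn.Conv2d):
        # Zero the 50% smallest-magnitude weights in each conv layer.
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # make the sparsity permanent
```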

Is your noise correction noisy? PLS: Robustness to label noise with two stage detection

2 code implementations • 10 Oct 2022 • Paul Albert, Eric Arazo, Tarun Krishna, Noel E. O'Connor, Kevin McGuinness

Experiments demonstrate the state-of-the-art performance of our Pseudo-Loss Selection (PLS) algorithm on a variety of benchmark datasets including curated data synthetically corrupted with in-distribution and out-of-distribution noise, and two real world web noise datasets.

Pseudo Label
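
A rough sketch of the two-stage idea, under assumed details (the GMM-based first stage, the threshold tau, and the median cutoff are illustrative, not the paper's exact recipe): first flag likely-noisy samples from their loss, then trust a corrected pseudo-label only when the loss against that pseudo-label is itself low.

```python
import torch
import torch.nn.functional as F
from sklearn.mixture import GaussianMixture

def two_stage_selection(logits, labels, tau=0.5):
    # Stage 1: small cross-entropy losses are likely clean; a two-component
    # GMM on the loss distribution yields a per-sample clean probability.
    ce = F.cross_entropy(logits, labels, reduction="none")
    losses = ce.detach().cpu().numpy().reshape(-1, 1)
    gmm = GaussianMixture(n_components=2).fit(losses)
    clean_comp = gmm.means_.argmin()
    p_clean = torch.tensor(gmm.predict_proba(losses)[:, clean_comp])

    # Stage 2: for samples flagged as noisy, take the network prediction as a
    # pseudo-label and measure the loss *against that pseudo-label*; only
    # corrections the network itself fits well are kept.
    pseudo = logits.argmax(dim=1)
    pseudo_loss = F.cross_entropy(logits, pseudo, reduction="none")
    keep_clean = p_clean >= tau
    keep_corrected = (~keep_clean) & (pseudo_loss < pseudo_loss.median()).cpu()
    return keep_clean, keep_corrected, pseudo
```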

Embedding contrastive unsupervised features to cluster in- and out-of-distribution noise in corrupted image datasets

1 code implementation • 4 Jul 2022 • Paul Albert, Eric Arazo, Noel E. O'Connor, Kevin McGuinness

Previous works have shown these noisy samples to be a mixture of in-distribution (ID) samples, which are assigned to the incorrect category but present visual semantics similar to other classes in the dataset, and out-of-distribution (OOD) images, which share no semantic correlation with any category in the dataset.

Clustering, Contrastive Learning, +2
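
An illustrative way (not the paper's exact algorithm) to act on that ID/OOD split with contrastive features: a sample close to some class prototype other than its labelled one looks like ID noise, while a sample far from every prototype looks OOD. The threshold and prototype construction are assumptions.

```python
import torch
import torch.nn.functional as F

def split_id_ood(features, labels, num_classes, ood_thresh=0.3):
    # Assumes every class has at least one sample; features are contrastive
    # embeddings, L2-normalised so dot products are cosine similarities.
    feats = F.normalize(features, dim=1)
    protos = torch.stack([feats[labels == c].mean(dim=0) for c in range(num_classes)])
    protos = F.normalize(protos, dim=1)
    sims = feats @ protos.T                 # similarity to every class prototype
    best_sim, best_class = sims.max(dim=1)
    is_ood = best_sim < ood_thresh          # far from all prototypes -> OOD
    is_id_noise = (~is_ood) & (best_class != labels)
    return is_id_noise, is_ood, best_class
```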

Unsupervised domain adaptation and super resolution on drone images for autonomous dry herbage biomass estimation

1 code implementation • 18 Apr 2022 • Paul Albert, Mohamed Saadeldin, Badri Narayanan, Jaime Fernandez, Brian Mac Namee, Deirdre Hennessy, Noel E. O'Connor, Kevin McGuinness

In this context, deep learning algorithms offer a tempting alternative to the usual means of sward composition estimation, which involves the destructive process of cutting a sample from the herbage field and sorting all plant species in it by hand.

Super-Resolution, Unsupervised Domain Adaptation

How Important is Importance Sampling for Deep Budgeted Training?

1 code implementation • 27 Oct 2021 • Eric Arazo, Diego Ortego, Paul Albert, Noel E. O'Connor, Kevin McGuinness

We suggest that, given a specific budget, the best course of action is to disregard importance sampling and introduce adequate data augmentation; e.g., when reducing the budget to 30% on CIFAR-10/100, RICAP data augmentation maintains accuracy, while importance sampling does not.

Data Augmentation
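
Since RICAP is the augmentation named here, a minimal sketch of it (Takahashi et al.'s random image cropping and patching), assuming a batch tensor x of shape (B, C, H, W) and integer labels y:

```python
import numpy as np
import torch

def ricap(x, y, beta=0.3):
    B, _, H, W = x.shape
    # Sample one boundary point; it splits the canvas into four patch sizes.
    w = int(np.round(W * np.random.beta(beta, beta)))
    h = int(np.round(H * np.random.beta(beta, beta)))
    widths, heights = [w, W - w, w, W - w], [h, h, H - h, H - h]
    patches, labels, weights = [], [], []
    for k in range(4):
        idx = torch.randperm(B)             # each patch comes from a shuffled batch
        x0 = np.random.randint(0, W - widths[k] + 1)
        y0 = np.random.randint(0, H - heights[k] + 1)
        patches.append(x[idx, :, y0:y0 + heights[k], x0:x0 + widths[k]])
        labels.append(y[idx])
        weights.append(widths[k] * heights[k] / (W * H))
    top = torch.cat(patches[:2], dim=3)     # left|right of the upper strip
    bottom = torch.cat(patches[2:], dim=3)
    mixed = torch.cat([top, bottom], dim=2)
    # Train with sum_k weights[k] * cross_entropy(logits, labels[k]).
    return mixed, labels, weights
```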

Semi-supervised dry herbage mass estimation using automatic data and synthetic images

no code implementations • 26 Oct 2021 • Paul Albert, Mohamed Saadeldin, Badri Narayanan, Brian Mac Namee, Deirdre Hennessy, Aisling O'Connor, Noel O'Connor, Kevin McGuinness

Deep learning for computer vision is a powerful tool in this context, as it can accurately estimate the dry biomass of a herbage parcel from images of the grass canopy taken with a portable device.

Semantic Segmentation, Synthetic Data Generation

Addressing out-of-distribution label noise in webly-labelled data

no code implementations • 26 Oct 2021 • Paul Albert, Diego Ortego, Eric Arazo, Noel O'Connor, Kevin McGuinness

We propose a simple solution to bridge the gap with a fully clean dataset using Dynamic Softening of Out-of-distribution Samples (DSOS), which we design on corrupted versions of the CIFAR-100 dataset, and compare against state-of-the-art algorithms on the web-noise-perturbed MiniImageNet and Stanford datasets and on real label noise datasets: WebVision 1.0 and Clothing1M.

Image Classification
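
A hedged sketch of what "softening" out-of-distribution samples can look like: per-sample targets interpolate from the possibly-wrong one-hot label toward a flat distribution as an OOD score grows. The scoring is deliberately left abstract here; DSOS derives it differently.

```python
import torch
import torch.nn.functional as F

def softened_targets(labels, ood_score, num_classes):
    # ood_score in [0, 1]: 0 = trusted label, 1 = likely out-of-distribution.
    one_hot = F.one_hot(labels, num_classes).float()
    uniform = torch.full_like(one_hot, 1.0 / num_classes)
    s = ood_score.clamp(0, 1).unsqueeze(1)
    return (1 - s) * one_hot + s * uniform   # per-sample soft target

def soft_cross_entropy(logits, targets):
    return -(targets * logits.log_softmax(dim=1)).sum(dim=1).mean()
```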

Extracting Pasture Phenotype and Biomass Percentages using Weakly Supervised Multi-target Deep Learning on a Small Dataset

no code implementations • 8 Jan 2021 • Badri Narayanan, Mohamed Saadeldin, Paul Albert, Kevin McGuinness, Brian Mac Namee

In this paper, we demonstrate that applying data augmentation and transfer learning is effective in predicting multi-target biomass percentages of different plant species, even with a small training dataset.

Data Augmentation, Transfer Learning
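
An illustrative transfer-learning setup for multi-target percentage prediction; the backbone, head size, and softmax constraint are assumptions, not the paper's exact configuration. An ImageNet-pretrained network gets a small head whose outputs are per-species fractions summing to one.

```python
import torch.nn as nn
from torchvision import models

class BiomassRegressor(nn.Module):
    def __init__(self, num_targets=4):
        super().__init__()
        # Transfer learning: reuse ImageNet features, replace the classifier.
        backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
        backbone.fc = nn.Identity()
        self.backbone = backbone
        self.head = nn.Linear(512, num_targets)

    def forward(self, x):
        # Softmax constrains predictions to valid percentages (sum to 1);
        # train against ground-truth fractions with an L1 or L2 loss.
        return self.head(self.backbone(x)).softmax(dim=1)
```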

The Importance of Importance Sampling for Deep Budgeted Training

no code implementations • 1 Jan 2021 • Eric Arazo, Diego Ortego, Paul Albert, Noel O'Connor, Kevin McGuinness

For example, when training on CIFAR-10/100 with 30% of the full training budget, a uniform sampling strategy combined with certain data augmentations surpasses the performance of 100%-budget models trained with standard data augmentation.

Data Augmentation

Multi-Objective Interpolation Training for Robustness to Label Noise

1 code implementation • CVPR 2021 • Diego Ortego, Eric Arazo, Paul Albert, Noel E. O'Connor, Kevin McGuinness

We further propose a novel label noise detection method that exploits the robust feature representations learned via contrastive learning to estimate per-sample soft-labels whose disagreements with the original labels accurately identify noisy samples.

Contrastive Learning, Image Classification, +3
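
A sketch of the disagreement idea: build a per-sample soft label by voting among nearest neighbours in a (contrastively learned) feature space, then flag samples whose soft label disagrees with the dataset label. The value of k and the cosine metric are illustrative choices.

```python
import torch
import torch.nn.functional as F

def knn_soft_labels(features, labels, num_classes, k=10):
    feats = F.normalize(features, dim=1)
    sims = feats @ feats.T
    sims.fill_diagonal_(-1.0)                # exclude each sample from its own vote
    nn_idx = sims.topk(k, dim=1).indices     # k nearest neighbours per sample
    votes = F.one_hot(labels[nn_idx], num_classes).float().mean(dim=1)
    return votes                             # per-sample soft label

# Toy usage: samples whose neighbours vote for another class are likely noisy.
features = torch.randn(100, 128)
labels = torch.randint(0, 10, (100,))
soft = knn_soft_labels(features, labels, num_classes=10)
likely_noisy = soft.argmax(dim=1) != labels
```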

Reliable Label Bootstrapping for Semi-Supervised Learning

1 code implementation • 23 Jul 2020 • Paul Albert, Diego Ortego, Eric Arazo, Noel E. O'Connor, Kevin McGuinness

We propose Reliable Label Bootstrapping (ReLaB), an unsupervised preprocessing algorithm which improves the performance of semi-supervised algorithms in extremely low supervision settings.

Self-Supervised Learning
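
A rough analogue of bootstrapping labels from a handful of seeds: propagate the few known labels over a graph built on self-supervised features, here with scikit-learn's LabelSpreading (ReLaB's actual procedure differs). The data below are stand-ins.

```python
import numpy as np
from sklearn.semi_supervised import LabelSpreading

features = np.random.randn(500, 64)         # stand-in for self-supervised embeddings
labels = np.full(500, -1)                   # -1 marks unlabelled samples
labels[:10] = np.arange(10)                 # e.g. one seed label per class

spreader = LabelSpreading(kernel="knn", n_neighbors=20)
spreader.fit(features, labels)
bootstrapped = spreader.transduction_       # expanded label set for SSL training
```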

Towards Robust Learning with Different Label Noise Distributions

1 code implementation • 18 Dec 2019 • Diego Ortego, Eric Arazo, Paul Albert, Noel E. O'Connor, Kevin McGuinness

However, we show that different noise distributions make the application of this trick less straightforward and propose to continuously relabel all images to reveal a discriminative loss against multiple distributions.

Memorization, Representation Learning
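
One hedged reading of "continuously relabel all images" in code: running soft targets are periodically moved toward the network's current predictions, keeping the loss discriminative under different noise distributions. The momentum value and update rule are illustrative, not the paper's exact scheme.

```python
import torch

@torch.no_grad()
def update_soft_targets(soft_targets, logits, indices, momentum=0.9):
    # soft_targets: (N, C) running targets for the whole dataset;
    # indices: dataset indices of the current batch.
    probs = logits.softmax(dim=1)
    soft_targets[indices] = momentum * soft_targets[indices] + (1 - momentum) * probs
    return soft_targets
```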

Pseudo-Labeling and Confirmation Bias in Deep Semi-Supervised Learning

4 code implementations • 8 Aug 2019 • Eric Arazo, Diego Ortego, Paul Albert, Noel E. O'Connor, Kevin McGuinness

In the context of image classification, recent advances to learn from unlabeled samples are mainly focused on consistency regularization methods that encourage invariant predictions for different perturbations of unlabeled samples.

Image Classification
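
The remedy this paper pairs with pseudo-labelling against confirmation bias is mixup: training on convex combinations of inputs and of their soft targets keeps the network from overcommitting to its own wrong pseudo-labels. A minimal sketch with an illustrative alpha:

```python
import numpy as np
import torch

def mixup(x, targets, alpha=1.0):
    # targets are soft labels (one-hot or pseudo-label distributions), float.
    lam = np.random.beta(alpha, alpha)
    idx = torch.randperm(x.size(0))
    mixed_x = lam * x + (1 - lam) * x[idx]
    mixed_t = lam * targets + (1 - lam) * targets[idx]
    return mixed_x, mixed_t
```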

Unsupervised Label Noise Modeling and Loss Correction

2 code implementations • 25 Apr 2019 • Eric Arazo, Diego Ortego, Paul Albert, Noel E. O'Connor, Kevin McGuinness

Specifically, we propose a beta mixture to estimate this probability and correct the loss by relying on the network prediction (the so-called bootstrapping loss).

Image Classification
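
The abstract names the core mechanism directly, so here is a compact sketch: per-sample losses rescaled to (0, 1) are fitted with a two-component beta mixture via EM, and the posterior of the high-loss component serves as the probability that a label is noisy, which can then weight a bootstrapping (self-prediction) term. The moment-matching updates and initialisation are illustrative.

```python
import numpy as np
from scipy.stats import beta as beta_dist

def beta_mixture_noise_prob(losses, iters=10, eps=1e-4):
    # losses assumed rescaled to (0, 1); clip away the exact endpoints.
    x = np.clip(losses, eps, 1 - eps)
    a, b = np.array([2.0, 4.0]), np.array([4.0, 2.0])   # low-/high-loss inits
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each beta component for each sample.
        pdf = np.stack([w[k] * beta_dist.pdf(x, a[k], b[k]) for k in range(2)])
        r = pdf / pdf.sum(axis=0, keepdims=True)
        # M-step: weighted method-of-moments update of each component.
        for k in range(2):
            m = np.average(x, weights=r[k])
            v = np.average((x - m) ** 2, weights=r[k]) + eps
            common = max(m * (1 - m) / v - 1, eps)
            a[k], b[k] = m * common, (1 - m) * common
        w = r.mean(axis=1)
    noisy = int(np.argmax(a / (a + b)))      # component with the larger mean loss
    pdf = np.stack([w[k] * beta_dist.pdf(x, a[k], b[k]) for k in range(2)])
    return pdf[noisy] / pdf.sum(axis=0)      # per-sample P(label is noisy)
```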
