Learning with noisy labels

119 papers with code • 18 benchmarks • 13 datasets

Learning with noisy labels refers to the setting in which the labels, which would otherwise come from a "clean" distribution, have been corrupted, for example by an adversary who intentionally flips them. The same framework can also be used to cast learning from only positive and unlabeled data.
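
For concreteness, here is a minimal sketch of how symmetric (class-independent) label noise is typically simulated in benchmark experiments; the function name `add_symmetric_noise` and its parameters are illustrative, not taken from any of the papers below:

```python
import numpy as np

def add_symmetric_noise(labels, num_classes, noise_rate, seed=0):
    """Flip each label to a uniformly random *other* class with
    probability `noise_rate` (symmetric, class-independent noise)."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels).copy()
    flip = rng.random(len(labels)) < noise_rate
    # Offsets in 1..num_classes-1 guarantee a flipped label changes class.
    offsets = rng.integers(1, num_classes, size=flip.sum())
    labels[flip] = (labels[flip] + offsets) % num_classes
    return labels

# Example: corrupt 40% of CIFAR-10-style labels.
clean = np.random.randint(0, 10, size=50_000)
noisy = add_symmetric_noise(clean, num_classes=10, noise_rate=0.4)
print((clean != noisy).mean())  # roughly 0.4
```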

Most implemented papers

Dimensionality-Driven Learning with Noisy Labels

xingjunm/dimensionality-driven-learning ICML 2018

Datasets with significant proportions of noisy (incorrect) class labels present challenges for training accurate Deep Neural Networks (DNNs).

L_DMI: An Information-theoretic Noise-robust Loss Function

Newbeeer/L_DMI 8 Sep 2019

To the best of our knowledge, $\mathcal{L}_{DMI}$ is the first loss function that is provably robust to instance-independent label noise, regardless of noise pattern, and it can be applied to any existing classification neural network straightforwardly, without any auxiliary information.
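
As a rough illustration of the determinant-based mutual information idea, here is a hedged PyTorch sketch: the loss is the negative log-determinant of the empirical joint distribution between the classifier's softmax outputs and the noisy labels. The function name and the 1e-8 stabilizer are my own choices, not taken from the Newbeeer/L_DMI code:

```python
import torch
import torch.nn.functional as F

def dmi_loss(logits, targets, num_classes):
    """-log |det(U)|, where U is the empirical joint distribution
    between softmax predictions and the (possibly noisy) labels."""
    probs = F.softmax(logits, dim=1)                  # (N, C)
    onehot = F.one_hot(targets, num_classes).float()  # (N, C)
    U = probs.t() @ onehot / logits.shape[0]          # (C, C) joint matrix
    return -torch.log(torch.abs(torch.det(U)) + 1e-8)  # stabilizer is mine

logits = torch.randn(64, 10, requires_grad=True)
labels = torch.randint(0, 10, (64,))
dmi_loss(logits, labels, num_classes=10).backward()
```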

Peer Loss Functions: Learning from Noisy Labels without Knowing Noise Rates

weijiaheng/Multi-class-Peer-Loss-functions ICML 2020

In this work, we introduce a new family of loss functions that we name as peer loss functions, which enables learning from noisy labels and does not require a priori specification of the noise rates.
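
A minimal sketch of the peer-loss idea, assuming its commonly described form: the usual loss on matched (prediction, label) pairs minus the loss on independently shuffled "peer" pairs, which penalizes blindly fitting the noisy labels. The `alpha` weight is an illustrative knob, not necessarily the paper's exact parameterization:

```python
import torch
import torch.nn.functional as F

def peer_loss(logits, targets, alpha=1.0):
    """Cross-entropy on matched pairs minus cross-entropy on
    independently shuffled (prediction, label) "peer" pairs."""
    ce = F.cross_entropy(logits, targets)
    # Sample peer predictions and peer labels independently.
    idx1 = torch.randperm(logits.shape[0])
    idx2 = torch.randperm(logits.shape[0])
    peer = F.cross_entropy(logits[idx1], targets[idx2])
    return ce - alpha * peer  # alpha is an illustrative weight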

DivideMix: Learning with Noisy Labels as Semi-supervised Learning

LiJunnan1992/DivideMix ICLR 2020

Two prominent directions include learning with noisy labels and semi-supervised learning by exploiting unlabeled data.
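
DivideMix's first step ("co-divide") fits a two-component Gaussian mixture to per-sample training losses and treats the low-loss component as probably-clean labeled data, handling the rest as unlabeled data for semi-supervised training. A sketch of that split using scikit-learn, with hyperparameters that are illustrative rather than the repo's exact settings:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def split_clean_noisy(per_sample_losses, threshold=0.5):
    """Fit a 2-component GMM to training losses; samples assigned to
    the low-mean (low-loss) component are treated as probably clean."""
    losses = np.asarray(per_sample_losses).reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, reg_covar=5e-4).fit(losses)
    clean_component = int(np.argmin(gmm.means_.ravel()))
    p_clean = gmm.predict_proba(losses)[:, clean_component]
    return p_clean > threshold  # True = keep label, False = treat as unlabeled
```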

Combating noisy labels by agreement: A joint training method with co-regularization

hongxin001/JoCoR CVPR 2020

The state-of-the-art approaches "Decoupling" and "Co-teaching+" claim that the "disagreement" strategy is crucial for alleviating the problem of learning with noisy labels.
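
A hedged sketch of the joint-training idea: two networks share one per-example loss combining their cross-entropies with a symmetric-KL co-regularization term, and only the small-loss fraction of each batch is used for the update. The weights and selection ratio below are illustrative, not JoCoR's tuned values:

```python
import torch
import torch.nn.functional as F

def jocor_loss(logits1, logits2, targets, lam=0.1, keep_ratio=0.7):
    """Per-example cross-entropy for two networks plus symmetric-KL
    co-regularization; only the smallest-loss fraction is kept."""
    ce = (F.cross_entropy(logits1, targets, reduction='none')
          + F.cross_entropy(logits2, targets, reduction='none'))
    logp1 = F.log_softmax(logits1, dim=1)
    logp2 = F.log_softmax(logits2, dim=1)
    kl = (F.kl_div(logp1, logp2.exp(), reduction='none').sum(1)
          + F.kl_div(logp2, logp1.exp(), reduction='none').sum(1))
    per_example = (1 - lam) * ce + lam * kl
    # Small-loss selection: low-loss, high-agreement examples are likely clean.
    n_keep = int(keep_ratio * targets.shape[0])
    return torch.topk(per_example, n_keep, largest=False).values.mean()
```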

Early-Learning Regularization Prevents Memorization of Noisy Labels

shengliu66/ELR NeurIPS 2020

In contrast with existing approaches, which use the model output during early learning to detect the examples with clean labels, and either ignore or attempt to correct the false labels, we take a different route and instead capitalize on early learning via regularization.
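
A sketch of the regularization idea, assuming the commonly cited form of the ELR term: keep an exponential moving average of each example's softmax output and penalize predictions that drift away from it, since early-learning predictions tend to track the clean labels. Hyperparameters below are illustrative:

```python
import torch
import torch.nn.functional as F

class ELRLoss:
    """Cross-entropy plus a term that rewards staying close to an
    exponential moving average (EMA) of each example's past outputs."""
    def __init__(self, num_samples, num_classes, beta=0.7, lam=3.0):
        self.targets = torch.zeros(num_samples, num_classes)
        self.beta, self.lam = beta, lam

    def __call__(self, indices, logits, labels):
        probs = F.softmax(logits, dim=1)
        # Update the EMA targets with detached predictions.
        self.targets[indices] = (self.beta * self.targets[indices]
                                 + (1 - self.beta) * probs.detach())
        ce = F.cross_entropy(logits, labels)
        inner = (self.targets[indices] * probs).sum(dim=1)
        reg = torch.log(1.0 - inner + 1e-8).mean()  # decreases as probs align with EMA
        return ce + self.lam * reg
```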

When Optimizing $f$-divergence is Robust with Label Noise

weijiaheng/Robust-f-divergence-measures ICLR 2021

We show when maximizing a properly defined $f$-divergence measure between a classifier's predictions and the supervised labels is robust to label noise.
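
A rough sketch of the variational objective for the total-variation special case: the divergence bound is the gap between a bounded score on matched (prediction, label) pairs and on independently shuffled pairs. The construction of the score function below is my own simplification, not the paper's exact estimator:

```python
import torch
import torch.nn.functional as F

def tv_divergence_loss(logits, targets):
    """Variational total-variation objective: E_joint[g] - E_product[g]
    with g bounded in [-1/2, 1/2]; maximizing it (minimizing the
    negative) pushes predictions to be informative about the labels."""
    probs = F.softmax(logits, dim=1)
    # g on matched (prediction, label) pairs.
    g_joint = probs.gather(1, targets.unsqueeze(1)).squeeze(1) - 0.5
    # g on pairs with independently shuffled labels (product of marginals).
    shuffled = targets[torch.randperm(targets.shape[0])]
    g_prod = probs.gather(1, shuffled.unsqueeze(1)).squeeze(1) - 0.5
    return -(g_joint.mean() - g_prod.mean())
```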

Clusterability as an Alternative to Anchor Points When Learning with Noisy Labels

UCSC-REAL/HOC 10 Feb 2021

Nonetheless, finding anchor points remains a non-trivial task, and the estimation accuracy is also often throttled by the number of available anchor points.
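
For context, here is a sketch of the classic anchor-point estimator that this paper offers an alternative to: for each class, take the sample the model most confidently assigns to that class (the anchor) and read a row of the noise transition matrix off the model's noisy-label posterior there. Names are illustrative:

```python
import numpy as np

def estimate_T_from_anchors(noisy_posteriors):
    """Pick, per class i, the sample most confidently predicted as i
    (the anchor); its noisy-class posterior gives row i of T."""
    P = np.asarray(noisy_posteriors)   # (N, C): estimated P(noisy label | x)
    num_classes = P.shape[1]
    T = np.zeros((num_classes, num_classes))
    for i in range(num_classes):
        anchor = int(np.argmax(P[:, i]))  # most anchor-like sample for class i
        T[i] = P[anchor]                  # row i: P(noisy = j | true = i)
    return T
```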

Co-Correcting: Noise-tolerant Medical Image Classification via Mutual Label Correction

jiarunliu/co-correcting 11 Sep 2021

With the development of deep learning, medical image classification has been significantly improved.

Learning with Noisy Labels Revisited: A Study Using Real-World Human Annotations

ucsc-real/cifar-10-100n ICLR 2022

These observations require us to rethink the treatment of noisy labels, and we hope the availability of these two datasets would facilitate the development and evaluation of future learning with noisy label solutions.
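
A sketch of how the released CIFAR-10N human annotations can be loaded; the file path, key names, and noise rates below are recalled from the repo's README and the paper, so treat them as assumptions to verify against ucsc-real/cifar-10-100n:

```python
import numpy as np
import torch

# File path and key names recalled from the repo's README; verify
# against ucsc-real/cifar-10-100n before relying on them.
noise = torch.load('./data/CIFAR-10_human.pt')
clean = np.asarray(noise['clean_label'])   # original CIFAR-10 labels
aggre = np.asarray(noise['aggre_label'])   # majority vote of 3 annotators
worst = np.asarray(noise['worse_label'])   # worst annotation per image
print('aggregate noise rate:', (clean != aggre).mean())   # ~0.09 per the paper
print('worst-case noise rate:', (clean != worst).mean())  # ~0.40 per the paper
```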