Learning with noisy labels

139 papers with code • 20 benchmarks • 16 datasets

Learning with noisy labels is the setting in which the observed labels have been corrupted: an adversary (or some other noise process) has intentionally messed up labels that would otherwise have come from a "clean" distribution. This setting can also be used to cast learning from only positive and unlabeled data.
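
As a toy illustration of the setting, the NumPy sketch below injects symmetric label noise into a clean label vector; `flip_labels` is a hypothetical helper written here only for illustration, and real benchmarks often use class-conditional (asymmetric) noise as well.

```python
import numpy as np

def flip_labels(labels, num_classes, noise_rate, seed=0):
    """Flip a `noise_rate` fraction of labels uniformly at random to a
    different class (symmetric label noise). Hypothetical helper for
    illustration only."""
    rng = np.random.default_rng(seed)
    noisy = labels.copy()
    flip = rng.random(len(labels)) < noise_rate
    for i in np.flatnonzero(flip):
        choices = [c for c in range(num_classes) if c != labels[i]]
        noisy[i] = rng.choice(choices)
    return noisy

clean = np.random.randint(0, 10, size=1000)   # stand-in for clean labels
noisy = flip_labels(clean, num_classes=10, noise_rate=0.4)
print("observed noise rate:", (clean != noisy).mean())
```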

Most implemented papers

Sharpness-Aware Minimization for Efficiently Improving Generalization

google-research/sam ICLR 2021

In today's heavily overparameterized models, the value of the training loss provides few guarantees on model generalization ability.
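
SAM addresses this by minimizing the loss at the worst-case point in a small neighbourhood of the current weights rather than at the weights themselves. The PyTorch code below is a simplified, illustrative sketch of one SAM step, not the released google-research/sam optimizer; the radius `rho=0.05` is an assumed illustrative value.

```python
import torch

def sam_step(model, loss_fn, x, y, base_optimizer, rho=0.05):
    """One Sharpness-Aware Minimization step (simplified sketch): perturb the
    weights toward the locally worst-case direction within an L2 ball of
    radius `rho`, then update with the gradient taken at the perturbed
    weights."""
    # 1) gradient at the current weights
    loss_fn(model(x), y).backward()
    params = [p for p in model.parameters() if p.grad is not None]
    grad_norm = torch.norm(torch.stack([p.grad.norm() for p in params]))

    # 2) ascend to the approximate worst-case neighbour w + eps
    eps = []
    with torch.no_grad():
        for p in params:
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)
            eps.append(e)
    model.zero_grad()

    # 3) gradient at the perturbed weights, then undo the perturbation
    loss_fn(model(x), y).backward()
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.sub_(e)

    # 4) base optimizer descends using the SAM gradient
    base_optimizer.step()
    base_optimizer.zero_grad()
```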

Co-teaching: Robust Training of Deep Neural Networks with Extremely Noisy Labels

bhanML/Co-teaching NeurIPS 2018

Deep learning with noisy labels is practically challenging, as the capacity of deep models is so high that they can totally memorize these noisy labels sooner or later during training.
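
Co-teaching trains two networks simultaneously: in each mini-batch, every network keeps its small-loss samples (those most likely to be correctly labelled) and its peer updates only on that selection. Below is a minimal PyTorch sketch assuming a fixed `forget_rate`; the paper schedules this rate over epochs.

```python
import torch
import torch.nn.functional as F

def co_teaching_step(net1, net2, opt1, opt2, x, y, forget_rate=0.2):
    """One co-teaching update (simplified sketch): each network ranks the
    batch by its own loss, keeps the (1 - forget_rate) small-loss samples,
    and its peer trains only on that selection."""
    keep = int((1.0 - forget_rate) * len(y))

    with torch.no_grad():                      # ranking only, no gradients needed
        loss1 = F.cross_entropy(net1(x), y, reduction="none")
        loss2 = F.cross_entropy(net2(x), y, reduction="none")
    idx1 = torch.argsort(loss1)[:keep]         # samples net1 believes are clean
    idx2 = torch.argsort(loss2)[:keep]         # samples net2 believes are clean

    # cross-update: each network learns from its peer's small-loss selection
    opt1.zero_grad()
    F.cross_entropy(net1(x[idx2]), y[idx2]).backward()
    opt1.step()

    opt2.zero_grad()
    F.cross_entropy(net2(x[idx1]), y[idx1]).backward()
    opt2.step()
```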

Generalized Cross Entropy Loss for Training Deep Neural Networks with Noisy Labels

AlanChou/Truncated-Loss NeurIPS 2018

Here, we present a theoretically grounded set of noise-robust loss functions that can be seen as a generalization of MAE and CCE.
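
The central object is the L_q ("generalized cross entropy") loss, (1 - p_y^q)/q, which interpolates between CCE (as q approaches 0) and MAE (at q = 1). A minimal PyTorch sketch follows, with q = 0.7 as used in the paper's experiments; the released code additionally truncates the loss.

```python
import torch
import torch.nn.functional as F

def generalized_cross_entropy(logits, targets, q=0.7):
    """L_q loss, (1 - p_y^q) / q: approaches CCE as q -> 0 and equals MAE
    (up to a constant) at q = 1. Minimal sketch; the released code also
    applies loss truncation."""
    probs = F.softmax(logits, dim=1)
    p_y = probs.gather(1, targets.unsqueeze(1)).squeeze(1)  # prob assigned to the given label
    return ((1.0 - p_y.pow(q)) / q).mean()
```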

Symmetric Cross Entropy for Robust Learning with Noisy Labels

YisenWang/symmetric_cross_entropy_for_noisy_labels ICCV 2019

In this paper, we show that DNN learning with Cross Entropy (CE) exhibits overfitting to noisy labels on some classes ("easy" classes), but more surprisingly, it also suffers from significant under-learning on some other classes ("hard" classes).
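
The proposed remedy is the Symmetric Cross Entropy: a weighted sum of standard CE and a Reverse CE term in which the log of the one-hot target's zero entries is made finite. The PyTorch sketch below is minimal; alpha, beta, and the clamp constant are illustrative values taken from common public implementations rather than definitive settings.

```python
import torch
import torch.nn.functional as F

def symmetric_cross_entropy(logits, targets, alpha=0.1, beta=1.0):
    """SCE = alpha * CE + beta * Reverse CE (minimal sketch). In the RCE
    term, log(0) for non-target classes is made finite by clamping the
    one-hot labels, a common choice in public implementations."""
    ce = F.cross_entropy(logits, targets)

    pred = F.softmax(logits, dim=1).clamp(min=1e-7, max=1.0)
    label_one_hot = F.one_hot(targets, num_classes=logits.size(1)).float()
    label_one_hot = label_one_hot.clamp(min=1e-4, max=1.0)
    rce = -(pred * label_one_hot.log()).sum(dim=1).mean()

    return alpha * ce + beta * rce
```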

Confident Learning: Estimating Uncertainty in Dataset Labels

cleanlab/cleanlab 31 Oct 2019

Confident learning (CL) is an alternative approach which focuses instead on label quality by characterizing and identifying label errors in datasets, based on the principles of pruning noisy data, counting with probabilistic thresholds to estimate noise, and ranking examples to train with confidence.
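
The NumPy sketch below is a rough approximation of these principles (per-class probabilistic thresholds, counting, and flagging examples that fall off the diagonal of the confident joint); it is not the cleanlab API, and the function name `find_likely_label_errors` is made up for illustration.

```python
import numpy as np

def find_likely_label_errors(pred_probs, labels):
    """Rough confident-learning-style sketch (not the cleanlab API): the
    threshold for class k is the mean predicted probability of k over
    examples labelled k; an example is flagged when its given label does
    not clear its own threshold but some other class does."""
    n, num_classes = pred_probs.shape
    thresholds = np.array([pred_probs[labels == k, k].mean()
                           for k in range(num_classes)])

    flagged = []
    for i in range(n):
        confident = np.flatnonzero(pred_probs[i] >= thresholds)  # classes clearing their threshold
        if confident.size > 0 and labels[i] not in confident:
            flagged.append(i)
    return np.asarray(flagged, dtype=int)
```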

Normalized Loss Functions for Deep Learning with Noisy Labels

HanxunH/Active-Passive-Losses ICML 2020

However, in practice, simply being robust is not sufficient for a loss function to train accurate DNNs.
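
The paper's fix is the Active Passive Loss framework: pair a normalized "active" loss (such as normalized CE) with a "passive" loss (such as MAE or RCE). Below is a minimal PyTorch sketch of one NCE + MAE combination; the weights alpha and beta are illustrative, not the paper's tuned values.

```python
import torch
import torch.nn.functional as F

def normalized_cross_entropy(logits, targets):
    """Normalized CE (sketch): CE for the given label divided by the sum
    of CE over all candidate labels, which bounds the loss and gives the
    theoretical robustness the paper discusses."""
    log_probs = F.log_softmax(logits, dim=1)
    ce_given = -log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)
    ce_all = -log_probs.sum(dim=1)            # sum of CE over every possible label
    return (ce_given / ce_all).mean()

def active_passive_loss(logits, targets, alpha=1.0, beta=1.0):
    """One Active Passive Loss instance, NCE + MAE; alpha and beta are
    illustrative weights."""
    probs = F.softmax(logits, dim=1)
    one_hot = F.one_hot(targets, num_classes=logits.size(1)).float()
    mae = (probs - one_hot).abs().sum(dim=1).mean()
    return alpha * normalized_cross_entropy(logits, targets) + beta * mae
```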

Open-set Label Noise Can Improve Robustness Against Inherent Label Noise

hongxin001/ODNL NeurIPS 2021

Learning with noisy labels is a practically challenging problem in weakly supervised learning.

Learning with Noisy Labels Revisited: A Study Using Real-World Human Annotations

ucsc-real/cifar-10-100n ICLR 2022

These observations require us to rethink the treatment of noisy labels, and we hope the availability of these two datasets would facilitate the development and evaluation of future learning with noisy label solutions.

Protoformer: Embedding Prototypes for Transformers

ashfarhangi/Protoformer PAKDD 2022

This paper proposes Protoformer, a novel self-learning framework for Transformers that can leverage problematic samples for text classification.

How does Disagreement Help Generalization against Label Corruption?

xingruiyu/coteaching_plus 14 Jan 2019

Learning with noisy labels is one of the hottest problems in weakly-supervised learning.