Learning with noisy labels
119 papers with code • 18 benchmarks • 13 datasets
In learning with noisy labels, the training labels have been corrupted relative to the labels a "clean" distribution would have produced, for example by an adversary who intentionally flips them. Learning from only positive and unlabeled data can also be cast in this setting.
Libraries
Use these libraries to find learning-with-noisy-labels models and implementations.
Most implemented papers
Dimensionality-Driven Learning with Noisy Labels
Datasets with significant proportions of noisy (incorrect) class labels present challenges for training accurate Deep Neural Networks (DNNs).
L_DMI: An Information-theoretic Noise-robust Loss Function
To the best of our knowledge, $\mathcal{L}_{DMI}$ is the first loss function that is provably robust to instance-independent label noise, regardless of noise pattern, and it can be applied to any existing classification neural network straightforwardly without any auxiliary information.
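The core of $\mathcal{L}_{DMI}$ is a determinant-based mutual-information measure over the joint distribution of predictions and (possibly noisy) labels. A minimal numpy sketch of that idea, not the authors' implementation (function name and shapes are illustrative):

```python
import numpy as np

def dmi_loss(probs, labels_onehot):
    """Sketch of a DMI-style loss: -log|det(U)|, where U is the empirical
    joint distribution matrix of predicted class probabilities and labels.
    probs: (N, C) softmax outputs; labels_onehot: (N, C) one-hot labels."""
    U = probs.T @ labels_onehot / len(probs)  # (C, C) empirical joint matrix
    return -np.log(np.abs(np.linalg.det(U)) + 1e-12)
```

Intuitively, confident predictions that track the labels make $U$ close to (a scaled) diagonal matrix with a large determinant, while predictions independent of the labels drive the determinant toward zero and the loss up.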
Peer Loss Functions: Learning from Noisy Labels without Knowing Noise Rates
In this work, we introduce a new family of loss functions, which we name peer loss functions, that enables learning from noisy labels without requiring a priori specification of the noise rates.
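A peer loss evaluates the usual loss on each sample and subtracts the loss on a randomly paired "peer" sample, whose input and label are drawn independently. A rough sketch of that pairing step, assuming a plain cross-entropy base loss (the paper covers a broader family):

```python
import numpy as np

def cross_entropy(probs, labels):
    # Per-sample cross-entropy for integer class labels.
    return -np.log(probs[np.arange(len(labels)), labels] + 1e-12)

def peer_loss(probs, labels, rng):
    """Peer-loss sketch: loss on (x_n, y_n) minus the loss on a peer pair
    (x_{n1}, y_{n2}) with indices n1, n2 sampled independently.
    Illustrative only, not the authors' implementation."""
    n = len(labels)
    i = rng.integers(0, n, size=n)  # peer indices for inputs
    j = rng.integers(0, n, size=n)  # independent peer indices for labels
    return np.mean(cross_entropy(probs, labels) - cross_entropy(probs[i], labels[j]))
```

The subtracted term penalizes a classifier that fits labels it should not be able to predict from an unrelated input, which is what yields robustness without knowing the noise rates.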
DivideMix: Learning with Noisy Labels as Semi-supervised Learning
Two prominent directions include learning with noisy labels and semi-supervised learning by exploiting unlabeled data.
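DivideMix's "co-divide" step models the per-sample loss distribution with a two-component mixture and treats samples likely drawn from the low-loss component as clean (labeled), the rest as unlabeled. A minimal 1-D EM sketch of that split, not the paper's implementation:

```python
import numpy as np

def gmm_clean_posterior(losses, iters=50):
    """Fit a 2-component 1-D Gaussian mixture to per-sample losses and
    return P(clean | loss), the posterior of the low-mean component."""
    x = np.asarray(losses, float)
    mu = np.array([x.min(), x.max()])          # init components at extremes
    var = np.array([x.var() + 1e-6] * 2)
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibilities of each component for each sample
        d = np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        r = pi * d
        r /= r.sum(1, keepdims=True) + 1e-12
        # M-step: update mixing weights, means, and variances
        nk = r.sum(0) + 1e-12
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(0) / nk + 1e-6
    return r[:, np.argmin(mu)]  # posterior of the low-loss ("clean") component
```

Samples with high clean posterior keep their labels; the rest feed the semi-supervised branch as unlabeled data.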
Combating noisy labels by agreement: A joint training method with co-regularization
The state-of-the-art approaches "Decoupling" and "Co-teaching+" claim that the "disagreement" strategy is crucial for alleviating the problem of learning with noisy labels.
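In co-teaching-style methods, two networks each select their small-loss samples, which are presumed clean because noisy labels are memorized later, and pass them to the peer network for its update. A sketch of just that selection step (function name and arguments are illustrative):

```python
import numpy as np

def small_loss_exchange(loss_a, loss_b, remember_rate):
    """Each network keeps the fraction `remember_rate` of samples with the
    smallest loss under its own predictions; those indices are then used
    to update the *other* network. Variants like Co-teaching+ additionally
    restrict the pool to samples where the two networks disagree."""
    k = int(remember_rate * len(loss_a))
    idx_for_b = np.argsort(loss_a)[:k]  # A's small-loss picks train B
    idx_for_a = np.argsort(loss_b)[:k]  # B's small-loss picks train A
    return idx_for_a, idx_for_b
```

Cross-feeding the selections keeps the two networks from confirming each other's mistakes, which is where the disagreement vs. agreement debate in this line of work comes from.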
Early-Learning Regularization Prevents Memorization of Noisy Labels
In contrast with existing approaches, which use the model output during early learning to detect the examples with clean labels, and either ignore or attempt to correct the false labels, we take a different route and instead capitalize on early learning via regularization.
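Early-learning regularization (ELR) adds a penalty that anchors the current predictions to a running average of the model's own past outputs, formed while the model is still fitting the clean structure. A sketch under the assumption of a cross-entropy base loss; `lam` and `beta` are illustrative values, tuned per dataset in the paper:

```python
import numpy as np

def elr_loss(probs, labels, targets, lam=3.0):
    """ELR-style objective: cross-entropy plus lam * mean(log(1 - <p_i, t_i>)),
    where t_i are momentum-averaged past predictions. Minimizing the second
    term pushes <p_i, t_i> toward 1, i.e. keeps predictions near the targets."""
    n = np.arange(len(labels))
    ce = -np.log(probs[n, labels] + 1e-12).mean()
    inner = (probs * targets).sum(1)           # <p_i, t_i> per sample
    reg = np.log(1.0 - inner + 1e-12).mean()
    return ce + lam * reg

def update_targets(targets, probs, beta=0.7):
    # Temporal ensembling: momentum average of the model's predictions.
    return beta * targets + (1 - beta) * probs
```

Because the targets are built from early-learning predictions, the regularizer resists the later memorization of wrong labels rather than trying to detect or correct them.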
When Optimizing $f$-divergence is Robust with Label Noise
We show when maximizing a properly defined $f$-divergence measure between a classifier's predictions and the supervised labels is robust to label noise.
Clusterability as an Alternative to Anchor Points When Learning with Noisy Labels
Nonetheless, finding anchor points remains a non-trivial task, and the estimation accuracy is also often throttled by the number of available anchor points.
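For context on why anchor points matter: the noise transition matrix $T$, with $T_{ij} = P(\tilde{y}=j \mid y=i)$, can be read off the model's noisy-label posterior at an anchor point for class $i$, i.e. an instance with $P(y=i \mid x) \approx 1$. A sketch of the usual heuristic that this paper's clusterability approach seeks to avoid (names are illustrative):

```python
import numpy as np

def transition_from_anchors(noisy_posteriors):
    """Anchor-point estimate of the noise transition matrix T.
    For each class i, pick the instance with the highest estimated noisy
    posterior for i as its anchor, and use that instance's full posterior
    row as T[i]. noisy_posteriors: (N, C) estimates of P(noisy label | x)."""
    n, c = noisy_posteriors.shape
    T = np.empty((c, c))
    for i in range(c):
        anchor = np.argmax(noisy_posteriors[:, i])  # most confident instance for class i
        T[i] = noisy_posteriors[anchor]             # its noisy-label posterior row
    return T
```

The estimate degrades when no instance is truly class-pure or when only a few confident instances exist, which is exactly the bottleneck the abstract describes.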
Co-Correcting: Noise-tolerant Medical Image Classification via mutual Label Correction
With the development of deep learning, medical image classification has been significantly improved.
Learning with Noisy Labels Revisited: A Study Using Real-World Human Annotations
These observations require us to rethink the treatment of noisy labels, and we hope the availability of these two datasets will facilitate the development and evaluation of future learning-with-noisy-labels solutions.