Learning with noisy labels
117 papers with code • 18 benchmarks • 13 datasets
In learning with noisy labels, the training labels are assumed to have been corrupted, for example by an adversary, relative to the "clean" distribution they would otherwise have come from. This setting can also be used to cast learning from only positive and unlabeled data.
Libraries: use these libraries to find learning-with-noisy-labels models and implementations.
In today's heavily overparameterized models, the value of the training loss provides few guarantees on model generalization ability.
Deep learning with noisy labels is practically challenging, because the capacity of deep models is high enough that, sooner or later during training, they memorize the noisy labels completely.
Here, we present a theoretically grounded set of noise-robust loss functions that can be seen as a generalization of MAE and CCE.
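One well-known member of this family of losses generalizing MAE and CCE is the L_q (generalized cross entropy) loss; the sketch below is a minimal NumPy illustration, with the function name and default `q` chosen for this example rather than taken from the paper:

```python
import numpy as np

def gce_loss(probs, labels, q=0.7):
    """Generalized cross entropy: L_q = (1 - p_y^q) / q.

    probs:  (N, C) array of predicted class probabilities.
    labels: (N,) array of integer class ids.
    As q -> 0 this approaches categorical cross entropy (-log p_y);
    at q = 1 it equals MAE-style loss (1 - p_y).
    """
    p_y = probs[np.arange(len(labels)), labels]  # prob of the labeled class
    return np.mean((1.0 - p_y ** q) / q)
```

Intermediate values of `q` trade off the fast convergence of CCE against the noise robustness of MAE.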
In this paper, we show that DNN learning with Cross Entropy (CE) exhibits overfitting to noisy labels on some classes ("easy" classes) but, more surprisingly, also suffers from significant under-learning on other classes ("hard" classes).
Confident learning (CL) is an alternative approach which focuses instead on label quality by characterizing and identifying label errors in datasets, based on the principles of pruning noisy data, counting with probabilistic thresholds to estimate noise, and ranking examples to train with confidence.
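The counting-with-probabilistic-thresholds idea can be sketched as follows. This is a simplified illustration, not the actual confident-learning implementation: the function name and the flagging rule (flag an example when another class's predicted probability exceeds that class's average self-confidence) are assumptions made for this example:

```python
import numpy as np

def find_label_issues(probs, labels):
    """Flag likely label errors, confident-learning style (simplified).

    probs:  (N, C) out-of-sample predicted probabilities.
    labels: (N,) observed (possibly noisy) integer labels.
    Each class j gets a threshold t_j: the mean predicted probability of
    class j over examples labeled j. An example is flagged when some
    class other than its given label clears that class's threshold.
    """
    n, c = probs.shape
    thresholds = np.array([probs[labels == j, j].mean() for j in range(c)])
    flagged = []
    for i in range(n):
        if any(probs[i, j] >= thresholds[j]
               for j in range(c) if j != labels[i]):
            flagged.append(i)
    return flagged
```

Flagged examples can then be pruned, and the remainder ranked by model confidence for training.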
Deep learning has achieved excellent performance in various computer vision tasks, but requires a lot of training examples with clean labels.
We present a theoretically grounded approach to train deep neural networks, including recurrent networks, subject to class-dependent label noise.
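A standard way to train under class-dependent label noise is loss correction with a noise transition matrix. The sketch below shows the forward-correction style in NumPy; it is an illustrative instance of that general technique, not necessarily this paper's exact procedure:

```python
import numpy as np

def forward_corrected_ce(probs, labels, T):
    """Forward loss correction for class-dependent label noise.

    probs:  (N, C) predicted probabilities over *clean* classes.
    labels: (N,) observed noisy integer labels.
    T:      (C, C) noise transition matrix, T[i, j] = P(noisy=j | clean=i).
    Pushes clean-class probabilities through T, then takes cross
    entropy against the observed noisy labels.
    """
    noisy_probs = probs @ T                       # predicted noisy-label distribution
    p = noisy_probs[np.arange(len(labels)), labels]
    return -np.mean(np.log(p))
```

With `T` equal to the identity matrix, this reduces to ordinary cross entropy; when `T` matches the true corruption process, minimizing the corrected loss recovers a classifier for the clean labels.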