Learning with Feature-Dependent Label Noise: A Progressive Approach

Label noise is frequently observed in real-world large-scale datasets. It is introduced for a variety of reasons and is typically heterogeneous and feature-dependent. Most existing approaches to handling noisy labels fall into two categories: they either assume an idealized feature-independent noise model, or remain heuristic without theoretical guarantees. In this paper, we target a new family of feature-dependent label noise, which is much more general than the commonly assumed i.i.d. label noise and encompasses a broad spectrum of noise patterns. Focusing on this general noise family, we propose a progressive label correction algorithm that iteratively corrects labels and refines the model. We provide theoretical guarantees showing that, for a wide variety of (unknown) noise patterns, a classifier trained with this strategy converges to be consistent with the Bayes classifier. In experiments, our method outperforms state-of-the-art baselines and is robust to various noise types and levels.
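To illustrate the general idea of progressive label correction, here is a minimal, self-contained sketch. It is not the authors' exact algorithm: the toy data, the nearest-centroid "model", the softmax-over-distance confidence, and the threshold schedule (start at 0.9, relax toward 0.6) are all assumptions made for illustration. The key pattern it shows is the one described in the abstract: repeatedly fit a model on the current labels, flip labels the model confidently disagrees with, and gradually lower the confidence bar.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two Gaussian blobs; clean labels given by the blob of origin.
n = 200
X = np.vstack([rng.normal(-2, 1, (n, 2)), rng.normal(2, 1, (n, 2))])
y_clean = np.array([0] * n + [1] * n)

# Feature-dependent noise (illustrative): labels flip more often near the
# decision boundary, i.e. the flip probability depends on the feature x.
p_flip = np.clip(0.45 - 0.1 * np.abs(X[:, 0]), 0.0, 0.45)
y = np.where(rng.random(2 * n) < p_flip, 1 - y_clean, y_clean)
initial_agreement = (y == y_clean).mean()  # fraction of labels still clean

def fit_centroids(X, y):
    # Stand-in "model": one centroid per class (a real run would train a CNN).
    return np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def confidence(centroids, X):
    # Softmax over negative distances -> pseudo-probabilities per class.
    d = np.stack([np.linalg.norm(X - c, axis=1) for c in centroids], axis=1)
    e = np.exp(-d)
    return e / e.sum(axis=1, keepdims=True)

theta = 0.9  # hypothetical starting threshold, relaxed each round
for _ in range(5):
    centroids = fit_centroids(X, y)
    probs = confidence(centroids, X)
    pred, conf = probs.argmax(axis=1), probs.max(axis=1)
    # Correct only labels the current model confidently disagrees with.
    flip = (pred != y) & (conf >= theta)
    y[flip] = pred[flip]
    theta = max(0.6, theta - 0.1)  # progressively lower the bar

acc = (y == y_clean).mean()  # agreement of corrected labels with clean ones
```

On this toy problem the corrected labels agree with the clean labels substantially more often than the noisy labels did, which is the qualitative behavior the paper's theory guarantees for a much broader noise family.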

Published at ICLR 2021.
Task: Learning with noisy labels — Dataset: ANIMAL

Model         | Accuracy | Global Rank | Network  | ImageNet Pretrained
------------- | -------- | ----------- | -------- | -------------------
PLC           | 83.4     | #12         | VGG19-BN | No
Cross Entropy | 79.4     | #17         | VGG19-BN | No
