Early-Learning Regularization Prevents Memorization of Noisy Labels

We propose a novel framework to perform classification via deep learning in the presence of noisy annotations. When trained on noisy labels, deep neural networks have been observed to first fit the training data with clean labels during an "early learning" phase, before eventually memorizing the examples with false labels. We prove that early learning and memorization are fundamental phenomena in high-dimensional classification tasks, even in simple linear models, and give a theoretical explanation in this setting. Motivated by these findings, we develop a new technique for noisy classification tasks, which exploits the progress of the early learning phase. In contrast with existing approaches, which use the model output during early learning to detect the examples with clean labels, and either ignore or attempt to correct the false labels, we take a different route and instead capitalize on early learning via regularization. There are two key elements to our approach. First, we leverage semi-supervised learning techniques to produce target probabilities based on the model outputs. Second, we design a regularization term that steers the model towards these targets, implicitly preventing memorization of the false labels. The resulting framework is shown to provide robustness to noisy annotations on several standard benchmarks and real-world datasets, where it achieves results comparable to the state of the art.
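The abstract describes two ingredients: targets produced by temporally ensembling the model's own predictions, and a regularizer that penalizes drifting away from those targets. As an illustration only, here is a minimal NumPy sketch of an ELR-style objective; the function name `elr_loss` and the hyperparameter values (`lam`, `beta`) are illustrative assumptions, and a real implementation would run inside an autodiff framework with the targets detached from the gradient.

```python
import numpy as np

def softmax(logits):
    # numerically stable softmax over the class dimension
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def elr_loss(logits, labels, targets, lam=3.0, beta=0.7):
    """One step of an ELR-style objective (sketch, not the official code).

    logits:  (n, k) model outputs for a batch
    labels:  (n,) possibly noisy integer labels
    targets: (n, k) running average of past predictions for these examples
    Returns (loss, updated_targets).
    """
    p = softmax(logits)
    n = logits.shape[0]
    # standard cross-entropy on the (possibly noisy) labels
    ce = -np.log(p[np.arange(n), labels] + 1e-12).mean()
    # temporal ensembling: targets track the model's early-learning predictions
    targets = beta * targets + (1.0 - beta) * p
    # regularizer: maximizing <p, t> keeps predictions aligned with the targets,
    # implicitly resisting memorization of false labels
    reg = np.log(1.0 - (targets * p).sum(axis=1) + 1e-12).mean()
    return ce + lam * reg, targets
```

In training, `targets` would be stored per example and updated every time the example is revisited, so the regularizer anchors the model to what it learned during the early-learning phase.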

NeurIPS 2020
| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Learning with noisy labels | CIFAR-100N | ELR+ | Accuracy (mean) | 66.72 | #5 |
| Learning with noisy labels | CIFAR-100N | ELR | Accuracy (mean) | 58.94 | #11 |
| Learning with noisy labels | CIFAR-10N-Aggregate | ELR | Accuracy (mean) | 92.38 | #10 |
| Learning with noisy labels | CIFAR-10N-Aggregate | ELR+ | Accuracy (mean) | 94.83 | #7 |
| Learning with noisy labels | CIFAR-10N-Random1 | ELR+ | Accuracy (mean) | 94.43 | #6 |
| Learning with noisy labels | CIFAR-10N-Random1 | ELR | Accuracy (mean) | 91.46 | #8 |
| Learning with noisy labels | CIFAR-10N-Random2 | ELR+ | Accuracy (mean) | 94.20 | #4 |
| Learning with noisy labels | CIFAR-10N-Random2 | ELR | Accuracy (mean) | 91.61 | #5 |
| Learning with noisy labels | CIFAR-10N-Random3 | ELR | Accuracy (mean) | 91.41 | #6 |
| Learning with noisy labels | CIFAR-10N-Random3 | ELR+ | Accuracy (mean) | 94.34 | #4 |
| Learning with noisy labels | CIFAR-10N-Worst | ELR+ | Accuracy (mean) | 91.09 | #7 |
| Learning with noisy labels | CIFAR-10N-Worst | ELR | Accuracy (mean) | 83.58 | #12 |
| Image Classification | Clothing1M | ELR+ | Accuracy | 74.81% | #13 |
| Image Classification | mini WebVision 1.0 | ELR+ (Inception-ResNet-v2) | Top-1 Accuracy | 77.78 | #23 |
| Image Classification | mini WebVision 1.0 | ELR+ (Inception-ResNet-v2) | Top-5 Accuracy | 91.68 | #20 |
| Image Classification | mini WebVision 1.0 | ELR+ (Inception-ResNet-v2) | ImageNet Top-1 Accuracy | 70.29 | #26 |
| Image Classification | mini WebVision 1.0 | ELR+ (Inception-ResNet-v2) | ImageNet Top-5 Accuracy | 89.76 | #25 |
