Beyond Synthetic Noise: Deep Learning on Controlled Noisy Labels

ICML 2020  ·  Lu Jiang, Di Huang, Mason Liu, Weilong Yang

Performing controlled experiments on noisy data is essential for understanding deep learning across noise levels. Due to the lack of suitable datasets, previous research has only examined deep learning on controlled synthetic label noise, and real-world label noise has never been studied in a controlled setting. This paper makes three contributions. First, we establish the first benchmark of controlled real-world label noise from the web, which enables us to study web label noise in a controlled setting for the first time. Second, we propose a simple but effective method to overcome both synthetic and real-world noisy labels, and show that it achieves the best results on our dataset as well as on two public benchmarks (CIFAR and WebVision). Third, we conduct the largest study to date of deep neural networks trained on noisy labels, spanning different noise levels, noise types, network architectures, and training settings. The data and code are released at the following link: http://www.lujiang.info/cnlw.html
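The method evaluated in the results below is named MentorMix. As a rough intuition for how such a training step could look, here is a minimal sketch assuming MentorMix combines MentorNet-style per-example weighting with Mixup. The loss-percentile weighting rule (standing in for a learned MentorNet), the importance-sampled mixup partners, and all names and parameters (mentormix_step, gamma_p, alpha) are illustrative assumptions, not the paper's released implementation, which is available at the link above.

```python
import torch
import torch.nn.functional as F

def mentormix_step(model, x, y, gamma_p=0.8, alpha=0.4):
    """One training step in the spirit of MentorMix (hedged sketch).

    Assumptions (not taken from the paper's released code):
      - per-example weights come from a simple loss-percentile rule,
        used here as a stand-in for a learned MentorNet;
      - mixup partners are drawn with probability proportional to
        those weights (importance sampling);
      - the mixed-up loss is re-weighted by the same rule.
    """
    logits = model(x)
    loss = F.cross_entropy(logits, y, reduction="none")   # per-example loss

    # Trust examples whose loss is below the gamma_p percentile of the
    # batch (these are more likely to carry clean labels).
    threshold = torch.quantile(loss.detach(), gamma_p)
    v = (loss.detach() <= threshold).float()               # example weights in {0, 1}

    # Importance sampling: pick mixup partners in proportion to v.
    probs = v + 1e-8
    idx = torch.multinomial(probs / probs.sum(), x.size(0), replacement=True)

    # Mixup with a Beta(alpha, alpha)-distributed coefficient; keeping the
    # larger coefficient on the original example is a simplifying choice.
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    lam = max(lam, 1.0 - lam)
    x_mix = lam * x + (1.0 - lam) * x[idx]

    logits_mix = model(x_mix)
    loss_mix = lam * F.cross_entropy(logits_mix, y, reduction="none") + \
               (1.0 - lam) * F.cross_entropy(logits_mix, y[idx], reduction="none")

    # Second weighting pass, now on the mixed-up loss.
    threshold_mix = torch.quantile(loss_mix.detach(), gamma_p)
    v_mix = (loss_mix.detach() <= threshold_mix).float()
    return (v_mix * loss_mix).mean()
```

A faithful implementation would replace the fixed percentile rule with a learned or scheduled weighting (e.g. annealing gamma_p over training); the sketch only illustrates the weight-then-mix-then-reweight structure.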

Task                  Dataset             Model                            Metric                   Value   Global Rank
Image Classification  mini WebVision 1.0  MentorMix (Inception-ResNet-v2)  Top-1 Accuracy           76.0    #33
Image Classification  mini WebVision 1.0  MentorMix (Inception-ResNet-v2)  Top-5 Accuracy           90.2    #26
Image Classification  mini WebVision 1.0  MentorMix (Inception-ResNet-v2)  ImageNet Top-1 Accuracy  72.9    #24
Image Classification  mini WebVision 1.0  MentorMix (Inception-ResNet-v2)  ImageNet Top-5 Accuracy  91.1    #23
Image Classification  WebVision-1000      MentorMix (Inception-ResNet-v2)  Top-1 Accuracy           74.3%   #12
Image Classification  WebVision-1000      MentorMix (Inception-ResNet-v2)  Top-5 Accuracy           90.5%   #7
Image Classification  WebVision-1000      MentorMix (Inception-ResNet-v2)  ImageNet Top-1 Accuracy  67.5%   #5
Image Classification  WebVision-1000      MentorMix (Inception-ResNet-v2)  ImageNet Top-5 Accuracy  87.2%   #3
