Co-teaching: Robust Training of Deep Neural Networks with Extremely Noisy Labels

Deep learning with noisy labels is practically challenging, as the capacity of deep models is so high that they can eventually memorize the noisy labels during training. Recent studies on the memorization effects of deep neural networks show, however, that they first memorize training data with clean labels and only later the data with noisy labels. In this paper, we therefore propose a new deep learning paradigm called Co-teaching for combating noisy labels: we train two deep neural networks simultaneously and let them teach each other on every mini-batch. First, each network feeds forward all the data and selects the samples that likely have clean labels; second, the two networks communicate to each other which samples in the mini-batch should be used for training; finally, each network back-propagates on the samples selected by its peer and updates itself. Empirical results on noisy versions of MNIST, CIFAR-10 and CIFAR-100 demonstrate that Co-teaching is superior to state-of-the-art methods in the robustness of the trained deep models.
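The three steps of a Co-teaching mini-batch update can be sketched in NumPy, standing in for the two deep networks with two logistic-regression models. This is a minimal illustration of the selection-and-exchange idea, not the paper's implementation; the names (`co_teaching_step`, `keep_ratio`) and the simple full-gradient update are assumptions made for the sketch.

```python
import numpy as np

def per_sample_loss(w, X, y):
    """Per-example logistic (cross-entropy) loss for weights w."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    return -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

def grad(w, X, y):
    """Average logistic-loss gradient over the given samples."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    return X.T @ (p - y) / len(y)

def co_teaching_step(w1, w2, X, y, keep_ratio, lr=0.5):
    """One Co-teaching update on a mini-batch (X, y) with noisy labels y."""
    # Step 1: each "network" scores every sample in the mini-batch.
    l1 = per_sample_loss(w1, X, y)
    l2 = per_sample_loss(w2, X, y)
    # Step 2: each selects its k smallest-loss (likely clean) samples
    # and communicates that selection to its peer.
    k = int(keep_ratio * len(y))
    idx1 = np.argsort(l1)[:k]
    idx2 = np.argsort(l2)[:k]
    # Step 3: cross update -- each network learns from the samples
    # selected by its peer, not from its own selection.
    w1 = w1 - lr * grad(w1, X[idx2], y[idx2])
    w2 = w2 - lr * grad(w2, X[idx1], y[idx1])
    return w1, w2
```

In the paper, `keep_ratio` corresponds to 1 − R(T), the fraction of small-loss instances kept, which is gradually decreased as training proceeds; the cross update is what distinguishes Co-teaching from self-paced small-loss selection within a single network.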

Published at NeurIPS 2018.
Task                        Dataset              Model        Metric           Value   Global Rank
Learning with noisy labels  CIFAR-100N           Co-Teaching  Accuracy (mean)  60.37   #9
Learning with noisy labels  CIFAR-10N-Aggregate  Co-Teaching  Accuracy (mean)  91.20   #17
Learning with noisy labels  CIFAR-10N-Random1    Co-Teaching  Accuracy (mean)  90.33   #10
Learning with noisy labels  CIFAR-10N-Random2    Co-Teaching  Accuracy (mean)  90.30   #10
Learning with noisy labels  CIFAR-10N-Random3    Co-Teaching  Accuracy (mean)  90.15   #8
Learning with noisy labels  CIFAR-10N-Worst      Co-Teaching  Accuracy (mean)  83.83   #10
Image Classification        Clothing1M           CoT          Accuracy         70.15%  #47

Results from Other Papers


Task                  Dataset             Model                              Metric                   Value  Rank
Image Classification  mini WebVision 1.0  Co-teaching (Inception-ResNet-v2)  Top-1 Accuracy           63.58  #38
Image Classification  mini WebVision 1.0  Co-teaching (Inception-ResNet-v2)  Top-5 Accuracy           85.20  #29
Image Classification  mini WebVision 1.0  Co-teaching (Inception-ResNet-v2)  ImageNet Top-1 Accuracy  61.48  #34
Image Classification  mini WebVision 1.0  Co-teaching (Inception-ResNet-v2)  ImageNet Top-5 Accuracy  84.70  #30

Methods


No methods listed for this paper.