Combating noisy labels by agreement: A joint training method with co-regularization

CVPR 2020 · Hongxin Wei, Lei Feng, Xiangyu Chen, Bo An

Deep learning with noisy labels is a practically challenging problem in weakly supervised learning. State-of-the-art approaches such as "Decoupling" and "Co-teaching+" argue that a "disagreement" strategy is crucial for alleviating the problem of learning with noisy labels. In this paper, we start from a different perspective and propose a robust learning paradigm called JoCoR, which aims to reduce the diversity of two networks during training. Specifically, we first use two networks to make predictions on the same mini-batch and calculate a joint loss with Co-Regularization for each training example. We then select small-loss examples and use them to update the parameters of both networks simultaneously. Trained with the joint loss, the two networks become increasingly similar due to the effect of Co-Regularization. Extensive experiments on corrupted versions of the benchmark datasets MNIST, CIFAR-10, CIFAR-100, and Clothing1M demonstrate that JoCoR outperforms many state-of-the-art approaches to learning with noisy labels.
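The two ingredients described above can be sketched compactly: a per-example joint loss that combines supervised cross-entropy on both networks with a symmetric-KL co-regularization term, followed by small-loss selection. This is a minimal numpy sketch based only on the abstract's description; the weighting `lam` and drop fraction `forget_rate` are assumed hyperparameter names, not the paper's exact implementation.

```python
import numpy as np

def jocor_loss(p1, p2, y, lam=0.7):
    """Per-example JoCoR-style joint loss.

    p1, p2: (n, k) class-probability predictions of the two networks.
    y: (n,) integer labels (possibly noisy).
    lam: assumed trade-off weight between the supervised term and the
         co-regularization term (a tunable hyperparameter).
    """
    eps = 1e-12
    idx = np.arange(len(y))
    # Supervised term: cross-entropy of each network against the given label.
    ce1 = -np.log(p1[idx, y] + eps)
    ce2 = -np.log(p2[idx, y] + eps)
    # Co-regularization term: symmetric KL divergence between the two
    # networks' predictions, pushing them to agree on each example.
    kl12 = np.sum(p1 * (np.log(p1 + eps) - np.log(p2 + eps)), axis=1)
    kl21 = np.sum(p2 * (np.log(p2 + eps) - np.log(p1 + eps)), axis=1)
    return (1 - lam) * (ce1 + ce2) + lam * (kl12 + kl21)

def select_small_loss(losses, forget_rate=0.2):
    """Keep the (1 - forget_rate) fraction of examples with the smallest
    joint loss; only these would be used to update both networks."""
    num_keep = int((1 - forget_rate) * len(losses))
    return np.argsort(losses)[:num_keep]
```

Examples on which the networks are confident, correct, and in agreement receive a small joint loss and survive the selection step, while examples with large loss (likely mislabeled) are dropped from the gradient update.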

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Learning with noisy labels | CIFAR-100N | JoCoR | Accuracy (mean) | 59.97 | #7 |
| Learning with noisy labels | CIFAR-10N-Aggregate | JoCoR | Accuracy (mean) | 91.44 | #9 |
| Learning with noisy labels | CIFAR-10N-Random1 | JoCoR | Accuracy (mean) | 90.30 | #7 |
| Learning with noisy labels | CIFAR-10N-Random2 | JoCoR | Accuracy (mean) | 90.21 | #9 |
| Learning with noisy labels | CIFAR-10N-Random3 | JoCoR | Accuracy (mean) | 90.11 | #8 |
| Learning with noisy labels | CIFAR-10N-Worst | JoCoR | Accuracy (mean) | 83.37 | #9 |
| Image Classification | Clothing1M | JoCoR | Accuracy | 70.3% | #37 |

