How does Disagreement Help Generalization against Label Corruption?

14 Jan 2019 · Xingrui Yu, Bo Han, Jiangchao Yao, Gang Niu, Ivor W. Tsang, Masashi Sugiyama

Learning with noisy labels is one of the most actively studied problems in weakly supervised learning. Because deep neural networks tend to memorize clean patterns before noisy ones, training on small-loss instances is a promising way to handle noisy labels. This idea underlies the state-of-the-art approach Co-teaching, which cross-trains two deep neural networks using the small-loss trick. However, as training proceeds, the two networks converge to a consensus, and Co-teaching degenerates to the self-training MentorNet. To tackle this issue, we propose a robust learning paradigm called Co-teaching+, which bridges the "Update by Disagreement" strategy with the original Co-teaching. First, both networks feed forward and predict all data, but only the instances on which their predictions disagree are kept. Then, among these disagreement data, each network selects its small-loss instances, but back-propagates the small-loss instances selected by its peer network and updates its own parameters. Empirical results on benchmark datasets demonstrate that Co-teaching+ trains substantially more robust models than many state-of-the-art methods.
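To make the three-step update concrete, below is a minimal PyTorch-style sketch of one Co-teaching+ mini-batch step, assuming two networks net1/net2 with optimizers opt1/opt2, a labeled mini-batch (x, y), and a forget_rate giving the fraction of large-loss instances to drop. The function and variable names are illustrative placeholders, not the authors' released implementation.

import torch
import torch.nn.functional as F

def coteaching_plus_step(net1, net2, opt1, opt2, x, y, forget_rate):
    # Hypothetical sketch of one Co-teaching+ update, per the abstract.
    # Step 1: both networks predict the mini-batch; keep only the
    # instances on which their predictions disagree.
    with torch.no_grad():
        logits1, logits2 = net1(x), net2(x)
        disagree = (logits1.argmax(dim=1) != logits2.argmax(dim=1)).nonzero(as_tuple=True)[0]
        if disagree.numel() == 0:
            return  # no disagreement in this batch; skip the update
        y_d = y[disagree]
        # Step 2: each network ranks the disagreement data by its own
        # cross-entropy loss and keeps the (1 - forget_rate) fraction
        # with the smallest losses.
        loss1 = F.cross_entropy(logits1[disagree], y_d, reduction="none")
        loss2 = F.cross_entropy(logits2[disagree], y_d, reduction="none")
        num_keep = max(1, int((1.0 - forget_rate) * disagree.numel()))
        keep1 = disagree[torch.argsort(loss1)[:num_keep]]  # net1's small-loss picks
        keep2 = disagree[torch.argsort(loss2)[:num_keep]]  # net2's small-loss picks

    # Step 3: cross-update -- each network back-propagates the small-loss
    # instances selected by its *peer* and updates its own parameters.
    opt1.zero_grad()
    F.cross_entropy(net1(x[keep2]), y[keep2]).backward()
    opt1.step()

    opt2.zero_grad()
    F.cross_entropy(net2(x[keep1]), y[keep1]).backward()
    opt2.step()

Restricting the selection to disagreement data and cross-updating on the peer's picks is what keeps the two networks from collapsing into the consensus that reduces plain Co-teaching to self-training.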

Task                        Dataset              Model         Metric           Value   Global Rank
Learning with noisy labels  CIFAR-100N           Co-Teaching+  Accuracy (mean)  57.88   # 13
Learning with noisy labels  CIFAR-10N-Aggregate  Co-Teaching+  Accuracy (mean)  90.61   # 19
Learning with noisy labels  CIFAR-10N-Random1    Co-Teaching+  Accuracy (mean)  89.70   # 15
Learning with noisy labels  CIFAR-10N-Random2    Co-Teaching+  Accuracy (mean)  89.47   # 14
Learning with noisy labels  CIFAR-10N-Random3    Co-Teaching+  Accuracy (mean)  89.54   # 15
Learning with noisy labels  CIFAR-10N-Worst      Co-Teaching+  Accuracy (mean)  83.26   # 14
