Coresets for Robust Training of Neural Networks against Noisy Labels

15 Nov 2020  ·  Baharan Mirzasoleiman, Kaidi Cao, Jure Leskovec

Modern neural networks have the capacity to overfit noisy labels frequently found in real-world datasets. Although great progress has been made, existing techniques are limited in providing theoretical guarantees for the performance of neural networks trained with noisy labels. Here we propose a novel approach with strong theoretical guarantees for robust training of deep networks on datasets with noisy labels. The key idea behind our method is to select weighted subsets (coresets) of clean data points that provide an approximately low-rank Jacobian matrix. We then prove that gradient descent applied to the subsets does not overfit the noisy labels. Our extensive experiments corroborate our theory and demonstrate that deep networks trained on our subsets achieve significantly superior performance compared to the state of the art, e.g., a 6% increase in accuracy on CIFAR-10 with 80% noisy labels, and a 7% increase in accuracy on mini WebVision.
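No code is listed on this page, so the following is only a minimal NumPy sketch of the selection idea described in the abstract, not the authors' implementation: per-example gradient vectors stand in for rows of the network Jacobian, a greedy facility-location step picks a small set of medoids per class, and the number of points each medoid covers serves as its coreset weight. The function names (greedy_facility_location, coreset_per_class), the use of gradient proxies, and the random toy data are all illustrative assumptions.

```python
import numpy as np


def greedy_facility_location(sim, k):
    """Greedily pick k medoids maximizing sum_i max_{j in S} sim[i, j]."""
    n = sim.shape[0]
    selected = []
    coverage = np.zeros(n)  # how well each point is covered by the current medoid set
    for _ in range(k):
        # marginal gain of adding each candidate column j
        gains = np.maximum(sim, coverage[:, None]).sum(axis=0) - coverage.sum()
        if selected:
            gains[selected] = -np.inf  # never re-select a medoid
        j = int(np.argmax(gains))
        selected.append(j)
        coverage = np.maximum(coverage, sim[:, j])
    return selected


def coreset_per_class(grads, labels, k_per_class):
    """Select a weighted subset whose (proxy) gradients cover each class."""
    idx_all, w_all = [], []
    for c in np.unique(labels):
        cls = np.where(labels == c)[0]
        g = grads[cls]
        # pairwise gradient dissimilarity, turned into a non-negative similarity
        dist = np.linalg.norm(g[:, None, :] - g[None, :, :], axis=-1)
        sim = dist.max() - dist
        medoids = greedy_facility_location(sim, min(k_per_class, len(cls)))
        # weight each medoid by the number of class points it covers best
        assign = np.argmax(sim[:, medoids], axis=1)
        weights = np.bincount(assign, minlength=len(medoids))
        idx_all.extend(cls[medoids])
        w_all.extend(weights)
    return np.array(idx_all), np.array(w_all, dtype=float)


# Toy usage with random stand-ins for per-example gradients and (noisy) labels.
rng = np.random.default_rng(0)
grads = rng.normal(size=(200, 16))
labels = rng.integers(0, 4, size=200)
idx, w = coreset_per_class(grads, labels, k_per_class=10)
print(idx.shape, w.sum())  # 40 coreset indices; weights sum to 200
```

In a full training loop, one would recompute such a weighted subset periodically from the current network's per-example gradients and run (weighted) gradient descent only on the selected points.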

Results (Task: Image Classification; Dataset: mini WebVision 1.0; Model: CRUST with Inception-ResNet-v2 backbone):
- Top-1 Accuracy: 72.40 (global rank #36)
- Top-5 Accuracy: 89.56 (global rank #27)
- ImageNet Top-1 Accuracy: 67.36 (global rank #27)
- ImageNet Top-5 Accuracy: 87.84 (global rank #26)
