Scalable Penalized Regression for Noise Detection in Learning with Noisy Labels

CVPR 2022  ·  Yikai Wang, Xinwei Sun, Yanwei Fu

A noisy training set usually degrades the generalization and robustness of neural networks. In this paper, we propose a theoretically guaranteed noisy label detection framework to detect and remove noisy data for Learning with Noisy Labels (LNL). Specifically, we design a penalized regression to model the linear relation between network features and one-hot labels, where the noisy data are identified by the non-zero mean-shift parameters solved in the regression model. To make the framework scalable to datasets with a large number of categories and training samples, we propose a split algorithm that divides the whole training set into small pieces, each of which can be solved by the penalized regression in parallel, leading to the Scalable Penalized Regression (SPR) framework. We provide a non-asymptotic probabilistic condition under which SPR correctly identifies the noisy data. While SPR can be regarded as a sample selection module for a standard supervised training pipeline, we further combine it with a semi-supervised algorithm to exploit the support of the noisy data as unlabeled data. Experimental results on several benchmark datasets and real-world noisy datasets show the effectiveness of our framework. Our code and pretrained models are released at https://github.com/Yikai-Wang/SPR-LNL.
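The core idea can be sketched with a small, self-contained example. The following is a minimal illustration (not the paper's implementation): labels are modeled as Y ≈ Xβ + γ, where γ is a per-sample mean-shift matrix under an L1 penalty, solved here by simple alternating least squares and soft-thresholding; samples with non-zero γ rows are flagged as noisy. The function names, the solver, and the penalty weight `lam` are assumptions for illustration only.

```python
import numpy as np

def soft_threshold(z, lam):
    # Element-wise soft-thresholding, the proximal operator of the L1 penalty.
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def detect_noisy(X, Y, lam=0.5, n_iter=50):
    """Flag samples whose mean-shift parameter gamma is non-zero.

    Model (illustrative sketch): Y ~ X @ beta + gamma, with an L1
    penalty on gamma so that only mislabeled samples get non-zero rows.
    """
    gamma = np.zeros_like(Y)
    for _ in range(n_iter):
        # Solve for beta with gamma held fixed (ordinary least squares).
        beta, *_ = np.linalg.lstsq(X, Y - gamma, rcond=None)
        # Update gamma by soft-thresholding the per-sample residual.
        gamma = soft_threshold(Y - X @ beta, lam)
    # A sample is flagged noisy if any entry of its gamma row survives.
    return np.any(np.abs(gamma) > 1e-8, axis=1)
```

On synthetic data where a few labels are shifted, the non-zero rows of γ concentrate on exactly those corrupted samples; the split-and-parallelize step of SPR would apply this solver to small pieces of the training set independently.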

Task                       | Dataset    | Model | Metric Name         | Metric Value | Global Rank
---------------------------|------------|-------|---------------------|--------------|------------
Learning with noisy labels | ANIMAL     | SPR   | Accuracy            | 86.8         | #5
                           |            |       | Network             | VGG19-BN     | #1
                           |            |       | ImageNet Pretrained | NO           | #1
Learning with noisy labels | Clothing1M | SPR   | Test Accuracy       | 71.16        | #4
Image Classification       | Clothing1M | SPR   | Accuracy            | 71.16%       | #41
