Combining Self-Supervised and Supervised Learning with Noisy Labels

16 Nov 2020  ·  Yongqi Zhang, Hui Zhang, Quanming Yao, Jun Wan

Since convolutional neural networks (CNNs) easily overfit noisy labels, which are ubiquitous in visual classification tasks, training CNNs robustly against such labels has been a great challenge. Various methods have been proposed for this challenge, but none of them attends to the difference between representation learning and classifier learning in CNNs. Thus, inspired by the observation that the classifier is more robust to noisy labels while the representation is much more fragile, and by recent advances in self-supervised representation learning (SSRL), we design a new method, CS$^3$NL, which obtains the representation by SSRL without labels and trains the classifier directly with noisy labels. Extensive experiments are performed on both synthetic and real benchmark datasets. Results demonstrate that the proposed method beats the state-of-the-art ones by a large margin, especially under high noise levels.
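The abstract does not specify which SSRL objective CS$^3$NL uses, so the following is only a minimal PyTorch sketch of the decoupling idea it describes: learn the representation with a self-supervised contrastive loss (a SimCLR-style NT-Xent objective is assumed here), then freeze it and fit the classifier on the noisy labels. The `Encoder`, the stand-in augmentations, the random toy data, and both training loops are hypothetical illustrations, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical small CNN backbone (stands in for the paper's encoder).
class Encoder(nn.Module):
    def __init__(self, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.proj = nn.Linear(64, feat_dim)  # projection head, used only for SSRL

    def forward(self, x):
        return self.net(x)

    def project(self, x):
        return F.normalize(self.proj(self.forward(x)), dim=1)

def nt_xent(z1, z2, tau=0.5):
    """SimCLR-style contrastive loss over a batch of augmented view pairs."""
    z = torch.cat([z1, z2], dim=0)        # (2B, d), rows are L2-normalized
    sim = z @ z.t() / tau                 # pairwise cosine similarities
    sim.fill_diagonal_(float('-inf'))     # exclude each sample's self-similarity
    B = z1.size(0)
    # The positive for row i is the other augmented view of the same image.
    targets = torch.cat([torch.arange(B, 2 * B), torch.arange(0, B)])
    return F.cross_entropy(sim, targets)

# --- Stage 1: self-supervised representation learning; labels never used ---
enc = Encoder()
opt = torch.optim.Adam(enc.parameters(), lr=1e-3)
for _ in range(10):  # toy loop on random data
    x = torch.randn(32, 3, 32, 32)
    # Stand-in for real image augmentations (crops, color jitter, ...).
    v1 = x + 0.1 * torch.randn_like(x)
    v2 = x + 0.1 * torch.randn_like(x)
    loss = nt_xent(enc.project(v1), enc.project(v2))
    opt.zero_grad(); loss.backward(); opt.step()

# --- Stage 2: train only the classifier directly on the noisy labels ---
for p in enc.parameters():
    p.requires_grad = False               # the learned representation stays fixed
clf = nn.Linear(64, 10)
opt = torch.optim.Adam(clf.parameters(), lr=1e-3)
for _ in range(10):
    x = torch.randn(32, 3, 32, 32)
    y_noisy = torch.randint(0, 10, (32,))  # labels here may be corrupted
    loss = F.cross_entropy(clf(enc(x)), y_noisy)
    opt.zero_grad(); loss.backward(); opt.step()
```

The point of the two-stage split is that label noise can only corrupt the lightweight classifier, not the representation, which matches the paper's observation that the classifier is the more noise-robust component.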


