Are Anchor Points Really Indispensable in Label-Noise Learning?

In label-noise learning, the \textit{noise transition matrix}, which denotes the probabilities that clean labels flip into noisy labels, plays a central role in building \textit{statistically consistent classifiers}. Existing theories have shown that the transition matrix can be learned by exploiting \textit{anchor points} (i.e., data points that belong to a specific class almost surely). However, when there are no anchor points, the transition matrix will be poorly learned, and current consistent classifiers will degenerate significantly. In this paper, without employing anchor points, we propose a \textit{transition-revision} ($T$-Revision) method to effectively learn transition matrices, leading to better classifiers. Specifically, to learn a transition matrix, we first initialize it by exploiting data points that are similar to anchor points, namely those with high \textit{noisy class posterior probabilities}. We then revise the initialized matrix by adding a \textit{slack variable}, which can be learned and validated together with the classifier by using noisy data. Empirical results on benchmark-simulated and real-world label-noise datasets demonstrate that, without using exact anchor points, the proposed method is superior to state-of-the-art label-noise learning methods.
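To make the two-step procedure in the abstract concrete, here is a minimal PyTorch sketch of the idea: initialize $\hat{T}$ from the examples with the highest noisy class posteriors (approximate anchor points), then learn a slack variable $\Delta T$ jointly with the classifier through a forward-corrected loss. All names (`estimate_transition_matrix`, `revision_loss`, `delta_T`) and the training details are illustrative assumptions, not the authors' reference implementation.

```python
# A hedged sketch of T-Revision, assuming a model pretrained on noisy data
# is available to supply estimated noisy class posteriors P(noisy Y | x).
import torch
import torch.nn.functional as F

def estimate_transition_matrix(noisy_posteriors):
    """Initialize T-hat from points that act like anchor points.

    noisy_posteriors: (n_samples, n_classes) tensor of estimated noisy
    class posteriors from a model pretrained on the noisy data.
    """
    n_classes = noisy_posteriors.shape[1]
    T_hat = torch.empty(n_classes, n_classes)
    for i in range(n_classes):
        # The example most confidently predicted as class i serves as an
        # approximate anchor point for class i; its noisy posterior gives
        # row i of the transition matrix, T[i, j] ~ P(noisy Y = j | Y = i).
        anchor_idx = noisy_posteriors[:, i].argmax()
        T_hat[i] = noisy_posteriors[anchor_idx]
    return T_hat

def revision_loss(logits, noisy_labels, T_hat, delta_T):
    """Forward-corrected loss with the revised matrix T-hat + delta_T."""
    clean_posterior = F.softmax(logits, dim=1)   # model estimate of P(Y | x)
    T = T_hat + delta_T                          # revised transition matrix
    # P(noisy Y = j | x) = sum_i P(Y = i | x) * T[i, j]
    noisy_posterior = clean_posterior @ T
    return F.nll_loss(torch.log(noisy_posterior.clamp_min(1e-12)), noisy_labels)

# Usage sketch: delta_T starts at zero and is optimized with the model, e.g.
#   delta_T = torch.zeros(n_classes, n_classes, requires_grad=True)
#   optimizer = torch.optim.SGD(list(model.parameters()) + [delta_T], lr=1e-2)
```

The design point of the slack variable is that the anchor-free initialization of $\hat{T}$ is only approximate; letting gradients adjust $\Delta T$ while the classifier trains allows the pair to be selected using only the noisy data, with no clean labels required.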

Task | Dataset | Model | Metric Name | Metric Value | Global Rank
--- | --- | --- | --- | --- | ---
Learning with noisy labels | CIFAR-100N | T-Revision | Accuracy (mean) | 51.55 | #23
Learning with noisy labels | CIFAR-10N-Aggregate | T-Revision | Accuracy (mean) | 88.52 | #21
Learning with noisy labels | CIFAR-10N-Random1 | T-Revision | Accuracy (mean) | 88.33 | #19
Learning with noisy labels | CIFAR-10N-Random2 | T-Revision | Accuracy (mean) | 87.71 | #18
Learning with noisy labels | CIFAR-10N-Random3 | T-Revision | Accuracy (mean) | 87.79 | #18
Learning with noisy labels | CIFAR-10N-Worst | T-Revision | Accuracy (mean) | 80.48 | #21
