Training Noise-Robust Deep Neural Networks via Meta-Learning

CVPR 2020 · Zhen Wang, Guosheng Hu, Qinghua Hu

Label noise may significantly degrade the performance of Deep Neural Networks (DNNs). To train noise-robust DNNs, loss correction (LC) approaches have been introduced. LC approaches assume the noisy labels are corrupted from clean (ground-truth) labels by an unknown noise transition matrix T. The backbone DNNs and T can be trained separately, where T is approximated with prior knowledge; for example, T is constructed by stacking the maximum or mean predictions of the samples from each class. In this work, we propose a new loss correction approach, named Meta Loss Correction (MLC), to directly learn T from data via the meta-learning framework. MLC is model-agnostic and learns T from data rather than heuristically approximating it with prior knowledge. Extensive evaluations are conducted on computer vision (MNIST, CIFAR-10, CIFAR-100, Clothing1M) and natural language processing (Twitter) datasets. The experimental results show that MLC achieves very competitive performance against state-of-the-art approaches.
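The abstract describes two pieces: a corrected loss that routes the model's clean-label predictions through the transition matrix T, and a meta-learning step that fits T from data using a small clean set. Below is a minimal, hedged sketch of both under stated assumptions: it uses a standard "forward" correction (posterior times T) and a one-step virtual-SGD bilevel update, which is a common way to realize this idea rather than necessarily the paper's exact formulation. It assumes PyTorch >= 2.0 (for torch.func) and access to a small clean meta set; all names (forward_corrected_loss, meta_update_T, the toy model) are illustrative.

```python
# Sketch: loss correction with a noise transition matrix T, plus a
# meta step that learns T on clean meta data. Illustrative only; not
# the authors' released code.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.func import functional_call  # PyTorch >= 2.0

def forward_corrected_loss(logits, noisy_labels, T):
    """Cross-entropy after mapping clean posteriors through T.

    T[i, j] models P(noisy = j | clean = i); rows of T sum to 1.
    """
    p_clean = F.softmax(logits, dim=1)   # model's clean-label posterior
    p_noisy = p_clean @ T                # implied noisy-label posterior
    return F.nll_loss(torch.log(p_noisy + 1e-8), noisy_labels)

def meta_update_T(model, T_logits, opt_T, noisy_x, noisy_y,
                  clean_x, clean_y, inner_lr=0.1):
    """One bilevel step: a virtual SGD step on the corrected noisy loss,
    then update T so the virtually-updated model fits the clean set."""
    T = F.softmax(T_logits, dim=1)       # keep T row-stochastic
    params = dict(model.named_parameters())

    # Inner (virtual) step, kept differentiable w.r.t. T.
    inner_logits = functional_call(model, params, (noisy_x,))
    inner_loss = forward_corrected_loss(inner_logits, noisy_y, T)
    grads = torch.autograd.grad(inner_loss, tuple(params.values()),
                                create_graph=True)
    virtual = {k: p - inner_lr * g
               for (k, p), g in zip(params.items(), grads)}

    # Outer step: clean-set loss of the virtual model, backprop into T.
    meta_logits = functional_call(model, virtual, (clean_x,))
    meta_loss = F.cross_entropy(meta_logits, clean_y)
    opt_T.zero_grad()
    meta_loss.backward()
    opt_T.step()
    return meta_loss.item()

# Toy usage with random data.
num_classes, dim = 10, 32
model = nn.Linear(dim, num_classes)
T_logits = torch.zeros(num_classes, num_classes, requires_grad=True)
opt_T = torch.optim.SGD([T_logits], lr=0.5)
noisy_x = torch.randn(64, dim)
noisy_y = torch.randint(0, num_classes, (64,))
clean_x = torch.randn(16, dim)
clean_y = torch.randint(0, num_classes, (16,))
meta_update_T(model, T_logits, opt_T, noisy_x, noisy_y, clean_x, clean_y)
```

Parameterizing T as a row-wise softmax keeps it a valid transition matrix throughout training, and `create_graph=True` is what lets the clean-set gradient flow back through the virtual update into T; this is the sense in which T is "learned from data" rather than fixed by a heuristic.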
