CurriculumNet: Weakly Supervised Learning from Large-Scale Web Images

We present a simple yet efficient approach for training deep neural networks on large-scale, weakly supervised web images that are crawled from the Internet using text queries, without any human annotation. We develop a principled learning strategy based on curriculum learning, with the goal of handling massive amounts of noisy labels and data imbalance effectively. We design a new learning curriculum by measuring the complexity of the data via its distribution density in a feature space, and we rank this complexity in an unsupervised manner. This allows an efficient implementation of curriculum learning on large-scale web images, yielding a high-performance CNN model in which the negative impact of noisy labels is substantially reduced. Importantly, our experiments show that images with highly noisy labels can, surprisingly, improve the generalization capability of the model by acting as a form of regularization. Our approach obtains state-of-the-art performance on four benchmarks: WebVision, ImageNet, Clothing-1M, and Food-101. With an ensemble of multiple models, we achieved a top-5 error rate of 5.2% on the WebVision challenge for 1000-category classification. This result led by a wide margin, outperforming the second-place entry by nearly 50% in relative error. Code and models are available at: .
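The core idea above, ranking samples by their distribution density in feature space so that dense (likely clean) samples are learned first and sparse (likely noisy) samples later, can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm (which clusters CNN features per category); the function names `density_rank` and `split_curriculum` and the k-nearest-neighbour density estimate are assumptions for this sketch.

```python
import numpy as np

def density_rank(features, k=5):
    """Rank one category's samples by local density.

    density proxy: mean distance to the k nearest neighbours
    (smaller = denser = more likely a clean, typical example).
    Returns indices ordered from densest to sparsest.
    """
    # Pairwise Euclidean distances, (n, n)
    d = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)           # ignore self-distance
    knn_dist = np.sort(d, axis=1)[:, :k].mean(axis=1)
    return np.argsort(knn_dist)

def split_curriculum(features, fractions=(0.4, 0.3, 0.3), k=5):
    """Split samples into clean / moderately-noisy / noisy subsets
    by density rank; training then proceeds subset by subset."""
    order = density_rank(features, k)
    cuts = np.cumsum([int(round(f * len(order))) for f in fractions[:-1]])
    return np.split(order, cuts)
```

A training loop would first fit on the densest subset, then progressively mix in the sparser subsets, which is how the curriculum exposes the model to noisy labels only after a clean initialization.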

ECCV 2018
| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Image Classification | Clothing1M (using clean data) | CurriculumNet | Accuracy | 81.5% | #1 |
| Image Classification | WebVision-1000 | CurriculumNet (InceptionResNet-v2) | Top-1 Accuracy | 79.3% | #1 |
| | | | Top-5 Accuracy | 93.6% | #1 |
| Image Classification | WebVision-1000 | CurriculumNet (Inception-v2) | Top-1 Accuracy | 72.1% | #14 |
| | | | Top-5 Accuracy | 89.2% | #12 |
| | | | ImageNet Top-1 Accuracy | 64.8% | #9 |
| | | | ImageNet Top-5 Accuracy | 84.9% | #9 |

