Low-shot learning with large-scale diffusion

This paper considers the problem of inferring image labels when only a few annotated examples are available at training time. This setup is often referred to as low-shot learning, where a standard approach is to re-train the last few layers of a convolutional neural network learned on separate classes for which training examples are abundant. We consider a semi-supervised setting based on a large collection of unlabeled images to support label propagation. This is made possible by leveraging recent advances in large-scale similarity graph construction. We show that, despite its conceptual simplicity, scaling label propagation up to hundreds of millions of images leads to state-of-the-art accuracy in the low-shot learning regime.
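The paper's actual pipeline constructs a similarity graph over roughly a hundred million images and diffuses the few available labels through it. A minimal toy sketch of that label-propagation idea is below; the function names, the Gaussian-kernel k-NN graph, and the parameters (`alpha`, `k`, `sigma`, iteration count) are illustrative assumptions for a small dense example, not the authors' large-scale implementation.

```python
import numpy as np

def knn_graph(X, k=5, sigma=1.0):
    """Symmetrized, normalized k-NN affinity matrix (toy dense version).

    Uses a Gaussian kernel on Euclidean distances; the paper instead builds
    an approximate k-NN graph over CNN descriptors at very large scale.
    """
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    sim = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(sim, 0.0)                            # no self-loops
    W = np.zeros_like(sim)
    for i in range(sim.shape[0]):
        nn = np.argpartition(-sim[i], k)[:k]              # k most similar neighbors
        W[i, nn] = sim[i, nn]
    W = np.maximum(W, W.T)                                # symmetrize
    d = W.sum(axis=1)
    d[d == 0] = 1.0                                       # guard isolated nodes
    dinv = 1.0 / np.sqrt(d)
    return (W * dinv[:, None]) * dinv[None, :]            # S = D^-1/2 W D^-1/2

def label_propagation(S, labels, n_classes, alpha=0.9, n_iter=50):
    """Diffuse labels over the graph: F <- alpha * S @ F + (1 - alpha) * Y.

    `labels` holds a class index for the few annotated points and -1 elsewhere.
    """
    n = S.shape[0]
    Y = np.zeros((n, n_classes))
    for i, y in enumerate(labels):
        if y >= 0:
            Y[i, y] = 1.0
    F = Y.copy()
    for _ in range(n_iter):
        F = alpha * (S @ F) + (1 - alpha) * Y             # diffusion update
    return F.argmax(axis=1)                               # predicted class per node
```

With two well-separated point clusters and a single labeled point in each, the diffusion assigns every unlabeled point the label of its cluster.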

PDF Abstract CVPR 2018
| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Few-Shot Image Classification | ImageNet-FS (1-shot, novel) | LSD (ResNet-50) | Top-5 Accuracy (%) | 57.7 | #6 |
| Few-Shot Image Classification | ImageNet-FS (2-shot, novel) | LSD (ResNet-50) | Top-5 Accuracy (%) | 66.9 | #7 |
| Few-Shot Image Classification | ImageNet-FS (5-shot, all) | LSD (ResNet-50) | Top-5 Accuracy (%) | 73.8 | #8 |

Methods


No methods listed for this paper.