Unsupervised Feature Learning via Non-Parametric Instance Discrimination

CVPR 2018  ·  Zhirong Wu, Yuanjun Xiong, Stella X. Yu, Dahua Lin

Neural net classifiers trained on data with annotated class labels can also capture apparent visual similarity among categories without being directed to do so. We study whether this observation can be extended beyond the conventional domain of supervised learning: Can we learn a good feature representation that captures apparent similarity among instances, instead of classes, by merely asking the feature to be discriminative of individual instances? We formulate this intuition as a non-parametric classification problem at the instance level, and use noise-contrastive estimation to tackle the computational challenges imposed by the large number of instance classes. Our experimental results demonstrate that, under unsupervised learning settings, our method surpasses the state-of-the-art on ImageNet classification by a large margin. Our method is also remarkable for consistently improving test performance with more training data and better network architectures. By fine-tuning the learned feature, we further obtain competitive results for semi-supervised learning and object detection tasks. Our non-parametric model is highly compact: with 128 features per image, our method requires only 600MB of storage for a million images, enabling fast nearest neighbour retrieval at run time.
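The core mechanism described in the abstract is an instance-level, non-parametric softmax: every training image is its own "class", and its network feature is scored against a memory bank of 128-d unit-norm features under a temperature. The sketch below is a minimal PyTorch illustration of that idea, not the authors' released code; the class name MemoryBank and the use of randomly sampled negatives with a cross-entropy loss (in place of the paper's full NCE objective with a noise distribution) are simplifying assumptions, while the temperature 0.07, feature dimension 128, and momentum-style memory update follow values reported in the paper.

```python
# Minimal sketch (assumed implementation, not the official code) of
# non-parametric instance discrimination with a feature memory bank.
import torch
import torch.nn.functional as F

class MemoryBank:
    def __init__(self, num_instances, dim=128, tau=0.07, momentum=0.5):
        self.tau = tau            # temperature of the non-parametric softmax
        self.momentum = momentum  # smoothing for memory updates
        # one unit-norm 128-d slot per training instance
        self.bank = F.normalize(torch.randn(num_instances, dim), dim=1)

    def loss(self, features, indices, num_negatives=4096):
        # features: (B, dim) network outputs for the batch; indices: (B,) instance ids
        features = F.normalize(features, dim=1)
        batch_size = features.size(0)
        # Approximation: draw uniform random negatives instead of the paper's NCE noise samples.
        neg_idx = torch.randint(0, self.bank.size(0), (batch_size, num_negatives))
        candidates = torch.cat(
            [self.bank[indices].unsqueeze(1),   # positive: the instance's own memory slot
             self.bank[neg_idx]],               # negatives: other instances' slots
            dim=1)                              # (B, 1 + K, dim)
        # cosine similarity / temperature as logits; the positive sits at column 0
        logits = torch.bmm(candidates, features.unsqueeze(2)).squeeze(2) / self.tau
        target = torch.zeros(batch_size, dtype=torch.long)
        return F.cross_entropy(logits, target)

    @torch.no_grad()
    def update(self, features, indices):
        # moving-average update of the visited memory slots, re-normalized to unit length
        features = F.normalize(features, dim=1)
        mixed = self.momentum * self.bank[indices] + (1 - self.momentum) * features
        self.bank[indices] = F.normalize(mixed, dim=1)
```

In a training step one would compute loss(features, indices), backpropagate through the encoder, then call update(features.detach(), indices); at test time the 128-d bank itself (roughly 512 bytes per image in float32, consistent with the ~600MB figure for a million images) supports nearest-neighbour retrieval by cosine similarity.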


Results from the Paper


Task | Dataset | Model | Metric | Value | Global Rank
Semi-Supervised Image Classification | ImageNet - 10% labeled data | Instance Discrimination | Top 5 Accuracy | 77.40% | #50
Semi-Supervised Image Classification | ImageNet - 1% labeled data | Instance Discrimination (ResNet-50) | Top 5 Accuracy | 39.20% | #40

Results from Other Papers


Task | Dataset | Model | Metric | Value | Rank
Semi-Supervised Image Classification | ImageNet - 10% labeled data | InstDisc (ResNet-50) | Top 5 Accuracy | 77.4% | #50
