Unsupervised Visual Representation Learning with Increasing Object Shape Bias

17 Nov 2019 · Zhibo Wang, Shen Yan, XiaoYu Zhang, Niels Lobo

(Very early draft) Traditional supervised learning keeps pushing convolutional neural networks (CNNs) toward state-of-the-art performance. However, the lack of large-scale annotated data remains a major obstacle because of its high labeling cost; even the ImageNet dataset is now overfitted by complex models. The success of unsupervised learning methods, represented by the BERT model in natural language processing (NLP), shows their great potential: they make effectively unlimited training samples possible, and their strong universal generalization ability has directly changed the direction of NLP research. In this article, we propose a novel unsupervised learning method based on contrastive predictive coding. With it, we are able to train a model on any unannotated images and improve its performance to reach the state of the art at the same level of model complexity. Besides that, since the number of training images can be amplified without limit, a universal large-scale pre-trained computer vision model becomes possible in the future.
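The abstract names contrastive predictive coding as the training objective but gives no detail. As a rough illustration only, below is a minimal PyTorch sketch of the InfoNCE loss commonly used in contrastive predictive coding, with in-batch negatives; the function name, the temperature value, and the negative-sampling scheme are assumptions for illustration, not the authors' implementation.

import torch
import torch.nn.functional as F

def info_nce_loss(context, targets, temperature=0.1):
    """InfoNCE loss as used in contrastive predictive coding (sketch).

    context: (N, D) predicted representations.
    targets: (N, D) encoder outputs; targets[i] is the positive for
             context[i], and the other rows in the batch act as negatives.
    """
    context = F.normalize(context, dim=1)
    targets = F.normalize(targets, dim=1)
    # (N, N) similarity matrix; diagonal entries are the positive pairs.
    logits = context @ targets.t() / temperature
    labels = torch.arange(context.size(0), device=context.device)
    # Cross-entropy pushes each context toward its own target and away
    # from the other targets in the batch.
    return F.cross_entropy(logits, labels)

# Example usage with random features (hypothetical shapes).
pred = torch.randn(8, 128)
enc = torch.randn(8, 128)
loss = info_nce_loss(pred, enc)

Maximizing agreement between a prediction and its matching encoding, against in-batch negatives, is what lets such a model learn from unannotated images alone.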
