IPCL: Iterative Pseudo-Supervised Contrastive Learning to Improve Self-Supervised Feature Representation

Self-supervised learning with a contrastive batch approach has become a powerful tool for representation learning in computer vision. The performance of downstream tasks is proportional to the quality of the visual features learned during self-supervised pre-training. Existing contrastive batch approaches depend heavily on data augmentation to learn latent information from unlabelled datasets. We argue that incorporating the dataset's intra-class variation into a contrastive batch approach further improves the quality of the visual representation. In this paper, we propose a novel self-supervised learning approach named Iterative Pseudo-supervised Contrastive Learning (IPCL), which uses a balanced combination of image augmentations and pseudo-class information to improve the visual representation iteratively. Experimental results show that our proposed method surpasses the baseline self-supervised method with the batch contrastive approach. It improves visual representation quality across multiple datasets, leading to better performance on the downstream unsupervised image classification task.
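
The core idea can be sketched as a supervised-contrastive-style objective in which positives are defined by pseudo-class assignments (e.g., cluster labels) rather than ground-truth labels, refined over successive training rounds. The snippet below is a minimal PyTorch illustration under that assumption; the function name, the exact loss variant, and the k-means-based pseudo-labelling in the commented loop are our own stand-ins, not the paper's precise formulation.

```python
import torch
import torch.nn.functional as F

def pseudo_supervised_contrastive_loss(features, pseudo_labels, temperature=0.1):
    """SupCon-style loss where positives share a pseudo-label.

    features:      (N, D) batch of embeddings (augmented views included).
    pseudo_labels: (N,) integer pseudo-class assignments from the previous round.
    """
    features = F.normalize(features, dim=1)               # cosine-similarity space
    sim = features @ features.T / temperature             # pairwise similarities
    n = sim.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=sim.device)
    sim = sim.masked_fill(self_mask, float("-inf"))       # never contrast with self
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    log_prob = log_prob.masked_fill(self_mask, 0.0)       # avoid -inf * 0 = nan below

    # Positives: other samples assigned to the same pseudo-class.
    pos_mask = pseudo_labels.unsqueeze(0).eq(pseudo_labels.unsqueeze(1)) & ~self_mask
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    loss_per_anchor = -(log_prob * pos_mask).sum(dim=1) / pos_counts
    return loss_per_anchor[pos_mask.any(dim=1)].mean()    # skip anchors with no positives

# Hypothetical outer loop: alternate pseudo-labelling and contrastive training.
# for round in range(num_rounds):
#     pseudo_labels = kmeans(embed_dataset(encoder))      # reassign pseudo-classes
#     train_one_round(encoder, pseudo_supervised_contrastive_loss)
```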

Results from the Paper


Task                               Dataset    Model            Metric            Value    Global Rank
Contrastive Learning               CIFAR-10   IPCL (ResNet18)  Accuracy (Top-1)  84.77    #1
Unsupervised Image Classification  CIFAR-10   IPCL (ResNet18)  Accuracy          88.81    #1
Contrastive Learning               STL-10     IPCL (ResNet18)  Accuracy (Top-1)  85.55    #1
Unsupervised Image Classification  STL-10     IPCL (ResNet18)  Accuracy          80.91    #1
