Unsupervised Image Classification
28 papers with code • 7 benchmarks • 6 datasets
Models that learn to label each image (i.e. cluster the dataset into its ground truth classes) without seeing the ground truth labels.
Image credit: ImageNet clustering results of SCAN: Learning to Classify Images without Labels (ECCV 2020)
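Because the model never sees the ground-truth labels, predicted cluster IDs are arbitrary and must be matched to classes before computing accuracy. A common way to score this task (a minimal sketch, not tied to any specific paper on this page) is best-match clustering accuracy via the Hungarian algorithm:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(y_true, y_pred):
    """Accuracy under the best one-to-one mapping from cluster IDs to classes.

    y_true: ground-truth class labels (ints), y_pred: predicted cluster IDs (ints).
    """
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    n = max(y_true.max(), y_pred.max()) + 1
    # cost[p, t] counts samples assigned to cluster p with true class t
    cost = np.zeros((n, n), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cost[p, t] += 1
    # Hungarian matching on the negated counts maximizes agreement
    rows, cols = linear_sum_assignment(-cost)
    return cost[rows, cols].sum() / y_true.size
```

For example, predictions `[1, 1, 0, 0]` against labels `[0, 0, 1, 1]` score 1.0, since the mapping 1→0, 0→1 aligns every sample.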
Benchmarks
These leaderboards are used to track progress in Unsupervised Image Classification.
Libraries
Use these libraries to find Unsupervised Image Classification models and implementations.
Latest papers
The VampPrior Mixture Model
Current clustering priors for deep latent variable models (DLVMs) require defining the number of clusters a priori and are susceptible to poor initialization.
Improving Cross-domain Few-shot Classification with Multilayer Perceptron
Multilayer perceptron (MLP) has shown its capability to learn transferable representations in various downstream tasks, such as unsupervised image classification and supervised concept generalization.
Stable Cluster Discrimination for Deep Clustering
Meanwhile, one-stage methods have been developed mainly for representation learning rather than clustering, with various constraints on cluster assignments designed to explicitly avoid collapse.
The Pursuit of Human Labeling: A New Perspective on Unsupervised Learning
Despite its simplicity, HUME outperforms a supervised linear classifier on top of self-supervised representations on the STL-10 dataset by a large margin and achieves comparable performance on the CIFAR-10 dataset.
MV-MR: multi-views and multi-representations for self-supervised learning and knowledge distillation
We present a new method of self-supervised learning and knowledge distillation based on the multi-views and multi-representations (MV-MR).
Capsule Network based Contrastive Learning of Unsupervised Visual Representations
Capsule Networks have advanced considerably over the past decade, outperforming traditional CNNs in various tasks due to their equivariant properties.
Loss Function Entropy Regularization for Diverse Decision Boundaries
This paper will present a straightforward method to modify a single unsupervised classification pipeline to automatically generate an ensemble of neural networks with varied decision boundaries to learn a more extensive feature set of classes.
DeepDPM: Deep Clustering With an Unknown Number of Clusters
Using a split/merge framework, a dynamic architecture that adapts to the changing K, and a novel loss, our proposed method outperforms existing nonparametric methods (both classical and deep ones).
iBOT: Image BERT Pre-Training with Online Tokenizer
We present iBOT, a self-supervised framework that can perform masked prediction with an online tokenizer.
Self-Supervised Learning by Estimating Twin Class Distributions
To solve this problem, we propose to maximize the mutual information between the input and the class predictions.
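The mutual-information objective above is commonly estimated as the entropy of the average class prediction minus the average per-sample prediction entropy: maximizing it encourages confident per-sample predictions spread evenly across classes. A minimal sketch of this estimate (an illustration of the general objective, not the paper's exact implementation):

```python
import numpy as np

def mutual_info_objective(probs, eps=1e-12):
    """Estimate I(X; Y) = H(mean prediction) - mean H(p(y|x)).

    probs: (N, K) array of softmax class predictions, rows summing to 1.
    """
    p_mean = probs.mean(axis=0)
    # Marginal entropy: high when cluster usage is balanced across classes
    marginal_entropy = -np.sum(p_mean * np.log(p_mean + eps))
    # Conditional entropy: low when each sample's prediction is confident
    cond_entropy = -np.mean(np.sum(probs * np.log(probs + eps), axis=1))
    return marginal_entropy - cond_entropy
```

Confident, balanced predictions (e.g. one-hot rows covering both classes) give a value near log K, while uniform predictions give 0, so maximizing this quantity pushes toward confident yet balanced cluster assignments.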