Unsupervised Image Classification
28 papers with code • 7 benchmarks • 6 datasets
Models that learn to label each image (i.e., cluster the dataset into its ground-truth classes) without seeing the ground-truth labels.
Image credit: ImageNet clustering results of SCAN: Learning to Classify Images without Labels (ECCV 2020)
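Because the ground-truth labels are never seen during training, these methods are typically evaluated by finding the best one-to-one mapping between predicted cluster ids and true classes (at scale this is done with the Hungarian algorithm). A minimal sketch of that protocol, with an illustrative brute-force `clustering_accuracy` helper (all names here are invented for the example):

```python
from itertools import permutations

def clustering_accuracy(y_true, y_pred, n_classes):
    """Best accuracy over all one-to-one mappings from cluster ids to class ids.

    Brute force over permutations; fine for a handful of clusters.
    (At scale, the Hungarian algorithm is used instead.)
    """
    best = 0.0
    for perm in permutations(range(n_classes)):
        correct = sum(1 for t, p in zip(y_true, y_pred) if t == perm[p])
        best = max(best, correct / len(y_true))
    return best

# Toy example: cluster ids 0/1 mostly correspond to classes 1/0.
y_true = [1, 1, 0, 0, 1]
y_pred = [0, 0, 1, 1, 1]
print(clustering_accuracy(y_true, y_pred, 2))  # → 0.8
```

The metric is invariant to how the clusters happen to be numbered, which is exactly what an unsupervised method needs.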
Benchmarks
These leaderboards track progress in Unsupervised Image Classification.
Libraries
Use these libraries to find Unsupervised Image Classification models and implementations.
Latest papers with no code
Contrastive Knowledge Amalgamation for Unsupervised Image Classification
Current methods focus on coarsely aligning teachers and students in the common representation space, making it difficult for the student to learn the proper decision boundaries from a set of heterogeneous teachers.
ContraCluster: Learning to Classify without Labels by Contrastive Self-Supervision and Prototype-Based Semi-Supervision
The recent advances in representation learning inspire us to take on the challenging problem of unsupervised image classification in a principled way.
Minimalistic Unsupervised Learning with the Sparse Manifold Transform
Though there remains a small performance gap between our simple constructive model and SOTA methods, the evidence points to this as a promising direction for achieving a principled and white-box approach to unsupervised learning.
LatentGAN Autoencoder: Learning Disentangled Latent Distribution
In an autoencoder, the encoder generally approximates the latent distribution over the dataset, and the decoder generates samples using this learned latent distribution.
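The encode/decode round trip described above can be illustrated with a hedged toy sketch (a plain linear autoencoder in NumPy, not the paper's LatentGAN model; data, dimensions, and learning rate are all invented for the example):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 points in 5-D that actually lie on a 2-D subspace.
Z = rng.normal(size=(200, 2))
X = Z @ rng.normal(size=(2, 5))

# Linear autoencoder: encoder W_e (5 -> 2), decoder W_d (2 -> 5).
W_e = rng.normal(scale=0.5, size=(5, 2))
W_d = rng.normal(scale=0.5, size=(2, 5))

lr, losses = 0.01, []
for _ in range(500):
    H = X @ W_e                        # encoder: latent codes
    X_hat = H @ W_d                    # decoder: reconstructions
    err = X_hat - X
    losses.append(float(np.mean(np.sum(err ** 2, axis=1))))
    g = 2 * err / len(X)               # gradient of mean per-sample squared error
    grad_d = H.T @ g
    grad_e = X.T @ (g @ W_d.T)
    W_d -= lr * grad_d
    W_e -= lr * grad_e

print(f"reconstruction loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Since the data truly lives on a 2-D subspace, the 2-D bottleneck is enough for the reconstruction error to fall as training proceeds.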
Revisiting the Transferability of Supervised Pretraining: an MLP Perspective
The pretrain-finetune paradigm is a classical pipeline in visual learning.
Guided MCMC for Sparse Bayesian Models to Detect Rare Events in Images Sans Labeled Data
After the steady-state is obtained for the underlying Markov chain, it is possible to compute the posterior probability of the presence of the rare event in a given image.
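The idea of reading a posterior off a converged chain can be sketched with a minimal Metropolis sampler on an invented binary model (this is a generic illustration, not the paper's guided sampler; the prior and likelihood numbers are made up):

```python
import random

random.seed(0)

# Toy model: latent z in {0, 1} says whether the rare event is present.
prior = {0: 0.95, 1: 0.05}        # rare-event prior
likelihood = {0: 0.10, 1: 0.70}   # P(detector fires | z), for an observed firing

def unnorm_post(z):
    """Unnormalised posterior: prior times likelihood of the observation."""
    return prior[z] * likelihood[z]

# Metropolis sampling: propose flipping z, accept with min(1, posterior ratio).
z, samples = 0, []
for _ in range(20000):
    z_new = 1 - z
    if random.random() < min(1.0, unnorm_post(z_new) / unnorm_post(z)):
        z = z_new
    samples.append(z)

burn_in = 2000                    # discard draws taken before steady-state
est = sum(samples[burn_in:]) / len(samples[burn_in:])
exact = unnorm_post(1) / (unnorm_post(0) + unnorm_post(1))
print(f"MCMC estimate {est:.3f} vs exact posterior {exact:.3f}")
```

For this two-state model the posterior is computable in closed form, so the post-burn-in frequency of `z = 1` can be checked against it directly.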
Combining pretrained CNN feature extractors to enhance clustering of complex natural images
First, extensive experiments are conducted and show that, for a given dataset, the choice of the CNN architecture for feature extraction has a huge impact on the final clustering.
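The extract-then-cluster pipeline behind this line of work can be sketched end to end; as a hedged stand-in, synthetic 2-D "features" replace the CNN activations (in practice these would come from a pretrained backbone, e.g. a torchvision ResNet), clustered with a small self-contained k-means:

```python
import random

random.seed(1)

# Stand-in "CNN features": two well-separated 2-D blobs. In a real pipeline
# these vectors would be activations from a pretrained feature extractor.
feats = ([(random.gauss(0, 0.3), random.gauss(0, 0.3)) for _ in range(50)]
         + [(random.gauss(4, 0.3), random.gauss(4, 0.3)) for _ in range(50)])

def dist2(a, b):
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

def kmeans(points, k, iters=20):
    # Deterministic farthest-point initialisation.
    centers = [points[0]]
    while len(centers) < k:
        centers.append(max(points,
                           key=lambda p: min(dist2(p, c) for c in centers)))
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: each point goes to its nearest center.
        labels = [min(range(k), key=lambda c: dist2(p, centers[c]))
                  for p in points]
        # Update step: move each center to the mean of its cluster.
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centers[c] = (sum(x for x, _ in members) / len(members),
                              sum(y for _, y in members) / len(members))
    return labels

labels = kmeans(feats, 2)
```

The paper's point is that the quality of `feats` (i.e. the choice of backbone) dominates the final clustering; the clustering step itself can stay simple.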
AC-VAE: Learning Semantic Representation with VAE for Adaptive Clustering
Experimental evaluations show that the proposed method outperforms state-of-the-art representation learning methods in terms of neighbor clustering accuracy.
A Pseudo-labelling Auto-Encoder for unsupervised image classification
In this paper, we introduce a unique variant of the denoising Auto-Encoder and combine it with the perceptual loss to classify images in an unsupervised manner.
Unsupervised part representation by Flow Capsules
Capsule networks aim to parse images into a hierarchy of objects, parts and relations.