Models that learn to assign a label to each image (i.e., cluster the dataset into its ground-truth classes) without ever seeing the ground-truth labels.
Image credit: ImageNet clustering results of SCAN: Learning to Classify Images without Labels (ECCV 2020)
This paper describes InfoGAN, an information-theoretic extension to the Generative Adversarial Network that is able to learn disentangled representations in a completely unsupervised manner.
Ranked #2 on Image Generation on Stanford Dogs
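The InfoGAN abstract above refers to maximizing mutual information between a subset of latent codes and the generated samples. A minimal sketch of the variational lower bound it optimizes, L_I(G, Q) = E[log Q(c|x)] + H(c), is below; the recognition-network outputs `q_probs` are random placeholders here, not a real network, and the function names are illustrative rather than taken from any InfoGAN implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def categorical_entropy(p):
    """Entropy H(c) of a categorical distribution, in nats."""
    return -np.sum(p * np.log(p + 1e-12))

def infogan_mi_lower_bound(code_onehot, q_probs, prior):
    """Variational lower bound L_I = E[log Q(c|x)] + H(c).

    code_onehot: (N, K) one-hot latent codes fed to the generator.
    q_probs:     (N, K) recognition-network outputs Q(c|x) for the
                 corresponding generated samples.
    prior:       (K,) prior over the categorical code.
    """
    log_q = np.log(q_probs + 1e-12)
    expected_log_q = np.mean(np.sum(code_onehot * log_q, axis=1))
    return expected_log_q + categorical_entropy(prior)

K, N = 10, 64
prior = np.full(K, 1.0 / K)
codes = rng.integers(0, K, size=N)
code_onehot = np.eye(K)[codes]
# Stand-in for Q(c|x): a softmax over random logits (no real network here).
logits = rng.normal(size=(N, K))
q_probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
print(infogan_mi_lower_bound(code_onehot, q_probs, prior))
```

When Q recovers the code perfectly, E[log Q(c|x)] is 0 and the bound reaches its maximum H(c) = log K, which is why maximizing it encourages the generator to keep the code information in its outputs.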
In this paper, we propose the "adversarial autoencoder" (AAE), which is a probabilistic autoencoder that uses the recently proposed generative adversarial networks (GAN) to perform variational inference by matching the aggregated posterior of the hidden code vector of the autoencoder with an arbitrary prior distribution.
Ranked #4 on Unsupervised Image Classification on MNIST
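The AAE abstract above hinges on one trick: a discriminator is trained to tell samples from the prior p(z) apart from the encoder's aggregated posterior q(z), and the encoder is trained to fool it, which pushes q(z) toward p(z). A toy NumPy sketch of the two adversarial losses is below; the linear discriminator and the Gaussian stand-ins for encoder codes are placeholder assumptions, not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def discriminator_loss(d_prior, d_posterior):
    """Binary cross-entropy: prior samples are labelled real (1),
    aggregated-posterior samples are labelled fake (0)."""
    eps = 1e-12
    return (-np.mean(np.log(d_prior + eps))
            - np.mean(np.log(1.0 - d_posterior + eps)))

def encoder_adv_loss(d_posterior):
    """The encoder's adversarial objective: make its codes look
    like samples from the prior."""
    return -np.mean(np.log(d_posterior + 1e-12))

# Toy linear discriminator on 2-D codes (weights are placeholders).
w, b = rng.normal(size=2), 0.0
z_prior = rng.normal(size=(128, 2))           # p(z): standard Gaussian
z_post = rng.normal(loc=2.0, size=(128, 2))   # stand-in encoder codes q(z)
d_prior = sigmoid(z_prior @ w + b)
d_post = sigmoid(z_post @ w + b)
print(discriminator_loss(d_prior, d_post), encoder_adv_loss(d_post))
```

Alternating these two updates with the usual reconstruction loss is what lets the AAE impose an arbitrary prior on the code space without a tractable KL term.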
The method is not specialised to computer vision and operates on any paired dataset samples; in our experiments we use random transforms to obtain a pair from each image.
Ranked #1 on Unsupervised Semantic Segmentation on COCO-Stuff-3
First, a self-supervised task from representation learning is employed to obtain semantically meaningful features.
Ranked #1 on Image Clustering on ImageNet
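The SCAN item above describes a two-step approach: self-supervised features first, then clustering that encourages each sample to share its cluster assignment with its mined nearest neighbours. A sketch of a SCAN-style clustering objective (consistency term plus an entropy term that keeps clusters balanced) is below; the function name, the single-neighbour simplification, and the `entropy_weight` default are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def scan_loss(probs, neighbor_probs, entropy_weight=5.0):
    """SCAN-style objective (sketch).

    probs:          (N, K) softmax cluster assignments.
    neighbor_probs: (N, K) assignments of one mined nearest
                    neighbour per sample.
    Minimized when each sample agrees confidently with its
    neighbour and the marginal over clusters stays uniform.
    """
    eps = 1e-12
    dot = np.sum(probs * neighbor_probs, axis=1)     # <p_x, p_neighbour>
    consistency = -np.mean(np.log(dot + eps))        # agree with neighbours
    mean_p = probs.mean(axis=0)                      # marginal cluster usage
    entropy = -np.sum(mean_p * np.log(mean_p + eps)) # balance term
    return consistency - entropy_weight * entropy
```

Without the entropy term the consistency loss alone has a trivial minimum (assign everything to one cluster), which is why the balance term is subtracted rather than optional.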
Clustering is central to many data-driven application domains and has been studied extensively in terms of distance functions and grouping algorithms.
Ranked #3 on Unsupervised Image Classification on SVHN (using extra training data)
Learning discrete representations of data is a central machine learning task because of the compactness of the representations and ease of interpretation.
Ranked #2 on Unsupervised Image Classification on SVHN (using extra training data)
Unsupervised learning has always been appealing to machine learning researchers and practitioners, allowing them to avoid the expensive and complicated process of labeling data.
Ranked #2 on Unsupervised Image Classification on ImageNet
Unsupervised image clustering methods often introduce alternative objectives to indirectly train the model and are subject to faulty predictions and overconfident results.
Ranked #1 on Unsupervised Image Classification on STL-10
Our approach is based on an objective function that trades off mutual information between observed examples and their predicted categorical class distribution against the robustness of the classifier to an adversarial generative model.
Ranked #5 on Unsupervised Image Classification on MNIST
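The first half of the objective above, the mutual information between examples and their predicted categorical class distribution, has a standard closed form for a discriminative clusterer: the entropy of the marginal class distribution minus the mean conditional entropy. A minimal sketch of that term is below (the adversarial robustness half would require a generator and is omitted); the function name is illustrative.

```python
import numpy as np

def predicted_class_mutual_information(probs):
    """I(X; Y) for a discriminative clusterer:
    H(E_x[p(y|x)]) - E_x[H(p(y|x))].

    probs: (N, K) per-example predicted class distributions.
    High when predictions are individually confident (low
    conditional entropy) yet spread across classes overall
    (high marginal entropy).
    """
    eps = 1e-12
    marginal = probs.mean(axis=0)
    h_marginal = -np.sum(marginal * np.log(marginal + eps))
    h_conditional = -np.mean(np.sum(probs * np.log(probs + eps), axis=1))
    return h_marginal - h_conditional
```

Maximizing this term alone already discourages both degenerate solutions: assigning everything to one class kills the marginal entropy, and making every prediction uniform kills the conditional-entropy gap.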