74 papers with code • 30 benchmarks • 18 datasets
Models that partition the dataset into semantically meaningful clusters without access to ground-truth labels.
Image credit: ImageNet clustering results of SCAN: Learning to Classify Images without Labels (ECCV 2020)
Libraries
Use these libraries to find Image Clustering models and implementations.
In recent years, supervised learning with convolutional neural networks (CNNs) has seen huge adoption in computer vision applications.
Clustering is central to many data-driven application domains and has been studied extensively in terms of distance functions and grouping algorithms.
In this work, we present DeepCluster, a clustering method that jointly learns the parameters of a neural network and the cluster assignments of the resulting features.
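The alternation DeepCluster describes — cluster the current features, then train the network on the resulting pseudo-labels — can be sketched in a few lines. The toy below is a hedged stand-in, not the paper's implementation: a linear map plays the role of the CNN, a least-squares fit plays the role of SGD on a classifier head, and two Gaussian blobs play the role of image data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two well-separated blobs stand in for images.
X = np.vstack([rng.normal(0.0, 0.5, (50, 8)),
               rng.normal(3.0, 0.5, (50, 8))])

def kmeans(F, k, iters=20):
    """Plain Lloyd's algorithm, returning hard cluster assignments."""
    centers = F[rng.choice(len(F), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((F[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = F[labels == j].mean(0)
    return labels

k = 2
W = rng.normal(size=(X.shape[1], k))   # linear "network": a stand-in for a CNN

for epoch in range(5):
    F = X @ W                          # 1. compute features with current params
    labels = kmeans(F, k)              # 2. cluster features -> pseudo-labels
    Y = np.eye(k)[labels]              # 3. refit params to the pseudo-labels
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
```

The point of the loop is that steps 2 and 3 bootstrap each other: better features give better pseudo-labels, which in turn give a better training signal.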
The method is not specialised to computer vision and operates on any paired dataset samples; in our experiments we use random transforms to obtain a pair from each image.
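Producing a pair from one image just means applying two independent random transforms to it. The sketch below is an illustrative toy (random crop plus optional horizontal flip on a NumPy array); the paper's actual augmentations are its own.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_transform(img, rng):
    """Toy transform: random 24x24 crop plus an optional horizontal flip."""
    h, w = img.shape
    top, left = rng.integers(0, 5), rng.integers(0, 5)
    crop = img[top:top + h - 4, left:left + w - 4]
    return crop[:, ::-1] if rng.random() < 0.5 else crop

img = rng.normal(size=(28, 28))        # stand-in for one dataset image
pair = (random_transform(img, rng), random_transform(img, rng))
```

Two calls on the same image yield two correlated views, which is all such paired-sample methods need from the data.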
In this paper, we propose and study an algorithm, called Sparse Subspace Clustering (SSC), to cluster data points that lie in a union of low-dimensional subspaces.
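The SSC recipe — sparse self-expression of each point in terms of the others, then spectral clustering on the induced affinity graph — can be sketched with scikit-learn. This is a hedged illustration under simplifying assumptions: the two subspaces are orthogonal lines in R^3, and `Lasso` serves as the L1 self-expression solver.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(1)

# Two orthogonal 1-D subspaces (lines) in R^3: the union-of-subspaces model.
d1, d2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
X = np.vstack([np.outer(rng.uniform(0.5, 2.0, 20), d1),
               np.outer(rng.uniform(0.5, 2.0, 20), d2)])
n = len(X)

# Self-expression: write each point as a sparse combination of the others.
C = np.zeros((n, n))
for i in range(n):
    others = np.delete(np.arange(n), i)
    lasso = Lasso(alpha=0.01, fit_intercept=False, max_iter=5000)
    lasso.fit(X[others].T, X[i])
    C[i, others] = lasso.coef_

# Symmetrised affinity, then spectral clustering on the induced graph.
A = np.abs(C) + np.abs(C).T
labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(A)
```

Points from the same line reconstruct each other sparsely, so the affinity matrix is block-diagonal and spectral clustering recovers the two subspaces.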
We study a number of local and global manifold-learning methods on both the raw data and the autoencoded embedding. We conclude that UMAP in our framework is best able to find the most clusterable manifold in the embedding, suggesting that local manifold learning on an autoencoded embedding is effective for discovering higher-quality clusters.
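The three-stage pipeline this describes — embed, then manifold-learn, then cluster — can be sketched with scikit-learn alone. Two hedged substitutions keep the sketch self-contained: PCA stands in for the trained autoencoder, and t-SNE (which the paper also evaluates) stands in for UMAP, which lives in the third-party umap-learn package.

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
from sklearn.cluster import KMeans

X, _ = load_digits(return_X_y=True)
X = X[:500]                              # subsample to keep t-SNE quick

# Stage 1: low-dimensional embedding (the paper trains an autoencoder here).
Z = PCA(n_components=32, random_state=0).fit_transform(X)

# Stage 2: local manifold learning on the embedding (UMAP in the paper).
M = TSNE(n_components=2, random_state=0).fit_transform(Z)

# Stage 3: a simple clusterer on the manifold-learned space.
labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(M)
```

The design point is that the simple clusterer at the end only works because the first two stages have already pulled the classes apart.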
Combining clustering and representation learning is one of the most promising approaches for unsupervised learning of deep neural networks.
An Image Clustering Auto-Encoder Based on Predefined Evenly-Distributed Class Centroids and MMD Distance
The algorithm uses PEDCC (Predefined Evenly-Distributed Class Centroids) as the clustering centers, which ensures that the inter-class distance between latent features is maximal, and adds a data-distribution constraint, a data-augmentation constraint, an auto-encoder reconstruction constraint, and a Sobel smoothing constraint to improve clustering performance.
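For k classes in a space of dimension at least k - 1, one closed-form way to obtain evenly distributed unit-norm centroids is the regular simplex, whose pairwise inner products all equal -1/(k-1). This is a hedged stand-in — the paper generates PEDCC with its own procedure — but it conveys the "maximal, equal inter-class distance" idea:

```python
import numpy as np

def simplex_centroids(k):
    """k unit vectors with equal pairwise angles (vertices of a regular
    simplex), centred at the origin; the set has rank k - 1."""
    E = np.eye(k)
    C = E - E.mean(axis=0)                         # centre the standard basis
    C /= np.linalg.norm(C, axis=1, keepdims=True)  # project onto unit sphere
    return C

C = simplex_centroids(4)
G = C @ C.T    # every off-diagonal entry equals -1/(k-1) = -1/3
```

Because every pair of centroids is equally far apart, no class is privileged, which is what makes predefined centroids usable as fixed clustering targets.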
We explore the different roles of two fundamental concepts in graph theory, indegree and outdegree, in the context of clustering.
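A directed k-nearest-neighbour graph makes the asymmetry between the two degrees concrete: every node's outdegree is fixed at k by construction, while indegree varies with local density. The sketch below illustrates that asymmetry on toy data; it is not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# A dense blob plus a sparse background.
X = np.vstack([rng.normal(0.0, 0.3, (30, 2)),
               rng.uniform(-3.0, 3.0, (10, 2))])
n, k = len(X), 3

# Directed k-NN graph: an edge i -> j for each of i's k nearest neighbours.
D = ((X[:, None] - X[None]) ** 2).sum(-1)
np.fill_diagonal(D, np.inf)                     # no self-edges
nbrs = np.argsort(D, axis=1)[:, :k]

outdeg = np.full(n, k)                          # fixed by construction
indeg = np.bincount(nbrs.ravel(), minlength=n)  # varies with local density
```

Points in the dense blob are chosen as neighbours far more often than the background points, so indegree carries a density signal that outdegree cannot provide.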