
Deep Mixtures of Unigrams for uncovering Topics in Textual Data

Mixtures of Unigrams are among the simplest and most efficient tools for clustering textual data: they assume that documents on the same topic have similar term distributions, naturally described by Multinomials. When the classification task is particularly challenging, for instance when the document-term matrix is high-dimensional and extremely sparse, a more composite representation can provide better insight into the grouping structure. In this work, we develop a deep version of Mixtures of Unigrams for the unsupervised classification of very short documents with a large number of terms, by allowing for models with additional latent layers; the proposal is derived in a Bayesian framework. The behaviour of Deep Mixtures of Unigrams is empirically compared with that of other traditional and state-of-the-art methods, namely $k$-means with cosine distance, $k$-means with Euclidean distance on data transformed via Semantic Analysis, Partitioning Around Medoids, Mixtures of Gaussians on semantic-based transformed data, hierarchical clustering with Ward's method and cosine dissimilarity, Latent Dirichlet Allocation, Mixtures of Unigrams estimated via the EM algorithm, Spectral Clustering, and Affinity Propagation clustering. Performance is evaluated in terms of both correct classification rate and Adjusted Rand Index. Simulation studies and real-data analyses show that going deep in clustering such data substantially improves classification accuracy.
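To make the baseline concrete, the sketch below implements the classical (shallow) Mixture of Unigrams fitted via the EM algorithm, one of the comparison methods listed in the abstract. It is a minimal illustration of the multinomial mixture idea, not the paper's deep Bayesian model; the function name and all parameters (`n_topics`, `n_iter`, the Dirichlet initialisation) are illustrative choices, not taken from the paper.

```python
import numpy as np

def mixture_of_unigrams_em(X, n_topics, n_iter=100, seed=0, eps=1e-10):
    """EM for a mixture of unigrams (multinomial mixture).

    X: (n_docs, n_terms) document-term count matrix.
    Returns mixing weights pi, topic-term distributions beta,
    and soft document-topic assignments gamma.
    """
    rng = np.random.default_rng(seed)
    n_docs, n_terms = X.shape
    pi = np.full(n_topics, 1.0 / n_topics)            # mixing weights
    beta = rng.dirichlet(np.ones(n_terms), n_topics)  # topic-term probabilities

    for _ in range(n_iter):
        # E-step: log p(z = k | doc d), up to the multinomial coefficient,
        # which is constant in k and cancels in the normalisation below
        log_resp = np.log(pi + eps) + X @ np.log(beta + eps).T
        log_resp -= log_resp.max(axis=1, keepdims=True)  # numerical stability
        gamma = np.exp(log_resp)
        gamma /= gamma.sum(axis=1, keepdims=True)

        # M-step: re-estimate mixing weights and term distributions
        pi = gamma.mean(axis=0)
        beta = gamma.T @ X
        beta /= beta.sum(axis=1, keepdims=True)

    return pi, beta, gamma
```

Hard cluster labels are obtained with `gamma.argmax(axis=1)`. The deep variant proposed in the paper replaces this single latent layer with further latent layers, which this sketch does not attempt to reproduce.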
