Vector of Locally-Aggregated Word Embeddings (VLAWE): A Novel Document-level Representation

NAACL 2019 · Radu Tudor Ionescu, Andrei M. Butnaru

In this paper, we propose a novel representation for text documents based on aggregating word embedding vectors into document embeddings. Our approach is inspired by the Vector of Locally-Aggregated Descriptors used for image representation, and it works as follows. First, the word embeddings gathered from a collection of documents are clustered by k-means in order to learn a codebook of semantically-related word embeddings. Each word embedding is then associated with its nearest cluster centroid (codeword). The Vector of Locally-Aggregated Word Embeddings (VLAWE) representation of a document is then computed by accumulating the differences between each codeword vector and each word vector (from the document) associated with that codeword. We plug the VLAWE representation, which is learned in an unsupervised manner, into a classifier and show that it is useful for a diverse set of text classification tasks. We compare our approach with a broad range of recent state-of-the-art methods, demonstrating its effectiveness. Furthermore, we obtain a considerable improvement on the Movie Review data set, reporting an accuracy of 93.3%, which represents an absolute gain of 10% over the state-of-the-art approach. Our code is available at https://github.com/raduionescu/vlawe-boswe/.
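The pipeline described in the abstract (k-means codebook, nearest-codeword assignment, accumulation of residuals) can be illustrated with a short sketch. The snippet below is a minimal illustration using NumPy and scikit-learn, not the authors' released implementation (available at the GitHub link above); names such as `build_codebook`, `vlawe`, and `docs` are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical input: `docs` is a list of documents, each given as a list of
# word embedding vectors (1-D NumPy arrays of dimension d).

def build_codebook(docs, k):
    """Cluster all word embeddings in the corpus into k codewords."""
    all_words = np.vstack([w for doc in docs for w in doc])
    return KMeans(n_clusters=k, random_state=0).fit(all_words)

def vlawe(doc, kmeans):
    """Compute the VLAWE vector of one document.

    Each word embedding is assigned to its nearest codeword, and the
    difference between the codeword and the word vector is accumulated
    per codeword. Concatenating the k accumulated residuals gives a
    k*d-dimensional document representation.
    """
    k, d = kmeans.cluster_centers_.shape
    residuals = np.zeros((k, d))
    words = np.vstack(doc)
    assignments = kmeans.predict(words)
    for word, c in zip(words, assignments):
        residuals[c] += kmeans.cluster_centers_[c] - word
    return residuals.reshape(-1)

# Usage sketch: the resulting vectors would be fed to any standard
# classifier (e.g. a linear SVM) for the text classification tasks.
```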


Results from the Paper


Task                            | Dataset       | Model | Metric   | Value | Global Rank
Sentiment Analysis              | MR            | VLAWE | Accuracy | 93.3  | #1
Text Classification             | MR            | VLAWE | Accuracy | 93.3  | #1
Multi-Label Text Classification | Reuters-21578 | VLAWE | Micro-F1 | 89.3  | #6
Document Classification         | Reuters-21578 | VLAWE | F1       | 89.3  | #2
Subjectivity Analysis           | SUBJ          | VLAWE | Accuracy | 95.0  | #6
Text Classification             | TREC-6        | VLAWE | Error    | 5.8   | #11

Methods


No methods listed for this paper.