FRAGE: Frequency-Agnostic Word Representation

Continuous word representations (also known as word embeddings) are a basic building block in many neural network-based models for natural language processing tasks. Although it is widely accepted that words with similar semantics should be close to each other in the embedding space, we find that word embeddings learned in several tasks are biased towards word frequency: the embeddings of high-frequency and low-frequency words lie in different subregions of the embedding space, and the embedding of a rare word can be far from that of a popular word even when the two are semantically similar. This makes the learned word embeddings ineffective, especially for rare words, and consequently limits the performance of these neural network models. In this paper, we develop a simple yet effective way to learn *FRequency-AGnostic word Embeddings* (FRAGE) using adversarial training. We conduct comprehensive studies on ten datasets across four natural language processing tasks: word similarity, language modeling, machine translation, and text classification. Results show that with FRAGE we achieve higher performance than the baselines in all tasks.

NeurIPS 2018
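At a high level, FRAGE plays a min-max game: a discriminator is trained to tell, from an embedding alone, whether a word is popular or rare, while the embeddings are trained jointly with the task loss to fool it, removing frequency information from the embedding space. Below is a minimal PyTorch sketch of this idea; the class and function names, the MLP discriminator shape, the top-20% popularity threshold, and the flipped-label (non-saturating) adversarial loss are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn


class FrequencyDiscriminator(nn.Module):
    """Hypothetical minimal MLP that predicts, from an embedding alone,
    whether the word is high-frequency (popular) or rare."""

    def __init__(self, embed_dim: int, hidden_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(embed_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),  # one logit: popular (1) vs. rare (0)
        )

    def forward(self, emb: torch.Tensor) -> torch.Tensor:
        return self.net(emb).squeeze(-1)


def frage_losses(embedding: nn.Embedding,
                 discriminator: FrequencyDiscriminator,
                 word_ids: torch.Tensor,
                 is_popular: torch.Tensor):
    """Return (d_loss, adv_loss) for one batch of word ids.

    is_popular: float 0/1 labels, e.g. 1 for words in the top 20% by
    corpus frequency (an assumed threshold).
    """
    bce = nn.BCEWithLogitsLoss()
    emb = embedding(word_ids)

    # Discriminator step: learn to classify frequency from fixed embeddings.
    d_loss = bce(discriminator(emb.detach()), is_popular)

    # Adversarial step: update the embeddings so the discriminator fails,
    # i.e. push rare and popular words into the same region of the space.
    adv_loss = bce(discriminator(emb), 1.0 - is_popular)
    return d_loss, adv_loss
```

In training, one would alternate between optimizing the discriminator on `d_loss` and optimizing the embeddings and task model on `task_loss + lambda * adv_loss`, where `lambda` is a balancing weight tuned per task (an assumption; the original work weighs the adversarial term against the task loss with a task-dependent coefficient).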
| Task | Dataset | Model | Metric | Value | Global Rank |
|------|---------|-------|--------|-------|-------------|
| Machine Translation | IWSLT2015 German-English | Transformer with FRAGE | BLEU score | 33.97 | #3 |
| Language Modelling | Penn Treebank (Word Level) | FRAGE + AWD-LSTM-MoS + dynamic eval | Validation perplexity | 47.38 | #5 |
| | | | Test perplexity | 46.54 | #7 |
| | | | Params | 22M | #23 |
| Language Modelling | WikiText-2 | FRAGE + AWD-LSTM-MoS + dynamic eval | Validation perplexity | 40.85 | #5 |
| | | | Test perplexity | 39.14 | #13 |
| | | | Number of params | 35M | #12 |
| Machine Translation | WMT2014 English-German | Transformer Big with FRAGE | BLEU score | 29.11 | #32 |
