Adaptive Input Representations for Neural Language Modeling

ICLR 2019 · Alexei Baevski, Michael Auli

We introduce adaptive input representations for neural language modeling, which extend the adaptive softmax of Grave et al. (2017) to input representations of variable capacity. There are several choices for how to factorize the input and output layers, and whether to model words, characters, or sub-word units. We perform a systematic comparison of popular choices for a self-attentional architecture. Our experiments show that models equipped with adaptive embeddings are more than twice as fast to train as the popular character-input CNN while having fewer parameters. On the WikiText-103 benchmark we achieve 18.7 perplexity, an improvement of 10.5 perplexity over the previously best published result, and on the Billion Word benchmark we achieve 23.02 perplexity.
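The core idea is to assign embedding capacity by word frequency: the vocabulary is sorted by frequency and split into clusters, frequent words get full-dimensional embeddings, rarer clusters get progressively smaller ones, and every cluster is projected back to the shared model dimension. Below is a minimal PyTorch sketch of such a layer; the cutoffs, reduction factor, and class name are illustrative assumptions, not the paper's exact configuration.

import torch
import torch.nn as nn


class AdaptiveInput(nn.Module):
    """Sketch of a variable-capacity input embedding (assumed hyperparameters)."""

    def __init__(self, vocab_size, embed_dim, cutoffs=(20000, 60000), factor=4):
        super().__init__()
        self.embed_dim = embed_dim
        self.cutoffs = list(cutoffs) + [vocab_size]  # frequency-sorted cluster boundaries
        self.embeddings = nn.ModuleList()
        self.projections = nn.ModuleList()
        prev = 0
        for i, cutoff in enumerate(self.cutoffs):
            dim = embed_dim // (factor ** i)  # rarer clusters get smaller embeddings
            self.embeddings.append(nn.Embedding(cutoff - prev, dim))
            # project each cluster back to the shared model dimension
            self.projections.append(nn.Linear(dim, embed_dim, bias=False))
            prev = cutoff

    def forward(self, tokens):
        out = torch.zeros(*tokens.shape, self.embed_dim, device=tokens.device)
        prev = 0
        for emb, proj, cutoff in zip(self.embeddings, self.projections, self.cutoffs):
            mask = (tokens >= prev) & (tokens < cutoff)
            if mask.any():
                out[mask] = proj(emb(tokens[mask] - prev))
            prev = cutoff
        return out


# Example: embed a batch of token ids from a 100k-word, frequency-sorted vocabulary.
layer = AdaptiveInput(vocab_size=100000, embed_dim=512)
print(layer(torch.randint(0, 100000, (2, 8))).shape)  # torch.Size([2, 8, 512])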

Task               | Dataset          | Model                         | Metric                | Value | Global Rank
Language Modelling | One Billion Word | Adaptive Input Very Large     | PPL                   | 23.02 | #5
Language Modelling | One Billion Word | Adaptive Input Very Large     | Number of params      | 1.0B  | #1
Language Modelling | One Billion Word | Adaptive Input Very Large     | Validation perplexity | 22.92 | #2
Language Modelling | One Billion Word | Adaptive Input Large          | PPL                   | 23.91 | #9
Language Modelling | One Billion Word | Adaptive Input Large          | Number of params      | 0.46B | #1
Language Modelling | One Billion Word | Adaptive Input Large          | Validation perplexity | 23.83 | #3
Language Modelling | WikiText-103     | Transformer (Adaptive inputs) | Validation perplexity | 17.97 | #15
Language Modelling | WikiText-103     | Transformer (Adaptive inputs) | Test perplexity       | 18.70 | #40
Language Modelling | WikiText-103     | Transformer (Adaptive inputs) | Number of params      | 247M  | #19
