Language Models with Transformers

arXiv 2019 · Chenguang Wang, Mu Li, Alexander J. Smola

The Transformer architecture is superior to RNN-based models in computational efficiency. Recently, GPT and BERT have demonstrated the efficacy of Transformer models on various NLP tasks by pre-training language models on large-scale corpora. Surprisingly, these Transformer architectures are suboptimal for language modeling itself: neither self-attention nor the positional encoding in the Transformer efficiently incorporates the word-level sequential context that is crucial to language modeling. In this paper, we explore effective Transformer architectures for language modeling, including adding LSTM layers to better capture the sequential context while keeping computation efficient. We propose Coordinate Architecture Search (CAS) to find an effective architecture through iterative refinement of the model. Experimental results on PTB, WikiText-2, and WikiText-103 show that CAS achieves perplexities between 20.42 and 34.11 on all three datasets, i.e. an average improvement of 12.0 perplexity units over state-of-the-art LSTMs. The source code is publicly available.
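To make the architectural idea concrete, below is a minimal, hypothetical PyTorch sketch of the kind of model the abstract describes: a Transformer encoder with additional LSTM layers stacked on top to re-introduce word-level sequential context before the output projection. The class and hyperparameter names (`TransformerWithLSTM`, `hidden_size`, layer counts) are illustrative assumptions, not the authors' released implementation, which instead starts from a pre-trained GPT/BERT body and refines it with CAS.

```python
# Hypothetical sketch (not the authors' code): a Transformer encoder
# augmented with LSTM layers to capture word-level sequential context,
# in the spirit of the architecture the paper proposes for language modeling.
import torch
import torch.nn as nn


class TransformerWithLSTM(nn.Module):
    def __init__(self, vocab_size, hidden_size=768, n_heads=12,
                 n_transformer_layers=12, n_lstm_layers=1):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=hidden_size, nhead=n_heads, batch_first=True)
        # In the paper this part would be a pre-trained GPT/BERT body,
        # with CAS deciding which layers to keep fixed during fine-tuning.
        self.transformer = nn.TransformerEncoder(
            encoder_layer, num_layers=n_transformer_layers)
        # Extra LSTM layers add explicit left-to-right sequential modeling.
        self.lstm = nn.LSTM(hidden_size, hidden_size,
                            num_layers=n_lstm_layers, batch_first=True)
        self.out = nn.Linear(hidden_size, vocab_size)

    def forward(self, token_ids, attn_mask=None):
        h = self.embed(token_ids)
        h = self.transformer(h, mask=attn_mask)   # contextual features
        h, _ = self.lstm(h)                       # sequential refinement
        return self.out(h)                        # next-token logits


# Toy usage: score a random batch of token ids.
model = TransformerWithLSTM(vocab_size=10000)
logits = model(torch.randint(0, 10000, (2, 16)))
print(logits.shape)  # torch.Size([2, 16, 10000])
```

CAS itself would then search over such modifications (e.g. adding LSTM layers or fixing subsets of the pre-trained Transformer layers) and keep the variant with the best validation perplexity at each refinement step.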


Results from the Paper


Ranked #2 on Language Modelling on Penn Treebank (Word Level) (using extra training data)

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|------|---------|-------|-------------|--------------|-------------|
| Language Modelling | Penn Treebank (Word Level) | BERT-Large-CAS | Validation perplexity | 36.1 | #1 |
| Language Modelling | Penn Treebank (Word Level) | BERT-Large-CAS | Test perplexity | 31.3 | #2 |
| Language Modelling | Penn Treebank (Word Level) | BERT-Large-CAS | Params | 395M | #3 |
| Language Modelling | WikiText-103 | BERT-Large-CAS | Validation perplexity | 19.6 | #19 |
| Language Modelling | WikiText-103 | BERT-Large-CAS | Test perplexity | 20.4 | #43 |
| Language Modelling | WikiText-103 | BERT-Large-CAS | Number of params | 395M | #9 |
| Language Modelling | WikiText-2 | BERT-Large-CAS | Validation perplexity | 37.7 | #2 |
| Language Modelling | WikiText-2 | BERT-Large-CAS | Test perplexity | 34.1 | #10 |
| Language Modelling | WikiText-2 | BERT-Large-CAS | Number of params | 395M | #4 |

Methods