Character-Level Language Modeling with Deeper Self-Attention

9 Aug 2018  ·  Rami Al-Rfou, Dokook Choe, Noah Constant, Mandy Guo, Llion Jones

LSTMs and other RNN variants have shown strong performance on character-level language modeling. These models are typically trained using truncated backpropagation through time, and it is common to assume that their success stems from their ability to remember long-term contexts. In this paper, we show that a deep (64-layer) transformer model with fixed context outperforms RNN variants by a large margin, achieving state of the art on two popular benchmarks: 1.13 bits per character on text8 and 1.06 on enwik8. To get good results at this depth, we show that it is important to add auxiliary losses, both at intermediate network layers and intermediate sequence positions.

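The recipe in the abstract (a fixed-context causal transformer, made trainable at depth by auxiliary next-character losses at intermediate layers and at every sequence position) can be sketched in a few lines. Below is a minimal PyTorch-style sketch, not the authors' code: the layer count, widths, auxiliary-loss weight, and all module and function names are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeepCharTransformer(nn.Module):
    """Causal character transformer whose intermediate layers also predict
    the next character at every sequence position (sketch, assumed sizes)."""
    def __init__(self, vocab_size=256, d_model=512, n_heads=8, n_layers=12, max_len=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        # Learned positional embeddings for the fixed-length context window.
        self.pos = nn.Parameter(torch.zeros(max_len, d_model))
        self.layers = nn.ModuleList(
            [nn.TransformerEncoderLayer(d_model, n_heads, 4 * d_model, batch_first=True)
             for _ in range(n_layers)]
        )
        # One classifier per layer, so intermediate layers contribute auxiliary predictions.
        self.heads = nn.ModuleList([nn.Linear(d_model, vocab_size) for _ in range(n_layers)])

    def forward(self, x):                      # x: (batch, time) of character ids
        T = x.size(1)
        # Additive causal mask: position t may only attend to positions <= t.
        causal = torch.full((T, T), float("-inf"), device=x.device).triu(1)
        h = self.embed(x) + self.pos[:T]
        all_logits = []
        for layer, head in zip(self.layers, self.heads):
            h = layer(h, src_mask=causal)
            all_logits.append(head(h))         # predictions at every position of this layer
        return all_logits

def loss_with_aux(all_logits, targets, aux_weight=0.5):
    """Cross-entropy at the top layer plus down-weighted cross-entropy at each
    intermediate layer, summed over all sequence positions."""
    total = 0.0
    for i, logits in enumerate(all_logits):
        w = 1.0 if i == len(all_logits) - 1 else aux_weight
        total = total + w * F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                                            targets.reshape(-1))
    return total
```

The paper additionally schedules when each intermediate layer's loss is switched off during training and describes multi-target variants; the fixed auxiliary weight above is a simplification.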

Results from the Paper


| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|------|---------|-------|-------------|--------------|-------------|
| Language Modelling | enwik8 | Transformer (64 layers) | Bit per Character (BPC) | 1.06 | #25 |
| Language Modelling | enwik8 | Transformer (64 layers) | Number of params | 235M | #5 |
| Language Modelling | Hutter Prize | 64-layer Character Transformer Model | Bit per Character (BPC) | 1.06 | #8 |
| Language Modelling | Hutter Prize | 64-layer Character Transformer Model | Number of params | 235M | #3 |
| Language Modelling | Hutter Prize | 12-layer Character Transformer Model | Bit per Character (BPC) | 1.11 | #11 |
| Language Modelling | Hutter Prize | 12-layer Character Transformer Model | Number of params | 44M | #13 |

Results from Other Papers


| Task | Dataset | Model | Metric Name | Metric Value | Rank |
|------|---------|-------|-------------|--------------|------|
| Language Modelling | enwik8 | 64-layer Character Transformer Model | Bit per Character (BPC) | 1.11 | #29 |
| Language Modelling | enwik8 | 64-layer Character Transformer Model | Number of params | 44M | #26 |
| Language Modelling | Text8 | 64-layer Character Transformer Model | Bit per Character (BPC) | 1.13 | #11 |
| Language Modelling | Text8 | 64-layer Character Transformer Model | Number of params | 235M | #4 |
| Language Modelling | Text8 | 12-layer Character Transformer Model | Bit per Character (BPC) | 1.18 | #13 |
| Language Modelling | Text8 | 12-layer Character Transformer Model | Number of params | 44M | #12 |

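For reference, the bits-per-character metric in these tables is the model's average negative base-2 log-likelihood per character, i.e. the usual per-character cross-entropy in nats divided by ln 2. A small illustrative conversion (the function name is mine, not from any benchmark tooling):

```python
import math

def bpc_from_nats(mean_cross_entropy_nats: float) -> float:
    # Bits per character = mean negative log2-likelihood = nats / ln 2.
    return mean_cross_entropy_nats / math.log(2)

# For example, a per-character cross-entropy of about 0.735 nats is ~1.06 BPC.
print(round(bpc_from_nats(0.735), 2))  # 1.06
```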