Dynamic Evaluation of Transformer Language Models

17 Apr 2019 · Ben Krause, Emmanuel Kahembwe, Iain Murray, Steve Renals

This research note combines two methods that have recently improved the state of the art in language modeling: Transformers and dynamic evaluation. Transformers use stacked layers of self-attention that allow them to capture long range dependencies in sequential data. Dynamic evaluation fits models to the recent sequence history, allowing them to assign higher probabilities to re-occurring sequential patterns. By applying dynamic evaluation to Transformer-XL models, we improve the state of the art on enwik8 from 0.99 to 0.94 bits/char, text8 from 1.08 to 1.04 bits/char, and WikiText-103 from 18.3 to 16.4 perplexity points.
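To make the adaptation step concrete, below is a minimal sketch of dynamic evaluation at test time, assuming a standard PyTorch language model whose forward pass returns per-token logits. The function name `dynamic_eval`, the `(inputs, targets)` segment format, and the learning-rate and decay values are illustrative assumptions, not the authors' released code; the sketch uses a plain SGD step plus decay toward the trained weights, whereas the paper's strongest results use an RMSprop-style update.

```python
import torch
import torch.nn.functional as F


def dynamic_eval(model, segments, lr=1e-5, decay=2e-3):
    """Sketch of dynamic evaluation (hypothetical helper, not the authors' code).

    Scores a stream of (inputs, targets) segments in order; after each segment
    is scored, the parameters take a gradient step on that segment's loss and
    are decayed back toward the originally trained weights, so the model
    adapts to patterns that recur in the recent sequence history.
    """
    original = {n: p.detach().clone() for n, p in model.named_parameters()}
    total_loss, total_tokens = 0.0, 0

    for inputs, targets in segments:
        model.zero_grad()
        logits = model(inputs)  # assumed: forward pass returns per-token logits
        loss = F.cross_entropy(logits.view(-1, logits.size(-1)), targets.view(-1))
        total_loss += loss.item() * targets.numel()
        total_tokens += targets.numel()

        loss.backward()  # adapt only after the segment has been scored
        with torch.no_grad():
            for n, p in model.named_parameters():
                if p.grad is None:
                    continue
                p -= lr * p.grad                # plain SGD step (paper also uses RMS-style updates)
                p += decay * (original[n] - p)  # decay back toward the trained weights

    return total_loss / total_tokens  # average per-token loss in nats
```

Because each segment is scored before the update, the reported loss remains a valid evaluation of the model on unseen text; the update only influences predictions for later segments.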


Results from the Paper


All results are for the Language Modelling task.

enwik8, Transformer-XL (24 layers, RMS dynamic eval, decay)
    Bit per Character (BPC): 0.940 (global rank #2)
    Number of params: 277M (global rank #2)

Hutter Prize, Transformer-XL + RMS dynamic eval
    Bit per Character (BPC): 0.94 (global rank #1)
    Number of params: 277M (global rank #1)

Text8, Transformer-XL + RMS dynamic eval + decay
    Bit per Character (BPC): 1.038 (global rank #3)
    Number of params: 277M (global rank #2)

WikiText-103, Transformer-XL (SGD dynamic eval)
    Validation perplexity: 16.3 (global rank #7)
    Test perplexity: 17.0 (global rank #19)
    Number of params: 257M (global rank #12)

WikiText-103, Transformer-XL (RMS dynamic eval)
    Validation perplexity: 15.8 (global rank #3)
    Test perplexity: 16.4 (global rank #13)
    Number of params: 257M (global rank #12)

Methods


No methods listed for this paper.