Improved Language Modeling by Decoding the Past

ACL 2019 · Siddhartha Brahma

Highly regularized LSTMs achieve impressive results on several benchmark datasets in language modeling. We propose a new regularization method based on decoding the last token in the context using the predicted distribution of the next token. This biases the model towards retaining more contextual information, in turn improving its ability to predict the next token. With negligible overhead in the number of parameters and training time, our Past Decode Regularization (PDR) method achieves a word level perplexity of 55.6 on the Penn Treebank and 63.5 on the WikiText-2 datasets using a single softmax. We also show gains by using PDR in combination with a mixture-of-softmaxes, achieving a word level perplexity of 53.8 and 60.5 on these datasets. In addition, our method achieves 1.169 bits-per-character on the Penn Treebank Character dataset for character level language modeling. These results constitute a new state-of-the-art in their respective settings.
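
As a rough illustration of the regularization described in the abstract, the sketch below adds a past-decode term to a standard next-token loss: the predicted next-token distribution is mapped back into embedding space and a small decoder is trained to recover the last token of the context from that vector. This is a minimal PyTorch sketch under assumed shapes and names (PDRLanguageModelLoss, past_decoder, and pdr_weight are illustrative), not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PDRLanguageModelLoss(nn.Module):
    """Sketch of an LM loss with a past-decode regularization (PDR) term.

    Names and shapes are hypothetical; this is not the paper's code.
    """

    def __init__(self, vocab_size, emb_dim, pdr_weight=0.1):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)  # input token embeddings
        self.past_decoder = nn.Linear(emb_dim, vocab_size)  # decodes the previous token
        self.pdr_weight = pdr_weight                        # regularization strength (tune on validation)

    def forward(self, next_token_logits, prev_tokens, next_tokens):
        # next_token_logits: (batch, seq, vocab) from the LSTM language model
        # prev_tokens:       (batch, seq) last token of each context window
        # next_tokens:       (batch, seq) targets for the usual LM loss

        # Standard next-token cross-entropy.
        lm_loss = F.cross_entropy(
            next_token_logits.reshape(-1, next_token_logits.size(-1)),
            next_tokens.reshape(-1),
        )

        # Past-decode term: take the predicted next-token distribution, map it
        # back to embedding space via an expectation over token embeddings, and
        # try to recover (decode) the previous token from that vector.
        next_dist = F.softmax(next_token_logits, dim=-1)       # (batch, seq, vocab)
        expected_emb = next_dist @ self.embedding.weight       # (batch, seq, emb_dim)
        past_logits = self.past_decoder(expected_emb)          # (batch, seq, vocab)
        pdr_loss = F.cross_entropy(
            past_logits.reshape(-1, past_logits.size(-1)),
            prev_tokens.reshape(-1),
        )

        return lm_loss + self.pdr_weight * pdr_loss
```

The extra decoder adds only a small number of parameters on top of the base model, consistent with the abstract's claim of negligible overhead; the weight on the past-decode term would be chosen by validation perplexity.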

Results (model: Past Decode Reg. + AWD-LSTM-MoS + dyn. eval.)

Language Modelling, Penn Treebank (Character Level):
    Bit per Character (BPC): 1.169 (global rank #6)
    Number of params: 13.8M (global rank #8)

Language Modelling, Penn Treebank (Word Level):
    Validation perplexity: 48.0 (global rank #6)
    Test perplexity: 47.3 (global rank #9)
    Number of params: 22M (global rank #23)

Language Modelling, WikiText-2:
    Validation perplexity: 42.0 (global rank #6)
    Test perplexity: 40.3 (global rank #14)
    Number of params: 35M (global rank #12)
