Multiplicative LSTM for sequence modelling

26 Sep 2016 · Ben Krause, Liang Lu, Iain Murray, Steve Renals

We introduce multiplicative LSTM (mLSTM), a recurrent neural network architecture for sequence modelling that combines the long short-term memory (LSTM) and multiplicative recurrent neural network architectures. mLSTM is characterised by its ability to have different recurrent transition functions for each possible input, which we argue makes it more expressive for autoregressive density estimation. We demonstrate empirically that mLSTM outperforms standard LSTM and its deep variants for a range of character level language modelling tasks. In this version of the paper, we regularise mLSTM to achieve 1.27 bits/char on text8 and 1.24 bits/char on Hutter Prize. We also apply a purely byte-level mLSTM on the WikiText-2 dataset to achieve a character level entropy of 1.26 bits/char, corresponding to a word level perplexity of 88.8, which is comparable to word level LSTMs regularised in similar ways on the same task.
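To make the "different recurrent transition function for each possible input" idea concrete, below is a minimal NumPy sketch of one mLSTM step. It is an illustration, not the authors' code: the parameter names (Wmx, Wmh, Wx, Wm, b) are placeholders, and the tanh nonlinearities on the candidate and the cell output follow the common LSTM convention, which may differ in detail from the paper's exact formulation. The core element it does reproduce is the multiplicative intermediate state m_t, an elementwise product of input and hidden-state projections, which replaces h_{t-1} in the gate equations.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mlstm_cell(x, h_prev, c_prev, params):
    """One step of an mLSTM cell (illustrative sketch, not the authors' code).

    The intermediate state m_t is a multiplicative combination of the current
    input and the previous hidden state (the mRNN half of the hybrid); the
    LSTM gates are then conditioned on m_t instead of h_{t-1}.
    """
    Wmx, Wmh, Wx, Wm, b = (params[k] for k in ("Wmx", "Wmh", "Wx", "Wm", "b"))

    # Multiplicative intermediate state: input-dependent recurrent transition.
    m = (x @ Wmx) * (h_prev @ Wmh)

    # Standard LSTM gates and candidate, computed from x and m.
    z = x @ Wx + m @ Wm + b
    i, f, o, u = np.split(z, 4, axis=-1)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
    u = np.tanh(u)

    c = f * c_prev + i * u      # cell state update
    h = o * np.tanh(c)          # new hidden state
    return h, c
```

When x is a one-hot character (or byte) encoding, x @ Wmx selects a different row of Wmx for each symbol, so every input symbol effectively induces its own hidden-to-hidden transition, which is the property the abstract argues makes mLSTM more expressive for autoregressive density estimation.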


Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank |
|------|---------|-------|--------|-------|-------------|
| Language Modelling | enwik8 | Large mLSTM | Bit per Character (BPC) | 1.24 | #34 |
| Language Modelling | enwik8 | Large mLSTM | Number of params | 46M | #24 |
| Language Modelling | Hutter Prize | Large mLSTM +emb +WN +VD | Bit per Character (BPC) | 1.24 | #14 |
| Language Modelling | Hutter Prize | Large mLSTM +emb +WN +VD | Number of params | 46M | #10 |
| Language Modelling | Text8 | Large mLSTM +emb +WN +VD | Bit per Character (BPC) | 1.27 | #17 |
| Language Modelling | Text8 | Large mLSTM +emb +WN +VD | Number of params | 45M | #8 |
| Language Modelling | Text8 | Unregularised mLSTM | Bit per Character (BPC) | 1.40 | #21 |
| Language Modelling | Text8 | Unregularised mLSTM | Number of params | 45M | #8 |
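For reference, the bits-per-character values above relate to the word-level perplexity quoted in the abstract for WikiText-2 through the standard conversion via the average number of characters per word (including whitespace); the sketch below assumes that standard conversion and derives the implied characters-per-word figure from the paper's own numbers.

$$
\mathrm{ppl}_{\text{word}} = 2^{\,\mathrm{bpc}\,\times\,\bar{c}}, \qquad \bar{c} = \frac{\#\text{characters}}{\#\text{words}}.
$$

With bpc = 1.26 and a word-level perplexity of 88.8, this implies $\bar{c} = \log_2(88.8)/1.26 \approx 5.1$ characters per word on the WikiText-2 test data.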
