Sentence-State LSTM for Text Representation

ACL 2018  ·  Yue Zhang, Qi Liu, Linfeng Song

Bi-directional LSTMs are a powerful tool for text representation. However, they have been shown to suffer from various limitations due to their sequential nature. We investigate an alternative LSTM structure for encoding text, which maintains a parallel state for each word. Recurrent steps are used to perform local and global information exchange between words simultaneously, rather than reading a sequence of words incrementally. Results on various classification and sequence labelling benchmarks show that the proposed model has strong representation power, giving highly competitive performance compared to stacked BiLSTM models with similar parameter counts.
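To make the parallel-state idea concrete, below is a minimal PyTorch sketch of one S-LSTM exchange step: every word keeps its own hidden/cell state, each step updates all word states in parallel from a local window plus a global sentence state, and the sentence state is refreshed from all word states. This is an illustrative reimplementation based on the abstract, not the authors' released code; the class name `SLSTMSketch`, the single shared gate layer, the wrap-around neighbour handling, and the mean-pooled global update are simplifying assumptions rather than the paper's exact parameterization.

```python
import torch
import torch.nn as nn


class SLSTMSketch(nn.Module):
    """One simplified S-LSTM recurrent step (illustrative, not the paper's exact gates)."""

    def __init__(self, hidden_size: int):
        super().__init__()
        d = hidden_size
        # Word-level update: each word sees its local window (left neighbour,
        # itself, right neighbour) plus the global sentence state g.
        self.word_gates = nn.Linear(4 * d, 4 * d)  # input, forget, output, candidate
        # Sentence-level update: g is refreshed from a pooled view of all words.
        self.sent_gates = nn.Linear(2 * d, 3 * d)  # forget, output, candidate

    def forward(self, h, c, g, c_g):
        # h, c:   (batch, seq_len, d) per-word hidden/cell states
        # g, c_g: (batch, d) global sentence hidden/cell state
        # NOTE: torch.roll wraps at sentence edges; the paper instead uses
        # boundary padding states. Wrap-around is a simplification here.
        left = torch.roll(h, shifts=1, dims=1)    # h_{i-1}
        right = torch.roll(h, shifts=-1, dims=1)  # h_{i+1}
        g_exp = g.unsqueeze(1).expand_as(h)
        gates = self.word_gates(torch.cat([left, h, right, g_exp], dim=-1))
        i, f, o, u = gates.chunk(4, dim=-1)
        i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
        c_new = f * c + i * torch.tanh(u)  # local + global exchange in one step
        h_new = o * torch.tanh(c_new)

        # Global state aggregates all word states updated in the same step.
        h_bar = h_new.mean(dim=1)
        f_g, o_g, u_g = self.sent_gates(torch.cat([g, h_bar], dim=-1)).chunk(3, dim=-1)
        f_g = torch.sigmoid(f_g)
        c_g_new = f_g * c_g + (1 - f_g) * torch.tanh(u_g)
        g_new = torch.sigmoid(o_g) * torch.tanh(c_g_new)
        return h_new, c_new, g_new, c_g_new


if __name__ == "__main__":
    # Usage: run a few exchange steps over random "embeddings".
    B, T, D = 2, 5, 8
    model = SLSTMSketch(D)
    h = torch.randn(B, T, D)           # word states initialized from embeddings
    c = torch.zeros(B, T, D)
    g = torch.zeros(B, D)
    c_g = torch.zeros(B, D)
    for _ in range(3):                 # a fixed number of steps, not seq_len steps
        h, c, g, c_g = model(h, c, g, c_g)
    print(h.shape, g.shape)            # torch.Size([2, 5, 8]) torch.Size([2, 8])
```

The key contrast with a BiLSTM is visible in the loop: the number of recurrent steps is a small constant chosen independently of sentence length, since all words update in parallel at every step rather than one after another.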

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Named Entity Recognition (NER) | CoNLL 2003 (English) | S-LSTM | F1 | 91.57 | #58 |
| Sentiment Analysis | IMDb | S-LSTM | Accuracy | 87.15 | #39 |
| Sentiment Analysis | MR | S-LSTM | Accuracy | 76.2 | #16 |
| Part-Of-Speech Tagging | Penn Treebank | S-LSTM | Accuracy | 97.55 | #10 |
