Reading and Thinking: Re-read LSTM Unit for Textual Entailment Recognition

COLING 2016 · Lei Sha, Baobao Chang, Zhifang Sui, Sujian Li

Recognizing Textual Entailment (RTE) is a fundamentally important task in natural language processing with many applications. The recently released Stanford Natural Language Inference (SNLI) corpus has made it possible to develop and evaluate deep neural network methods for the RTE task. Previous neural network based methods usually encode the two sentences (premise and hypothesis) and feed them together into a multi-layer perceptron to predict their entailment type, or use an LSTM-RNN to link the two sentences while using an attention mechanism to strengthen the model. In this paper, we propose a re-read mechanism, which reads the premise again and again while reading the hypothesis. After reading the premise again, the model gains a better understanding of the premise, which in turn affects its understanding of the hypothesis; conversely, a better understanding of the hypothesis also affects the understanding of the premise. With this alternating re-read process, the model can "think" its way to a better decision about the entailment type. We design a new LSTM unit called re-read LSTM (rLSTM) to implement this "thinking" process. Experiments show that we achieve results better than the current state of the art.
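
As a rough, non-authoritative illustration of the re-read idea, the PyTorch sketch below shows one hypothesis-side step that attends over all premise hidden states and feeds the attended premise summary into the recurrent update. The class name ReReadStep, the bilinear attention scoring, and the use of a standard LSTMCell are illustrative assumptions; the paper defines its own rLSTM gate equations, which this sketch does not reproduce.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReReadStep(nn.Module):
    """One hypothesis-side step that 're-reads' the premise via attention.

    Hypothetical sketch of the abstract's idea, not the paper's exact
    rLSTM equations: at every hypothesis word, attend over all premise
    hidden states and fold the attended summary into the recurrent update.
    """

    def __init__(self, embed_dim, hidden_dim):
        super().__init__()
        # LSTM cell whose input is [hypothesis word embedding; premise summary]
        self.cell = nn.LSTMCell(embed_dim + hidden_dim, hidden_dim)
        # Bilinear-style scoring between the hypothesis state and premise states
        self.attn = nn.Linear(hidden_dim, hidden_dim, bias=False)

    def forward(self, x_t, state, premise_states):
        # x_t: (batch, embed_dim) hypothesis token embedding at step t
        # state: (h, c), each (batch, hidden_dim)
        # premise_states: (batch, premise_len, hidden_dim) from a premise encoder
        h, c = state
        # Score each premise position against the current hypothesis state
        scores = torch.bmm(premise_states, self.attn(h).unsqueeze(2)).squeeze(2)
        alpha = F.softmax(scores, dim=1)  # attention weights over the premise
        # "Re-read": weighted summary of the premise for this hypothesis step
        premise_summary = torch.bmm(alpha.unsqueeze(1), premise_states).squeeze(1)
        # Recurrent update conditioned on both the word and the re-read summary
        h, c = self.cell(torch.cat([x_t, premise_summary], dim=1), (h, c))
        return (h, c), alpha
```

In a full model, the premise would first be encoded by a separate LSTM, this step would be unrolled over the hypothesis, and the final hidden state would feed a softmax over the three SNLI entailment classes.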


Datasets

SNLI

Results from the Paper


Task                         Dataset  Model              Metric            Metric Value  Global Rank
Natural Language Inference   SNLI     300D re-read LSTM  % Test Accuracy   87.5          #42
Natural Language Inference   SNLI     300D re-read LSTM  % Train Accuracy  90.7          #42
Natural Language Inference   SNLI     300D re-read LSTM  Parameters        2.0m          #4

Methods

LSTM, re-read LSTM (rLSTM)