A Fast Unified Model for Parsing and Sentence Understanding

Tree-structured neural networks exploit valuable syntactic parse information as they interpret the meanings of sentences. However, they suffer from two key technical problems that make them slow and unwieldy for large-scale NLP tasks: they usually operate on parsed sentences and they do not directly support batched computation. We address these issues by introducing the Stack-augmented Parser-Interpreter Neural Network (SPINN), which combines parsing and interpretation within a single tree-sequence hybrid model by integrating tree-structured sentence interpretation into the linear sequential structure of a shift-reduce parser. Our model supports batched computation for a speedup of up to 25 times over other tree-structured models, and its integrated parser can operate on unparsed data with little loss in accuracy. We evaluate it on the Stanford NLI entailment task and show that it significantly outperforms other sentence-encoding models.
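To make the architecture concrete, below is a minimal sketch (not the authors' implementation) of SPINN-style shift-reduce sentence interpretation: token vectors are pushed onto a stack with SHIFT, and REDUCE pops the top two vectors and composes them into a phrase vector, so the final stack top encodes the whole sentence. The composition function, weight names, and dimensions here are illustrative stand-ins for the paper's TreeLSTM-style composition layer.

```python
import numpy as np

SHIFT, REDUCE = 0, 1
DIM = 4  # embedding / hidden size (illustrative only)

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(DIM, 2 * DIM))  # hypothetical composition weights
b = np.zeros(DIM)

def compose(left, right):
    """Combine two child vectors into one parent vector (toy feedforward layer)."""
    return np.tanh(W @ np.concatenate([left, right]) + b)

def spinn_encode(token_vecs, transitions):
    """Run a shift-reduce interpreter over one sentence.

    `transitions` is the linearized binary parse: SHIFT moves the next
    token from the buffer to the stack; REDUCE composes the top two
    stack entries into a single subtree vector.
    """
    buffer = list(token_vecs)   # tokens waiting to be shifted
    stack = []                  # vectors for partially built subtrees
    for op in transitions:
        if op == SHIFT:
            stack.append(buffer.pop(0))
        else:  # REDUCE
            right, left = stack.pop(), stack.pop()
            stack.append(compose(left, right))
    return stack[-1]            # encoding of the full sentence

# Example: "the cat sat" with the binary tree ((the cat) sat)
tokens = [rng.normal(size=DIM) for _ in range(3)]
transitions = [SHIFT, SHIFT, REDUCE, SHIFT, REDUCE]
print(spinn_encode(tokens, transitions))
```

Because every sentence, whatever its tree shape, reduces to the same flat sequence of stack operations, many sentences can be processed in lockstep on a GPU; this is the source of the batching speedup the abstract describes. In the full model the transitions can also be predicted by the integrated parser rather than supplied from an external parse.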

PDF Abstract (ACL 2016)

Datasets

SNLI
Results from the Paper


Task: Natural Language Inference    Dataset: SNLI

Model                    Metric             Value   Global Rank
300D SPINN-PI encoders   % Test Accuracy    83.2    #87
                         % Train Accuracy   89.2    #51
                         Parameters         3.7m    #4
300D LSTM encoders       % Test Accuracy    80.6    #91
                         % Train Accuracy   83.9    #70
                         Parameters         3.0m    #4

Methods


No methods listed for this paper.