Structural Embedding of Syntactic Trees for Machine Comprehension

EMNLP 2017  ·  Rui Liu, Junjie Hu, Wei Wei, Zi Yang, Eric Nyberg

Deep neural networks for machine comprehension typically utilize only word or character embeddings, without explicitly taking advantage of structured linguistic information such as constituency trees and dependency trees. In this paper, we propose structural embedding of syntactic trees (SEST), an algorithmic framework that encodes structured information into vector representations to boost the performance of machine comprehension algorithms. We evaluate our approach using a state-of-the-art neural attention model on the SQuAD dataset. Experimental results demonstrate that our model can accurately identify the syntactic boundaries of sentences and extract answers that are more syntactically coherent than those of the baseline methods.
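The page links no reference implementation, so the following is a minimal sketch, not the authors' code, of the structural-embedding idea the abstract describes: for each token, the sequence of syntactic labels on its root-to-token path in the constituency tree is encoded with an LSTM (roughly in the spirit of the SECT-LSTM variant in the results below), and the resulting vector would be concatenated with the word embedding before the attention layers. The class name StructuralEmbedding, the toy label inventory, and all dimensions are hypothetical.

```python
import torch
import torch.nn as nn

class StructuralEmbedding(nn.Module):
    """Sketch of a SECT-LSTM-style structural embedding (hypothetical API):
    embed each token's root-to-token path of constituency labels, run an
    LSTM over the path, and use the final hidden state as the token's
    structural vector."""

    def __init__(self, num_labels: int, label_dim: int = 16, hidden_dim: int = 32):
        super().__init__()
        self.label_emb = nn.Embedding(num_labels, label_dim, padding_idx=0)
        self.lstm = nn.LSTM(label_dim, hidden_dim, batch_first=True)

    def forward(self, label_paths: torch.Tensor) -> torch.Tensor:
        # label_paths: [num_tokens, max_path_len] label ids, 0-padded.
        # (A full implementation would pack/mask the padded positions.)
        embedded = self.label_emb(label_paths)   # [T, L, label_dim]
        _, (h_n, _) = self.lstm(embedded)        # h_n: [1, T, hidden_dim]
        return h_n.squeeze(0)                    # [T, hidden_dim]

# Toy usage: three tokens with constituency paths such as S -> NP -> NNP.
LABELS = {"<pad>": 0, "S": 1, "NP": 2, "VP": 3, "NNP": 4, "VBZ": 5}
paths = torch.tensor([
    [LABELS["S"], LABELS["NP"], LABELS["NNP"]],
    [LABELS["S"], LABELS["VP"], LABELS["VBZ"]],
    [LABELS["S"], LABELS["VP"], LABELS["<pad>"]],
])
encoder = StructuralEmbedding(num_labels=len(LABELS))
struct_vecs = encoder(paths)                     # one vector per token
print(struct_vecs.shape)                         # torch.Size([3, 32])
```

The sketch shows only the constituency-path flavor; the paper's dependency-tree variant (SEDT) would instead encode nodes drawn from each token's dependency subtree.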


Datasets

SQuAD

Results from the Paper


Task                 Dataset        Model                       Metric  Value   Global Rank
Question Answering   SQuAD1.1       SEDT (ensemble model)       EM      74.090  #131
Question Answering   SQuAD1.1       SEDT (ensemble model)       F1      81.761  #138
Question Answering   SQuAD1.1       SEDT+BiDAF (ensemble)       EM      73.723  #135
Question Answering   SQuAD1.1       SEDT+BiDAF (ensemble)       F1      81.530  #141
Question Answering   SQuAD1.1       SEDT+BiDAF (single model)   EM      68.478  #165
Question Answering   SQuAD1.1       SEDT+BiDAF (single model)   F1      77.971  #170
Question Answering   SQuAD1.1       SEDT (single model)         EM      68.163  #168
Question Answering   SQuAD1.1       SEDT (single model)         F1      77.527  #174
Question Answering   SQuAD1.1 dev   SECT-LSTM                   EM      67.65   #42
Question Answering   SQuAD1.1 dev   SECT-LSTM                   F1      77.19   #45
Question Answering   SQuAD1.1 dev   SEDT-LSTM                   EM      67.89   #40
Question Answering   SQuAD1.1 dev   SEDT-LSTM                   F1      77.42   #43

Methods


No methods listed for this paper.