Linguistic Information in Neural Semantic Parsing with Multiple Encoders

WS 2019  ·  Rik van Noord, Antonio Toral, Johan Bos

Recently, sequence-to-sequence models have achieved impressive performance on a number of semantic parsing tasks. However, they often do not exploit available linguistic resources, even though these, when employed correctly, are likely to increase performance even further. Research in neural machine translation has shown that exploiting this information has considerable potential, especially when using a multi-encoder setup. We employ a range of semantic and syntactic resources to improve performance on the task of Discourse Representation Structure parsing. We show that (i) linguistic features can be beneficial for neural semantic parsing and (ii) the best method of adding these features is by using multiple encoders.
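
As a rough illustration of the multi-encoder setup described in the abstract, the sketch below (PyTorch) pairs a character-level bi-LSTM encoder with a second bi-LSTM over linguistic feature tags; the decoder attends to both encoders and concatenates the two context vectors. The dimensions, dot-product attention, and the way contexts are combined are illustrative assumptions, not the paper's exact architecture or hyperparameters.

```python
# Sketch of a multi-encoder seq2seq: characters and linguistic feature tags
# are encoded separately; the decoder attends to both encoders at each step.
# Dimensions, attention type, and context combination are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiEncoderSeq2Seq(nn.Module):
    def __init__(self, char_vocab, feat_vocab, out_vocab, emb=64, hid=128):
        super().__init__()
        self.char_emb = nn.Embedding(char_vocab, emb)
        self.feat_emb = nn.Embedding(feat_vocab, emb)
        self.char_enc = nn.LSTM(emb, hid, bidirectional=True, batch_first=True)
        self.feat_enc = nn.LSTM(emb, hid, bidirectional=True, batch_first=True)
        self.out_emb = nn.Embedding(out_vocab, emb)
        self.query = nn.Linear(hid, 2 * hid)        # decoder state -> attention query
        self.dec = nn.LSTMCell(emb + 4 * hid, hid)  # prev. output + two context vectors
        self.proj = nn.Linear(hid, out_vocab)
        self.hid = hid

    @staticmethod
    def attend(query, keys):
        # Dot-product attention over one encoder's states: (B, T, 2*hid) -> (B, 2*hid)
        scores = torch.bmm(keys, query.unsqueeze(2)).squeeze(2)
        weights = F.softmax(scores, dim=-1)
        return torch.bmm(weights.unsqueeze(1), keys).squeeze(1)

    def forward(self, chars, feats, targets):
        char_states, _ = self.char_enc(self.char_emb(chars))  # (B, Tc, 2*hid)
        feat_states, _ = self.feat_enc(self.feat_emb(feats))  # (B, Tf, 2*hid)
        batch = chars.size(0)
        h = char_states.new_zeros(batch, self.hid)
        c = char_states.new_zeros(batch, self.hid)
        logits = []
        for t in range(targets.size(1)):                      # teacher forcing
            q = self.query(h)
            ctx = torch.cat([self.attend(q, char_states),
                             self.attend(q, feat_states)], dim=-1)
            step_in = torch.cat([self.out_emb(targets[:, t]), ctx], dim=-1)
            h, c = self.dec(step_in, (h, c))
            logits.append(self.proj(h))
        return torch.stack(logits, dim=1)                     # (B, T_out, out_vocab)


# Toy usage: 2 sentences, 40 characters, 12 feature tags, 50 output tokens.
model = MultiEncoderSeq2Seq(char_vocab=100, feat_vocab=20, out_vocab=300)
chars = torch.randint(0, 100, (2, 40))
feats = torch.randint(0, 20, (2, 12))
targets = torch.randint(0, 300, (2, 50))
print(model(chars, feats, targets).shape)  # torch.Size([2, 50, 300])
```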


Datasets


PMB-2.2.0 and PMB-3.0.0 (Parallel Meaning Bank)

Results from the Paper


| Task        | Dataset   | Model                                                  | Metric | Value | Global Rank |
|-------------|-----------|--------------------------------------------------------|--------|-------|-------------|
| DRS Parsing | PMB-2.2.0 | Character-level bi-LSTM seq2seq + linguistic features  | F1     | 86.8  | #3          |
| DRS Parsing | PMB-3.0.0 | Character-level bi-LSTM seq2seq + linguistic features  | F1     | 87.7  | #2          |

Methods

Character-level bi-LSTM seq2seq with a multi-encoder setup: additional encoders supply semantic and syntactic linguistic features alongside the character encoder.
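
For concreteness, token-level linguistic features of the kind the abstract mentions (semantic and syntactic annotations) could be obtained from an off-the-shelf analyzer; the snippet below uses spaCy purely as an illustration, and the paper's own feature inventory and tooling may differ.

```python
# Illustrative only: derive token-level linguistic features (lemma, POS,
# dependency label) that a second encoder could consume. This is not assumed
# to be the paper's actual feature set or toolchain.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

def linguistic_features(sentence: str):
    doc = nlp(sentence)
    return [(tok.text, tok.lemma_, tok.pos_, tok.dep_) for tok in doc]

print(linguistic_features("Tom isn't afraid of snakes."))
```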