A Graph-to-Sequence Model for AMR-to-Text Generation

ACL 2018 · Linfeng Song, Yue Zhang, Zhiguo Wang, Daniel Gildea

The problem of AMR-to-text generation is to recover a text representing the same meaning as an input AMR graph. The current state-of-the-art method uses a sequence-to-sequence model, leveraging an LSTM to encode a linearized AMR structure. Although able to model non-local semantic information, a sequence LSTM can lose information from the AMR graph structure, and thus faces challenges with large graphs, which result in long sequences. We introduce a neural graph-to-sequence model, using a novel LSTM structure to directly encode graph-level semantics. On a standard benchmark, our model achieves better results than existing methods in the literature.
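The encoder described in the abstract is a graph-state LSTM: every AMR node keeps its own hidden and cell state, and a recurrent state transition lets each node exchange information with its neighbours, so more transition steps give each node a larger receptive field over the graph. The sketch below is a minimal, illustrative PyTorch version of one such transition step, assuming sum-pooled messages over incoming and outgoing neighbours; the class and variable names (GraphStateLSTMCell, adj_in, adj_out) are placeholders, and this is not the authors' released implementation, which additionally handles edge labels and feeds the node states to an attention-based decoder.

```python
import torch
import torch.nn as nn


class GraphStateLSTMCell(nn.Module):
    """One state-transition step of a graph-state LSTM (illustrative sketch):
    each node updates its hidden/cell state from messages aggregated over its
    incoming and outgoing edges."""

    def __init__(self, node_dim: int, hidden_dim: int):
        super().__init__()
        # Gates read [node embedding ; aggregated incoming ; aggregated outgoing].
        in_dim = node_dim + 2 * hidden_dim
        self.input_gate = nn.Linear(in_dim, hidden_dim)
        self.output_gate = nn.Linear(in_dim, hidden_dim)
        self.forget_gate = nn.Linear(in_dim, hidden_dim)
        self.cell_update = nn.Linear(in_dim, hidden_dim)

    def forward(self, x, h, c, adj_in, adj_out):
        # x:   (num_nodes, node_dim)   static node (concept) embeddings
        # h,c: (num_nodes, hidden_dim) node states from the previous step
        # adj_in / adj_out: (num_nodes, num_nodes) 0/1 adjacency matrices
        m_in = adj_in @ h    # sum of hidden states of incoming neighbours
        m_out = adj_out @ h  # sum of hidden states of outgoing neighbours
        z = torch.cat([x, m_in, m_out], dim=-1)
        i = torch.sigmoid(self.input_gate(z))
        o = torch.sigmoid(self.output_gate(z))
        f = torch.sigmoid(self.forget_gate(z))
        u = torch.tanh(self.cell_update(z))
        c_new = f * c + i * u
        h_new = o * torch.tanh(c_new)
        return h_new, c_new


if __name__ == "__main__":
    # Toy usage: a 3-node AMR-like graph; several transition steps let
    # information propagate beyond immediate neighbours.
    num_nodes, node_dim, hidden_dim = 3, 16, 32
    cell = GraphStateLSTMCell(node_dim, hidden_dim)
    x = torch.randn(num_nodes, node_dim)
    h = torch.zeros(num_nodes, hidden_dim)
    c = torch.zeros(num_nodes, hidden_dim)
    adj_in = torch.tensor([[0., 0., 0.],
                           [1., 0., 0.],
                           [1., 0., 0.]])  # node 0 points to nodes 1 and 2
    adj_out = adj_in.t()
    for _ in range(4):  # more steps -> larger per-node receptive field
        h, c = cell(x, h, c, adj_in, adj_out)
    print(h.shape)  # (3, 32) node representations for the decoder
```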


Datasets


LDC2015E86, LDC2016E25

Results from the Paper


Ranked #1 on Graph-to-Sequence on LDC2015E86 (using extra training data)

Task                Dataset      Model      Metric  Value  Global Rank
Graph-to-Sequence   LDC2015E86   GRN        BLEU    33.6   #1
Text Generation     LDC2016E25   Graph2Seq  BLEU    22     #1

Methods