AMR Parsing as Graph Prediction with Latent Alignment

ACL 2018 · Chunchuan Lyu, Ivan Titov

Abstract meaning representations (AMRs) are broad-coverage sentence-level semantic representations that encode sentences as rooted, labeled, directed acyclic graphs. AMR parsing is challenging partly because of the lack of annotated alignments between nodes in the graphs and words in the corresponding sentences. We introduce a neural parser that treats alignments as latent variables within a joint probabilistic model of concepts, relations, and alignments. Since exact inference requires marginalizing over alignments and is infeasible, we use the variational auto-encoding framework and a continuous relaxation of the discrete alignments. We show that joint modeling is preferable to a pipeline that first aligns and then parses. The parser achieves the best reported results on the standard benchmark (74.4% Smatch on LDC2016E25).
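
To make the latent-alignment idea concrete, here is a minimal PyTorch sketch. It is not the authors' implementation: it relaxes a single concept-to-word alignment with Gumbel-Softmax, whereas the paper relaxes the full set of discrete alignments inside a variational auto-encoding objective. All class and variable names below are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class RelaxedConceptPredictor(nn.Module):
    """Toy illustration: treat the word a concept is aligned to as a latent
    categorical variable and relax it with Gumbel-Softmax so the concept
    prediction loss stays differentiable end to end."""

    def __init__(self, word_dim: int, concept_vocab: int):
        super().__init__()
        self.align_scorer = nn.Linear(word_dim, 1)             # alignment score per word
        self.concept_out = nn.Linear(word_dim, concept_vocab)  # concept label from aligned word

    def forward(self, word_states: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
        # word_states: (batch, n_words, word_dim), e.g. from a BiLSTM encoder
        align_logits = self.align_scorer(word_states).squeeze(-1)       # (batch, n_words)
        # Differentiable "sample" of a one-hot alignment over words.
        align = F.gumbel_softmax(align_logits, tau=tau, hard=False)     # (batch, n_words)
        # Representation of the (softly) aligned word.
        aligned_word = torch.einsum("bn,bnd->bd", align, word_states)   # (batch, word_dim)
        return self.concept_out(aligned_word)                           # (batch, concept_vocab)


# Toy usage with random "encoder states" for 2 sentences of 12 words each.
predictor = RelaxedConceptPredictor(word_dim=256, concept_vocab=5000)
concept_logits = predictor(torch.randn(2, 12, 256))  # -> shape (2, 5000)
```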


Datasets

LDC2015E86, LDC2017T10

Results from the Paper


Task         Dataset      Model        Metric   Value   Global Rank
AMR Parsing  LDC2015E86   Joint model  Smatch   73.7    #1
AMR Parsing  LDC2017T10   Joint model  Smatch   74.4    #23
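
Smatch, the metric reported above, is the F-score over AMR triples matched under the best variable mapping between the predicted and gold graphs; the official smatch tool searches for that mapping (e.g. by hill climbing). Below is a minimal sketch of only the final F-score step, assuming the triple counts are already given; the function name is hypothetical.

```python
def smatch_f1(matched: int, predicted: int, gold: int) -> float:
    """F-score over AMR triples given counts from the best node mapping.

    matched:   triples shared by the two graphs under that mapping
    predicted: triples in the parser's graph
    gold:      triples in the reference graph
    """
    if matched == 0 or predicted == 0 or gold == 0:
        return 0.0
    precision = matched / predicted
    recall = matched / gold
    return 2 * precision * recall / (precision + recall)


# e.g. 85 matched triples out of 100 predicted and 110 gold -> ~0.81
print(round(smatch_f1(85, 100, 110), 3))
```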

Methods


No methods listed for this paper.