Abstract Meaning Representation (AMR) parsing has seen notable performance gains over the last two years, driven both by transfer learning and by the development of novel AMR-specific architectures. At the same time, self-learning techniques have helped push the performance boundaries of other natural language processing applications, such as machine translation and question answering. In this paper, we explore different ways in which trained models can be applied to improve AMR parsing performance, including the generation of synthetic text and synthetic AMR annotations as well as the refinement of the actions oracle. We show that, without any additional human annotations, these techniques improve an already strong parser and achieve state-of-the-art results on AMR 1.0 and AMR 2.0.
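
A minimal sketch of one self-learning round, assuming hypothetical `parser`, `train`, and `confidence` helpers (these names are illustrative stand-ins, not the paper's code, and the paper additionally generates synthetic text from AMR graphs, which is omitted here): the trained parser annotates unlabeled sentences, and high-confidence synthetic pairs are added to the training data before retraining.

```python
def self_learning_round(parser, labeled_pairs, unlabeled_sentences,
                        train, confidence, threshold=0.9):
    """One round of self-training for an AMR parser (illustrative sketch)."""
    synthetic_pairs = []
    for sentence in unlabeled_sentences:
        amr = parser(sentence)  # synthetic AMR annotation for the sentence
        # Keep only annotations the model itself is confident about.
        if confidence(parser, sentence, amr) >= threshold:
            synthetic_pairs.append((sentence, amr))
    # Retrain on the human-labeled data plus the synthetic pairs.
    return train(labeled_pairs + synthetic_pairs)
```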

Findings of EMNLP 2020

Datasets

AMR 1.0 (LDC2014T12) · AMR 2.0 (LDC2017T10)

Results from the Paper


| Task | Dataset | Model | Metric Name | Metric Value | Global Rank | Uses Extra Training Data | Result Benchmark |
|------|---------|-------|-------------|--------------|-------------|--------------------------|------------------|
| AMR Parsing | LDC2014T12 | stack-Transformer + self-learning (IBM) | F1 Full | 78.2 | #2 | | |
| AMR Parsing | LDC2017T10 | stack-Transformer + self-learning (IBM) | Smatch | 81.3 | #16 | | |
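
Smatch, the metric reported for LDC2017T10, scores a predicted AMR graph as the F1 over matched triples under the best variable mapping between the predicted and gold graphs. A minimal sketch of the final F1 arithmetic only, assuming the triple counts are already given (the mapping search, e.g. hill climbing in the reference implementation, is omitted; `smatch_f1` is an illustrative helper, not the reference tool):

```python
def smatch_f1(matched: int, predicted: int, gold: int) -> float:
    """F1 over matched AMR triples under the best variable mapping."""
    if matched == 0 or predicted == 0 or gold == 0:
        return 0.0
    precision = matched / predicted
    recall = matched / gold
    return 2 * precision * recall / (precision + recall)

# Example: 90 matched triples out of 105 predicted and 110 gold.
print(round(smatch_f1(90, 105, 110), 3))  # 0.837
```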

Methods


No methods listed for this paper.