Graph Pre-training for AMR Parsing and Generation

ACL 2022 · Xuefeng Bai, Yulong Chen, Yue Zhang

Abstract meaning representation (AMR) highlights the core semantic information of text in a graph structure. Recently, pre-trained language models (PLMs) have advanced the tasks of AMR parsing and AMR-to-text generation. However, PLMs are typically pre-trained on textual data and are thus sub-optimal for modeling structural knowledge. To this end, we investigate graph self-supervised training to improve the structure awareness of PLMs over AMR graphs. In particular, we introduce two graph auto-encoding strategies for graph-to-graph pre-training and four tasks to integrate text and graph information during pre-training. We further design a unified framework to bridge the gap between pre-training and fine-tuning tasks. Experiments on both AMR parsing and AMR-to-text generation show the superiority of our model. To our knowledge, we are the first to consider pre-training on semantic graphs.
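The graph auto-encoding strategies described in the abstract are denoising objectives over graphs: the model is trained to reconstruct the original AMR graph from a corrupted version of it. The sketch below is a minimal, illustrative corruption step over a PENMAN-linearized AMR, masking concept and relation tokens while leaving the bracketing structure intact; the function name, mask rate, and exact masking scheme are assumptions for illustration, not the paper's specific recipe.

```python
import random

MASK = "<mask>"

def mask_linearized_amr(tokens, mask_prob=0.15, seed=0):
    """Randomly replace concept/relation tokens of a linearized AMR graph
    with a mask token, keeping the bracketing structure intact.

    The corrupted sequence can serve as the encoder input of a seq2seq PLM
    (e.g. a BART-style model); the original sequence is the decoder target,
    yielding a graph denoising (auto-encoding) objective.
    """
    rng = random.Random(seed)
    corrupted = []
    for tok in tokens:
        # Keep parentheses so the graph's structure is preserved.
        if tok in ("(", ")"):
            corrupted.append(tok)
        elif rng.random() < mask_prob:
            corrupted.append(MASK)
        else:
            corrupted.append(tok)
    return corrupted

if __name__ == "__main__":
    # "The boy wants to go" in a simplified linearized PENMAN form.
    amr = "( want-01 :ARG0 ( boy ) :ARG1 ( go-02 :ARG0 boy ) )".split()
    print(" ".join(mask_linearized_amr(amr)))
```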


Results from the Paper


 Ranked #1 on AMR-to-Text Generation on Bio (BLEU metric, using extra training data)

Task                    Dataset            Model          Metric  Value  Global Rank
AMR Parsing             Bio                AMRBART large  Smatch  63.2   #2
AMR-to-Text Generation  Bio                AMRBART large  BLEU    20.7   #1
AMR Parsing             LDC2017T10         AMRBART large  Smatch  85.4   #6
AMR-to-Text Generation  LDC2017T10         AMRBART large  BLEU    49.8   #2
AMR-to-Text Generation  LDC2017T10         AMRBART large  METEOR  42.6   #3
AMR-to-Text Generation  LDC2017T10         AMRBART large  ChrF++  76.2   #2
AMR-to-Text Generation  LDC2020T02         AMRBART large  BLEU    49.2   #2
AMR-to-Text Generation  LDC2020T02         AMRBART large  ChrF++  76.1   #2
AMR-to-Text Generation  LDC2020T02         AMRBART large  METEOR  42.3   #3
AMR Parsing             LDC2020T02         AMRBART large  Smatch  84.2   #6
AMR Parsing             New3               AMRBART large  Smatch  76.9   #1
AMR-to-Text Generation  New3               AMRBART large  BLEU    44.8   #1
AMR-to-Text Generation  The Little Prince  AMRBART large  BLEU    29.1   #1
AMR Parsing             The Little Prince  AMRBART large  Smatch  79.8   #1

Methods


No methods listed for this paper.