AMR-to-Text Generation

15 papers with code • 5 benchmarks • 5 datasets

Abstract Meaning Representation (AMR) is a directed graph of labeled concepts and relations that captures sentence semantics: its concepts encode propositional meaning while abstracting away lexical properties. AMR is tree-like in structure, since it has a single root node and few nodes with multiple parents (reentrancies). The goal of AMR-to-Text Generation is to recover the original sentence realization given an AMR graph. The task can be seen as the reverse of the structured prediction performed in AMR parsing.

Source: AMR-to-Text Generation with Cache Transition Systems
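
To make the structure concrete, here is a small AMR in PENMAN notation parsed with the penman Python library (an illustrative choice for this page, not a tool used by the papers below); the graph has a single root and one reentrant node:

```python
# A minimal sketch using the penman library (pip install penman), which
# reads AMRs in PENMAN notation. The graph encodes "The boy wants to go."
import penman

graph = penman.decode("""
(w / want-01
   :ARG0 (b / boy)
   :ARG1 (g / go-01
            :ARG0 b))
""")

print(graph.top)      # 'w' -- the single root node (want-01)
print(graph.triples)  # labeled (source, relation, target) triples;
                      # 'b' is :ARG0 of both w and g: a reentrancy,
                      # i.e., a node with multiple parents
```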

Most implemented papers

Lightweight, Dynamic Graph Convolutional Networks for AMR-to-Text Generation

yanzhang92/LDGCNs EMNLP 2020

With the help of these parameter-saving strategies, we are able to train a model with fewer parameters while maintaining model capacity.
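
One common parameter-saving idea in this line of work is weight tying across stacked graph convolutions: depth adds capacity without adding parameters. The numpy sketch below illustrates that general idea only; it is not the paper's LDGCN architecture.

```python
# A minimal weight-tied graph-convolution stack: one weight matrix W is
# reused by every layer, so a 3-layer stack costs one layer's parameters.
import numpy as np

def gcn_stack(A, H, W, num_layers=3):
    """A: (n, n) adjacency with self-loops, H: (n, d) node features,
    W: (d, d) weight matrix shared by every layer."""
    D_inv = np.diag(1.0 / A.sum(axis=1))        # simple row normalization
    for _ in range(num_layers):
        H = np.maximum(D_inv @ A @ H @ W, 0.0)  # ReLU(norm_adj @ H @ W)
    return H

n, d = 4, 8
A = np.eye(n) + np.random.rand(n, n).round()  # toy graph with self-loops
H = np.random.randn(n, d)
W = np.random.randn(d, d) * 0.1
out = gcn_stack(A, H, W)  # 3 layers, but only one W's worth of parameters
print(out.shape)          # (4, 8)
```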

Structural Adapters in Pretrained Language Models for AMR-to-text Generation

ukplab/structadapt EMNLP 2021

Pretrained language models (PLMs) have recently advanced graph-to-text generation, where the input graph is linearized into a sequence and fed into the PLM to obtain its representation.
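
As a concrete picture of that linearization step, the snippet below flattens a PENMAN-notation AMR into a single token sequence suitable as PLM input. This is one simple illustrative scheme, not StructAdapt's exact input format:

```python
# Flatten a PENMAN string onto one line, spacing out parentheses so each
# bracket becomes its own token in the resulting sequence.
import re

amr_string = """
(w / want-01
   :ARG0 (b / boy)
   :ARG1 (g / go-01 :ARG0 b))
"""

linearized = ' '.join(re.sub(r'([()])', r' \1 ', amr_string).split())
print(linearized)
# ( w / want-01 :ARG0 ( b / boy ) :ARG1 ( g / go-01 :ARG0 b ) )
```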

One SPRING to Rule Them Both: Symmetric AMR Semantic Parsing and Generation without a Complex Pipeline

SapienzaNLP/spring Proceedings of the AAAI Conference on Artificial Intelligence 2021

In Text-to-AMR parsing, current state-of-the-art semantic parsers use cumbersome pipelines integrating several different modules or components, and exploit graph recategorization, i.e., a set of content-specific heuristics developed on the basis of the training set.
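
SPRING instead frames both directions as plain sequence-to-sequence transduction. The hedged sketch below shows the generation direction with a Hugging Face encoder-decoder; facebook/bart-large is a placeholder that would first need fine-tuning on (linearized AMR, sentence) pairs, and the SapienzaNLP/spring repository documents the authors' actual models:

```python
# A sketch of AMR-to-text as single-model seq2seq generation: a pretrained
# encoder-decoder consumes a linearized AMR and emits a sentence, with no
# pipeline or recategorization step. Checkpoint name is a placeholder.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "facebook/bart-large"  # placeholder; needs AMR fine-tuning
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

linearized_amr = "( w / want-01 :ARG0 ( b / boy ) :ARG1 ( g / go-01 :ARG0 b ) )"
inputs = tokenizer(linearized_amr, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=32, num_beams=5)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```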

Smelting Gold and Silver for Improved Multilingual AMR-to-Text Generation

ukplab/m-amr2text EMNLP 2021

Recent work on multilingual AMR-to-text generation has exclusively focused on data augmentation strategies that utilize silver (automatically parsed) AMR annotations.