AMR Parsing

49 papers with code • 8 benchmarks • 6 datasets

Each AMR is a single rooted, directed graph. AMRs include PropBank semantic roles, within-sentence coreference, named entities and types, modality, negation, questions, quantities, and so on.
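
AMR graphs are conventionally written in PENMAN notation. As a concrete illustration, here is a minimal sketch using the third-party penman Python library (an assumption for illustration, not a dependency of any parser listed below): the graph for "The boy wants to go" decodes into a root variable plus relation triples, with the reused variable b capturing within-sentence coreference.

```python
# pip install penman  (third-party library, assumed for illustration)
import penman

# "The boy wants to go": one rooted, directed graph with PropBank frames
# (want-01, go-02); the reused variable b is a reentrancy (coreference).
amr = """
(w / want-01
   :ARG0 (b / boy)
   :ARG1 (g / go-02
            :ARG0 b))
"""

graph = penman.decode(amr)
print(graph.top)        # 'w', the root variable
for triple in graph.triples:
    print(triple)       # e.g. ('w', ':ARG0', 'b')
```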

Most implemented papers

Core Semantic First: A Top-down Approach for AMR Parsing

jcyk/AMR-parser IJCNLP 2019

The output graph is built by spanning the nodes in order of their distance from the root, following the intuition of first grasping the main ideas and then digging into the details.
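
As a rough illustration of that intuition (a toy sketch, not the authors' implementation), the function below orders the nodes of a rooted graph by breadth-first distance from the root, so the core predicate comes before its arguments and modifiers:

```python
from collections import deque

def nodes_by_root_distance(root, edges):
    """Group nodes into levels by breadth-first distance from the root,
    mimicking a core-semantic-first, top-down generation order."""
    children = {}
    for parent, child in edges:
        children.setdefault(parent, []).append(child)
    levels, seen, frontier = [], {root}, deque([root])
    while frontier:
        levels.append(list(frontier))
        nxt = deque()
        for node in frontier:
            for child in children.get(node, []):
                if child not in seen:
                    seen.add(child)
                    nxt.append(child)
        frontier = nxt
    return levels

# Core predicate first, then its arguments:
print(nodes_by_root_distance("want-01",
                             [("want-01", "boy"), ("want-01", "go-02"),
                              ("go-02", "boy")]))
# [['want-01'], ['boy', 'go-02']]
```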

Improving AMR Parsing with Sequence-to-Sequence Pre-training

xdqkid/S2S-AMR-Parser EMNLP 2020

In the literature, research on Abstract Meaning Representation (AMR) parsing has been heavily restricted by the size of the human-curated datasets that are critical to building an AMR parser with good performance.

Pushing the Limits of AMR Parsing with Self-Learning

IBM/transition-amr-parser Findings (EMNLP) 2020

Abstract Meaning Representation (AMR) parsing has experienced a notable growth in performance in the last two years, due both to the impact of transfer learning and the development of novel architectures specific to AMR.

AMR Parsing with Action-Pointer Transformer

ibm/graph_ensemble_learning NAACL 2021

In this work, we propose a transition-based system that combines hard-attention over sentences with a target-side action pointer mechanism to decouple source tokens from node representations and address alignments.
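
As a loose illustration of the pointer idea (a toy replay, not the authors' transition system), the sketch below decodes an action sequence in which edge actions point back at the target positions of earlier node actions, so graph structure is expressed without tying nodes to source tokens:

```python
# Toy replay of an action sequence with target-side pointers
# (illustrative only; the paper's transition set is richer).
actions = [
    ("NODE", "want-01"),        # position 0 creates a node
    ("NODE", "boy"),            # position 1
    ("EDGE", ":ARG0", 0, 1),    # pointers to positions of head and dependent
    ("NODE", "go-02"),          # position 3
    ("EDGE", ":ARG1", 0, 3),
    ("EDGE", ":ARG0", 3, 1),    # reentrancy: a second pointer to 'boy'
]

nodes, edges = {}, []
for pos, action in enumerate(actions):
    if action[0] == "NODE":
        nodes[pos] = action[1]
    else:
        _, label, head, dep = action
        edges.append((nodes[head], label, nodes[dep]))

print(edges)
# [('want-01', ':ARG0', 'boy'), ('want-01', ':ARG1', 'go-02'),
#  ('go-02', ':ARG0', 'boy')]
```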

One SPRING to Rule Them Both: Symmetric AMR Semantic Parsing and Generation without a Complex Pipeline

SapienzaNLP/spring Proceedings of the AAAI Conference on Artificial Intelligence 2021

In Text-to-AMR parsing, current state-of-the-art semantic parsers use cumbersome pipelines integrating several different modules or components, and exploit graph recategorization, i.e., a set of content-specific heuristics developed on the basis of the training set.
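
SPRING instead treats both directions as plain sequence-to-sequence transduction over a linearized graph. A minimal sketch (reusing the penman library as an assumed stand-in for SPRING's own DFS-based linearization, which adds dedicated special tokens) shows how a graph becomes one token sequence a seq2seq model can consume:

```python
import penman

graph = penman.decode(
    "(w / want-01 :ARG0 (b / boy) :ARG1 (g / go-02 :ARG0 b))")

# Re-encode and split on whitespace: a crude DFS linearization.
# SPRING's real tokenization uses special tokens for variables
# and relations rather than raw PENMAN characters.
linearized = penman.encode(graph).split()
print(linearized)
# ['(w', '/', 'want-01', ':ARG0', '(b', '/', 'boy)', ':ARG1',
#  '(g', '/', 'go-02', ':ARG0', 'b))']
```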

Translate, then Parse! A strong baseline for Cross-Lingual AMR Parsing

Heidelberg-NLP/simple-xamr ACL (IWPT) 2021

In cross-lingual Abstract Meaning Representation (AMR) parsing, researchers develop models that project sentences from various languages onto their AMRs: given a sentence in any language, the aim is to capture its core semantic content through concepts connected by manifold types of semantic relations.
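
The baseline itself is just two off-the-shelf steps. The sketch below wires them together; translate and parse_english_amr are hypothetical stand-ins for any MT system and any trained English AMR parser:

```python
def translate(sentence: str, src_lang: str) -> str:
    """Hypothetical stand-in for an off-the-shelf MT system."""
    raise NotImplementedError

def parse_english_amr(sentence: str) -> str:
    """Hypothetical stand-in for any trained English AMR parser."""
    raise NotImplementedError

def cross_lingual_amr(sentence: str, src_lang: str) -> str:
    # Translate, then parse: the source sentence is projected onto
    # an English-anchored AMR via its machine translation.
    return parse_english_amr(translate(sentence, src_lang))
```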

Making Better Use of Bilingual Information for Cross-Lingual AMR Parsing

headacheboy/cross-lingual-amr-parsing Findings (ACL) 2021

We argue that the misprediction of concepts stems from the strong correspondence between English tokens and AMR concepts.

Probabilistic, Structure-Aware Algorithms for Improved Variety, Accuracy, and Coverage of AMR Alignments

ablodge/leamr ACL 2021

We present algorithms for aligning components of Abstract Meaning Representation (AMR) graphs to spans in English sentences.
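
In this setting an alignment is essentially a mapping from graph components to token spans. A minimal sketch (a hand-built representation for illustration, not the leamr output format) makes that concrete:

```python
tokens = ["The", "boy", "wants", "to", "go"]

# Map AMR node variables to (start, end) token spans, end-exclusive.
# Hand-built for illustration, not output of the paper's algorithms.
alignments = {
    "w": (2, 3),   # want-01 <- "wants"
    "b": (1, 2),   # boy     <- "boy"
    "g": (4, 5),   # go-02   <- "go"
}

for var, (start, end) in alignments.items():
    print(var, "->", " ".join(tokens[start:end]))
```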