AMR Parsing
49 papers with code • 8 benchmarks • 6 datasets
Each AMR is a single rooted, directed graph. AMRs include PropBank semantic roles, within-sentence coreference, named entities and types, modality, negation, questions, quantities, and so on.
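The structure described above can be made concrete with a small sketch. Below, an AMR for "The boy wants to go" is encoded as (source, relation, target) triples; the variable names (`w`, `b`, `g`) and the triple encoding are illustrative conventions, not tied to any particular AMR library.

```python
# Toy AMR graph for "The boy wants to go", as a list of triples.
triples = [
    ("w", ":instance", "want-01"),  # PropBank frame for "wants"
    ("b", ":instance", "boy"),
    ("g", ":instance", "go-02"),
    ("w", ":ARG0", "b"),            # the boy is the wanter
    ("w", ":ARG1", "g"),            # going is what is wanted
    ("g", ":ARG0", "b"),            # the boy is also the goer
]
root = "w"  # every AMR has a single root

# Within-sentence coreference appears as reentrancy: the node "b"
# is the target of more than one relation edge.
from collections import Counter
in_degree = Counter(t for _, rel, t in triples if rel != ":instance")
reentrant = [n for n, d in in_degree.items() if d > 1]
print(reentrant)  # ['b']
```

Note how a single graph node (`b`) represents both the wanter and the goer; this reentrancy is what distinguishes AMR graphs from trees.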
Most implemented papers
Core Semantic First: A Top-down Approach for AMR Parsing
The parser generates the output graph top-down, spanning nodes in order of their distance to the root, following the intuition of first grasping the main ideas and then digging into the details.
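The "distance to the root" ordering can be sketched as a breadth-first traversal over a toy AMR-like adjacency list; this is an illustration of the intuition only, not the paper's actual parser.

```python
# Order graph nodes by distance to the root (core concepts first).
from collections import deque

children = {
    "want-01": ["boy", "go-02"],
    "go-02": ["boy"],
    "boy": [],
}

def nodes_by_depth(root):
    """Breadth-first order: nodes nearer the root come earlier."""
    seen, order = {root}, [root]
    queue = deque([root])
    while queue:
        node = queue.popleft()
        for child in children[node]:
            if child not in seen:
                seen.add(child)
                order.append(child)
                queue.append(child)
    return order

print(nodes_by_depth("want-01"))  # ['want-01', 'boy', 'go-02']
```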
Improving AMR Parsing with Sequence-to-Sequence Pre-training
In the literature, research on Abstract Meaning Representation (AMR) parsing has been largely restricted by the size of human-curated datasets, which are critical to building an AMR parser with good performance.
Pushing the Limits of AMR Parsing with Self-Learning
Abstract Meaning Representation (AMR) parsing has experienced a notable growth in performance in the last two years, due both to the impact of transfer learning and the development of novel architectures specific to AMR.
Transition-based Parsing with Stack-Transformers
Modeling the parser state is key to good performance in transition-based parsing.
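To make "parser state" concrete, here is a minimal sketch of the state a transition-based parser maintains, assuming a simple shift-style action set; real AMR transition systems (such as the stack-Transformer parsers above) use richer action sets and neural encodings of this state.

```python
# Minimal transition-based parser state: stack, buffer, action history.
state = {
    "stack": [],                                   # partially built structure
    "buffer": ["The", "boy", "wants", "to", "go"], # remaining input tokens
    "actions": [],                                 # applied actions so far
}

def shift(state):
    """Move the next buffer token onto the stack and record the action."""
    state["stack"].append(state["buffer"].pop(0))
    state["actions"].append("SHIFT")

shift(state)
shift(state)
print(state["stack"])   # ['The', 'boy']
print(state["buffer"])  # ['wants', 'to', 'go']
```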
The Role of Reentrancies in Abstract Meaning Representation Parsing
Abstract Meaning Representation (AMR) parsing aims at converting sentences into AMR representations.
AMR Parsing with Action-Pointer Transformer
In this work, we propose a transition-based system that combines hard-attention over sentences with a target-side action pointer mechanism to decouple source tokens from node representations and address alignments.
One SPRING to Rule Them Both: Symmetric AMR Semantic Parsing and Generation without a Complex Pipeline
In Text-to-AMR parsing, current state-of-the-art semantic parsers use cumbersome pipelines integrating several different modules or components, and exploit graph recategorization, i.e., a set of content-specific heuristics developed on the basis of the training set.
Translate, then Parse! A strong baseline for Cross-Lingual AMR Parsing
In cross-lingual Abstract Meaning Representation (AMR) parsing, researchers develop models that project sentences from various languages onto their AMRs to capture their essential semantic structures: given a sentence in any language, the aim is to capture its core semantic content through concepts connected by manifold types of semantic relations.
Making Better Use of Bilingual Information for Cross-Lingual AMR Parsing
We argue that the misprediction of concepts is due to the high relevance between English tokens and AMR concepts.
Probabilistic, Structure-Aware Algorithms for Improved Variety, Accuracy, and Coverage of AMR Alignments
We present algorithms for aligning components of Abstract Meaning Representation (AMR) graphs to spans in English sentences.
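An AMR-to-text alignment can be pictured as a mapping from graph nodes to token spans in the sentence; the dictionary format below is a toy sketch for illustration, not the paper's actual alignment output.

```python
# Toy alignment: each AMR node maps to a (start, end) token span.
sentence = ["The", "boy", "wants", "to", "go"]
alignments = {
    "want-01": (2, 3),  # "wants"
    "boy":     (1, 2),  # "boy"
    "go-02":   (4, 5),  # "go"
}
for node, (start, end) in alignments.items():
    print(node, "->", " ".join(sentence[start:end]))
```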