AMR Parsing

49 papers with code • 8 benchmarks • 6 datasets

Each AMR is a single rooted, directed graph. AMRs include PropBank semantic roles, within-sentence coreference, named entities and types, modality, negation, questions, quantities, and so on.
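
Since every AMR is a rooted, directed graph conventionally written in PENMAN notation, a small example helps make the structure concrete. Below is a minimal sketch using the `penman` Python library; the sentence and graph are illustrative, not drawn from any dataset listed here.

```python
import penman

# "The boy wants to go": a single rooted, directed graph with
# PropBank-style roles (:ARG0, :ARG1). The variable b is reused as
# both "wanter" and "goer", i.e. within-sentence coreference.
g = penman.decode("""
(w / want-01
   :ARG0 (b / boy)
   :ARG1 (g / go-02
            :ARG0 b))
""")

print(g.top)  # 'w', the root of the graph
for source, role, target in g.triples:
    print(source, role, target)
```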

Latest papers with no code

XLPT-AMR: Cross-Lingual Pre-Training via Multi-Task Learning for Zero-Shot AMR Parsing and Text Generation

no code yet • ACL 2021

We hope that knowledge gained in learning English AMR parsing and text generation can be transferred to the same tasks in other languages.

SGL: Speaking the Graph Languages of Semantic Parsing via Multilingual Translation

no code yet • NAACL 2021

Graph-based semantic parsing aims to represent textual meaning through directed graphs.
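
SGL casts these graph languages as translation targets. As a hedged sketch of what such a target looks like, the snippet below linearizes a graph into a string that a seq2seq model could be trained to emit (shown with the `penman` library; the paper's exact linearization may differ).

```python
import penman

# Build a tiny AMR graph from triples, then linearize it to a string.
# A multilingual translation model would treat this string as the
# target-side "sentence" in the graph language.
g = penman.Graph(triples=[
    ("w", ":instance", "want-01"),
    ("b", ":instance", "boy"),
    ("w", ":ARG0", "b"),
])
print(penman.encode(g))  # prints e.g. (w / want-01 :ARG0 (b / boy))
```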

Structure-aware Fine-tuning of Sequence-to-sequence Transformers for Transition-based AMR Parsing

no code yet • 18 May 2021

We provide a detailed comparison with recent progress in AMR parsing and show that the proposed parser retains the desirable properties of previous transition-based approaches, while being simpler and reaching a new parsing state of the art for AMR 2.0.
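
For readers unfamiliar with the transition-based family, here is a toy, hedged illustration of the idea: the parser emits a sequence of actions that incrementally build nodes and edges. The two-action inventory below is a simplified stand-in, not the paper's actual transition set.

```python
# Simplified transition system: NODE creates a graph node and pushes it
# onto a stack; EDGE connects the top two stack elements with a role.
nodes, edges, stack = {}, [], []

def apply_action(action, arg=None):
    if action == "NODE":
        node_id = f"n{len(nodes)}"
        nodes[node_id] = arg
        stack.append(node_id)
    elif action == "EDGE":
        child, parent = stack[-2], stack[-1]
        edges.append((parent, arg, child))

# "The boy wants ...": build boy, then want-01, then want-01 -:ARG0-> boy.
for action, arg in [("NODE", "boy"), ("NODE", "want-01"), ("EDGE", "ARG0")]:
    apply_action(action, arg)

print(nodes)  # {'n0': 'boy', 'n1': 'want-01'}
print(edges)  # [('n1', 'ARG0', 'n0')]
```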

Parsing Indonesian Sentence into Abstract Meaning Representation using Machine Learning Approach

no code yet • 5 Mar 2021

Pair prediction uses a dependency parsing component to obtain the edges between words for the AMR.
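
A hedged sketch of the pair-prediction idea: use a dependency parse to propose candidate edges between word pairs, which a learned component would then keep, relabel, or discard. The paper targets Indonesian; this illustration uses spaCy's English model only because it is widely available.

```python
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The boy wants to go")

# One candidate AMR edge per dependency arc (head -> dependent); a
# classifier would then map these to AMR relations or reject them.
candidate_edges = [
    (token.head.text, token.dep_, token.text)
    for token in doc
    if token.dep_ != "ROOT"
]
print(candidate_edges)
```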

AMR Parsing with Action-Pointer Transformer

no code yet • 24 Nov 2020

Abstract Meaning Representation parsing belongs to a category of sentence-to-graph prediction tasks where the target graph is not explicitly linked to the sentence tokens.

JBNU at MRP 2020: AMR Parsing Using a Joint State Model for Graph-Sequence Iterative Inference

no code yet • CoNLL 2020

This paper describes the Jeonbuk National University (JBNU) system for the 2020 shared task on Cross-Framework Meaning Representation Parsing at the Conference on Computational Natural Language Learning.

A Differentiable Relaxation of Graph Segmentation and Alignment for AMR Parsing

no code yet • EMNLP 2021

In contrast, we treat both alignment and segmentation as latent variables in our model and induce them as part of end-to-end training.
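
One standard way to make a discrete latent alignment trainable end to end is a continuous relaxation such as Gumbel-softmax. The PyTorch sketch below illustrates that general recipe; it is an assumption for illustration, not the paper's exact relaxation.

```python
import torch
import torch.nn.functional as F

num_tokens, num_nodes = 5, 3
alignment_logits = torch.randn(num_tokens, num_nodes, requires_grad=True)

# Soft, differentiable alignment: each token gets a relaxed one-hot
# distribution over graph nodes, so gradients flow end to end.
soft_alignment = F.gumbel_softmax(alignment_logits, tau=0.5, dim=-1)

token_states = torch.randn(num_tokens, 8)
# Pool token states into node states through the relaxed alignment.
node_states = soft_alignment.t() @ token_states  # (num_nodes, 8)

loss = node_states.sum()
loss.backward()                    # gradients reach the alignment logits
print(alignment_logits.grad.shape) # torch.Size([5, 3])
```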

AMR Parsing with Latent Structural Information

no code yet • ACL 2020

Abstract Meaning Representations (AMRs) provide sentence-level semantic structural representations for broad-coverage natural sentences.

Constructing Web-Accessible Semantic Role Labels and Frames for Japanese as Additions to the NPCMJ Parsed Corpus

no code yet • LREC 2020

As part of constructing the NINJAL Parsed Corpus of Modern Japanese (NPCMJ), a web-accessible language resource, we are adding frame information for predicates, together with two types of semantic role labels that mark the contributions of arguments.

Broad-Coverage Semantic Parsing as Transduction

no code yet • IJCNLP 2019

We unify different broad-coverage semantic parsing tasks under a transduction paradigm, and propose an attention-based neural framework that incrementally builds a meaning representation via a sequence of semantic relations.
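
To make the transduction view concrete, here is a toy sketch: the model emits a sequence of semantic-relation triples, and the meaning representation is built incrementally by replaying them. The triples below are illustrative assumptions, not the paper's output format.

```python
from collections import defaultdict

# A hypothetical predicted relation sequence for "The boy wants to go".
predicted_relations = [
    ("w", "instance", "want-01"),
    ("b", "instance", "boy"),
    ("w", "ARG0", "b"),
    ("g", "instance", "go-02"),
    ("w", "ARG1", "g"),
    ("g", "ARG0", "b"),  # reentrancy: b is reused as an argument
]

# Replay the sequence, adding one edge per step to build the graph.
graph = defaultdict(list)
for source, relation, target in predicted_relations:
    graph[source].append((relation, target))

for node, out_edges in graph.items():
    print(node, out_edges)
```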