Each AMR is a single rooted, directed graph. AMRs include PropBank semantic roles, within-sentence coreference, named entities and types, modality, negation, questions, quantities, and so on.
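AMRs are conventionally written in PENMAN notation. As a minimal sketch of the data structure, the graph for "The boy wants to go" can be decoded with the third-party `penman` Python package (the example sentence and the package choice are illustrative, not taken from this page):

```python
# Minimal sketch using the third-party `penman` package (pip install penman);
# the sentence and its AMR are a standard illustrative example.
import penman

# AMR for "The boy wants to go". want-01 and go-02 are PropBank frames;
# the variable b is reused, capturing the within-sentence coreference
# between the wanter and the goer.
amr = """
(w / want-01
   :ARG0 (b / boy)
   :ARG1 (g / go-02
            :ARG0 b))
"""

graph = penman.decode(amr)
print(graph.top)        # 'w' -- the single root of the directed graph
for triple in graph.triples:
    print(triple)       # e.g. ('w', ':instance', 'want-01'), ('w', ':ARG0', 'b')
```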
We propose a new end-to-end model that treats AMR parsing as a series of dual decisions on the input sequence and the incrementally constructed graph.
Ranked #2 on AMR Parsing on LDC2014T12 (F1 Full metric)
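The model itself is not described on this page; the following is only a generic, hypothetical illustration of what alternating "dual decisions" between the input sequence and an incrementally constructed graph can look like in a transition-style parser (all function names are invented):

```python
# Hypothetical sketch of a transition-style parser that alternates between
# decisions on the input sequence and on the partially built graph; this is
# a generic illustration of the idea, not the paper's actual model.

def parse(tokens, node_action, arc_action):
    """node_action and arc_action stand in for learned classifiers
    conditioned on the current parser state (names are hypothetical)."""
    buffer = list(tokens)      # sequence side: tokens left to process
    nodes, edges = [], []      # graph side: what has been built so far

    while buffer:
        tok = buffer.pop(0)
        if node_action(tok, nodes, edges) == "CONFIRM":
            nodes.append(tok)                    # token evokes a concept node
            for head in nodes[:-1]:              # graph decision: link the new
                role = arc_action(head, tok)     # node to each earlier node?
                if role is not None:
                    edges.append((head, role, tok))
        # any other action drops the token (e.g. articles, infinitival "to")

    return nodes, edges
```

With learned classifiers in place of `node_action` and `arc_action`, each step conditions jointly on the remaining input and the graph built so far, which is the "dual decisions" idea the abstract describes.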
AMR parsing is challenging partly due to the lack of annotated alignments between nodes in the graphs and words in the corresponding sentences.
Ranked #1 on AMR Parsing on LDC2015E86
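Concretely, an alignment links each graph node to the token (or span) that evokes it. A hypothetical illustration, reusing the example graph above; the released AMR annotations do not include this mapping, which is what makes it hard:

```python
# Hypothetical node-to-token alignments for "The boy wants to go"
# (tokens indexed from 0). Parsers must induce this mapping themselves,
# since the gold annotations do not provide it.
tokens = ["The", "boy", "wants", "to", "go"]
alignment = {
    "w": 2,   # want-01  <- "wants"
    "b": 1,   # boy      <- "boy"
    "g": 4,   # go-02    <- "go"
}
# Note there is no node for "The" or "to": many tokens align to nothing,
# and some nodes (e.g. reentrancies) have no dedicated token at all.
```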
We evaluate the character-level translation method for neural semantic parsing on a large corpus of sentences annotated with Abstract Meaning Representations (AMRs).
Ranked #9 on AMR Parsing on LDC2017T10
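The page does not give the model details, but "character-level translation" means the parser consumes the sentence as a sequence of characters rather than word tokens. A minimal, hypothetical sketch of the input encoding:

```python
# Minimal sketch of character-level input encoding for a seq2seq
# semantic parser; the vocabulary and special symbols are illustrative.
sentence = "The boy wants to go"

# Character vocabulary built from the training data (hypothetical).
chars = sorted(set(sentence)) + ["<s>", "</s>"]
char2id = {c: i for i, c in enumerate(chars)}

# The encoder consumes one id per character, so rare and unseen
# words decompose into known characters instead of becoming <unk>.
ids = [char2id["<s>"]] + [char2id[c] for c in sentence] + [char2id["</s>"]]
print(ids)
```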
In the literature, research on abstract meaning representation (AMR) parsing has been heavily constrained by the size of the human-curated datasets that are critical to building a high-performing AMR parser.
Ranked #1 on AMR Parsing on LDC2017T10