Each AMR is a single rooted, directed graph. AMRs include PropBank semantic roles, within-sentence coreference, named entities and types, modality, negation, questions, quantities, and so on.
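To make this concrete, a standard illustrative example (the sentence "The boy wants to go") can be written in PENMAN notation, the textual format commonly used for AMRs; note how the variable `b` is reused to express within-sentence coreference:

```
(w / want-01
   :ARG0 (b / boy)
   :ARG1 (g / go-02
            :ARG0 b))
```

Here `want-01` and `go-02` are PropBank frames, `:ARG0`/`:ARG1` are their semantic roles, and the graph is rooted at `w`.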
Sequence-to-sequence models have shown strong performance across a broad range of applications.
AMR parsing is challenging partly due to the lack of annotated alignments between nodes in the graphs and words in the corresponding sentences.
We evaluate the character-level translation method for neural semantic parsing on a large corpus of sentences annotated with Abstract Meaning Representations (AMRs).
The first extension combines the smatch scoring script with the C6.0 rule-based classifier to produce a human-readable report on the frequency of error patterns observed in the scored AMR graphs.
The output graph is expanded node by node in order of distance to the root, following the intuition of first grasping the main ideas and then digging into the details.
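This root-outward expansion order amounts to a breadth-first traversal. A minimal sketch, using a toy hand-written graph (the node names are illustrative, not from any dataset or the paper's actual decoder):

```python
from collections import deque

# Toy AMR-like graph: each node maps to the list of its children.
graph = {
    "want-01": ["boy", "go-02"],
    "boy": [],
    "go-02": ["boy"],  # re-entrant edge: "boy" is shared
}

def nodes_by_depth(graph, root):
    """Return nodes in breadth-first order, i.e. sorted by
    distance to the root, visiting each node exactly once."""
    order, seen = [], {root}
    queue = deque([root])
    while queue:
        node = queue.popleft()
        order.append(node)
        for child in graph[node]:
            if child not in seen:
                seen.add(child)
                queue.append(child)
    return order

print(nodes_by_depth(graph, "want-01"))
```

The root concept (`want-01`) is emitted first, its immediate arguments next, and deeper modifiers would follow, mirroring the "main ideas first" intuition.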