Despite extensive research on parsing English sentences into Abstract Meaning Representation (AMR) graphs, which are evaluated against gold graphs via the Smatch metric, full-document parsing into a unified graph representation still lacks a well-defined representation and evaluation.
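For concreteness, the sketch below shows the triple-matching core of Smatch: each graph is decomposed into (source, relation, target) triples, and the score is the F1 of matching triples under a one-to-one variable mapping. Here the mapping is supplied by hand; the actual Smatch tool additionally searches for the best mapping via restarted hill-climbing. Function names and the example graphs are illustrative, not taken from the paper.

```python
def triple_f1(pred_triples, gold_triples, var_map):
    """Simplified core of the Smatch score: given a one-to-one mapping from
    predicted-graph variables to gold variables, count matching triples and
    return precision, recall, F1. (Real Smatch also searches for the mapping
    that maximizes matches; this sketch assumes it is given.)"""
    mapped = {(var_map.get(s, s), r, var_map.get(t, t)) for (s, r, t) in pred_triples}
    matched = len(mapped & set(gold_triples))
    p = matched / len(pred_triples)
    r = matched / len(gold_triples)
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

# "The boy wants to go" vs. a parse that missed the reentrant :ARG0 of go-01
gold = {("w", "instance", "want-01"), ("b", "instance", "boy"),
        ("g", "instance", "go-01"), ("w", "ARG0", "b"),
        ("w", "ARG1", "g"), ("g", "ARG0", "b")}
pred = {("x", "instance", "want-01"), ("y", "instance", "boy"),
        ("z", "instance", "go-01"), ("x", "ARG0", "y"), ("x", "ARG1", "z")}
print(triple_f1(pred, gold, {"x": "w", "y": "b", "z": "g"}))  # (1.0, 0.833..., 0.909...)
```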
We provide a detailed comparison with recent progress in AMR parsing and show that the proposed parser retains the desirable properties of previous transition-based approaches while being simpler, achieving a new state of the art for AMR 2.0 without the need for graph re-categorization.
Here we study whether structural guidance leads to more human-like systematic linguistic generalization in Transformer language models without resorting to pre-training on very large amounts of data.
In this work, we propose a transition-based system that combines hard attention over sentences with a target-side action-pointer mechanism to decouple node representations from source tokens and address the alignment problem.
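As a rough illustration of that idea, here is a toy transition system in which edge actions point back into the action history rather than at source tokens, so node identity is decoupled from the input and reentrant edges fall out naturally. The class name, action inventory, and arc-direction conventions are simplified stand-ins, not the paper's actual system or oracle.

```python
class PointerTransitionParser:
    """Toy sketch of a transition system with a target-side action pointer:
    a cursor scans source tokens (hard attention), NODE actions create graph
    nodes aligned to the cursor, and LA/RA edge actions refer back to earlier
    *action steps* rather than to source tokens."""

    def __init__(self, tokens):
        self.tokens = tokens
        self.cursor = 0          # hard attention: current source position
        self.nodes = {}          # action step -> node label
        self.edges = []          # (head step, relation, dependent step)
        self.last_node = None    # step of the most recently created node
        self.history = []        # full action sequence

    def apply(self, action, arg=None, ptr=None):
        step = len(self.history)
        if action == "SHIFT":                       # advance over the source
            self.cursor += 1
        elif action == "NODE":                      # node for the cursor token
            self.nodes[step] = arg
            self.last_node = step
        elif action == "LA":                        # pointed node -> last node
            self.edges.append((ptr, arg, self.last_node))
        elif action == "RA":                        # last node -> pointed node
            self.edges.append((self.last_node, arg, ptr))
        self.history.append((action, arg, ptr))

# "The boy wants to go" -> want-01(:ARG0 boy, :ARG1 go-01(:ARG0 boy))
p = PointerTransitionParser("The boy wants to go".split())
p.apply("SHIFT")                       # skip "The"
p.apply("NODE", "boy")                 # step 1
p.apply("SHIFT")                       # move to "wants"
p.apply("NODE", "want-01")             # step 3
p.apply("RA", ":ARG0", ptr=1)          # want-01 -> boy
p.apply("SHIFT"); p.apply("SHIFT")     # skip "to", move to "go"
p.apply("NODE", "go-01")               # step 7
p.apply("LA", ":ARG1", ptr=3)          # want-01 -> go-01
p.apply("RA", ":ARG0", ptr=1)          # go-01 -> boy (reentrancy via pointer)
print(p.edges)  # [(3, ':ARG0', 1), (3, ':ARG1', 7), (7, ':ARG0', 1)]
```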
We propose sparsemax, a new activation function similar to the traditional softmax, but able to output sparse probabilities.
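Sparsemax has a closed-form solution: the Euclidean projection of the score vector onto the probability simplex, computable by sorting and thresholding. Below is a small NumPy version of that projection; the example input values are illustrative.

```python
import numpy as np

def sparsemax(z):
    """Sparsemax (Martins & Astudillo, 2016): project scores z onto the
    probability simplex. Unlike softmax, low-scoring entries can receive
    exactly zero probability."""
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]                 # scores in descending order
    cumsum = np.cumsum(z_sorted)
    k = np.arange(1, len(z) + 1)
    # Support size: largest k with 1 + k * z_(k) > sum of the top-k scores
    k_z = k[1 + k * z_sorted > cumsum][-1]
    # Threshold tau chosen so the surviving coordinates sum to 1
    tau = (cumsum[k_z - 1] - 1) / k_z
    return np.maximum(z - tau, 0.0)

print(sparsemax([1.5, 0.8, -2.0]))  # [0.85, 0.15, 0.] -- note the exact zero
```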
We introduce a model for constructing vector representations of words by composing characters using bidirectional LSTMs.
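A minimal sketch of this compositional idea in PyTorch follows: characters are embedded, a bidirectional LSTM reads the character sequence, and the final forward and backward states are combined into a word vector. Layer sizes and the final linear combination are illustrative choices, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class CharToWord(nn.Module):
    """Character-to-word composition: embed characters, run a BiLSTM over
    them, and project the concatenated final hidden states to a word vector."""
    def __init__(self, n_chars=100, char_dim=50, hidden_dim=150, word_dim=128):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim)
        self.bilstm = nn.LSTM(char_dim, hidden_dim,
                              bidirectional=True, batch_first=True)
        self.proj = nn.Linear(2 * hidden_dim, word_dim)

    def forward(self, char_ids):             # char_ids: (batch, word_len)
        x = self.char_emb(char_ids)          # (batch, word_len, char_dim)
        _, (h_n, _) = self.bilstm(x)         # h_n: (2, batch, hidden_dim)
        h_fwd, h_bwd = h_n[0], h_n[1]        # final forward / backward states
        return self.proj(torch.cat([h_fwd, h_bwd], dim=-1))

model = CharToWord()
word_vec = model(torch.randint(0, 100, (1, 7)))  # one 7-character "word"
print(word_vec.shape)                            # torch.Size([1, 128])
```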