1 code implementation • NAACL 2022 • Tahira Naseem, Austin Blodgett, Sadhana Kumaravel, Tim O'Gorman, Young-suk Lee, Jeffrey Flanigan, Ramón Fernandez Astudillo, Radu Florian, Salim Roukos, Nathan Schneider
Despite extensive research on parsing English sentences into Abstract Meaning Representation (AMR) graphs, which are compared against gold graphs via the Smatch metric, full-document parsing into a unified graph representation lacks a well-defined representation and evaluation.
1 code implementation • EMNLP 2021 • Jiawei Zhou, Tahira Naseem, Ramón Fernandez Astudillo, Young-suk Lee, Radu Florian, Salim Roukos
We provide a detailed comparison with recent progress in AMR parsing and show that the proposed parser retains the desirable properties of previous transition-based approaches, while being simpler and reaching a new parsing state of the art for AMR 2.0, without the need for graph re-categorization.
Ranked #8 on AMR Parsing on LDC2017T10 (using extra training data)
1 code implementation • ACL 2021 • Peng Qian, Tahira Naseem, Roger Levy, Ramón Fernandez Astudillo
Here we study whether structural guidance leads to more human-like systematic linguistic generalization in Transformer language models without resorting to pre-training on very large amounts of data.
1 code implementation • NAACL 2021 • Jiawei Zhou, Tahira Naseem, Ramón Fernandez Astudillo, Radu Florian
In this work, we propose a transition-based system that combines hard-attention over sentences with a target-side action pointer mechanism to decouple source tokens from node representations and address alignments.
Ranked #1 on AMR Parsing on LDC2014T12
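The idea of target-side pointer actions can be illustrated with a toy transition system. This is a minimal sketch under strong simplifying assumptions, not the authors' parser: the class name, action syntax (`SHIFT`, `NODE(...)`, `LA(...)`, `RA(...)`), and state layout are all hypothetical, chosen only to show how edge actions point at previously created nodes rather than at source tokens.

```python
# Toy sketch of a transition system with a target-side action pointer.
# Assumption: a single hard-attention cursor over the sentence, a growing
# list of target-side nodes, and arc actions that point into that list.
class PointerTransitionParser:
    def __init__(self, tokens):
        self.tokens = tokens
        self.cursor = 0       # hard attention: current source position
        self.nodes = []       # target-side nodes; pointer actions index here
        self.edges = []       # (head_index, label, dependent_index)

    def apply(self, action):
        if action == "SHIFT":
            self.cursor += 1                        # advance hard attention
        elif action.startswith("NODE("):            # e.g. NODE(want-01)
            self.nodes.append(action[5:-1])         # create a graph node
        elif action.startswith("LA("):              # LA(label,i): arc from
            label, i = action[3:-1].split(",")      # node i to newest node
            self.edges.append((int(i), label, len(self.nodes) - 1))
        elif action.startswith("RA("):              # RA(label,i): arc from
            label, i = action[3:-1].split(",")      # newest node to node i
            self.edges.append((len(self.nodes) - 1, label, int(i)))
        return self
```

Because arcs reference node indices rather than token positions, node creation is decoupled from source tokens, which is the property the pointer mechanism provides.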
8 code implementations • 5 Feb 2016 • André F. T. Martins, Ramón Fernandez Astudillo
We propose sparsemax, a new activation function similar to the traditional softmax, but able to output sparse probabilities.
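Sparsemax can be computed in closed form as the Euclidean projection of the logits onto the probability simplex. The sorting-based sketch below follows the standard algorithm (sort, find the support size, threshold); the function name and dimensions are illustrative, not tied to any particular codebase.

```python
import numpy as np

def sparsemax(z):
    """Project logits z onto the probability simplex (sparsemax).

    Unlike softmax, components outside the support are exactly zero.
    """
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]            # sort logits in descending order
    k = np.arange(1, len(z) + 1)
    cumsum = np.cumsum(z_sorted)
    support = 1 + k * z_sorted > cumsum    # entries inside the support set
    k_z = k[support][-1]                   # size of the support
    tau = (cumsum[k_z - 1] - 1) / k_z      # threshold shared by the support
    return np.maximum(z - tau, 0.0)        # clip everything below tau to 0
```

For well-separated logits the output is sparse: most coordinates are exactly zero, while the result still sums to one like a softmax distribution.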
1 code implementation • EMNLP 2015 • Wang Ling, Tiago Luís, Luís Marujo, Ramón Fernandez Astudillo, Silvio Amir, Chris Dyer, Alan W. Black, Isabel Trancoso
We introduce a model for constructing vector representations of words by composing characters using bidirectional LSTMs.
Ranked #4 on Part-Of-Speech Tagging on Penn Treebank
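The character-composition idea can be sketched in a few lines: embed each character, run a recurrence forward and backward over the character sequence, and concatenate the two final states into a word vector. As a simplification, a plain tanh recurrent cell stands in here for the LSTM cells used in the paper, and all dimensions and weight initializations are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny dimensions; the paper uses bidirectional LSTMs over
# character embeddings to build open-vocabulary word representations.
CHARS = "abcdefghijklmnopqrstuvwxyz"
EMB, HID = 8, 12
char_emb = rng.normal(size=(len(CHARS), EMB))        # character embeddings
W_f, U_f = rng.normal(size=(HID, EMB)), rng.normal(size=(HID, HID))
W_b, U_b = rng.normal(size=(HID, EMB)), rng.normal(size=(HID, HID))

def word_vector(word):
    """Compose a word vector from its characters (simplified BiRNN)."""
    xs = [char_emb[CHARS.index(c)] for c in word]
    h_f = np.zeros(HID)
    for x in xs:                        # forward pass over characters
        h_f = np.tanh(W_f @ x + U_f @ h_f)
    h_b = np.zeros(HID)
    for x in reversed(xs):              # backward pass over characters
        h_b = np.tanh(W_b @ x + U_b @ h_b)
    return np.concatenate([h_f, h_b])   # word vector from both directions
```

Because the vector is built from characters rather than looked up in a word table, any string over the character set gets a representation, including words never seen in training.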