1 code implementation • NAACL 2022 • Andrew Drozdov, Jiawei Zhou, Radu Florian, Andrew McCallum, Tahira Naseem, Yoon Kim, Ramon Fernandez Astudillo
These alignments are learned separately from parser training and require a complex pipeline of rule-based components, pre-processing, and post-processing to satisfy domain-specific constraints.
no code implementations • 18 Apr 2022 • Dung Thai, Srinivas Ravishankar, Ibrahim Abdelaziz, Mudit Chaudhary, Nandana Mihindukulasooriya, Tahira Naseem, Rajarshi Das, Pavan Kapanipathi, Achille Fokoue, Andrew McCallum
Yet, in many question answering applications coupled with knowledge bases, the sparse nature of KBs is often overlooked.
1 code implementation • NAACL 2022 • Tahira Naseem, Austin Blodgett, Sadhana Kumaravel, Tim O'Gorman, Young-suk Lee, Jeffrey Flanigan, Ramón Fernandez Astudillo, Radu Florian, Salim Roukos, Nathan Schneider
Despite extensive research on parsing English sentences into Abstract Meaning Representation (AMR) graphs, which are compared to gold graphs via the Smatch metric, full-document parsing into a unified graph representation lacks a well-defined representation and evaluation.
no code implementations • 15 Dec 2021 • Mihaela Bornea, Ramon Fernandez Astudillo, Tahira Naseem, Nandana Mihindukulasooriya, Ibrahim Abdelaziz, Pavan Kapanipathi, Radu Florian, Salim Roukos
We propose a transition-based system to transpile Abstract Meaning Representation (AMR) into SPARQL for Knowledge Base Question Answering (KBQA).
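To make the AMR-to-SPARQL idea concrete, here is a minimal illustrative sketch: simplified AMR-style triples are rewritten as SPARQL basic graph patterns, with unbound nodes becoming query variables. The function name, the `dbo:`/`dbr:` prefixes, and the toy triple are hypothetical and stand in for the paper's actual transition system.

```python
# Toy transpilation sketch: simplified AMR-like triples -> SPARQL.
# Nodes whose names start with '?' become SPARQL variables; others are
# treated as KB entities (hypothetical DBpedia-style prefixes).

def amr_to_sparql(triples):
    """Convert (subject, relation, object) triples into a SPARQL SELECT query."""
    patterns = []
    for s, rel, o in triples:
        subj = s if s.startswith("?") else f"dbr:{s}"
        obj = o if o.startswith("?") else f"dbr:{o}"
        patterns.append(f"{subj} dbo:{rel} {obj} .")
    body = "\n  ".join(patterns)
    variables = sorted(
        {t for tr in triples for t in (tr[0], tr[2]) if t.startswith("?")}
    )
    return f"SELECT {' '.join(variables)} WHERE {{\n  {body}\n}}"

# "Who wrote Moby Dick?" as a single triple pattern
query = amr_to_sparql([("?x", "author", "Moby_Dick")])
```

The real system derives such queries through a sequence of transitions over the AMR graph rather than a one-shot rewrite; this sketch only shows the input/output relationship.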
2 code implementations • NAACL 2022 • Young-suk Lee, Ramon Fernandez Astudillo, Thanh Lam Hoang, Tahira Naseem, Radu Florian, Salim Roukos
AMR parsing has experienced an unprecedented increase in performance in the last three years, due to a mixture of effects including architecture improvements and transfer learning.
Ranked #1 on AMR Parsing on Bio (using extra training data)
no code implementations • 10 Nov 2021 • Srinivas Ravishankar, June Thai, Ibrahim Abdelaziz, Nandana Mihindukulasooriya, Tahira Naseem, Pavan Kapanipathi, Gaetano Rossiello, Achille Fokoue
Most existing approaches for Knowledge Base Question Answering (KBQA) focus on a specific underlying knowledge base either because of inherent assumptions in the approach, or because evaluating it on a different knowledge base requires non-trivial changes.
1 code implementation • EMNLP 2021 • Jiawei Zhou, Tahira Naseem, Ramón Fernandez Astudillo, Young-suk Lee, Radu Florian, Salim Roukos
We provide a detailed comparison with recent progress in AMR parsing and show that the proposed parser retains the desirable properties of previous transition-based approaches, while being simpler and reaching the new parsing state of the art for AMR 2.0, without the need for graph re-categorization.
Ranked #8 on AMR Parsing on LDC2017T10 (using extra training data)
no code implementations • 16 Aug 2021 • Gaetano Rossiello, Nandana Mihindukulasooriya, Ibrahim Abdelaziz, Mihaela Bornea, Alfio Gliozzo, Tahira Naseem, Pavan Kapanipathi
Relation linking is essential to enable question answering over knowledge bases.
Ranked #1 on Relation Linking on QALD-9
no code implementations • ACL 2021 • Tahira Naseem, Srinivas Ravishankar, Nandana Mihindukulasooriya, Ibrahim Abdelaziz, Young-suk Lee, Pavan Kapanipathi, Salim Roukos, Alfio Gliozzo, Alexander Gray
Relation linking is a crucial component of Knowledge Base Question Answering systems.
1 code implementation • ACL 2021 • Peng Qian, Tahira Naseem, Roger Levy, Ramón Fernandez Astudillo
Here we study whether structural guidance leads to more human-like systematic linguistic generalization in Transformer language models without resorting to pre-training on very large amounts of data.
1 code implementation • NAACL 2021 • Jiawei Zhou, Tahira Naseem, Ramón Fernandez Astudillo, Radu Florian
In this work, we propose a transition-based system that combines hard-attention over sentences with a target-side action pointer mechanism to decouple source tokens from node representations and address alignments.
Ranked #1 on AMR Parsing on LDC2014T12
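The action-pointer idea above can be sketched in a few lines: the parser advances over the source sentence, emits graph nodes, and attaches edges by pointing at previously generated node positions rather than at source tokens. The action names and the tiny oracle sequence below are illustrative, not the paper's exact action inventory.

```python
# Minimal sketch of an action-pointer transition system (hypothetical actions):
# SHIFT moves over the source; NODE(x) creates a graph node; EDGE(l,i,j)
# points from node index i to node index j with label l (target-side pointers).

def run_actions(tokens, actions):
    nodes, edges = [], []
    cursor = 0
    for act in actions:
        if act == "SHIFT":
            cursor += 1                                 # advance source cursor
        elif act.startswith("NODE("):
            nodes.append(act[5:-1])                     # generate a node
        elif act.startswith("EDGE("):
            label, src, tgt = act[5:-1].split(",")
            edges.append((int(src), label, int(tgt)))   # pointer to node indices
    return nodes, edges

# "The boy wants to sleep" -> want-01 :ARG0 boy, :ARG1 sleep-01
nodes, edges = run_actions(
    ["The", "boy", "wants", "to", "sleep"],
    ["SHIFT", "SHIFT", "NODE(boy)", "SHIFT", "NODE(want-01)",
     "SHIFT", "SHIFT", "NODE(sleep-01)",
     "EDGE(ARG0,1,0)", "EDGE(ARG1,1,2)"],
)
```

Because edges point at node positions instead of source tokens, node representations are decoupled from the input sentence, which is the decoupling the abstract refers to.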
no code implementations • EACL 2021 • Janaki Sheth, Young-suk Lee, Ramon Fernandez Astudillo, Tahira Naseem, Radu Florian, Salim Roukos, Todd Ward
We develop high performance multilingual Abstract Meaning Representation (AMR) systems by projecting English AMR annotations to other languages with weak supervision.
1 code implementation • Findings (ACL) 2021 • Pavan Kapanipathi, Ibrahim Abdelaziz, Srinivas Ravishankar, Salim Roukos, Alexander Gray, Ramon Astudillo, Maria Chang, Cristina Cornelio, Saswati Dana, Achille Fokoue, Dinesh Garg, Alfio Gliozzo, Sairam Gurajada, Hima Karanam, Naweed Khan, Dinesh Khandelwal, Young-suk Lee, Yunyao Li, Francois Luus, Ndivhuwo Makondo, Nandana Mihindukulasooriya, Tahira Naseem, Sumit Neelam, Lucian Popa, Revanth Reddy, Ryan Riegel, Gaetano Rossiello, Udit Sharma, G P Shrivatsa Bhargav, Mo Yu
Knowledge base question answering (KBQA) is an important task in Natural Language Processing.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Ramon Fernandez Astudillo, Miguel Ballesteros, Tahira Naseem, Austin Blodgett, Radu Florian
Modeling the parser state is key to good performance in transition-based parsing.
Ranked #15 on AMR Parsing on LDC2017T10
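One way to expose parser state to a Transformer, sketched below with hypothetical names and plain lists: specialize attention heads by masking them so one head attends only to source positions currently on the stack and another only to positions in the buffer. This is a simplified illustration of the general idea, not the paper's exact masking scheme.

```python
# Sketch: derive per-head attention masks from transition-parser state.
# Positions reduced away are visible to neither specialized head.

def state_masks(n_tokens, stack, buffer):
    """Return boolean visibility masks over source positions, one per head."""
    stack_mask = [i in stack for i in range(n_tokens)]
    buffer_mask = [i in buffer for i in range(n_tokens)]
    return stack_mask, buffer_mask

# 5-token sentence: tokens 0-1 on the stack, 3-4 still in the buffer,
# token 2 already reduced.
stack_mask, buffer_mask = state_masks(5, stack={0, 1}, buffer={3, 4})
```

In a real model these masks would be applied inside the attention softmax; here they simply make the stack/buffer partition explicit.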
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Young-suk Lee, Ramon Fernandez Astudillo, Tahira Naseem, Revanth Gangi Reddy, Radu Florian, Salim Roukos
Abstract Meaning Representation (AMR) parsing has experienced a notable growth in performance in the last two years, due both to the impact of transfer learning and the development of novel architectures specific to AMR.
Ranked #2 on AMR Parsing on LDC2014T12
no code implementations • 15 Sep 2020 • Parul Awasthy, Tahira Naseem, Jian Ni, Taesun Moon, Radu Florian
The task of event detection and classification is central to most information retrieval applications.
1 code implementation • ACL 2020 • Manuel Mager, Ramon Fernandez Astudillo, Tahira Naseem, Md. Arafat Sultan, Young-suk Lee, Radu Florian, Salim Roukos
Abstract Meaning Representations (AMRs) are broad-coverage sentence-level semantic graphs.
Ranked #8 on AMR-to-Text Generation on LDC2017T10
no code implementations • ACL 2019 • Tahira Naseem, Abhishek Shah, Hui Wan, Radu Florian, Salim Roukos, Miguel Ballesteros
Our work involves enriching the Stack-LSTM transition-based AMR parser (Ballesteros and Al-Onaizan, 2017) by augmenting training with Policy Learning and rewarding the Smatch score of sampled graphs.
Ranked #20 on AMR Parsing on LDC2017T10
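The Smatch-rewarded training described above follows a REINFORCE-style recipe: sample an action sequence from the parser's policy, score the resulting graph against gold, and weight the sample's log-likelihood by that reward. The sketch below uses a toy triple-overlap F1 in place of real Smatch (which also searches over variable mappings); all names are illustrative.

```python
import math

def toy_smatch(pred_triples, gold_triples):
    """Stand-in for Smatch: F1 over toy triple sets (no variable matching)."""
    pred, gold = set(pred_triples), set(gold_triples)
    if not pred or not gold:
        return 0.0
    p = len(pred & gold) / len(pred)
    r = len(pred & gold) / len(gold)
    return 2 * p * r / (p + r) if p + r else 0.0

def reinforce_loss(log_probs, reward, baseline=0.0):
    """Negative (reward - baseline)-weighted log-likelihood of a sampled parse."""
    return -(reward - baseline) * sum(log_probs)

gold = [("want-01", "ARG0", "boy")]
sampled = [("want-01", "ARG0", "boy"), ("want-01", "ARG1", "go-01")]

reward = toy_smatch(sampled, gold)                       # 2/3 for this sample
loss = reinforce_loss([math.log(0.9), math.log(0.8)], reward)
```

Minimizing this loss increases the probability of action sequences whose graphs score above the baseline, which is how the Smatch reward shapes the parser.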
no code implementations • CONLL 2018 • Hui Wan, Tahira Naseem, Young-suk Lee, Vittorio Castelli, Miguel Ballesteros
This paper presents the IBM Research AI submission to the CoNLL 2018 Shared Task on Parsing Universal Dependencies.
no code implementations • 15 Jan 2014 • Tahira Naseem, Benjamin Snyder, Jacob Eisenstein, Regina Barzilay
We demonstrate the effectiveness of multilingual learning for unsupervised part-of-speech tagging.