Search Results for author: Tahira Naseem

Found 27 papers, 10 papers with code

Pushing the Limits of AMR Parsing with Self-Learning

1 code implementation • Findings of the Association for Computational Linguistics: EMNLP 2020 • Young-suk Lee, Ramon Fernandez Astudillo, Tahira Naseem, Revanth Gangi Reddy, Radu Florian, Salim Roukos

Abstract Meaning Representation (AMR) parsing has experienced a notable growth in performance in the last two years, due both to the impact of transfer learning and the development of novel architectures specific to AMR.

AMR Parsing • Machine Translation +4

Structure-aware Fine-tuning of Sequence-to-sequence Transformers for Transition-based AMR Parsing

1 code implementation • EMNLP 2021 • Jiawei Zhou, Tahira Naseem, Ramón Fernandez Astudillo, Young-suk Lee, Radu Florian, Salim Roukos

We provide a detailed comparison with recent progress in AMR parsing and show that the proposed parser retains the desirable properties of previous transition-based approaches, while being simpler and reaching the new parsing state of the art for AMR 2.0, without the need for graph re-categorization.

Ranked #9 on AMR Parsing on LDC2017T10 (using extra training data)

AMR Parsing • Sentence
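
To make the transition-based framing concrete, here is a minimal sketch of the kind of action-sequence target such a seq2seq parser could be fine-tuned on. The action inventory (SHIFT, PRED, LA, RA) and the oracle sequence are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical transition-action target for structure-aware fine-tuning.
sentence = "The boy wants to go".split()

# Illustrative oracle: SHIFT advances over tokens, PRED(c) creates a node
# with concept c, LA/RA(label) draw a left/right arc to the newest node.
actions = [
    "SHIFT",                                # The
    "SHIFT", "PRED(boy)",                   # boy
    "SHIFT", "PRED(want-01)", "LA(:ARG0)",  # wants
    "SHIFT",                                # to
    "SHIFT", "PRED(go-02)", "RA(:ARG1)", "LA(:ARG0)",  # go
]

# The parser is fine-tuned as plain seq2seq: sentence in, actions out.
src = " ".join(sentence)
tgt = " ".join(actions)
print(src, "->", tgt)
```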

Maximum Bayes Smatch Ensemble Distillation for AMR Parsing

2 code implementations • NAACL 2022 • Young-suk Lee, Ramon Fernandez Astudillo, Thanh Lam Hoang, Tahira Naseem, Radu Florian, Salim Roukos

AMR parsing has experienced an unprecedented increase in performance in the last three years, due to a mixture of effects including architecture improvements and transfer learning.

Ranked #1 on AMR Parsing on LDC2020T02 (using extra training data)

AMR Parsing • Data Augmentation +3
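
The selection step behind maximum-Bayes-risk-style ensembling can be sketched in a few lines: each ensemble member proposes a parse, and the candidate most similar on average to the others becomes the distillation target. A hedged sketch, using plain triple-overlap F1 as a stand-in for real Smatch (which also aligns variables):

```python
def overlap_f1(a, b):
    """Symmetric triple-overlap F1 -- a crude stand-in for Smatch."""
    if not a or not b:
        return 0.0
    inter = len(a & b)
    p, r = inter / len(a), inter / len(b)
    return 2 * p * r / (p + r) if p + r else 0.0

# Candidate parses from three hypothetical ensemble members,
# each represented as a set of (head, relation, dependent) triples.
candidates = [
    {("want-01", ":ARG0", "boy"), ("want-01", ":ARG1", "go-02")},
    {("want-01", ":ARG0", "boy"), ("want-01", ":ARG1", "go-02"),
     ("go-02", ":ARG0", "boy")},
    {("want-01", ":ARG1", "go-02")},
]

# Keep the candidate closest on average to all others; that parse then
# serves as the silver target for distilling a single student parser.
best = max(candidates,
           key=lambda c: sum(overlap_f1(c, o) for o in candidates if o is not c))
print(sorted(best))
```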

Inducing and Using Alignments for Transition-based AMR Parsing

1 code implementation • NAACL 2022 • Andrew Drozdov, Jiawei Zhou, Radu Florian, Andrew McCallum, Tahira Naseem, Yoon Kim, Ramon Fernandez Astudillo

These alignments are learned separately from parser training and require a complex pipeline of rule-based components, pre-processing, and post-processing to satisfy domain-specific constraints.

AMR Parsing

DocAMR: Multi-Sentence AMR Representation and Evaluation

1 code implementation • NAACL 2022 • Tahira Naseem, Austin Blodgett, Sadhana Kumaravel, Tim O'Gorman, Young-suk Lee, Jeffrey Flanigan, Ramón Fernandez Astudillo, Radu Florian, Salim Roukos, Nathan Schneider

Despite extensive research on parsing English sentences into Abstract Meaning Representation (AMR) graphs, which are compared to gold graphs via the Smatch metric, full-document parsing into a unified graph representation lacks a well-defined representation and evaluation.

Coreference Resolution • Sentence
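
For reference, sentence-level Smatch can be computed with the reference `smatch` package (`pip install smatch`); the function names below (`get_amr_match`, `compute_f`) are assumed from that implementation. DocAMR's contribution is extending this kind of evaluation to a unified whole-document graph.

```python
# Sentence-level Smatch between a predicted and a gold AMR, using the
# reference smatch package; API names assumed from its implementation.
import smatch

gold = "(w / want-01 :ARG0 (b / boy) :ARG1 (g / go-02 :ARG0 b))"
pred = "(w / want-01 :ARG0 (b / boy) :ARG1 (g / go-02))"

# Best variable alignment, then counts of matched/test/gold triples.
match, test_total, gold_total = smatch.get_amr_match(pred, gold)
precision, recall, f = smatch.compute_f(match, test_total, gold_total)
print(f"Smatch P={precision:.2f} R={recall:.2f} F={f:.2f}")
```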

AMR Parsing with Action-Pointer Transformer

1 code implementation • NAACL 2021 • Jiawei Zhou, Tahira Naseem, Ramón Fernandez Astudillo, Radu Florian

In this work, we propose a transition-based system that combines hard-attention over sentences with a target-side action pointer mechanism to decouple source tokens from node representations and address alignments.

AMR Parsing • Hard Attention +2
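
A minimal sketch of the target-side pointer idea: node-creating actions occupy target positions, and edge actions refer back to those positions rather than to source tokens. The action names and tuple layout here are illustrative assumptions, not the paper's exact design.

```python
# Illustrative action sequence: PRED creates a node at its own target
# position; ARC(label, head_pos, dep_pos) points back to those positions.
actions = [
    ("PRED", "boy"),         # target position 0
    ("PRED", "want-01"),     # target position 1
    ("ARC", ":ARG0", 1, 0),  # want-01 -> boy
    ("PRED", "go-02"),       # target position 3
    ("ARC", ":ARG1", 1, 3),  # want-01 -> go-02
    ("ARC", ":ARG0", 3, 0),  # go-02  -> boy (re-entrancy via pointing)
]

# Resolving the pointers recovers the graph without any token alignment.
nodes = {i: a[1] for i, a in enumerate(actions) if a[0] == "PRED"}
triples = []
for a in actions:
    if a[0] == "ARC":
        _, label, head, dep = a
        triples.append((nodes[head], label, nodes[dep]))
print(triples)
```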

Structural Guidance for Transformer Language Models

1 code implementation • ACL 2021 • Peng Qian, Tahira Naseem, Roger Levy, Ramón Fernandez Astudillo

Here we study whether structural guidance leads to more human-like systematic linguistic generalization in Transformer language models without resorting to pre-training on very large amounts of data.

Language Modelling

Rewarding Smatch: Transition-Based AMR Parsing with Reinforcement Learning

no code implementations • ACL 2019 • Tahira Naseem, Abhishek Shah, Hui Wan, Radu Florian, Salim Roukos, Miguel Ballesteros

Our work involves enriching the Stack-LSTM transition-based AMR parser (Ballesteros and Al-Onaizan, 2017) by augmenting training with Policy Learning and rewarding the Smatch score of sampled graphs.

AMR Parsing • Reinforcement Learning +1
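
The training signal can be sketched as vanilla REINFORCE with Smatch as the reward: sample a parse, score it against gold, and scale the log-likelihood gradient by that score. Below is a toy two-outcome version with made-up rewards; the actual system uses a Stack-LSTM parser and real Smatch.

```python
# Toy REINFORCE loop where the "reward" plays the role of Smatch F1.
import math, random

random.seed(0)

def sample_parse(theta):
    # Stand-in stochastic "parser": higher theta favors the better parse.
    p_good = 1 / (1 + math.exp(-theta))
    good = random.random() < p_good
    return good, (p_good if good else 1 - p_good)

def smatch_reward(good):
    return 0.9 if good else 0.3  # pretend Smatch F1 of each sampled parse

theta, lr = 0.0, 0.5
for _ in range(200):
    good, prob = sample_parse(theta)
    reward = smatch_reward(good)
    # d/dtheta log p(sample) for this Bernoulli(sigmoid(theta)) toy:
    # (1 - prob) when the good parse was drawn, else -(1 - prob).
    grad_log_p = (1 - prob) if good else -(1 - prob)
    theta += lr * reward * grad_log_p

print(f"theta = {theta:.2f}, p(good parse) = {1 / (1 + math.exp(-theta)):.2f}")
```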

Bootstrapping Multilingual AMR with Contextual Word Alignments

no code implementations • EACL 2021 • Janaki Sheth, Young-suk Lee, Ramon Fernandez Astudillo, Tahira Naseem, Radu Florian, Salim Roukos, Todd Ward

We develop high-performance multilingual Abstract Meaning Representation (AMR) systems by projecting English AMR annotations to other languages with weak supervision.

Multilingual Word Embeddings • Word Alignment +1
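
A minimal sketch of the projection idea, assuming node-to-token alignments for the English AMR and a word alignment into the target language. All data below is toy; the paper induces such alignments with contextual embeddings.

```python
# Toy data standing in for the paper's learned contextual alignments.
en_tokens = ["the", "boy", "wants", "to", "go"]
es_tokens = ["el", "niño", "quiere", "ir"]

node_to_en = {"boy": 1, "want-01": 2, "go-02": 4}   # AMR node -> EN token
en_to_es = {0: 0, 1: 1, 2: 2, 3: 3, 4: 3}           # EN -> ES word alignment

# Composing the two maps anchors each AMR node in the target sentence,
# yielding silver AMR annotations for the target language.
projected = {node: es_tokens[en_to_es[i]] for node, i in node_to_en.items()}
print(projected)  # {'boy': 'niño', 'want-01': 'quiere', 'go-02': 'ir'}
```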

A Two-Stage Approach towards Generalization in Knowledge Base Question Answering

no code implementations • 10 Nov 2021 • Srinivas Ravishankar, June Thai, Ibrahim Abdelaziz, Nandana Mihidukulasooriya, Tahira Naseem, Pavan Kapanipathi, Gaetano Rossiello, Achille Fokoue

Most existing approaches for Knowledge Base Question Answering (KBQA) focus on a specific underlying knowledge base either because of inherent assumptions in the approach, or because evaluating it on a different knowledge base requires non-trivial changes.

Knowledge Base Question Answering • Knowledge Graphs +3

Learning to Transpile AMR into SPARQL

no code implementations • 15 Dec 2021 • Mihaela Bornea, Ramon Fernandez Astudillo, Tahira Naseem, Nandana Mihindukulasooriya, Ibrahim Abdelaziz, Pavan Kapanipathi, Radu Florian, Salim Roukos

We propose a transition-based system to transpile Abstract Meaning Representation (AMR) into SPARQL for Knowledge Base Question Answering (KBQA).

Knowledge Base Question Answering • Semantic Parsing
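
A hedged sketch of the transpilation target: an AMR-style predicate-argument structure for "Who directed Titanic?" mapped onto a SPARQL query. The property map and `ex:` IRIs are hypothetical placeholders, not the paper's rules or a real KB schema.

```python
# AMR-ish triples for "Who directed Titanic?"; "unknown" marks the wh- element.
amr = [
    ("direct-01", ":ARG0", "unknown"),
    ("direct-01", ":ARG1", "Titanic"),
]

# Stand-in for the transition system's lexicon: frame -> KB property.
property_map = {"direct-01": "ex:director"}

frame, _, entity = amr[1]
prop = property_map[frame]
sparql = f"SELECT ?x WHERE {{ ex:{entity} {prop} ?x . }}"
print(sparql)  # SELECT ?x WHERE { ex:Titanic ex:director ?x . }
```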

AMR Parsing with Instruction Fine-tuned Pre-trained Language Models

no code implementations • 24 Apr 2023 • Young-suk Lee, Ramón Fernandez Astudillo, Radu Florian, Tahira Naseem, Salim Roukos

Language models instruction-fine-tuned on a collection of instruction-annotated datasets (FLAN) have proven highly effective at improving model performance and generalization to unseen tasks.

AMR Parsing • Semantic Role Labeling

BRAIn: Bayesian Reward-conditioned Amortized Inference for natural language generation from feedback

no code implementations • 4 Feb 2024 • Gaurav Pandey, Yatin Nandwani, Tahira Naseem, Mayank Mishra, Guangxuan Xu, Dinesh Raghu, Sachindra Joshi, Asim Munawar, Ramón Fernandez Astudillo

Following the success of Proximal Policy Optimization (PPO) for Reinforcement Learning from Human Feedback (RLHF), new techniques such as Sequence Likelihood Calibration (SLiC) and Direct Preference Optimization (DPO) have been proposed that are offline in nature and use rewards in an indirect manner.

Text Generation
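
For contrast with PPO-style online RLHF, the DPO objective the abstract mentions can be written in a few lines: it needs only preference pairs and a frozen reference policy, with the reward entering implicitly through log-ratios. Toy log-probabilities below; no language model involved.

```python
# Standard DPO loss on one preference pair, with made-up log-probs.
import math

beta = 0.1  # inverse-temperature of the implicit reward

# Log-probabilities of a preferred (w) and dispreferred (l) completion
# under the trained policy and the frozen reference model (toy values).
logp_w, logp_l = -4.2, -5.0
ref_logp_w, ref_logp_l = -4.5, -4.8

# DPO loss: -log sigmoid(beta * (policy/reference log-ratio margin)).
margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
loss = -math.log(1 / (1 + math.exp(-margin)))
print(f"DPO loss = {loss:.4f}")
```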
