Search Results for author: Tahira Naseem

Found 29 papers, 10 papers with code

Latent Principle Discovery for Language Model Self-Improvement

no code implementations22 May 2025 Keshav Ramji, Tahira Naseem, Ramón Fernandez Astudillo

When language model (LM) users aim to improve the quality of a model's generations, it is crucial to specify concrete behavioral attributes that the model should strive to reflect.

Clustering Language Modeling +1

Insertion Language Models: Sequence Generation with Arbitrary-Position Insertions

no code implementations9 May 2025 Dhruvesh Patel, Aishwarya Sahoo, Avinash Amballa, Tahira Naseem, Tim G. J. Rudner, Andrew McCallum

Masked Diffusion Models (MDMs) address some of these limitations, but the process of unmasking multiple tokens simultaneously in MDMs can introduce incoherences, and MDMs cannot handle arbitrary infilling constraints when the number of tokens to be filled in is not known in advance.

Denoising Position +1

BRAIn: Bayesian Reward-conditioned Amortized Inference for natural language generation from feedback

no code implementations4 Feb 2024 Gaurav Pandey, Yatin Nandwani, Tahira Naseem, Mayank Mishra, Guangxuan Xu, Dinesh Raghu, Sachindra Joshi, Asim Munawar, Ramón Fernandez Astudillo

Distribution matching methods for language model alignment such as Generation with Distributional Control (GDC) and Distributional Policy Gradient (DPG) have not received the same level of attention in reinforcement learning from human feedback (RLHF) as contrastive methods such as Sequence Likelihood Calibration (SLiC), Direct Preference Optimization (DPO) and its variants.

Language Modeling Language Modelling +1
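
The abstract above contrasts distribution-matching alignment methods with contrastive preference methods such as DPO. As background only, here is a minimal PyTorch sketch of the standard DPO objective (Rafailov et al., 2023); variable names are illustrative and not taken from the paper's code:

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Direct Preference Optimization loss (sketch).

    Inputs are summed log-probabilities of full responses under the
    trained policy and a frozen reference model; names are illustrative.
    """
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Contrastive objective: widen the implicit reward margin between the
    # preferred and dispreferred responses.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```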

AMR Parsing with Instruction Fine-tuned Pre-trained Language Models

no code implementations24 Apr 2023 Young-suk Lee, Ramón Fernandez Astudillo, Radu Florian, Tahira Naseem, Salim Roukos

Language models instruction fine-tuned on a collection of instruction-annotated datasets (FLAN) have been shown to be highly effective at improving model performance and generalization to unseen tasks.

Abstract Meaning Representation AMR Parsing +2

Inducing and Using Alignments for Transition-based AMR Parsing

2 code implementations NAACL 2022 Andrew Drozdov, Jiawei Zhou, Radu Florian, Andrew McCallum, Tahira Naseem, Yoon Kim, Ramon Fernandez Astudillo

These alignments are learned separately from parser training and require a complex pipeline of rule-based components, pre-processing, and post-processing to satisfy domain-specific constraints.

Abstract Meaning Representation AMR Parsing

DocAMR: Multi-Sentence AMR Representation and Evaluation

1 code implementation NAACL 2022 Tahira Naseem, Austin Blodgett, Sadhana Kumaravel, Tim O'Gorman, Young-suk Lee, Jeffrey Flanigan, Ramón Fernandez Astudillo, Radu Florian, Salim Roukos, Nathan Schneider

Despite extensive research on parsing English sentences into Abstract Meaning Representation (AMR) graphs, which are compared to gold graphs via the Smatch metric, full-document parsing into a unified graph representation lacks a well-defined representation and evaluation.

coreference-resolution Sentence

Maximum Bayes Smatch Ensemble Distillation for AMR Parsing

3 code implementations NAACL 2022 Young-suk Lee, Ramon Fernandez Astudillo, Thanh Lam Hoang, Tahira Naseem, Radu Florian, Salim Roukos

AMR parsing has experienced an unprecedented increase in performance in the last three years, due to a mixture of effects including architecture improvements and transfer learning.

 Ranked #1 on AMR Parsing on LDC2020T02 (using extra training data)

AMR Parsing Data Augmentation +3

A Two-Stage Approach towards Generalization in Knowledge Base Question Answering

no code implementations10 Nov 2021 Srinivas Ravishankar, June Thai, Ibrahim Abdelaziz, Nandana Mihidukulasooriya, Tahira Naseem, Pavan Kapanipathi, Gaetano Rossiello, Achille Fokoue

Most existing approaches for Knowledge Base Question Answering (KBQA) focus on a specific underlying knowledge base either because of inherent assumptions in the approach, or because evaluating it on a different knowledge base requires non-trivial changes.

Knowledge Base Question Answering Knowledge Graphs +3

Structure-aware Fine-tuning of Sequence-to-sequence Transformers for Transition-based AMR Parsing

1 code implementation EMNLP 2021 Jiawei Zhou, Tahira Naseem, Ramón Fernandez Astudillo, Young-suk Lee, Radu Florian, Salim Roukos

We provide a detailed comparison with recent progress in AMR parsing and show that the proposed parser retains the desirable properties of previous transition-based approaches, while being simpler and reaching the new parsing state of the art for AMR 2.0, without the need for graph re-categorization.

Ranked #9 on AMR Parsing on LDC2017T10 (using extra training data)

Abstract Meaning Representation AMR Parsing +2

Structural Guidance for Transformer Language Models

1 code implementation ACL 2021 Peng Qian, Tahira Naseem, Roger Levy, Ramón Fernandez Astudillo

Here we study whether structural guidance leads to more human-like systematic linguistic generalization in Transformer language models without resorting to pre-training on very large amounts of data.

Language Modeling Language Modelling

AMR Parsing with Action-Pointer Transformer

1 code implementation NAACL 2021 Jiawei Zhou, Tahira Naseem, Ramón Fernandez Astudillo, Radu Florian

In this work, we propose a transition-based system that combines hard-attention over sentences with a target-side action pointer mechanism to decouple source tokens from node representations and address alignments.

Abstract Meaning Representation AMR Parsing +3
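
The entry above describes combining hard attention over the source sentence with a target-side action-pointer mechanism. As a rough illustration of the pointer idea only, not the paper's implementation, the sketch below scores previously generated decoder states so that an edge action can point at an earlier node; the module name, wiring, and dimensions are assumptions:

```python
import torch
import torch.nn as nn

class TargetSidePointer(nn.Module):
    """Minimal pointer head over previously generated target positions."""

    def __init__(self, hidden_dim):
        super().__init__()
        self.query = nn.Linear(hidden_dim, hidden_dim)
        self.key = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, decoder_state, past_states, mask=None):
        # decoder_state: (batch, hidden); past_states: (batch, t, hidden)
        q = self.query(decoder_state).unsqueeze(1)       # (batch, 1, hidden)
        k = self.key(past_states)                        # (batch, t, hidden)
        scores = (q * k).sum(-1) / k.size(-1) ** 0.5     # (batch, t)
        if mask is not None:
            # Mask out positions that are not valid pointer targets.
            scores = scores.masked_fill(~mask, float("-inf"))
        # Distribution over earlier actions (graph nodes) to point at.
        return torch.log_softmax(scores, dim=-1)
```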

Bootstrapping Multilingual AMR with Contextual Word Alignments

no code implementations EACL 2021 Janaki Sheth, Young-suk Lee, Ramon Fernandez Astudillo, Tahira Naseem, Radu Florian, Salim Roukos, Todd Ward

We develop high-performance multilingual Abstract Meaning Representation (AMR) systems by projecting English AMR annotations to other languages with weak supervision.

Multilingual Word Embeddings Word Alignment +1

Pushing the Limits of AMR Parsing with Self-Learning

1 code implementation Findings of the Association for Computational Linguistics 2020 Young-suk Lee, Ramon Fernandez Astudillo, Tahira Naseem, Revanth Gangi Reddy, Radu Florian, Salim Roukos

Abstract Meaning Representation (AMR) parsing has experienced a notable growth in performance in the last two years, due both to the impact of transfer learning and the development of novel architectures specific to AMR.

Abstract Meaning Representation AMR Parsing +5

Rewarding Smatch: Transition-Based AMR Parsing with Reinforcement Learning

no code implementations ACL 2019 Tahira Naseem, Abhishek Shah, Hui Wan, Radu Florian, Salim Roukos, Miguel Ballesteros

Our work involves enriching the Stack-LSTM transition-based AMR parser (Ballesteros and Al-Onaizan, 2017) by augmenting training with Policy Learning and rewarding the Smatch score of sampled graphs.

AMR Parsing reinforcement-learning +2
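
The abstract above describes augmenting transition-based AMR parsing with policy learning, rewarding the Smatch score of sampled graphs. A minimal REINFORCE-style sketch of that training signal follows; `parser.sample`, `parser.actions_to_graph`, and `compute_smatch` are hypothetical interfaces used only for illustration, not the paper's API:

```python
import torch

def policy_gradient_step(parser, optimizer, sentence, gold_amr,
                         compute_smatch, n_samples=4):
    """One REINFORCE-style update that rewards Smatch of sampled graphs."""
    log_probs, rewards = [], []
    for _ in range(n_samples):
        # Sample an action sequence and its total log-probability.
        actions, log_prob = parser.sample(sentence)
        graph = parser.actions_to_graph(actions)
        rewards.append(compute_smatch(graph, gold_amr))
        log_probs.append(log_prob)
    rewards = torch.tensor(rewards)
    baseline = rewards.mean()  # simple baseline to reduce variance
    # Increase the likelihood of action sequences with above-baseline Smatch.
    loss = -(torch.stack(log_probs) * (rewards - baseline)).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```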
