Search Results for author: Ramón Fernandez Astudillo

Found 11 papers, 7 papers with code

Structured Chain-of-Thought Prompting for Few-Shot Generation of Content-Grounded QA Conversations

no code implementations · 19 Feb 2024 · Md Arafat Sultan, Jatin Ganhotra, Ramón Fernandez Astudillo

We introduce a structured chain-of-thought (SCoT) prompting approach to generating content-grounded multi-turn question-answer conversations using a pre-trained large language model (LLM).
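
The abstract only names the idea, so here is a minimal sketch of what a structured chain-of-thought prompt for grounded QA generation could look like. The step wording and template below are hypothetical illustrations, not the paper's actual prompt.

```python
# Illustrative SCoT-style prompt builder: the generation task is decomposed
# into explicit intermediate steps before asking for the conversation.
# All step names and the template are invented for illustration.
def build_scot_prompt(document: str, num_turns: int = 3) -> str:
    steps = [
        "Read the document and list its key facts.",
        "For each turn, pick a fact not yet discussed.",
        "Write a user question grounded in that fact.",
        "Write an answer that quotes or paraphrases the document.",
    ]
    lines = ["Document:", document, "", "Follow these steps:"]
    lines += [f"{i}. {s}" for i, s in enumerate(steps, 1)]
    lines.append(f"\nNow generate a {num_turns}-turn QA conversation.")
    return "\n".join(lines)

prompt = build_scot_prompt("Mount Everest is 8,849 m tall.", num_turns=2)
```

Structuring the reasoning this way is what lets a single pre-trained LLM produce grounded multi-turn data without turn-by-turn human supervision.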

Tasks: Hallucination · Language Modelling · +1

BRAIn: Bayesian Reward-conditioned Amortized Inference for natural language generation from feedback

no code implementations · 4 Feb 2024 · Gaurav Pandey, Yatin Nandwani, Tahira Naseem, Mayank Mishra, Guangxuan Xu, Dinesh Raghu, Sachindra Joshi, Asim Munawar, Ramón Fernandez Astudillo

Distribution matching methods for language model alignment such as Generation with Distributional Control (GDC) and Distributional Policy Gradient (DPG) have not received the same level of attention in reinforcement learning from human feedback (RLHF) as contrastive methods such as Sequence Likelihood Calibration (SLiC), Direct Preference Optimization (DPO) and its variants.
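
For context on the contrastive side of this comparison, the DPO objective mentioned above can be written down compactly. The sketch below is a minimal single-pair version using summed sequence log-probabilities; real implementations batch this over tokenized sequences.

```python
import math

def dpo_loss(logp_chosen: float, logp_rejected: float,
             ref_logp_chosen: float, ref_logp_rejected: float,
             beta: float = 0.1) -> float:
    """DPO loss for one preference pair: -log sigmoid(beta * margin),
    where the margin compares policy-vs-reference log-ratios of the
    chosen and rejected responses. Minimal sketch, not a full trainer."""
    margin = (logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))
```

When the policy matches the reference (zero margin) the loss is log 2, and it shrinks as the policy assigns relatively more mass to the chosen response; distribution matching methods like GDC and DPG instead fit the policy to an explicit reward-conditioned target distribution.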

Tasks: Language Modelling · Text Generation

Ensemble-Instruct: Generating Instruction-Tuning Data with a Heterogeneous Mixture of LMs

1 code implementation · 21 Oct 2023 · Young-suk Lee, Md Arafat Sultan, Yousef El-Kurdi, Tahira Naseem, Asim Munawar, Radu Florian, Salim Roukos, Ramón Fernandez Astudillo

Using in-context learning (ICL) for data generation, techniques such as Self-Instruct (Wang et al., 2023) or the follow-up Alpaca (Taori et al., 2023) can train strong conversational agents with only a small amount of human supervision.
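
The Self-Instruct-style generation loop referenced here boils down to sampling a few seed instructions as in-context examples and asking an LM to continue the pattern. The sketch below shows only the prompt-assembly step; the seed tasks and format are invented for illustration.

```python
import random

# Hypothetical seed pool standing in for human-written seed instructions.
SEED_TASKS = [
    "Summarize the following paragraph in one sentence.",
    "Translate the sentence into French.",
    "List three synonyms for the given word.",
]

def build_generation_prompt(seeds, k: int = 2, rng=None) -> str:
    """Sample k seed instructions as in-context examples and leave the
    next slot open for the LM to fill with a new instruction."""
    rng = rng or random.Random(0)
    examples = rng.sample(seeds, k)
    body = "\n".join(f"Instruction {i}: {s}" for i, s in enumerate(examples, 1))
    return body + f"\nInstruction {k + 1}:"

prompt = build_generation_prompt(SEED_TASKS, k=2)
```

Ensemble-Instruct's twist is to draw the completions from a heterogeneous mixture of LMs rather than a single model, then filter and pool the outputs into instruction-tuning data.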

Tasks: In-Context Learning

AMR Parsing with Instruction Fine-tuned Pre-trained Language Models

no code implementations · 24 Apr 2023 · Young-suk Lee, Ramón Fernandez Astudillo, Radu Florian, Tahira Naseem, Salim Roukos

Language models instruction fine-tuned on a collection of instruction-annotated datasets (FLAN) have proven highly effective at improving model performance and generalization to unseen tasks.

Tasks: AMR Parsing · Semantic Role Labeling

DocAMR: Multi-Sentence AMR Representation and Evaluation

1 code implementation · NAACL 2022 · Tahira Naseem, Austin Blodgett, Sadhana Kumaravel, Tim O'Gorman, Young-suk Lee, Jeffrey Flanigan, Ramón Fernandez Astudillo, Radu Florian, Salim Roukos, Nathan Schneider

Despite extensive research on parsing English sentences into Abstract Meaning Representation (AMR) graphs, which are compared to gold graphs via the Smatch metric, full-document parsing into a unified graph representation lacks a well-defined representation and evaluation.

Tasks: Coreference Resolution · Sentence

Structure-aware Fine-tuning of Sequence-to-sequence Transformers for Transition-based AMR Parsing

1 code implementation · EMNLP 2021 · Jiawei Zhou, Tahira Naseem, Ramón Fernandez Astudillo, Young-suk Lee, Radu Florian, Salim Roukos

We provide a detailed comparison with recent progress in AMR parsing and show that the proposed parser retains the desirable properties of previous transition-based approaches, while being simpler and reaching the new parsing state of the art for AMR 2.0, without the need for graph re-categorization.
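
To make "transition-based" concrete, here is a toy transition system that builds a small graph from a left-to-right action sequence. The action set (SHIFT, NODE, left-arc) and the oracle sequence are invented for illustration; the paper's actual action vocabulary and oracle are more elaborate.

```python
# Toy transition-based parse: actions consumed left-to-right build up a
# node list and an edge list over the input tokens. Illustrative only.
def run_transitions(tokens, actions):
    nodes, edges, cursor = [], [], 0
    for act in actions:
        if act == "SHIFT":                     # advance over the next token
            cursor += 1
        elif act.startswith("NODE("):          # create node aligned to cursor
            nodes.append((cursor, act[5:-1]))
        elif act.startswith("LA("):            # arc: newest node -> previous node
            edges.append((len(nodes) - 1, act[3:-1], len(nodes) - 2))
    return nodes, edges

toks = ["the", "boy", "runs"]
acts = ["SHIFT", "NODE(boy)", "SHIFT", "NODE(run-01)", "LA(ARG0)"]
nodes, edges = run_transitions(toks, acts)
```

The structure-aware fine-tuning idea is to have a pre-trained sequence-to-sequence Transformer predict such action sequences directly, so the graph structure constrains decoding.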

Ranked #9 on AMR Parsing on LDC2017T10 (using extra training data)

Tasks: AMR Parsing · Decoder · +1

Structural Guidance for Transformer Language Models

1 code implementation · ACL 2021 · Peng Qian, Tahira Naseem, Roger Levy, Ramón Fernandez Astudillo

Here we study whether structural guidance leads to more human-like systematic linguistic generalization in Transformer language models without resorting to pre-training on very large amounts of data.

Tasks: Language Modelling

AMR Parsing with Action-Pointer Transformer

1 code implementation · NAACL 2021 · Jiawei Zhou, Tahira Naseem, Ramón Fernandez Astudillo, Radu Florian

In this work, we propose a transition-based system that combines hard-attention over sentences with a target-side action pointer mechanism to decouple source tokens from node representations and address alignments.
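
The pointer mechanism here can be pictured as hard attention over previously generated decoder states: the model scores past action states and commits to the argmax rather than a soft mixture. The vectors and dot-product scoring below are a toy stand-in for the learned representations.

```python
# Minimal sketch of a target-side action pointer with hard attention:
# score each past action state against the current query and point
# (argmax) at exactly one of them. Toy vectors, illustrative only.
def point(query, action_states):
    scores = [sum(q * k for q, k in zip(query, state)) for state in action_states]
    return max(range(len(scores)), key=scores.__getitem__)  # hard attention

states = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
idx = point([0.0, 1.0], states)
```

Pointing at past actions rather than source tokens is what decouples node representations from the input sentence and lets the system handle re-entrant alignments.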

Tasks: AMR Parsing · Hard Attention · +2
