Search Results for author: Yao Dou

Found 10 papers, 3 papers with code

Improving Minimum Bayes Risk Decoding with Multi-Prompt

1 code implementation · 22 Jul 2024 · David Heineman, Yao Dou, Wei Xu

While instruction fine-tuned LLMs are effective text generators, sensitivity to prompt construction makes performance unstable and sub-optimal in practice.
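For context, Minimum Bayes Risk (MBR) decoding selects, from a pool of sampled outputs, the candidate with the highest expected utility against the rest of the pool; the multi-prompt variant suggested by the title pools candidates generated under several prompt templates before selecting. Below is a minimal illustrative sketch of that general idea, not the paper's implementation; the generate and utility callables are hypothetical placeholders (e.g., a sampling wrapper around an LLM and a text-similarity metric).

    from itertools import chain

    def mbr_select(candidates, utility):
        # Monte Carlo MBR: score each candidate by its total utility
        # against every other candidate, then return the highest scorer.
        def expected_utility(y):
            return sum(utility(y, y_ref) for y_ref in candidates if y_ref is not y)
        return max(candidates, key=expected_utility)

    def multi_prompt_mbr(prompts, generate, utility, n_per_prompt=8):
        # Pool samples drawn under several prompt templates, then apply
        # ordinary MBR selection over the combined candidate set.
        pool = list(chain.from_iterable(generate(p, n_per_prompt) for p in prompts))
        return mbr_select(pool, utility)

Pooling over prompts hedges against any single badly phrased prompt dominating the candidate set, which is the instability the abstract points to.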

GPT-4 Jailbreaks Itself with Near-Perfect Success Using Self-Explanation

no code implementations · 21 May 2024 · Govind Ramesh, Yao Dou, Wei Xu

Research on jailbreaking has been valuable for testing and understanding the safety and security issues of large language models (LLMs).

Automatic and Human-AI Interactive Text Generation

no code implementations · 5 Oct 2023 · Yao Dou, Philippe Laban, Claire Gardent, Wei Xu

In this tutorial, we focus on text-to-text generation, a class of natural language generation (NLG) tasks that takes a piece of text as input and generates a revision improved according to specific criteria (e.g., readability or linguistic style), while largely retaining the original meaning and length of the text.

Tasks: Paraphrase Generation, Style Transfer, +2

Thresh: A Unified, Customizable and Deployable Platform for Fine-Grained Text Evaluation

1 code implementation · 14 Aug 2023 · David Heineman, Yao Dou, Wei Xu

Additionally, we introduce a Python library to streamline the entire process from typology design and deployment to annotation processing.

Tasks: Machine Translation, Multi-Task Learning, +1

Dancing Between Success and Failure: Edit-level Simplification Evaluation using SALSA

no code implementations · 23 May 2023 · David Heineman, Yao Dou, Mounica Maddela, Wei Xu

Large language models (e.g., GPT-4) are uniquely capable of producing highly rated text simplifications, yet current human evaluation methods fail to provide a clear understanding of systems' specific strengths and weaknesses.

Tasks: Sentence, Text Simplification

LENS: A Learnable Evaluation Metric for Text Simplification

1 code implementation · 19 Dec 2022 · Mounica Maddela, Yao Dou, David Heineman, Wei Xu

Training learnable metrics using modern language models has recently emerged as a promising method for the automatic evaluation of machine translation.

Tasks: Machine Translation, Text Simplification

Improving Large-scale Paraphrase Acquisition and Generation

no code implementations · 6 Oct 2022 · Yao Dou, Chao Jiang, Wei Xu

This paper addresses quality issues in existing Twitter-based paraphrase datasets and discusses the necessity of using two separate definitions of paraphrase for identification and generation tasks.

Tasks: Language Modelling, Paraphrase Generation, +2

Is GPT-3 Text Indistinguishable from Human Text? Scarecrow: A Framework for Scrutinizing Machine Text

no code implementations · ACL 2022 · Yao Dou, Maxwell Forbes, Rik Koncel-Kedziorski, Noah A. Smith, Yejin Choi

To support the broad range of real machine errors that can be identified by laypeople, the ten error categories of Scarecrow -- such as redundancy, commonsense errors, and incoherence -- are identified through several rounds of crowd annotation experiments without a predefined ontology.

Tasks: Math, Text Generation
