Search Results for author: Yao Dou

Found 8 papers, 2 papers with code

LENS: A Learnable Evaluation Metric for Text Simplification

1 code implementation · 19 Dec 2022 · Mounica Maddela, Yao Dou, David Heineman, Wei Xu

Training learnable metrics using modern language models has recently emerged as a promising method for the automatic evaluation of machine translation.

Machine Translation Text Simplification

Is GPT-3 Text Indistinguishable from Human Text? Scarecrow: A Framework for Scrutinizing Machine Text

no code implementations · ACL 2022 · Yao Dou, Maxwell Forbes, Rik Koncel-Kedziorski, Noah A. Smith, Yejin Choi

To support the broad range of real machine errors that can be identified by laypeople, the ten error categories of Scarecrow -- such as redundancy, commonsense errors, and incoherence -- are identified through several rounds of crowd annotation experiments without a predefined ontology.

Math Text Generation

Improving Large-scale Paraphrase Acquisition and Generation

no code implementations · 6 Oct 2022 · Yao Dou, Chao Jiang, Wei Xu

This paper addresses the quality issues in existing Twitter-based paraphrase datasets, and discusses the necessity of using two separate definitions of paraphrase for identification and generation tasks.

Language Modelling Paraphrase Generation +2

Dancing Between Success and Failure: Edit-level Simplification Evaluation using SALSA

no code implementations · 23 May 2023 · David Heineman, Yao Dou, Mounica Maddela, Wei Xu

Large language models (e.g., GPT-4) are uniquely capable of producing highly rated text simplification, yet current human evaluation methods fail to provide a clear understanding of systems' specific strengths and weaknesses.

Sentence Text Simplification

Thresh: A Unified, Customizable and Deployable Platform for Fine-Grained Text Evaluation

1 code implementation · 14 Aug 2023 · David Heineman, Yao Dou, Wei Xu

Additionally, we introduce a Python library to streamline the entire process from typology design and deployment to annotation processing.

Machine Translation Multi-Task Learning +1

Automatic and Human-AI Interactive Text Generation

no code implementations · 5 Oct 2023 · Yao Dou, Philippe Laban, Claire Gardent, Wei Xu

In this tutorial, we focus on text-to-text generation, a class of natural language generation (NLG) tasks that take a piece of text as input and generate a revision improved according to specific criteria (e.g., readability or linguistic style), while largely retaining the original meaning and length of the text.

Paraphrase Generation Style Transfer +2

Reducing Privacy Risks in Online Self-Disclosures with Language Models

no code implementations · 16 Nov 2023 · Yao Dou, Isadora Krsek, Tarek Naous, Anubha Kabra, Sauvik Das, Alan Ritter, Wei Xu

Motivated by user feedback, we introduce the task of self-disclosure abstraction: paraphrasing disclosures into less specific terms while preserving their utility, e.g., "Im 16F" to "I'm a teenage girl".

Language Modelling
