Search Results for author: Yufei Tian

Found 17 papers, 10 papers with code

Detecting Machine-Generated Long-Form Content with Latent-Space Variables

no code implementations • 4 Oct 2024 • Yufei Tian, Zeyu Pan, Nanyun Peng

The increasing capability of large language models (LLMs) to generate fluent long-form text presents new challenges in distinguishing machine-generated outputs from human-written ones, a distinction that is crucial for ensuring the authenticity and trustworthiness of written expression.

REFFLY: Melody-Constrained Lyrics Editing Model

no code implementations • 30 Aug 2024 • Songyan Zhao, Bingxuan Li, Yufei Tian, Nanyun Peng

Automatic melody-to-lyric generation aims to produce lyrics that align with a given melody.

Are Large Language Models Capable of Generating Human-Level Narratives?

1 code implementation • 18 Jul 2024 • Yufei Tian, Tenghao Huang, Miri Liu, Derek Jiang, Alexander Spangher, Muhao Chen, Jonathan May, Nanyun Peng

This paper investigates the capability of LLMs in storytelling, focusing on narrative development and plot progression.

Tasks: Diversity

BOOST: Harnessing Black-Box Control to Boost Commonsense in LMs' Generation

no code implementations • 25 Oct 2023 • Yufei Tian, Felix Zhang, Nanyun Peng

Large language models (LLMs) such as GPT-3 have demonstrated a strong capability to generate coherent and contextually relevant text.

Tasks: Language Modelling, Sentence

Evaluating Large Language Models on Controlled Generation Tasks

1 code implementation • 23 Oct 2023 • Jiao Sun, Yufei Tian, Wangchunshu Zhou, Nan Xu, Qian Hu, Rahul Gupta, John Frederick Wieting, Nanyun Peng, Xuezhe Ma

While recent studies have examined the abilities of large language models on various benchmark tasks, including question generation, reading comprehension, and multilingual tasks, few have investigated the controllability of large language models on generation tasks.

Tasks: Question Generation +2

Unsupervised Melody-Guided Lyrics Generation

no code implementations • 12 May 2023 • Yufei Tian, Anjali Narayan-Chen, Shereen Oraby, Alessandra Cervone, Gunnar Sigurdsson, Chenyang Tao, Wenbo Zhao, Tagyoung Chung, Jing Huang, Nanyun Peng

At inference time, we leverage the crucial alignments between melody and lyrics and compile the given melody into constraints to guide the generation process.

Tasks: Text Generation
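The abstract above describes compiling a melody into constraints that steer lyric generation at inference time. Below is a minimal sketch of what such compilation might look like, assuming a note-level melody representation and a one-syllable-per-note alignment; the Note class, the stress heuristic, and the function names are illustrative assumptions, not the paper's actual formulation.

```python
# Illustrative sketch only: compiling a melodic phrase into simple
# lyric constraints. The representation and heuristics here are
# assumptions for illustration, not the paper's actual method.
from dataclasses import dataclass

@dataclass
class Note:
    pitch: int        # MIDI pitch number
    duration: float   # length in beats

def melody_to_constraints(phrase: list[Note]) -> dict:
    """Compile one phrase into constraints: one syllable per note,
    with long notes treated as stressed positions (a toy heuristic)."""
    return {
        "n_syllables": len(phrase),
        "stressed": [i for i, n in enumerate(phrase) if n.duration >= 1.0],
    }

def satisfies(syllable_stresses: list[bool], c: dict) -> bool:
    """Check a candidate lyric line (as per-syllable stress flags)
    against the compiled constraints."""
    if len(syllable_stresses) != c["n_syllables"]:
        return False
    return all(syllable_stresses[i] for i in c["stressed"])

# Candidate lines from any generator can then be filtered or reranked:
phrase = [Note(60, 0.5), Note(62, 0.5), Note(64, 1.5)]
c = melody_to_constraints(phrase)
print(c)                                   # {'n_syllables': 3, 'stressed': [2]}
print(satisfies([False, False, True], c))  # True
```

In a real system, constraints like these would typically be enforced during decoding, for example by pruning beam candidates, rather than by post-hoc filtering as in this toy usage.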

A Unified Framework for Pun Generation with Humor Principles

1 code implementation • 24 Oct 2022 • Yufei Tian, Divyanshu Sheth, Nanyun Peng

We propose a unified framework that generates both homophonic and homographic puns, resolving the split in existing work, which treats the two types separately.

AmbiPun: Generating Humorous Puns with Ambiguous Context

1 code implementation • NAACL 2022 • Anirudh Mittal, Yufei Tian, Nanyun Peng

In this paper, we propose a simple yet effective way to generate pun sentences that does not require any training on existing puns.

Tasks: Reverse Dictionary

Zero-shot Sonnet Generation with Discourse-level Planning and Aesthetics Features

1 code implementation • NAACL 2022 • Yufei Tian, Nanyun Peng

Poetry generation, and creative language generation in general, usually suffers from the lack of large training data.

Tasks: Sonnet Generation

Paraphrase Generation as Unsupervised Machine Translation

no code implementations • COLING 2022 • Xiaofei Sun, Yufei Tian, Yuxian Meng, Nanyun Peng, Fei Wu, Jiwei Li, Chun Fan

Then, based on the paraphrase pairs produced by these UMT models, a unified surrogate model can be trained to serve as the final paraphrase generation model, which can be used directly for testing in the unsupervised setup or fine-tuned on labeled datasets in the supervised setup.

Tasks: Paraphrase Generation, Sentence +3
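The pipeline in the abstract above, in which paraphrase pairs emitted by unsupervised MT models become synthetic supervision for a single surrogate model, can be sketched as ordinary seq2seq fine-tuning. The choice of t5-small and the placeholder pair below are assumptions for illustration; the paper's actual models and training details may differ.

```python
# Illustrative sketch: train a surrogate seq2seq model on paraphrase
# pairs produced by the UMT stage. Model choice and data are
# placeholder assumptions, not the paper's setup.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tok = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
optim = torch.optim.AdamW(model.parameters(), lr=3e-5)

# (source, paraphrase) pairs emitted by the unsupervised MT models.
umt_pairs = [("the cat sat on the mat", "a cat was sitting on the mat")]

model.train()
for src, tgt in umt_pairs:
    batch = tok(src, return_tensors="pt")
    labels = tok(tgt, return_tensors="pt").input_ids
    loss = model(**batch, labels=labels).loss  # standard cross-entropy
    loss.backward()
    optim.step()
    optim.zero_grad()
```

The trained surrogate can then be used as-is in the unsupervised setup, or further fine-tuned on labeled paraphrase data in the supervised setup, matching the two settings the abstract mentions.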

Aspect and Opinion Aware Abstractive Review Summarization with Reinforced Hard Typed Decoder

no code implementations • 13 Apr 2020 • Yufei Tian, Jianfei Yu, Jing Jiang

In this paper, we study abstractive review summarization. Observing that review summaries often consist of aspect words, opinion words and context words, we propose a two-stage reinforcement learning approach, which first predicts the output word type from the three types, and then leverages the predicted word type to generate the final word distribution. Experimental results on two Amazon product review datasets demonstrate that our method can consistently outperform several strong baseline approaches based on ROUGE scores.

Tasks: Decoder, Reinforcement Learning +2
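The two-stage generation step described above (first predict the word type among aspect, opinion, and context, then use it to shape the final word distribution) can be sketched as a mixture over type-specific vocabulary distributions. Note this is a soft simplification: the paper's decoder is a "hard" typed decoder trained with reinforcement learning, and the dimensions and layer names below are illustrative assumptions.

```python
# Illustrative sketch of a typed decoding step: predict a distribution
# over word types, then mix per-type vocabulary distributions.
# All sizes and layer names are assumptions for illustration.
import torch
import torch.nn.functional as F

vocab_size, hidden, n_types = 10_000, 256, 3   # types: aspect/opinion/context
type_head = torch.nn.Linear(hidden, n_types)
word_heads = torch.nn.Linear(hidden, n_types * vocab_size)

def typed_decode_step(h: torch.Tensor) -> torch.Tensor:
    """h: decoder hidden state (batch, hidden); returns (batch, vocab_size)."""
    type_probs = F.softmax(type_head(h), dim=-1)                # (B, T)
    word_logits = word_heads(h).view(-1, n_types, vocab_size)   # (B, T, V)
    word_probs = F.softmax(word_logits, dim=-1)
    # Soft mixture: p(w) = sum_t p(t) * p(w | t).
    return torch.einsum("bt,btv->bv", type_probs, word_probs)

h = torch.randn(2, hidden)
dist = typed_decode_step(h)
print(dist.shape, dist.sum(-1))  # torch.Size([2, 10000]); each row sums to ~1
```

A hard variant would instead argmax over type_probs and emit only that type's distribution; that choice is non-differentiable, which makes it a natural fit for the reinforcement learning training the abstract describes.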
