no code implementations • NAACL (SocialNLP) 2021 • Yufei Tian, Tuhin Chakrabarty, Fred Morstatter, Nanyun Peng
Discrepancies exist among different cultures or languages.
no code implementations • 4 Oct 2024 • Yufei Tian, Zeyu Pan, Nanyun Peng
The increasing capability of large language models (LLMs) to generate fluent long-form texts is presenting new challenges in distinguishing machine-generated outputs from human-written ones, which is crucial for ensuring the authenticity and trustworthiness of written expression.
no code implementations • 30 Aug 2024 • Songyan Zhao, Bingxuan Li, Yufei Tian, Nanyun Peng
Automatic melody-to-lyric generation aims to produce lyrics that align with a given melody.
1 code implementation • 18 Jul 2024 • Yufei Tian, Tenghao Huang, Miri Liu, Derek Jiang, Alexander Spangher, Muhao Chen, Jonathan May, Nanyun Peng
This paper investigates the capability of LLMs in storytelling, focusing on narrative development and plot progression.
1 code implementation • 16 Nov 2023 • Yufei Tian, Abhilasha Ravichander, Lianhui Qin, Ronan Le Bras, Raja Marjieh, Nanyun Peng, Yejin Choi, Thomas L. Griffiths, Faeze Brahman
We explore the creative problem-solving capabilities of modern LLMs in a novel constrained setting.
no code implementations • 25 Oct 2023 • Yufei Tian, Felix Zhang, Nanyun Peng
Large language models (LLMs) such as GPT-3 have demonstrated a strong capability to generate coherent and contextually relevant text.
1 code implementation • 23 Oct 2023 • Jiao Sun, Yufei Tian, Wangchunshu Zhou, Nan Xu, Qian Hu, Rahul Gupta, John Frederick Wieting, Nanyun Peng, Xuezhe Ma
While recent studies have examined the abilities of large language models on various benchmark tasks, including question generation, reading comprehension, and multilingual tasks, few studies have looked into the controllability of large language models on generation tasks.
1 code implementation • 30 May 2023 • Yufei Tian, Anjali Narayan-Chen, Shereen Oraby, Alessandra Cervone, Gunnar Sigurdsson, Chenyang Tao, Wenbo Zhao, YiWen Chen, Tagyoung Chung, Jing Huang, Nanyun Peng
Automatic melody-to-lyric generation is a task in which song lyrics are generated to go with a given melody.
no code implementations • 12 May 2023 • Yufei Tian, Anjali Narayan-Chen, Shereen Oraby, Alessandra Cervone, Gunnar Sigurdsson, Chenyang Tao, Wenbo Zhao, Tagyoung Chung, Jing Huang, Nanyun Peng
At inference time, we leverage the crucial alignments between melody and lyrics and compile the given melody into constraints to guide the generation process.
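A minimal sketch of how such melody-to-constraint compilation might look, assuming a one-syllable-per-note alignment rule and a simple note representation; the class, function names, and the externally supplied syllable counter below are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch: compile a melody into per-phrase syllable constraints,
# then check candidate lyric lines against them. Not the paper's actual code.

from dataclasses import dataclass

@dataclass
class Note:
    pitch: int         # MIDI pitch number
    duration: float    # duration in beats
    phrase_end: bool   # whether this note ends a musical phrase

def melody_to_constraints(melody: list[Note]) -> list[int]:
    """Map each musical phrase to a target syllable count,
    assuming one syllable per note."""
    constraints, count = [], 0
    for note in melody:
        count += 1
        if note.phrase_end:
            constraints.append(count)
            count = 0
    if count:
        constraints.append(count)
    return constraints

def satisfies(line: str, target_syllables: int, count_syllables) -> bool:
    """Check a candidate lyric line against one syllable constraint;
    count_syllables is any syllable counter supplied by the caller."""
    return count_syllables(line) == target_syllables
```

At decoding time, one could keep only candidate lines for which `satisfies` holds, which is one simple way to realize "compiling the melody into constraints."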
1 code implementation • 24 Oct 2022 • Yufei Tian, Divyanshu Sheth, Nanyun Peng
We propose a unified framework to generate both homophonic and homographic puns, bridging the divide in existing work that handles the two pun types separately.
1 code implementation • NAACL 2022 • Anirudh Mittal, Yufei Tian, Nanyun Peng
In this paper, we propose a simple yet effective way to generate pun sentences that does not require any training on existing puns.
1 code implementation • NAACL 2022 • Rujun Han, Hong Chen, Yufei Tian, Nanyun Peng
Stories or narratives are composed of a sequence of events.
1 code implementation • NAACL 2022 • Yufei Tian, Nanyun Peng
Poetry generation, and creative language generation in general, usually suffers from the lack of large training data.
1 code implementation • Findings (EMNLP) 2021 • Yufei Tian, Arvind Krishna Sridhar, Nanyun Peng
A hyperbole is an intentional and creative exaggeration not to be taken literally.
no code implementations • COLING 2022 • Xiaofei Sun, Yufei Tian, Yuxian Meng, Nanyun Peng, Fei Wu, Jiwei Li, Chun Fan
Based on the paraphrase pairs produced by these UMT models, a unified surrogate model can then be trained to serve as the final paraphrase generation model, which can be used directly at test time in the unsupervised setup, or fine-tuned on labeled datasets in the supervised setup.
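A hedged sketch of this surrogate-model idea: paraphrase pairs obtained by round-trip unsupervised MT are treated as pseudo-parallel data for fine-tuning a single seq2seq paraphraser. The T5 backbone, the pivot-language helper functions, and the training-step signature are assumptions for illustration, not the paper's setup.

```python
# Illustrative sketch only: train a surrogate paraphraser on UMT-produced pairs.

import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")        # assumed backbone
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

def round_trip_paraphrase(sentence, translate_en_xx, translate_xx_en):
    """Pseudo-label a sentence by translating to a pivot language and back
    with unsupervised MT models (both translators are assumed callables)."""
    return translate_xx_en(translate_en_xx(sentence))

def train_step(batch_src, batch_tgt, optimizer):
    """One gradient step on a batch of (sentence, pseudo-paraphrase) pairs."""
    enc = tokenizer(batch_src, return_tensors="pt", padding=True, truncation=True)
    labels = tokenizer(batch_tgt, return_tensors="pt", padding=True,
                       truncation=True).input_ids
    loss = model(**enc, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```

The trained surrogate can then generate paraphrases directly in the unsupervised setup, or be further fine-tuned when labeled paraphrase data is available.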
no code implementations • 13 Apr 2020 • Yufei Tian, Jianfei Yu, Jing Jiang
In this paper, we study abstractive review summarization. Observing that review summaries often consist of aspect words, opinion words and context words, we propose a two-stage reinforcement learning approach, which first predicts the output word type from the three types, and then leverages the predicted word type to generate the final word distribution. Experimental results on two Amazon product review datasets demonstrate that our method can consistently outperform several strong baseline approaches based on ROUGE scores.
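A small illustrative sketch of the two-stage idea, assuming a soft mixture over the three word types (aspect, opinion, context); the tensor shapes, module names, and the marginalization step are assumptions rather than the paper's exact formulation.

```python
# Sketch: first predict a distribution over word types, then mix
# type-specific vocabulary distributions into the final word distribution.

import torch
import torch.nn.functional as F

def final_word_distribution(hidden, type_classifier, type_vocab_heads):
    """
    hidden:           decoder hidden state, shape (batch, d_model)
    type_classifier:  linear layer producing logits over the 3 word types
    type_vocab_heads: list of 3 linear layers, each producing vocabulary logits
    """
    # Stage 1: probability of each word type at the next position.
    type_probs = F.softmax(type_classifier(hidden), dim=-1)          # (batch, 3)

    # Stage 2: a separate vocabulary distribution per word type.
    vocab_probs = torch.stack(
        [F.softmax(head(hidden), dim=-1) for head in type_vocab_heads], dim=1
    )                                                                 # (batch, 3, vocab)

    # Final distribution: marginalize over the predicted word type.
    return torch.einsum("bt,btv->bv", type_probs, vocab_probs)
```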
1 code implementation • 10 Apr 2020 • Yufei Tian, Tuhin Chakrabarty, Fred Morstatter, Nanyun Peng
Perspective differences exist among different cultures or languages.