1 code implementation • NAACL 2022 • Zhexin Zhang, Jiaxin Wen, Jian Guan, Minlie Huang
In this paper, we aim to control the protagonist’s persona in story generation, i.e., generating a story from a leading context and a persona description, where the protagonist should exhibit the specified personality through a coherent event sequence.
1 code implementation • 29 Nov 2023 • Jiaxin Wen, Pei Ke, Hao Sun, Zhexin Zhang, Chengfei Li, Jinfeng Bai, Minlie Huang
While recent studies primarily focus on probing toxic outputs that can be easily detected with existing toxicity classifiers, we show that, simply via zero-shot prompting, LLMs can generate diverse implicit toxic outputs that are exceptionally difficult to detect.
1 code implementation • 10 Jul 2023 • Zhexin Zhang, Jiaxin Wen, Minlie Huang
In this paper, we propose a method named Ethicist for targeted training data extraction through loss smoothed soft prompting and calibrated confidence estimation, investigating how to recover the suffix in the training data when given a prefix.
1 code implementation • 4 May 2023 • Jiaxin Wen, Hao Zhou, Jian Guan, Minlie Huang
However, the pre-trained dialogue model's ability to utilize long-range context is limited due to the scarcity of long-turn dialogue sessions.
1 code implementation • 29 Nov 2022 • Jiaxin Wen, Yeshuang Zhu, Jinchao Zhang, Jie Zhou, Minlie Huang
Recent studies have shown the impressive efficacy of counterfactually augmented data (CAD) for reducing NLU models' reliance on spurious features and improving their generalizability.
no code implementations • 21 Sep 2022 • Sahand Sabour, Wen Zhang, Xiyao Xiao, Yuwei Zhang, Yinhe Zheng, Jiaxin Wen, Jialu Zhao, Minlie Huang
In this study, we analyze the effectiveness of Emohaa in reducing symptoms of mental distress.
1 code implementation • 22 Apr 2022 • Zhexin Zhang, Jiaxin Wen, Jian Guan, Minlie Huang
Endowing the protagonist with a specific personality is essential for writing an engaging story.
1 code implementation • 17 Mar 2022 • Yuxian Gu, Jiaxin Wen, Hao Sun, Yi Song, Pei Ke, Chujie Zheng, Zheng Zhang, Jianzhu Yao, Lei Liu, Xiaoyan Zhu, Minlie Huang
Large-scale pre-training has shown remarkable performance in building open-domain dialogue systems.
1 code implementation • 26 Feb 2022 • Chujie Zheng, Sahand Sabour, Jiaxin Wen, Zheng Zhang, Minlie Huang
Applying this approach, we construct AugESC, an augmented dataset for the ESC task, which largely extends the scale and topic coverage of the crowdsourced ESConv corpus.
2 code implementations • ACL 2021 • Jiexi Liu, Ryuichi Takanobu, Jiaxin Wen, Dazhen Wan, Hongguang Li, Weiran Nie, Cheng Li, Wei Peng, Minlie Huang
Most language understanding models in task-oriented dialog systems are trained on a small amount of annotated training data and evaluated on a small set from the same distribution.