Search Results for author: Jiaxin Wen

Found 10 papers, 9 with code

Persona-Guided Planning for Controlling the Protagonist’s Persona in Story Generation

1 code implementation • NAACL 2022 • Zhexin Zhang, Jiaxin Wen, Jian Guan, Minlie Huang

In this paper, we aim to control the protagonist's persona in story generation, i.e., to generate a story from a leading context and a persona description such that the protagonist exhibits the specified personality through a coherent event sequence.

Sentence • Story Generation

Unveiling the Implicit Toxicity in Large Language Models

1 code implementation • 29 Nov 2023 • Jiaxin Wen, Pei Ke, Hao Sun, Zhexin Zhang, Chengfei Li, Jinfeng Bai, Minlie Huang

While recent studies primarily focus on probing toxic outputs that can be easily detected with existing toxicity classifiers, we show that LLMs can generate diverse implicit toxic outputs that are exceptionally difficult to detect via simple zero-shot prompting.

Language Modelling • Reinforcement Learning (RL)
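The snippet above claims that implicitly toxic outputs evade easy detection. As a toy illustration of why surface-level detectors fall short (the keyword list and example sentences below are invented, and this is not the paper's method):

```python
# Hypothetical keyword baseline: a blocklist catches explicit insults but,
# as the abstract notes, surface matching misses implicitly toxic phrasing.
TOXIC_KEYWORDS = {"idiot", "stupid", "hate"}

def keyword_toxicity(text: str) -> bool:
    # Flag text containing any blocklisted word (toy explicit-toxicity check).
    return any(word in TOXIC_KEYWORDS for word in text.lower().split())

explicit = "you are a stupid idiot"
implicit = "people like you always need things explained twice"
print(keyword_toxicity(explicit))  # True: explicit insult is caught
print(keyword_toxicity(implicit))  # False: implicit toxicity slips through
```

A real detector would of course be a trained classifier rather than a blocklist; the point is only that implicit toxicity carries no lexical signal for such filters to match.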

Ethicist: Targeted Training Data Extraction Through Loss Smoothed Soft Prompting and Calibrated Confidence Estimation

1 code implementation • 10 Jul 2023 • Zhexin Zhang, Jiaxin Wen, Minlie Huang

In this paper, we propose Ethicist, a method for targeted training data extraction through loss-smoothed soft prompting and calibrated confidence estimation, which investigates how to recover a suffix in the training data when given its prefix.
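The extraction setting described above — recovering a memorized suffix given its prefix — can be sketched very loosely with a toy order-k character model that regurgitates a repeated training string under greedy decoding. This illustrates memorization-based extraction in general, not the Ethicist method; the corpus and names are hypothetical:

```python
from collections import defaultdict

def train_char_lm(corpus, k=6):
    # Toy order-k character model: map each k-gram to successor counts.
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(len(corpus) - k):
        counts[corpus[i:i + k]][corpus[i + k]] += 1
    return counts

def extract_suffix(model, prefix, length, k=6):
    # Greedy decoding: repeatedly emit the most likely next character.
    out = prefix
    for _ in range(length):
        successors = model.get(out[-k:])
        if not successors:
            break
        out += max(successors, key=successors.get)
    return out[len(prefix):]

# A string repeated in the "training data" is recovered verbatim from its prefix.
corpus = "the secret key is 42. " * 50
model = train_char_lm(corpus)
print(extract_suffix(model, "the secret key is ", 3))  # → 42.
```

Real extraction attacks run this loop against a large LM and then must decide which generated suffixes are genuine memorization — which is where calibrated confidence estimation, as named in the abstract, comes in.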


Re$^3$Dial: Retrieve, Reorganize and Rescale Dialogue Corpus for Long-Turn Open-Domain Dialogue Pre-training

1 code implementation • 4 May 2023 • Jiaxin Wen, Hao Zhou, Jian Guan, Minlie Huang

However, the pre-trained dialogue model's ability to utilize long-range context is limited due to the scarcity of long-turn dialogue sessions.

AutoCAD: Automatically Generating Counterfactuals for Mitigating Shortcut Learning

1 code implementation • 29 Nov 2022 • Jiaxin Wen, Yeshuang Zhu, Jinchao Zhang, Jie Zhou, Minlie Huang

Recent studies have shown the impressive efficacy of counterfactually augmented data (CAD) for reducing NLU models' reliance on spurious features and improving their generalizability.

AugESC: Dialogue Augmentation with Large Language Models for Emotional Support Conversation

1 code implementation • 26 Feb 2022 • Chujie Zheng, Sahand Sabour, Jiaxin Wen, Zheng Zhang, Minlie Huang

Applying this approach, we construct AugESC, an augmented dataset for the ESC task, which largely extends the scale and topic coverage of the crowdsourced ESConv corpus.

Data Augmentation • Dialogue Generation • +2

Robustness Testing of Language Understanding in Task-Oriented Dialog

2 code implementations • ACL 2021 • Jiexi Liu, Ryuichi Takanobu, Jiaxin Wen, Dazhen Wan, Hongguang Li, Weiran Nie, Cheng Li, Wei Peng, Minlie Huang

Most language understanding models in task-oriented dialog systems are trained on a small amount of annotated training data and evaluated on a small set drawn from the same distribution.

Data Augmentation • Natural Language Understanding
