Adversarial Learning for Neural Dialogue Generation

EMNLP 2017
Jiwei Li, Will Monroe, Tianlin Shi, Sébastien Jean, Alan Ritter, Dan Jurafsky

In this paper, drawing intuition from the Turing test, we propose using adversarial training for open-domain dialogue generation: the system is trained to produce sequences that are indistinguishable from human-generated dialogue utterances. We cast the task as a reinforcement learning (RL) problem in which we jointly train two systems: a generative model to produce response sequences, and a discriminator (analogous to the human evaluator in the Turing test) to distinguish between the human-generated dialogues and the machine-generated ones...

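To make the setup concrete, below is a minimal sketch (not the paper's released code) of the adversarial REINFORCE loop the abstract describes: the discriminator is trained to separate human responses from sampled ones, and its probability of "human" serves as the reward in a policy-gradient update of the generator. The class names, dimensions, and toy data here are illustrative assumptions; the paper additionally assigns rewards to partial sequences via Monte Carlo rollouts and interleaves teacher-forcing updates, both omitted from this sketch.

```python
# Minimal sketch of adversarial REINFORCE for dialogue generation.
# Assumptions: Generator is any autoregressive policy exposing sample(),
# Discriminator is a binary classifier over response token sequences.
import torch
import torch.nn as nn


class Generator(nn.Module):
    """Toy autoregressive policy over a small vocabulary (placeholder)."""
    def __init__(self, vocab_size=100, hidden=64, max_len=10):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.rnn = nn.GRUCell(hidden, hidden)
        self.out = nn.Linear(hidden, vocab_size)
        self.max_len = max_len

    def sample(self, batch_size=1):
        h = torch.zeros(batch_size, self.rnn.hidden_size)
        tok = torch.zeros(batch_size, dtype=torch.long)  # assume id 0 = <bos>
        tokens, log_probs = [], []
        for _ in range(self.max_len):
            h = self.rnn(self.embed(tok), h)
            dist = torch.distributions.Categorical(logits=self.out(h))
            tok = dist.sample()
            tokens.append(tok)
            log_probs.append(dist.log_prob(tok))
        return torch.stack(tokens, 1), torch.stack(log_probs, 1)


class Discriminator(nn.Module):
    """Scores a response sequence as human (1) vs. machine-generated (0)."""
    def __init__(self, vocab_size=100, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.clf = nn.Linear(hidden, 1)

    def forward(self, seq):
        _, h = self.rnn(self.embed(seq))
        return torch.sigmoid(self.clf(h[-1])).squeeze(-1)


gen, disc = Generator(), Discriminator()
g_opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(disc.parameters(), lr=1e-3)
human = torch.randint(1, 100, (4, 10))  # stand-in for real human responses

for step in range(5):
    # 1) Discriminator update: push human responses toward 1, samples toward 0.
    with torch.no_grad():
        fake, _ = gen.sample(batch_size=4)
    d_loss = -(torch.log(disc(human) + 1e-8).mean()
               + torch.log(1 - disc(fake) + 1e-8).mean())
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Generator update: the discriminator's probability of "human"
    #    is the REINFORCE reward for the sampled sequence.
    fake, log_probs = gen.sample(batch_size=4)
    reward = disc(fake).detach()
    g_loss = -(log_probs.sum(dim=1) * reward).mean()
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

In the full method, conditioning on the dialogue history and per-step rewards would replace the sequence-level reward used here.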

Results from the Paper


TASK: Dialogue Generation
DATASET: Amazon-5
MODEL: mm
METRIC NAME: 1 in 10 [email protected]
METRIC VALUE: 5
GLOBAL RANK: #1