Long Text Generation via Adversarial Training with Leaked Information

24 Sep 2017 • Jiaxian Guo • Sidi Lu • Han Cai • Weinan Zhang • Yong Yu • Jun Wang

Automatically generating coherent and semantically meaningful text has many applications in machine translation, dialogue systems, image captioning, etc. Recently, Generative Adversarial Nets (GANs) combined with policy gradient, in which a discriminative model guides the training of the generative model as a reinforcement learning policy, have shown promising results in text generation. However, the scalar guiding signal is only available after the entire text has been generated and lacks intermediate information about text structure during the generative process. In this paper, we propose a new framework, called LeakGAN, to address the problem of long text generation. We allow the discriminative net to leak its own high-level extracted features to the generative net to further help the guidance: the generator incorporates this informative signal into every generation step through an additional Manager module, which takes the extracted features of the currently generated words and outputs a latent goal vector to guide the Worker module in generating the next word.
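
To make the reward setup concrete, below is a minimal sketch (not the authors' implementation) of the baseline scheme the abstract criticizes: the generator is treated as a reinforcement learning policy, and the discriminator's score on the completed sequence serves as a single scalar REINFORCE reward. The vocabulary size, hidden sizes, batch size, and the assumed <BOS> token id are illustrative assumptions.

```python
import torch
import torch.nn as nn

VOCAB_SIZE, EMB_DIM, HID_DIM, SEQ_LEN = 1000, 32, 64, 20  # toy sizes (assumptions)

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB_SIZE, EMB_DIM)
        self.rnn = nn.LSTMCell(EMB_DIM, HID_DIM)
        self.out = nn.Linear(HID_DIM, VOCAB_SIZE)

    def sample(self, batch_size):
        """Roll out a sequence token by token, keeping log-probs for REINFORCE."""
        h = torch.zeros(batch_size, HID_DIM)
        c = torch.zeros(batch_size, HID_DIM)
        tok = torch.zeros(batch_size, dtype=torch.long)  # assumed <BOS> id = 0
        tokens, log_probs = [], []
        for _ in range(SEQ_LEN):
            h, c = self.rnn(self.emb(tok), (h, c))
            dist = torch.distributions.Categorical(logits=self.out(h))
            tok = dist.sample()
            tokens.append(tok)
            log_probs.append(dist.log_prob(tok))
        return torch.stack(tokens, 1), torch.stack(log_probs, 1)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB_SIZE, EMB_DIM)
        self.rnn = nn.LSTM(EMB_DIM, HID_DIM, batch_first=True)
        self.cls = nn.Linear(HID_DIM, 1)

    def forward(self, seq):
        _, (h, _) = self.rnn(self.emb(seq))
        return torch.sigmoid(self.cls(h[-1]))  # P(sequence is real)

gen, disc = Generator(), Discriminator()
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)

# One policy-gradient step for the generator: the reward arrives only after
# the full sequence is generated, which is the sparsity problem LeakGAN
# targets by leaking the discriminator's intermediate features instead.
tokens, log_probs = gen.sample(batch_size=16)
reward = disc(tokens).detach().squeeze(1)          # scalar reward per sequence
loss_g = -(log_probs.sum(dim=1) * reward).mean()   # REINFORCE objective
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```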


Evaluation


| Task            | Dataset        | Model   | Metric | Metric value | Global rank |
|-----------------|----------------|---------|--------|--------------|-------------|
| Text Generation | Chinese Poems  | LeakGAN | BLEU-2 | 0.881        | #1          |
| Text Generation | COCO Captions  | LeakGAN | BLEU-2 | 0.950        | #1          |
| Text Generation | COCO Captions  | LeakGAN | BLEU-3 | 0.880        | #1          |
| Text Generation | COCO Captions  | LeakGAN | BLEU-4 | 0.778        | #1          |
| Text Generation | COCO Captions  | LeakGAN | BLEU-5 | 0.686        | #1          |
| Text Generation | EMNLP2017 WMT  | LeakGAN | BLEU-2 | 0.956        | #1          |
| Text Generation | EMNLP2017 WMT  | LeakGAN | BLEU-3 | 0.819        | #1          |
| Text Generation | EMNLP2017 WMT  | LeakGAN | BLEU-4 | 0.627        | #1          |
| Text Generation | EMNLP2017 WMT  | LeakGAN | BLEU-5 | 0.498        | #1          |
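
For reference, here is a small sketch of how BLEU-n scores like those in the table are commonly computed for unconditional text generation: each generated sentence is scored against the whole test corpus as its reference set. It uses NLTK's corpus_bleu; the file names and whitespace tokenization are assumptions, and smoothing details may differ from the exact evaluation script behind the reported numbers.

```python
from nltk.translate.bleu_score import corpus_bleu

def bleu_n(generated_sentences, reference_corpus, n):
    """BLEU-n with uniform weights over 1..n-grams."""
    weights = tuple(1.0 / n for _ in range(n))
    # Every hypothesis shares the same reference set (the whole test corpus).
    references = [reference_corpus] * len(generated_sentences)
    return corpus_bleu(references, generated_sentences, weights=weights)

# Example usage with whitespace-tokenized sentences (paths are hypothetical):
refs = [line.split() for line in open("test_corpus.txt", encoding="utf-8")]
hyps = [line.split() for line in open("leakgan_samples.txt", encoding="utf-8")]
for n in (2, 3, 4, 5):
    print(f"BLEU-{n}: {bleu_n(hyps, refs, n):.3f}")
```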