CatVRNN: Generating Category Texts via Multi-task Learning

12 Jul 2021 · Pengsen Cheng, Jinqiao Dai, Jiayong Liu

Controlling a model to generate texts of different categories is a challenging task that is receiving increasing attention. Recently, generative adversarial networks (GANs) have shown promising results for category text generation. However, the texts generated by GANs usually suffer from mode collapse and training instability. To avoid these problems, this study proposes a novel model, the category-aware variational recurrent neural network (CatVRNN), inspired by multi-task learning. In this model, generation and classification tasks are trained simultaneously to generate texts of different categories. Multi-task learning can improve the quality of the generated texts when the classification task is appropriate. In addition, a function is proposed to initialize the hidden state of the CatVRNN, forcing the model to generate texts of a specific category. Experimental results on three datasets demonstrate that the model outperforms state-of-the-art GAN-based text generation methods in terms of the diversity of the generated texts.
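To make the multi-task idea concrete, the following is a minimal sketch of a category-conditioned generator with a shared recurrent backbone and two heads, one for next-token prediction and one for category classification, plus a learned category-dependent initialization of the hidden state. It is not the paper's implementation: it uses a plain GRU rather than a variational RNN, and the names (CatRNNSketch, init_hidden, alpha) and the specific loss weighting are illustrative assumptions only.

import torch
import torch.nn as nn

class CatRNNSketch(nn.Module):
    """Illustrative multi-task generator (simplified stand-in for CatVRNN).

    A shared GRU backbone feeds two heads: one predicts the next token
    (generation task) and one predicts the text category (classification
    task). The hidden state is initialized from the category label so that
    decoding is conditioned on the desired category.
    """

    def __init__(self, vocab_size, num_categories, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        # Hypothetical hidden-state initializer: maps a category id to an
        # initial hidden state (the paper proposes its own init function).
        self.init_hidden = nn.Embedding(num_categories, hidden_dim)
        self.gen_head = nn.Linear(hidden_dim, vocab_size)        # next-token logits
        self.cls_head = nn.Linear(hidden_dim, num_categories)    # category logits

    def forward(self, tokens, category):
        # tokens: (batch, seq_len) token ids; category: (batch,) category ids
        h0 = torch.tanh(self.init_hidden(category)).unsqueeze(0)  # (1, batch, hidden)
        out, h_n = self.rnn(self.embed(tokens), h0)
        gen_logits = self.gen_head(out)              # per-step vocabulary logits
        cls_logits = self.cls_head(h_n.squeeze(0))   # sequence-level category logits
        return gen_logits, cls_logits


def multitask_loss(gen_logits, cls_logits, targets, category, alpha=0.5):
    # Joint objective: next-token cross-entropy plus category cross-entropy,
    # mixed by a hypothetical weighting coefficient alpha.
    gen_loss = nn.functional.cross_entropy(
        gen_logits.reshape(-1, gen_logits.size(-1)), targets.reshape(-1))
    cls_loss = nn.functional.cross_entropy(cls_logits, category)
    return gen_loss + alpha * cls_loss

In this sketch, conditioning happens only through the initial hidden state, mirroring the paper's idea of forcing category-specific generation via hidden-state initialization, while the classification head provides the auxiliary training signal.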
