Sequence-to-sequence Pre-training with Data Augmentation for Sentence Rewriting

13 Sep 2019 · Yi Zhang, Tao Ge, Furu Wei, Ming Zhou, Xu Sun

We study sequence-to-sequence (seq2seq) pre-training with data augmentation for sentence rewriting. Instead of training a seq2seq model with gold training data and augmented data simultaneously, we separate them to train in different phases: pre-training with the augmented data and fine-tuning with the gold data...
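The abstract describes a two-phase training schedule: first pre-train the seq2seq model on the (large, noisy) augmented data, then fine-tune it on the (smaller, clean) gold data. Below is a minimal sketch of that schedule in PyTorch; the TinySeq2Seq model, the synthetic data loaders, and all hyperparameters are illustrative assumptions, not the paper's actual architecture or settings.

```python
import torch
import torch.nn as nn

def train_phase(model, loader, epochs, lr):
    """Run one training phase (pre-training or fine-tuning)."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for src, tgt in loader:
            optimizer.zero_grad()
            logits = model(src)                          # (batch, seq, vocab)
            loss = loss_fn(logits.flatten(0, 1),         # (batch*seq, vocab)
                           tgt.flatten())                # (batch*seq,)
            loss.backward()
            optimizer.step()

class TinySeq2Seq(nn.Module):
    """Toy stand-in for a seq2seq rewriter: embedding + GRU + vocab projection."""
    def __init__(self, vocab=1000, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab)

    def forward(self, src):
        hidden, _ = self.rnn(self.embed(src))
        return self.out(hidden)

def fake_loader(n_batches, batch=8, seq=16, vocab=1000):
    """Synthetic (source, target) token batches standing in for real data."""
    return [(torch.randint(vocab, (batch, seq)),
             torch.randint(vocab, (batch, seq)))
            for _ in range(n_batches)]

model = TinySeq2Seq()
# Phase 1: pre-train on augmented data (many batches, standard lr).
train_phase(model, fake_loader(100), epochs=1, lr=1e-3)
# Phase 2: fine-tune on gold data (fewer batches, smaller lr so the
# model's trust in the clean annotations is not disrupted).
train_phase(model, fake_loader(10), epochs=3, lr=1e-4)
```

The key point of the sketch is the separation of the two datasets into distinct phases rather than mixing them in one training run; the lower fine-tuning learning rate is a common choice in such schedules, not a detail taken from the paper.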

