PALM: Pre-training an Autoencoding & Autoregressive Language Model for Context-conditioned Generation

14 Apr 2020 · Bin Bi, Chenliang Li, Chen Wu, Ming Yan, Wei Wang, Songfang Huang, Fei Huang, Luo Si

Self-supervised pre-training, such as BERT, MASS and BART, has emerged as a powerful technique for natural language understanding and generation. Existing pre-training techniques employ autoencoding and/or autoregressive objectives to train Transformer-based models by recovering the original tokens from corrupted text in which some tokens have been masked. These training objectives, however, are often inconsistent with the goals of many language generation tasks, such as generative question answering and conversational response generation, which must produce new text given context. This work presents PALM, a novel scheme that jointly pre-trains an autoencoding and autoregressive language model on a large unlabeled corpus, specifically designed for generating new text conditioned on context. The new scheme alleviates the mismatch between denoising pre-training and downstream fine-tuning, where generation requires more than reconstructing the original text. An extensive set of experiments shows that PALM achieves new state-of-the-art results on a variety of language generation benchmarks covering generative question answering (Rank 1 on the official MARCO leaderboard), abstractive summarization on CNN/DailyMail and Gigaword, question generation on SQuAD, and conversational response generation on Cornell Movie Dialogues.
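To make the joint objective concrete, below is a minimal sketch of how a single PALM-style pre-training example might be constructed: the leading portion of a document becomes a corrupted encoder input with an autoencoding (masked-token recovery) objective, while the trailing portion becomes the autoregressive decoder target, so the model learns to generate new text from context rather than only reconstruct it. The 80/20 context split, the 15% mask rate, and the special-token names are illustrative assumptions for this sketch, not the paper's exact settings.

```python
import random

MASK, BOS, EOS = "[MASK]", "[BOS]", "[EOS]"

def make_palm_example(tokens, context_frac=0.8, mask_rate=0.15, rng=random):
    """Build one PALM-style pre-training example from a token list.

    The leading `context_frac` of the tokens becomes the encoder input
    with a BERT-style autoencoding objective (random tokens masked);
    the trailing tokens become the autoregressive decoder target, so
    the model is trained to produce *new* text conditioned on context,
    not merely to reconstruct the corrupted input.
    """
    split = max(1, int(len(tokens) * context_frac))
    context, continuation = tokens[:split], tokens[split:]

    encoder_input, mlm_labels = [], []
    for tok in context:
        if rng.random() < mask_rate:
            encoder_input.append(MASK)  # encoder must recover this token
            mlm_labels.append(tok)
        else:
            encoder_input.append(tok)
            mlm_labels.append(None)     # no autoencoding loss at this position

    decoder_input = [BOS] + continuation   # shifted right for teacher forcing
    decoder_target = continuation + [EOS]  # next-token prediction labels
    return encoder_input, mlm_labels, decoder_input, decoder_target

if __name__ == "__main__":
    text = ("self supervised pre training has emerged as a powerful "
            "technique for natural language understanding and generation").split()
    enc_in, mlm, dec_in, dec_tgt = make_palm_example(text, rng=random.Random(0))
    print("encoder input :", enc_in)
    print("decoder target:", dec_tgt)
```

During fine-tuning, the same encoder-decoder interface is kept: the task's context (e.g., a passage and question) feeds the encoder, and the decoder generates the answer, which is what closes the pre-training/fine-tuning gap the abstract describes.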


Results from the Paper

| Task                           | Dataset          | Model | Metric  | Value | Global Rank |
|--------------------------------|------------------|-------|---------|-------|-------------|
| Abstractive Text Summarization | CNN / Daily Mail | PALM  | ROUGE-1 | 44.30 | #10         |
| Abstractive Text Summarization | CNN / Daily Mail | PALM  | ROUGE-2 | 21.12 | #16         |
| Abstractive Text Summarization | CNN / Daily Mail | PALM  | ROUGE-L | 41.41 | #4          |
| Text Generation                | CNN / Daily Mail | PALM  | ROUGE-L | 41.41 | #1          |
| Text Summarization             | Gigaword         | PALM  | ROUGE-1 | 39.45 | #8          |
| Text Summarization             | Gigaword         | PALM  | ROUGE-2 | 20.37 | #8          |
| Text Summarization             | Gigaword         | PALM  | ROUGE-L | 36.75 | #4          |