Hierarchical Learning for Generation with Long Source Sequences

15 Apr 2021 · Tobias Rohde, Xiaoxia Wu, Yinhan Liu

One of the challenges for current sequence-to-sequence (seq2seq) models is processing long sequences, such as those in summarization and document-level machine translation tasks. These tasks require the model to reason at the token level as well as at the sentence and paragraph level. We design and study a new Hierarchical Attention Transformer-based architecture (HAT) that outperforms standard Transformers on several sequence-to-sequence tasks. In particular, our model achieves state-of-the-art results on four summarization tasks, including ArXiv, CNN/DM, SAMSum, and AMI, and advances the PubMed state of the art on ROUGE-1 and ROUGE-2. Our model significantly outperforms our document-level machine translation baseline by 28 BLEU on the WMT19 EN-DE document translation task. We also investigate what the hierarchical layers learn by visualizing the hierarchical encoder-decoder attention. Finally, we study hierarchical learning on encoder-only pre-training and analyze its performance on downstream classification tasks.
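The mechanism the abstract describes, token-level self-attention augmented with attention over sentence-level representations, can be illustrated with a short sketch. This is a minimal PyTorch illustration under stated assumptions, not the authors' implementation: the class name `HierarchicalEncoderLayer`, the use of sentence-boundary (BOS) token positions in `sent_idx`, and all dimensions are hypothetical.

```python
# Minimal sketch of a hierarchical encoder layer in the spirit of HAT.
# Assumption: each sentence is represented by its boundary (BOS) token,
# and a second attention layer operates over those per-sentence vectors.
import torch
import torch.nn as nn

class HierarchicalEncoderLayer(nn.Module):
    """Token-level self-attention followed by sentence-level self-attention
    over the representations of sentence-boundary tokens."""

    def __init__(self, d_model: int = 512, nhead: int = 8):
        super().__init__()
        # Standard token-level Transformer encoder layer.
        self.token_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        # Sentence-level layer that only sees one vector per sentence.
        self.sent_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)

    def forward(self, x: torch.Tensor, sent_idx: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        # sent_idx: (batch, n_sents) positions of sentence-boundary tokens.
        x = self.token_layer(x)
        idx = sent_idx.unsqueeze(-1).expand(-1, -1, x.size(-1))
        # Gather one representation per sentence and attend across sentences.
        sents = torch.gather(x, 1, idx)
        sents = self.sent_layer(sents)
        # Write the updated sentence vectors back to their boundary positions.
        return x.scatter(1, idx, sents)

# Usage (hypothetical shapes): two documents of 128 tokens, three sentences each.
layer = HierarchicalEncoderLayer()
x = torch.randn(2, 128, 512)
sent_idx = torch.tensor([[0, 40, 90], [0, 30, 70]])
out = layer(x, sent_idx)  # (2, 128, 512)
```

The design choice sketched here keeps the token-level layer unchanged and adds sentence-level reasoning on top, which is one way to let the model attend across sentences and paragraphs without quadratic attention over the full long sequence.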


Results from the Paper


Task                                 Dataset                  Model                      Metric    Value   Global Rank
Text Summarization                   AMI                      HAT-CNNDM                  ROUGE-1   52.27   #1
Text Summarization                   AMI                      HAT-CNNDM                  ROUGE-2   20.15   #1
Text Summarization                   AMI                      HAT-CNNDM                  ROUGE-L   50.57   #1
Text Summarization                   arXiv                    HAT-BART                   ROUGE-1   46.74   #1
Text Summarization                   arXiv                    HAT-BART                   ROUGE-2   19.19   #1
Text Summarization                   arXiv                    HAT-BART                   ROUGE-L   42.2    #1
Document Summarization               CNN / Daily Mail         HAT-BART                   ROUGE-1   44.48   #1
Document Summarization               CNN / Daily Mail         HAT-BART                   ROUGE-2   21.31   #2
Document Summarization               CNN / Daily Mail         HAT-BART                   ROUGE-L   41.52   #1
Text Summarization                   PubMed                   HAT-BART                   ROUGE-1   48.25   #1
Text Summarization                   PubMed                   HAT-BART                   ROUGE-2   21.35   #1
Text Summarization                   PubMed                   HAT-BART                   ROUGE-L   36.69   #8
Reading Comprehension                RACE                     HAT (Encoder)              Accuracy  67.3    #9
Text Summarization                   SAMSum Corpus            HAT-CNNDM                  ROUGE-1   53.01   #1
Text Summarization                   SAMSum Corpus            HAT-CNNDM                  ROUGE-2   28.27   #1
Text Summarization                   SAMSum Corpus            HAT-CNNDM                  ROUGE-L   48.84   #1
Document Level Machine Translation   WMT2019 English-German   Transformer (no-pretrain)  BLEU      7.7     #2
Document Level Machine Translation   WMT2019 English-German   HAT (no-pretrain)          BLEU      35.7    #1
Text Summarization                   X-Sum                    HAT-BART                   ROUGE-1   45.92   #2
Text Summarization                   X-Sum                    HAT-BART                   ROUGE-2   22.79   #2
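As a side note, the ROUGE-1/2/L values above are F-measures of n-gram and longest-common-subsequence overlap between generated and reference summaries; scores of this kind can be computed with Google's rouge-score package. A minimal sketch with placeholder strings (not outputs from the paper):

```python
# Minimal ROUGE computation sketch (pip install rouge-score).
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
scores = scorer.score(
    "the cat sat on the mat",        # reference summary (placeholder)
    "a cat was sitting on the mat",  # model-generated summary (placeholder)
)
for name, s in scores.items():
    print(name, round(s.fmeasure, 4))
```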

Methods


No methods listed for this paper.