Hierarchical Learning for Generation with Long Source Sequences

15 Apr 2021 · Tobias Rohde, Xiaoxia Wu, Yinhan Liu

One of the challenges for current sequence-to-sequence (seq2seq) models is processing long sequences, such as those in summarization and document-level machine translation tasks. These tasks require the model to reason at the token level as well as at the sentence and paragraph level. We design and study a new Hierarchical Attention Transformer-based architecture (HAT) that outperforms standard Transformers on several sequence-to-sequence tasks. Furthermore, our model achieves state-of-the-art ROUGE scores on several summarization tasks, including PubMed, arXiv, CNN/DM, SAMSum, and AMI. Our model also outperforms a document-level machine translation baseline on the WMT20 English-to-German translation task. We investigate what the hierarchical layers learn by visualizing the hierarchical encoder-decoder attention. Finally, we study hierarchical learning on encoder-only pre-training and analyze its performance on classification tasks.
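The core idea of a hierarchical attention layer can be illustrated with a short sketch: after a standard token-level encoder runs, an extra attention layer operates only over the representations at sentence-boundary (BOS) positions, yielding sentence-level representations. The function names, dimensions, and boundary indices below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # scaled dot-product attention (single head, no projections)
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores, axis=-1) @ v

def hierarchical_encode(token_states, sent_starts):
    """token_states: (seq_len, d) outputs of a token-level encoder.
    sent_starts: indices of sentence-boundary (BOS) tokens.
    The hierarchical layer self-attends only over the boundary-token
    representations, producing one vector per sentence."""
    sent_states = token_states[sent_starts]        # (n_sents, d)
    return attention(sent_states, sent_states, sent_states)

rng = np.random.default_rng(0)
tokens = rng.normal(size=(12, 8))   # 12 tokens, hidden size 8
starts = np.array([0, 5, 9])        # three sentences begin here
sent_repr = hierarchical_encode(tokens, starts)
print(sent_repr.shape)              # (3, 8): one vector per sentence
```

In the full model, the decoder can then cross-attend to both the token-level and these sentence-level states, which is what the paper visualizes in its hierarchical encoder-decoder attention analysis.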

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Text Summarization | AMI | HAT-CNNDM | ROUGE-1 | 52.27 | #1 |
| | | | ROUGE-2 | 20.15 | #1 |
| | | | ROUGE-L | 50.57 | #1 |
| Text Summarization | Arxiv HEP-TH citation graph | HAT-BART | ROUGE-1 | 46.74 | #13 |
| | | | ROUGE-2 | 19.19 | #12 |
| | | | ROUGE-L | 42.2 | #10 |
| Document Summarization | CNN / Daily Mail | HAT-BART | ROUGE-1 | 44.48 | #5 |
| | | | ROUGE-2 | 21.31 | #6 |
| | | | ROUGE-L | 41.52 | #3 |
| Text Summarization | Pubmed | HAT-BART | ROUGE-1 | 48.25 | #8 |
| | | | ROUGE-2 | 21.35 | #7 |
| | | | ROUGE-L | 36.69 | #18 |
| Reading Comprehension | RACE | HAT (Encoder) | Accuracy | 67.3 | #10 |
| Text Summarization | SAMSum | HAT-CNNDM RL | ROUGE-L | 48.84 | #3 |
| Text Summarization | SAMSum | HAT-CNNDM | ROUGE-1 | 53.01 | #6 |
| | | | ROUGE-2 | 28.27 | #6 |
| Document Level Machine Translation | WMT2019 English-German | Transformer (no-pretrain) | BLEU | 7.7 | #2 |
| Document Level Machine Translation | WMT2019 English-German | HAT (no-pretrain) | BLEU | 35.7 | #1 |
| Text Summarization | X-Sum | HAT-BART | ROUGE-1 | 45.92 | #6 |
| | | | ROUGE-2 | 22.79 | #7 |

Methods

No methods listed for this paper.