Adapting Pretrained Text-to-Text Models for Long Text Sequences

21 Sep 2022 · Wenhan Xiong, Anchit Gupta, Shubham Toshniwal, Yashar Mehdad, Wen-tau Yih

We present an empirical study of adapting an existing pretrained text-to-text model for long-sequence inputs. Through a comprehensive study along three axes of the pretraining pipeline (model architecture, optimization objective, and pretraining corpus), we propose an effective recipe for building long-context models from existing short-context models. Specifically, we replace the full attention in transformers with pooling-augmented blockwise attention, and pretrain the model with a masked-span prediction task with spans of varying length. For the pretraining corpus, we find that using randomly concatenated short documents from a large open-domain corpus results in better performance than using existing long-document corpora, which are typically limited in their domain coverage. With these findings, we build a long-context model that achieves competitive performance on long-text QA tasks and establishes a new state of the art on five long-text summarization datasets, often outperforming previous methods with larger model sizes. Our code has been released at https://github.com/facebookresearch/bart_ls.

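The central architectural change named in the abstract is swapping full self-attention for pooling-augmented blockwise attention. The sketch below is a minimal, hypothetical PyTorch rendering of that idea, not the paper's implementation: it assumes a single attention head, mean pooling as the block-summary operator, and a sequence length divisible by the block size.

```python
# Minimal sketch of pooling-augmented blockwise attention (illustrative only):
# each token attends to the tokens in its own block plus one pooled summary
# vector per block. Block size, pooling operator, and head handling are
# simplifying assumptions, not the paper's exact design.
import torch
import torch.nn.functional as F

def pooled_block_attention(q, k, v, block_size=1024):
    """q, k, v: (batch, seq_len, dim); seq_len must be a multiple of block_size."""
    b, n, d = q.shape
    nb = n // block_size

    # Reshape into blocks: (batch, n_blocks, block_size, dim)
    qb = q.reshape(b, nb, block_size, d)
    kb = k.reshape(b, nb, block_size, d)
    vb = v.reshape(b, nb, block_size, d)

    # Pooled summary of each block (mean pooling as a stand-in)
    k_pool = kb.mean(dim=2)  # (batch, n_blocks, dim)
    v_pool = vb.mean(dim=2)

    # Keys/values visible to each block: its own tokens + all block summaries
    k_pool_exp = k_pool.unsqueeze(1).expand(b, nb, nb, d)
    v_pool_exp = v_pool.unsqueeze(1).expand(b, nb, nb, d)
    k_all = torch.cat([kb, k_pool_exp], dim=2)  # (batch, n_blocks, block_size + n_blocks, dim)
    v_all = torch.cat([vb, v_pool_exp], dim=2)

    # Standard scaled dot-product attention over this restricted key set
    scores = torch.einsum("bnqd,bnkd->bnqk", qb, k_all) / d ** 0.5
    probs = F.softmax(scores, dim=-1)
    out = torch.einsum("bnqk,bnkd->bnqd", probs, v_all)
    return out.reshape(b, n, d)

# Example: an 8k-token sequence with 1k-token blocks
x_q = torch.randn(2, 8192, 64)
x_k = torch.randn(2, 8192, 64)
x_v = torch.randn(2, 8192, 64)
print(pooled_block_attention(x_q, x_k, x_v).shape)  # torch.Size([2, 8192, 64])
```
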
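The abstract also describes the pretraining side of the recipe: long inputs assembled by randomly concatenating short documents, then corrupted with masked spans of varying length. The snippet below is an illustrative sketch of that data construction; the span-length range, corruption ratio, and mask token are assumptions rather than the paper's settings.

```python
# Hedged sketch of the pretraining data construction described in the abstract:
# concatenate randomly chosen short documents up to a target length, then mask
# variable-length spans. All hyperparameters here are illustrative assumptions.
import random

MASK = "<mask>"

def build_long_example(short_docs, target_len=8192):
    """Concatenate randomly chosen short documents (token lists) until target_len."""
    tokens = []
    while len(tokens) < target_len:
        tokens.extend(random.choice(short_docs))
    return tokens[:target_len]

def mask_spans(tokens, mask_ratio=0.15, min_span=3, max_span=30):
    """Replace roughly mask_ratio of the tokens with single mask tokens,
    each covering a span of randomly sampled length."""
    tokens = list(tokens)
    n_to_mask = int(len(tokens) * mask_ratio)
    masked = 0
    while masked < n_to_mask:
        span = random.randint(min_span, max_span)
        start = random.randrange(0, max(1, len(tokens) - span))
        tokens[start:start + span] = [MASK]
        masked += span
    return tokens

# Toy usage with synthetic "short documents"
docs = [f"doc{i} word".split() * 50 for i in range(100)]
example = build_long_example(docs, target_len=2048)
corrupted = mask_spans(example)
```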

Results from the Paper


Task | Dataset | Model | Metric Name | Metric Value | Global Rank
Text Summarization | Arxiv HEP-TH citation graph | BART-LS | ROUGE-1 | 50.2 | # 2
Text Summarization | BookSum | BART-LS | ROUGE | 38.5 | # 1
Text Summarization | GovReport | BART-LS | ROUGE-1 | 62.0 | # 2
Text Summarization | Pubmed | BART-LS | ROUGE-1 | 50.3 | # 2
Text Summarization | QMSum | BART-LS | ROUGE-1 | 37.9 | # 1
Long-range modeling | SCROLLS | BART-LS | GovRep | 59.4 / 29.8 / 30.8 | # 3
Long-range modeling | SCROLLS | BART-LS | SumScr | 37.7 / 10.2 / 21.5 | # 2
Long-range modeling | SCROLLS | BART-LS | QMSum | 35.1 / 11.0 / 22.0 | # 1
Long-range modeling | SCROLLS | BART-LS | Qspr | 48.7 | # 4
Long-range modeling | SCROLLS | BART-LS | Nrtv | 26.2 | # 4
Long-range modeling | SCROLLS | BART-LS | QALT EM-T/H | 37.8 / 34.0 | # 5
Long-range modeling | SCROLLS | BART-LS | CNLI | 87.1 | # 6
Long-range modeling | SCROLLS | BART-LS | Avg. | 39.76 | # 4

Methods


No methods listed for this paper.