DOCmT5: Document-Level Pretraining of Multilingual Language Models

In this paper, we introduce DOCmT5, a multilingual sequence-to-sequence language model pretrained with large-scale parallel documents. While previous approaches have focused on leveraging sentence-level parallel data, we aim to build a general-purpose pretrained model that can understand and generate long documents. We propose a simple and effective pretraining objective, Document reordering Machine Translation (DrMT), in which input documents that have been shuffled and masked must be translated. DrMT brings consistent improvements over strong baselines on a variety of document-level generation tasks, including over 12 BLEU points for seen-language-pair document-level MT, over 7 BLEU points for unseen-language-pair document-level MT, and over 3 ROUGE-1 points for seen-language-pair cross-lingual summarization. We achieve state-of-the-art (SOTA) results on the WMT20 De-En and IWSLT15 Zh-En document translation tasks. We also conduct extensive analysis of various factors in document pretraining, including (1) the effects of pretraining data quality and (2) the effects of combining monolingual and cross-lingual pretraining. We plan to make our model checkpoints publicly available.
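To make the DrMT objective concrete, below is a minimal sketch of how one might construct a single training example: the source-language document is sentence-shuffled and token-masked, and the model must produce the clean, correctly ordered target-language translation. This is an illustrative assumption modeled on T5-style sentinel tokens; the function name, per-token (rather than span) masking, and masking ratio are hypothetical and not taken from the paper.

import random

def make_drmt_example(src_sentences, tgt_document, mask_prob=0.15, seed=0):
    """Build one hypothetical DrMT example.

    src_sentences: source-language sentences in their original order.
    tgt_document:  the parallel target-language document (clean, in order).
    """
    rng = random.Random(seed)

    # 1) Document reordering noise: shuffle the source sentences.
    shuffled = src_sentences[:]
    rng.shuffle(shuffled)

    # 2) Masking noise: replace some tokens with T5-style sentinel markers.
    #    (Simplified to per-token masking; span corruption is also plausible.)
    corrupted, sentinel_id = [], 0
    for sent in shuffled:
        kept = []
        for tok in sent.split():
            if rng.random() < mask_prob:
                kept.append(f"<extra_id_{sentinel_id}>")
                sentinel_id += 1
            else:
                kept.append(tok)
        corrupted.append(" ".join(kept))

    # 3) Input is the shuffled + masked source document; the target is the
    #    clean translation in the correct order, so the model must both
    #    translate and restore document order.
    return " ".join(corrupted), tgt_document

src = ["Der Hund schläft.", "Die Katze sitzt auf der Matte."]
tgt = "The cat sits on the mat. The dog sleeps."
model_input, model_target = make_drmt_example(src, tgt)
print(model_input)
print(model_target)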

Findings (NAACL) 2022

Results from the Paper


Task                    Dataset              Model   Metric Name  Metric Value  Global Rank
Document Translation    IWSLT2015            DOCmT5  BLEU         31.40         # 1
Document Summarization  WikiLingua (tr->en)  DOCmT5  ROUGE-L      31.37         # 1
Document Translation    WMT 2020             DOCmT5  BLEU         44.73         # 1

Methods


No methods listed for this paper.