Paragraph-based Transformer Pre-training for Multi-Sentence Inference

Inference tasks such as answer sentence selection (AS2) or fact verification are typically solved by fine-tuning transformer-based models as individual sentence-pair classifiers. Recent studies show that these tasks benefit from modeling dependencies across multiple candidate sentences jointly. In this paper, we first show that popular pre-trained transformers perform poorly when fine-tuned on multi-candidate inference tasks. We then propose a new pre-training objective that models paragraph-level semantics across multiple input sentences. Our evaluation on three AS2 datasets and one fact-verification dataset demonstrates the superiority of our pre-training technique over traditional objectives, both for transformers used as joint models over multiple candidates and for transformers used as cross-encoders on sentence-pair formulations of these tasks. Our code and pre-trained models are released at https://github.com/amazon-research/wqa-multi-sentence-inference.
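
To make the joint-model setting concrete, here is a minimal sketch of multi-candidate inference in a single transformer pass: the question and all candidate answers are packed into one input so the candidates can attend to one another, and each candidate receives its own relevance score. This is an illustrative reconstruction, not the authors' released code; the checkpoint name, the choice to read each candidate's representation from its first token, and the linear scoring head are all assumptions.

```python
# Illustrative sketch of joint multi-candidate inference for AS2.
# Not the authors' released code: checkpoint, pooling, and head are assumptions.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class JointAS2Model(nn.Module):
    def __init__(self, model_name="roberta-base"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        # One relevance logit per candidate, read off its first token.
        self.scorer = nn.Linear(self.encoder.config.hidden_size, 1)

    def forward(self, input_ids, attention_mask, candidate_starts):
        # candidate_starts: (batch, k) index of each candidate's first token.
        hidden = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        idx = candidate_starts.unsqueeze(-1).expand(-1, -1, hidden.size(-1))
        cand_reprs = hidden.gather(1, idx)           # (batch, k, hidden_size)
        return self.scorer(cand_reprs).squeeze(-1)   # (batch, k) logits

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
question = "Who wrote Hamlet?"
candidates = ["Hamlet was written by William Shakespeare.",
              "Hamlet is set in Denmark."]

# Pack question + all candidates into one sequence so candidates see each other.
text = f" {tokenizer.sep_token} ".join([question] + candidates)
enc = tokenizer(text, return_tensors="pt", truncation=True)

# Each candidate starts right after a separator (the final sep closes the input).
sep_pos = (enc.input_ids[0] == tokenizer.sep_token_id).nonzero().squeeze(-1)
candidate_starts = (sep_pos[:-1] + 1).unsqueeze(0)

model = JointAS2Model()
with torch.no_grad():
    print(model(enc.input_ids, enc.attention_mask, candidate_starts))  # (1, 2) scores
```

A sentence-pair cross-encoder, by contrast, would score each (question, candidate) pair in a separate forward pass, with no attention between candidates.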


Results from the Paper


Task                Dataset  Model                      Metric             Value  Global Rank
Answer Selection    ASNQ     RoBERTa-Base Joint MSPP    MAP                0.673  # 3
Answer Selection    ASNQ     RoBERTa-Base Joint MSPP    MRR                0.737  # 3
Fact Verification   FEVER    RoBERTa-Base Joint MSPP    Flexible Accuracy  75.36  # 3
Fact Verification   FEVER    RoBERTa-Base Joint MSPP    Accuracy           74.39  # 4
Question Answering  TrecQA   RoBERTa-Base Joint + MSPP  MAP                0.911  # 6
Question Answering  TrecQA   RoBERTa-Base Joint + MSPP  MRR                0.952  # 4
Question Answering  WikiQA   RoBERTa-Base Joint MSPP    MAP                0.887  # 6
Question Answering  WikiQA   RoBERTa-Base Joint MSPP    MRR                0.900  # 6
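
For reference, the MAP and MRR figures above are the standard ranking metrics for answer selection: each question's candidates are ranked by model score, MAP averages the precision at each correct candidate across questions, and MRR averages the reciprocal rank of the first correct candidate. A minimal self-contained sketch of the computation; the toy scores and labels are invented for illustration.

```python
# Minimal sketch of the two AS2 ranking metrics reported above.
# Each question contributes (scores, labels); data here is illustrative.
def average_precision(scores, labels):
    """AP for one question: mean precision at each correct candidate."""
    ranked = sorted(zip(scores, labels), key=lambda p: -p[0])
    hits, precisions = 0, []
    for rank, (_, correct) in enumerate(ranked, start=1):
        if correct:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / max(hits, 1)

def reciprocal_rank(scores, labels):
    """RR for one question: 1 / rank of the first correct candidate."""
    ranked = sorted(zip(scores, labels), key=lambda p: -p[0])
    for rank, (_, correct) in enumerate(ranked, start=1):
        if correct:
            return 1.0 / rank
    return 0.0

questions = [
    ([0.9, 0.2, 0.7], [0, 1, 1]),  # toy model scores and gold labels
    ([0.4, 0.8], [1, 0]),
]
map_score = sum(average_precision(s, l) for s, l in questions) / len(questions)
mrr_score = sum(reciprocal_rank(s, l) for s, l in questions) / len(questions)
print(f"MAP={map_score:.3f}  MRR={mrr_score:.3f}")
```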
