PARADE: Passage Representation Aggregation for Document Reranking

20 Aug 2020 · Canjia Li, Andrew Yates, Sean MacAvaney, Ben He, Yingfei Sun

Pretrained transformer models, such as BERT and T5, have been shown to be highly effective at ad-hoc passage and document ranking. Due to the inherent sequence length limits of these models, they must be run over a document's passages individually rather than over the entire document at once. Although several approaches for aggregating passage-level signals have been proposed, there has yet to be an extensive comparison of these techniques. In this work, we explore strategies for aggregating relevance signals from a document's passages into a final ranking score. We find that passage representation aggregation techniques can significantly improve over techniques proposed in prior work, such as taking the maximum passage score. We call this new approach PARADE. In particular, PARADE can significantly improve results on collections with broad information needs where relevance signals can be spread throughout the document (such as TREC Robust04 and GOV2). Meanwhile, less complex aggregation techniques may work better on collections with an information need that can often be pinpointed to a single passage (such as TREC DL and TREC Genomics). We also conduct efficiency analyses, and highlight several strategies for improving transformer-based aggregation.
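The sketch below (not the authors' released code) contrasts the two aggregation strategies the abstract describes: the prior-work baseline of taking the maximum passage score, and a PARADE-style aggregator that runs a small transformer over the per-passage representations before scoring the document. The passage [CLS] embeddings would come from a pretrained encoder such as BERT; here they are random placeholders, and the hidden size, layer count, and module names are illustrative assumptions.

```python
# Minimal sketch of score-max vs. PARADE-style representation aggregation.
# Passage [CLS] vectors are assumed to come from a model like BERT-base;
# random tensors stand in for them here.
import torch
import torch.nn as nn

HIDDEN = 768  # hidden size of the passage encoder (e.g., BERT-base)


class MaxScoreAggregator(nn.Module):
    """Baseline: score each passage independently, take the maximum."""

    def __init__(self, hidden=HIDDEN):
        super().__init__()
        self.scorer = nn.Linear(hidden, 1)

    def forward(self, passage_reps):                 # (num_passages, hidden)
        scores = self.scorer(passage_reps)           # (num_passages, 1)
        return scores.max(dim=0).values.squeeze(-1)  # scalar document score


class TransformerAggregator(nn.Module):
    """PARADE-style: aggregate passage representations with a small
    transformer, then score the aggregated document representation."""

    def __init__(self, hidden=HIDDEN, layers=2, heads=8):
        super().__init__()
        self.doc_cls = nn.Parameter(torch.randn(1, hidden))  # learned [CLS]
        layer = nn.TransformerEncoderLayer(
            d_model=hidden, nhead=heads, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=layers)
        self.scorer = nn.Linear(hidden, 1)

    def forward(self, passage_reps):                  # (num_passages, hidden)
        # Prepend the learned [CLS] token, attend over all passages.
        seq = torch.cat([self.doc_cls, passage_reps], dim=0).unsqueeze(0)
        doc_rep = self.encoder(seq)[0, 0]             # aggregated [CLS] output
        return self.scorer(doc_rep).squeeze(-1)       # scalar document score


if __name__ == "__main__":
    reps = torch.randn(4, HIDDEN)  # stand-in for 4 passage [CLS] vectors
    print(MaxScoreAggregator()(reps).item())
    print(TransformerAggregator()(reps).item())
```

The design difference mirrors the abstract's finding: the max-score baseline can only surface the single strongest passage, while the transformer aggregator lets relevance signals spread across passages interact before the document is scored.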

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Ad-Hoc Information Retrieval | TREC Robust04 | PARADE (ELECTRA) | P@20 | 0.4604 | #3 |
| Ad-Hoc Information Retrieval | TREC Robust04 | PARADE (ELECTRA) | nDCG@20 | 0.5399 | #2 |
| Ad-Hoc Information Retrieval | TREC Robust04 | PARADE (BERT) | P@20 | 0.4486 | #4 |
| Ad-Hoc Information Retrieval | TREC Robust04 | PARADE (BERT) | nDCG@20 | 0.5252 | #4 |
