Pre-training on larger datasets with ever-increasing model size is now a proven recipe for improved performance across almost all NLP tasks. A notable exception is information retrieval, where additional pre-training has so far failed to produce convincing results. We show that, with the right pre-training setup, this barrier can be overcome. We demonstrate this by pre-training large bi-encoder models on (1) a recently released set of 65 million synthetically generated questions, and (2) 200 million post-comment pairs from a preexisting dataset of Reddit conversations made available by pushshift.io. We evaluate on a set of information retrieval and dialogue retrieval benchmarks, showing substantial improvements over supervised baselines.
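The following is a minimal sketch of the bi-encoder retrieval setup the abstract refers to: a question (or post) and a passage (or comment) are encoded independently and scored by a dot product. The backbone model name, CLS pooling, and the toy examples are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal bi-encoder retrieval sketch (illustrative; the paper's exact
# backbone, pooling, and training objective may differ).
import torch
from transformers import AutoTokenizer, AutoModel

MODEL_NAME = "bert-base-uncased"  # assumption: any BERT-style encoder works here

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
query_encoder = AutoModel.from_pretrained(MODEL_NAME)    # encodes questions / posts
passage_encoder = AutoModel.from_pretrained(MODEL_NAME)  # encodes passages / comments

def encode(encoder, texts):
    """Return one dense vector per text ([CLS] pooling)."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = encoder(**batch)
    return out.last_hidden_state[:, 0]  # [CLS] token embedding

queries = ["who wrote the declaration of independence"]
passages = [
    "The Declaration of Independence was drafted primarily by Thomas Jefferson.",
    "The Eiffel Tower is located in Paris, France.",
]

q = encode(query_encoder, queries)     # shape (1, hidden)
p = encode(passage_encoder, passages)  # shape (2, hidden)
scores = q @ p.T                       # dot-product relevance scores, shape (1, 2)
print(scores)                          # higher score = more relevant passage
```

During pre-training, the synthetic questions and Reddit post-comment pairs supply (query, positive passage) pairs for this dual-encoder scoring function; the supervised retrieval data is then used for fine-tuning.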

Findings (NAACL) 2022

Results from the Paper


Ranked #2 on Passage Retrieval on Natural Questions (using extra training data)

| Task | Dataset | Model | Metric | Value | Global Rank | Uses Extra Training Data |
|------|---------|-------|--------|-------|-------------|--------------------------|
| Passage Retrieval | Natural Questions | DPR-PAQ | Precision@20 | 84.68 | #3 | Yes |
| Passage Retrieval | Natural Questions | DPR-PAQ | Precision@100 | 89.22 | #2 | Yes |
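For context, a sketch of how the Precision@k numbers above are typically computed for Natural Questions passage retrieval, assuming the usual convention that a question counts as a hit if any of its top-k retrieved passages contains a gold answer string; the function and toy data below are illustrative, not the paper's evaluation code.

```python
# Top-k retrieval accuracy sketch, assuming the standard Natural Questions
# convention: a question is a hit if any of its top-k passages contains
# one of the gold answer strings.
from typing import List

def precision_at_k(ranked_passages: List[List[str]],
                   gold_answers: List[List[str]],
                   k: int) -> float:
    """ranked_passages[i]: passages retrieved for question i, best first.
    gold_answers[i]: acceptable answer strings for question i."""
    hits = 0
    for passages, answers in zip(ranked_passages, gold_answers):
        top_k = passages[:k]
        if any(ans.lower() in p.lower() for p in top_k for ans in answers):
            hits += 1
    return hits / len(ranked_passages)

# Toy usage: one question, two retrieved passages, k=2 -> 1.0
print(precision_at_k(
    [["Thomas Jefferson drafted the Declaration.", "Paris is in France."]],
    [["Thomas Jefferson"]],
    k=2,
))
```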
