No Parameter Left Behind: How Distillation and Model Size Affect Zero-Shot Retrieval

Recent work has shown that small distilled language models are strong competitors to models that are orders of magnitude larger and slower in a wide range of information retrieval tasks. This has made distilled and dense models, due to latency constraints, the go-to choice for deployment in real-world retrieval applications. In this work, we question this practice by showing that the number of parameters and early query-document interaction play a significant role in the generalization ability of retrieval models. Our experiments show that increasing model size results in marginal gains on in-domain test sets, but much larger gains in new domains never seen during fine-tuning. Furthermore, we show that rerankers largely outperform dense retrievers of similar size in several tasks. Our largest reranker reaches the state of the art in 12 of the 18 datasets of the Benchmark-IR (BEIR) and surpasses the previous state of the art by 3 average points. Finally, we confirm that in-domain effectiveness is not a good indicator of zero-shot effectiveness. Code is available at https://github.com/guilhermemr04/scaling-zero-shot-retrieval.git
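The abstract's central contrast is between dense retrievers, which encode the query and document separately, and rerankers, which score them jointly (early query-document interaction). The sketch below illustrates the reranking pipeline shape only: a simple term-overlap function stands in for a real cross-encoder such as monoT5, and all names and documents here are hypothetical.

```python
# Toy reranking pipeline: a first-stage retriever returns candidates,
# then a scorer that sees query and document *together* reorders them.
# toy_score is a stand-in for a cross-encoder such as monoT5; the
# pipeline shape, not the scoring model, is the point.

def toy_score(query: str, doc: str) -> float:
    # Joint query-document scoring (here: simple term overlap).
    q_terms = set(query.lower().split())
    d_terms = set(doc.lower().split())
    return len(q_terms & d_terms) / max(len(q_terms), 1)

def rerank(query, candidates, k=10):
    # Score every candidate against the query, keep the top k.
    scored = sorted(candidates, key=lambda d: toy_score(query, d), reverse=True)
    return scored[:k]

docs = [
    "dense retrievers encode queries and documents separately",
    "rerankers score the query and document together",
    "unrelated text about cooking pasta",
]
print(rerank("how do rerankers score a query", docs, k=2))
```

In a real system the first-stage candidates would come from BM25 or a dense retriever, and the scorer would be a neural model; the paper's finding is that scaling up that second-stage scorer pays off mainly out of domain.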


Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Argument Retrieval | ArguAna (BEIR) | monoT5-3B | nDCG@10 | 0.322 | #3 |
| Biomedical Information Retrieval | BioASQ (BEIR) | monoT5-3B | nDCG@10 | 0.579 | #1 |
| Fact Checking | CLIMATE-FEVER (BEIR) | monoT5-3B | nDCG@10 | 0.280 | #2 |
| Duplicate-Question Retrieval | CQADupStack (BEIR) | monoT5-3B | nDCG@10 | 0.415 | #2 |
| Entity Retrieval | DBpedia (BEIR) | monoT5-3B | nDCG@10 | 0.477 | #1 |
| Fact Checking | FEVER (BEIR) | monoT5-3B | nDCG@10 | 0.849 | #1 |
| Question Answering | FiQA-2018 (BEIR) | monoT5-3B | nDCG@10 | 0.513 | #1 |
| Question Answering | HotpotQA (BEIR) | monoT5-3B | nDCG@10 | 0.759 | #1 |
| Biomedical Information Retrieval | NFCorpus (BEIR) | monoT5-3B | nDCG@10 | 0.383 | #1 |
| Question Answering | NQ (BEIR) | monoT5-3B | nDCG@10 | 0.633 | #2 |
| Duplicate-Question Retrieval | Quora (BEIR) | monoT5-3B | nDCG@10 | 0.843 | #2 |
| Citation Prediction | SciDocs (BEIR) | monoT5-3B | nDCG@10 | 0.197 | #1 |
| Fact Checking | SciFact (BEIR) | monoT5-3B | nDCG@10 | 0.777 | #1 |
| Tweet Retrieval | Signal-1M (RT) (BEIR) | monoT5-3B | nDCG@10 | 0.339 | #1 |
| Argument Retrieval | Touché-2020 (BEIR) | monoT5-3B | nDCG@10 | 0.325 | #1 |
| Biomedical Information Retrieval | TREC-COVID (BEIR) | monoT5-3B | nDCG@10 | 0.795 | #2 |
| News Retrieval | TREC-NEWS (BEIR) | monoT5-3B | nDCG@10 | 0.473 | #2 |
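Every result above is reported as nDCG@10, normalized discounted cumulative gain over the top 10 ranked documents. A minimal sketch of the metric, assuming binary relevance judgments (the example ranking below is hypothetical, not taken from the paper):

```python
import math

def dcg_at_k(relevances, k=10):
    # Discounted cumulative gain: relevance discounted by log2 of rank.
    # Position i (0-indexed) is rank i+1, so the discount is log2(i+2).
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k=10):
    # Normalize by the DCG of the ideal (descending-relevance) ordering,
    # so a perfect ranking scores 1.0.
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0

# Hypothetical binary relevance labels for a ranked list of 10 documents.
ranked_rels = [1, 0, 1, 1, 0, 0, 1, 0, 0, 0]
print(round(ndcg_at_k(ranked_rels), 3))
```

In practice BEIR evaluations use graded relevance judgments and a standard tool such as pytrec_eval, but the computation follows the same shape.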
