CEDR: Contextualized Embeddings for Document Ranking

15 Apr 2019  ·  Sean MacAvaney, Andrew Yates, Arman Cohan, Nazli Goharian ·

Although considerable attention has been given to neural ranking architectures recently, far less attention has been paid to the term representations that are used as input to these models. In this work, we investigate how two pretrained contextualized language models (ELMo and BERT) can be utilized for ad-hoc document ranking. Through experiments on TREC benchmarks, we find that several existing neural ranking architectures can benefit from the additional context provided by contextualized language models. Furthermore, we propose a joint approach that incorporates BERT's classification vector into existing neural models and show that it outperforms state-of-the-art ad-hoc ranking baselines. We call this joint approach CEDR (Contextualized Embeddings for Document Ranking). We also address practical challenges in using these models for ranking, including the maximum input length imposed by BERT and runtime performance impacts of contextualized language models.
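The joint approach described above can be sketched in a few lines: a KNRM-style model soft-matches contextualized query and document term embeddings through RBF kernels, and the resulting kernel features are concatenated with BERT's classification ([CLS]) vector before a final linear scoring layer. The sketch below is illustrative only, not the authors' implementation; the function names, dimensions, and kernel settings are assumptions.

```python
# Hedged sketch of the CEDR-KNRM idea (not the authors' code): combine
# BERT's [CLS] classification vector with KNRM-style kernel features.
# All names, dimensions, and hyperparameters here are illustrative.
import numpy as np

def knrm_kernel_features(q_emb, d_emb, mus, sigma=0.1):
    """KNRM: soft-match query/doc terms via RBF kernels over cosine similarity."""
    qn = q_emb / np.linalg.norm(q_emb, axis=1, keepdims=True)
    dn = d_emb / np.linalg.norm(d_emb, axis=1, keepdims=True)
    sim = qn @ dn.T  # (|q|, |d|) cosine similarity matrix
    feats = []
    for mu in mus:
        k = np.exp(-((sim - mu) ** 2) / (2 * sigma ** 2))  # kernel responses
        # pool over document terms, log-scale, then sum over query terms
        feats.append(np.log1p(k.sum(axis=1)).sum())
    return np.array(feats)

def cedr_knrm_score(q_emb, d_emb, cls_vec, w):
    """Joint scoring: concatenate [CLS] with kernel features, combine linearly."""
    mus = np.linspace(-0.9, 0.9, 10)  # illustrative kernel centers
    feats = np.concatenate([knrm_kernel_features(q_emb, d_emb, mus), cls_vec])
    return float(feats @ w)

# Toy inputs standing in for contextualized embeddings from a language model.
rng = np.random.default_rng(0)
q = rng.normal(size=(3, 8))    # 3 query terms, toy 8-dim embeddings
d = rng.normal(size=(20, 8))   # 20 document terms
cls = rng.normal(size=16)      # toy [CLS] vector
w = rng.normal(size=10 + 16)   # scoring weights (learned in practice)
score = cedr_knrm_score(q, d, cls, w)
```

For documents longer than BERT's input limit, one common workaround (hedged here, as the exact procedure is described in the paper rather than this sketch) is to run BERT over the document in chunks, concatenate the per-chunk term embeddings, and aggregate the per-chunk [CLS] vectors before scoring.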



Benchmark results (Task: Ad-Hoc Information Retrieval, Dataset: TREC Robust04)

Model           P@20    (Global Rank)   nDCG@20   (Global Rank)
CEDR-KNRM       0.4667  #2              0.5381    #3
Vanilla BERT    0.4042  #7              0.4541    #8