CEDR: Contextualized Embeddings for Document Ranking

15 Apr 2019 · Sean MacAvaney, Andrew Yates, Arman Cohan, Nazli Goharian

Although considerable attention has been given to neural ranking architectures recently, far less attention has been paid to the term representations that are used as input to these models. In this work, we investigate how two pretrained contextualized language models (ELMo and BERT) can be utilized for ad-hoc document ranking...
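At a high level, CEDR feeds BERT's contextualized token embeddings into an existing neural ranking architecture (such as KNRM) and combines the result with BERT's [CLS] classification vector. The code below is a minimal sketch of that idea, not the paper's implementation; the class names, kernel settings, and the `bert-base-uncased` checkpoint are illustrative assumptions.

```python
# A minimal sketch of a CEDR-KNRM-style ranker, not the authors' implementation.
# Requires the `torch` and Hugging Face `transformers` packages; class names,
# kernel settings, and the `bert-base-uncased` checkpoint are illustrative.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer


class KernelPooling(nn.Module):
    """KNRM-style RBF kernel pooling over a query-document similarity matrix."""

    def __init__(self, n_kernels: int = 11, sigma: float = 0.1):
        super().__init__()
        # Kernel centers spread over [-1, 1]; the last kernel captures exact matches.
        mus = torch.linspace(-0.9, 0.9, n_kernels - 1).tolist() + [1.0]
        self.register_buffer("mu", torch.tensor(mus).view(1, 1, 1, -1))
        self.sigma = sigma
        self.n_kernels = n_kernels

    def forward(self, sim, q_mask, d_mask):
        # sim: (batch, seq, seq); masks are 1 for real query / document tokens.
        k = torch.exp(-((sim.unsqueeze(-1) - self.mu) ** 2) / (2 * self.sigma ** 2))
        k = k * d_mask.unsqueeze(1).unsqueeze(-1)            # drop non-document columns
        pooled = torch.log1p(k.sum(dim=2)) * q_mask.unsqueeze(-1)
        return pooled.sum(dim=1)                             # (batch, n_kernels)


class CedrKnrmSketch(nn.Module):
    """Scores a query-document pair from BERT's contextualized token embeddings
    (kernel-pooled, KNRM-style) combined with the [CLS] vector."""

    def __init__(self, model_name: str = "bert-base-uncased"):
        super().__init__()
        self.bert = AutoModel.from_pretrained(model_name)
        self.kernels = KernelPooling()
        self.score = nn.Linear(self.bert.config.hidden_size + self.kernels.n_kernels, 1)

    def forward(self, input_ids, attention_mask, token_type_ids):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask,
                        token_type_ids=token_type_ids)
        hidden = out.last_hidden_state                       # (batch, seq, dim)
        cls = hidden[:, 0]                                   # [CLS] vector
        # Segment 0 holds the query, segment 1 the document (special tokens are
        # treated as query tokens here for simplicity).
        q_mask = ((token_type_ids == 0) & (attention_mask == 1)).float()
        d_mask = ((token_type_ids == 1) & (attention_mask == 1)).float()
        normed = nn.functional.normalize(hidden, dim=-1)
        sim = normed @ normed.transpose(1, 2)                # cosine similarity of all token pairs
        feats = self.kernels(sim, q_mask, d_mask)
        return self.score(torch.cat([cls, feats], dim=-1)).squeeze(-1)


if __name__ == "__main__":
    tok = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = CedrKnrmSketch()
    inputs = tok("contextualized embeddings",
                 "CEDR feeds BERT token embeddings into a ranker.",
                 return_tensors="pt", truncation=True)
    print(model(**inputs))  # one relevance score for the query-document pair
```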


Evaluation results from the paper


Task                         | Dataset       | Model        | Metric name | Metric value | Global rank
Ad-Hoc Information Retrieval | TREC Robust04 | CEDR-KNRM    | P@20        | 0.4667       | # 1
Ad-Hoc Information Retrieval | TREC Robust04 | CEDR-KNRM    | nDCG@20     | 0.5381       | # 1
Ad-Hoc Information Retrieval | TREC Robust04 | Vanilla BERT | P@20        | 0.4042       | # 4
Ad-Hoc Information Retrieval | TREC Robust04 | Vanilla BERT | nDCG@20     | 0.4541       | # 3
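The table above reports P@20 and nDCG@20 on TREC Robust04. Official TREC scoring uses trec_eval; the sketch below only illustrates how these two measures are typically computed, using hypothetical document IDs and relevance judgments.

```python
# Illustrative implementations of P@20 and nDCG@20 (one common DCG formulation);
# official TREC evaluation uses trec_eval.
import math
from typing import Dict, List


def precision_at_k(ranking: List[str], qrels: Dict[str, int], k: int = 20) -> float:
    """Fraction of the top-k ranked documents judged relevant."""
    return sum(1 for doc in ranking[:k] if qrels.get(doc, 0) > 0) / k


def ndcg_at_k(ranking: List[str], qrels: Dict[str, int], k: int = 20) -> float:
    """Discounted cumulative gain of the top-k, normalized by the ideal ranking."""
    def dcg(gains: List[int]) -> float:
        return sum(g / math.log2(i + 2) for i, g in enumerate(gains))

    gains = [qrels.get(doc, 0) for doc in ranking[:k]]
    ideal = sorted(qrels.values(), reverse=True)[:k]
    return dcg(gains) / dcg(ideal) if any(ideal) else 0.0


if __name__ == "__main__":
    qrels = {"d1": 2, "d2": 0, "d3": 1}         # hypothetical graded judgments
    ranking = ["d3", "d1", "d2", "d4"]          # hypothetical system output, best first
    print(precision_at_k(ranking, qrels, k=2))  # 1.0
    print(ndcg_at_k(ranking, qrels, k=2))       # ~0.86
```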