Document Ranking
59 papers with code • 2 benchmarks • 6 datasets
Sort documents according to some criterion so that the "best" results appear early in the result list displayed to the user (Source: Wikipedia).
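At its simplest, ranking means scoring every candidate document against the query and returning the candidates best-first. A minimal sketch, where the scoring function is a hypothetical placeholder rather than any particular model:

```python
# Minimal ranking loop: score every candidate document for a query and
# return them best-first. The scoring function is a hypothetical placeholder
# for any retrieval model (BM25, a neural ranker, etc.).
def rank(query, documents, score):
    return sorted(documents, key=lambda doc: score(query, doc), reverse=True)

# Toy usage with a trivial term-overlap score (illustration only).
def term_overlap(query, doc):
    return len(set(query.lower().split()) & set(doc.lower().split()))

docs = ["neural ranking with BERT", "classic BM25 retrieval", "cooking recipes"]
print(rank("neural ranking", docs, term_overlap))
```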
Most implemented papers
XLNet: Generalized Autoregressive Pretraining for Language Understanding
With the capability of modeling bidirectional contexts, denoising autoencoding based pretraining like BERT achieves better performance than pretraining approaches based on autoregressive language modeling.
ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT
ColBERT introduces a late interaction architecture that independently encodes the query and the document using BERT and then employs a cheap yet powerful interaction step that models their fine-grained similarity.
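The late interaction step amounts to a "MaxSim" operator: for each query token embedding, take the maximum similarity over all document token embeddings and sum those maxima. The sketch below illustrates this scoring with random tensors in PyTorch; it is not the official ColBERT implementation, and the shapes and normalization are assumptions for illustration.

```python
import torch

# Sketch of ColBERT-style late interaction ("MaxSim") with dummy embeddings.
# In the real model, Q and D come from a BERT encoder followed by a linear
# projection and L2 normalization; here we just use random tensors.
def maxsim_score(Q, D):
    """Q: (num_query_tokens, dim), D: (num_doc_tokens, dim), both L2-normalized.

    For every query token, take the maximum cosine similarity over all
    document tokens, then sum these maxima to get the relevance score.
    """
    sim = Q @ D.T                      # (num_query_tokens, num_doc_tokens)
    return sim.max(dim=1).values.sum()

torch.manual_seed(0)
dim = 128
Q = torch.nn.functional.normalize(torch.randn(8, dim), dim=-1)    # query token embeddings
D = torch.nn.functional.normalize(torch.randn(120, dim), dim=-1)  # document token embeddings
print(float(maxsim_score(Q, D)))
```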
CEDR: Contextualized Embeddings for Document Ranking
We call this joint approach CEDR (Contextualized Embeddings for Document Ranking).
Learning deep structured semantic models for web search using clickthrough data
The proposed deep structured semantic models are discriminatively trained by maximizing the conditional likelihood of the clicked documents given a query using the clickthrough data.
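Concretely, the objective treats relevance as a softmax over smoothed cosine similarities between the query vector and the clicked document versus sampled unclicked documents. A minimal sketch of that loss with dummy vectors; the encoders producing the vectors are omitted, and the smoothing factor gamma and tensor shapes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

# Sketch of the DSSM training objective: the conditional likelihood of the
# clicked document is a softmax over smoothed cosine similarities against the
# clicked document plus sampled unclicked documents.
def dssm_loss(q_vec, clicked_vec, negative_vecs, gamma=10.0):
    """q_vec: (dim,), clicked_vec: (dim,), negative_vecs: (num_neg, dim)."""
    docs = torch.cat([clicked_vec.unsqueeze(0), negative_vecs], dim=0)
    sims = gamma * F.cosine_similarity(q_vec.unsqueeze(0), docs, dim=-1)
    # The clicked document sits at index 0; maximize its softmax probability.
    return F.cross_entropy(sims.unsqueeze(0), torch.tensor([0]))

torch.manual_seed(0)
dim = 128
loss = dssm_loss(torch.randn(dim), torch.randn(dim), torch.randn(4, dim))
print(float(loss))
```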
Context Attentive Document Ranking and Query Suggestion
We present a context-aware neural ranking model to exploit users' on-task search activities and enhance retrieval performance.
Neural Vector Spaces for Unsupervised Information Retrieval
We propose the Neural Vector Space Model (NVSM), a method that learns representations of documents in an unsupervised manner for news article retrieval.
Simplified TinyBERT: Knowledge Distillation for Document Retrieval
Despite the effectiveness of utilizing the BERT model for document ranking, the high computational cost of such approaches limits their use.
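The distillation idea is to train a small student ranker to match both the hard relevance labels and the soft predictions of a large BERT teacher. The sketch below shows a generic soft-target distillation loss, not the paper's exact loss schedule (Simplified TinyBERT also distills at the pretraining stage); the temperature and weighting are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

# Generic knowledge-distillation loss for a ranking classifier: combine the
# hard-label cross-entropy with a soft KL term that pushes the student's
# logits toward the teacher's logits. Temperature T and weight alpha are
# illustrative hyperparameters, not values from the Simplified TinyBERT paper.
def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * hard + (1.0 - alpha) * soft

torch.manual_seed(0)
student = torch.randn(4, 2)            # relevant / non-relevant logits
teacher = torch.randn(4, 2)
labels = torch.tensor([1, 0, 1, 1])
print(float(distillation_loss(student, teacher, labels)))
```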
The Expando-Mono-Duo Design Pattern for Text Ranking with Pretrained Sequence-to-Sequence Models
We propose a design pattern for tackling text ranking problems, dubbed "Expando-Mono-Duo", that has been empirically validated for a number of ad hoc retrieval tasks in different domains.
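The pattern chains three stages: "Expando" enriches documents with predicted queries before indexing, "Mono" reranks retrieved candidates pointwise, and "Duo" reranks the survivors pairwise. Below is a schematic of the control flow with hypothetical placeholder scorers; in the paper each stage is implemented with a pretrained sequence-to-sequence model such as T5.

```python
# Sketch of the Expando-Mono-Duo pipeline with placeholder scorers.
# predict_queries, mono_score, and duo_prefer are hypothetical stand-ins.

def expand(doc, predict_queries):
    # "Expando": append predicted queries to the document text before indexing.
    return doc + " " + " ".join(predict_queries(doc))

def mono_rerank(query, candidates, mono_score, top_k=50):
    # "Mono": pointwise reranking -- score each (query, doc) pair independently.
    scored = sorted(candidates, key=lambda d: mono_score(query, d), reverse=True)
    return scored[:top_k]

def duo_rerank(query, candidates, duo_prefer):
    # "Duo": pairwise reranking -- aggregate pairwise preferences into a score.
    totals = {d: 0.0 for d in candidates}
    for a in candidates:
        for b in candidates:
            if a is not b:
                totals[a] += duo_prefer(query, a, b)  # estimated P(a beats b)
    return sorted(candidates, key=lambda d: totals[d], reverse=True)
```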
IRGAN: A Minimax Game for Unifying Generative and Discriminative Information Retrieval Models
This paper provides a unified account of two schools of thinking in information retrieval modelling: the generative retrieval focusing on predicting relevant documents given a query, and the discriminative retrieval focusing on predicting relevancy given a query-document pair.
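In the minimax game, the discriminator learns to tell true relevant documents from documents sampled by the generator, while the generator is updated with a REINFORCE-style policy gradient to fool the discriminator. A rough sketch of the two losses with dummy scores; the reward shaping and variable names are assumptions for illustration, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

# Sketch of the IRGAN minimax objective. The discriminator separates true
# (relevant) documents from documents sampled by the generator; the generator
# is trained with a REINFORCE-style policy gradient to pick documents that
# the discriminator scores highly.
def discriminator_loss(d_scores_true, d_scores_generated):
    # Maximize log D(true) + log(1 - D(generated)); written as a loss to minimize.
    return -(F.logsigmoid(d_scores_true).mean()
             + torch.log1p(-torch.sigmoid(d_scores_generated)).mean())

def generator_loss(log_probs_generated, d_scores_generated):
    # REINFORCE: reward each sampled document by a function of its discriminator
    # score (softplus here), using the log-prob trick for the discrete sample.
    reward = F.softplus(d_scores_generated).detach()
    return -(log_probs_generated * reward).mean()

torch.manual_seed(0)
print(float(discriminator_loss(torch.randn(8), torch.randn(8))))
print(float(generator_loss(torch.log_softmax(torch.randn(8), dim=0), torch.randn(8))))
```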
Multi-Stage Document Ranking with BERT
The advent of deep neural networks pre-trained via language modeling tasks has spurred a number of successful applications in natural language processing.