Document ranking sorts documents according to a relevance criterion so that the "best" results appear early in the result list displayed to the user (Source: Wikipedia).
With the capability of modeling bidirectional contexts, denoising-autoencoding-based pretraining approaches such as BERT achieve better performance than pretraining approaches based on autoregressive language modeling.
This paper provides a unified account of two schools of thought in information retrieval modeling: generative retrieval, which focuses on predicting relevant documents given a query, and discriminative retrieval, which focuses on predicting relevancy given a query-document pair.
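As a rough formalization (notation ours, not necessarily the paper's), the two views can be written as:

```latex
% Generative retrieval: rank by the likelihood of the document given the query.
\mathrm{score}_{\text{gen}}(q, d) = P(d \mid q)
% Discriminative retrieval: rank by the probability of relevance for the pair.
\mathrm{score}_{\text{disc}}(q, d) = P(r = 1 \mid q, d)
```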
Given a query and a set of documents, K-NRM uses a translation matrix that models word-level similarities via word embeddings, a new kernel-pooling technique that uses kernels to extract multi-level soft match features, and a learning-to-rank layer that combines those features into the final ranking score.
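A minimal NumPy sketch of the kernel-pooling step, assuming pre-trained word embeddings and illustrative kernel means and widths (`mus` and `sigmas` are placeholder names, not the paper's):

```python
import numpy as np

def knrm_features(q_emb, d_emb, mus, sigmas):
    """Kernel pooling over the translation matrix (illustrative sketch).

    q_emb: (n_q, dim) query word embeddings; d_emb: (n_d, dim) document
    word embeddings; mus/sigmas: per-kernel means and widths (assumed).
    """
    # Translation matrix: cosine similarity between every query/document word pair.
    q = q_emb / np.linalg.norm(q_emb, axis=1, keepdims=True)
    d = d_emb / np.linalg.norm(d_emb, axis=1, keepdims=True)
    M = q @ d.T  # shape (n_q, n_d)

    feats = []
    for mu, sigma in zip(mus, sigmas):
        # Each RBF kernel softly counts matches near similarity level mu.
        k = np.exp(-((M - mu) ** 2) / (2 * sigma ** 2)).sum(axis=1)  # (n_q,)
        feats.append(np.log(np.clip(k, 1e-10, None)).sum())  # log-sum over query words
    return np.array(feats)  # one soft-match feature per kernel

# The learning-to-rank layer combines these features into a single score,
# e.g. score = tanh(w @ feats + b), with w and b learned from relevance data.
```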
ColBERT introduces a late interaction architecture that independently encodes the query and the document using BERT and then employs a cheap yet powerful interaction step that models their fine-grained similarity.
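A minimal sketch of the late-interaction (MaxSim) scoring step, assuming the per-token query and document embeddings were already produced independently by the BERT encoders and L2-normalized:

```python
import numpy as np

def colbert_maxsim(q_emb, d_emb):
    """Late interaction: for each query token embedding, take its maximum
    similarity over all document token embeddings, then sum over the query.
    Assumes rows of q_emb/d_emb are L2-normalized token embeddings."""
    sim = q_emb @ d_emb.T          # (n_q, n_d) token-to-token similarities
    return float(sim.max(axis=1).sum())
```

Because the expensive encoding happens offline and independently per document, only this cheap interaction runs at query time.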
CEDR (Contextualized Embeddings for Document Ranking) is a joint approach that incorporates BERT's contextualized word embeddings into existing neural ranking architectures.
We study the utility of the lexical translation model (IBM Model 1) for English text retrieval, in particular, its neural variants that are trained end-to-end.
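For reference, the standard Model 1 retrieval score generates each query term t by "translating" document terms w (this is the textbook formulation, not copied from the paper; the neural variants learn the translation probabilities T end-to-end rather than from a discrete table):

```latex
% Query likelihood under IBM Model 1: each query term t is generated by
% translating some document term w with probability T(t | w).
P(q \mid d) = \prod_{t \in q} \sum_{w \in d} T(t \mid w)\, P(w \mid d)
```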
We present a context-aware neural ranking model to exploit users' on-task search activities and enhance retrieval performance.
We propose a multi-task learning framework to jointly learn document ranking and query suggestion for web search.
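One common way to realize such a framework is a shared encoder with separate task heads trained under a joint loss; the sketch below is a hypothetical illustration (layer choices, names, and dimensions are assumptions, not the paper's architecture):

```python
import torch
import torch.nn as nn

class JointRankerSuggester(nn.Module):
    """Hypothetical multi-task sketch: one shared query encoder,
    one head for ranking and one for query suggestion."""
    def __init__(self, dim=256, vocab=30000):
        super().__init__()
        self.encoder = nn.LSTM(dim, dim, batch_first=True)  # shared query encoder
        self.rank_head = nn.Linear(2 * dim, 1)              # scores a (query, doc) pair
        self.suggest_head = nn.Linear(dim, vocab)           # predicts next-query tokens

    def forward(self, q_emb, d_emb):
        # q_emb: (batch, seq, dim) query token embeddings; d_emb: (batch, dim).
        _, (h, _) = self.encoder(q_emb)
        q_repr = h[-1]                                      # final hidden state
        rank_score = self.rank_head(torch.cat([q_repr, d_emb], dim=-1))
        suggest_logits = self.suggest_head(q_repr)
        return rank_score, suggest_logits

# Joint training: loss = rank_loss + lambda * suggestion_loss, so both tasks
# backpropagate through (and regularize) the shared encoder.
```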
Models such as latent semantic analysis and those based on neural embeddings learn distributed representations of text, and match the query against the document in the latent semantic space.
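A toy sketch of matching in a latent space, assuming averaged word embeddings as the text representation (LSA would instead derive the vectors via SVD of a term-document matrix):

```python
import numpy as np

def latent_match(query_vecs, doc_vecs):
    """Represent query and document as averaged word embeddings and
    compare them by cosine similarity in the shared latent space."""
    q = query_vecs.mean(axis=0)
    d = doc_vecs.mean(axis=0)
    return float(q @ d / (np.linalg.norm(q) * np.linalg.norm(d)))
```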