Sort documents according to some criterion so that the "best" results appear early in the result list displayed to the user (Source: Wikipedia).
We study the utility of the lexical translation model (IBM Model 1) for English text retrieval, in particular, its neural variants that are trained end-to-end.
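A minimal sketch of how an IBM Model 1 translation table scores a document for a query: the query likelihood P(q|d) is the product over query terms of their average translation probability from the document's terms. The random `trans_prob` table here is purely illustrative; in the neural variants described above it would be derived from term embeddings trained end-to-end rather than estimated with EM.

```python
import torch

def model1_score(query_ids, doc_ids, trans_prob, eps=1e-10):
    """IBM Model 1 query likelihood: log P(q | d).

    trans_prob[s, t] ~ P(query term t | doc term s).
    P(t | d) = (1 / |d|) * sum over doc terms s of P(t | s).
    (Illustrative sketch; the table is a hypothetical stand-in.)
    """
    per_source = trans_prob[doc_ids]       # [|d|, vocab] rows for doc terms
    p_t_given_d = per_source.mean(dim=0)   # [vocab] average over doc terms
    return torch.log(p_t_given_d[query_ids] + eps).sum()

# Toy usage: vocabulary of 5 terms, random row-stochastic translation table.
vocab = 5
trans_prob = torch.softmax(torch.randn(vocab, vocab), dim=-1)
score = model1_score(torch.tensor([0, 3]), torch.tensor([1, 3, 4]), trans_prob)
```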
Our experimental results on the ad-hoc retrieval task of conversation response ranking reveal that (i) BERT-based rankers are not robustly calibrated, whereas stochastic BERT-based rankers yield better calibration; and (ii) uncertainty estimation is beneficial both for risk-aware neural ranking, i.e., taking uncertainty into account when ranking documents, and for predicting unanswerable conversational contexts.
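One standard way to obtain a stochastic ranker in this spirit is Monte Carlo dropout: keep dropout active at inference, draw several scores per candidate, and rank by a risk-adjusted score. A minimal sketch, where a toy dropout-equipped scorer stands in for a BERT ranker and the penalty weight `alpha` is an illustrative risk-aversion hyperparameter:

```python
import torch
import torch.nn as nn

# Toy stand-in for a BERT cross-encoder: any scorer with dropout works.
ranker = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Dropout(0.1), nn.Linear(16, 1))

def mc_dropout_scores(model, pair_feats, n_samples=10):
    """Stochastic relevance scores via Monte Carlo dropout:
    keep dropout layers active and draw several forward passes."""
    model.train()  # dropout stays on at inference time
    with torch.no_grad():
        samples = torch.stack(
            [model(pair_feats).squeeze(-1) for _ in range(n_samples)]
        )
    return samples.mean(dim=0), samples.std(dim=0)

def risk_aware_rank(mean, std, alpha=1.0):
    """Rank candidates by mean relevance, penalized by uncertainty
    (alpha is an illustrative risk-aversion weight)."""
    return torch.argsort(mean - alpha * std, descending=True)

mean, std = mc_dropout_scores(ranker, torch.randn(20, 8))  # 20 candidates
order = risk_aware_rank(mean, std)
```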
In this paper, we design a Query-Directed Sparse attention mechanism that induces IR-axiomatic structures in transformer self-attention.
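A hedged sketch of what such a sparse pattern could look like: every position attends to the query tokens globally plus a small local window around itself. This illustrates the general idea of query-directed sparsity, not the paper's exact attention design; the mask would be applied to the attention logits with `masked_fill(~mask, -inf)` before the softmax.

```python
import torch

def query_directed_mask(seq_len, query_len, window=2):
    """Boolean attention mask: True = attention allowed.

    Each position may attend to the leading query tokens (global columns)
    and to positions within a +/- `window` local band (a plausible
    query-directed sparse pattern, hypothetical and simplified).
    """
    idx = torch.arange(seq_len)
    local = (idx[:, None] - idx[None, :]).abs() <= window     # banded band
    to_query = idx[None, :].expand(seq_len, -1) < query_len   # query columns
    return local | to_query

mask = query_directed_mask(seq_len=12, query_len=3, window=1)
```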
Despite the effectiveness of utilizing the BERT model for document ranking, the high computational cost of such approaches limits their use.
We extend the ranked retrieval annotations of the Deep Learning track of TREC 2019 with passage- and word-level graded relevance annotations for all relevant documents.
Deep matching models aim to help search engines retrieve more relevant documents by mapping queries and documents into semantic vectors for first-stage retrieval.
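A minimal sketch of first-stage dense retrieval under this framing: documents are pre-encoded into vectors, and retrieval is a top-k inner-product search. The brute-force matrix product below stands in for the approximate nearest-neighbor index (e.g., FAISS) a production system would use, and the embeddings are random placeholders.

```python
import torch
import torch.nn.functional as F

def dense_retrieve(query_vec, doc_matrix, k=10):
    """First-stage dense retrieval: inner-product search over
    pre-encoded document vectors (brute force for illustration)."""
    scores = doc_matrix @ query_vec                        # [num_docs]
    topk = torch.topk(scores, k=min(k, doc_matrix.size(0)))
    return topk.indices, topk.values

# Toy usage with random 128-d embeddings for 1000 documents.
docs = F.normalize(torch.randn(1000, 128), dim=-1)
q = F.normalize(torch.randn(128), dim=-1)
ids, scores = dense_retrieve(q, docs, k=5)
```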
In this work, we propose a local self-attention mechanism that considers a moving window over the document terms, where each term attends only to other terms in the same window.
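A compact sketch of the windowed pattern: attention logits outside the +/- `window` band are masked out before the softmax. Materializing the full sequence-by-sequence matrix, as done here for clarity, forgoes the efficiency gain; dedicated banded kernels avoid building it.

```python
import torch
import torch.nn.functional as F

def local_self_attention(q, k, v, window=64):
    """Self-attention where each term attends only to terms within a
    +/- `window` neighborhood (dense computation of the banded pattern,
    for illustration only)."""
    seq_len, dim = q.shape
    logits = (q @ k.T) / dim ** 0.5                        # [L, L]
    idx = torch.arange(seq_len)
    band = (idx[:, None] - idx[None, :]).abs() <= window   # local band
    logits = logits.masked_fill(~band, float("-inf"))
    return F.softmax(logits, dim=-1) @ v

out = local_self_attention(
    torch.randn(128, 64), torch.randn(128, 64), torch.randn(128, 64), window=8
)
```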
ColBERT introduces a late interaction architecture that independently encodes the query and the document using BERT and then employs a cheap yet powerful interaction step that models their fine-grained similarity.
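That interaction step is the MaxSim operator: each query token embedding is matched against its most similar document token embedding, and the maxima are summed. A minimal sketch, assuming token embeddings already produced by the query and document BERT encoders (ColBERT L2-normalizes them):

```python
import torch
import torch.nn.functional as F

def maxsim_score(query_emb, doc_emb):
    """ColBERT-style late interaction: for each query token take the
    maximum similarity over all document tokens, then sum.

    query_emb: [q_len, dim], doc_emb: [d_len, dim],
    both assumed L2-normalized.
    """
    sim = query_emb @ doc_emb.T           # [q_len, d_len] token similarities
    return sim.max(dim=1).values.sum()    # MaxSim, summed over query tokens

# Toy usage with random normalized token embeddings.
q = F.normalize(torch.randn(32, 128), dim=-1)   # query tokens
d = F.normalize(torch.randn(300, 128), dim=-1)  # document tokens
score = maxsim_score(q, d)
```

Because documents are encoded independently of the query, their token embeddings can be precomputed offline, leaving only the cheap MaxSim step at query time.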