Sort documents according to some criterion so that the "best" results appear early in the result list displayed to the user (Source: Wikipedia).
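As a minimal illustration of this definition, a ranker is just a scoring function applied before a descending sort; the `score` callable below is a stand-in for any retrieval model:

```python
def rank(documents, score):
    """Order documents so the best-scoring ones appear first in the result list."""
    return sorted(documents, key=score, reverse=True)
```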
The Transformer-Kernel (TK) model has demonstrated strong reranking performance on the TREC Deep Learning benchmark, and can be considered an efficient (but slightly less effective) alternative to other Transformer-based architectures that employ (i) large-scale pretraining (high training cost), (ii) joint encoding of query and document (high inference cost), and (iii) a larger number of Transformer layers (both high training and high inference costs).
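The kernel component of TK is KNRM-style kernel pooling over a query-document similarity matrix. A minimal sketch, assuming cosine similarities from the shallow Transformer encoders and illustrative (not the paper's exact) kernel means and widths:

```python
import numpy as np

def kernel_pooling(sim, mus, sigmas, eps=1e-10):
    """KNRM/TK-style soft-match features from a (|q| x |d|) similarity matrix."""
    # Each Gaussian kernel counts how many similarities fall near its mean.
    rbf = np.exp(-((sim[..., None] - mus) ** 2) / (2 * sigmas ** 2))
    per_query_term = rbf.sum(axis=1)                 # soft term frequency, per kernel
    return np.log(per_query_term + eps).sum(axis=0)  # pool over query terms

# Illustrative hyperparameters.
mus = np.linspace(-0.9, 1.0, 11)
sigmas = np.full(11, 0.1)
```

A linear layer over these pooled features then produces the final relevance score.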
Recently introduced pre-trained contextualized language models such as BERT have shown improvements in document retrieval tasks.
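A minimal sketch of BERT-based reranking with a cross-encoder, assuming the Hugging Face `transformers` API; the checkpoint name is illustrative only:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Illustrative checkpoint; any BERT-style query-document relevance model fits.
NAME = "cross-encoder/ms-marco-MiniLM-L-6-v2"
tok = AutoTokenizer.from_pretrained(NAME)
model = AutoModelForSequenceClassification.from_pretrained(NAME).eval()

def rerank(query, docs):
    """Score each (query, doc) pair jointly, then sort by descending relevance."""
    batch = tok([query] * len(docs), docs, padding=True,
                truncation=True, return_tensors="pt")
    with torch.no_grad():
        scores = model(**batch).logits.squeeze(-1)
    return [docs[i] for i in scores.argsort(descending=True).tolist()]
```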
Traditional statistical retrieval models often treat each document as a whole.
When monoBERT is used as the cross-encoder teacher, together with either TwinBERT or ColBERT as the bi-encoder teacher, TRMD produces a student bi-encoder that performs better than the corresponding baseline bi-encoder.
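A minimal sketch of the two-teacher setup, assuming a simple MSE score-distillation objective (the paper's actual TRMD loss may differ):

```python
import torch.nn.functional as F

def two_teacher_loss(student_scores, cross_teacher_scores, bi_teacher_scores,
                     alpha=0.5):
    """Hypothetical blend of distillation targets from a cross-encoder teacher
    (e.g. monoBERT) and a bi-encoder teacher (e.g. TwinBERT or ColBERT)."""
    return (alpha * F.mse_loss(student_scores, cross_teacher_scores)
            + (1 - alpha) * F.mse_loss(student_scores, bi_teacher_scores))
```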
We experiment with two hybrid models that first filter out the best podcasts for the user's query with a classical IR technique, and then re-rank the shortlisted documents by their detailed descriptions using a transformer-based model.
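A minimal sketch of such a hybrid pipeline, assuming BM25 (via the `rank_bm25` package) for the first-stage shortlist and a neural reranker such as the cross-encoder sketched above:

```python
from rank_bm25 import BM25Okapi

def hybrid_search(query, corpus, rerank, k=50):
    """BM25 shortlist, then neural re-ranking of the top-k candidates."""
    tokenized = [doc.lower().split() for doc in corpus]
    bm25 = BM25Okapi(tokenized)
    shortlist = bm25.get_top_n(query.lower().split(), corpus, n=k)
    return rerank(query, shortlist)
```

In the podcast setting, the reranker would read the detailed episode descriptions of the shortlisted candidates.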
Leaderboards are a ubiquitous part of modern research in applied machine learning.
OpenMatch is a Python-based library for Neural Information Retrieval (Neu-IR) research.
We propose a design pattern for tackling text ranking problems, dubbed "Expando-Mono-Duo", that has been empirically validated for a number of ad hoc retrieval tasks in different domains.
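The pattern's three stages can be sketched as follows; every helper here (`expansion_model`, `pointwise_model`, `pairwise_model`) is a hypothetical placeholder, not an API from the paper:

```python
def expando(doc, expansion_model):
    """Offline: append predicted queries to a document before indexing."""
    return doc + " " + " ".join(expansion_model.predict_queries(doc))

def mono(query, docs, pointwise_model):
    """Pointwise reranking: score each candidate independently."""
    return sorted(docs, key=lambda d: pointwise_model.score(query, d),
                  reverse=True)

def duo(query, docs, pairwise_model, top_k=10):
    """Pairwise reranking: aggregate head-to-head preferences over the top-k."""
    head = docs[:top_k]
    wins = {d: sum(pairwise_model.prefer(query, d, other)
                   for other in head if other is not d)
            for d in head}
    return sorted(head, key=wins.get, reverse=True) + docs[top_k:]
```

First-stage retrieval over the expanded index feeds `mono`, whose top results feed `duo`.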
This short document describes a traditional IR system that achieved MRR@100 equal to 0.298 on the MS MARCO Document Ranking leaderboard (on 2020-12-06).
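For reference, MRR@100 averages the reciprocal rank of the first relevant document (within the top 100) over all queries; a query with no relevant document in the top 100 contributes 0:

```python
def mrr_at_k(ranked_ids, relevant_ids, k=100):
    """Reciprocal rank of the first relevant document within the top k."""
    for rank, doc_id in enumerate(ranked_ids[:k], start=1):
        if doc_id in relevant_ids:
            return 1.0 / rank
    return 0.0
```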
Ranking tasks are usually based on the text of the main body of the page and the actions (clicks) of users on the page.
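A minimal sketch of blending the two signal families, assuming a simple linear combination of a text-relevance score with a click-through rate (the weights are illustrative):

```python
def page_score(text_score, clicks, impressions, w_text=0.7, w_click=0.3):
    """Hypothetical blend of body-text relevance and user-click signals."""
    ctr = clicks / impressions if impressions else 0.0
    return w_text * text_score + w_click * ctr
```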