Passage Re-Ranking

17 papers with code • 2 benchmarks • 2 datasets

Passage re-ranking is the task of scoring and re-ordering a set of candidate passages retrieved by a first-stage system for an input query, so that the most relevant passages rise to the top.
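The pipeline can be sketched as follows. This is a minimal illustration, not any paper's method: a first-stage retriever has already produced candidates, and a scorer re-orders them. Here the scorer is a toy lexical-overlap function; in the papers below it is a neural model such as a BERT cross-encoder.

```python
def overlap_score(query: str, passage: str) -> float:
    """Toy relevance score: fraction of query terms that appear in the passage."""
    q_terms = set(query.lower().split())
    p_terms = set(passage.lower().split())
    return len(q_terms & p_terms) / max(len(q_terms), 1)

def rerank(query: str, passages: list) -> list:
    """Score every first-stage candidate and sort best-first."""
    return sorted(passages, key=lambda p: overlap_score(query, p), reverse=True)

candidates = [
    "Re-ranking improves retrieval quality.",
    "The capital of France is Paris.",
    "Paris is known for the Eiffel Tower.",
]
print(rerank("capital of France", candidates)[0])
# → "The capital of France is Paris."
```

Swapping `overlap_score` for a learned model is the only change needed to turn this into a neural re-ranker; the surrounding retrieve-then-rerank structure stays the same.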

Most implemented papers

Passage Re-ranking with BERT

nyu-dl/dl4marco-bert 13 Jan 2019

Recently, neural models pretrained on a language modeling task, such as ELMo (Peters et al., 2018), OpenAI GPT (Radford et al., 2018), and BERT (Devlin et al., 2018), have achieved impressive results on various natural language processing tasks such as question-answering and natural language inference.

Document Expansion by Query Prediction

nyu-dl/dl4ir-doc2query 17 Apr 2019

One technique to improve the retrieval effectiveness of a search engine is to expand documents with terms that are related or representative of the documents' content. From the perspective of a question answering system, this might comprise questions the document can potentially answer.
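The expansion step can be sketched in a few lines. This is a hedged illustration of the idea only: `predict_queries` stands in for a trained sequence-to-sequence model (the paper's doc2query model), and is a hypothetical stub here, not the actual code from `nyu-dl/dl4ir-doc2query`.

```python
def predict_queries(document: str) -> list:
    # Hypothetical stub; a real system samples questions from a trained
    # seq2seq model conditioned on the document text.
    return ["what is the capital of france"]

def expand_document(document: str, n_queries: int = 3) -> str:
    """Append predicted queries to the document before indexing, so that
    term-matching retrieval can find it via vocabulary the original lacks."""
    queries = predict_queries(document)[:n_queries]
    return document + " " + " ".join(queries)

doc = "Paris has been France's seat of government for centuries."
print(expand_document(doc))
```

The expanded text is what gets indexed; queries like "capital of france" now match the document even though the word "capital" never appears in the original passage.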

Dealing with Typos for BERT-based Passage Retrieval and Ranking

ielab/typos-aware-bert EMNLP 2021

Our experimental results on the MS MARCO passage ranking dataset show that, with our proposed typos-aware training, DR and BERT re-ranker can become robust to typos in queries, resulting in significantly improved effectiveness compared to models trained without appropriately accounting for typos.

Few-shot Reranking for Multi-hop QA via Language Model Prompting

mukhal/promptrank 25 May 2022

To alleviate the need for a large number of labeled question-document pairs for retriever training, we propose PromptRank, which relies on large language model prompting for multi-hop path reranking.
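Prompt-based reranking can be sketched roughly as follows. This is in the spirit of the approach, not PromptRank's actual prompts or scoring: each candidate evidence path is embedded in an instruction prompt, and the language model's preference serves as the ranking signal. `llm_yes_probability` is a hypothetical stand-in for a real LLM call.

```python
def build_prompt(question: str, path: str) -> str:
    """Wrap a question and a candidate evidence path in a yes/no instruction."""
    return (f"Question: {question}\n"
            f"Evidence: {path}\n"
            "Does this evidence help answer the question? Answer yes or no.")

def llm_yes_probability(prompt: str) -> float:
    # Hypothetical stub for an LLM scoring call; here it just rewards
    # word overlap between the question line and the evidence line.
    lines = prompt.split("\n")
    q_words = set(lines[0].lower().split())
    p_words = set(lines[1].lower().split())
    return min(1.0, 0.1 * len(q_words & p_words))

def rerank_paths(question: str, paths: list) -> list:
    """Order candidate paths by the model's preference for 'yes'."""
    return sorted(paths,
                  key=lambda p: llm_yes_probability(build_prompt(question, p)),
                  reverse=True)
```

Because the ranking signal comes from prompting a pretrained model rather than from a trained retriever, the approach needs few or no labeled question-document pairs.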

An Updated Duet Model for Passage Re-ranking

dfcf93/MSMARCO 18 Mar 2019

We propose several small modifications to Duet, a deep neural ranking model, and evaluate the updated model on the MS MARCO passage ranking task.

Mitigating the Position Bias of Transformer Models in Passage Re-Ranking

sebastian-hofstaetter/transformer-kernel-ranking 18 Jan 2021

In this work, we analyze position bias in the datasets and the contextualized representations, and its effect on retrieval results.

Societal Biases in Retrieved Contents: Measurement Framework and Adversarial Mitigation for BERT Rankers

CPJKU/information_retrieval_fairness_debiasing 28 Apr 2021

In this work, we first provide a novel framework to measure the fairness in the retrieved text contents of ranking models.

Exploiting Sentence-Level Representations for Passage Ranking

mrjleo/ranking-models 14 Jun 2021

Recently, pre-trained contextual models, such as BERT, have been shown to perform well on language-related tasks.

A Modern Perspective on Query Likelihood with Deep Generative Retrieval Models

CPJKU/DeepGenIR 25 Jun 2021

In contrast to the matching paradigm, the probabilistic nature of generative rankers readily offers a fine-grained measure of uncertainty.
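The query-likelihood paradigm the paper revisits can be illustrated with its classical form: rank each passage by the probability that a unigram language model estimated from it would generate the query. The sketch below is textbook Dirichlet-smoothed QL, not the paper's deep generative ranker.

```python
import math
from collections import Counter

def ql_score(query, passage, collection_counts, collection_len, mu=2000.0):
    """log P(query | passage LM) under a Dirichlet-smoothed unigram model."""
    tf = Counter(passage.lower().split())
    dlen = sum(tf.values())
    score = 0.0
    for term in query.lower().split():
        # Background collection probability, with a tiny floor for unseen terms.
        p_coll = collection_counts.get(term, 0.5) / collection_len
        score += math.log((tf.get(term, 0) + mu * p_coll) / (dlen + mu))
    return score

passages = ["paris is the capital of france",
            "berlin is the capital of germany"]
coll = Counter(" ".join(passages).split())
clen = sum(coll.values())
best = max(passages, key=lambda p: ql_score("capital of france", p, coll, clen))
print(best)  # the France passage scores higher
```

A deep generative ranker replaces the unigram model with a neural sequence model of P(query | passage); as the excerpt notes, the resulting probabilities also carry a fine-grained measure of uncertainty that matching-based scores lack.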