Passage Re-Ranking
18 papers with code • 2 benchmarks • 2 datasets
Passage re-ranking is the task of scoring and re-ranking a collection of retrieved documents based on an input query.
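The pipeline described above can be sketched in a few lines: a first-stage retriever returns candidate passages, and a scoring function orders them by estimated relevance to the query. The `overlap_score` function below is a toy lexical stand-in; in the papers listed here the score would come from a neural model such as a BERT cross-encoder.

```python
def rerank(query, passages, score_fn):
    """Re-rank retrieved passages by descending relevance score."""
    scored = [(score_fn(query, p), p) for p in passages]
    scored.sort(key=lambda sp: sp[0], reverse=True)
    return [p for _, p in scored]

def overlap_score(query, passage):
    # Toy lexical score for illustration only; a neural re-ranker
    # (e.g., a BERT cross-encoder over the query-passage pair)
    # would replace this function in practice.
    q_tokens = set(query.lower().split())
    p_tokens = set(passage.lower().split())
    return len(q_tokens & p_tokens) / (len(q_tokens) or 1)

candidates = [
    "the stock market rose today",
    "cats are mammals",
    "mammals include cats and dogs",
]
ranked = rerank("are cats mammals", candidates, overlap_score)
```

Only the scoring function changes between methods; the sort-by-score step is common to nearly all re-ranking approaches surveyed on this page.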
Most implemented papers
Passage Re-ranking with BERT
Recently, neural models pretrained on a language modeling task, such as ELMo (Peters et al., 2018), OpenAI GPT (Radford et al., 2018), and BERT (Devlin et al., 2018), have achieved impressive results on various natural language processing tasks such as question-answering and natural language inference.
Document Expansion by Query Prediction
One technique to improve the retrieval effectiveness of a search engine is to expand documents with terms that are related or representative of the documents' content. From the perspective of a question answering system, this might comprise questions the document can potentially answer.
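The expansion step described above can be sketched as follows. The `predict_queries` callable is a hypothetical stand-in for a trained sequence-to-sequence model that generates questions a document might answer; the predicted queries are simply appended to the document text before indexing, so term-matching retrieval can hit the vocabulary of likely queries.

```python
def expand_document(doc_text, predict_queries):
    """Append model-predicted queries to a document before indexing.

    `predict_queries` stands in for a trained seq2seq query-prediction
    model; here it is any callable mapping text -> list of query strings.
    """
    predicted = predict_queries(doc_text)
    return doc_text + " " + " ".join(predicted)

def toy_predictor(doc_text):
    # Hypothetical predictor for illustration: a real system would use
    # a neural model trained on (query, relevant document) pairs.
    if "mammal" in doc_text:
        return ["are cats mammals", "what animals are mammals"]
    return []

expanded = expand_document("cats and dogs are mammals", toy_predictor)
```

Because expansion happens offline at indexing time, it adds no query-time latency, which is the main practical appeal of this technique.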
Dealing with Typos for BERT-based Passage Retrieval and Ranking
Our experimental results on the MS MARCO passage ranking dataset show that, with our proposed typos-aware training, DR and BERT re-ranker can become robust to typos in queries, resulting in significantly improved effectiveness compared to models trained without appropriately accounting for typos.
Few-shot Reranking for Multi-hop QA via Language Model Prompting
To alleviate the need for a large number of labeled question-document pairs for retriever training, we propose PromptRank, which relies on prompting large language models for multi-hop path reranking.
An Updated Duet Model for Passage Re-ranking
We propose several small modifications to Duet---a deep neural ranking model---and evaluate the updated model on the MS MARCO passage ranking task.
Mitigating the Position Bias of Transformer Models in Passage Re-Ranking
In this work we analyze position bias on datasets, the contextualized representations, and their effect on retrieval results.
Societal Biases in Retrieved Contents: Measurement Framework and Adversarial Mitigation for BERT Rankers
In this work, we first provide a novel framework to measure the fairness in the retrieved text contents of ranking models.
Exploiting Sentence-Level Representations for Passage Ranking
Recently, pre-trained contextual models, such as BERT, have been shown to perform well in language-related tasks.
A Modern Perspective on Query Likelihood with Deep Generative Retrieval Models
In contrast to the matching paradigm, the probabilistic nature of generative rankers readily offers a fine-grained measure of uncertainty.
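The query likelihood idea referenced above scores a passage by the probability it assigns to the query, i.e. log P(query | passage). The sketch below uses a smoothed unigram language model built from the passage as a toy stand-in; a deep generative ranker would instead sum token-by-token log-probabilities from a neural sequence model conditioned on the passage. The `smoothing` and `vocab_size` values are illustrative assumptions, not from the paper.

```python
import math

def query_log_likelihood(query, passage, smoothing=0.1, vocab_size=10000):
    """Score a passage as log P(query | passage) under a smoothed
    unigram model of the passage (toy stand-in for a deep generative
    ranker, which would use a neural model's token probabilities)."""
    tokens = passage.lower().split()
    counts = {}
    for t in tokens:
        counts[t] = counts.get(t, 0) + 1
    total = len(tokens)
    score = 0.0
    for q in query.lower().split():
        # Additive smoothing so unseen query terms get nonzero probability.
        p = (counts.get(q, 0) + smoothing) / (total + smoothing * vocab_size)
        score += math.log(p)
    return score

relevant = query_log_likelihood("cats mammals", "cats are mammals")
irrelevant = query_log_likelihood("cats mammals", "the stock market rose")
```

Because the score is a sum of per-token log-probabilities, its spread across passages gives the fine-grained uncertainty signal the abstract alludes to.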
Fast Passage Re-ranking with Contextualized Exact Term Matching and Efficient Passage Expansion
BERT-based information retrieval models are expensive, in both time (query latency) and computational resources (energy, hardware cost), making many of these models impractical especially under resource constraints.