Passage Re-Ranking
17 papers with code • 2 benchmarks • 2 datasets
Passage re-ranking is the task of scoring and re-ranking a collection of retrieved documents based on an input query.
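In code, the task boils down to assigning each retrieved passage a query-dependent score and sorting by it. A minimal sketch (not tied to any paper below, with a deliberately simple term-overlap score standing in for a learned model):

```python
def rerank(query, passages, score_fn):
    """Sort retrieved passages by descending query-dependent score."""
    scored = [(score_fn(query, p), p) for p in passages]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [p for _, p in scored]

def term_overlap(query, passage):
    """Toy relevance score: fraction of query terms present in the passage."""
    q_terms = set(query.lower().split())
    p_terms = set(passage.lower().split())
    return len(q_terms & p_terms) / max(len(q_terms), 1)

passages = [
    "BM25 is a classic sparse retrieval model.",
    "Passage re-ranking reorders retrieved passages for a query.",
    "Transformers are neural sequence models.",
]
ranked = rerank("passage re-ranking with a query", passages, term_overlap)
```

The papers below replace `term_overlap` with far stronger scorers (BERT cross-encoders, LLM raters), but the score-then-sort skeleton is the same.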
Latest papers with no code
PaRaDe: Passage Ranking using Demonstrations with Large Language Models
Recent studies show that large language models (LLMs) can be instructed to effectively perform zero-shot passage re-ranking, in which the results of a first stage retrieval method, such as BM25, are rated and reordered to improve relevance.
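The retrieve-then-rerank pattern these studies build on can be sketched as follows; `llm_rate` is a hypothetical stand-in for a call to an instruction-following LLM, mocked here with a keyword-count heuristic:

```python
def llm_rate(query, passage):
    # Hypothetical stand-in for an LLM relevance rating; a real system would
    # prompt an instruction-following model and parse its returned score.
    return sum(passage.lower().count(t) for t in query.lower().split())

def rerank_first_stage(query, bm25_results, top_k=10):
    """Re-rank the top-k candidates from a first-stage retriever (e.g. BM25)."""
    candidates = bm25_results[:top_k]
    return sorted(candidates, key=lambda p: llm_rate(query, p), reverse=True)
```

The two-stage design matters for cost: the cheap first stage narrows millions of documents to `top_k` candidates, so the expensive scorer only runs a handful of times per query.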
Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking
To leverage reliable knowledge, we propose a novel knowledge graph distillation method and obtain a knowledge meta graph as a bridge between query and passage.
Quality and Cost Trade-offs in Passage Re-ranking Task
Transformer-based deep learning models achieved state-of-the-art results in the vast majority of NLP tasks, at the cost of increased computational complexity and high memory consumption.
Towards Robust Passage Re-Ranking Model by Mitigating Lexical Match Bias
While deep learning models can overcome the limitations of traditional machine learning algorithms that use hand-crafted features, recent studies have shown that these models often achieve high dataset-specific accuracy by exploiting several biases without understanding the deeper semantics of the intended task.
Text-to-Text Multi-view Learning for Passage Re-ranking
Recently, much progress in natural language processing has been driven by deep contextualized representations pretrained on large corpora.
Multi-Perspective Semantic Information Retrieval in the Biomedical Domain
Information Retrieval (IR) is the task of obtaining pieces of data (such as documents) that are relevant to a particular query or need from a large repository of information.
Learning-to-Rank with BERT in TF-Ranking
This paper describes a machine learning algorithm for document (re)ranking in which queries and documents are first encoded using BERT [1], and on top of that a learning-to-rank (LTR) model constructed with TF-Ranking (TFR) [2] is applied to further optimize ranking performance.
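The learning-to-rank objectives such a model is trained with are typically pairwise or listwise; as one illustrative example (not the specific TF-Ranking loss used in the paper), a pairwise hinge loss over candidate scores looks like this:

```python
def pairwise_hinge_loss(scores, labels, margin=1.0):
    """Pairwise hinge loss for learning-to-rank: for every pair where item i
    is more relevant than item j, penalize score_i not exceeding score_j
    by at least `margin`."""
    loss, pairs = 0.0, 0
    for s_i, y_i in zip(scores, labels):
        for s_j, y_j in zip(scores, labels):
            if y_i > y_j:  # i should rank above j
                loss += max(0.0, margin - (s_i - s_j))
                pairs += 1
    return loss / max(pairs, 1)
```

When the relevant passage already outscores every non-relevant one by the margin, the loss is zero; otherwise the gradient pushes the scores apart rather than toward absolute targets, which is what distinguishes LTR from plain classification.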
A Study of BERT for Non-Factoid Question-Answering under Passage Length Constraints
We study the use of BERT for non-factoid question-answering, focusing on the passage re-ranking task under varying passage lengths.
Investigating the Successes and Failures of BERT for Passage Re-Ranking
The bidirectional encoder representations from transformers (BERT) model has recently advanced the state-of-the-art in passage re-ranking.
A Study on Passage Re-ranking in Embedding based Unsupervised Semantic Search
State-of-the-art approaches for (embedding-based) unsupervised semantic search exploit either compositional similarity (of a query and a passage) or pair-wise word (or term) similarity (between the query and the passage).
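The two similarity families mentioned above can be sketched over toy word vectors: compositional similarity pools each text into a single embedding and compares once, while pair-wise similarity matches every query word against its best passage word (function names here are illustrative, not from the paper):

```python
import math

def cosine(u, v):
    """Cosine similarity between two plain-list vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def compositional_sim(query_vecs, passage_vecs):
    """Compositional: cosine between mean-pooled query and passage embeddings."""
    def mean_pool(vecs):
        return [sum(v[d] for v in vecs) / len(vecs) for d in range(len(vecs[0]))]
    return cosine(mean_pool(query_vecs), mean_pool(passage_vecs))

def pairwise_word_sim(query_vecs, passage_vecs):
    """Pair-wise: each query word matched to its most similar passage word,
    averaged over query words."""
    return sum(max(cosine(q, p) for p in passage_vecs)
               for q in query_vecs) / len(query_vecs)
```

Compositional scoring is cheaper (one vector per text), while pair-wise scoring preserves term-level matches that pooling can wash out; the trade-off between them is what this line of work studies.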