Passage Re-Ranking

17 papers with code • 2 benchmarks • 2 datasets

Passage re-ranking is the task of scoring and re-ordering a collection of retrieved passages with respect to an input query, typically as the second stage of a retrieve-then-rerank pipeline.
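
The standard neural recipe is a cross-encoder that scores each query–passage pair jointly and then sorts by score. Here is a minimal sketch using the sentence-transformers library; the MS MARCO checkpoint and the toy passages are illustrative choices, not tied to any specific paper below.

```python
# Minimal cross-encoder re-ranking sketch. Assumes sentence-transformers;
# the checkpoint is a public MS MARCO cross-encoder used only for illustration.
from sentence_transformers import CrossEncoder

def rerank(query, passages, model):
    """Score each (query, passage) pair jointly and sort passages by score."""
    scores = model.predict([(query, p) for p in passages])
    return sorted(zip(passages, scores), key=lambda x: x[1], reverse=True)

model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
ranked = rerank(
    "what causes tides",
    ["Tides are caused by the gravitational pull of the moon and sun.",
     "A tide pool is a shallow pool of seawater on a rocky shore."],
    model,
)
for passage, score in ranked:
    print(f"{score:.3f}  {passage}")
```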

Multi-Granularity Guided Fusion-in-Decoder

eunseongc/mgfid • 3 Apr 2024

In Open-domain Question Answering (ODQA), it is essential to discern relevant contexts as evidence and avoid spurious ones among retrieved results.

Adapting Language Models to Compress Contexts

princeton-nlp/autocompressors • 24 May 2023

Transformer-based language models (LMs) are powerful and widely applicable tools, but their usefulness is constrained by a finite context window and the high computational cost of processing long text documents.

Improving Conversational Passage Re-ranking with View Ensemble

cnclabs/codes.cs.sampling • 26 Apr 2023

This paper presents ConvRerank, a conversational passage re-ranker that employs a newly developed pseudo-labeling approach.

T2Ranking: A large-scale Chinese Benchmark for Passage Ranking

thuir/t2ranking • 7 Apr 2023

T2Ranking comprises more than 300K queries and over 2M unique passages from real-world search engines.

Few-shot Reranking for Multi-hop QA via Language Model Prompting

mukhal/promptrank • 25 May 2022

To alleviate the need for a large number of labeled question-document pairs for retriever training, we propose PromptRank, which relies on prompting large language models for multi-hop path reranking.
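
The general idea of prompting-based reranking can be sketched as follows: score each candidate path by the likelihood a causal LM assigns to the question conditioned on that path. The prompt template and the gpt2 checkpoint below are illustrative stand-ins, not PromptRank's exact setup.

```python
# Hedged sketch of LM-prompting-based path reranking (illustrative, not the
# paper's exact prompt or model). Scores a path by the average log-likelihood
# the LM assigns to the question given that path.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

@torch.no_grad()
def path_score(question: str, path: str) -> float:
    prompt = f"Passage: {path}\nQuestion:"
    prompt_len = tok(prompt, return_tensors="pt").input_ids.shape[1]
    full_ids = tok(prompt + " " + question, return_tensors="pt").input_ids
    labels = full_ids.clone()
    labels[:, :prompt_len] = -100  # only the question tokens contribute to the loss
    return -lm(full_ids, labels=labels).loss.item()  # higher = more likely question

paths = ["The Eiffel Tower is in Paris.", "Basalt is a volcanic rock."]
question = "Where is the Eiffel Tower?"
ranked = sorted(paths, key=lambda p: path_score(question, p), reverse=True)
```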

HLATR: Enhance Multi-stage Text Retrieval with Hybrid List Aware Transformer Reranking

Alibaba-NLP/HLATR • 21 May 2022

Existing text retrieval systems with state-of-the-art performance usually adopt a retrieve-then-reranking architecture due to the high computational cost of pre-trained language models and the large corpus size.
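
That two-stage pattern (cheap recall over the whole corpus, an expensive model over a shortlist) can be sketched generically; this is the standard pipeline rather than HLATR itself, and it assumes the rank_bm25 and sentence-transformers libraries with an illustrative checkpoint.

```python
# Generic retrieve-then-rerank sketch (not HLATR). BM25 provides cheap
# first-stage recall; a cross-encoder rescores only the top candidates.
from rank_bm25 import BM25Okapi
from sentence_transformers import CrossEncoder

corpus = [
    "Dense retrieval encodes queries and passages into vectors.",
    "BM25 is a classic lexical ranking function.",
    "Re-rankers rescore a small candidate list with a stronger model.",
]
bm25 = BM25Okapi([doc.lower().split() for doc in corpus])
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

def search(query, k_retrieve=100, k_final=2):
    # Stage 1: score the whole corpus with cheap lexical matching.
    scores = bm25.get_scores(query.lower().split())
    shortlist = sorted(range(len(corpus)), key=lambda i: scores[i], reverse=True)[:k_retrieve]
    # Stage 2: apply the expensive neural model only to the shortlist.
    rescored = reranker.predict([(query, corpus[i]) for i in shortlist])
    order = sorted(range(len(shortlist)), key=lambda j: rescored[j], reverse=True)
    return [corpus[shortlist[j]] for j in order[:k_final]]

print(search("what does a re-ranker do"))
```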

RocketQAv2: A Joint Training Method for Dense Passage Retrieval and Passage Re-ranking

paddlepaddle/rocketqa • EMNLP 2021

In this paper, we propose a novel joint training approach for dense passage retrieval and passage re-ranking.
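
One common way to couple the two models is a listwise-distillation style objective that aligns the retriever's score distribution over a shared candidate list with the re-ranker's; the sketch below shows that generic objective with made-up scores, not RocketQAv2's exact training recipe.

```python
# Generic listwise joint-training sketch (illustrative, not RocketQAv2's
# exact recipe): push the retriever's distribution over candidates toward
# the re-ranker's via a KL objective.
import torch
import torch.nn.functional as F

def joint_listwise_loss(retriever_scores, reranker_scores):
    """KL divergence between the two models' distributions over one candidate list."""
    p_rerank = F.softmax(reranker_scores, dim=-1)         # re-ranker as soft target
    log_p_retr = F.log_softmax(retriever_scores, dim=-1)  # retriever distribution
    return F.kl_div(log_p_retr, p_rerank, reduction="batchmean")

# Example: scores for one query over four candidate passages from each model.
retr = torch.tensor([[2.0, 1.0, 0.5, -1.0]], requires_grad=True)
rerank = torch.tensor([[3.0, 0.2, 0.1, -2.0]])
loss = joint_listwise_loss(retr, rerank)
loss.backward()  # gradients flow into the retriever's scores
```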

Dealing with Typos for BERT-based Passage Retrieval and Ranking

ielab/characterbert-dr • EMNLP 2021

Our experimental results on the MS MARCO passage ranking dataset show that, with our proposed typos-aware training, DR and BERT re-ranker can become robust to typos in queries, resulting in significantly improved effectiveness compared to models trained without appropriately accounting for typos.
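
Typos-aware training of this kind boils down to augmenting training queries with synthetic character-level noise. A minimal sketch, with an illustrative edit rate and edit set rather than the authors' exact configuration:

```python
# Illustrative query typo augmentation for typos-aware training; the 10%
# edit rate and the delete/duplicate/substitute operations are assumptions.
import random
import string

def typo_augment(query: str, rate: float = 0.1, rng=random) -> str:
    """Randomly delete, duplicate, or substitute letters to mimic user typos."""
    out = []
    for ch in query:
        if ch.isalpha() and rng.random() < rate:
            op = rng.choice(["delete", "duplicate", "substitute"])
            if op == "delete":
                continue
            if op == "duplicate":
                out.append(ch)
            else:  # substitute with a random lowercase letter
                ch = rng.choice(string.ascii_lowercase)
        out.append(ch)
    return "".join(out)

# During training, clean queries can be replaced with noisy variants on the fly:
print(typo_augment("dense passage retrieval"))
```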

Fast Passage Re-ranking with Contextualized Exact Term Matching and Efficient Passage Expansion

ielab/tilde • 19 Aug 2021

BERT-based information retrieval models are expensive in both time (query latency) and computational resources (energy, hardware cost), making many of these models impractical, especially under resource constraints.
