Passage Ranking

29 papers with code • 1 benchmark • 1 dataset

Passage ranking is the task of ordering a set of candidate passages by their estimated relevance to a query, typically by re-ranking the output of a first-stage retriever; the MS MARCO passage ranking dataset is a widely used benchmark.

Most implemented papers

RepBERT: Contextualized Text Embeddings for First-Stage Retrieval

jingtaozhan/RepBERT-Index 28 Jun 2020

Although exact term match between queries and documents is the dominant method to perform first-stage retrieval, we propose a different approach, called RepBERT, to represent documents and queries with fixed-length contextualized embeddings.
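Once passages and queries are mapped to fixed-length vectors, first-stage retrieval reduces to an inner-product search over the passage embeddings. A minimal sketch of that ranking step (the toy two-dimensional embeddings below are illustrative; in RepBERT they come from a BERT encoder):

```python
import numpy as np

def rank_by_inner_product(query_emb: np.ndarray, doc_embs: np.ndarray) -> np.ndarray:
    """Return document indices sorted by descending inner-product score,
    as in dense first-stage retrieval with fixed-length embeddings."""
    scores = doc_embs @ query_emb          # one relevance score per document
    return np.argsort(-scores)

# toy fixed-length embeddings (stand-ins for mean-pooled BERT outputs)
docs = np.array([[0.1, 0.9], [0.8, 0.2], [0.4, 0.4]])
query = np.array([0.9, 0.1])
order = rank_by_inner_product(query, docs)
```

In practice the argsort is replaced by an approximate nearest-neighbour index, since scoring every passage exactly does not scale to full collections.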

Pseudo-Relevance Feedback for Multiple Representation Dense Retrieval

terrierteam/pyterrier_colbert 21 Jun 2021

In particular, based on the pseudo-relevant set of documents identified using a first-pass dense retrieval, we extract representative feedback embeddings (using KMeans clustering) -- while ensuring that these embeddings discriminate among passages (based on IDF) -- which are then added to the query representation.
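The steps above can be sketched end to end: cluster the token embeddings drawn from the pseudo-relevant passages, then keep only the centroids whose nearest token is discriminative (high IDF) before appending them to the query representation. This is a simplified sketch, not the paper's implementation: the function name `prf_expansion_embeddings` is hypothetical, and KMeans is hand-rolled as plain Lloyd iterations for self-containment.

```python
import numpy as np

def prf_expansion_embeddings(feedback_embs, idf, k=2, idf_threshold=1.0,
                             iters=10, seed=0):
    """Sketch of multi-representation PRF: KMeans over feedback token
    embeddings, then IDF-based filtering of the resulting centroids."""
    rng = np.random.default_rng(seed)
    centroids = feedback_embs[rng.choice(len(feedback_embs), k, replace=False)]
    for _ in range(iters):                         # plain Lloyd iterations
        dists = np.linalg.norm(feedback_embs[:, None] - centroids[None], axis=-1)
        assign = dists.argmin(axis=1)
        for c in range(k):
            if (assign == c).any():
                centroids[c] = feedback_embs[assign == c].mean(axis=0)
    # map each centroid to its nearest feedback token; keep discriminative ones
    dists = np.linalg.norm(centroids[:, None] - feedback_embs[None], axis=-1)
    nearest = dists.argmin(axis=1)
    return centroids[idf[nearest] >= idf_threshold]

# two stopword-like tokens (low IDF) and two content tokens (high IDF)
feedback = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
idf = np.array([0.1, 0.1, 3.0, 3.0])
expansion = prf_expansion_embeddings(feedback, idf, k=2, idf_threshold=1.0)
```

Only the centroid anchored by high-IDF tokens survives the filter, which is the point of the IDF condition: expansion embeddings should discriminate among passages rather than echo common terms.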

Document Ranking with a Pretrained Sequence-to-Sequence Model

castorini/pygaggle Findings of the Association for Computational Linguistics 2020

We investigate this observation further by varying target words to probe the model's use of latent knowledge.
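In this line of work the sequence-to-sequence model is prompted with the query and document and asked to generate a target word such as "true" or "false"; the relevance score is the probability of "true" renormalized against "false". A sketch of that scoring step, where the `logits` dict stands in for the model's first-step output logits (an assumption; the real model produces logits over its full vocabulary):

```python
import math

def relevance_score(logits: dict) -> float:
    """Score a query-document pair from target-word logits: softmax mass of
    'true' renormalized against 'false', ignoring the rest of the vocabulary."""
    p_true = math.exp(logits["true"])
    p_false = math.exp(logits["false"])
    return p_true / (p_true + p_false)

score = relevance_score({"true": 2.0, "false": -1.0, "the": 0.5})
```

Renormalizing over just the two target words makes scores comparable across documents even though the model's full distribution covers the whole vocabulary.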

Dealing with Typos for BERT-based Passage Retrieval and Ranking

ielab/typos-aware-bert EMNLP 2021

Our experimental results on the MS MARCO passage ranking dataset show that, with our proposed typos-aware training, DR and BERT re-ranker can become robust to typos in queries, resulting in significantly improved effectiveness compared to models trained without appropriately accounting for typos.
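Typos-aware training amounts to corrupting query terms during training so the model learns to be robust to noisy input. A minimal augmentation sketch, assuming single character-level edits (deletion, transposition, substitution); the function names and the per-term corruption probability are illustrative, not the paper's exact scheme:

```python
import random

def inject_typo(word: str, rng: random.Random) -> str:
    """Corrupt a word with one character-level edit."""
    if len(word) < 2:
        return word
    i = rng.randrange(len(word) - 1)
    op = rng.choice(["delete", "swap", "substitute"])
    if op == "delete":
        return word[:i] + word[i + 1:]
    if op == "swap":
        return word[:i] + word[i + 1] + word[i] + word[i + 2:]
    return word[:i] + rng.choice("abcdefghijklmnopqrstuvwxyz") + word[i + 1:]

def augment_query(query: str, rng: random.Random, p: float = 0.5) -> str:
    """Independently corrupt each query term with probability p."""
    return " ".join(w if rng.random() > p else inject_typo(w, rng)
                    for w in query.split())
```

During training, the augmented query replaces the clean one for a fraction of examples, so both the dense retriever and the BERT re-ranker see realistic misspellings.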

Learning to Rank in Generative Retrieval

liyongqi67/ltrgr 27 Jun 2023

However, only learning to generate is insufficient for generative retrieval.
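The missing piece is a ranking objective on top of the generation objective: the model should assign a higher autoregressive score to the identifier of a relevant passage than to that of a negative one. A margin (hinge) ranking loss sketch, where the scores are placeholders for sequence log-likelihoods (an assumption about the exact loss form used):

```python
def margin_rank_loss(pos_score: float, neg_score: float,
                     margin: float = 1.0) -> float:
    """Penalize the model whenever the negative passage's generation score
    comes within `margin` of the positive passage's score."""
    return max(0.0, margin - (pos_score - neg_score))
```

When the positive passage is scored well ahead of the negative, the loss is zero and only the generation objective drives learning; otherwise the rank term pushes the two scores apart.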

An Updated Duet Model for Passage Re-ranking

dfcf93/MSMARCO 18 Mar 2019

We propose several small modifications to Duet---a deep neural ranking model---and evaluate the updated model on the MS MARCO passage ranking task.

CROWN: Conversational Passage Ranking by Reasoning over Word Networks

magkai/CROWN 7 Nov 2019

Information needs around a topic often cannot be satisfied in a single turn; users typically ask follow-up questions referring to the same theme, and a system must be capable of understanding the conversational context of a request to retrieve correct answers.
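CROWN scores passages by reasoning over a word network: passages in which terms from the conversational context co-occur close together are rewarded. A heavily simplified sketch of that proximity signal; the real system weights nodes and edges of a corpus-estimated word network, which is omitted here (the function name and uniform pair weight are assumptions):

```python
from itertools import combinations

def proximity_score(passage: str, context_terms: set, window: int = 3) -> float:
    """Count pairs of distinct context terms appearing within `window`
    tokens of each other in the passage."""
    tokens = passage.lower().split()
    positions = [(i, t) for i, t in enumerate(tokens) if t in context_terms]
    return sum(1.0 for (i, a), (j, b) in combinations(positions, 2)
               if a != b and abs(i - j) <= window)
```

Because the score needs only term positions and a context term set accumulated over turns, the approach stays unsupervised: no relevance labels are required.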

TU Wien @ TREC Deep Learning '19 -- Simple Contextualization for Re-ranking

thunlp/ReInfoSelect 3 Dec 2019

The usage of neural network models puts multiple objectives in conflict with each other: Ideally we would like to create a neural model that is effective, efficient, and interpretable at the same time.

Learning Contextualized Document Representations for Healthcare Answer Retrieval

sebastianarnold/cdv 3 Feb 2020

Our model leverages a dual encoder architecture with hierarchical LSTM layers and multi-task training to encode the position of clinical entities and aspects alongside the document discourse.

Conversational Question Answering over Passages by Leveraging Word Proximity Networks

magkai/CROWN 27 Apr 2020

In this work, we demonstrate CROWN (Conversational passage ranking by Reasoning Over Word Networks): an unsupervised yet effective system for conversational QA with passage responses, that supports several modes of context propagation over multiple turns.