Reranking

213 papers with code • 0 benchmarks • 1 dataset

Reranking reorders an initial list of candidates produced by a fast first-stage retriever (e.g., BM25 or a dense bi-encoder) using a more expressive but slower model, such as a cross-encoder or a large language model, so that the most relevant results appear at the top.

Most implemented papers

Facebook FAIR's WMT19 News Translation Task Submission

huggingface/transformers WS 2019

This paper describes Facebook FAIR's submission to the WMT19 shared news translation task.

Approximate Nearest Neighbor Negative Contrastive Learning for Dense Text Retrieval

microsoft/ANCE ICLR 2021

In this paper, we identify that the main bottleneck is in the training mechanisms, where the negative instances used in training are not representative of the irrelevant documents in testing.
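The key step is to mine negatives from an approximate-nearest-neighbor index built over the retriever's own embeddings instead of sampling random or BM25 negatives. A minimal sketch of that mining step is below; the random vectors, the single relevant document per query, and the flat FAISS index are placeholder assumptions rather than the paper's actual training setup, which refreshes the index asynchronously as training progresses.

```python
import numpy as np
import faiss  # pip install faiss-cpu

rng = np.random.default_rng(0)
dim, n_docs, n_queries, k = 128, 10_000, 32, 50

# Stand-ins for encoder outputs; in ANCE these come from the current checkpoint.
doc_vecs = rng.standard_normal((n_docs, dim)).astype("float32")
query_vecs = rng.standard_normal((n_queries, dim)).astype("float32")
positives = rng.integers(0, n_docs, size=n_queries)  # one relevant doc per query (assumption)

# Build an inner-product index over the document embeddings.
index = faiss.IndexFlatIP(dim)
index.add(doc_vecs)

# Retrieve the current top-k for each query and keep the non-relevant hits
# as hard negatives for the next round of contrastive training.
_, topk = index.search(query_vecs, k)
hard_negatives = [
    [doc_id for doc_id in hits if doc_id != pos][:10]
    for hits, pos in zip(topk, positives)
]
print(hard_negatives[0])
```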

MTEB: Massive Text Embedding Benchmark

embeddings-benchmark/mteb 13 Oct 2022

MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages.
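For reference, the snippet below shows the common pattern for evaluating an embedding model on one of MTEB's reranking datasets with the mteb library; the model checkpoint and task name are examples, and the exact API may differ between library versions.

```python
from mteb import MTEB
from sentence_transformers import SentenceTransformer

# Any embedding model with an .encode() method works; this checkpoint is just an example.
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Evaluate on one of MTEB's reranking tasks (task names may vary across versions).
evaluation = MTEB(tasks=["AskUbuntuDupQuestions"])
results = evaluation.run(model, output_folder="results/all-MiniLM-L6-v2")
print(results)
```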

The Expando-Mono-Duo Design Pattern for Text Ranking with Pretrained Sequence-to-Sequence Models

castorini/rank_llm 14 Jan 2021

We propose a design pattern for tackling text ranking problems, dubbed "Expando-Mono-Duo", that has been empirically validated for a number of ad hoc retrieval tasks in different domains.
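The "Mono" stage scores each query-document pair independently with a sequence-to-sequence model before the pairwise "Duo" stage refines the top of the list. Below is a hedged sketch of that pointwise step using a publicly released monoT5 checkpoint; the checkpoint name and prompt template are assumptions based on the authors' released models, and the "Expando" document-expansion and "Duo" stages are omitted.

```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

# monoT5-style pointwise reranker; checkpoint name assumed from the public Castorini release.
tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("castorini/monot5-base-msmarco").eval()

# T5 sentencepiece tokens used as relevance labels.
true_id = tokenizer.convert_tokens_to_ids("▁true")
false_id = tokenizer.convert_tokens_to_ids("▁false")

def mono_score(query: str, passage: str) -> float:
    """Pointwise ('Mono') relevance score: P(true | 'Query: ... Document: ... Relevant:')."""
    text = f"Query: {query} Document: {passage} Relevant:"
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    decoder_input_ids = torch.tensor([[model.config.decoder_start_token_id]])
    with torch.no_grad():
        logits = model(**inputs, decoder_input_ids=decoder_input_ids).logits[0, -1]
    probs = torch.softmax(logits[[false_id, true_id]], dim=0)
    return probs[1].item()

query = "what is reranking in information retrieval"
passages = [
    "Reranking reorders candidates from a first-stage retriever with a stronger model.",
    "The 2019 news translation task featured several language pairs.",
]
ranked = sorted(passages, key=lambda p: mono_score(query, p), reverse=True)
print(ranked[0])
```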

Faster R-CNN Features for Instance Search

imatge-upc/retrieval-2016-deepvision 29 Apr 2016

This work explores the suitability of image- and region-wise representations pooled from an object detection CNN, such as Faster R-CNN, for instance retrieval.
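As a rough modern analogue (not the paper's original pipeline), the sketch below pools a global descriptor from the backbone of torchvision's Faster R-CNN and ranks database images by cosine similarity; the region-wise (RoI-pooled) representations and spatial reranking described in the paper are omitted.

```python
import torch
import torch.nn.functional as F
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Detection model whose backbone features are reused as a global image descriptor.
model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

def image_descriptor(image: torch.Tensor) -> torch.Tensor:
    """Globally pool one FPN feature map of a [3, H, W] image into an L2-normalized vector."""
    with torch.no_grad():
        feats = model.backbone(image.unsqueeze(0))   # dict of FPN levels: '0'..'3', 'pool'
    pooled = F.adaptive_avg_pool2d(feats["pool"], 1).flatten(1)
    return F.normalize(pooled, dim=1).squeeze(0)

# Toy query/database tensors; real usage would load images and apply the model's normalization.
query = torch.rand(3, 224, 224)
database = [torch.rand(3, 224, 224) for _ in range(5)]
scores = torch.stack([image_descriptor(query) @ image_descriptor(img) for img in database])
print(scores.argsort(descending=True))  # database images ranked by cosine similarity
```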

Pseudo-Relevance Feedback for Multiple Representation Dense Retrieval

terrierteam/pyterrier_colbert 21 Jun 2021

In particular, based on the pseudo-relevant set of documents identified using a first-pass dense retrieval, we extract representative feedback embeddings using KMeans clustering, ensure that these embeddings discriminate among passages (based on IDF), and then add them to the query representation.
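A schematic sketch of that feedback step is below, using random arrays in place of real ColBERT token embeddings; the cluster count, the number of kept expansion embeddings, and the way each centroid's IDF is obtained (here, the maximum IDF over its assigned tokens) are simplifications of the paper's method, and the weighting of the expansion embeddings during scoring is omitted.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
dim = 128

# Stand-ins for ColBERT outputs: per-token embeddings of the query and of the
# top-ranked (pseudo-relevant) passages from the first-pass dense retrieval.
query_embs = rng.standard_normal((32, dim)).astype("float32")
feedback_token_embs = rng.standard_normal((300, dim)).astype("float32")
feedback_token_idf = rng.uniform(0.5, 8.0, size=300)  # IDF of the token behind each embedding

# Cluster the feedback token embeddings and keep only centroids that correspond
# to discriminative (high-IDF) tokens.
kmeans = KMeans(n_clusters=24, n_init=10, random_state=0).fit(feedback_token_embs)
nearest = kmeans.predict(feedback_token_embs)
centroid_idf = np.array([
    feedback_token_idf[nearest == c].max() if (nearest == c).any() else 0.0
    for c in range(kmeans.n_clusters)
])
keep = np.argsort(centroid_idf)[-10:]              # 10 most discriminative expansion embeddings
expanded_query = np.vstack([query_embs, kmeans.cluster_centers_[keep]])
print(expanded_query.shape)                        # original query tokens + feedback embeddings
```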

A Temporal Variational Model for Story Generation

dwlmt/knowledgeable-stories 14 Sep 2021

Recent language models can generate interesting and grammatically correct text in story generation but often lack plot development and long-term coherence.

CLIFF: Contrastive Learning for Improving Faithfulness and Factuality in Abstractive Summarization

makcedward/nlpaug EMNLP 2021

We study generating abstractive summaries that are faithful and factually consistent with the given articles.

RankVicuna: Zero-Shot Listwise Document Reranking with Open-Source Large Language Models

castorini/rank_llm 26 Sep 2023

Researchers have successfully applied large language models (LLMs) such as ChatGPT to reranking in an information retrieval context, but to date, such work has mostly been built on proprietary models hidden behind opaque API endpoints.
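Listwise reranking with an LLM amounts to showing the model the query plus numbered candidate passages and parsing the permutation it returns. The sketch below illustrates that protocol with a hard-coded stand-in for the model's reply; the prompt wording approximates the RankGPT/RankVicuna style and is not the repository's verbatim template.

```python
import re

def build_listwise_prompt(query: str, passages: list[str]) -> str:
    """Listwise prompt: show numbered passages and ask the model for an ordering."""
    numbered = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        f"I will provide you with {len(passages)} passages, each indicated by a numerical "
        f"identifier []. Rank the passages based on their relevance to the search query: {query}.\n\n"
        f"{numbered}\n\n"
        f"Search Query: {query}.\n"
        "Rank the passages above based on relevance. The output format should be [] > [], "
        "e.g., [4] > [2]. Only respond with the ranking."
    )

def parse_permutation(output: str, num_passages: int) -> list[int]:
    """Parse '[2] > [1] > [3]' into 0-based indices, appending any identifiers the model dropped."""
    seen = []
    for match in re.findall(r"\[(\d+)\]", output):
        idx = int(match) - 1
        if 0 <= idx < num_passages and idx not in seen:
            seen.append(idx)
    return seen + [i for i in range(num_passages) if i not in seen]

passages = [
    "BM25 is a classic sparse retrieval function.",
    "RankVicuna reranks passages listwise with an open-source LLM.",
    "The WMT19 task covered several translation directions.",
]
prompt = build_listwise_prompt("listwise reranking with open-source LLMs", passages)
# A real system would send the prompt to the LLM; this reply is a hard-coded stand-in.
fake_llm_reply = "[2] > [1] > [3]"
order = parse_permutation(fake_llm_reply, len(passages))
print([passages[i] for i in order])
```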