Search Results for author: Kai Hui

Found 21 papers, 9 papers with code

Generate, Filter, and Fuse: Query Expansion via Multi-Step Keyword Generation for Zero-Shot Neural Rankers

no code implementations • 15 Nov 2023 • Minghan Li, Honglei Zhuang, Kai Hui, Zhen Qin, Jimmy Lin, Rolf Jagerman, Xuanhui Wang, Michael Bendersky

We first show that directly applying existing expansion techniques to state-of-the-art neural rankers can degrade zero-shot performance.

Instruction Following, Language Modelling, +1
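
The title names the three stages. Below is a minimal Python sketch of such a generate-filter-fuse loop, assuming a hypothetical llm() call (the stub just returns canned keywords); the prompts and filtering rule are illustrative, not the paper's exact method.

```python
def llm(prompt: str) -> str:
    """Hypothetical LLM call; stubbed here with canned keywords."""
    return "neural ranker\nzero-shot retrieval\ndense retrieval"

def generate_keywords(query: str, rounds: int = 2) -> list[str]:
    """Multi-step generation: each round conditions on the keywords so far."""
    keywords: list[str] = []
    for _ in range(rounds):
        prompt = (f"Query: {query}\n"
                  f"Known keywords: {', '.join(keywords) or 'none'}\n"
                  "Suggest new related keywords, one per line:")
        keywords += [k.strip() for k in llm(prompt).splitlines() if k.strip()]
    return keywords

def filter_keywords(query: str, keywords: list[str]) -> list[str]:
    """Filter step: drop duplicates and terms already present in the query."""
    seen: set[str] = set()
    kept = []
    for k in keywords:
        key = k.lower()
        if key not in seen and key not in query.lower():
            seen.add(key)
            kept.append(k)
    return kept

def fuse(query: str, keywords: list[str]) -> str:
    """Fuse step: append the surviving keywords to the original query."""
    return query + " " + " ".join(keywords)

query = "zero-shot ranking"
print(fuse(query, filter_keywords(query, generate_keywords(query))))
```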

PaRaDe: Passage Ranking using Demonstrations with Large Language Models

no code implementations • 22 Oct 2023 • Andrew Drozdov, Honglei Zhuang, Zhuyun Dai, Zhen Qin, Razieh Rahimi, Xuanhui Wang, Dana Alon, Mohit Iyyer, Andrew McCallum, Donald Metzler, Kai Hui

Recent studies show that large language models (LLMs) can be instructed to effectively perform zero-shot passage re-ranking, in which the results of a first stage retrieval method, such as BM25, are rated and reordered to improve relevance.

Passage Ranking, Passage Re-Ranking, +6
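
A hedged sketch of the re-ranking setup the snippet describes: first-stage BM25 candidates are rated by an LLM and reordered. llm_relevance_score is a hypothetical stand-in; a real system would prepend demonstrations to the prompt and read the model's probability of a positive relevance judgment.

```python
def llm_relevance_score(query: str, passage: str) -> float:
    """Placeholder for the LLM call; a toy term-overlap heuristic
    stands in so the sketch runs end to end."""
    return len(set(query.lower().split()) & set(passage.lower().split()))

def rerank(query: str, bm25_passages: list[str]) -> list[str]:
    """Rate each first-stage candidate, then reorder by the score."""
    return sorted(bm25_passages,
                  key=lambda p: llm_relevance_score(query, p),
                  reverse=True)

candidates = ["BM25 is a ranking function.", "Paris is the capital of France."]
print(rerank("capital of france", candidates))
```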

Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels

no code implementations • 21 Oct 2023 • Honglei Zhuang, Zhen Qin, Kai Hui, Junru Wu, Le Yan, Xuanhui Wang, Michael Bendersky

We propose to incorporate fine-grained relevance labels into the prompt for LLM rankers, enabling them to better differentiate among documents with different levels of relevance to the query and thus derive a more accurate ranking.
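
A rough sketch of scoring with graded labels rather than a binary yes/no, assuming a hypothetical label_logprobs() call that in a real ranker would be read from the LLM's next-token logits; here the final score is the expected relevance grade.

```python
import math

LABELS = {"Not Relevant": 0, "Somewhat Relevant": 1, "Highly Relevant": 2}

def label_logprobs(query: str, doc: str) -> dict[str, float]:
    """Placeholder: real values come from the LLM's logits for each label."""
    return {"Not Relevant": -2.0, "Somewhat Relevant": -1.0, "Highly Relevant": -0.5}

def expected_relevance(query: str, doc: str) -> float:
    logps = label_logprobs(query, doc)
    z = sum(math.exp(lp) for lp in logps.values())            # softmax normalizer
    probs = {lbl: math.exp(lp) / z for lbl, lp in logps.items()}
    return sum(LABELS[lbl] * p for lbl, p in probs.items())   # expected grade

print(expected_relevance("capital of france", "Paris is the capital of France."))
```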

Large Language Models are Effective Text Rankers with Pairwise Ranking Prompting

no code implementations • 30 Jun 2023 • Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Le Yan, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, Michael Bendersky

Ranking documents using Large Language Models (LLMs) by directly feeding the query and candidate documents into the prompt is an interesting and practical problem.
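
One way such pairwise prompting can be aggregated is a comparison sort over the LLM's preferences; a minimal sketch, with llm_prefers() as a hypothetical stub. The paper's exact prompting and aggregation may differ (e.g., querying both passage orderings to reduce position bias).

```python
from functools import cmp_to_key

def llm_prefers(query: str, a: str, b: str) -> bool:
    """Placeholder: the real prompt shows the query plus Passage A and
    Passage B and asks the LLM to answer 'A' or 'B'; a toy overlap
    heuristic stands in here."""
    overlap = lambda d: len(set(query.split()) & set(d.split()))
    return overlap(a) >= overlap(b)

def rank(query: str, passages: list[str]) -> list[str]:
    cmp = lambda a, b: -1 if llm_prefers(query, a, b) else 1
    return sorted(passages, key=cmp_to_key(cmp))

print(rank("capital of france", ["bm25 ranking", "paris capital of france"]))
```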

How Does Generative Retrieval Scale to Millions of Passages?

no code implementations • 19 May 2023 • Ronak Pradeep, Kai Hui, Jai Gupta, Adam D. Lelkes, Honglei Zhuang, Jimmy Lin, Donald Metzler, Vinh Q. Tran

Popularized by the Differentiable Search Index, the emerging paradigm of generative retrieval re-frames the classic information retrieval problem into a sequence-to-sequence modeling task, forgoing external indices and encoding an entire document corpus within a single Transformer.

Information Retrieval, Passage Ranking, +1
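
A toy illustration of the seq2seq reframing: a single model is trained on both indexing pairs (document text -> docid) and retrieval pairs (query -> docid), so the corpus mapping lives in the model's parameters rather than an external index. The data layout is an assumption for illustration.

```python
corpus = {"doc42": "Paris is the capital of France.",
          "doc99": "BM25 is a lexical retrieval function."}
train_queries = [("capital of france", "doc42"), ("lexical retrieval", "doc99")]

def training_pairs():
    for docid, text in corpus.items():
        yield (text, docid)       # indexing task: memorize document -> docid
    for query, docid in train_queries:
        yield (query, docid)      # retrieval task: map query -> docid

for src, tgt in training_pairs():
    print(f"{src!r} -> {tgt}")
```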

RankT5: Fine-Tuning T5 for Text Ranking with Ranking Losses

no code implementations • 12 Oct 2022 • Honglei Zhuang, Zhen Qin, Rolf Jagerman, Kai Hui, Ji Ma, Jing Lu, Jianmo Ni, Xuanhui Wang, Michael Bendersky

Recently, substantial progress has been made in text ranking based on pretrained language models such as BERT.
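
One common choice of ranking loss for such fine-tuning is listwise softmax cross-entropy over per-document scores; a minimal sketch, not necessarily RankT5's exact loss configuration.

```python
import math

def softmax_ce_ranking_loss(scores: list[float], labels: list[int]) -> float:
    """scores: model outputs for one query's candidates; labels: 0/1 relevance.
    The loss pushes probability mass toward the relevant documents."""
    z = sum(math.exp(s) for s in scores)
    log_probs = [s - math.log(z) for s in scores]
    rel = [lp for lp, y in zip(log_probs, labels) if y == 1]
    return -sum(rel) / max(len(rel), 1)

print(softmax_ce_ranking_loss([2.0, 0.1, -1.0], [1, 0, 0]))
```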

Transformer Memory as a Differentiable Search Index

1 code implementation • 14 Feb 2022 • Yi Tay, Vinh Q. Tran, Mostafa Dehghani, Jianmo Ni, Dara Bahri, Harsh Mehta, Zhen Qin, Kai Hui, Zhe Zhao, Jai Gupta, Tal Schuster, William W. Cohen, Donald Metzler

In this paper, we demonstrate that information retrieval can be accomplished with a single Transformer, in which all information about the corpus is encoded in the parameters of the model.

Information Retrieval, Retrieval
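
With such a model, retrieval reduces to decoding docid strings; a sketch with seq2seq_beam_search() as a hypothetical stand-in for a trained decoder (real systems typically constrain decoding to valid docids, e.g. with a prefix trie).

```python
def seq2seq_beam_search(query: str, beam: int) -> list[str]:
    """Placeholder returning docids in decreasing model-score order."""
    return ["doc42", "doc99", "doc07"][:beam]

def retrieve(query: str, k: int = 2) -> list[str]:
    # No external index: the ranked docid list comes straight from decoding.
    return seq2seq_beam_search(query, beam=k)

print(retrieve("capital of france"))
```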

ExT5: Towards Extreme Multi-Task Scaling for Transfer Learning

3 code implementations • ICLR 2022 • Vamsi Aribandi, Yi Tay, Tal Schuster, Jinfeng Rao, Huaixiu Steven Zheng, Sanket Vaibhav Mehta, Honglei Zhuang, Vinh Q. Tran, Dara Bahri, Jianmo Ni, Jai Gupta, Kai Hui, Sebastian Ruder, Donald Metzler

Despite the recent success of multi-task learning and transfer learning for natural language processing (NLP), few works have systematically studied the effect of scaling up the number of tasks during pre-training.

Denoising, Multi-Task Learning

Transitivity, Time Consumption, and Quality of Preference Judgments in Crowdsourcing

no code implementations • 18 Apr 2021 • Kai Hui, Klaus Berberich

In this work, we collect judgments from multiple judges using a crowdsourcing platform and aggregate them to compare the two kinds of preference judgments in terms of transitivity, time consumption, and quality.
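
A small sketch of one such analysis: aggregate each document pair by majority vote, then test whether the aggregated preferences are transitive. The vote data below is illustrative.

```python
from collections import Counter
from itertools import permutations

# Three judges' picks per unordered document pair (illustrative data).
votes = {("a", "b"): ["a", "a", "b"],
         ("b", "c"): ["b", "b", "b"],
         ("a", "c"): ["c", "c", "a"]}

def beats(x: str, y: str) -> bool:
    """True if x wins the majority vote over y."""
    pair = (x, y) if (x, y) in votes else (y, x)
    return Counter(votes[pair]).most_common(1)[0][0] == x

def is_transitive() -> bool:
    for x, y, z in permutations("abc", 3):
        if beats(x, y) and beats(y, z) and not beats(x, z):
            return False
    return True

print(is_transitive())  # False here: a > b and b > c, yet c > a
```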

Co-BERT: A Context-Aware BERT Retrieval Model Incorporating Local and Query-specific Context

no code implementations • 17 Apr 2021 • Xiaoyang Chen, Kai Hui, Ben He, Xianpei Han, Le Sun, Zheng Ye

BERT-based text ranking models have dramatically advanced the state of the art in ad-hoc retrieval, yet most such models score individual query-document pairs independently.

Learning-To-Rank, Re-Ranking, +1

Simplified TinyBERT: Knowledge Distillation for Document Retrieval

4 code implementations • 16 Sep 2020 • Xuanang Chen, Ben He, Kai Hui, Le Sun, Yingfei Sun

Despite the effectiveness of BERT-based document ranking, the high computational cost of such approaches limits their use.

Document Ranking, Knowledge Distillation, +1
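
A generic sketch of score distillation for ranking, where the student is trained to match the teacher's relevance scores alongside the hard labels; Simplified TinyBERT's exact loss terms differ, so treat this as the general recipe rather than the paper's method.

```python
import math

def kd_loss(student: float, teacher: float, label: int, alpha: float = 0.5) -> float:
    """Combine a soft target (match the teacher's score) with the hard label."""
    soft = (student - teacher) ** 2                       # match teacher score
    p = 1.0 / (1.0 + math.exp(-student))                  # sigmoid for hard loss
    hard = -(label * math.log(p) + (1 - label) * math.log(1 - p))
    return alpha * soft + (1 - alpha) * hard

print(kd_loss(student=0.8, teacher=1.2, label=1))
```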

NPRF: A Neural Pseudo Relevance Feedback Framework for Ad-hoc Information Retrieval

1 code implementation • EMNLP 2018 • Canjia Li, Yingfei Sun, Ben He, Le Wang, Kai Hui, Andrew Yates, Le Sun, Jungang Xu

Pseudo-relevance feedback (PRF) is commonly used to boost the performance of traditional information retrieval (IR) models by using top-ranked documents to identify and weight new query terms, thereby reducing the effect of query-document vocabulary mismatches.

Ad-Hoc Information Retrieval, Information Retrieval, +1
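
The classical PRF loop the snippet describes, in miniature (NPRF replaces the term-weighting step with neural relevance models): take the top-ranked documents, weight candidate terms by frequency, and expand the query.

```python
from collections import Counter

STOP = {"is", "the", "of", "a", "and"}  # tiny illustrative stoplist

def prf_expand(query: str, top_docs: list[str], n_terms: int = 2) -> str:
    """Weight candidate terms from the feedback docs, then expand the query."""
    q_terms = set(query.lower().split())
    counts = Counter(w for doc in top_docs for w in doc.lower().split()
                     if w not in STOP and w not in q_terms)
    expansion = [t for t, _ in counts.most_common(n_terms)]
    return query + " " + " ".join(expansion)

feedback = ["paris is the capital of france", "paris hosts the louvre museum"]
print(prf_expand("capital france", feedback))
```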

Content-Based Weak Supervision for Ad-Hoc Re-Ranking

1 code implementation • 1 Jul 2017 • Sean MacAvaney, Andrew Yates, Kai Hui, Ophir Frieder

One challenge with neural ranking is the need for a large number of manually labeled relevance judgments for training.

Information Retrieval, Re-Ranking
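
A sketch of what content-based weak supervision can look like: a content field such as a heading serves as a pseudo-query that is treated as relevant to its own body, with other bodies sampled as negatives. The field choice and pairing scheme here are assumptions for illustration, not the paper's exact setup.

```python
import random

articles = [("Paris travel guide", "Paris is the capital of France..."),
            ("Intro to BM25", "BM25 is a lexical retrieval function...")]

def weak_pairs():
    """Yield (pseudo-query, document, label) triples with no manual labels."""
    for i, (title, body) in enumerate(articles):
        negative = random.choice([b for j, (_, b) in enumerate(articles) if j != i])
        yield (title, body, 1)       # pseudo-relevant pair
        yield (title, negative, 0)   # sampled non-relevant pair

for q, d, y in weak_pairs():
    print(y, "|", q, "->", d[:30])
```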

Co-PACRR: A Context-Aware Neural IR Model for Ad-hoc Retrieval

3 code implementations • 30 Jun 2017 • Kai Hui, Andrew Yates, Klaus Berberich, Gerard de Melo

Neural IR models, such as DRMM and PACRR, have achieved strong results by successfully capturing relevance matching signals.

Ad-Hoc Information Retrieval, Retrieval

DE-PACRR: Exploring Layers Inside the PACRR Model

no code implementations • 27 Jun 2017 • Andrew Yates, Kai Hui

Recent neural IR models have demonstrated deep learning's utility in ad-hoc information retrieval.

Ad-Hoc Information Retrieval, Information Retrieval, +1

PACRR: A Position-Aware Neural IR Model for Relevance Matching

3 code implementations • EMNLP 2017 • Kai Hui, Andrew Yates, Klaus Berberich, Gerard de Melo

In order to adopt deep learning for information retrieval, models are needed that can capture all relevant information required to assess the relevance of a document to a given user query.

Ad-Hoc Information Retrieval, Information Retrieval, +2
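
A sketch of the position-aware input such a model builds: a query-by-document similarity matrix whose columns preserve document word order, over which CNNs and k-max pooling then run. Exact-match similarity stands in for embedding cosine here.

```python
def sim_matrix(query: str, doc: str) -> list[list[float]]:
    """One row per query term, one column per document position."""
    q_terms, d_terms = query.lower().split(), doc.lower().split()
    return [[1.0 if q == d else 0.0 for d in d_terms] for q in q_terms]

def k_max(row: list[float], k: int = 2) -> list[float]:
    """Keep the k strongest matching signals for a query term."""
    return sorted(row, reverse=True)[:k]

matrix = sim_matrix("capital of france", "paris is the capital of france")
print([k_max(row) for row in matrix])
```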
