Search Results for author: Kai Hui

Found 20 papers, 9 papers with code

Large Language Models are Effective Text Rankers with Pairwise Ranking Prompting

no code implementations • 30 Jun 2023 • Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, Michael Bendersky

On TREC-DL2019, PRP is inferior only to the GPT-4 solution on the NDCG@5 and NDCG@10 metrics, while outperforming other existing solutions, such as InstructGPT, which has 175B parameters, by over 10% on nearly all ranking metrics.
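
A minimal sketch of the pairwise ranking prompting idea, assuming a hypothetical `compare(query, a, b)` helper that would wrap an LLM call with a pairwise prompt and return "A", "B", or None; the paper's actual variants (e.g., sorting or sliding-window aggregation) differ in how comparisons are scheduled.

```python
from itertools import combinations

def prp_allpairs(query, passages, compare):
    """Rank passages by aggregating pairwise LLM preferences.

    `compare(query, a, b)` is a hypothetical stand-in for an LLM call that
    returns "A", "B", or None for a tied/unparseable answer. Each pair is
    queried in both orders to reduce position bias, as pairwise prompting
    schemes typically do.
    """
    wins = {i: 0.0 for i in range(len(passages))}
    for i, j in combinations(range(len(passages)), 2):
        for verdict, a, b in (
            (compare(query, passages[i], passages[j]), i, j),
            (compare(query, passages[j], passages[i]), j, i),  # swapped order
        ):
            if verdict == "A":
                wins[a] += 1.0
            elif verdict == "B":
                wins[b] += 1.0
            else:  # tie or inconsistent output: split the credit
                wins[a] += 0.5
                wins[b] += 0.5
    # Highest aggregated win count first
    return sorted(range(len(passages)), key=lambda idx: wins[idx], reverse=True)
```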

RD-Suite: A Benchmark for Ranking Distillation

no code implementations • 7 Jun 2023 • Zhen Qin, Rolf Jagerman, Rama Pasumarthi, Honglei Zhuang, He Zhang, Aijun Bai, Kai Hui, Le Yan, Xuanhui Wang

The distillation of ranking models has become an important topic in both academia and industry.

Benchmarking

How Does Generative Retrieval Scale to Millions of Passages?

no code implementations • 19 May 2023 • Ronak Pradeep, Kai Hui, Jai Gupta, Adam D. Lelkes, Honglei Zhuang, Jimmy Lin, Donald Metzler, Vinh Q. Tran

Popularized by the Differentiable Search Index, the emerging paradigm of generative retrieval re-frames the classic information retrieval problem into a sequence-to-sequence modeling task, forgoing external indices and encoding an entire document corpus within a single Transformer.

Information Retrieval • Passage Ranking +1

Learning List-Level Domain-Invariant Representations for Ranking

no code implementations • 21 Dec 2022 • Ruicheng Xian, Honglei Zhuang, Zhen Qin, Hamed Zamani, Jing Lu, Ji Ma, Kai Hui, Han Zhao, Xuanhui Wang, Michael Bendersky

Domain adaptation aims to transfer the knowledge acquired by models trained on (data-rich) source domains to (low-resource) target domains, for which a popular method is invariant representation learning.

Representation Learning • Unsupervised Domain Adaptation
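
One simple instantiation of the invariance idea, sketched under assumptions: pool each ranked list's document encodings into a single vector and penalize the (linear-kernel) MMD between the source and target pools. The pooling and kernel choice here are illustrative; the paper's actual list-level objective may differ.

```python
import numpy as np

def list_level_mmd(source_lists, target_lists):
    """Squared linear-kernel MMD between pooled list representations.

    Each element of source_lists / target_lists is an array of shape
    (num_docs, dim) holding the document encodings of one ranked list.
    Mean-pooling each list and penalizing the distance between the two
    domain means is one simple invariance penalty; it is illustrative
    only, not the paper's exact formulation.
    """
    src = np.stack([lst.mean(axis=0) for lst in source_lists])  # (n_src, dim)
    tgt = np.stack([lst.mean(axis=0) for lst in target_lists])  # (n_tgt, dim)
    delta = src.mean(axis=0) - tgt.mean(axis=0)
    return float(delta @ delta)
```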

RankT5: Fine-Tuning T5 for Text Ranking with Ranking Losses

no code implementations • 12 Oct 2022 • Honglei Zhuang, Zhen Qin, Rolf Jagerman, Kai Hui, Ji Ma, Jing Lu, Jianmo Ni, Xuanhui Wang, Michael Bendersky

Recently, substantial progress has been made in text ranking based on pretrained language models such as BERT.
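
For context, one standard ranking loss in this line of work is a listwise softmax cross entropy over per-document scores; the sketch below shows that loss in isolation, assuming the model already emits one unnormalized score per query-document pair. The exact RankT5 formulation may differ in detail.

```python
import numpy as np

def listwise_softmax_loss(scores, labels):
    """Listwise softmax cross entropy for one query's candidates.

    `scores` holds one unnormalized model score per candidate document and
    `labels` the corresponding graded relevance values. A numerically
    stable log-softmax is taken over the candidate list, and relevant
    documents contribute proportionally to their labels.
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=float)
    m = scores.max()
    log_softmax = scores - m - np.log(np.exp(scores - m).sum())
    return float(-(labels * log_softmax).sum())
```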

Transformer Memory as a Differentiable Search Index

1 code implementation • 14 Feb 2022 • Yi Tay, Vinh Q. Tran, Mostafa Dehghani, Jianmo Ni, Dara Bahri, Harsh Mehta, Zhen Qin, Kai Hui, Zhe Zhao, Jai Gupta, Tal Schuster, William W. Cohen, Donald Metzler

In this paper, we demonstrate that information retrieval can be accomplished with a single Transformer, in which all information about the corpus is encoded in the parameters of the model.

Information Retrieval • Retrieval
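
As a rough illustration of this setup, the sketch below builds text-to-text training pairs for both the indexing task (document → docid) and the retrieval task (query → docid), which one seq2seq Transformer can then learn jointly. The "index:"/"retrieve:" prefixes and plain-string docids are assumptions for illustration, not the paper's exact format.

```python
def dsi_training_examples(corpus, queries):
    """Cast indexing and retrieval as one text-to-text training set.

    `corpus` maps docid -> document text; `queries` maps docid -> a list of
    queries answerable by that document. The paper studies several docid
    representations (atomic, naive, and semantically structured strings);
    plain strings are used here purely for illustration.
    """
    examples = []
    for docid, text in corpus.items():
        examples.append((f"index: {text}", str(docid)))  # memorize the corpus
        for q in queries.get(docid, []):
            examples.append((f"retrieve: {q}", str(docid)))  # map queries to docids
    return examples
```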

ExT5: Towards Extreme Multi-Task Scaling for Transfer Learning

3 code implementations • ICLR 2022 • Vamsi Aribandi, Yi Tay, Tal Schuster, Jinfeng Rao, Huaixiu Steven Zheng, Sanket Vaibhav Mehta, Honglei Zhuang, Vinh Q. Tran, Dara Bahri, Jianmo Ni, Jai Gupta, Kai Hui, Sebastian Ruder, Donald Metzler

Despite the recent success of multi-task learning and transfer learning for natural language processing (NLP), few works have systematically studied the effect of scaling up the number of tasks during pre-training.

Denoising • Multi-Task Learning

Transitivity, Time Consumption, and Quality of Preference Judgments in Crowdsourcing

no code implementations • 18 Apr 2021 • Kai Hui, Klaus Berberich

In this work, we collect judgments from multiple judges using a crowdsourcing platform and aggregate them to compare the two kinds of preference judgments in terms of transitivity, time consumption, and quality.
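
As a rough sketch of this kind of analysis: majority-vote aggregation of pairwise preferences, followed by a count of intransitive triples. The `judgments` format and the violation count are illustrative assumptions, not the paper's exact protocol.

```python
from collections import Counter
from itertools import permutations

def aggregate_preferences(judgments):
    """Majority-vote aggregation of pairwise preference judgments.

    `judgments` is a list of (a, b) tuples, one per judge, each meaning the
    judge preferred document a over document b. Returns the set of ordered
    pairs that win the majority vote (exact ties are dropped).
    """
    votes = Counter(judgments)
    preferred = set()
    for a, b in {tuple(sorted(p)) for p in votes}:
        if votes[(a, b)] > votes[(b, a)]:
            preferred.add((a, b))
        elif votes[(b, a)] > votes[(a, b)]:
            preferred.add((b, a))
    return preferred

def transitivity_violations(preferred):
    """Count triples where a > b and b > c hold but a > c does not."""
    items = {x for pair in preferred for x in pair}
    return sum(
        1
        for a, b, c in permutations(items, 3)
        if (a, b) in preferred and (b, c) in preferred and (a, c) not in preferred
    )
```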

Co-BERT: A Context-Aware BERT Retrieval Model Incorporating Local and Query-specific Context

no code implementations • 17 Apr 2021 • Xiaoyang Chen, Kai Hui, Ben He, Xianpei Han, Le Sun, Zheng Ye

BERT-based text ranking models have dramatically advanced the state-of-the-art in ad-hoc retrieval, wherein most models tend to consider individual query-document pairs independently.

Learning-To-Rank • Re-Ranking +1

Simplified TinyBERT: Knowledge Distillation for Document Retrieval

3 code implementations • 16 Sep 2020 • Xuanang Chen, Ben He, Kai Hui, Le Sun, Yingfei Sun

Despite the effectiveness of utilizing the BERT model for document ranking, the high computational cost of such approaches limits their use.

Document Ranking • Knowledge Distillation +1
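
A minimal sketch of score-based knowledge distillation for ranking, blending a teacher-matching term with a hard-label term. The mean-squared-error form and the blending weight are illustrative assumptions rather than the paper's exact simplified objective.

```python
import numpy as np

def ranking_distillation_loss(student_scores, teacher_scores, labels, alpha=0.5):
    """Blend a soft teacher-matching term with a hard-label term.

    A common recipe for distilling a large ranker (e.g., BERT) into a small
    student (e.g., TinyBERT): the student is pushed both toward the
    teacher's relevance scores and toward the gold labels.
    """
    s = np.asarray(student_scores, dtype=float)
    t = np.asarray(teacher_scores, dtype=float)
    y = np.asarray(labels, dtype=float)
    soft = ((s - t) ** 2).mean()  # mimic the teacher's relevance scores
    hard = ((s - y) ** 2).mean()  # fit the gold relevance labels
    return alpha * soft + (1.0 - alpha) * hard
```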

NPRF: A Neural Pseudo Relevance Feedback Framework for Ad-hoc Information Retrieval

1 code implementation • EMNLP 2018 • Canjia Li, Yingfei Sun, Ben He, Le Wang, Kai Hui, Andrew Yates, Le Sun, Jungang Xu

Pseudo-relevance feedback (PRF) is commonly used to boost the performance of traditional information retrieval (IR) models by using top-ranked documents to identify and weight new query terms, thereby reducing the effect of query-document vocabulary mismatches.

Ad-Hoc Information Retrieval • Information Retrieval +1
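
The traditional PRF recipe the abstract refers to can be sketched as follows: treat the top-ranked documents as pseudo-relevant, then mix their most frequent terms into the query. The weighting scheme below is a simplified stand-in for classic estimators such as RM3, not NPRF's neural re-weighting.

```python
from collections import Counter

def expand_query(query_terms, top_docs, num_terms=10, weight=0.5):
    """Classic pseudo-relevance feedback by term re-weighting.

    `top_docs` is a list of term lists for the top-ranked documents, which
    are treated as pseudo-relevant. Original query terms keep weight 1.0;
    frequent feedback terms are mixed in with a damped weight.
    """
    feedback = Counter()
    for doc_terms in top_docs:
        feedback.update(doc_terms)
    total = sum(feedback.values()) or 1
    expanded = {term: 1.0 for term in query_terms}
    for term, count in feedback.most_common(num_terms):
        expanded[term] = expanded.get(term, 0.0) + weight * count / total
    return expanded
```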

Content-Based Weak Supervision for Ad-Hoc Re-Ranking

1 code implementation • 1 Jul 2017 • Sean MacAvaney, Andrew Yates, Kai Hui, Ophir Frieder

One challenge with neural ranking is the need for a large number of manually labeled relevance judgments for training.

Information Retrieval • Re-Ranking
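
A rough sketch of content-based pseudo-labeling in this spirit, assuming a corpus of (headline, body) pairs where the headline serves as a pseudo-query; the paper's actual pairing and filtering heuristics are more involved.

```python
import random

def weak_training_triples(articles, seed=0):
    """Build weak (query, doc, label) triples from raw (headline, body) pairs.

    Each headline acts as a pseudo-query; its own body is taken as relevant
    (label 1) and a body sampled from a different article as non-relevant
    (label 0). The headline-as-query pairing is an illustrative assumption
    in the spirit of content-based weak supervision.
    """
    rng = random.Random(seed)
    triples = []
    if len(articles) < 2:
        return triples
    for i, (headline, body) in enumerate(articles):
        j = rng.randrange(len(articles) - 1)
        if j >= i:
            j += 1  # ensure the negative comes from a different article
        triples.append((headline, body, 1))
        triples.append((headline, articles[j][1], 0))
    return triples
```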

Co-PACRR: A Context-Aware Neural IR Model for Ad-hoc Retrieval

3 code implementations • 30 Jun 2017 • Kai Hui, Andrew Yates, Klaus Berberich, Gerard de Melo

Neural IR models, such as DRMM and PACRR, have achieved strong results by successfully capturing relevance matching signals.

Ad-Hoc Information Retrieval • Retrieval

DE-PACRR: Exploring Layers Inside the PACRR Model

no code implementations • 27 Jun 2017 • Andrew Yates, Kai Hui

Recent neural IR models have demonstrated deep learning's utility in ad-hoc information retrieval.

Ad-Hoc Information Retrieval • Information Retrieval +1

PACRR: A Position-Aware Neural IR Model for Relevance Matching

3 code implementations • EMNLP 2017 • Kai Hui, Andrew Yates, Klaus Berberich, Gerard de Melo

In order to adopt deep learning for information retrieval, models are needed that can capture all relevant information required to assess the relevance of a document to a given user query.

Ad-Hoc Information Retrieval • Information Retrieval +1
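
A minimal numpy sketch of position-aware matching in the spirit of PACRR: build a query-document similarity matrix, then pool the strongest n-gram window matches per query term. The real model learns the n-gram detectors with 2D convolutions and feeds the pooled signals to a scoring layer; here the filters are fixed averages, purely for illustration.

```python
import numpy as np

def pacrr_features(query_vecs, doc_vecs, max_ngram=3, k=2):
    """Pool position-aware n-gram matching signals from a similarity matrix.

    Assumes one embedding per query term (query_vecs) and per document term
    (doc_vecs), and that the document has at least max_ngram + k - 1 terms.
    """
    assert doc_vecs.shape[0] >= max_ngram + k - 1, "document too short"
    q = query_vecs / np.linalg.norm(query_vecs, axis=1, keepdims=True)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    sim = q @ d.T  # (query_len, doc_len) cosine similarities
    features = []
    for n in range(1, max_ngram + 1):
        # Average similarity over every n-term document window.
        windows = np.stack(
            [sim[:, i:i + n].mean(axis=1) for i in range(sim.shape[1] - n + 1)],
            axis=1,
        )
        # k-max pooling over document positions, per query term.
        features.append(np.sort(windows, axis=1)[:, -k:])
    return np.concatenate(features, axis=1)  # (query_len, max_ngram * k)
```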
