Search Results for author: Mike Bendersky

Found 6 papers, 1 paper with code

Learning to Rank when Grades Matter

no code implementations · 14 Jun 2023 · Le Yan, Zhen Qin, Gil Shamir, Dong Lin, Xuanhui Wang, Mike Bendersky

In this paper, we conduct a rigorous study of learning to rank with grades, where both ranking performance and grade prediction performance are important.

Learning-To-Rank
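
As a rough illustration of the joint objective the abstract describes, here is a minimal sketch that combines a listwise softmax ranking loss with a pointwise grade-regression term. The specific loss forms and the alpha trade-off weight are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def softmax_ranking_loss(scores, grades):
    """Listwise softmax cross-entropy: push higher-graded docs up the list."""
    # Normalize grades into a target distribution over the list.
    target = np.exp(grades) / np.sum(np.exp(grades))
    log_probs = scores - np.log(np.sum(np.exp(scores)))
    return -np.sum(target * log_probs)

def grade_regression_loss(scores, grades):
    """Pointwise squared error: treat each score as a grade prediction."""
    return np.mean((scores - grades) ** 2)

def combined_loss(scores, grades, alpha=0.5):
    # alpha trades ranking quality off against grade calibration.
    # (alpha and both loss forms are assumptions, not the paper's.)
    return alpha * softmax_ranking_loss(scores, grades) + \
        (1 - alpha) * grade_regression_loss(scores, grades)
```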

Exploring the Viability of Synthetic Query Generation for Relevance Prediction

no code implementations · 19 May 2023 · Aditi Chaudhary, Karthik Raman, Krishna Srinivasan, Kazuma Hashimoto, Mike Bendersky, Marc Najork

While our experiments demonstrate that these modifications help improve the performance of QGen techniques, we also find that QGen approaches struggle to capture the full nuance of the relevance label space, and as a result the generated queries are not faithful to the desired relevance label.

Information Retrieval · Question Answering · +2
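
To make the QGen setup concrete, below is a hypothetical sketch of label-conditioned query generation: for each document, a generator is prompted to produce one query per target relevance label, yielding (query, document, label) training triples. The prompt template, label names, and the generate callable are all assumptions, not the paper's exact method.

```python
# Hypothetical sketch: label-conditioned synthetic query generation.
# generate() stands in for any seq2seq / LLM call; the prompt wording
# and label vocabulary are assumptions, not the paper's setup.

RELEVANCE_LABELS = ["not relevant", "partially relevant", "highly relevant"]

def make_prompt(document: str, label: str) -> str:
    return (
        f"Document: {document}\n"
        f"Write a search query for which this document is {label}.\n"
        f"Query:"
    )

def synthesize_training_triples(documents, generate):
    """Yield (query, document, label) triples for a relevance predictor."""
    for doc in documents:
        for label in RELEVANCE_LABELS:
            query = generate(make_prompt(doc, label))
            yield query, doc, label
```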

QUILL: Query Intent with Large Language Models using Retrieval Augmentation and Multi-stage Distillation

no code implementations · 27 Oct 2022 · Krishna Srinivasan, Karthik Raman, Anupam Samanta, Lingrui Liao, Luca Bertelli, Mike Bendersky

Thus, in this paper we make the following contributions: (1) We demonstrate that Retrieval Augmentation of queries provides LLMs with valuable additional context, enabling improved understanding.

Feature Engineering · Knowledge Distillation · +1
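
A minimal sketch of the retrieval-augmentation idea from contribution (1): fetch the top-k results for a query and hand them to an LLM as extra context for intent prediction. The retrieve and llm callables and the prompt wording are hypothetical stand-ins, not QUILL's actual pipeline.

```python
# Minimal sketch of retrieval augmentation for query understanding.
# retrieve() and llm() are hypothetical stand-ins for a search backend
# and a large language model; the prompt text is an assumption.

def classify_intent(query: str, retrieve, llm, k: int = 3) -> str:
    docs = retrieve(query, k)  # top-k results give the LLM extra context
    context = "\n".join(f"- {d}" for d in docs)
    prompt = (
        f"Search results:\n{context}\n\n"
        f"Query: {query}\n"
        f"What is the intent of this query?"
    )
    return llm(prompt)
```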

Learning-to-Rank with BERT in TF-Ranking

no code implementations · 17 Apr 2020 · Shuguang Han, Xuanhui Wang, Mike Bendersky, Marc Najork

This paper describes a machine learning algorithm for document (re)ranking, in which queries and documents are first encoded using BERT [1], and on top of that a learning-to-rank (LTR) model constructed with TF-Ranking (TFR) [2] is applied to further optimize the ranking performance.

Document Ranking · Learning-To-Rank · +2
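
The two-step pipeline the abstract describes can be sketched roughly as follows: a BERT-style scorer produces one score per (query, document) pair, and a listwise softmax loss of the kind TF-Ranking provides is applied over each list. The encode_pair scorer is a hypothetical stand-in, and the hand-rolled loss mirrors, but is not, TF-Ranking's API.

```python
# Sketch of the BERT + LTR pipeline: a BERT-style encoder scores each
# (query, document) pair, then a listwise loss is computed per list.
# encode_pair() is a hypothetical stand-in for a fine-tuned BERT scorer;
# the softmax loss mirrors TF-Ranking's listwise softmax, not its API.

import math

def listwise_softmax_loss(scores, labels):
    """Cross-entropy between label and score distributions over one list."""
    log_z = math.log(sum(math.exp(s) for s in scores))
    total = sum(labels)
    if total == 0:
        return 0.0  # no relevant documents in this list
    return -sum(
        (y / total) * (s - log_z) for s, y in zip(scores, labels)
    )

def rerank_loss(query, documents, labels, encode_pair):
    scores = [encode_pair(query, d) for d in documents]  # BERT [CLS] scores
    return listwise_softmax_loss(scores, labels)
```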
