Search Results for author: Rolf Jagerman

Found 12 papers, 5 papers with code

Generate, Filter, and Fuse: Query Expansion via Multi-Step Keyword Generation for Zero-Shot Neural Rankers

no code implementations • 15 Nov 2023 • Minghan Li, Honglei Zhuang, Kai Hui, Zhen Qin, Jimmy Lin, Rolf Jagerman, Xuanhui Wang, Michael Bendersky

We first show that directly applying the expansion techniques in the current literature to state-of-the-art neural rankers can result in deteriorated zero-shot performance.

Instruction Following, Language Modelling +1
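As a rough illustration of the generate-filter-fuse idea, the sketch below generates candidate keywords with an LLM, filters them with a second relevance-check prompt, and fuses the survivors with the original query. The `llm` callable, prompt wording, and function names are assumptions for illustration, not the paper's exact pipeline.

```python
# Minimal sketch of a generate-filter-fuse expansion loop, assuming a
# hypothetical `llm` callable that maps a prompt string to a text completion.
# Prompts and function names are illustrative, not the paper's exact ones.

def generate_keywords(llm, query: str, n: int = 10) -> list[str]:
    """Step 1: ask the LLM for candidate expansion keywords."""
    prompt = f"List {n} search keywords related to the query: {query}"
    return [kw.strip() for kw in llm(prompt).split("\n") if kw.strip()]

def filter_keywords(llm, query: str, keywords: list[str]) -> list[str]:
    """Step 2: keep only keywords the LLM judges relevant to the query."""
    kept = []
    for kw in keywords:
        verdict = llm(f"Is '{kw}' relevant to the query '{query}'? Answer yes or no.")
        if verdict.strip().lower().startswith("yes"):
            kept.append(kw)
    return kept

def fuse(query: str, keywords: list[str]) -> str:
    """Step 3: fuse the surviving keywords with the original query."""
    return query + " " + " ".join(keywords)
```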

Large Language Models are Effective Text Rankers with Pairwise Ranking Prompting

no code implementations • 30 Jun 2023 • Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Le Yan, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, Michael Bendersky

Ranking documents using Large Language Models (LLMs) by directly feeding the query and candidate documents into the prompt is an interesting and practical problem.
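A minimal sketch of pairwise ranking prompting, assuming a hypothetical `llm(prompt) -> str` callable and an illustrative prompt template: the LLM compares every document pair, and documents are ordered by their number of pairwise wins.

```python
from itertools import combinations

# Minimal sketch of pairwise ranking prompting; `llm` and the prompt
# template are illustrative assumptions, not the paper's exact setup.

def pairwise_rank(llm, query: str, docs: list[str]) -> list[str]:
    wins = {i: 0 for i in range(len(docs))}
    for i, j in combinations(range(len(docs)), 2):
        prompt = (
            f"Query: {query}\n"
            f"Passage A: {docs[i]}\nPassage B: {docs[j]}\n"
            "Which passage is more relevant to the query? Answer A or B."
        )
        answer = llm(prompt).strip().upper()
        wins[i if answer.startswith("A") else j] += 1
    # Sort documents by number of pairwise wins, most relevant first.
    return [docs[i] for i in sorted(wins, key=wins.get, reverse=True)]
```

The paper also discusses variants cheaper than this all-pairs aggregation, such as sorting-based comparison schemes.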

Query Expansion by Prompting Large Language Models

no code implementations • 5 May 2023 • Rolf Jagerman, Honglei Zhuang, Zhen Qin, Xuanhui Wang, Michael Bendersky

Query expansion is a widely used technique to improve the recall of search systems.
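As a rough sketch of the idea, assuming a hypothetical `llm` callable: prompt the model for text related to the query and fuse the output with the original query, repeating the query so it is not drowned out by the longer expansion. The prompt wording and repeat count are illustrative assumptions.

```python
# Minimal sketch of LLM-based query expansion. Repeating the original query
# when fusing is a common way to keep it dominant over the expansion text;
# the prompt and the repeat count are illustrative assumptions.

def expand_query(llm, query: str, repeats: int = 5) -> str:
    expansion = llm(f"Answer the following query: {query}\nAnswer:")
    return " ".join([query] * repeats + [expansion])
```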

Regression Compatible Listwise Objectives for Calibrated Ranking with Binary Relevance

no code implementations • 2 Nov 2022 • Aijun Bai, Rolf Jagerman, Zhen Qin, Le Yan, Pratyush Kar, Bing-Rong Lin, Xuanhui Wang, Michael Bendersky, Marc Najork

As Learning-to-Rank (LTR) approaches primarily seek to improve ranking quality, their output scores are not scale-calibrated by design.

Learning-To-Rank, regression
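A minimal sketch of one way to combine a scale-calibrating pointwise term with a listwise ranking term, in the spirit of multi-objective calibrated ranking; the blend weight and exact formulation are illustrative assumptions, not the paper's proposed objective.

```python
import numpy as np

# Minimal sketch of a calibrated-ranking loss: a pointwise sigmoid
# cross-entropy term for scale calibration plus a listwise softmax
# cross-entropy term for ranking quality. `alpha` is an illustrative
# hyperparameter, not a value from the paper.

def calibrated_ranking_loss(scores: np.ndarray, labels: np.ndarray,
                            alpha: float = 0.5) -> float:
    # Pointwise: binary cross-entropy against binary relevance labels.
    probs = np.clip(1.0 / (1.0 + np.exp(-scores)), 1e-7, 1 - 1e-7)
    pointwise = -np.mean(labels * np.log(probs) + (1 - labels) * np.log(1 - probs))
    # Listwise: softmax cross-entropy over the list, relevant docs as targets.
    log_softmax = scores - np.log(np.sum(np.exp(scores)))
    listwise = -np.sum(labels * log_softmax) / max(labels.sum(), 1.0)
    return alpha * pointwise + (1 - alpha) * listwise
```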

RankT5: Fine-Tuning T5 for Text Ranking with Ranking Losses

no code implementations • 12 Oct 2022 • Honglei Zhuang, Zhen Qin, Rolf Jagerman, Kai Hui, Ji Ma, Jing Lu, Jianmo Ni, Xuanhui Wang, Michael Bendersky

Recently, substantial progress has been made in text ranking based on pretrained language models such as BERT.
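The sketch below shows one way to read a ranking score out of a T5-style model, namely the logit of a single reserved token. The checkpoint, input template, and choice of `<extra_id_0>` as the score token are illustrative assumptions; fine-tuning would then apply a ranking loss (e.g., listwise softmax cross-entropy) over such scores for candidates of the same query.

```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

# Minimal sketch of scoring a query-document pair with a T5-style model by
# reading off the logit of one reserved token. Checkpoint, template, and
# score token are illustrative assumptions, not the paper's exact setup.

tok = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

def rank_score(query: str, doc: str) -> float:
    inputs = tok(f"Query: {query} Document: {doc}", return_tensors="pt",
                 truncation=True)
    start = torch.full((1, 1), model.config.decoder_start_token_id,
                       dtype=torch.long)
    logits = model(**inputs, decoder_input_ids=start).logits  # (1, 1, vocab)
    return logits[0, 0, tok.convert_tokens_to_ids("<extra_id_0>")].item()
```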

Accelerated Convergence for Counterfactual Learning to Rank

1 code implementation • 21 May 2020 • Rolf Jagerman, Maarten de Rijke

Counterfactual Learning to Rank (LTR) algorithms learn a ranking model from logged user interactions, often collected using a production system.

counterfactual, Learning-To-Rank
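As a rough sketch of the counterfactual setup the paper starts from: logged clicks are reweighted by inverse examination propensities so that the expected update matches the unbiased objective. The linear model, pairwise logistic loss, and learning rate are illustrative assumptions; the paper's contribution is accelerating the convergence of this kind of optimization, whose propensity weights inflate gradient variance.

```python
import numpy as np

# Minimal sketch of an inverse-propensity-scored (IPS) update for
# counterfactual LTR: a logged click is reweighted by the inverse of its
# examination propensity. Linear model and step size are illustrative.

def ips_sgd_step(w: np.ndarray, x_clicked: np.ndarray, x_other: np.ndarray,
                 propensity: float, lr: float = 0.1) -> np.ndarray:
    """Pairwise logistic update pushing a clicked doc above another doc."""
    margin = (x_clicked - x_other) @ w
    grad = -(1.0 / propensity) * (x_clicked - x_other) / (1.0 + np.exp(margin))
    return w - lr * grad
```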

Safe Exploration for Optimizing Contextual Bandits

1 code implementation • 2 Feb 2020 • Rolf Jagerman, Ilya Markov, Maarten de Rijke

Our experiments using text classification and document retrieval confirm the above by comparing SEA (and a boundless variant called BSEA) to online and offline learning methods for contextual bandit problems.

counterfactual, Information Retrieval +7
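A minimal sketch of the safe-exploration decision, under the assumption of a generic Hoeffding-style confidence radius (the paper's actual bound and estimators may differ): act with the new policy only once its counterfactual value estimate beats the baseline's with high confidence.

```python
import math

# Minimal sketch of safe exploration: deploy the new policy only when its
# counterfactual (IPS) value estimate exceeds the baseline's with high
# confidence; otherwise keep acting with the safe baseline. The Hoeffding-
# style radius below is an illustrative assumption.

def choose_policy(ips_value_new: float, value_baseline: float,
                  n_logged: int, delta: float = 0.05) -> str:
    radius = math.sqrt(math.log(2.0 / delta) / (2.0 * max(n_logged, 1)))
    if ips_value_new - radius > value_baseline:
        return "new"       # better than baseline with confidence 1 - delta
    return "baseline"      # not enough evidence yet: stay safe
```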

Unbiased Learning to Rank: Counterfactual and Online Approaches

no code implementations • 16 Jul 2019 • Harrie Oosterhuis, Rolf Jagerman, Maarten de Rijke

Through randomization the effect of different types of bias can be removed from the learning process.

counterfactual, Learning-To-Rank
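As a toy illustration of how randomization removes bias: randomly swapping the documents at two adjacent ranks before display makes the resulting click-rate difference attributable to position rather than relevance, which is one standard way to estimate examination propensities. The function below is an illustrative sketch, not a method from the paper.

```python
import random

# Minimal sketch of a swap intervention: with probability 0.5, swap the
# documents at ranks k and k+1 before showing the list. Comparing click
# rates across the two orders isolates the effect of position.

def swap_intervention(ranking: list, k: int) -> list:
    shown = list(ranking)
    if random.random() < 0.5 and k + 1 < len(shown):
        shown[k], shown[k + 1] = shown[k + 1], shown[k]
    return shown
```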

To Model or to Intervene: A Comparison of Counterfactual and Online Learning to Rank from User Interactions

2 code implementations • 15 Jul 2019 • Rolf Jagerman, Harrie Oosterhuis, Maarten de Rijke

At the moment, two methodologies for dealing with bias prevail in the field of LTR: counterfactual methods that learn from historical data and model user behavior to deal with biases; and online methods that perform interventions to deal with bias but use no explicit user models.

Benchmarking, counterfactual +2

Computing Web-scale Topic Models using an Asynchronous Parameter Server

1 code implementation • 24 May 2016 • Rolf Jagerman, Carsten Eickhoff, Maarten de Rijke

Topic models such as Latent Dirichlet Allocation (LDA) have been widely used in information retrieval for tasks ranging from smoothing and feedback methods to tools for exploratory search and discovery.

Information Retrieval, Retrieval +1
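A minimal sketch of the parameter-server pattern behind such systems: workers pull shared topic-word counts, sample topic assignments locally, and push back count deltas that the server applies asynchronously. The in-process class below is a stand-in for a real distributed store, not the paper's implementation.

```python
from collections import defaultdict

# Minimal sketch of a parameter server for distributed LDA. In a real
# system the store is sharded across machines and pushes are applied
# asynchronously; here a single dict stands in for illustration.

class ParameterServer:
    def __init__(self):
        self.topic_word = defaultdict(int)   # (topic, word) -> count

    def pull(self, keys):
        """Workers fetch the counts they need for local Gibbs sampling."""
        return {k: self.topic_word[k] for k in keys}

    def push(self, deltas):
        """Workers send back count deltas after resampling assignments."""
        for k, d in deltas.items():
            self.topic_word[k] += d
```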

A Directional Diffusion Algorithm for Inpainting

no code implementations • 11 Nov 2015 • Jan Deriu, Rolf Jagerman, Kai-En Tsay

The problem of inpainting involves reconstructing the missing areas of an image.

Image Inpainting
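As a rough sketch of diffusion-based inpainting: masked pixels are repeatedly replaced by the average of their neighbours until the hole is filled smoothly. This isotropic version is illustrative only; a directional algorithm would instead weight neighbours along local image structure.

```python
import numpy as np

# Minimal sketch of isotropic diffusion inpainting: only pixels inside the
# boolean `mask` (the missing region) are updated, each taking the mean of
# its four neighbours. np.roll wraps at borders, acceptable for a sketch.

def diffuse_inpaint(img: np.ndarray, mask: np.ndarray, iters: int = 500) -> np.ndarray:
    out = img.astype(float).copy()
    for _ in range(iters):
        avg = (np.roll(out, 1, 0) + np.roll(out, -1, 0) +
               np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 4.0
        out[mask] = avg[mask]   # only update the missing region
    return out
```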
