Causal Estimation of Position Bias in Recommender Systems Using Marketplace Instruments

12 May 2022  ·  Rina Friedberg, Karthik Rajkumar, Jialiang Mao, Qian Yao, YinYin Yu, Min Liu

Information retrieval systems, such as online marketplaces, news feeds, and search engines, are ubiquitous in today's digital society. They facilitate information discovery by ranking retrieved items by predicted relevance, i.e., the likelihood of interaction (click, share) between users and items. Typically modeled using past interactions, such rankings have a major drawback: interaction depends on the attention items receive. A highly relevant item placed outside a user's attention may receive little interaction. This discrepancy between observed interaction and true relevance is termed position bias. Position bias degrades relevance estimation, and when it compounds over time, it can silo users into falsely relevant items, causing marketplace inefficiencies. Position bias can be identified with randomized experiments, but such an approach can be prohibitive in cost and feasibility. Past research has also suggested propensity score methods, which do not adequately address unobserved confounding, and regression discontinuity designs, which have poor external validity. In this work, we address these concerns by leveraging the abundance of A/B tests in ranking evaluations as instrumental variables. Historical A/B tests give us access to exogenous variation in rankings without having to introduce it manually, which would harm user experience and platform revenue. We demonstrate our methodology in two distinct applications at LinkedIn: feed ads and the People-You-May-Know (PYMK) recommender. The marketplaces comprise users and campaigns on the ads side, and invite senders and recipients on PYMK. By leveraging prior experimentation, we obtain quasi-experimental variation in item rankings that is orthogonal to user relevance. Our method yields position-effect estimates that are robust to unobserved confounding, offers greater generalizability, and extends easily to other information retrieval systems.
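The core estimation idea lends itself to a short illustration. Below is a minimal, hypothetical sketch (not the paper's actual pipeline) of two-stage least squares on simulated data, where the arm assignment of a randomized ranking experiment serves as the instrument for an item's displayed position. The variable names, the simulated data-generating process, and the continuous engagement outcome are all assumptions made for this example.

```python
# Hypothetical sketch: A/B-test arm assignment as an instrument for position.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 10_000

# Unobserved relevance confounds both position and engagement.
relevance = rng.normal(size=n)

# Randomized ranking-experiment arm (the instrument): it shifts position
# but is independent of relevance by design of the A/B test.
arm = rng.integers(0, 2, size=n).astype(float)

# Displayed position depends on relevance and on the experiment arm.
position = 5.0 - 1.5 * relevance - 1.0 * arm + rng.normal(scale=0.5, size=n)

# Engagement depends on true relevance and (negatively) on position.
true_position_effect = -0.3
engagement = 0.5 * relevance + true_position_effect * position + rng.normal(size=n)

# Naive OLS on observed position is biased: relevance pushes items up the
# ranking and also raises engagement.
ols_fit = sm.OLS(engagement, sm.add_constant(position)).fit()

# Stage 1: isolate the exogenous variation in position driven by the arm.
position_hat = sm.OLS(position, sm.add_constant(arm)).fit().fittedvalues

# Stage 2: regress engagement on the predicted position.
# (For valid standard errors use a dedicated 2SLS routine such as
# linearmodels.iv.IV2SLS; this manual version only gives the point estimate.)
iv_fit = sm.OLS(engagement, sm.add_constant(position_hat)).fit()

print(f"naive OLS position effect: {ols_fit.params[1]:.3f}")
print(f"2SLS position effect:      {iv_fit.params[1]:.3f}")
```

Because the arm is randomized, it moves items' positions while remaining independent of the unobserved relevance, so the second-stage coefficient approximates the true position effect, whereas the naive regression on observed position stays confounded.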
