Policy Learning with Adaptively Collected Data

5 May 2021 · Ruohan Zhan, Zhimei Ren, Susan Athey, Zhengyuan Zhou

Learning optimal policies from historical data enables personalization in a wide variety of applications, including healthcare, digital recommendations, and online education. The growing policy learning literature focuses on settings where the data collection rule stays fixed throughout the experiment. However, adaptive data collection is becoming more common in practice, from two primary sources: 1) data collected from adaptive experiments that are designed to improve inferential efficiency; 2) data collected from production systems that progressively evolve an operational policy to improve performance over time (e.g. contextual bandits). Yet adaptivity complicates ex-post identification of the optimal policy, since samples are dependent and some treatments may receive too few observations for certain types of individuals. In this paper, we take initial steps toward addressing the challenges of learning the optimal policy from adaptively collected data. We propose an algorithm based on generalized augmented inverse propensity weighted (AIPW) estimators, which non-uniformly reweight the elements of a standard AIPW estimator to control worst-case estimation variance. We establish a finite-sample regret upper bound for our algorithm and complement it with a regret lower bound that quantifies the fundamental difficulty of policy learning with adaptive data. When equipped with the best weighting scheme, our algorithm achieves minimax rate-optimal regret guarantees even with diminishing exploration. Finally, we demonstrate our algorithm's effectiveness using both synthetic data and public benchmark datasets.
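
To make the estimator described in the abstract concrete, the sketch below writes out a non-uniformly weighted AIPW value estimate. It is a schematic reconstruction from the abstract alone, not the paper's exact formulation; the symbols (adaptive weights h_t, logging propensities e_t, plug-in outcome model mu-hat_t, per-sample score Gamma-hat_t) are assumed notation for illustration.

```latex
% Schematic sketch (notation assumed, not taken from the paper).
% Per-sample AIPW score for arm a at time t, using the logging propensity
% e_t and an outcome model \hat{\mu}_t fit on data observed before time t:
\hat{\Gamma}_t(a) \;=\; \hat{\mu}_t(X_t, a)
  \;+\; \frac{\mathbf{1}\{A_t = a\}}{e_t(X_t, a)}
        \bigl(Y_t - \hat{\mu}_t(X_t, a)\bigr)

% Generalized (non-uniformly weighted) AIPW value estimate of a policy \pi,
% with weights h_t chosen to control worst-case estimation variance:
\widehat{Q}_h(\pi) \;=\;
  \frac{\sum_{t=1}^{T} h_t \, \hat{\Gamma}_t\bigl(\pi(X_t)\bigr)}
       {\sum_{t=1}^{T} h_t}

% Learned policy: \hat{\pi} \;=\; \arg\max_{\pi \in \Pi} \widehat{Q}_h(\pi).
```

Setting all weights h_t equal recovers the standard AIPW policy value estimate; the abstract's point is that a non-uniform choice of h_t can stabilize the variance inflation caused by diminishing exploration in adaptively collected data.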
