An Optimal Policy for Dynamic Assortment Planning Under Uncapacitated Multinomial Logit Models

NeurIPS 2018  ·  Xi Chen, Yining Wang, Yuan Zhou

We study the dynamic assortment planning problem, where for each arriving customer, the seller offers an assortment of substitutable products and the customer makes a purchase among the offered products according to an uncapacitated multinomial logit (MNL) model. Since all the utility parameters of the MNL model are unknown, the seller needs to simultaneously learn customers' choice behavior and make dynamic decisions on assortments based on current knowledge. The goal of the seller is to maximize the expected revenue, or equivalently, to minimize the expected regret. Although the dynamic assortment planning problem has received increasing attention in revenue management, most existing policies require estimating the mean utility of each product, and the resulting regret usually depends on the number of products $N$. The optimal regret of the dynamic assortment planning problem under the most basic and popular choice model---the MNL model---remains open. By carefully analyzing a revenue potential function, we develop a trisection-based policy combined with adaptive confidence bound construction, which achieves an {item-independent} regret bound of $O(\sqrt{T})$, where $T$ is the length of the selling horizon. We further establish a matching lower bound, showing the optimality of our policy. The proposed policy has two major advantages. First, the regret of all our policies has no dependence on $N$. Second, our policies are almost assumption-free: there is no assumption on the mean utilities nor any "separability" condition on the expected revenues of different assortments. Our result also extends the unimodal bandit literature.
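The core algorithmic idea named in the abstract is a trisection search over a unimodal revenue potential function that can only be observed through noisy feedback. The sketch below is a minimal illustration of that idea, not the paper's actual policy: the function `f_noisy`, the interval `[0.0, 1.0]`, and the fixed per-probe sample count are all hypothetical stand-ins (the paper sizes its confidence bounds adaptively rather than averaging a fixed number of samples).

```python
import random

def noisy_trisection_max(f_noisy, lo, hi, rounds=30, samples_per_probe=200):
    """Illustrative trisection search for the maximizer of a unimodal
    function observed only through noisy evaluations.

    f_noisy(x) returns an unbiased noisy estimate of the revenue
    potential at x; we average samples_per_probe draws to shrink the
    noise before discarding a third of the search interval.
    (Simplification: the paper constructs adaptive confidence bounds
    instead of using a fixed sample budget per probe.)
    """
    def estimate(x):
        return sum(f_noisy(x) for _ in range(samples_per_probe)) / samples_per_probe

    for _ in range(rounds):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        # Unimodality: if f(m1) < f(m2), the maximizer cannot lie in [lo, m1],
        # so that third of the interval can be discarded (and symmetrically).
        if estimate(m1) < estimate(m2):
            lo = m1
        else:
            hi = m2
    return (lo + hi) / 2.0

# Hypothetical example: a unimodal "revenue potential" peaking at 0.6,
# observed with Gaussian noise.
rng = random.Random(0)
f = lambda x: -(x - 0.6) ** 2 + 0.05 * rng.gauss(0, 1)
x_hat = noisy_trisection_max(f, 0.0, 1.0)
```

Because each comparison discards a constant fraction of the interval, the number of probes grows only logarithmically in the desired accuracy, which is what lets this style of policy avoid estimating a separate utility parameter for each of the $N$ products.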

