Learning a Discrete Set of Optimal Allocation Rules in a Queueing System with Unknown Service Rate
Motivated by the wide range of modern applications of the Erlang-B blocking model beyond communication networks and call centers, to sizing and pricing in production systems, messaging systems, and app-based parking systems, we study admission control for such a system, but with unknown arrival and service rates. In our model, at every job arrival, a dispatcher decides whether to assign the job to an available server or to block it. Every served job yields a fixed reward for the dispatcher but also incurs a cost per unit time of service. Our goal is to design a dispatching policy that maximizes the long-term average reward for the dispatcher, based on observing only the arrival times and the state of the system at each arrival, which reflects a realistic sampling of such systems. Critically, the dispatcher observes neither service times nor departure times, so standard reinforcement-learning approaches that rely on reward signals do not apply. Hence, we develop our learning-based dispatch scheme as a parametric learning problem à la self-tuning adaptive control. In our problem, certainty-equivalent control switches between an "always admit if room" policy (which explores infinitely often) and a "never admit" policy (which immediately terminates learning), a situation distinct from those studied in the adaptive control literature. Our learning scheme therefore judiciously uses the "always admit if room" policy so that learning does not stall. We prove that, for all service rates, the proposed policy asymptotically learns to take the optimal action, and we present finite-time regret guarantees. The extreme contrast between the certainty-equivalent optimal policies leads to learning difficulties that show up in our regret bounds for the different parameter regimes: constant regret in one regime versus regret growing logarithmically in the other.
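To make the dichotomy between the two certainty-equivalent policies concrete, here is a short worked calculation under the Erlang-B assumptions stated above. The symbols R (the fixed reward per served job), c (the cost per unit time of service), and mu (the unknown exponential service rate) are our notation for the quantities the abstract describes, not the paper's; this is a sketch of the reasoning, not the paper's derivation.

% Each admitted job earns R and occupies a server for an Exp(mu) time,
% costing c per unit time, so its expected net value is
\[
  \text{(expected net reward per admitted job)} \;=\; R - c \,\mathbb{E}[\text{service time}] \;=\; R - \frac{c}{\mu}.
\]
% Every admitted job has the same expected net value, and admitting affects
% the future only by occupying a server, so the certainty-equivalent optimal
% policy is one of the two extremes the abstract names:
\[
  \pi^*(\mu) \;=\;
  \begin{cases}
    \text{always admit if a server is free}, & \mu > c/R,\\[2pt]
    \text{never admit}, & \mu < c/R.
  \end{cases}
\]

Learning thus amounts to deciding on which side of the threshold c/R the unknown mu lies, using only arrival-epoch observations.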
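As an illustration of the scheme's structure, the following is a minimal simulation sketch. It assumes Poisson arrivals, exponential service, a doubling forced-exploration schedule, and a generic interval-censored maximum-likelihood estimate of the service rate built only from occupancies observed at arrival epochs; every name here (R, c, lam, mu, n_servers, mle_service_rate, dispatch) and these design choices are illustrative assumptions, not the paper's exact construction.

import math
import random

def mle_service_rate(data, lo=1e-6, hi=100.0, iters=60):
    """Crude MLE of mu from interval-censored observations.

    data: tuples (dt, n, d): n servers were busy at the start of an
    inter-arrival interval of length dt, and d of them departed during it.
    By memorylessness, d ~ Binomial(n, 1 - e^{-mu dt}); the score of
    sum_k [ d log(1 - e^{-mu dt}) - (n - d) mu dt ] decreases in mu,
    so we locate its root by bisection.
    """
    def score(mu):
        s = 0.0
        for dt, n, d in data:
            if d:
                s += d * dt * math.exp(-mu * dt) / (1.0 - math.exp(-mu * dt))
            s -= (n - d) * dt
        return s
    if score(hi) > 0.0:          # MLE beyond the search range: clamp
        return hi
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if score(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def dispatch(R=1.0, c=2.0, lam=1.0, mu=3.0, n_servers=5,
             num_arrivals=50_000, seed=0):
    """Certainty-equivalent dispatch with sparse forced exploration.
    mu drives the simulation only; the control logic never reads it."""
    rng = random.Random(seed)
    t, busy = 0.0, []                 # hidden departure epochs of busy servers
    data, mu_hat = [], float("inf")   # start optimistic: admit until told otherwise
    prev_occ, explore_next, net_reward = 0, 1, 0.0
    for k in range(1, num_arrivals + 1):
        dt = rng.expovariate(lam)
        t += dt
        busy = [dep for dep in busy if dep > t]
        occ = len(busy)                          # the only observation at this arrival
        if prev_occ:                             # departures inferred from occupancy change
            data.append((dt, prev_occ, prev_occ - occ))
        admit = mu_hat > c / R                   # certainty-equivalent action
        if k == explore_next:                    # forced exploration: always admit if room
            admit, explore_next = True, 2 * explore_next
            if data:
                mu_hat = mle_service_rate(data)  # re-estimate at exploration epochs
        if admit and occ < n_servers:
            s = rng.expovariate(mu)              # service time, hidden from the dispatcher
            busy.append(t + s)
            net_reward += R - c * s
        prev_occ = len(busy)
    return net_reward / t, mu_hat

if __name__ == "__main__":
    avg, est = dispatch()            # here mu = 3.0 > c/R = 2.0, so admitting is optimal
    print(f"avg reward/time {avg:.3f}, mu_hat {est:.3f}")

Note how the sketch mirrors the tension the abstract describes: once mu_hat falls below c/R, the certainty-equivalent action is "never admit", which would cut off all further observations; the sparse exploration epochs are what keep the estimate converging. Rerunning with mu below c/R (say mu=1.5) exercises the other regime.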