Stream-based Online Active Learning in a Contextual Multi-Armed Bandit Framework

11 Jul 2016 · Linqi Song

We study stream-based online active learning in a contextual multi-armed bandit framework, in which the reward depends on both the arm and the context. In a stream-based active learning setting, obtaining the ground truth of the reward is costly, and a conventional contextual multi-armed bandit algorithm fails to achieve a sublinear regret because of this cost. The algorithm therefore needs to decide whether or not to request the ground truth of the reward at the current time slot. In our setting, a query request for the ground truth is sent to the annotator together with some prior information about the ground truth, and the query cost varies with the accuracy of that prior information. Our algorithm carries out two main operations: refinement of the context and arm spaces, and selection of actions. The partitions of the context space and the arm space are maintained for a certain number of time slots and then become finer as more information about the rewards accumulates. Arms are selected, and ground-truth queries issued, strategically so as to maximize the total reward. We show analytically that the regret is sublinear and of the same order as that of conventional contextual multi-armed bandit algorithms in which no query cost is considered.
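The abstract gives no pseudocode, so the following is a minimal illustrative sketch (in Python) of the two operations it describes: a uniform partition of a one-dimensional context space that is refined on a fixed schedule, and a query rule that requests the costly ground-truth reward only while the estimate for the chosen (cell, arm) pair is still immature. Every name and constant here (`PartitionedBandit`, `refine_every`, `query_threshold`, the per-cell UCB index) is an assumption of ours, not the paper's actual construction.

```python
import numpy as np

class PartitionedBandit:
    """Toy contextual bandit over contexts x in [0, 1] with an active-query rule."""

    def __init__(self, n_arms, n_cells=2, refine_every=500, query_threshold=20):
        self.n_arms = n_arms
        self.n_cells = n_cells            # current granularity of the context partition
        self.refine_every = refine_every  # how long a partition is maintained
        self.query_threshold = query_threshold
        self.t = 0
        self._alloc()

    def _alloc(self):
        # Reward sums and pull counts for every (context cell, arm) pair.
        self.sums = np.zeros((self.n_cells, self.n_arms))
        self.counts = np.zeros((self.n_cells, self.n_arms))

    def _cell(self, x):
        return min(int(x * self.n_cells), self.n_cells - 1)

    def select(self, x):
        """Pick an arm for context x via a UCB-style index within its cell."""
        c = self._cell(x)
        n = self.counts[c] + 1e-9         # avoid division by zero; unpulled arms get a huge bonus
        ucb = self.sums[c] / n + np.sqrt(2.0 * np.log(self.t + 1) / n)
        return c, int(np.argmax(ucb))

    def should_query(self, c, a):
        # Pay the annotation cost only while the (cell, arm) estimate is immature.
        return self.counts[c, a] < self.query_threshold

    def update(self, c, a, r):
        self.sums[c, a] += r
        self.counts[c, a] += 1

    def step(self):
        self.t += 1
        # Refine the partition periodically as reward information accumulates.
        if self.t % self.refine_every == 0:
            self.n_cells *= 2
            self._alloc()                 # statistics restart; a toy simplification


rng = np.random.default_rng(0)
bandit = PartitionedBandit(n_arms=3)
for _ in range(2000):
    bandit.step()
    x = rng.random()                              # incoming context
    c, a = bandit.select(x)
    if bandit.should_query(c, a):                 # request the ground truth (costly)
        r = float(rng.random() < 0.3 + 0.2 * a * x)   # simulated Bernoulli reward
        bandit.update(c, a, r)
    # otherwise the reward is never observed: the active-learning trade-off
```

Resetting the statistics at each refinement keeps the sketch short; a faithful implementation would presumably carry information from the coarser partition into the finer one, and would tune the refinement and query schedules to obtain the sublinear regret claimed above.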
