Action-Gap Phenomenon in Reinforcement Learning

NeurIPS 2011  ·  Amir-Massoud Farahmand

Many practitioners of reinforcement learning have observed that the performance of the agent often gets very close to optimal even though the estimated (action-)value function is still far from the optimal one. The goal of this paper is to explain and formalize this phenomenon by introducing the concept of the action-gap regularity. As a typical result, we prove that for an agent following the greedy policy \(\hat{\pi}\) with respect to an action-value function \(\hat{Q}\), the performance loss \(\mathbb{E}[V^*(X) - V^{\hat{\pi}}(X)]\) is upper bounded by \(O(\| \hat{Q} - Q^*\|_\infty^{1+\zeta})\), in which \(\zeta \ge 0\) is the parameter quantifying the action-gap regularity. For \(\zeta > 0\), our results indicate a smaller performance loss than previous analyses had suggested. Finally, we show how this regularity affects the performance of the family of approximate value iteration algorithms.
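The intuition behind the action-gap regularity can be illustrated with a small numerical sketch (not taken from the paper; all names, values, and the error level are hypothetical): when the gap \(|Q^*(x, a_1) - Q^*(x, a_2)|\) between the best and second-best action is large at a state, even a sizeable sup-norm error in \(\hat{Q}\) cannot flip the greedy choice there, so the greedy policy incurs no loss at that state.

```python
# Illustrative sketch (not from the paper): a toy two-action example showing
# why a large action gap makes the greedy policy robust to Q-estimation error.
# q_star, eps, and the state set are hypothetical values chosen for illustration.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical optimal action-values Q*(x, a) for 5 states and 2 actions.
q_star = np.array([
    [1.0, 0.2],   # large action gap (0.8)
    [0.9, 0.1],   # large action gap (0.8)
    [0.5, 0.45],  # small action gap (0.05)
    [0.7, 0.0],   # large action gap (0.7)
    [0.3, 0.25],  # small action gap (0.05)
])

action_gap = np.abs(q_star[:, 0] - q_star[:, 1])
optimal_action = q_star.argmax(axis=1)

# Perturb Q* to mimic an estimate Q_hat with sup-norm error at most eps.
eps = 0.1
q_hat = q_star + rng.uniform(-eps, eps, size=q_star.shape)
greedy_action = q_hat.argmax(axis=1)

# The greedy policy can only disagree with the optimal one at states whose
# action gap is smaller than 2 * eps; at the other states the value-estimation
# error causes no performance loss at all.
for x in range(len(q_star)):
    print(f"state {x}: gap={action_gap[x]:.2f}, "
          f"greedy matches optimal: {greedy_action[x] == optimal_action[x]}")
```

Running this sketch, only the small-gap states can be affected by the perturbation, which mirrors the abstract's claim: the fewer near-ties there are (larger \(\zeta\)), the smaller the performance loss for a given value-function error.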
