A Convergence Result for Regularized Actor-Critic Methods

13 Jul 2019 · Wesley Suttle, Zhuoran Yang, Kaiqing Zhang, Ji Liu

In this paper, we present a proof of convergence with probability one, under suitable conditions, for a certain class of actor-critic algorithms that find approximate solutions to entropy-regularized MDPs, using the machinery of stochastic approximation. To obtain this overall result, we prove the convergence of policy evaluation with general regularizers under linear approximation architectures, and we show convergence of entropy-regularized policy improvement.
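To make the setting concrete, the sketch below illustrates one generic entropy-regularized actor-critic step with a linear critic, in the spirit of the algorithms the paper analyzes; it is not the paper's exact algorithm or proof apparatus. The feature maps `phi_s`/`phi_sa`, the step sizes `alpha`/`beta`, and the temperature `tau` are illustrative placeholders, not quantities taken from the paper.

```python
import numpy as np

def softmax_policy(theta, phi_sa):
    """Boltzmann policy over actions; phi_sa has shape (num_actions, d)."""
    prefs = phi_sa @ theta
    prefs -= prefs.max()                 # numerical stability
    probs = np.exp(prefs)
    return probs / probs.sum()

def critic_td_update(w, phi_s, phi_s_next, reward, probs, tau=0.1,
                     gamma=0.99, alpha=0.05):
    """One TD(0) step on the linear value estimate V(s) = w . phi(s),
    with the policy's entropy at s added to the reward (entropy regularization)."""
    entropy_bonus = -tau * np.sum(probs * np.log(probs + 1e-12))
    td_error = (reward + entropy_bonus
                + gamma * (w @ phi_s_next) - (w @ phi_s))
    return w + alpha * td_error * phi_s, td_error

def actor_update(theta, phi_sa, action, td_error, probs, beta=0.01):
    """Policy-gradient step for the softmax actor, using the regularized
    critic's TD error as the (approximate) advantage signal."""
    grad_log_pi = phi_sa[action] - probs @ phi_sa
    return theta + beta * td_error * grad_log_pi
```

In a two-timescale scheme of the kind studied via stochastic approximation, the critic step size `alpha` would decay faster than the actor step size `beta` (or vice versa, depending on convention), so that one update effectively tracks the other; the constants above are placeholders only.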
