Zeroth-Order Actor-Critic

29 Jan 2022 · YuHeng Lei, Jianyu Chen, Shengbo Eben Li, Sifa Zheng

Evolution-based zeroth-order optimization methods and policy-gradient-based first-order methods are two promising alternatives for solving reinforcement learning (RL) problems, with complementary advantages. The former work with arbitrary policies, drive state-dependent and temporally extended exploration, and possess a robustness-seeking property, but suffer from high sample complexity; the latter are more sample efficient, but are restricted to differentiable policies and tend to learn less robust ones. To combine the strengths of both, we propose a novel Zeroth-Order Actor-Critic algorithm (ZOAC) that unifies the two approaches in an on-policy actor-critic architecture. In each iteration, ZOAC alternates between rollout collection with timestep-wise perturbation in parameter space, first-order policy evaluation (PEV), and zeroth-order policy improvement (PIM). We extensively evaluate the proposed method on a wide range of challenging continuous-control benchmarks using different types of policies, where ZOAC outperforms zeroth-order and first-order baseline algorithms.
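To make the iteration structure concrete, below is a minimal numpy-only sketch of the loop the abstract describes: rollouts collected with fresh parameter-space noise at every timestep, a first-order TD(0) update on the critic (PEV), and an advantage-weighted evolution-strategies step on the actor parameters (PIM). The toy environment, linear actor and critic, and all hyperparameters (`SIGMA`, `LR_ACTOR`, `LR_CRITIC`, rollout counts) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

OBS_DIM, ACT_DIM = 3, 1
SIGMA, GAMMA = 0.05, 0.99          # perturbation std, discount factor
LR_ACTOR, LR_CRITIC = 0.02, 0.1    # zeroth-order / first-order step sizes
N_ROLLOUTS, HORIZON = 16, 50

theta = np.zeros((ACT_DIM, OBS_DIM))  # deterministic linear policy (actor)
w = np.zeros(OBS_DIM)                 # linear state-value function (critic)


def env_step(s, a):
    """Toy linear dynamics standing in for a real continuous-control task."""
    s_next = 0.9 * s + 0.1 * a[0]
    reward = -float(s @ s) - 0.01 * float(a @ a)
    return s_next, reward


for iteration in range(200):
    # --- Rollout collection with timestep-wise perturbation in parameter space ---
    batch = []
    for _ in range(N_ROLLOUTS):
        s = rng.normal(size=OBS_DIM)
        for _ in range(HORIZON):
            eps = rng.normal(size=theta.shape)   # fresh noise at every timestep
            a = (theta + SIGMA * eps) @ s        # act with perturbed parameters
            s_next, r = env_step(s, a)
            batch.append((s, eps, r, s_next))
            s = s_next

    # --- First-order policy evaluation (PEV): semi-gradient TD(0) on the critic ---
    for s, _, r, s_next in batch:
        delta = r + GAMMA * (w @ s_next) - (w @ s)   # TD error
        w += LR_CRITIC * delta * s

    # --- Zeroth-order policy improvement (PIM): advantage-weighted ES step ---
    grad = np.zeros_like(theta)
    for s, eps, r, s_next in batch:
        adv = r + GAMMA * (w @ s_next) - (w @ s)     # TD error as advantage estimate
        grad += adv * eps
    theta += LR_ACTOR * grad / (len(batch) * SIGMA)
```

Note that only the actor is updated in a zeroth-order fashion, so it never needs to be differentiable; the critic carries the first-order, sample-efficient part of the update, which is the division of labor the abstract motivates.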

