Merging Deterministic Policy Gradient Estimations with Varied Bias-Variance Tradeoff for Effective Deep Reinforcement Learning

24 Nov 2019 · Gang Chen

Deep reinforcement learning (DRL) on Markov decision processes (MDPs) with continuous action spaces is often approached by directly training parametric policies along the direction of estimated policy gradients (PGs). Previous research has shown that the performance of these PG algorithms depends heavily on the bias-variance tradeoff involved in estimating and using PGs. A notable approach to balancing this tradeoff is to merge on-policy and off-policy gradient estimations. However, existing PG merging methods can be sample inefficient and are not suitable for directly training deterministic policies. To address these issues, this paper introduces elite PGs and strengthens their variance-reduction effect by adopting elitism and policy consolidation techniques that regularize policy training based on behavioral knowledge extracted from elite trajectories. Meanwhile, we propose a two-step method for merging elite PGs and conventional PGs as a new extension of the conventional interpolation merging method. At both the theoretical and experimental levels, we show that two-step merging and interpolation merging induce varied bias-variance tradeoffs during policy training, enabling us to use elite PGs effectively while mitigating their adverse impact on the performance of trained policies. Our experiments also show that two-step merging can outperform interpolation merging and several state-of-the-art algorithms on six benchmark control tasks.
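As a rough illustration of the merging idea described in the abstract, the sketch below linearly interpolates an on-policy and an off-policy PG estimate, then merges the result with an elite PG estimate in a second step. This is a minimal sketch under stated assumptions: the function names, the mixing coefficients nu and eta, and the ordering of the two steps are hypothetical and do not reproduce the paper's exact formulation.

```python
import numpy as np

def interpolation_merge(g_on, g_off, nu):
    """Interpolation merging of two policy-gradient estimates.

    g_on  : on-policy PG estimate (typically lower bias, higher variance)
    g_off : off-policy PG estimate (typically higher bias, lower variance)
    nu    : mixing coefficient in [0, 1]; larger nu weights the
            on-policy estimate more heavily.
    """
    return nu * g_on + (1.0 - nu) * g_off

def two_step_merge(g_on, g_off, g_elite, nu, eta):
    """A two-step extension (illustrative assumption, not the paper's
    exact method): first merge the conventional on-policy and off-policy
    estimates, then merge the result with an elite-trajectory PG estimate.
    """
    g_conv = interpolation_merge(g_on, g_off, nu)
    return eta * g_elite + (1.0 - eta) * g_conv

# Toy usage with small gradient vectors
g_on, g_off, g_elite = np.ones(3), np.zeros(3), 0.5 * np.ones(3)
print(interpolation_merge(g_on, g_off, nu=0.7))
print(two_step_merge(g_on, g_off, g_elite, nu=0.7, eta=0.3))
```

Varying nu (and eta) trades bias against variance: pushing the coefficients toward the lower-variance estimates stabilizes updates at the cost of additional bias, which is the tradeoff the paper studies.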
