Off-Policy TD Control

Clipped Double Q-learning

Introduced by Fujimoto et al. in Addressing Function Approximation Error in Actor-Critic Methods

Clipped Double Q-learning is a variant of Double Q-learning that upper-bounds the less biased Q estimate $Q_{\theta_{2}}$ by the biased estimate $Q_{\theta_{1}}$. This is equivalent to taking the minimum of the two estimates, resulting in the following target update:

$$ y_{1} = r + \gamma\min_{i=1,2}Q_{\theta'_{i}}\left(s', \pi_{\phi_{1}}\left(s'\right)\right) $$

The motivation for this extension is that vanilla Double Q-learning can be ineffective when the target and current networks are too similar, e.g. with a slowly changing policy in an actor-critic framework.
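
As a rough sketch of how this target can be computed in practice, the snippet below evaluates the update for a batch of transitions using PyTorch. The function name, the names `actor`, `critic_1_target`, `critic_2_target`, the `gamma` default, and the `done`-masking of terminal states are illustrative assumptions rather than part of the original description (TD3 also adds further components, such as target policy smoothing, that are omitted here).

```python
import torch


def clipped_double_q_target(reward, next_state, done,
                            actor, critic_1_target, critic_2_target,
                            gamma=0.99):
    """Compute y_1 = r + gamma * min_i Q_theta'_i(s', pi_phi1(s'))."""
    with torch.no_grad():
        # Action selected by the policy for the next state.
        next_action = actor(next_state)
        # Evaluate the same state-action pair with both target critics.
        q1 = critic_1_target(next_state, next_action)
        q2 = critic_2_target(next_state, next_action)
        # Clipping: take the element-wise minimum of the two estimates,
        # i.e. upper-bound Q_theta2 by Q_theta1.
        target_q = torch.min(q1, q2)
        # Standard TD target; (1 - done) zeroes the bootstrap at terminals.
        return reward + gamma * (1.0 - done) * target_q
```

Both critics are then regressed toward this single target $y_{1}$, which is what lets the clipped estimate bound the value learned by either network.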

Source: Addressing Function Approximation Error in Actor-Critic Methods

Tasks


| Task | Papers | Share |
|---|---|---|
| Continuous Control | 21 | 41.18% |
| OpenAI Gym | 7 | 13.73% |
| Autonomous Driving | 4 | 7.84% |
| Decision Making | 4 | 7.84% |
| Meta-Learning | 3 | 5.88% |
| Atari Games | 2 | 3.92% |
| Energy Management | 1 | 1.96% |
| Imitation Learning | 1 | 1.96% |
| Feature Engineering | 1 | 1.96% |
