Off-Policy TD Control

# Double Q-learning

Introduced by van Hasselt in *Double Q-learning*

Double Q-learning is an off-policy reinforcement learning algorithm that uses two value estimators to counteract the overestimation bias of traditional Q-learning.

The max operator in standard Q-learning and DQN uses the same values both to select and to evaluate an action. This makes it more likely to select overestimated values, resulting in overoptimistic value estimates. To make this explicit, the standard Q-learning target can be rewritten as:

$$Y^{Q}_{t} = R_{t+1} + \gamma Q\left(S_{t+1}, \arg\max_{a} Q\left(S_{t+1}, a; \theta_{t}\right); \theta_{t}\right)$$
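The overestimation effect can be seen in a toy simulation (our illustration, not from the source): even when every true action value is zero, taking the max over noisy estimates is biased upward.

```python
import numpy as np

rng = np.random.default_rng(0)

# All true action values are 0, but each estimate carries zero-mean noise.
n_actions, n_trials = 10, 10_000
true_values = np.zeros(n_actions)
estimates = true_values + rng.normal(scale=1.0, size=(n_trials, n_actions))

# The max over noisy estimates is systematically above the true max of 0.
max_of_estimates = estimates.max(axis=1).mean()
print(max_of_estimates)  # noticeably greater than 0
```

Because the same noisy values are used both to pick the best action and to score it, positive noise is never averaged out; this is exactly the coupling that Double Q-learning removes.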

The idea of Double Q-learning is to decouple the selection from the evaluation. Its target can then be written as:

$$Y^{DoubleQ}_{t} = R_{t+1} + \gamma Q\left(S_{t+1}, \arg\max_{a} Q\left(S_{t+1}, a; \theta_{t}\right); \theta'_{t}\right)$$

Here the selection of the action in the $\arg\max$ is still due to the online weights $\theta_{t}$, but a second set of weights $\theta'_{t}$ is used to fairly evaluate the value of this policy.
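The decoupled update can be sketched in the tabular setting with two Q-tables playing the roles of $\theta_{t}$ and $\theta'_{t}$ (a minimal sketch; the function and variable names are ours, not the paper's):

```python
import numpy as np

def double_q_update(Q_sel, Q_eval, s, a, r, s_next,
                    alpha=0.1, gamma=0.99, done=False):
    """One tabular Double Q-learning step.

    Q_sel selects the greedy next action (the role of theta_t);
    Q_eval evaluates that action (the role of theta'_t).
    Only Q_sel is updated here.
    """
    if done:
        target = r
    else:
        a_star = np.argmax(Q_sel[s_next])            # selection via Q_sel
        target = r + gamma * Q_eval[s_next, a_star]  # evaluation via Q_eval
    Q_sel[s, a] += alpha * (target - Q_sel[s, a])

# Usage: the two tables swap roles at random on each step.
n_states, n_actions = 5, 3
Q1 = np.zeros((n_states, n_actions))
Q2 = np.zeros((n_states, n_actions))
rng = np.random.default_rng(0)
if rng.random() < 0.5:
    double_q_update(Q1, Q2, s=0, a=1, r=1.0, s_next=2)
else:
    double_q_update(Q2, Q1, s=0, a=1, r=1.0, s_next=2)
```

In the deep variant, the second table is replaced by the target network's weights $\theta'_{t}$, so no extra parameters are needed beyond those DQN already maintains.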

Source: Double Q-learning

#### Papers


| Task | Papers | Share |
| --- | --- | --- |
| Atari Games | 13 | 21.67% |
| Continuous Control | 6 | 10.00% |
| OpenAI Gym | 6 | 10.00% |
| Decision Making | 6 | 10.00% |
| Multi-agent Reinforcement Learning | 4 | 6.67% |
| Efficient Exploration | 3 | 5.00% |
| Ensemble Learning | 2 | 3.33% |
| Imitation Learning | 2 | 3.33% |
| Starcraft | 2 | 3.33% |
