
Double DQN

Introduced by van Hasselt et al. in Deep Reinforcement Learning with Double Q-learning

A Double Deep Q-Network, or Double DQN, utilises Double Q-learning to reduce overestimation by decomposing the max operation in the target into action selection and action evaluation. The greedy action is selected according to the online network, but the target network is used to estimate its value. The update is the same as for DQN, but with the target $Y^{DQN}_{t}$ replaced by:

$$ Y^{DoubleDQN}_{t} = R_{t+1}+\gamma{Q}\left(S_{t+1}, \arg\max_{a}Q\left(S_{t+1}, a; \theta_{t}\right);\theta_{t}^{-}\right) $$

Compared to the original formulation of Double Q-Learning, in Double DQN the weights of the second network $\theta^{'}_{t}$ are replaced with the weights of the target network $\theta_{t}^{-}$ for the evaluation of the current greedy policy.
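The target above can be computed in a few lines. Below is a minimal sketch of that computation, assuming PyTorch-style Q-networks that map a batch of states to per-action values; the function name, argument shapes, and terminal-state handling are illustrative assumptions, not part of the original paper.

```python
import torch


def double_dqn_target(reward, next_state, done, online_net, target_net, gamma=0.99):
    """Compute the Double DQN target Y_t^{DoubleDQN} for a batch of transitions.

    Action selection uses the online network (theta_t); action evaluation
    uses the target network (theta_t^-).
    """
    with torch.no_grad():
        # argmax_a Q(S_{t+1}, a; theta_t): greedy action from the online network
        next_q_online = online_net(next_state)                  # (batch, num_actions)
        greedy_action = next_q_online.argmax(dim=1, keepdim=True)

        # Q(S_{t+1}, argmax_a ...; theta_t^-): evaluate that action with the target network
        next_q_target = target_net(next_state).gather(1, greedy_action).squeeze(1)

        # Y_t = R_{t+1} + gamma * Q(S_{t+1}, a*; theta_t^-), zeroed at terminal states
        return reward + gamma * (1.0 - done.float()) * next_q_target
```

The resulting target is then regressed against $Q(S_t, A_t; \theta_t)$ exactly as in standard DQN, e.g. with a Huber or squared-error loss on the online network's prediction for the taken action.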

Source: Deep Reinforcement Learning with Double Q-learning
