TD3 builds on the DDPG algorithm for reinforcement learning, with three modifications aimed at tackling overestimation bias in the value function. In particular, it utilises clipped double Q-learning (taking the minimum of two critic estimates when forming the target), delayed updates of the policy and target networks, and target policy smoothing (adding clipped noise to the target action, which resembles a SARSA-style update and is safer, since it assigns higher value to actions that are robust to perturbations).
Source: Addressing Function Approximation Error in Actor-Critic Methods
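To make the three modifications concrete, here is a minimal PyTorch-style sketch of one TD3 update step; it is an illustration under stated assumptions, not the paper's own code. The names are assumptions: `actor`/`actor_target` and `critic1`/`critic2` (with their targets) are assumed to be `nn.Module` networks, `critic_opt` is assumed to optimise both critics jointly, and `batch` holds batched tensors. The default hyperparameters (`noise_std=0.2`, `noise_clip=0.5`, `policy_delay=2`, `tau=0.005`) follow the values reported in the paper.

```python
import torch
import torch.nn.functional as F

def td3_update(batch, actor, actor_target, critic1, critic2,
               critic1_target, critic2_target, actor_opt, critic_opt, step,
               gamma=0.99, tau=0.005, policy_delay=2,
               noise_std=0.2, noise_clip=0.5, max_action=1.0):
    state, action, reward, next_state, done = batch

    with torch.no_grad():
        # Target policy smoothing: perturb the target action with clipped
        # Gaussian noise, so the target favours actions robust to perturbations.
        noise = (torch.randn_like(action) * noise_std).clamp(-noise_clip, noise_clip)
        next_action = (actor_target(next_state) + noise).clamp(-max_action, max_action)

        # Clipped double Q-learning: the elementwise minimum over the two
        # target critics counteracts overestimation bias.
        target_q = reward + gamma * (1.0 - done) * torch.min(
            critic1_target(next_state, next_action),
            critic2_target(next_state, next_action),
        )

    # Both critics regress toward the same clipped target.
    critic_loss = (F.mse_loss(critic1(state, action), target_q)
                   + F.mse_loss(critic2(state, action), target_q))
    critic_opt.zero_grad()
    critic_loss.backward()
    critic_opt.step()

    # Delayed updates: move the actor and all target networks only every
    # `policy_delay` critic updates (d = 2 in the paper).
    if step % policy_delay == 0:
        actor_loss = -critic1(state, actor(state)).mean()
        actor_opt.zero_grad()
        actor_loss.backward()
        actor_opt.step()

        for net, target in ((actor, actor_target),
                            (critic1, critic1_target),
                            (critic2, critic2_target)):
            for p, p_t in zip(net.parameters(), target.parameters()):
                p_t.data.mul_(1.0 - tau).add_(tau * p.data)  # Polyak averaging
```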
Papers using TD3, broken down by task:

| Task | Papers | Share |
|---|---|---|
| Reinforcement Learning (RL) | 59 | 40.14% |
| Continuous Control | 27 | 18.37% |
| OpenAI Gym | 8 | 5.44% |
| Decision Making | 7 | 4.76% |
| Autonomous Driving | 5 | 3.40% |
| Offline RL | 4 | 2.72% |
| Meta-Learning | 3 | 2.04% |
| Benchmarking | 3 | 2.04% |
| D4RL | 2 | 1.36% |