Learning to Play in a Day: Faster Deep Reinforcement Learning by Optimality Tightening

We propose a novel training algorithm for reinforcement learning which combines the strength of deep Q-learning with a constrained optimization approach to tighten optimality and encourage faster reward propagation. Our novel technique makes deep reinforcement learning more practical by drastically reducing the training time. We evaluate the performance of our approach on the 49 games of the challenging Arcade Learning Environment, and report significant improvements in both training time and accuracy.
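The core idea can be illustrated with a small sketch (not the authors' code): in addition to the usual one-step TD error, Q-value estimates are penalized whenever they violate lower bounds derived from future rewards or upper bounds derived from past rewards along the sampled trajectory. The function name, array layout, and hyperparameters below are illustrative assumptions, written with numpy.

```python
import numpy as np

def optimality_tightening_loss(q, q_target, j, actions, rewards, gamma=0.99,
                               k_max=4, penalty=4.0):
    """Illustrative penalty-augmented Q-learning loss for one sampled index j.

    q:        current Q estimates, shape (T, n_actions)   (hypothetical arrays)
    q_target: target-network Q estimates, same shape
    actions, rewards: actions taken and rewards received along the trajectory
    """
    T = len(rewards)
    q_ja = q[j, actions[j]]

    # Standard one-step TD target.
    if j + 1 < T:
        td_target = rewards[j] + gamma * q_target[j + 1].max()
    else:
        td_target = rewards[j]
    loss = (q_ja - td_target) ** 2

    # Lower bounds from future rewards: Q(s_j, a_j) should be at least the
    # discounted return over k steps plus the bootstrapped value afterwards.
    lower = -np.inf
    for k in range(1, k_max + 1):
        if j + k + 1 >= T:
            break
        ret = sum(gamma ** i * rewards[j + i] for i in range(k + 1))
        lower = max(lower, ret + gamma ** (k + 1) * q_target[j + k + 1].max())
    if np.isfinite(lower):
        loss += penalty * max(lower - q_ja, 0.0) ** 2  # violated lower bound

    # Upper bounds from past rewards: an earlier Q-value minus the rewards
    # collected since then constrains Q(s_j, a_j) from above.
    upper = np.inf
    for k in range(1, k_max + 1):
        if j - k < 0:
            break
        ret = sum(gamma ** i * rewards[j - k + i] for i in range(k))
        upper = min(upper, (q_target[j - k, actions[j - k]] - ret) / gamma ** k)
    if np.isfinite(upper):
        loss += penalty * max(q_ja - upper, 0.0) ** 2  # violated upper bound

    return loss
```

Because the bounds propagate reward information several steps forward and backward in one update, a violated bound pulls the Q-estimate toward a consistent value immediately rather than waiting for many one-step backups, which is the intuition behind the faster reward propagation claimed above.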
