Prioritized Sequence Experience Replay

Experience replay is widely used in deep reinforcement learning algorithms and allows agents to remember and learn from past experiences. To learn more efficiently, researchers proposed prioritized experience replay (PER), which samples important transitions more frequently. In this paper, we propose Prioritized Sequence Experience Replay (PSER), a framework for prioritizing sequences of experience, in an attempt both to learn more efficiently and to obtain better performance. We compare the performance of the PER and PSER sampling techniques in a tabular Q-learning environment and with DQN on the Atari 2600 benchmark. We prove theoretically that PSER is guaranteed to converge faster than PER and empirically show that PSER substantially improves upon PER.
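
The abstract describes prioritizing whole sequences of experience rather than isolated transitions. Below is a minimal illustrative sketch of one way such a buffer could work, assuming that a transition's new priority is propagated backwards to the preceding transitions with an exponential decay; the class name, the `decay` and `window` parameters, and the propagation rule are assumptions for illustration, not the paper's exact scheme.

```python
import numpy as np


class PrioritizedSequenceReplayBuffer:
    """Sketch of a sequence-prioritized replay buffer (assumed mechanism).

    When a transition receives a new priority (e.g. from its TD error),
    that priority is also propagated backwards, with decay, to the
    transitions that preceded it, so predecessors of an important
    transition become more likely to be sampled.
    """

    def __init__(self, capacity=10000, alpha=0.6, decay=0.65, window=5):
        self.capacity = capacity
        self.alpha = alpha      # priority exponent, as in standard PER
        self.decay = decay      # backward decay factor (assumed)
        self.window = window    # number of preceding steps to boost (assumed)
        self.data = [None] * capacity
        self.priorities = np.zeros(capacity, dtype=np.float64)
        self.pos = 0
        self.size = 0

    def add(self, transition):
        # New transitions get the current maximum priority so they are
        # sampled at least once, as in standard PER.
        max_prio = self.priorities[:self.size].max() if self.size > 0 else 1.0
        self.data[self.pos] = transition
        self.priorities[self.pos] = max_prio
        self.pos = (self.pos + 1) % self.capacity
        self.size = min(self.size + 1, self.capacity)

    def update_priority(self, index, td_error, eps=1e-6):
        new_prio = abs(td_error) + eps
        self.priorities[index] = max(self.priorities[index], new_prio)
        # Propagate the priority backwards along the stored sequence,
        # decaying it at each step.
        prio = new_prio
        for k in range(1, self.window + 1):
            prev = (index - k) % self.capacity
            if prev >= self.size:  # slot not yet filled
                break
            prio *= self.decay
            self.priorities[prev] = max(self.priorities[prev], prio)

    def sample(self, batch_size):
        # Sample indices proportionally to priority^alpha, as in PER.
        probs = self.priorities[:self.size] ** self.alpha
        probs /= probs.sum()
        indices = np.random.choice(self.size, batch_size, p=probs)
        return indices, [self.data[i] for i in indices]
```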
