Improving Computational Efficiency in Visual Reinforcement Learning via Stored Embeddings

Recent advances in off-policy deep reinforcement learning (RL) have led to impressive success in complex tasks from visual observations. Experience replay improves sample efficiency by reusing past experiences, and convolutional neural networks (CNNs) process high-dimensional inputs effectively. However, such techniques demand high memory and computational bandwidth. In this paper, we present Stored Embeddings for Efficient Reinforcement Learning (SEER), a simple modification of existing off-policy RL methods that addresses these computational and memory requirements. To reduce the computational overhead of gradient updates in CNNs, we freeze the lower layers of CNN encoders early in training, exploiting the observation that their parameters converge quickly. Additionally, we reduce memory requirements by storing low-dimensional latent vectors for experience replay instead of high-dimensional images, which enables an adaptive increase in replay buffer capacity, a technique that is particularly useful in memory-constrained settings. In our experiments, we show that SEER does not degrade the performance of RL agents while significantly saving computation and memory across a diverse set of DeepMind Control environments and Atari games.
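
Since the abstract describes the method only at a high level, the following is a minimal PyTorch sketch of its two ingredients: freezing the lower encoder layers and replaying low-dimensional latents. The architecture, the freeze schedule, and all names (`ConvEncoder`, `LatentReplayBuffer`, `FREEZE_STEP`) are illustrative assumptions, not the paper's reference implementation.

```python
# Illustrative sketch only; hyperparameters and shapes are assumptions.
import numpy as np
import torch
import torch.nn as nn

LATENT_DIM = 50        # assumed latent size
FREEZE_STEP = 100_000  # assumed step at which encoder parameters have converged

class ConvEncoder(nn.Module):
    """Toy CNN encoder: stacked conv layers plus a linear projection."""
    def __init__(self, latent_dim=LATENT_DIM):
        super().__init__()
        self.convs = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        self.proj = nn.LazyLinear(latent_dim)  # infers input size on first call

    def forward(self, obs):
        return self.proj(self.convs(obs))

def freeze_encoder(encoder: ConvEncoder) -> None:
    """Idea 1: stop gradient updates through the lower conv layers once
    their parameters have (approximately) converged early in training."""
    for p in encoder.convs.parameters():
        p.requires_grad_(False)

class LatentReplayBuffer:
    """Idea 2: store low-dimensional latents instead of raw frames.
    A 3x84x84 uint8 frame is ~21 KB, while a 50-dim float32 latent is
    200 B, so the same memory budget holds roughly 100x more transitions."""
    def __init__(self, capacity: int, latent_dim: int = LATENT_DIM):
        self.z = np.empty((capacity, latent_dim), dtype=np.float32)
        self.action = np.empty(capacity, dtype=np.int64)
        self.reward = np.empty(capacity, dtype=np.float32)
        self.next_z = np.empty((capacity, latent_dim), dtype=np.float32)
        self.idx, self.full, self.capacity = 0, False, capacity

    def add(self, z, action, reward, next_z):
        i = self.idx
        self.z[i], self.action[i] = z, action
        self.reward[i], self.next_z[i] = reward, next_z
        self.idx = (i + 1) % self.capacity
        self.full = self.full or self.idx == 0

    def sample(self, batch_size):
        n = self.capacity if self.full else self.idx
        j = np.random.randint(0, n, size=batch_size)
        return self.z[j], self.action[j], self.reward[j], self.next_z[j]
```

In this sketch, once training passes `FREEZE_STEP`, each new observation would be encoded once by the frozen encoder and only its latent pushed into the buffer, so subsequent gradient updates never touch the convolutional stack.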


Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Atari Games | Atari 2600 Alien | Rainbow+SEER | Score | 1172.6 | #36 |
| Atari Games | Atari 2600 Amidar | Rainbow+SEER | Score | 250.5 | #33 |
| Atari Games | Atari 2600 Bank Heist | Rainbow+SEER | Score | 276.6 | #39 |
| Atari Games | Atari 2600 Crazy Climber | Rainbow+SEER | Score | 28066 | #43 |
| Atari Games | Atari 2600 Krull | Rainbow+SEER | Score | 3277.5 | #45 |
| Atari Games | Atari 2600 Q*Bert | Rainbow+SEER | Score | 4123.5 | #45 |
| Atari Games | Atari 2600 Road Runner | Rainbow+SEER | Score | 11794 | #39 |
| Atari Games | Atari 2600 Seaquest | Rainbow+SEER | Score | 561.2 | #52 |
