
Distillation Strategies for Proximal Policy Optimization

Vision-based deep reinforcement learning (RL) typically gains performance by using high-capacity, relatively large convolutional neural networks (CNNs). However, a large network leads to higher inference costs (power, latency, silicon area, MAC count). Many inference optimizations have been developed for CNNs. Some techniques, such as sparsity, offer theoretical efficiency gains, but designing actual hardware to support them is difficult. Distillation, on the other hand, is a simple, general-purpose optimization technique that is broadly applicable for transferring knowledge from a trained, high-capacity teacher network to an untrained, low-capacity student network. DQN distillation extended the original distillation idea to transfer information stored in a high-performance, high-capacity teacher Q-function trained via the Deep Q-Learning (DQN) algorithm. Our work adapts DQN distillation to the actor-critic Proximal Policy Optimization (PPO) algorithm. PPO is simple to implement and achieves much higher performance than the seminal DQN algorithm. We show that a distilled PPO student can attain far higher performance than a DQN teacher. We also show that a low-capacity distilled student generally outperforms a low-capacity agent trained directly in the environment. Finally, we show that distillation, followed by "fine-tuning" in the environment, enables the distilled PPO student to achieve parity with teacher performance. In general, the lessons learned in this work should transfer to other modern actor-critic RL algorithms.
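
To make the policy-distillation idea concrete, below is a minimal sketch of distilling a trained teacher policy into a smaller student policy for discrete actions, assuming PyTorch and hypothetical `teacher`/`student` networks that map observations to action logits. The loss, temperature, and data-collection details here are illustrative assumptions, not the paper's exact recipe.

```python
# Minimal policy-distillation sketch (assumes PyTorch, discrete actions).
# The teacher and student are hypothetical networks returning action logits.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=1.0):
    """KL divergence from the (softened) teacher policy to the student policy."""
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    # KL(teacher || student), averaged over the batch
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")

def distill_step(student, teacher, states, optimizer, temperature=1.0):
    """One supervised update of the student on a batch of observations
    (e.g., collected by rolling out the teacher in the environment)."""
    with torch.no_grad():
        teacher_logits = teacher(states)
    student_logits = student(states)
    loss = distillation_loss(student_logits, teacher_logits, temperature)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

After distillation converges, the same student network can be "fine-tuned" with standard PPO updates in the environment, which is the step the abstract credits with closing the remaining gap to teacher performance.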
