ANS: Adaptive Network Scaling for Deep Rectifier Reinforcement Learning Models

6 Sep 2018  ·  Yueh-Hua Wu, Fan-Yun Sun, Yen-Yu Chang, Shou-De Lin ·

This work provides a thorough study of how reward scaling affects the performance of deep reinforcement learning agents. In particular, we ask: how does reward scaling affect non-saturating ReLU networks in RL? This question matters because ReLU is one of the most effective activation functions for deep learning models. We also propose an Adaptive Network Scaling (ANS) framework to find a suitable scale for the rewards during learning, yielding better performance. We conduct empirical studies to validate the proposed solution.
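To make the question concrete, the sketch below shows what reward scaling means in a standard RL loop: each raw environment reward is multiplied by a scale factor before being used for learning. This is only an illustrative stand-in; the ANS framework described in the paper adapts this scale during training, which is not reproduced here, and the `RewardScaler` class is a hypothetical helper, not code from the paper.

```python
class RewardScaler:
    """Multiplies raw environment rewards by a fixed scale factor.

    Illustrative only: ANS would adjust `scale` adaptively during
    learning rather than keeping it fixed.
    """

    def __init__(self, scale: float):
        self.scale = scale

    def __call__(self, raw_reward: float) -> float:
        # Scaled reward fed to the agent in place of the raw reward.
        return self.scale * raw_reward


scaler = RewardScaler(scale=0.1)
print(scaler(5.0))  # scaled reward: 0.5
```

Because ReLU activations are non-saturating, the magnitude of value targets (and hence gradients) grows with the reward scale, which is why a suitable scale can matter for training stability.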


