Adai: Separating the Effects of Adaptive Learning Rate and Momentum Inertia

29 Jun 2020 · Zeke Xie, Xinrui Wang, Huishuai Zhang, Issei Sato, Masashi Sugiyama

Adaptive Moment Estimation (Adam), which combines an adaptive learning rate with momentum, is the most popular stochastic optimizer for accelerating the training of deep neural networks. However, Adam often generalizes significantly worse than Stochastic Gradient Descent (SGD)...
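To make concrete what "combining an adaptive learning rate with momentum" means, below is a minimal sketch of the standard Adam update rule (Kingma & Ba, 2015), which the paper takes as its starting point. The hyperparameter names (`lr`, `beta1`, `beta2`, `eps`) and the toy quadratic objective are illustrative assumptions, not details taken from this paper.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: momentum (m) combined with a coordinate-wise adaptive learning rate (v)."""
    m = beta1 * m + (1 - beta1) * grad        # momentum: moving average of gradients
    v = beta2 * v + (1 - beta2) * grad ** 2   # adaptivity: moving average of squared gradients
    m_hat = m / (1 - beta1 ** t)              # bias correction for m
    v_hat = v / (1 - beta2 ** t)              # bias correction for v
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Toy usage on a quadratic objective f(theta) = 0.5 * ||theta||^2 (illustrative only)
theta = np.array([1.0, -2.0])
m = np.zeros_like(theta)
v = np.zeros_like(theta)
for t in range(1, 101):
    grad = theta                              # gradient of the toy objective
    theta, m, v = adam_step(theta, grad, m, v, t)
```

The point of the paper is that these two ingredients, the adaptive (per-coordinate) learning rate and the momentum (inertia) term, have distinct effects that can be studied and controlled separately.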
