DEAM: Adaptive Momentum with Discriminative Weight for Stochastic Optimization

25 Jul 2019 · Jiyang Bai, Yuxiang Ren, Jiawei Zhang

Optimization algorithms with momentum, e.g., ADAM, have been widely used for building deep learning models due to their faster convergence rates compared with stochastic gradient descent (SGD). Momentum helps accelerate SGD in the relevant directions during parameter updating, which can reduce the oscillations along the parameter update route...
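For background, the classical momentum update that the abstract refers to can be sketched as below. The fixed momentum weight `beta` in this sketch is the quantity DEAM proposes to adapt per iteration; the function and parameter names are illustrative and not taken from the paper.

```python
import numpy as np

def sgd_momentum_step(w, grad, velocity, lr=0.01, beta=0.9):
    """One step of classical momentum SGD (illustrative sketch).

    `velocity` accumulates an exponentially decaying sum of past
    gradients; `beta` is the momentum weight, which is fixed here
    but adapted per iteration in DEAM.
    """
    velocity = beta * velocity + grad
    w = w - lr * velocity
    return w, velocity

# Toy usage on the quadratic f(w) = 0.5 * ||w||^2, whose gradient is w.
w = np.array([5.0, -3.0])
v = np.zeros_like(w)
for _ in range(100):
    g = w  # gradient of the quadratic objective
    w, v = sgd_momentum_step(w, g, v)
print(w)  # converges toward the minimum at the origin
```

Because the velocity smooths successive gradients, updates along consistently signed directions accelerate while oscillating components partially cancel, which is the behavior the abstract describes.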
