no code implementations • 20 Jul 2023 • Meixuan He, Yuqing Liang, Jinlan Liu, Dongpo Xu
Adam is a commonly used stochastic optimization algorithm in machine learning.
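For reference, a minimal sketch of the standard Adam update this entry refers to (NumPy-based; the hyperparameter values lr, beta1, beta2, and eps are the usual defaults, not values taken from the paper):

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam step: exponential moving averages of the gradient and its square, with bias correction."""
    m = beta1 * m + (1 - beta1) * grad        # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2   # second-moment estimate
    m_hat = m / (1 - beta1 ** t)              # bias-corrected moments (t is the step count, starting at 1)
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```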
no code implementations • 9 May 2023 • Yiming Jiang, Jinlan Liu, Dongpo Xu, Danilo P. Mandic
Adam-type algorithms have become a preferred choice for optimisation in the deep learning setting; however, despite their success, their convergence is still not well understood.
no code implementations • 30 May 2022 • Dongpo Xu, Jinlan Liu, Yinghua Lu, Jun Kong, Danilo Mandic
The stochastic momentum method is a commonly used acceleration technique for solving large-scale stochastic optimization problems in artificial neural networks.
1 code implementation • 12 Jun 2021 • Kun Zeng, Jinlan Liu, Zhixia Jiang, Dongpo Xu
Momentum stochastic gradient descent uses the accumulated gradient as the update direction for the current parameters, which yields faster training.
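The accumulated-gradient direction mentioned here is the classic momentum (heavy-ball) SGD rule; a minimal sketch, with illustrative values for the learning rate lr and momentum coefficient mu rather than settings from the paper:

```python
import numpy as np

def momentum_sgd_step(theta, grad, velocity, lr=0.01, mu=0.9):
    """Plain SGD steps along -grad; momentum SGD steps along the accumulated past gradients."""
    velocity = mu * velocity + grad   # accumulate gradients into a velocity term
    theta = theta - lr * velocity     # use the accumulation as the update direction
    return theta, velocity
```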
2 code implementations • 12 Jun 2021 • Kun Zeng, Jinlan Liu, Zhixia Jiang, Dongpo Xu
The adaptive gradient algorithm (AdaGrad) and its variants, such as RMSProp, Adam, and AMSGrad, have been widely used in deep learning.
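For context, a minimal sketch of the AdaGrad update that these variants build on: each coordinate's step is scaled by the root of its accumulated squared gradients (lr and eps below are illustrative defaults, not values from the paper):

```python
import numpy as np

def adagrad_step(theta, grad, sq_accum, lr=0.01, eps=1e-8):
    """AdaGrad: per-coordinate learning rates shrink as squared gradients accumulate."""
    sq_accum = sq_accum + grad ** 2
    theta = theta - lr * grad / (np.sqrt(sq_accum) + eps)
    return theta, sq_accum
```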