Search Results for author: Jinlan Liu

Found 5 papers, 2 papers with code

UAdam: Unified Adam-Type Algorithmic Framework for Non-Convex Stochastic Optimization

no code implementations • 9 May 2023 • Yiming Jiang, Jinlan Liu, Dongpo Xu, Danilo P. Mandic

Adam-type algorithms have become a preferred choice for optimisation in deep learning; however, despite their success, their convergence is still not well understood. (A sketch of a generic Adam-type update appears after this entry.)

Stochastic Optimization
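For reference, a minimal NumPy sketch of the vanilla Adam update, one member of the Adam-type family this framework covers; the function name, signature, and default hyperparameters are illustrative, not taken from the paper:

```python
import numpy as np

def adam_type_step(theta, grad, m, v, t, lr=1e-3,
                   beta1=0.9, beta2=0.999, eps=1e-8):
    """One generic Adam-type update (here: vanilla Adam).

    A first-moment estimate m and a second-moment estimate v
    rescale the step taken along the gradient direction.
    """
    m = beta1 * m + (1 - beta1) * grad       # momentum (first-moment) buffer
    v = beta2 * v + (1 - beta2) * grad ** 2  # squared-gradient (second-moment) buffer
    m_hat = m / (1 - beta1 ** t)             # bias correction, t >= 1
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```

Variants such as AMSGrad differ mainly in how the second-moment term v is maintained, which is the axis along which a unified framework of this kind typically generalizes.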

Last-iterate convergence analysis of stochastic momentum methods for neural networks

no code implementations • 30 May 2022 • Dongpo Xu, Jinlan Liu, Yinghua Lu, Jun Kong, Danilo Mandic

The stochastic momentum method is a commonly used acceleration technique for solving large-scale stochastic optimization problems in artificial neural networks. (A sketch of the momentum update appears after this entry.)

Stochastic Optimization
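For reference, a minimal NumPy sketch of a standard (heavy-ball) stochastic momentum step; the function name and hyperparameter values are illustrative, not from the paper:

```python
import numpy as np

def momentum_sgd_step(theta, grad, buf, lr=0.01, momentum=0.9):
    """Heavy-ball momentum: the buffer accumulates past gradients,
    and the parameters move along the accumulated direction."""
    buf = momentum * buf + grad   # accumulate gradient history
    theta = theta - lr * buf      # step along the accumulated direction
    return theta, buf
```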

Scaling transition from momentum stochastic gradient descent to plain stochastic gradient descent

1 code implementation • 12 Jun 2021 • Kun Zeng, Jinlan Liu, Zhixia Jiang, Dongpo Xu

Momentum stochastic gradient descent uses the accumulated gradient as the update direction for the current parameters, which yields faster training.
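One plausible reading of such a transition scheme, sketched below: a scaling factor rho_t decays from 1 toward 0, interpolating between the momentum direction and the raw gradient. The decay schedule and interpolation form here are assumptions for illustration, not the authors' exact scheme:

```python
import numpy as np

def scaled_momentum_step(theta, grad, buf, t, lr=0.01,
                         momentum=0.9, decay=1e-3):
    """Hypothetical momentum-to-SGD transition: as rho_t shrinks
    toward 0, the momentum contribution fades and the update
    approaches plain SGD (theta <- theta - lr * grad)."""
    rho_t = 1.0 / (1.0 + decay * t)     # decreasing scaling factor (assumed form)
    buf = momentum * buf + grad         # usual momentum accumulation
    # rho_t = 1 recovers momentum SGD; rho_t -> 0 recovers plain SGD
    step = rho_t * (buf - grad) + grad
    theta = theta - lr * step
    return theta, buf
```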

A decreasing scaling transition scheme from Adam to SGD

2 code implementations • 12 Jun 2021 • Kun Zeng, Jinlan Liu, Zhixia Jiang, Dongpo Xu

The adaptive gradient algorithm (AdaGrad) and its variants, such as RMSProp, Adam, and AMSGrad, have been widely used in deep learning.
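As a rough illustration of what a decreasing transition from Adam to SGD could look like: a weight w_t decays toward 0, blending an Adam-style direction with a plain gradient step. The blending form and schedule below are assumptions for illustration, not the paper's actual algorithm:

```python
import numpy as np

def adam_to_sgd_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9,
                     beta2=0.999, eps=1e-8, decay=1e-3):
    """Hypothetical Adam-to-SGD transition: training starts
    Adam-like (w_t near 1) and gradually becomes SGD-like
    (w_t near 0) as the weight decreases over iterations."""
    m = beta1 * m + (1 - beta1) * grad       # Adam first-moment buffer
    v = beta2 * v + (1 - beta2) * grad ** 2  # Adam second-moment buffer
    adam_dir = (m / (1 - beta1 ** t)) / (np.sqrt(v / (1 - beta2 ** t)) + eps)
    w_t = 1.0 / (1.0 + decay * t)            # decreasing weight (assumed form)
    theta = theta - lr * (w_t * adam_dir + (1 - w_t) * grad)
    return theta, m, v
```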
