Search Results for author: Qianqian Tong

Found 7 papers, 2 papers with code

Escaping Saddle Points with Stochastically Controlled Stochastic Gradient Methods

no code implementations • 7 Mar 2021 • Guannan Liang, Qianqian Tong, Chunjiang Zhu, Jinbo Bi

Stochastically controlled stochastic gradient (SCSG) methods have been proven to converge efficiently to first-order stationary points, which, however, can be saddle points in nonconvex optimization.
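
For context, the core SCSG loop computes a snapshot gradient on a random batch (rather than the full dataset, as SVRG does) and then runs a geometrically distributed number of variance-reduced inner steps; the saddle-escaping variants studied in this paper build on that loop. Below is a minimal sketch under those assumptions, with a hypothetical `grad(x, idx)` oracle returning the average gradient over the indexed data points; it is not the authors' implementation.

```python
import numpy as np

def scsg(grad, x0, n_data, n_outer=50, batch=64, mini=8, lr=0.05, seed=None):
    # grad(x, idx): average gradient over data points in idx (assumed oracle).
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(n_outer):
        # Snapshot gradient on a random batch, not the full dataset.
        big = rng.choice(n_data, size=batch, replace=False)
        g_snap = grad(x, big)
        x_snap = x.copy()
        # Inner-loop length is geometric with mean roughly batch / mini.
        n_inner = rng.geometric(mini / (mini + batch))
        for _ in range(n_inner):
            idx = rng.choice(n_data, size=mini, replace=False)
            # SVRG-style variance-reduced gradient estimate.
            v = grad(x, idx) - grad(x_snap, idx) + g_snap
            x = x - lr * v
    return x
```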

Federated Nonconvex Sparse Learning

no code implementations • 31 Dec 2020 • Qianqian Tong, Guannan Liang, Tan Zhu, Jinbo Bi

Nonconvex sparse learning plays an essential role in many areas, such as signal processing and deep network compression.

Edge-computing, Sparse Learning
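
As a rough illustration of the federated sparse-learning setting (not this paper's algorithm), a simple baseline is federated iterative hard thresholding: clients take local gradient steps, and the server averages their models and projects onto the top-k support. The `client_grads` oracle below is hypothetical.

```python
import numpy as np

def hard_threshold(w, k):
    # Keep the k largest-magnitude entries of w; zero out the rest.
    out = np.zeros_like(w)
    keep = np.argpartition(np.abs(w), -k)[-k:]
    out[keep] = w[keep]
    return out

def federated_iht(client_grads, w0, k, rounds=100, local_steps=5, lr=0.1):
    # client_grads: list of per-client gradient functions (assumed oracles).
    w = w0.copy()
    for _ in range(rounds):
        local_models = []
        for g in client_grads:
            w_local = w.copy()
            for _ in range(local_steps):
                w_local = w_local - lr * g(w_local)  # local gradient steps
            local_models.append(w_local)
        # Server averages the local models, then enforces k-sparsity.
        w = hard_threshold(np.mean(local_models, axis=0), k)
    return w
```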

Effective Proximal Methods for Non-convex Non-smooth Regularized Learning

no code implementations • 14 Sep 2020 • Guannan Liang, Qianqian Tong, Jiahao Ding, Miao Pan, Jinbo Bi

Sparse learning is an important tool for mining useful information and patterns from high-dimensional data.

Sparse Learning
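
Schematically, proximal methods for this problem class alternate a gradient step on the smooth loss with the proximal operator of the non-convex, non-smooth regularizer. The sketch below uses the minimax concave penalty (MCP), whose proximal operator has a known closed form; `grad_f` is an assumed gradient oracle, and this is an illustration of the technique, not the paper's method.

```python
import numpy as np

def prox_mcp(z, lam, gamma, step):
    # Proximal operator of step * MCP(lam, gamma); requires gamma > step.
    inner = (np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
             / (1.0 - step / gamma))
    # Entries beyond gamma * lam in magnitude are left unpenalized.
    return np.where(np.abs(z) <= gamma * lam, inner, z)

def prox_gradient(grad_f, w0, lam=0.1, gamma=3.0, lr=0.05, iters=200):
    # Proximal gradient loop: smooth loss f plus the non-convex MCP regularizer.
    w = w0.copy()
    for _ in range(iters):
        w = prox_mcp(w - lr * grad_f(w), lam, gamma, lr)
    return w
```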

Effective Federated Adaptive Gradient Methods with Non-IID Decentralized Data

no code implementations • 14 Sep 2020 • Qianqian Tong, Guannan Liang, Jinbo Bi

Federated learning allows a large number of edge-computing devices to collaboratively learn a global model without data sharing.

Edge-computing, Federated Learning
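
One common way to make federated learning adaptive (in the spirit of FedAdam-style server optimizers, not necessarily this paper's exact algorithm) is to treat the averaged client update as a pseudo-gradient and apply an Adam-like update on the server. A minimal sketch, assuming a hypothetical `client_updates(w)` oracle that returns per-client model deltas after a few local SGD steps:

```python
import numpy as np

def fed_adaptive(client_updates, w0, rounds=100, server_lr=0.01,
                 beta1=0.9, beta2=0.99, eps=1e-3):
    # client_updates(w): list/array of per-client deltas (assumed oracle).
    w = w0.copy()
    m = np.zeros_like(w)
    v = np.zeros_like(w)
    for _ in range(rounds):
        # The averaged client delta acts as a pseudo-gradient for the server.
        delta = np.mean(client_updates(w), axis=0)
        m = beta1 * m + (1 - beta1) * delta
        v = beta2 * v + (1 - beta2) * delta ** 2
        w = w + server_lr * m / (np.sqrt(v) + eps)
    return w
```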

Calibrating the Adaptive Learning Rate to Improve Convergence of ADAM

2 code implementations • 2 Aug 2019 • Qianqian Tong, Guannan Liang, Jinbo Bi

Theoretically, we provide a new way to analyze the convergence of AGMs and prove that the convergence rate of Adam also depends on its hyper-parameter $\epsilon$, which has been overlooked previously.
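
The role of $\epsilon$ is visible directly in the update rule: each coordinate moves by lr * m_hat / (sqrt(v_hat) + eps), so $\epsilon$ bounds how aggressive the adaptive rescaling can be, and calibrating it changes the effective step size. A standard single-step Adam sketch (textbook form, not the paper's calibrated variant):

```python
import numpy as np

def adam_step(w, g, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    # Standard Adam update at iteration t (1-indexed) for gradient g.
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g ** 2
    m_hat = m / (1 - beta1 ** t)   # bias correction
    v_hat = v / (1 - beta2 ** t)
    # Effective per-coordinate step is lr / (sqrt(v_hat) + eps):
    # a larger eps pulls the update toward plain (momentum) SGD.
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v
```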
