Non-Stationary First-Order Primal-Dual Algorithms with Fast Nonergodic Convergence Rates

13 Mar 2019  ·  Quoc Tran-Dinh, Yuzixuan Zhu

In this paper, we propose two non-stationary first-order primal-dual algorithms to solve nonsmooth composite convex optimization problems. Our first algorithm targets nonsmooth and nonstrongly convex problems; it differs from existing methods in its intermediate steps, in updating all parameters adaptively, and in being characterized by three different convergence criteria. We prove that our method achieves an $\mathcal{O}(\frac{1}{k})$ convergence rate on the primal-dual gap and on the primal and dual objective residuals, where $k$ is the iteration counter. Our rate holds on the last-iterate sequence of the primal problem and on the averaged sequence of the dual problem, a guarantee we call a semi-nonergodic rate. By adapting the parameters, we can obtain up to an $o(\frac{1}{k})$ convergence rate on the primal objective residuals in a nonergodic sense. To the best of our knowledge, this is the first primal-dual algorithm achieving such fast nonergodic convergence rate guarantees. If the problem is strongly convex, we obtain a new variant with an $\mathcal{O}(\frac{1}{k^2})$ convergence rate on the same three types of guarantee. Using adaptive update rules for the parameters, we can also achieve up to an $o(\frac{1}{k^2})$ nonergodic rate on the primal objective residuals. We then extend our algorithms to problems whose objective is the sum of three functions, and specialize them to solve general constrained convex optimization problems. As by-products, we obtain the same best-known convergence rates on the last-iterate sequence for both the nonstrongly convex and semi-strongly convex cases. To improve the performance of our basic algorithms, we also propose simple restarting variants. We verify our theoretical developments on two well-known numerical examples and compare our methods with the state of the art.
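To make the problem setting concrete, the sketch below shows a standard, fixed-parameter first-order primal-dual iteration (in the style of Chambolle-Pock) applied to a composite problem of the form $\min_x g(x) + f(Kx)$. It is only an illustrative baseline, not the non-stationary, adaptively parameterized algorithm proposed in the paper; the problem instance (a small Lasso), step sizes, and variable names are assumptions chosen for the example, and the averaged (ergodic) and last (nonergodic) iterates are tracked only to illustrate the two kinds of sequences on which the stated rates are measured.

```python
import numpy as np

# Illustrative sketch (NOT the paper's method): a standard fixed-step
# first-order primal-dual iteration (Chambolle-Pock) for the composite
# problem  min_x  g(x) + f(Kx),  instantiated here as the Lasso
#   min_x  lam*||x||_1 + 0.5*||Kx - b||^2.
# Problem data, step sizes, and names below are assumptions for the example.

rng = np.random.default_rng(0)
m, n, lam = 60, 100, 0.1
K = rng.standard_normal((m, n))
b = rng.standard_normal(m)

L = np.linalg.norm(K, 2)          # spectral norm ||K||
tau = sigma = 1.0 / L             # fixed step sizes with tau*sigma*||K||^2 <= 1
theta = 1.0                       # extrapolation parameter

x = np.zeros(n)                   # primal iterate
y = np.zeros(m)                   # dual iterate
x_bar = x.copy()
x_avg, y_avg = x.copy(), y.copy() # ergodic (averaged) sequences

def prox_g(v, t):
    # prox of t*lam*||.||_1: soft-thresholding
    return np.sign(v) * np.maximum(np.abs(v) - t * lam, 0.0)

def prox_f_conj(v, s):
    # prox of s*f* with f(z) = 0.5*||z - b||^2
    return (v - s * b) / (1.0 + s)

for k in range(1, 501):
    y = prox_f_conj(y + sigma * (K @ x_bar), sigma)   # dual step
    x_new = prox_g(x - tau * (K.T @ y), tau)          # primal step
    x_bar = x_new + theta * (x_new - x)               # extrapolation
    x = x_new
    # running averages: ergodic sequences, as opposed to the last iterate
    x_avg += (x - x_avg) / k
    y_avg += (y - y_avg) / k

obj = lam * np.abs(x).sum() + 0.5 * np.linalg.norm(K @ x - b) ** 2
print(f"last-iterate objective: {obj:.4f}")
```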


Categories

Optimization and Control (MSC 90C25, 90-08)