An Asynchronous Distributed Proximal Gradient Method for Composite Convex Optimization

30 Sep 2014  ·  Necdet Serhat Aybat, Garud Iyengar, Zi Wang

We propose a distributed first-order augmented Lagrangian (DFAL) algorithm to minimize the sum of composite convex functions, where each term in the sum is a private cost function belonging to a node, and only nodes connected by an edge can communicate directly. This optimization model abstracts a number of applications in distributed sensing and machine learning. We show that any limit point of the DFAL iterates is optimal; and for any $\epsilon>0$, an $\epsilon$-optimal and $\epsilon$-feasible solution can be computed within $\mathcal{O}(\log(\epsilon^{-1}))$ DFAL iterations, which require a total of $\mathcal{O}(\frac{\psi_{\max}^{1.5}}{d_{\min}} \epsilon^{-1})$ proximal gradient computations and communications per node, where $\psi_{\max}$ denotes the largest eigenvalue of the graph Laplacian and $d_{\min}$ the minimum degree of the graph. We also propose an asynchronous version of DFAL that incorporates randomized block coordinate descent methods, and we demonstrate the efficiency of DFAL on large-scale sparse-group LASSO problems.
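The abstract gives no pseudocode, but for the sparse-group LASSO experiments it mentions, the proximal gradient computations it counts reduce to prox steps on the penalty $\lambda_1\|x\|_1 + \lambda_2\sum_g \|x_g\|_2$. Below is a minimal NumPy sketch of one such step; the function names, step size `t`, and group structure are illustrative assumptions rather than the paper's notation, and the l1-then-groupwise-shrinkage composition used here is a standard property of this penalty, not a description of DFAL itself.

```python
import numpy as np

def soft_threshold(v, tau):
    # Elementwise soft-thresholding: prox of tau * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def group_shrink(v, tau):
    # Block soft-thresholding: prox of tau * ||.||_2 on one group.
    nrm = np.linalg.norm(v)
    return np.zeros_like(v) if nrm <= tau else (1.0 - tau / nrm) * v

def prox_sparse_group(x, groups, t, lam1, lam2):
    # Prox of t * (lam1 * ||x||_1 + lam2 * sum_g ||x_g||_2).
    # For this penalty the prox decomposes per group: apply the l1
    # shrinkage first, then the groupwise l2 shrinkage.
    out = np.empty_like(x)
    for g in groups:
        out[g] = group_shrink(soft_threshold(x[g], t * lam1), t * lam2)
    return out

def prox_gradient_step(x, grad_f, t, groups, lam1, lam2):
    # One proximal gradient step: x+ = prox_{t * penalty}(x - t * grad f(x)).
    return prox_sparse_group(x - t * grad_f(x), groups, t, lam1, lam2)

# Toy usage (hypothetical data): least-squares loss with two groups of two.
A = np.random.randn(8, 4)
b = np.random.randn(8)
grad = lambda x: A.T @ (A @ x - b)
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of grad
x = np.zeros(4)
x = prox_gradient_step(x, grad, t=1.0 / L, groups=[[0, 1], [2, 3]],
                       lam1=0.1, lam2=0.1)
```

The step size $t = 1/L$, with $L$ the Lipschitz constant of the smooth gradient, is the usual choice that guarantees descent for a proximal gradient step; the paper's DFAL and its asynchronous variant wrap such steps inside an augmented Lagrangian scheme over the communication graph.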


Categories

Optimization and Control