Primal-dual methods for large-scale and distributed convex optimization and data analytics

18 Dec 2019 · Dusan Jakovetic, Dragana Bajovic, Joao Xavier, Jose M. F. Moura

The augmented Lagrangian method (ALM) is a classical optimization tool that solves a given "difficult" (constrained) problem by solving a sequence of "easier" (often unconstrained) sub-problems with respect to the original (primal) variable, wherein constraint satisfaction is controlled via the so-called dual variables. ALM is highly flexible with respect to how the primal sub-problems are solved, giving rise to a plethora of different primal-dual methods. The powerful ALM mechanism has recently proved very successful in various large-scale and distributed applications. In addition, several significant advances have appeared, primarily on precise complexity results with respect to computational and communication costs in the presence of inexact updates, and on the design and analysis of novel optimal methods for distributed consensus optimization. We provide a tutorial-style introduction to ALM and its variants for solving convex optimization problems in large-scale and distributed settings. We describe control-theoretic tools for the algorithms' analysis and design, survey recent results, and provide novel insights in the context of two emerging applications: federated learning and distributed energy trading.
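To make the ALM mechanism concrete, below is a minimal sketch of the classical primal-dual iteration for an equality-constrained convex problem, minimize f(x) subject to Ax = b. The problem data (A, b, c) and the choice f(x) = 0.5·||x − c||² are illustrative assumptions, picked so that each primal sub-problem has a closed-form solution; the paper's methods cover far more general primal updates.

```python
import numpy as np

def alm(A, b, c, rho=1.0, iters=50):
    """Augmented Lagrangian method for: minimize 0.5*||x - c||^2  s.t.  Ax = b.

    The quadratic objective is an illustrative assumption so the primal
    sub-problem reduces to a linear system; ALM itself only requires that
    the sub-problem be (approximately) solvable.
    """
    m, n = A.shape
    x = np.zeros(n)
    lam = np.zeros(m)                      # dual variables (multipliers)
    H = np.eye(n) + rho * A.T @ A          # Hessian of the augmented Lagrangian in x
    for _ in range(iters):
        # Primal step: x = argmin_x 0.5||x-c||^2 + lam^T(Ax-b) + (rho/2)||Ax-b||^2
        rhs = c - A.T @ lam + rho * A.T @ b
        x = np.linalg.solve(H, rhs)
        # Dual step: gradient ascent on the dual function with step size rho
        lam = lam + rho * (A @ x - b)
    return x, lam

# Example: project c onto the affine set {x : x1 + x2 = 1}
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
c = np.array([2.0, 0.0])
x, lam = alm(A, b, c)
print("x =", x, "constraint residual =", A @ x - b)
```

Replacing the exact primal minimization with an inexact or distributed solver is precisely the flexibility the paper exploits to obtain the various primal-dual methods it surveys.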


Categories


Optimization and Control · Information Theory