Stochastic Gradient-Push for Strongly Convex Functions on Time-Varying Directed Graphs

9 Jun 2014 · Angelia Nedic, Alex Olshevsky

We investigate the convergence rate of the recently proposed subgradient-push method for distributed optimization over time-varying directed graphs. The subgradient-push method can be implemented in a distributed way without requiring knowledge of either the number of agents or the graph sequence; each node is only required to know its out-degree at each time. Our main result is a convergence rate of $O \left((\ln t)/t \right)$ for strongly convex functions with Lipschitz gradients even if only stochastic gradient samples are available; this is asymptotically faster than the $O \left((\ln t)/\sqrt{t} \right)$ rate previously known for (general) convex functions.
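To make the update concrete, here is a minimal simulation sketch of the (stochastic) gradient-push iteration described above. The specifics are illustrative assumptions, not from the paper: scalar decision variables, quadratic local objectives $f_i(x) = \tfrac{1}{2}(x - a_i)^2$ whose sum is minimized at the average of the targets $a_i$, two alternating ring digraphs as the time-varying graph sequence, Gaussian gradient noise, and a $1/t$ step size matching the strongly convex setting.

```python
import numpy as np

# Hedged sketch of stochastic gradient-push; targets, graphs, noise level,
# and step size are illustrative assumptions, not values from the paper.
rng = np.random.default_rng(0)
n = 4
targets = np.array([1.0, 3.0, -2.0, 6.0])  # sum of f_i is minimized at targets.mean()

# Two directed graphs that alternate over time; entry i lists the
# out-neighbors of node i (self-loops included, as the method requires).
graphs = [
    {0: [0, 1], 1: [1, 2], 2: [2, 3], 3: [3, 0]},
    {0: [0, 3], 1: [1, 0], 2: [2, 1], 3: [3, 2]},
]

x = np.zeros(n)  # x_i(t)
y = np.ones(n)   # push-sum weights, y_i(0) = 1

for t in range(1, 20001):
    out = graphs[t % 2]
    w_new = np.zeros(n)
    y_new = np.zeros(n)
    # Each node j splits (x_j, y_j) equally among its out-neighbors,
    # using only its own out-degree d_j(t) -- the knowledge assumption
    # highlighted in the abstract.
    for j, neighbors in out.items():
        share_x = x[j] / len(neighbors)
        share_y = y[j] / len(neighbors)
        for i in neighbors:
            w_new[i] += share_x
            y_new[i] += share_y
    z = w_new / y_new                     # de-biased iterate z_i(t+1)
    noise = rng.normal(0.0, 0.1, size=n)  # stochastic gradient samples
    grad = (z - targets) + noise          # gradient of f_i at z_i, plus noise
    alpha = 1.0 / t                       # O(1/t) step size for strong convexity
    x = w_new - alpha * grad
    y = y_new

print("consensus estimates:", np.round(z, 3))
print("true minimizer:     ", targets.mean())
```

Note that no node ever uses the number of agents, the full graph sequence, or its in-neighbors' degrees: the only network information node $j$ touches is its own out-degree at time $t$, which is what allows the method to run over time-varying directed graphs.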


Categories

Optimization and Control · Systems and Control