Distributed Newton Optimization with Maximized Convergence Rate

17 Feb 2021 · Damián Marelli, Yong Xu, Minyue Fu, Zenghong Huang

The distributed optimization problem is set over a collection of nodes interconnected through a communication network. The goal is to find the minimizer of a global objective function formed as the sum of partial functions, each known locally at one node. A number of methods are available for this problem, each with different advantages. The aim of this work is to achieve the maximum possible convergence rate. As a first step toward this end, we propose a new method which we show converges faster than the available alternatives. As with most distributed optimization methods, its convergence rate depends on a step-size parameter. As a second step, we complement the proposed method with a fully distributed procedure for estimating the optimal step size, i.e., the one that maximizes the convergence rate. We provide theoretical guarantees for the convergence of the resulting method in a neighborhood of the solution. For the case in which the global objective function has a single local minimum, we also provide a different step-size selection criterion, together with theoretical guarantees for its convergence. We present numerical experiments showing that, when using the same step size, our method converges significantly faster than its rivals. The experiments also show that the distributed step-size estimation method achieves an asymptotic convergence rate very close to the theoretical maximum.
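
The abstract describes the setting only at a high level. As a rough illustration of the problem it addresses, namely nodes jointly minimizing a sum of locally known functions via Newton-type steps whose speed is governed by a step-size parameter, the following Python sketch may help. It is an assumption-laden toy, not the authors' algorithm: the quadratic local objectives, the ring topology, the consensus-based averaging, and the names `alpha` and `consensus_rounds` are all hypothetical.

```python
# Illustrative sketch only (not the paper's method): every node approximates
# the network-wide average gradient and Hessian by consensus averaging with
# its neighbors, then takes a Newton step scaled by the step size `alpha`.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim = 5, 3

# Local data: node i privately holds f_i(x) = 0.5 x'A_i x - b_i'x.
A, b = [], []
for _ in range(n_nodes):
    M = rng.standard_normal((dim, dim))
    A.append(M @ M.T + dim * np.eye(dim))    # symmetric positive definite
    b.append(rng.standard_normal(dim))

# Doubly stochastic averaging matrix for a ring network (an assumption).
W = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    W[i, i] = 0.5
    W[i, (i - 1) % n_nodes] = W[i, (i + 1) % n_nodes] = 0.25

def consensus(values, rounds):
    """Approximate the network average by repeated neighbor averaging."""
    v = np.array(values)
    for _ in range(rounds):
        v = np.einsum('ij,j...->i...', W, v)
    return v

x = np.zeros((n_nodes, dim))        # each node's estimate of the minimizer
alpha = 1.0                         # step size governing the convergence rate
consensus_rounds = 20

# Quadratic objectives have constant Hessians, so average them once.
H_avg = consensus(A, consensus_rounds)             # ~ (1/n) sum_i A_i

for _ in range(30):
    grads = [A[i] @ x[i] - b[i] for i in range(n_nodes)]
    g_avg = consensus(grads, consensus_rounds)     # ~ (1/n) sum_i grad f_i
    # Damped Newton update at every node from its consensus estimates.
    x = np.stack([x[i] - alpha * np.linalg.solve(H_avg[i], g_avg[i])
                  for i in range(n_nodes)])

x_star = np.linalg.solve(sum(A), sum(b))           # centralized reference
print("max node error:", np.max(np.linalg.norm(x - x_star, axis=1)))
```

In this toy, `alpha = 1.0` recovers an (approximate) full Newton step; the paper's contribution, by contrast, is a method with provably faster convergence plus a fully distributed estimator of the step size that maximizes the convergence rate.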

Categories

Optimization and Control