# Optimal algorithms for smooth and strongly convex distributed optimization in networks

Kevin Scaman, Francis Bach, Sébastien Bubeck, Yin Tat Lee, Laurent Massoulié

In this paper, we determine the optimal convergence rates for strongly convex and smooth distributed optimization in two settings: centralized and decentralized communications over a network. For centralized (i.e. master/slave) algorithms, we show that distributing Nesterov's accelerated gradient descent is optimal and achieves a precision $\varepsilon > 0$ in time $O(\sqrt{\kappa_g}(1+\Delta\tau)\ln(1/\varepsilon))$, where $\kappa_g$ is the condition number of the (global) function to optimize, $\Delta$ is the diameter of the network, and $\tau$ (resp. $1$) is the time needed to communicate values between two neighbors (resp. perform local computations). For decentralized algorithms based on gossip, we provide the first optimal algorithm, called the multi-step dual accelerated (MSDA) method, that achieves a precision $\varepsilon > 0$ in time $O(\sqrt{\kappa_l}(1+\frac{\tau}{\sqrt{\gamma}})\ln(1/\varepsilon))$, where $\kappa_l$ is the condition number of the local functions and $\gamma$ is the (normalized) eigengap of the gossip matrix used for communication between nodes. We then verify the efficiency of MSDA against state-of-the-art methods for two problems: least-squares regression and logistic classification.
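To make the centralized result concrete, below is a minimal sketch (not the paper's pseudocode) of distributed Nesterov accelerated gradient descent in the master/slave setting. It assumes each worker holds a local least-squares objective; the helper `local_gradient`, the synthetic worker data, and the estimation of $L$ and $\mu$ from the averaged Hessian are illustrative assumptions, not part of the paper.

```python
import numpy as np

def local_gradient(A_i, b_i, x):
    """Gradient of the local least-squares objective 0.5 * ||A_i x - b_i||^2."""
    return A_i.T @ (A_i @ x - b_i)

def distributed_nesterov(workers, x0, L, mu, n_iters=200):
    """Nesterov's accelerated gradient on f(x) = (1/n) * sum_i f_i(x).

    L, mu: smoothness and strong-convexity constants of the global f,
    so kappa_g = L / mu. Each iteration costs one broadcast of y to the
    workers and one gradient aggregation at the master (time Delta * tau
    per round trip in the paper's model).
    """
    kappa_g = L / mu
    momentum = (np.sqrt(kappa_g) - 1) / (np.sqrt(kappa_g) + 1)
    x, y = x0.copy(), x0.copy()
    for _ in range(n_iters):
        # Master -> workers: broadcast y; workers -> master: local gradients.
        grad = np.mean([local_gradient(A, b, y) for A, b in workers], axis=0)
        x_next = y - grad / L                  # gradient step with step size 1/L
        y = x_next + momentum * (x_next - x)   # Nesterov extrapolation
        x = x_next
    return x

# Toy usage: 4 workers, global constants estimated from the averaged Hessian
# (an assumption made only for this demo).
rng = np.random.default_rng(0)
workers = [(rng.standard_normal((20, 5)), rng.standard_normal(20)) for _ in range(4)]
H = np.mean([A.T @ A for A, _ in workers], axis=0)
eigs = np.linalg.eigvalsh(H)
x_opt = distributed_nesterov(workers, np.zeros(5), L=eigs[-1], mu=eigs[0])
```

Because the momentum parameter uses the global condition number $\kappa_g$, the iteration count to reach precision $\varepsilon$ scales as $\sqrt{\kappa_g}\ln(1/\varepsilon)$, which multiplied by the per-round communication cost $1+\Delta\tau$ gives the centralized rate stated above.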
