Distributed Zeroth-Order Optimization: Convergence Rates That Match Centralized Counterpart

29 Sep 2021  ·  Deming Yuan, Lei Wang, Alexandre Proutiere, Guodong Shi ·

Zeroth-order optimization has become increasingly important in complex optimization and machine learning when cost functions cannot be described in closed analytical form. The key idea of zeroth-order optimization is that a learner builds gradient estimates from queries to the cost function, so that traditional gradient descent algorithms can be executed with the true gradients replaced by these estimates. For the optimization of large-scale multi-agent systems with decentralized data and costs, zeroth-order optimization can likewise be used to develop scalable, distributed zeroth-order algorithms. It is important to understand how performance changes in the transition from centralized to distributed zeroth-order algorithms in terms of convergence rates, especially for multi-agent systems with time-varying communication networks. In this paper, we establish a series of convergence rates for distributed zeroth-order subgradient algorithms under both one-point and two-point zeroth-order oracles. Apart from the additional node-to-node communication cost of distributed algorithms, the established convergence rates are shown to match their centralized counterparts. We also propose a multi-stage distributed zeroth-order algorithm that better utilizes the learning rates, reduces the computational complexity, and attains even faster convergence rates for compact decision sets.
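
The Python sketch below illustrates the two ingredients the abstract refers to: a two-point zeroth-order gradient estimate built purely from function-value queries, and a distributed subgradient iteration in which agents mix their iterates through a time-varying doubly stochastic matrix before taking a local zeroth-order step. This is a minimal illustration under stated assumptions, not the authors' algorithm: the function names, the quadratic toy costs, the constant step size `eta` and smoothing radius `delta`, and the alternating-ring mixing matrices in `W_seq` are all illustrative choices (the paper analyzes carefully scheduled step-size sequences).

```python
import numpy as np

def two_point_grad_estimate(f, x, delta, rng):
    """Two-point zeroth-order gradient estimate of f at x.

    Queries f at x + delta*u and x - delta*u along a random unit
    direction u and returns a (biased) gradient estimate.
    """
    d = x.size
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)
    return d * (f(x + delta * u) - f(x - delta * u)) / (2.0 * delta) * u

def distributed_zo_subgradient(costs, W_seq, x0, steps, eta, delta, seed=0):
    """Distributed zeroth-order subgradient iteration (illustrative sketch).

    costs : list of per-agent cost functions f_i (only values are queried)
    W_seq : callable t -> doubly stochastic mixing matrix of the
            time-varying communication graph at round t
    x0    : (n_agents, dim) array of initial decisions
    """
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for t in range(steps):
        W = W_seq(t)
        x = W @ x  # consensus step: average with current neighbors
        for i, f in enumerate(costs):
            g = two_point_grad_estimate(f, x[i], delta, rng)
            x[i] = x[i] - eta * g  # local zeroth-order descent step
    return x.mean(axis=0)

# Toy usage: agents jointly minimize sum_i ||x - a_i||^2 over a ring
# whose edges alternate each round (illustrative, not from the paper).
if __name__ == "__main__":
    n, dim = 4, 3
    anchors = np.arange(n * dim, dtype=float).reshape(n, dim)
    costs = [lambda x, a=a: float(np.sum((x - a) ** 2)) for a in anchors]

    def W_seq(t):
        W = np.eye(n) * 0.5
        j = (np.arange(n) + (1 if t % 2 == 0 else -1)) % n
        W[np.arange(n), j] = 0.5
        return W  # doubly stochastic by construction

    x_hat = distributed_zo_subgradient(costs, W_seq,
                                       np.zeros((n, dim)), 3000, 0.01, 1e-3)
    print(x_hat)               # hovers near the minimizer ...
    print(anchors.mean(axis=0))  # ... which is the average anchor
```

With a constant step size the iterates settle into a noise floor around the minimizer; the one-point oracle variant would replace the symmetric difference with a single noisy query, at the cost of higher estimator variance and the slower rates discussed in the paper.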

