Distributed Optimization
77 papers with code • 0 benchmarks • 0 datasets
The goal of Distributed Optimization is to optimize an objective defined over millions or billions of data points that are distributed across many machines, by exploiting the combined computational power of those machines.
Source: Analysis of Distributed Stochastic Dual Coordinate Ascent
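To make the setting concrete, here is a minimal sketch of the simplest instance of this idea: gradient descent in which the data is sharded across simulated workers that each compute a local gradient, and a server averages them. The least-squares objective and all names are illustrative, not drawn from any specific paper on this page.

```python
import numpy as np

# Minimal simulation of distributed gradient descent for least squares:
# the dataset is sharded across "workers", each computes a local gradient,
# and a server averages them. All names here are illustrative.

rng = np.random.default_rng(0)
n_workers, n_per_worker, dim = 4, 250, 10
X = rng.normal(size=(n_workers * n_per_worker, dim))
w_true = rng.normal(size=dim)
y = X @ w_true + 0.01 * rng.normal(size=len(X))

# Shard the data by worker.
X_shards = np.split(X, n_workers)
y_shards = np.split(y, n_workers)

w = np.zeros(dim)
lr = 0.1
for step in range(200):
    # Each worker computes the gradient of its local least-squares loss.
    grads = [Xs.T @ (Xs @ w - ys) / len(ys)
             for Xs, ys in zip(X_shards, y_shards)]
    # The server averages the local gradients and takes a step.
    w -= lr * np.mean(grads, axis=0)

print("parameter error:", np.linalg.norm(w - w_true))
```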
Benchmarks
These leaderboards are used to track progress in Distributed Optimization.
Libraries
Use these libraries to find Distributed Optimization models and implementations.
Latest papers with no code
Distributed Maximum Consensus over Noisy Links
We introduce a distributed algorithm, termed noise-robust distributed maximum consensus (RD-MC), for estimating the maximum value within a multi-agent network in the presence of noisy communication links.
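The basic max-consensus recursion underlying this line of work is easy to sketch: each agent repeatedly replaces its estimate with the maximum over its neighborhood. The snippet below shows this noiseless base case on a ring graph; the robustness to noisy links that RD-MC adds is not modeled here.

```python
import numpy as np

# Illustrative max-consensus iteration on a ring of agents with noiseless
# links: each agent repeatedly replaces its estimate with the maximum over
# its neighborhood {left neighbor, self, right neighbor}. The paper's RD-MC
# algorithm additionally handles link noise; this shows only the base case.

rng = np.random.default_rng(1)
n = 8
x = rng.normal(size=n)            # each agent's initial value
true_max = x.max()

for _ in range(n):                # enough rounds to cover the ring diameter
    neighbors = np.stack([np.roll(x, 1), x, np.roll(x, -1)])
    x = neighbors.max(axis=0)     # local max over the neighborhood

print(np.allclose(x, true_max))  # True: every agent holds the network max
```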
Network-Aware Value Stacking of Community Battery via Asynchronous Distributed Optimization
Community battery systems have been widely deployed to provide services to the grid.
Quantization Avoids Saddle Points in Distributed Optimization
More specifically, we propose a stochastic quantization scheme and prove that it can effectively escape saddle points and ensure convergence to a second-order stationary point in distributed nonconvex optimization.
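As a point of reference, the standard way quantization injects zero-mean randomness is stochastic rounding. The sketch below is a generic unbiased stochastic quantizer under that assumption; it is not the paper's specific scheme or its saddle-point analysis.

```python
import numpy as np

# A generic unbiased stochastic quantizer: round each value to a grid of
# step `delta`, choosing the upper grid point with probability equal to the
# fractional part, so that E[quantized] = v. This is only a standard
# illustration of how quantization injects zero-mean randomness.

rng = np.random.default_rng(2)

def stochastic_quantize(v, delta=0.1):
    scaled = v / delta
    low = np.floor(scaled)
    up = rng.random(v.shape) < (scaled - low)  # round up w.p. frac part
    return delta * (low + up)

v = np.array([0.234, -1.517, 0.05])
samples = np.stack([stochastic_quantize(v) for _ in range(20000)])
print(samples.mean(axis=0))  # ~ v, confirming unbiasedness
```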
Streamlining in the Riemannian Realm: Efficient Riemannian Optimization with Loopless Variance Reduction
These methods replace the outer loop with probabilistic gradient computation triggered by a coin flip in each iteration, ensuring simpler proofs, efficient hyperparameter selection, and sharp convergence guarantees.
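The loopless mechanism can be illustrated in a few lines: a coin flip with probability p per iteration refreshes the full-gradient snapshot, replacing the outer loop of classical variance reduction. The sketch below is a Euclidean L-SVRG-style loop on least squares; the paper's Riemannian setting is not attempted here.

```python
import numpy as np

# Sketch of a loopless variance-reduced step (in the style of L-SVRG) on a
# Euclidean least-squares problem: a coin flip with probability p refreshes
# the full-gradient snapshot, replacing the classical outer loop.

rng = np.random.default_rng(3)
n, dim = 500, 10
X = rng.normal(size=(n, dim))
w_true = rng.normal(size=dim)
y = X @ w_true

def grad_i(w, i):                   # gradient of the i-th component loss
    return X[i] * (X[i] @ w - y[i])

def full_grad(w):
    return X.T @ (X @ w - y) / n

w = np.zeros(dim)
w_snap, g_snap = w.copy(), full_grad(w)
lr, p = 0.005, 1.0 / n
for _ in range(30000):
    i = rng.integers(n)
    # Variance-reduced stochastic gradient built from the snapshot.
    g = grad_i(w, i) - grad_i(w_snap, i) + g_snap
    w -= lr * g
    if rng.random() < p:            # coin flip replaces the outer loop
        w_snap, g_snap = w.copy(), full_grad(w)

print("parameter error:", np.linalg.norm(w - w_true))
```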
LoCoDL: Communication-Efficient Distributed Learning with Local Training and Compression
In distributed optimization and learning, and even more in the modern framework of federated learning, communication, which is slow and costly, is critical.
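A generic way to combine the two ingredients named in the title, local training and compression, is to let each client take several local gradient steps and then transmit a sparsified model update. The sketch below assumes an illustrative top-k compressor and plain averaging; it is not the paper's LoCoDL algorithm.

```python
import numpy as np

# Generic sketch of local training plus compressed communication: each
# client runs a few local gradient steps, then sends a top-k-sparsified
# model update to the server, which averages the updates. This illustrates
# the two ingredients in the title, not the paper's LoCoDL algorithm.

rng = np.random.default_rng(4)
n_clients, n_per, dim, k = 5, 100, 20, 5
Xs = [rng.normal(size=(n_per, dim)) for _ in range(n_clients)]
w_true = rng.normal(size=dim)
ys = [X @ w_true for X in Xs]

def top_k(v, k):
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]       # keep the k largest entries
    out[idx] = v[idx]
    return out

w = np.zeros(dim)
for rnd in range(300):
    updates = []
    for X, y in zip(Xs, ys):
        w_loc = w.copy()
        for _ in range(5):                 # local training steps
            w_loc -= 0.05 * X.T @ (X @ w_loc - y) / n_per
        updates.append(top_k(w_loc - w, k))  # compressed update
    w += np.mean(updates, axis=0)          # server averages the updates

print("parameter error:", np.linalg.norm(w - w_true))
```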
MUSIC: Accelerated Convergence for Distributed Optimization With Inexact and Exact Methods
Gradient-type distributed optimization methods have become some of the most important tools for solving minimization learning tasks over networked agent systems.
Privacy-Preserving Distributed Optimization and Learning
We first discuss cryptography, differential privacy, and other techniques that can be used for privacy preservation and indicate their pros and cons for privacy protection in distributed optimization and learning.
Distributed Momentum Methods Under Biased Gradient Estimations
In this work, we establish non-asymptotic convergence bounds on distributed momentum methods under biased gradient estimation on both general non-convex and $\mu$-PL non-convex problems.
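The object of study can be sketched as a heavy-ball iteration in which each node's gradient estimate carries both zero-mean noise and a systematic bias. The quadratic objective, the bias model, and all constants below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Sketch of a distributed heavy-ball (momentum) step where each node's
# gradient estimate carries a fixed bias plus zero-mean noise, i.e. biased
# gradient estimation. The quadratic objective and constants are
# illustrative; the bias leaves a small residual error at convergence.

rng = np.random.default_rng(5)
dim, n_nodes = 10, 4
A = np.diag(np.linspace(1.0, 5.0, dim))    # strongly convex quadratic
w, m = rng.normal(size=dim), np.zeros(dim)
lr, beta, bias = 0.05, 0.9, 0.01

for _ in range(500):
    # Each node returns gradient + fixed bias + noise.
    grads = [A @ w + bias + 0.1 * rng.normal(size=dim)
             for _ in range(n_nodes)]
    g = np.mean(grads, axis=0)             # server averages node gradients
    m = beta * m + g                       # momentum buffer
    w -= lr * m

print("distance to optimum:", np.linalg.norm(w))
```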
TernaryVote: Differentially Private, Communication Efficient, and Byzantine Resilient Distributed Optimization on Heterogeneous Data
In this paper, we propose TernaryVote, which combines a ternary compressor and the majority vote mechanism to realize differential privacy, gradient compression, and Byzantine resilience simultaneously.
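A schematic of the two mechanisms named in the title follows: each node maps its gradient to {-1, 0, +1} with a stochastic ternary compressor, and the server aggregates by coordinate-wise majority vote, i.e. the sign of the summed votes. The paper's exact construction, its differential-privacy noise, and its Byzantine analysis are not reproduced here.

```python
import numpy as np

# Schematic of a ternary compressor plus majority-vote aggregation: each
# node stochastically maps its gradient to values in {-1, 0, +1}, keeping
# a coordinate with probability proportional to its magnitude, and the
# server takes a coordinate-wise majority vote (sign of the sum). This is
# an illustration, not the paper's exact construction.

rng = np.random.default_rng(6)

def ternarize(g):
    scale = np.abs(g).max() + 1e-12
    p = np.abs(g) / scale                  # keep-probability per coordinate
    keep = rng.random(g.shape) < p
    return np.sign(g) * keep               # values in {-1, 0, +1}

dim, n_nodes = 6, 9
base = np.array([0.8, -0.5, 0.1, -0.9, 0.3, -0.2])
g_nodes = [base + 0.2 * rng.normal(size=dim) for _ in range(n_nodes)]

votes = np.sum([ternarize(g) for g in g_nodes], axis=0)
step_dir = np.sign(votes)                  # majority vote per coordinate
print(step_dir)
```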
A Survey of Resilient Coordination for Cyber-Physical Systems Against Malicious Attacks
Furthermore, the survey discusses a variety of other resilient coordination problems.