Distributed Optimization

77 papers with code • 0 benchmarks • 0 datasets

The goal of Distributed Optimization is to optimize an objective defined over millions or billions of data points distributed across many machines, by exploiting the combined computational power of those machines.

Source: Analysis of Distributed Stochastic Dual Coordinate Ascent


Latest papers with no code

Distributed Maximum Consensus over Noisy Links

no code yet • 27 Mar 2024

We introduce a distributed algorithm, termed robust distributed maximum consensus (RD-MC), for estimating the maximum value within a multi-agent network in the presence of noisy communication links.
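The iteration underlying maximum consensus is simple: each node repeatedly replaces its value with the maximum over its neighborhood. A minimal noise-free baseline sketch (RD-MC adds robustness to noisy links, which this sketch does not model; the graph and values are illustrative):

```python
def max_consensus(graph, values, num_iters=10):
    """Plain max-consensus: each node repeatedly replaces its value with
    the maximum over its closed neighborhood.  With noise-free links this
    reaches the global maximum in at most diameter(graph) iterations.
    (RD-MC adds robustness to noisy links, which this baseline omits.)"""
    x = dict(values)
    for _ in range(num_iters):
        # Synchronous update: every node reads its neighbors' previous values.
        x = {node: max([x[node]] + [x[nbr] for nbr in nbrs])
             for node, nbrs in graph.items()}
    return x

# Illustrative 4-node ring: every node ends up holding the global maximum.
graph = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
result = max_consensus(graph, {0: 1.0, 1: 5.0, 2: 3.0, 3: 2.0})
```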

Network-Aware Value Stacking of Community Battery via Asynchronous Distributed Optimization

no code yet • 20 Mar 2024

Community battery systems have been widely deployed to provide services to the grid.

Quantization Avoids Saddle Points in Distributed Optimization

no code yet • 15 Mar 2024

More specifically, we propose a stochastic quantization scheme and prove that it can effectively escape saddle points and ensure convergence to a second-order stationary point in distributed nonconvex optimization.
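To illustrate the key ingredient, a standard unbiased stochastic rounding scheme quantizes a value to one of the two nearest grid points with probabilities chosen so the expectation is exact; the injected randomness is the kind of perturbation that can push iterates off saddle points. This is a generic sketch, not necessarily the paper's exact scheme, and `step` is an illustrative parameter:

```python
import math
import random

def stochastic_quantize(x, step=0.1):
    """Unbiased stochastic rounding: round x to one of the two nearest
    multiples of `step`, with probabilities chosen so that E[q(x)] = x."""
    lower = math.floor(x / step) * step
    p = (x - lower) / step          # probability of rounding up
    return lower + step if random.random() < p else lower
```

For example, `stochastic_quantize(0.37)` returns 0.4 with probability 0.7 and 0.3 with probability 0.3, so the average over many calls recovers 0.37.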

Streamlining in the Riemannian Realm: Efficient Riemannian Optimization with Loopless Variance Reduction

no code yet • 11 Mar 2024

These methods replace the outer loop with probabilistic gradient computation triggered by a coin flip in each iteration, ensuring simpler proofs, efficient hyperparameter selection, and sharp convergence guarantees.
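The coin-flip construction follows the loopless SVRG pattern: rather than recomputing the full gradient at the end of a fixed-length inner loop, the reference point is refreshed with probability p at every iteration. A minimal Euclidean (non-Riemannian) sketch on a toy least-squares problem; all names and parameter values are illustrative:

```python
import random

def lsvrg(grad_i, n, x0, lr=0.1, p=0.1, steps=1000):
    """Loopless SVRG sketch (Euclidean, not Riemannian): the outer loop
    is replaced by a coin flip that refreshes the reference point w and
    its full gradient with probability p at each iteration."""
    x = w = x0
    full_grad = sum(grad_i(w, i) for i in range(n)) / n
    for _ in range(steps):
        i = random.randrange(n)
        g = grad_i(x, i) - grad_i(w, i) + full_grad  # variance-reduced estimate
        x = x - lr * g
        if random.random() < p:      # coin flip: refresh the reference point
            w = x
            full_grad = sum(grad_i(w, i) for i in range(n)) / n
    return x

# Toy problem: minimize (1/n) * sum_i (x - a_i)^2; the optimum is mean(a).
random.seed(0)
a = [1.0, 2.0, 3.0, 4.0]
x_star = lsvrg(lambda x, i: 2.0 * (x - a[i]), len(a), x0=0.0)
```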

LoCoDL: Communication-Efficient Distributed Learning with Local Training and Compression

no code yet • 7 Mar 2024

In distributed optimization and learning, and even more in the modern framework of federated learning, communication, which is slow and costly, is critical.

MUSIC: Accelerated Convergence for Distributed Optimization With Inexact and Exact Methods

no code yet • 5 Mar 2024

Gradient-type distributed optimization methods have blossomed into one of the most important tools for solving minimization and learning tasks over networked agent systems.

Privacy-Preserving Distributed Optimization and Learning

no code yet • 29 Feb 2024

We first discuss cryptography, differential privacy, and other techniques that can be used for privacy preservation and indicate their pros and cons for privacy protection in distributed optimization and learning.

Distributed Momentum Methods Under Biased Gradient Estimations

no code yet • 29 Feb 2024

In this work, we establish non-asymptotic convergence bounds on distributed momentum methods under biased gradient estimation on both general non-convex and $\mu$-PL non-convex problems.
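For reference, the heavy-ball momentum update that such analyses study is v_{t+1} = beta * v_t + g_t, x_{t+1} = x_t - lr * v_{t+1}, where g_t may be a biased gradient estimate. A toy sketch showing how a constant bias shifts the stationary point; all constants are illustrative, not from the paper:

```python
def heavy_ball_step(x, v, grad, lr=0.05, beta=0.9):
    """One heavy-ball momentum step: v <- beta*v + grad, x <- x - lr*v.
    The cited analysis bounds convergence when `grad` is a *biased*
    estimate of the true gradient."""
    v = beta * v + grad
    x = x - lr * v
    return x, v

# Toy run on f(x) = x^2 with a constant-bias oracle g(x) = 2x + 0.01:
# the iterates settle at the bias-shifted stationary point x = -0.005.
x, v = 5.0, 0.0
for _ in range(500):
    x, v = heavy_ball_step(x, v, 2.0 * x + 0.01)
```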

TernaryVote: Differentially Private, Communication Efficient, and Byzantine Resilient Distributed Optimization on Heterogeneous Data

no code yet • 16 Feb 2024

In this paper, we propose TernaryVote, which combines a ternary compressor and the majority vote mechanism to realize differential privacy, gradient compression, and Byzantine resilience simultaneously.
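A deterministic sign-based stand-in conveys the structure: each worker sends a ternary vector in {-1, 0, +1}, and the server takes a coordinate-wise majority vote, so a minority of Byzantine workers cannot flip a coordinate on which honest workers agree. TernaryVote's actual compressor is randomized (that randomness is what yields differential privacy); this sketch omits it:

```python
def ternary_compress(grad):
    """Deterministic sign-style stand-in for a ternary compressor: map
    each coordinate to {-1, 0, +1}.  (TernaryVote's real compressor is
    randomized; the randomness provides differential privacy.)"""
    return [0 if g == 0 else (1 if g > 0 else -1) for g in grad]

def majority_vote(messages):
    """Server aggregation: coordinate-wise majority vote over the workers'
    ternary messages.  A minority of Byzantine workers cannot flip a
    coordinate on which honest workers agree."""
    out = []
    for coords in zip(*messages):
        s = sum(coords)
        out.append(0 if s == 0 else (1 if s > 0 else -1))
    return out
```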

A Survey of Resilient Coordination for Cyber-Physical Systems Against Malicious Attacks

no code yet • 16 Feb 2024

Furthermore, miscellaneous resilient coordination problems are discussed in this survey.