Distributed Optimization

19 papers with code · Methodology

Leaderboards

No evaluation results yet. Help compare methods by submitting evaluation metrics.

Greatest papers with code

CoCoA: A General Framework for Communication-Efficient Distributed Optimization

7 Nov 2016 · gingsmith/cocoa

The scale of modern datasets necessitates the development of efficient distributed optimization methods for machine learning.

DISTRIBUTED OPTIMIZATION

L1-Regularized Distributed Optimization: A Communication-Efficient Primal-Dual Framework

13 Dec 2015 · gingsmith/cocoa

Despite the importance of sparsity in many large-scale applications, there are few methods for distributed optimization of sparsity-inducing objectives.

DISTRIBUTED OPTIMIZATION

Adding vs. Averaging in Distributed Primal-Dual Optimization

12 Feb 2015 · gingsmith/cocoa

Distributed optimization methods for large-scale machine learning suffer from a communication bottleneck.

DISTRIBUTED OPTIMIZATION

Federated Optimization in Heterogeneous Networks

14 Dec 2018 · litian96/FedProx

Federated Learning is a distributed learning paradigm with two key challenges that differentiate it from traditional distributed optimization: (1) significant variability in terms of the systems characteristics on each device in the network (systems heterogeneity), and (2) non-identically distributed data across the network (statistical heterogeneity).

DISTRIBUTED OPTIMIZATION
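
FedProx handles both kinds of heterogeneity by having each selected device inexactly minimize its local loss plus a proximal term that penalizes drift from the current global model. Below is a minimal numpy sketch of one round under that formulation; the gradient callables, step size, and mu are illustrative placeholders, not the reference implementation in litian96/FedProx.

```python
import numpy as np

def fedprox_local_update(w_global, grad_fk, mu=0.1, lr=0.01, num_steps=10):
    """Approximately minimize F_k(w) + (mu/2) * ||w - w_global||^2.

    The proximal term limits how far a client's iterate can drift from the
    current global model, which lets clients safely run a variable number
    of local steps (systems heterogeneity) on non-IID data (statistical
    heterogeneity).
    """
    w = w_global.copy()
    for _ in range(num_steps):
        g = grad_fk(w) + mu * (w - w_global)  # local loss gradient + proximal pull
        w -= lr * g
    return w

def fedprox_round(w_global, client_grads, **kw):
    """One communication round: each client solves its proximal subproblem
    inexactly; the server averages the returned models."""
    updates = [fedprox_local_update(w_global, g, **kw) for g in client_grads]
    return np.mean(updates, axis=0)
```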

Sparsified SGD with Memory

NeurIPS 2018 · epfml/sparsifiedSGD

Huge-scale machine learning problems are nowadays tackled by distributed optimization algorithms, i.e., algorithms that leverage the compute power of many devices for training.

DISTRIBUTED OPTIMIZATION · QUANTIZATION
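
The paper's key mechanism is error feedback: each worker communicates only the top-k coordinates of its gradient and keeps the discarded remainder in a local memory vector that is added back on the next step, so no gradient information is permanently lost. A minimal sketch of that compressor, with illustrative names:

```python
import numpy as np

def topk_with_memory(grad, memory, k):
    """Sparsify (grad + memory) to its k largest-magnitude entries.

    Returns the sparse vector to communicate and the updated local memory
    (the residual that was not sent this step).
    """
    corrected = grad + memory
    idx = np.argpartition(np.abs(corrected), -k)[-k:]  # indices of top-k entries
    sparse = np.zeros_like(corrected)
    sparse[idx] = corrected[idx]        # what gets communicated
    new_memory = corrected - sparse     # residual accumulated locally
    return sparse, new_memory
```

Each worker applies this before communication, cutting traffic from d floats to k (index, value) pairs per step.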

PowerSGD: Practical Low-Rank Gradient Compression for Distributed Optimization

NeurIPS 2019 · epfml/powersgd

We study gradient compression methods to alleviate the communication bottleneck in data-parallel distributed optimization.

DISTRIBUTED OPTIMIZATION
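
PowerSGD approximates each (reshaped) gradient matrix with a rank-r factorization computed by a single, warm-started power-iteration step, so only two thin matrices need to be all-reduced instead of the full gradient. A minimal single-worker sketch of the compressor, assuming error feedback is handled by the caller; names are illustrative:

```python
import numpy as np

def powersgd_compress(M, Q):
    """M: (n, m) gradient reshaped as a matrix; Q: (m, r) basis warm-started
    from the previous step. Returns the rank-r approximation and new basis."""
    P = M @ Q                   # (n, r); in data-parallel training this is all-reduced
    P, _ = np.linalg.qr(P)      # orthogonalize the columns of P
    Q_new = M.T @ P             # (m, r); also all-reduced across workers
    M_hat = P @ Q_new.T         # rank-r approximation of M
    return M_hat, Q_new

# Error feedback: the worker keeps M - M_hat locally and adds it to the
# next step's gradient before compressing again.
```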

Distributed Optimization with Arbitrary Local Solvers

13 Dec 2015optml/CoCoA

We present a framework for distributed optimization that allows the flexibility of arbitrary solvers to be used on each machine locally, yet maintains competitive performance against other state-of-the-art special-purpose distributed methods.

DISTRIBUTED OPTIMIZATION
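
The framework's outer loop is solver-agnostic: each machine approximately solves a local subproblem with whatever method it prefers, and only parameter updates cross the network. A minimal sketch of one round, with an aggregation weight gamma spanning the averaging-vs-adding choice studied in the CoCoA line of work; local_solver and partitions are illustrative placeholders:

```python
import numpy as np

def cocoa_round(w, partitions, local_solver, gamma=1.0):
    """One communication round of a CoCoA-style method.

    partitions   : per-machine local data/objectives
    local_solver : callable (w, part) -> delta_w, an approximate solution to
                   the machine's local subproblem; any solver (SDCA, SGD,
                   L-BFGS, ...) can be plugged in.
    gamma        : aggregation weight, from averaging (gamma = 1/K) to
                   adding (gamma = 1) the K local updates.
    """
    deltas = [local_solver(w, part) for part in partitions]  # runs in parallel
    return w + gamma * np.sum(deltas, axis=0)
```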

Robust Learning from Untrusted Sources

29 Jan 2019NikolaKon1994/Robust-Learning-from-Untrusted-Sources

Modern machine learning methods often require more data for training than a single expert can provide.

DISTRIBUTED OPTIMIZATION

Communication-Efficient Distributed Optimization in Networks with Gradient Tracking and Variance Reduction

12 Sep 2019liboyue/Network-Distributed-Algorithm

There is growing interest in large-scale machine learning and optimization over decentralized networks, e.g., in the context of multi-agent learning and federated learning.

DISTRIBUTED OPTIMIZATION
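
Gradient tracking lets each node in a decentralized network maintain an estimate of the network-wide average gradient using only neighbor communication through a mixing matrix. A minimal sketch of one synchronized step, omitting the paper's variance-reduction component; all names and the step size are illustrative:

```python
import numpy as np

def gradient_tracking_step(X, Y, G_prev, W, grads, lr=0.1):
    """X: (n_nodes, d) local iterates; Y: (n_nodes, d) gradient trackers;
    G_prev: (n_nodes, d) previous local gradients; W: (n_nodes, n_nodes)
    doubly stochastic mixing matrix; grads(X) -> (n_nodes, d) fresh
    local gradients."""
    X_new = W @ X - lr * Y            # consensus averaging + tracked descent
    G_new = grads(X_new)
    Y_new = W @ Y + G_new - G_prev    # update the average-gradient tracker
    return X_new, Y_new, G_new
```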