About

The goal of Distributed Optimization is to optimize an objective defined over millions or billions of data points that are distributed across many machines, by utilizing the computational power of these machines.

Source: Analysis of Distributed Stochastic Dual Coordinate Ascent
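
A minimal sketch of the data-parallel pattern behind this definition, assuming a least-squares objective and four simulated machines (all names and data here are illustrative): each machine computes a gradient on its own shard, the shards' gradients are averaged, and every machine takes the same step.

```python
import numpy as np

def local_gradient(w, X, y):
    # Least-squares gradient on one machine's shard: grad of 0.5*||Xw - y||^2 / n
    return X.T @ (X @ w - y) / len(y)

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
w_true = rng.normal(size=10)
y = X @ w_true + 0.01 * rng.normal(size=1000)

shards = list(zip(np.array_split(X, 4), np.array_split(y, 4)))  # 4 "machines"
w = np.zeros(10)
for step in range(200):
    grads = [local_gradient(w, Xi, yi) for Xi, yi in shards]  # in parallel in practice
    w -= 0.1 * np.mean(grads, axis=0)  # one synchronized global update
```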

Benchmarks

No evaluation results yet. Help compare methods by submitting evaluation metrics.

Greatest papers with code

Federated Learning: Challenges, Methods, and Future Directions

21 Aug 2019 AshwinRJ/Federated-Learning-PyTorch

Federated learning involves training statistical models over remote devices or siloed data centers, such as mobile phones or hospitals, while keeping data localized.

DISTRIBUTED OPTIMIZATION FEDERATED LEARNING
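
A minimal sketch of the federated-averaging pattern this survey covers, assuming a simple least-squares local objective (the setup and names are illustrative, not the repository's API): devices train locally on their own data and share only model weights with the server, never the data itself.

```python
import numpy as np

def local_sgd(w, X, y, lr=0.1, epochs=5):
    # Local training on one device; raw (X, y) never leaves the device.
    w = w.copy()
    for _ in range(epochs):
        w -= lr * X.T @ (X @ w - y) / len(y)
    return w

rng = np.random.default_rng(1)
devices = []
for _ in range(8):  # 8 devices with non-identical local data
    Xi = rng.normal(size=(50, 5)) + rng.normal(size=5)  # per-device feature shift
    devices.append((Xi, Xi @ rng.normal(size=5)))

w_global = np.zeros(5)
for rnd in range(30):
    local_models = [local_sgd(w_global, Xi, yi) for Xi, yi in devices]
    w_global = np.mean(local_models, axis=0)  # server-side weight averaging
```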

Federated Optimization in Heterogeneous Networks

14 Dec 2018 litian96/FedProx

Theoretically, we provide convergence guarantees for our framework when learning over data from non-identical distributions (statistical heterogeneity), and while adhering to device-level systems constraints by allowing each participating device to perform a variable amount of work (systems heterogeneity).

DISTRIBUTED OPTIMIZATION FEDERATED LEARNING
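
A sketch of the FedProx-style local update, assuming a least-squares local loss: a proximal term (mu/2)*||w - w_global||^2 keeps each local model near the global one, and the number of local steps can vary per device.

```python
import numpy as np

def fedprox_local(w_global, X, y, mu=0.1, lr=0.1, n_steps=5):
    # n_steps can differ per device: stragglers do less work but still
    # contribute, which is the systems-heterogeneity point of the paper.
    w = w_global.copy()
    for _ in range(n_steps):
        grad = X.T @ (X @ w - y) / len(y) + mu * (w - w_global)
        w -= lr * grad
    return w
```

The server then averages the returned models as in federated averaging; setting mu = 0 recovers plain local SGD.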

MANGO: A Python Library for Parallel Hyperparameter Tuning

22 May 2020 ARM-software/mango

Tuning hyperparameters for machine learning algorithms is a tedious task, one that is typically done manually.

DISTRIBUTED COMPUTING DISTRIBUTED OPTIMIZATION HYPERPARAMETER OPTIMIZATION
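
mango's own API is not reproduced here; the sketch below only illustrates the underlying idea of parallel hyperparameter search (evaluating sampled configurations concurrently) using the standard library, with a made-up objective and search space.

```python
import random
from concurrent.futures import ProcessPoolExecutor

def objective(config):
    # Stand-in for a full training run; returns a validation "loss" to minimize.
    lr, reg = config["lr"], config["reg"]
    return (lr - 0.01) ** 2 + (reg - 0.001) ** 2

def sample_config(rng):
    return {"lr": rng.uniform(1e-4, 1e-1), "reg": rng.uniform(1e-5, 1e-2)}

if __name__ == "__main__":
    rng = random.Random(0)
    configs = [sample_config(rng) for _ in range(32)]
    with ProcessPoolExecutor(max_workers=4) as pool:  # evaluate in parallel
        scores = list(pool.map(objective, configs))
    best_score, best_idx = min(zip(scores, range(len(configs))))
    print("best config:", configs[best_idx], "score:", best_score)
```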

CoCoA: A General Framework for Communication-Efficient Distributed Optimization

7 Nov 2016 gingsmith/cocoa

The scale of modern datasets necessitates the development of efficient distributed optimization methods for machine learning.

DISTRIBUTED COMPUTING DISTRIBUTED OPTIMIZATION

L1-Regularized Distributed Optimization: A Communication-Efficient Primal-Dual Framework

13 Dec 2015 gingsmith/cocoa

Despite the importance of sparsity in many large-scale applications, there are few methods for distributed optimization of sparsity-inducing objectives.

DISTRIBUTED OPTIMIZATION

Adding vs. Averaging in Distributed Primal-Dual Optimization

12 Feb 2015 gingsmith/cocoa

Distributed optimization methods for large-scale machine learning suffer from a communication bottleneck.

DISTRIBUTED OPTIMIZATION

Training Large Neural Networks with Constant Memory using a New Execution Algorithm

13 Feb 2020 TezRomacH/layer-to-layer-pytorch

By running the optimizer in the host EPS, we show a new form of mixed precision for faster throughput and convergence.

DISTRIBUTED OPTIMIZATION NEURAL ARCHITECTURE SEARCH
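
A much-simplified sketch of the layer-to-layer execution idea, for the forward pass only and assuming a plain stack of linear layers: the model stays in host memory and a single layer at a time is streamed to the device, so device memory stays constant in depth. The paper additionally handles the backward pass and runs the optimizer on the host.

```python
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"  # CPU fallback for the sketch
layers = nn.ModuleList([nn.Linear(512, 512) for _ in range(24)])  # lives on the host

x = torch.randn(32, 512).to(device)
with torch.no_grad():
    for layer in layers:
        layer.to(device)       # stream one layer onto the device
        x = torch.relu(layer(x))
        layer.to("cpu")        # free device memory before the next layer
```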

PowerSGD: Practical Low-Rank Gradient Compression for Distributed Optimization

NeurIPS 2019 epfml/powersgd

We study gradient compression methods to alleviate the communication bottleneck in data-parallel distributed optimization.

DISTRIBUTED OPTIMIZATION
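
A sketch of the rank-r compression step at PowerSGD's core: a single power-iteration step yields factors P and Q that workers communicate instead of the full m x n gradient matrix (error feedback and the all-reduce of the factors are omitted here).

```python
import numpy as np

def low_rank_compress(M, Q):
    P = M @ Q                  # (m, r)
    P, _ = np.linalg.qr(P)     # orthonormalize the columns of P
    Q_new = M.T @ P            # (n, r); approximate gradient is P @ Q_new.T
    return P, Q_new

rng = np.random.default_rng(0)
grad = rng.normal(size=(256, 128))  # gradient of one weight matrix
Q = rng.normal(size=(128, 4))       # rank r = 4, warm-started across steps
P, Q = low_rank_compress(grad, Q)
approx = P @ Q.T
print("compression ratio:", grad.size / (P.size + Q.size))
```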

Sparsified SGD with Memory

NeurIPS 2018 epfml/sparsifiedSGD

Huge-scale machine learning problems are nowadays tackled by distributed optimization algorithms, i.e., algorithms that leverage the compute power of many devices for training.

DISTRIBUTED OPTIMIZATION QUANTIZATION
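
A sketch of the paper's central mechanism, top-k sparsification with error memory: only the k largest-magnitude gradient entries are transmitted, and the untransmitted remainder is kept locally and added back into the next gradient.

```python
import numpy as np

def topk_with_memory(grad, memory, k):
    corrected = grad + memory              # add back what was dropped before
    idx = np.argsort(np.abs(corrected))[-k:]
    sparse = np.zeros_like(corrected)
    sparse[idx] = corrected[idx]           # the k entries that get communicated
    return sparse, corrected - sparse      # new residual memory

rng = np.random.default_rng(0)
memory = np.zeros(1000)
for step in range(5):
    grad = rng.normal(size=1000)
    sparse, memory = topk_with_memory(grad, memory, k=10)
```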

Distributed Optimization with Arbitrary Local Solvers

13 Dec 2015 optml/CoCoA

To this end, we present a framework for distributed optimization that allows the flexibility of arbitrary solvers to be used on each machine locally, yet maintains competitive performance against other state-of-the-art special-purpose distributed methods.

DISTRIBUTED OPTIMIZATION
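
A simplified sketch of the plug-in pattern the framework enables: any local solver that improves the model on its own shard can be dropped into the outer aggregation loop. The real framework defines local dual subproblems with convergence guarantees; this primal least-squares version only illustrates the interface.

```python
import numpy as np

def gd_solver(w, X, y, lr=0.1, steps=10):
    # One choice of local solver: a few gradient steps on the shard.
    for _ in range(steps):
        w = w - lr * X.T @ (X @ w - y) / len(y)
    return w

def exact_solver(w, X, y):
    # A very different local solver: solve the shard's problem exactly.
    return np.linalg.lstsq(X, y, rcond=None)[0]

rng = np.random.default_rng(2)
X, w_true = rng.normal(size=(400, 8)), rng.normal(size=8)
y = X @ w_true
shards = list(zip(np.array_split(X, 4), np.array_split(y, 4)))
solvers = [gd_solver, exact_solver, gd_solver, exact_solver]  # mix and match

w = np.zeros(8)
for rnd in range(10):
    local = [solve(w, Xi, yi) for solve, (Xi, yi) in zip(solvers, shards)]
    w = np.mean(local, axis=0)  # combine the heterogeneous local updates
```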