Distributed Optimization

85 papers with code • 1 benchmark • 0 datasets

The goal of Distributed Optimization is to optimize an objective defined over millions or billions of data points that are distributed across many machines, by exploiting the combined computational power of those machines.
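
A minimal sketch of this setting, using simulated machines in plain NumPy (all names are illustrative and not tied to any specific paper below): each machine computes a gradient on its local data shard, and only these gradients are communicated and averaged by a coordinator.

    import numpy as np

    def local_gradient(w, X, y):
        # Least-squares gradient on one machine's local shard
        return X.T @ (X @ w - y) / len(y)

    def distributed_gradient_descent(shards, dim, lr=0.1, rounds=100):
        # shards: list of (X_k, y_k) held by each machine; raw data never moves
        w = np.zeros(dim)
        for _ in range(rounds):
            grads = [local_gradient(w, X, y) for X, y in shards]  # computed in parallel in practice
            w -= lr * np.mean(grads, axis=0)                      # coordinator averages and steps
        return w

    # Toy usage: four machines, each holding a shard of a shared regression problem
    rng = np.random.default_rng(0)
    w_true = rng.normal(size=5)
    shards = []
    for _ in range(4):
        X = rng.normal(size=(200, 5))
        shards.append((X, X @ w_true + 0.01 * rng.normal(size=200)))
    w_hat = distributed_gradient_descent(shards, dim=5)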

Source: Analysis of Distributed Stochastic Dual Coordinate Ascent

Libraries

Use these libraries to find Distributed Optimization models and implementations

Most implemented papers

Federated Optimization in Heterogeneous Networks

litian96/FedProx 14 Dec 2018

Theoretically, we provide convergence guarantees for our framework when learning over data from non-identical distributions (statistical heterogeneity), and while adhering to device-level systems constraints by allowing each participating device to perform a variable amount of work (systems heterogeneity).
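
In FedProx, this flexibility comes from a proximal term added to each device's local objective: in round t, device k only needs to approximately solve

    \min_{w} \; h_k(w; w^t) = F_k(w) + \frac{\mu}{2}\,\lVert w - w^t \rVert^2

where F_k is the local empirical loss, w^t is the current global model, and \mu limits how far inexact, variable-effort local updates can drift from it.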

SCAFFOLD: Stochastic Controlled Averaging for Federated Learning

TsingZ0/PFL-Non-IID ICML 2020

We obtain tight convergence rates for FedAvg and prove that it suffers from 'client-drift' when the data is heterogeneous (non-iid), resulting in unstable and slow convergence.
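
SCAFFOLD corrects this drift with control variates: a server state c and a per-client state c_i estimating the server's and client i's update directions. Each local step on client i then takes the form

    y_i \leftarrow y_i - \eta_l\,\bigl(g_i(y_i) - c_i + c\bigr)

so the correction c - c_i counteracts the pull of the local stochastic gradient g_i(y_i) toward the client's own optimum.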

ZOOpt: Toolbox for Derivative-Free Optimization

eyounx/ZOOpt 31 Dec 2017

Recent advances in derivative-free optimization allow efficient approximation of globally optimal solutions of sophisticated functions, such as functions with many local optima, as well as non-differentiable and non-continuous functions.
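
As a toy illustration of the derivative-free setting (a generic random local search in NumPy, not the ZOOpt API), the objective is treated as a black box that is only ever evaluated, never differentiated:

    import numpy as np

    def black_box(x):
        # Non-smooth, multi-modal test function: no useful gradients available
        return np.sum(np.abs(x)) + 2.0 * np.sum(np.sin(3.0 * x) ** 2)

    def random_local_search(f, dim, budget=5000, sigma=0.3, seed=0):
        # Perturb the incumbent solution and keep strict improvements
        rng = np.random.default_rng(seed)
        best_x = rng.uniform(-1.0, 1.0, size=dim)
        best_val = f(best_x)
        for _ in range(budget):
            cand = best_x + sigma * rng.normal(size=dim)
            val = f(cand)
            if val < best_val:
                best_x, best_val = cand, val
        return best_x, best_val

    x_star, f_star = random_local_search(black_box, dim=10)

Toolboxes such as ZOOpt sample candidates far more cleverly than the blind perturbations above, but the interface to the objective is the same: query a point, observe a value.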

Secure Distributed Training at Scale

yandex-research/btard 21 Jun 2021

Training such models requires substantial computational resources (e.g., HPC clusters) that are not available to small research groups and independent researchers.

Power Bundle Adjustment for Large-Scale 3D Reconstruction

nikolausdemmel/rootba CVPR 2023

We demonstrate that employing the proposed Power Bundle Adjustment as a sub-problem solver significantly improves the speed and accuracy of the distributed optimization.
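
For background (standard bundle adjustment notation; damping and the paper's exact formulation are omitted): the normal equations have block structure over camera and point parameters, and eliminating the points leaves a reduced camera system whose matrix is the Schur complement S = U - W V^{-1} W^T. The "power" idea is to apply its inverse through a truncated series expansion, valid when the series converges,

    S^{-1} \;=\; \bigl(U - W V^{-1} W^\top\bigr)^{-1} \;=\; \sum_{i=0}^{\infty} \bigl(U^{-1} W V^{-1} W^\top\bigr)^{i}\, U^{-1},

truncated after a small number of terms, which requires only block-sparse matrix-vector products rather than an explicit factorization.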

L1-Regularized Distributed Optimization: A Communication-Efficient Primal-Dual Framework

gingsmith/proxcocoa 13 Dec 2015

Despite the importance of sparsity in many large-scale applications, there are few methods for distributed optimization of sparsity-inducing objectives.
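
The prototypical sparsity-inducing objective in this setting is L1-regularized empirical risk, written schematically as

    \min_{w \in \mathbb{R}^d} \; \frac{1}{n} \sum_{i=1}^{n} \ell\bigl(a_i^\top w\bigr) \;+\; \lambda \lVert w \rVert_1,

with the data partitioned across machines; the non-smooth regularizer is the source of the difficulty addressed by the primal-dual framework above.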

CoCoA: A General Framework for Communication-Efficient Distributed Optimization

gingsmith/cocoa 7 Nov 2016

The scale of modern datasets necessitates the development of efficient distributed optimization methods for machine learning.

Robust Learning from Untrusted Sources

NikolaKon1994/Robust-Learning-from-Untrusted-Sources 29 Jan 2019

Modern machine learning methods often require more data for training than a single expert can provide.

Federated Learning: Challenges, Methods, and Future Directions

AshwinRJ/Federated-Learning-PyTorch 21 Aug 2019

Federated learning involves training statistical models over remote devices or siloed data centers, such as mobile phones or hospitals, while keeping data localized.
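
A minimal sketch of this pattern (FedAvg-style federated averaging; helper names are illustrative): each client trains locally on its private data, and only model weights, weighted by local dataset size, are returned and averaged.

    import numpy as np

    def local_train(w, X, y, lr=0.05, epochs=5):
        # A few epochs of full-batch gradient descent on one client's private data
        w = w.copy()
        for _ in range(epochs):
            w -= lr * X.T @ (X @ w - y) / len(y)
        return w

    def federated_averaging(clients, dim, rounds=20):
        # clients: list of (X_k, y_k); raw data never leaves the client
        w_global = np.zeros(dim)
        sizes = np.array([len(y) for _, y in clients], dtype=float)
        for _ in range(rounds):
            local_models = [local_train(w_global, X, y) for X, y in clients]
            w_global = np.average(local_models, axis=0, weights=sizes)  # size-weighted average
        return w_global

Unlike the gradient-averaging sketch earlier, several local epochs run between communication rounds, which is exactly what makes heterogeneous (non-iid) client data problematic for plain averaging.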

SlowMo: Improving Communication-Efficient Distributed SGD with Slow Momentum

facebookresearch/fairscale ICLR 2020

We provide theoretical convergence guarantees showing that SlowMo converges to a stationary point of smooth non-convex losses.
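
Schematically, SlowMo wraps a base distributed optimizer (e.g., local SGD with periodic averaging) with an outer, infrequently applied momentum step on the averaged model; the sketch below simplifies the paper's scaling constants.

    import numpy as np

    def slowmo_outer_step(x_before, x_after, slow_momentum, beta=0.5, alpha=1.0):
        # x_before: averaged model before the inner base-optimizer steps
        # x_after:  averaged model after those steps
        # Treat the displacement as a pseudo-gradient and apply heavy-ball momentum to it
        pseudo_grad = x_before - x_after
        slow_momentum = beta * slow_momentum + pseudo_grad
        x_next = x_before - alpha * slow_momentum
        return x_next, slow_momentum

    # Toy usage with flat parameter vectors
    x0, buf = np.zeros(10), np.zeros(10)
    x_after_inner = x0 - 0.1 * np.ones(10)   # stand-in for the inner base-optimizer steps
    x1, buf = slowmo_outer_step(x0, x_after_inner, buf)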