Distributed Optimization
85 papers with code • 1 benchmark • 0 datasets
The goal of Distributed Optimization is to optimize an objective defined over millions or billions of data points distributed across many machines, by utilizing the combined computational power of those machines.
Source: Analysis of Distributed Stochastic Dual Coordinate Ascent
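To make the setting concrete, below is a minimal NumPy sketch of the most common pattern, synchronous gradient averaging on a synthetic sharded least-squares problem; `local_gradient`, the data, and the serial loop are illustrative stand-ins for a real parallel system.

```python
import numpy as np

rng = np.random.default_rng(0)
WORKERS = 4
# Synthetic least-squares problem, sharded across four "machines".
shards = [(rng.normal(size=(100, 10)), rng.normal(size=100)) for _ in range(WORKERS)]
w = np.zeros(10)

def local_gradient(w, shard):
    A, b = shard
    return A.T @ (A @ w - b) / len(b)   # gradient of 0.5/n * ||Aw - b||^2

for step in range(200):
    # In a real system each gradient is computed in parallel and combined
    # with an all-reduce; here the communication step is just np.mean.
    grads = [local_gradient(w, s) for s in shards]
    w -= 0.1 * np.mean(grads, axis=0)
```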
Libraries
Use these libraries to find Distributed Optimization models and implementations.

Most implemented papers
Federated Optimization in Heterogeneous Networks
Theoretically, we provide convergence guarantees for our framework when learning over data from non-identical distributions (statistical heterogeneity), and while adhering to device-level systems constraints by allowing each participating device to perform a variable amount of work (systems heterogeneity).
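For context, the framework this paper introduces (FedProx) is known to add a proximal term μ/2·||w − w_global||² to each device's local objective, which keeps variable amounts of local work from pulling devices too far from the global model. The sketch below renders that idea in NumPy; `local_grad`, the toy losses, and the step counts are illustrative, not the authors' code.

```python
import numpy as np

def fedprox_local_update(w_global, local_grad, mu=0.1, lr=0.01, num_steps=5):
    """Proximal local SGD: minimize local loss + (mu/2)*||w - w_global||^2."""
    w = w_global.copy()
    for _ in range(num_steps):
        g = local_grad(w) + mu * (w - w_global)  # gradient incl. proximal term
        w -= lr * g
    return w

# One server round: devices perform unequal amounts of local work
# (systems heterogeneity) and the server averages the returned models.
w_global = np.zeros(10)
device_steps = [1, 5, 10]
updates = [fedprox_local_update(w_global, lambda w, k=k: w - k, num_steps=s)
           for k, s in enumerate(device_steps)]
w_global = np.mean(updates, axis=0)
```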
SCAFFOLD: Stochastic Controlled Averaging for Federated Learning
We obtain tight convergence rates for FedAvg and prove that it suffers from 'client-drift' when the data is heterogeneous (non-iid), resulting in unstable and slow convergence.
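SCAFFOLD counteracts this drift with control variates maintained on the server and on each client. The sketch below follows the paper's local update and its "Option II" control-variate rule as we read them; `local_grad` is a placeholder gradient oracle.

```python
import numpy as np

def scaffold_client_round(x, c, c_i, local_grad, lr=0.01, K=10):
    """One client round: x is the server model, c / c_i the server / client
    control variates, local_grad the client's stochastic gradient oracle."""
    y = x.copy()
    for _ in range(K):
        y -= lr * (local_grad(y) - c_i + c)   # drift-corrected local step
    c_i_new = c_i - c + (x - y) / (K * lr)    # "Option II" control update
    return y - x, c_i_new - c_i               # model delta and control delta
```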
ZOOpt: Toolbox for Derivative-Free Optimization
Recent advances in derivative-free optimization allow efficient approximation of globally optimal solutions of sophisticated functions, such as functions with many local optima and non-differentiable or discontinuous functions.
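As a usage sketch, the snippet below follows the Dimension/Objective/Parameter/Opt interface described in ZOOpt's documentation; the sphere objective and budget are illustrative, and the exact API may differ between versions.

```python
from zoopt import Dimension, Objective, Parameter, Opt

def sphere(solution):
    # Smooth test objective; ZOOpt only needs function values, not gradients.
    return sum((v - 0.2) ** 2 for v in solution.get_x())

dim = Dimension(10, [[-1, 1]] * 10, [True] * 10)  # 10 continuous dims in [-1, 1]
best = Opt.min(Objective(sphere, dim), Parameter(budget=1000))
print(best.get_x(), best.get_value())
```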
Secure Distributed Training at Scale
Training such models requires a lot of computational resources (e.g., HPC clusters) that are not available to small research groups and independent researchers.
Power Bundle Adjustment for Large-Scale 3D Reconstruction
We demonstrate that employing the proposed Power Bundle Adjustment as a sub-problem solver significantly improves the speed and accuracy of distributed optimization.
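The key numerical idea is to expand an inverse as a truncated power (Neumann) series rather than forming it directly. The toy sketch below demonstrates that expansion on a small dense system; it illustrates the series only, not the paper's bundle-adjustment solver.

```python
import numpy as np

def neumann_solve(M, b, order=20):
    """Approximate (I - M)^{-1} b by the truncated series sum_k M^k b."""
    x, term = b.copy(), b.copy()
    for _ in range(order):
        term = M @ term        # next series term M^k b
        x += term
    return x

rng = np.random.default_rng(1)
M = rng.normal(size=(5, 5))
M *= 0.4 / np.linalg.norm(M, 2)            # ensure spectral norm < 1 so the series converges
b = rng.normal(size=5)
exact = np.linalg.solve(np.eye(5) - M, b)
print(np.allclose(neumann_solve(M, b), exact, atol=1e-6))  # True
```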
L1-Regularized Distributed Optimization: A Communication-Efficient Primal-Dual Framework
Despite the importance of sparsity in many large-scale applications, there are few methods for distributed optimization of sparsity-inducing objectives.
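As background, the sparsity in such objectives comes from the proximal operator of the L1 norm, soft-thresholding, which sets small coordinates exactly to zero; the sketch below shows the operator itself, not the paper's primal-dual framework.

```python
import numpy as np

def soft_threshold(w, t):
    """Prox of t*||.||_1: shrink toward zero, zeroing entries with |w_i| <= t."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

print(soft_threshold(np.array([0.05, -0.3, 1.2]), 0.1))  # -> [ 0.  -0.2  1.1]
```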
CoCoA: A General Framework for Communication-Efficient Distributed Optimization
The scale of modern datasets necessitates the development of efficient distributed optimization methods for machine learning.
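The framework's communication pattern can be pictured as follows: each machine runs an arbitrary local solver on its shard, and the resulting deltas are combined with an aggregation parameter. The sketch below is a schematic of that pattern only; `local_solver` and the primal view are illustrative, since CoCoA itself is formulated through local dual subproblems.

```python
import numpy as np

def cocoa_round(w, shards, local_solver, gamma):
    """One communication round: arbitrary local solvers, aggregated deltas."""
    deltas = [local_solver(w, shard) - w for shard in shards]  # local work
    return w + gamma * np.sum(deltas, axis=0)  # gamma=1/K averages, gamma=1 adds
```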
Robust Learning from Untrusted Sources
Modern machine learning methods often require more data for training than a single expert can provide.
Federated Learning: Challenges, Methods, and Future Directions
Federated learning involves training statistical models over remote devices or siloed data centers, such as mobile phones or hospitals, while keeping data localized.
SlowMo: Improving Communication-Efficient Distributed SGD with Slow Momentum
We provide theoretical convergence guarantees showing that SlowMo converges to a stationary point of smooth non-convex losses.
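Schematically, SlowMo wraps a base method (such as local SGD with periodic averaging) in an outer momentum update on the averaged pseudo-gradient. The sketch below reflects our reading of that outer loop; the parameter names and exact scaling are assumptions, not the authors' code.

```python
import numpy as np

def slowmo_outer_step(x, u, base_update, base_lr=0.1, slow_lr=1.0, beta=0.7):
    """x: synchronized model; u: slow momentum buffer; base_update runs the
    base algorithm (e.g., local SGD + averaging) for tau inner steps."""
    x_base = base_update(x)                  # inner steps + all-reduce average
    u = beta * u + (x - x_base) / base_lr    # slow momentum on pseudo-gradient
    x = x - slow_lr * base_lr * u            # outer (slow) update
    return x, u
```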