Distributed Optimization
77 papers with code • 0 benchmarks • 0 datasets
The goal of Distributed Optimization is to optimize an objective defined over millions or billions of data points distributed across many machines, by exploiting the combined computational power of those machines.
Source: Analysis of Distributed Stochastic Dual Coordinate Ascent
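The definition above can be illustrated with a minimal single-process sketch (all names hypothetical): each "machine" holds a shard of the data and computes a local gradient, and a coordinator averages the shards' gradients to update a shared model, one communication round per step. Here the objective is a simple scalar least-squares fit.

```python
# Hypothetical sketch of data-parallel distributed optimization,
# simulated in one process. Objective: (1/2) * (w*x - y)^2 averaged
# over all data, with data split across several "machines" (shards).

def local_gradient(w, shard):
    # Gradient of the least-squares loss over this machine's shard only.
    return sum((w * x - y) * x for x, y in shard) / len(shard)

def distributed_gd(shards, w=0.0, lr=0.1, steps=100):
    for _ in range(steps):
        # Each machine computes its gradient independently (in parallel
        # on real hardware); the coordinator averages them -- this is
        # the one communication round per iteration.
        grads = [local_gradient(w, s) for s in shards]
        w -= lr * sum(grads) / len(grads)
    return w

# Data generated from y = 2x, split across three "machines".
shards = [[(1.0, 2.0), (2.0, 4.0)],
          [(3.0, 6.0)],
          [(0.5, 1.0), (4.0, 8.0)]]
print(round(distributed_gd(shards), 3))  # → 2.0
```

Most of the papers listed below refine exactly this loop: reducing how often (or how much) the machines must communicate, or replacing the averaged-gradient step with cheaper or more accurate local work.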
Benchmarks
These leaderboards are used to track progress in Distributed Optimization
Libraries
Use these libraries to find Distributed Optimization models and implementations
Most implemented papers
Moshpit SGD: Communication-Efficient Decentralized Training on Heterogeneous Unreliable Devices
Training deep neural networks on large datasets can often be accelerated by using multiple compute nodes.
DeepLM: Large-Scale Nonlinear Least Squares on Deep Learning Frameworks Using Stochastic Domain Decomposition
We propose a novel approach for large-scale nonlinear least squares problems based on deep learning frameworks.
Power Bundle Adjustment for Large-Scale 3D Reconstruction
We demonstrate that employing the proposed Power Bundle Adjustment as a sub-problem solver significantly improves speed and accuracy of the distributed optimization.
Distributed Adversarial Training to Robustify Deep Neural Networks at Scale
Spurred by that, we propose distributed adversarial training (DAT), a large-batch adversarial training framework implemented over multiple machines.
Communication Efficient Distributed Optimization using an Approximate Newton-type Method
We present a novel Newton-type method for distributed optimization, which is particularly well suited for stochastic optimization and learning problems.
Adding vs. Averaging in Distributed Primal-Dual Optimization
Distributed optimization methods for large-scale machine learning suffer from a communication bottleneck.
Distributed Optimization with Arbitrary Local Solvers
To this end, we present a framework for distributed optimization that allows arbitrary solvers to be used locally on each (single) machine, yet remains competitive with state-of-the-art special-purpose distributed methods.
Accelerating Exact and Approximate Inference for (Distributed) Discrete Optimization with GPUs
Discrete optimization is a central problem in artificial intelligence.
Optimization for Large-Scale Machine Learning with Distributed Features and Observations
As the size of modern data sets exceeds the disk and memory capacities of a single computer, machine learning practitioners have resorted to parallel and distributed computing.
Distributed Optimization of Multi-Class SVMs
Training of one-vs.-rest SVMs can be parallelized across classes in a straightforward way.
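The per-class independence mentioned above can be sketched as follows (a hypothetical illustration, not the paper's method): each class gets its own binary linear SVM, trained here with a bare-bones hinge-loss subgradient loop, and the independent problems run concurrently.

```python
# Hypothetical one-vs.-rest sketch: one binary hinge-loss model per
# class, trained concurrently because the subproblems are independent.
from concurrent.futures import ThreadPoolExecutor

def train_binary_svm(cls, X, y, lr=0.1, epochs=50):
    # Relabel: +1 for the target class, -1 for everything else.
    labels = [1.0 if yi == cls else -1.0 for yi in y]
    w = [0.0] * len(X[0])
    for _ in range(epochs):
        for xi, yi in zip(X, labels):
            margin = yi * sum(wj * xj for wj, xj in zip(w, xi))
            if margin < 1.0:  # hinge-loss subgradient step
                w = [wj + lr * yi * xj for wj, xj in zip(w, xi)]
    return w

def train_ovr(X, y):
    # Threads keep the demo self-contained; a real deployment would
    # hand each class's problem to a separate process or machine.
    classes = sorted(set(y))
    with ThreadPoolExecutor() as pool:
        models = pool.map(lambda c: train_binary_svm(c, X, y), classes)
    return dict(zip(classes, models))

def predict(models, x):
    # Assign the class whose binary model scores highest.
    return max(models,
               key=lambda c: sum(wj * xj for wj, xj in zip(models[c], x)))

X = [(1.0, 0.0), (2.0, 0.0), (0.0, 1.0), (0.0, 2.0)]
y = [0, 0, 1, 1]
models = train_ovr(X, y)
print(predict(models, (3.0, 0.0)))  # → 0
```

Because no state is shared between the per-class jobs, this scheme scales out to as many workers as there are classes, which is precisely the straightforward parallelization the paper starts from.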