Distributed Optimization

76 papers with code • 0 benchmarks • 0 datasets

The goal of Distributed Optimization is to optimize an objective defined over millions or billions of data points that are distributed across many machines, by exploiting the combined computational power of those machines.

Source: Analysis of Distributed Stochastic Dual Coordinate Ascent
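
As a rough illustration of this setting, here is a minimal, hypothetical sketch (not taken from the cited paper): each simulated machine holds one shard of the data, computes a gradient only on its shard, and a coordinator averages the gradients to update a shared model. All names and parameter values are illustrative assumptions.

    import numpy as np

    # Simulated distributed least squares: 4 "machines", each owning one data shard.
    rng = np.random.default_rng(0)
    X, y = rng.normal(size=(1000, 5)), rng.normal(size=1000)
    shards = np.array_split(np.arange(1000), 4)   # row indices owned by each machine
    w = np.zeros(5)                               # shared model parameters

    for step in range(100):
        # each machine computes a gradient using only its local shard
        local_grads = [X[i].T @ (X[i] @ w - y[i]) / len(i) for i in shards]
        # the coordinator averages the local gradients and updates the shared model
        w -= 0.1 * np.mean(local_grads, axis=0)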

Libraries

Use these libraries to find Distributed Optimization models and implementations

Most implemented papers

Moshpit SGD: Communication-Efficient Decentralized Training on Heterogeneous Unreliable Devices

yandex-research/moshpit-sgd NeurIPS 2021

Training deep neural networks on large datasets can often be accelerated by using multiple compute nodes.

DeepLM: Large-Scale Nonlinear Least Squares on Deep Learning Frameworks Using Stochastic Domain Decomposition

hjwdzh/DeepLM CVPR 2021

We propose a novel approach for large-scale nonlinear least squares problems based on deep learning frameworks.

Power Bundle Adjustment for Large-Scale 3D Reconstruction

nikolausdemmel/rootba CVPR 2023

We demonstrate that employing the proposed Power Bundle Adjustment as a sub-problem solver significantly improves speed and accuracy of the distributed optimization.

Distributed Adversarial Training to Robustify Deep Neural Networks at Scale

dat-2022/dat 13 Jun 2022

Spurred by that, we propose distributed adversarial training (DAT), a large-batch adversarial training framework implemented over multiple machines.

Communication Efficient Distributed Optimization using an Approximate Newton-type Method

DAve-QN/source 30 Dec 2013

We present a novel Newton-type method for distributed optimization, which is particularly well suited for stochastic optimization and learning problems.

Adding vs. Averaging in Distributed Primal-Dual Optimization

gingsmith/cocoa 12 Feb 2015

Distributed optimization methods for large-scale machine learning suffer from a communication bottleneck.
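
The distinction in the title can be made concrete with a hedged sketch (hypothetical names, not the paper's actual algorithm): each of K machines proposes a local update to the shared iterate, and the aggregation step either averages the proposals or adds them outright, the latter being the more aggressive scheme the paper analyzes.

    def aggregate(w, local_updates, mode="average"):
        # local_updates: list of K update vectors proposed by the machines
        K = len(local_updates)
        total = sum(local_updates)
        # "average" scales the combined step by 1/K; any other mode applies the full sum
        return w + total / K if mode == "average" else w + total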

Distributed Optimization with Arbitrary Local Solvers

optml/CoCoA 13 Dec 2015

To this end, we present a framework for distributed optimization that allows the flexibility of arbitrary solvers to be used on each machine locally, while maintaining competitive performance against other state-of-the-art special-purpose distributed methods.
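
To illustrate the "arbitrary local solver" idea, here is a hedged sketch under assumed names (not the CoCoA API): each machine approximately solves its own subproblem with whatever routine it prefers, and only the resulting updates are communicated and combined.

    import numpy as np

    def local_sgd_solver(X_k, y_k, w, steps=10, lr=0.1):
        # one possible local solver: a few gradient steps on this machine's shard
        delta = np.zeros_like(w)
        for _ in range(steps):
            g = X_k.T @ (X_k @ (w + delta) - y_k) / len(y_k)
            delta -= lr * g
        return delta

    def outer_round(shards, w, local_solver=local_sgd_solver):
        # one communication round: every machine runs its chosen local solver
        # on its shard, then the proposed updates are averaged into the shared model
        deltas = [local_solver(X_k, y_k, w) for X_k, y_k in shards]
        return w + np.mean(deltas, axis=0)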

Accelerating Exact and Approximate Inference for (Distributed) Discrete Optimization with GPUs

nandofioretto/GpuBE 18 Aug 2016

Discrete optimization is a central problem in artificial intelligence.

Optimization for Large-Scale Machine Learning with Distributed Features and Observations

anathan90/RADiSA 31 Oct 2016

As the size of modern data sets exceeds the disk and memory capacities of a single computer, machine learning practitioners have resorted to parallel and distributed computing.

Optimal algorithms for smooth and strongly convex distributed optimization in networks

adelnabli/dadao ICML 2017

For centralized (i.e. master/slave) algorithms, we show that distributing Nesterov's accelerated gradient descent is optimal and achieves a precision $\varepsilon > 0$ in time $O(\sqrt{\kappa_g}(1+\Delta\tau)\ln(1/\varepsilon))$, where $\kappa_g$ is the condition number of the (global) function to optimize, $\Delta$ is the diameter of the network, and $\tau$ (resp. $1$) is the time needed to communicate values between two neighbors (resp. to perform local computations).
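
To make the bound concrete with arbitrarily chosen values: for $\kappa_g = 100$, $\Delta = 4$ and $\tau = 1$, the centralized rate above scales as $\sqrt{100}\,(1 + 4 \cdot 1)\ln(1/\varepsilon) = 50\ln(1/\varepsilon)$ time units to reach precision $\varepsilon$.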