Gradient Sparsification

Introduced by Wangni et al. in Gradient Sparsification for Communication-Efficient Distributed Optimization

Gradient Sparsification is a technique for distributed training that sparsifies stochastic gradients to reduce the communication cost, at the price of only a minor increase in the number of iterations. The key idea is to randomly drop coordinates of the stochastic gradient and appropriately amplify the remaining coordinates so that the sparsified gradient remains an unbiased estimate of the original one. This sparsification can significantly reduce the coding length of the stochastic gradient while only slightly increasing its variance.

Source: Gradient Sparsification for Communication-Efficient Distributed Optimization
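
A minimal NumPy sketch of this drop-and-amplify scheme is given below. The function name sparsify_gradient, the target_density parameter, and the magnitude-proportional keep probabilities are illustrative assumptions, not the paper's exact construction, which chooses the probabilities by solving a variance-minimization problem under a sparsity budget.

import numpy as np

def sparsify_gradient(grad, target_density=0.1, rng=None):
    # Unbiased, magnitude-based gradient sparsification (illustrative sketch).
    # Each coordinate i is kept with probability p_i and rescaled by 1/p_i,
    # so the expectation of the sparsified gradient equals the original gradient.
    rng = np.random.default_rng() if rng is None else rng
    grad = np.asarray(grad, dtype=float)
    abs_g = np.abs(grad)
    total = abs_g.sum()
    if total == 0.0:
        return np.zeros_like(grad)
    # Keep probabilities proportional to |g_i|, scaled so that the expected
    # number of surviving coordinates is roughly target_density * dimension,
    # then clipped at 1. (This proportional rule is a simple stand-in for the
    # optimized probabilities derived in the paper.)
    budget = target_density * grad.size
    probs = np.minimum(1.0, budget * abs_g / total)
    keep = rng.random(grad.shape) < probs
    sparse = np.zeros_like(grad)
    # Amplify the surviving coordinates by 1/p_i to preserve unbiasedness.
    sparse[keep] = grad[keep] / probs[keep]
    return sparse

In a distributed setting, each worker would apply such a sparsifier to its local stochastic gradient before communication, transmitting only the surviving (index, value) pairs.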

Tasks


Task                          Papers   Share
Federated Learning            10       45.45%
Quantization                  3        13.64%
BIG-bench Machine Learning    2        9.09%
Fairness                      2        9.09%
XLM-R                         1        4.55%
Classification                1        4.55%
General Classification        1        4.55%
Adversarial Defense           1        4.55%
Adversarial Robustness        1        4.55%
