Local SGD is a distributed training technique in which each worker runs SGD independently in parallel on its own data and the model iterates are averaged across workers only periodically, reducing communication overhead.
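The loop structure can be sketched in a minimal simulation. This is an illustrative toy, not the paper's implementation: each of several simulated workers minimizes its own quadratic loss `0.5 * ||w - target_k||^2` (a stand-in for a local data shard), runs `local_steps` SGD updates independently, and the iterates are averaged at each communication round.

```python
import numpy as np

# Toy setup (hypothetical): worker k minimizes 0.5 * ||w - target_k||^2,
# so its local gradient is simply (w - target_k).
rng = np.random.default_rng(0)
num_workers = 4
dim = 5
lr = 0.1
local_steps = 8    # H: SGD steps between synchronizations
rounds = 25        # number of communication rounds

targets = rng.normal(size=(num_workers, dim))  # per-worker optima
w_global = np.zeros(dim)

for _ in range(rounds):
    local_models = []
    for k in range(num_workers):
        w = w_global.copy()
        for _ in range(local_steps):   # independent local SGD, no communication
            grad = w - targets[k]
            w -= lr * grad
        local_models.append(w)
    # Communicate once per round: average the worker iterates.
    w_global = np.mean(local_models, axis=0)

# For this quadratic objective the averaged iterate converges to the
# mean of the per-worker optima.
print(np.allclose(w_global, targets.mean(axis=0), atol=1e-3))  # True
```

Increasing `local_steps` trades communication frequency for per-round drift between workers, which is the trade-off the convergence analysis quantifies.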
Source: Local SGD Converges Fast and Communicates Little
| Task | Papers | Share |
|---|---|---|
| Federated Learning | 25 | 49.02% |
| Language Modeling | 4 | 7.84% |
| Language Modelling | 4 | 7.84% |
| Image Classification | 4 | 7.84% |
| Edge-computing | 3 | 5.88% |
| BIG-bench Machine Learning | 2 | 3.92% |
| Blocking | 2 | 3.92% |
| Multi-Task Learning | 1 | 1.96% |
| Large Language Model | 1 | 1.96% |