# D$^2$: Decentralized Training over Decentralized Data

19 Mar 2018 · Hanlin Tang, Xiangru Lian, Ming Yan, Ce Zhang, Ji Liu

When training a machine learning model with multiple workers, each collecting data from its own source, it is most useful when the data collected by different workers are *unique* and *different*. Ironically, recent analyses of decentralized parallel stochastic gradient descent (D-PSGD) rely on the assumption that the data hosted on different workers are *not too different*...
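To make the D-PSGD setup concrete, here is a minimal numerical sketch of one decentralized-SGD-style training loop (the baseline D-PSGD scheme, not the paper's D$^2$ variant). The ring mixing matrix and the per-worker quadratic objectives are illustrative assumptions, not taken from the paper; they deliberately make the local data *different* across workers.

```python
import numpy as np

def ring_mixing_matrix(n):
    """Doubly stochastic mixing matrix W for a ring of n workers:
    each worker averages equally with itself and its two neighbors."""
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = 1 / 3
        W[i, (i - 1) % n] = 1 / 3
        W[i, (i + 1) % n] = 1 / 3
    return W

def dpsgd(centers, steps=200, lr=0.1, seed=0):
    """Run a D-PSGD-style loop where worker i holds the local objective
    f_i(x) = 0.5 * (x - centers[i])**2 (a stand-in for heterogeneous data)."""
    rng = np.random.default_rng(seed)
    n = len(centers)
    W = ring_mixing_matrix(n)
    x = rng.normal(size=n)        # one scalar parameter per worker
    for _ in range(steps):
        grads = x - centers       # gradient of 0.5*(x_i - c_i)^2
        x = W @ x - lr * grads    # gossip-average, then local gradient step
    return x

# Heterogeneous local data: each worker pulls toward a different center.
centers = np.array([0.0, 1.0, 2.0, 3.0])
x = dpsgd(centers)
```

With a constant step size, the workers' average lands on the minimizer of the average objective (here 1.5), but the individual iterates settle only *near* consensus, with a residual gap that grows with how different the local data are; that residual is exactly what motivates relaxing the "not too different" assumption.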

