Proximal SCOPE for Distributed Sparse Learning: Better Data Partition Implies Faster Convergence Rate

15 Mar 2018 · Shen-Yi Zhao, Gong-Duo Zhang, Ming-Wei Li, Wu-Jun Li

Distributed sparse learning with a cluster of multiple machines has attracted much attention in machine learning, especially for large-scale applications with high-dimensional data. One popular way to implement sparse learning is to use $L_1$ regularization...
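The abstract mentions $L_1$ regularization as a standard route to sparse learning. Proximal methods like the one in the paper's title rely on the proximal operator of the $L_1$ norm, which has the well-known closed form of soft-thresholding. As a minimal sketch (this is the standard operator, not the paper's full distributed algorithm, and the function name `prox_l1` is my own choice):

```python
import numpy as np

def prox_l1(v, lam):
    """Proximal operator of lam * ||x||_1 (soft-thresholding).

    Solves argmin_x 0.5 * ||x - v||^2 + lam * ||x||_1, the core
    subproblem in proximal algorithms for L1-regularized learning.
    Entries of v with magnitude below lam are set exactly to zero,
    which is what produces sparse solutions.
    """
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

# Small components are zeroed out, large ones are shrunk toward zero.
v = np.array([3.0, -0.5, 1.5, -2.0])
print(prox_l1(v, 1.0))
```

Each proximal-gradient step applies this operator to a gradient update, which is why an efficient (and, in the distributed setting, well-partitioned) evaluation of the surrounding gradient computation drives the overall convergence rate.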



