1 code implementation • 16 Jul 2023 • Xingrong Dong, Zhaoxian Wu, Qing Ling, Zhi Tian
We prove that, even with a class of state-of-the-art robust aggregation rules, distributed online gradient descent in an adversarial environment with Byzantine participants can only achieve a linear adversarial regret bound, and that this bound is tight.
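A minimal sketch of the setting the excerpt describes: distributed online gradient descent where a server aggregates worker gradients with a robust rule. The coordinate-wise median aggregator, the random loss stream, and all constants are illustrative assumptions, not the paper's exact construction.

```python
# Distributed online gradient descent with robust aggregation (sketch).
import numpy as np

rng = np.random.default_rng(0)
d, n_workers, n_byzantine, T, lr = 5, 10, 3, 100, 0.1
x = np.zeros(d)

for t in range(T):
    # Each honest worker observes its own online loss; a random linear
    # loss g_i^T x stands in for the adversarial environment here.
    honest = [rng.normal(size=d) for _ in range(n_workers - n_byzantine)]
    # Byzantine workers send arbitrary messages; sign-flipped, scaled
    # copies of honest gradients are one simple attack.
    attack = [-10.0 * g for g in honest[:n_byzantine]]
    grads = np.stack(honest + attack)
    # Robust aggregation: coordinate-wise median instead of the mean.
    agg = np.median(grads, axis=0)
    x -= lr * agg
```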
2 code implementations • 17 Sep 2020 • Jie Peng, Zhaoxian Wu, Qing Ling, Tianyi Chen
We prove that the proposed method converges to a neighborhood of the optimal solution at a linear rate, and that the size of the learning error is determined by the number of Byzantine workers.
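A minimal numerical sketch of what "linear convergence to a neighborhood" means: on a strongly convex quadratic, a gradient step corrupted by a small residual perturbation (modeling what leaks past the robust aggregator) drives the error down geometrically until it plateaus at a floor set by the perturbation size. The quadratic objective and all constants are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)
x_star = np.ones(3)                   # optimum of f(x) = 0.5 * ||x - x_star||^2
x, lr, delta = np.zeros(3), 0.5, 0.05  # delta: residual Byzantine bias

for t in range(50):
    grad = x - x_star                  # exact gradient of the quadratic
    bias = delta * rng.normal(size=3)  # leakage past the robust aggregator
    x -= lr * (grad + bias)
    if t % 10 == 0:
        # Error shrinks geometrically, then stalls at an O(delta) floor.
        print(t, np.linalg.norm(x - x_star))
```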
no code implementations • 29 Dec 2019 • Zhaoxian Wu, Qing Ling, Tianyi Chen, Georgios B. Giannakis
This motivates us to reduce the variance of stochastic gradients as a means of robustifying SGD in the presence of Byzantine attacks.
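A minimal sketch of the ingredient this excerpt motivates: SAGA-style variance reduction at a single worker, so that honest workers' messages cluster tightly and a robust aggregator can screen out Byzantine ones. The least-squares loss, step size, and other constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 100, 5
A, b = rng.normal(size=(n, d)), rng.normal(size=n)
x = np.zeros(d)
table = np.zeros((n, d))      # last gradient stored for each sample
avg = table.mean(axis=0)      # running average of the table

for t in range(200):
    i = rng.integers(n)
    g_i = (A[i] @ x - b[i]) * A[i]   # fresh stochastic gradient at sample i
    v = g_i - table[i] + avg          # SAGA variance-reduced gradient
    avg += (g_i - table[i]) / n       # keep the table average current
    table[i] = g_i
    x -= 0.01 * v
```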
1 code implementation • 9 Sep 2019 • Weiyu Li, Tianyi Chen, Liping Li, Zhaoxian Wu, Qing Ling
Specifically, in CSGD, a worker transmits its latest mini-batch stochastic gradient to the server if and only if the gradient is sufficiently informative.
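A minimal sketch of that censoring rule: a worker uploads its fresh stochastic gradient only when it differs enough from the last one the server received from it; otherwise the server reuses the stale copy and the uplink is saved. The decaying threshold schedule is an illustrative assumption, not the paper's exact criterion.

```python
import numpy as np

rng = np.random.default_rng(3)
d, tau = 5, 0.5
last_sent = np.zeros(d)   # server's current copy of this worker's gradient

def maybe_transmit(grad, t):
    global last_sent
    # "Sufficiently informative": change exceeds a decaying threshold.
    if np.linalg.norm(grad - last_sent) >= tau / (t + 1):
        last_sent = grad.copy()
        return grad, True       # transmitted: uplink used
    return last_sent, False     # censored: server reuses stale gradient

for t in range(10):
    g = rng.normal(size=d)
    sent, used = maybe_transmit(g, t)
    print(t, used)
```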