Search Results for author: Zhaoxian Wu

Found 4 papers, 3 papers with code

Byzantine-Robust Distributed Online Learning: Taming Adversarial Participants in An Adversarial Environment

1 code implementation • 16 Jul 2023 • Xingrong Dong, Zhaoxian Wu, Qing Ling, Zhi Tian

We prove that, even with a class of state-of-the-art robust aggregation rules, distributed online gradient descent in an adversarial environment with Byzantine participants can only achieve a linear adversarial regret bound, and this bound is tight.

Decision Making
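
For concreteness, here is a minimal sketch of one round in this setting, using the coordinate-wise median as an illustrative member of the class of robust aggregation rules. The function names, learning rate, and median choice are assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

def coordinate_wise_median(messages):
    # One illustrative robust aggregation rule: take the median of the
    # received vectors in each coordinate.
    return np.median(np.stack(messages, axis=0), axis=0)

def robust_online_gd_step(x, honest_grads, byzantine_msgs, lr=0.1):
    # One round of distributed online gradient descent: the server cannot
    # distinguish honest gradients from arbitrary Byzantine messages, so it
    # aggregates all received messages robustly before taking a descent step.
    messages = honest_grads + byzantine_msgs
    return x - lr * coordinate_wise_median(messages)
```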

Byzantine-Robust Variance-Reduced Federated Learning over Distributed Non-i.i.d. Data

2 code implementations • 17 Sep 2020 • Jie Peng, Zhaoxian Wu, Qing Ling, Tianyi Chen

We prove that the proposed method reaches a neighborhood of the optimal solution at a linear convergence rate and the learning error is determined by the number of Byzantine workers.

Federated Learning
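
A minimal sketch of the two ingredients the snippet points at: a SAGA-style variance-reduced gradient computed at each worker, and a geometric-median aggregator at the server (via Weiszfeld iterations). The function names, gradient-table layout, and aggregator choice are illustrative assumptions rather than the authors' exact pseudocode.

```python
import numpy as np

def saga_direction(grad_table, i, g_new):
    # SAGA-style variance-reduced direction at one worker.
    # grad_table holds the most recently stored gradient for each local sample;
    # correcting the fresh gradient with the table average reduces its variance.
    correction = grad_table.mean(axis=0)
    direction = g_new - grad_table[i] + correction
    grad_table[i] = g_new          # refresh the stored gradient for sample i
    return direction

def geometric_median(vectors, iters=50, eps=1e-8):
    # Weiszfeld iteration for the geometric median, a standard robust aggregator
    # the server can apply to the workers' (possibly Byzantine) messages.
    v = np.stack(vectors)
    z = v.mean(axis=0)
    for _ in range(iters):
        d = np.linalg.norm(v - z, axis=1) + eps
        z = (v / d[:, None]).sum(axis=0) / (1.0 / d).sum()
    return z
```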

Federated Variance-Reduced Stochastic Gradient Descent with Robustness to Byzantine Attacks

no code implementations • 29 Dec 2019 • Zhaoxian Wu, Qing Ling, Tianyi Chen, Georgios B. Giannakis

This motivates us to reduce the variance of stochastic gradients as a means of robustifying SGD in the presence of Byzantine attacks.
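
The intuition behind this motivation can be checked numerically: a robust aggregator such as the coordinate-wise median lands much closer to the true gradient when the honest workers' stochastic gradients have small variance. Below is a toy simulation of that effect; the dimensions, worker counts, and noise levels are chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
true_grad = np.ones(10)

def median_error(noise_std, n_honest=8, n_byz=2):
    # Honest workers send noisy gradients; Byzantine workers send arbitrary
    # vectors. The server aggregates with a coordinate-wise median.
    honest = true_grad + noise_std * rng.standard_normal((n_honest, true_grad.size))
    byzantine = 100.0 * rng.standard_normal((n_byz, true_grad.size))
    agg = np.median(np.vstack([honest, byzantine]), axis=0)
    return np.linalg.norm(agg - true_grad)

print("high gradient variance:", median_error(noise_std=5.0))
print("low gradient variance: ", median_error(noise_std=0.1))
```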

Communication-Censored Distributed Stochastic Gradient Descent

1 code implementation • 9 Sep 2019 • Weiyu Li, Tianyi Chen, Liping Li, Zhaoxian Wu, Qing Ling

Specifically, in CSGD, the latest mini-batch stochastic gradient at a worker will be transmitted to the server if and only if it is sufficiently informative.

Quantization, Stochastic Optimization
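
As a rough sketch of the censoring idea in the snippet: the worker compares its fresh mini-batch gradient against the last one it transmitted and uploads only when the change exceeds a threshold, so uninformative updates are censored and the server reuses its stale copy. The specific informativeness test and threshold below are assumptions for illustration, not necessarily the paper's exact condition.

```python
import numpy as np

def censored_upload(g_new, g_last_sent, threshold):
    # Illustrative censoring rule: transmit only if the fresh gradient differs
    # enough from the last transmitted one; otherwise skip the upload.
    if np.linalg.norm(g_new - g_last_sent) >= threshold:
        return g_new, True       # informative: communicate and refresh the record
    return g_last_sent, False    # censored: no communication this round
```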
