Search Results for author: Xiaofan He

Found 8 papers, 0 papers with code

TernaryVote: Differentially Private, Communication Efficient, and Byzantine Resilient Distributed Optimization on Heterogeneous Data

no code implementations • 16 Feb 2024 • Richeng Jin, Yujie Gu, Kai Yue, Xiaofan He, Zhaoyang Zhang, Huaiyu Dai

In this paper, we propose TernaryVote, which combines a ternary compressor and the majority vote mechanism to realize differential privacy, gradient compression, and Byzantine resilience simultaneously.

Distributed Optimization
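A minimal sketch of the compress-then-vote pattern this abstract describes, assuming a simple stochastic ternary quantizer; `ternary_compress` and its `scale` parameter are illustrative rather than the paper's exact operator, and the differential-privacy calibration and Byzantine analysis are omitted:

```python
import numpy as np

def ternary_compress(grad, scale, rng):
    """Stochastically quantize each coordinate to {-1, 0, +1}.

    Hypothetical compressor: coordinate g maps to sign(g) with
    probability min(|g| / scale, 1) and to 0 otherwise, so the
    output is an unbiased estimate of g / scale when |g| <= scale.
    """
    p = np.clip(np.abs(grad) / scale, 0.0, 1.0)
    return np.sign(grad) * (rng.random(grad.shape) < p)

def majority_vote(messages):
    """Coordinate-wise majority vote over workers' ternary messages
    (ties yield 0)."""
    return np.sign(np.sum(messages, axis=0))

rng = np.random.default_rng(0)
grads = rng.normal(size=(10, 5))  # 10 workers, 5-dim gradients
votes = np.stack([ternary_compress(g, scale=2.0, rng=rng) for g in grads])
update = majority_vote(votes)     # aggregated update in {-1, 0, +1}
```

The vote step is what buys Byzantine resilience in spirit: a minority of corrupted messages cannot flip a coordinate on which the honest workers agree.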

Graph Neural Network Based Node Deployment for Throughput Enhancement

no code implementations • 19 Aug 2022 • Yifei Yang, Dongmian Zou, Xiaofan He

Moreover, we show that an expressive GNN has the capacity to approximate both the value and the gradients of a multivariate permutation-invariant function, which provides theoretical support for the proposed method.
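The permutation-invariance claim can be made concrete with a Deep-Sets-style sum-pooling model, a hypothetical stand-in for the paper's GNN (the weights and dimensions below are arbitrary): summing per-node embeddings makes the output independent of node ordering.

```python
import numpy as np

def phi(X, W1):
    """Shared per-node encoder: one ReLU layer applied to each row."""
    return np.maximum(X @ W1, 0.0)

def f(X, W1, w2):
    """Deep-Sets-style readout rho(sum_i phi(x_i)); the sum over rows
    makes f invariant to any permutation of the nodes in X."""
    return np.maximum(phi(X, W1).sum(axis=0), 0.0) @ w2

rng = np.random.default_rng(1)
X = rng.normal(size=(6, 3))                # 6 nodes with 3-dim features
W1, w2 = rng.normal(size=(3, 8)), rng.normal(size=8)
perm = rng.permutation(6)
assert np.isclose(f(X, W1, w2), f(X[perm], W1, w2))  # invariance check
```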

Communication Efficient Federated Learning with Energy Awareness over Wireless Networks

no code implementations • 15 Apr 2020 • Richeng Jin, Xiaofan He, Huaiyu Dai

Moreover, most existing works assume that Channel State Information (CSI) is available at both the mobile devices and the parameter server, so that the mobile devices can adopt fixed transmission rates dictated by the channel capacity.

Federated Learning
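The "fixed transmission rates dictated by the channel capacity" baseline refers to the Shannon capacity R = B log2(1 + SNR); a small sketch of that computation (the function name and example numbers are illustrative):

```python
import math

def capacity_rate(bandwidth_hz, signal_power_w, noise_power_w):
    """Shannon capacity R = B * log2(1 + SNR): the fixed rate a device
    could safely adopt if it knew the channel state (perfect CSI)."""
    snr = signal_power_w / noise_power_w
    return bandwidth_hz * math.log2(1.0 + snr)

# 1 MHz of bandwidth at 10 dB SNR -> about 3.46 Mbit/s
rate = capacity_rate(1e6, signal_power_w=10.0, noise_power_w=1.0)
```

Here 10 dB SNR simply means the signal power is 10x the noise power, hence the powers used in the example call.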

Stochastic-Sign SGD for Federated Learning with Theoretical Guarantees

no code implementations • 25 Feb 2020 • Richeng Jin, Yufan Huang, Xiaofan He, Huaiyu Dai, Tianfu Wu

We present Stochastic-Sign SGD, which utilizes novel stochastic-sign-based gradient compressors to enable the aforementioned properties in a unified framework.

Federated Learning • Quantization
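One common form of a stochastic-sign compressor maps each bounded coordinate to ±1 with a magnitude-dependent probability, so the output is unbiased up to a known scale. The sketch below assumes that form; the bound `B`, the helper name, and the omission of the paper's aggregation scheme and guarantees are all simplifications:

```python
import numpy as np

def stochastic_sign(grad, B, rng):
    """Map each coordinate x (clipped to [-B, B]) to +1 with probability
    (B + x) / (2B) and to -1 otherwise, so E[output] = x / B: a one-bit
    compressor that is unbiased up to the scale 1 / B."""
    p_plus = (B + np.clip(grad, -B, B)) / (2.0 * B)
    return np.where(rng.random(grad.shape) < p_plus, 1.0, -1.0)

rng = np.random.default_rng(2)
g = rng.normal(scale=0.5, size=4)
compressed = stochastic_sign(g, B=2.0, rng=rng)  # entries in {-1.0, +1.0}
```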

Distributed Byzantine Tolerant Stochastic Gradient Descent in the Era of Big Data

no code implementations • 27 Feb 2019 • Richeng Jin, Xiaofan He, Huaiyu Dai

The recent advances in sensor technologies and smart devices enable the collaborative collection of vast volumes of data from multiple information sources.

BIG-bench Machine Learning

Decentralized Differentially Private Without-Replacement Stochastic Gradient Descent

no code implementations • 8 Sep 2018 • Richeng Jin, Xiaofan He, Huaiyu Dai

While machine learning has achieved remarkable results in a wide variety of domains, the training of models often requires large datasets that may need to be collected from different individuals.

BIG-bench Machine Learning
