Search Results for author: Richeng Jin

Found 15 papers, 3 papers with code

TernaryVote: Differentially Private, Communication Efficient, and Byzantine Resilient Distributed Optimization on Heterogeneous Data

no code implementations16 Feb 2024 Richeng Jin, Yujie Gu, Kai Yue, Xiaofan He, Zhaoyang Zhang, Huaiyu Dai

In this paper, we propose TernaryVote, which combines a ternary compressor and the majority vote mechanism to realize differential privacy, gradient compression, and Byzantine resilience simultaneously.

Distributed Optimization
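
The abstract above only names the mechanism; the following is a minimal sketch of how a stochastic ternary compressor and a coordinate-wise majority vote could fit together. It is an illustration, not the paper's exact construction — the scaling rule and probabilities here are assumptions:

```python
import numpy as np

def ternary_compress(grad, rng, scale=None):
    """Stochastically map each coordinate to {-1, 0, +1} so that
    scale * output is an unbiased estimate of the gradient."""
    if scale is None:
        scale = float(np.max(np.abs(grad))) + 1e-12
    keep = rng.random(grad.shape) < np.abs(grad) / scale  # P(nonzero) = |g|/scale
    return np.sign(grad) * keep

def majority_vote(ternary_msgs):
    """Server aggregation: coordinate-wise sign of the summed votes."""
    return np.sign(np.sum(ternary_msgs, axis=0))

rng = np.random.default_rng(0)
grad = np.array([0.8, -0.5, 0.1, -0.9])
msgs = np.stack([ternary_compress(grad, rng) for _ in range(25)])
update = majority_vote(msgs)  # direction applied by the server
```

Intuitively, the randomness of the ternary output supplies privacy-style noise while compressing each coordinate to at most two bits, and the vote caps any single Byzantine worker's influence at one flipped vote per coordinate.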

Hierarchical Federated Learning in Wireless Networks: Pruning Tackles Bandwidth Scarcity and System Heterogeneity

no code implementations3 Aug 2023 Md Ferdous Pervej, Richeng Jin, Huaiyu Dai

While a practical wireless network has many tiers where end users do not directly communicate with the central server, the users' devices have limited computation and battery power, and the serving base station (BS) has a fixed bandwidth.

Federated Learning

Distributed Learning over Networks with Graph-Attention-Based Personalization

1 code implementation22 May 2023 Zhuojun Tian, Zhaoyang Zhang, Zhaohui Yang, Richeng Jin, Huaiyu Dai

In conventional distributed learning over a network, multiple agents collaboratively build a common machine learning model.

Graph Attention

Breaking the Communication-Privacy-Accuracy Tradeoff with $f$-Differential Privacy

no code implementations NeurIPS 2023 Richeng Jin, Zhonggen Su, Caijun Zhong, Zhaoyang Zhang, Tony Quek, Huaiyu Dai

We consider a federated data analytics problem in which a server coordinates the collaborative data analysis of multiple users with privacy concerns and limited communication capability.

Data Compression · Federated Learning

Resource Constrained Vehicular Edge Federated Learning with Highly Mobile Connected Vehicles

no code implementations27 Oct 2022 Md Ferdous Pervej, Richeng Jin, Huaiyu Dai

This paper proposes a vehicular edge federated learning (VEFL) solution, where an edge server leverages highly mobile connected vehicles' (CVs') onboard central processing units (CPUs) and local datasets to train a global model.

Federated Learning

Mobile MIMO Channel Prediction with ODE-RNN: a Physics-Inspired Adaptive Approach

no code implementations8 Jul 2022 Zhuoran Xiao, Zhaoyang Zhang, Zirui Chen, Zhaohui Yang, Richeng Jin

By exploring the intrinsic correlation among a set of historical CSI instances randomly obtained in a given communication environment, channel prediction can significantly improve CSI accuracy and reduce signaling overhead.

Gradient Obfuscation Gives a False Sense of Security in Federated Learning

no code implementations8 Jun 2022 Kai Yue, Richeng Jin, Chau-Wai Wong, Dror Baron, Huaiyu Dai

Prior work has shown that the gradient sharing strategies in federated learning can be vulnerable to data reconstruction attacks.

Federated Learning · Image Classification +3

Neural Tangent Kernel Empowered Federated Learning

no code implementations7 Oct 2021 Kai Yue, Richeng Jin, Ryan Pilgrim, Chau-Wai Wong, Dror Baron, Huaiyu Dai

The paradigm addresses the challenge of statistical heterogeneity by transmitting update data that are more expressive than those of the conventional FL paradigms.

Federated Learning · Privacy Preserving

Federated Learning via Plurality Vote

1 code implementation6 Oct 2021 Kai Yue, Richeng Jin, Chau-Wai Wong, Huaiyu Dai

Federated learning allows collaborative workers to solve a machine learning problem while preserving data privacy.

Federated Learning · Quantization

Communication-Efficient Federated Learning via Predictive Coding

1 code implementation2 Aug 2021 Kai Yue, Richeng Jin, Chau-Wai Wong, Huaiyu Dai

In each communication round, we select the predictor and quantizer based on the rate-distortion cost, and further reduce the redundancy with entropy coding.

Data Compression · Federated Learning +1
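
As a rough illustration of the rate-distortion selection described above, the sketch below tries two hypothetical predictors (a zero predictor and the previous round's update), quantizes each residual, and keeps the cheapest. The bit-cost model is a stand-in for an entropy coder, not the paper's actual coder:

```python
import numpy as np

def code_length(q):
    """Proxy for entropy-coded rate: small quantized residuals cost fewer
    bits, roughly like a signed Exp-Golomb code (an assumption here)."""
    return float(np.mean(2.0 * np.log2(np.abs(q) + 1.0) + 1.0))

def encode_update(update, predictors, step=0.05, lam=10.0):
    """Try each predictor, quantize its residual, and keep the candidate
    with the smallest rate + lam * distortion cost."""
    best = None
    for name, pred in predictors.items():
        q = np.round((update - pred) / step)      # uniform scalar quantizer
        recon = pred + q * step
        cost = code_length(q) + lam * float(np.mean((update - recon) ** 2))
        if best is None or cost < best[0]:
            best = (cost, name, q, recon)
    return best[1], best[2], best[3]

rng = np.random.default_rng(0)
prev = np.full(64, 0.5)                           # last round's model update
update = prev + 0.01 * rng.standard_normal(64)    # this round: highly correlated
name, symbols, recon = encode_update(update, {"zero": np.zeros(64), "previous": prev})
```

Because consecutive rounds are correlated, the "previous" predictor leaves a near-zero residual that is cheap to code; note the server must track the same predictor state to reconstruct `recon = pred + q * step`.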

Communication Efficient Federated Learning with Energy Awareness over Wireless Networks

no code implementations15 Apr 2020 Richeng Jin, Xiaofan He, Huaiyu Dai

Moreover, most existing works assume that Channel State Information (CSI) is available at both the mobile devices and the parameter server, so that the mobile devices can adopt fixed transmission rates dictated by the channel capacity.

Federated Learning

Stochastic-Sign SGD for Federated Learning with Theoretical Guarantees

no code implementations25 Feb 2020 Richeng Jin, Yufan Huang, Xiaofan He, Huaiyu Dai, Tianfu Wu

We present Stochastic-Sign SGD, which utilizes novel stochastic-sign-based gradient compressors enabling the aforementioned properties in a unified framework.

Federated Learning · Quantization
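
One plausible form of such a compressor is a two-level stochastic quantizer with a clipping bound `B` — a sketch under that assumption; the paper's exact compressors may differ:

```python
import numpy as np

def stochastic_sign(grad, B, rng):
    """Send one bit per coordinate: +1 with probability (B + g) / (2B)
    after clipping g to [-B, B], so that B * bit is an unbiased
    estimate of the clipped gradient."""
    g = np.clip(grad, -B, B)
    return np.where(rng.random(grad.shape) < (B + g) / (2.0 * B), 1.0, -1.0)

rng = np.random.default_rng(0)
g = np.array([0.4])
bits = np.array([stochastic_sign(g, 1.0, rng)[0] for _ in range(100_000)])
print(bits.mean())   # close to g / B = 0.4
```

A majority vote over these one-bit messages at the server then recovers the sign of the average gradient with high probability as the number of workers grows.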

Distributed Byzantine Tolerant Stochastic Gradient Descent in the Era of Big Data

no code implementations27 Feb 2019 Richeng Jin, Xiaofan He, Huaiyu Dai

The recent advances in sensor technologies and smart devices enable the collaborative collection of a sheer volume of data from multiple information sources.

BIG-bench Machine Learning

Decentralized Differentially Private Without-Replacement Stochastic Gradient Descent

no code implementations8 Sep 2018 Richeng Jin, Xiaofan He, Huaiyu Dai

While machine learning has achieved remarkable results in a wide variety of domains, the training of models often requires large datasets that may need to be collected from different individuals.

BIG-bench Machine Learning
