no code implementations • 16 Feb 2024 • Richeng Jin, Yujie Gu, Kai Yue, Xiaofan He, Zhaoyang Zhang, Huaiyu Dai
In this paper, we propose TernaryVote, which combines a ternary compressor and the majority vote mechanism to realize differential privacy, gradient compression, and Byzantine resilience simultaneously.
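The two ingredients named here can be sketched in a few lines. The snippet below is a minimal illustrative sketch, not the paper's exact algorithm: `ternary_compress` stochastically maps each gradient coordinate to {-1, 0, +1} (an unbiased compressor up to a scale factor), and `majority_vote` aggregates workers coordinate-wise; the function names and the scale choice are assumptions for illustration.

```python
import numpy as np

def ternary_compress(grad, rng, scale=None):
    """Stochastically quantize each coordinate to {-1, 0, +1}.

    Each coordinate becomes sign(g) with probability |g|/scale and 0
    otherwise, so the output is an unbiased estimate of grad/scale.
    """
    if scale is None:
        scale = float(np.max(np.abs(grad))) or 1.0
    keep = rng.random(grad.shape) < np.abs(grad) / scale
    return np.sign(grad) * keep

def majority_vote(ternary_grads):
    """Coordinate-wise majority vote over the workers' ternary gradients."""
    return np.sign(np.sum(ternary_grads, axis=0))

rng = np.random.default_rng(0)
worker_grads = [rng.normal(size=8) for _ in range(5)]
votes = majority_vote([ternary_compress(g, rng) for g in worker_grads])
```

The vote discards magnitudes, which is what caps the influence of any single (possibly Byzantine) worker on the aggregate.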
no code implementations • 3 Aug 2023 • Md Ferdous Pervej, Richeng Jin, Huaiyu Dai
In a practical wireless network with many tiers, end users do not communicate directly with the central server, the users' devices have limited computation capability and battery power, and the serving base station (BS) has a fixed bandwidth.
1 code implementation • 22 May 2023 • Zhuojun Tian, Zhaoyang Zhang, Zhaohui Yang, Richeng Jin, Huaiyu Dai
In conventional distributed learning over a network, multiple agents collaboratively build a common machine learning model.
no code implementations • 19 Feb 2023 • Richeng Jin, Xiaofan He, Caijun Zhong, Zhaoyang Zhang, Tony Quek, Huaiyu Dai
Communication overhead has become one of the major bottlenecks in the distributed training of deep neural networks.
no code implementations • NeurIPS 2023 • Richeng Jin, Zhonggen Su, Caijun Zhong, Zhaoyang Zhang, Tony Quek, Huaiyu Dai
We consider a federated data analytics problem in which a server coordinates the collaborative data analysis of multiple users with privacy concerns and limited communication capability.
no code implementations • 27 Oct 2022 • Md Ferdous Pervej, Richeng Jin, Huaiyu Dai
This paper proposes a vehicular edge federated learning (VEFL) solution, where an edge server leverages highly mobile connected vehicles' (CVs') onboard central processing units (CPUs) and local datasets to train a global model.
no code implementations • 8 Jul 2022 • Zhuoran Xiao, Zhaoyang Zhang, Zirui Chen, Zhaohui Yang, Richeng Jin
By exploiting the intrinsic correlation among a set of historical CSI instances randomly obtained in a given communication environment, channel prediction can significantly improve CSI accuracy and reduce signaling overhead.
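One simple way to exploit correlation across historical CSI samples is a linear (autoregressive) one-step predictor. The sketch below is a toy illustration under assumed names (`fit_ar_predictor`, `predict_next`) and real-valued samples, not the paper's method, which targets a richer learned predictor:

```python
import numpy as np

def fit_ar_predictor(history, order=3):
    """Fit least-squares AR coefficients for one-step prediction.

    history: 1-D array of past real-valued CSI samples. Returns
    coefficients w such that h[t] is approximated by w @ h[t-order:t].
    """
    X = np.stack([history[t - order:t] for t in range(order, len(history))])
    y = history[order:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def predict_next(history, coef):
    """Predict the next sample from the most recent `order` samples."""
    return float(coef @ history[-len(coef):])

# Toy example: a sinusoidal fading pattern obeys an exact AR(2) recursion.
csi = np.cos(0.3 * np.arange(200))
w = fit_ar_predictor(csi, order=2)
pred = predict_next(csi, w)
```

Predicting the next CSI sample instead of feeding it back is what saves the signaling overhead mentioned above.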
no code implementations • 8 Jun 2022 • Kai Yue, Richeng Jin, Chau-Wai Wong, Dror Baron, Huaiyu Dai
Prior work has shown that the gradient sharing strategies in federated learning can be vulnerable to data reconstruction attacks.
no code implementations • 7 Oct 2021 • Kai Yue, Richeng Jin, Ryan Pilgrim, Chau-Wai Wong, Dror Baron, Huaiyu Dai
The paradigm addresses the challenge of statistical heterogeneity by transmitting update data that are more expressive than those of the conventional FL paradigms.
1 code implementation • 6 Oct 2021 • Kai Yue, Richeng Jin, Chau-Wai Wong, Huaiyu Dai
Federated learning allows collaborative workers to solve a machine learning problem while preserving data privacy.
1 code implementation • 2 Aug 2021 • Kai Yue, Richeng Jin, Chau-Wai Wong, Huaiyu Dai
In each communication round, we select the predictor and quantizer based on the rate-distortion cost, and further reduce the redundancy with entropy coding.
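The selection step described here can be sketched as a rate-distortion search: estimate the rate of each candidate quantizer by the empirical entropy of its symbols (a proxy for the entropy-coded length) and its distortion by the mean squared error, then pick the candidate minimizing R + λ·D. This is a minimal sketch under assumed names and a scalar uniform quantizer, not the paper's exact predictor/quantizer pool:

```python
import numpy as np

def entropy_bits(symbols):
    """Empirical entropy in bits/symbol, a proxy for entropy-coded rate."""
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def quantize(x, step):
    """Uniform scalar quantization with the given step size."""
    return np.round(x / step) * step

def select_quantizer(residual, steps, lam=1.0):
    """Pick the step size minimizing the rate-distortion cost R + lam * D."""
    return min(
        steps,
        key=lambda s: entropy_bits(np.round(residual / s))
        + lam * float(np.mean((residual - quantize(residual, s)) ** 2)),
    )

rng = np.random.default_rng(0)
residual = rng.normal(size=1000)          # e.g. update minus predictor output
best_step = select_quantizer(residual, steps=[0.1, 0.5, 1.0, 2.0])
```

Larger λ favors fidelity (small steps), smaller λ favors compression (large steps), mirroring the round-by-round cost trade-off in the text.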
no code implementations • 15 Apr 2020 • Richeng Jin, Xiaofan He, Huaiyu Dai
Moreover, most of the existing works assume that Channel State Information (CSI) is available at both the mobile devices and the parameter server, so the mobile devices can adopt fixed transmission rates dictated by the channel capacity.
no code implementations • 25 Feb 2020 • Richeng Jin, Yufan Huang, Xiaofan He, Huaiyu Dai, Tianfu Wu
We present Stochastic-Sign SGD, which utilizes novel stochastic-sign-based gradient compressors that enable the aforementioned properties in a unified framework.
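A basic member of this compressor family can be sketched as follows: each gradient coordinate is mapped to +1 with probability (1 + g/B)/2 and to -1 otherwise, so the sign is an unbiased estimate of g/B. This is an illustrative sketch with an assumed function name and clipping choice, not the paper's full construction:

```python
import numpy as np

def stochastic_sign(grad, B, rng):
    """Map each coordinate to +/-1, with +1 having probability (1 + g/B)/2.

    For |g| <= B the output is an unbiased estimator of g/B; increasing B
    injects more randomness, which is the lever this family of compressors
    uses for differential privacy.
    """
    g = np.clip(grad, -B, B)
    p_plus = (1.0 + g / B) / 2.0
    return np.where(rng.random(grad.shape) < p_plus, 1.0, -1.0)

rng = np.random.default_rng(0)
compressed = stochastic_sign(np.array([0.3, -0.8, 0.0]), B=1.0, rng=rng)
```

Because every worker transmits one bit per coordinate regardless of magnitude, the scheme compresses, randomizes, and bounds per-worker influence in one step.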
no code implementations • 27 Feb 2019 • Richeng Jin, Xiaofan He, Huaiyu Dai
Recent advances in sensor technologies and smart devices enable the collaborative collection of vast volumes of data from multiple information sources.
no code implementations • 8 Sep 2018 • Richeng Jin, Xiaofan He, Huaiyu Dai
While machine learning has achieved remarkable results in a wide variety of domains, the training of models often requires large datasets that may need to be collected from different individuals.