1 code implementation • 13 Oct 2023 • Mingjia Shi, Yuhao Zhou, Kai Wang, Huaizheng Zhang, Shudong Huang, Qing Ye, Jiancheng Lv
Personalized FL (PFL) addresses this by synthesizing personalized models from a global model via training on local data.
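The core idea of synthesizing a personalized model from a global one can be illustrated with a minimal sketch: each client copies the global parameters and fine-tunes them on its own data. This is only a generic illustration of the PFL setting (using a least-squares objective as a stand-in), not the method proposed in the paper.

```python
import numpy as np

def local_sgd(w, X, y, lr=0.1, steps=50):
    """Run a few gradient steps on local data (least-squares loss)."""
    w = w.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def personalize(global_w, client_data, lr=0.1, steps=50):
    """Synthesize one personalized model per client from the global model."""
    return [local_sgd(global_w, X, y, lr, steps) for X, y in client_data]
```

Each client ends up with its own parameter vector adapted to its local distribution, while the shared global model serves only as the common starting point.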
no code implementations • 10 Oct 2023 • Peng Di, Jianguo Li, Hang Yu, Wei Jiang, Wenting Cai, Yang Cao, Chaoyu Chen, Dajun Chen, Hongwei Chen, Liang Chen, Gang Fan, Jie Gong, Zi Gong, Wen Hu, Tingting Guo, Zhichao Lei, Ting Li, Zheng Li, Ming Liang, Cong Liao, Bingchang Liu, Jiachen Liu, Zhiwei Liu, Shaojun Lu, Min Shen, Guangpei Wang, Huan Wang, Zhi Wang, Zhaogui Xu, Jiawei Yang, Qing Ye, Gehao Zhang, Yu Zhang, Zelin Zhao, Xunjin Zheng, Hailian Zhou, Lifu Zhu, Xianying Zhu
It is specifically designed for code-related tasks with both English and Chinese prompts and supports over 40 programming languages.
no code implementations • 4 Sep 2023 • Yuhao Zhou, Mingjia Shi, Yuxin Tian, Yuanxi Li, Qing Ye, Jiancheng Lv
However, a significant challenge arises when coordinating FL with crowd intelligence, in which diverse client groups possess disparate objectives due to data heterogeneity or distinct tasks.
no code implementations • ICCV 2023 • Yuhao Zhou, Mingjia Shi, Yuanxi Li, Qing Ye, Yanan Sun, Jiancheng Lv
Reducing communication overhead in federated learning (FL) is challenging but crucial for large-scale distributed privacy-preserving machine learning.
no code implementations • 21 Feb 2023 • Chaojin Qing, Qing Ye, Wenhui Liu, Zilong Wang, Jiafan Wang, Jinliang Chen
Specifically, for the G2U CSI in NLoS, a CSI recovery network (CSI-RecNet) and superimposed interference cancellation are developed to recover the G2U CSI and U2G data.
no code implementations • 19 Nov 2022 • Mingjia Shi, Yuhao Zhou, Qing Ye, Jiancheng Lv
Federated learning (FL) is a distributed machine learning technique that utilizes a global server and collaborating clients to achieve privacy-preserving global model training without direct data sharing.
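The server/client division of labor described here can be sketched in a few lines: in each round, clients train locally on their own data and the server aggregates only the resulting parameters, weighted by client data size. This is a generic FedAvg-style sketch of the FL setting (with a least-squares objective as a stand-in), not the specific algorithm of this paper.

```python
import numpy as np

def fed_avg_round(global_w, clients, local_steps=10, lr=0.1):
    """One FedAvg-style round: clients train locally, the server averages
    their weights. Only parameters travel; raw data never leaves a client."""
    updates, sizes = [], []
    for X, y in clients:
        w = global_w.copy()
        for _ in range(local_steps):
            w -= lr * X.T @ (X @ w - y) / len(y)
        updates.append(w)
        sizes.append(len(y))
    # weighted average, so larger clients contribute proportionally more
    return np.average(updates, axis=0, weights=np.asarray(sizes, dtype=float))
```

Running several such rounds drives the global model toward a solution fitted jointly to all clients' data, even though the server never observes that data directly.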
Ranked #1 on Image Classification on Fashion-MNIST (Accuracy metric)
no code implementations • 6 Apr 2022 • Yuhao Zhou, Mingjia Shi, Yuxin Tian, Qing Ye, Jiancheng Lv
Federated learning (FL) is identified as a crucial enabler for large-scale distributed machine learning (ML) without the need for local raw dataset sharing, substantially reducing privacy concerns and alleviating the isolated data problem.
no code implementations • 13 Mar 2022 • Chaojin Qing, Qing Ye, Bin Cai, Wenhui Liu, Jiafan Wang
In frequency-division duplexing (FDD) massive multiple-input multiple-output (MIMO) systems, 1-bit compressed sensing (CS)-based superimposed channel state information (CSI) feedback offers many advantages, yet it still faces significant challenges, such as low accuracy of the downlink CSI recovery and large processing delays.
no code implementations • 20 Jan 2022 • Chaojin Qing, Qing Ye, Wenhui Liu, Jiafan Wang
Due to the discarding of the downlink channel state information (CSI) amplitude and the use of iterative reconstruction algorithms, 1-bit compressed sensing (CS)-based superimposed CSI feedback suffers from low recovery accuracy and large processing delay.
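The amplitude loss mentioned here follows directly from the 1-bit quantizer: only the sign of each compressed measurement is fed back, so any scaling of the channel vector produces identical feedback. The sketch below illustrates that measurement step only (with a hypothetical Gaussian sensing matrix), not the paper's recovery scheme.

```python
import numpy as np

def one_bit_measure(h, Phi):
    """1-bit CS measurement of a channel vector h: only the sign of each
    projection Phi @ h is kept, so the CSI amplitude is discarded."""
    return np.sign(Phi @ h)
```

Because `sign(Phi @ (c * h)) == sign(Phi @ h)` for any positive scale `c`, the receiver can at best recover the channel's direction from such feedback; the amplitude must be restored by other means, which is exactly the recovery challenge the paper targets.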
no code implementations • 28 Jul 2021 • Chaojin Qing, Shuhai Tang, Chuangui Rao, Qing Ye, Jiafan Wang, Chuan Huang
Due to the nonlinear distortion in orthogonal frequency-division multiplexing (OFDM) systems, the timing synchronization (TS) performance is inevitably degraded at the receiver.
no code implementations • 3 May 2021 • Jindi Lv, Qing Ye, Yanan Sun, Juan Zhao, Jiancheng Lv
In this paper, we propose a novel approach, Heart-Darts, to efficiently classify ECG signals by automatically designing the CNN model with differentiable architecture search (i.e., DARTS, a cell-based neural architecture search method).
no code implementations • 3 May 2021 • Qiutong Guo, Shun Lei, Qing Ye, Zhiyang Fang
Bitcoin, one of the major cryptocurrencies, presents great opportunities and challenges with its tremendous potential returns accompanying high risks.
1 code implementation • 21 Apr 2021 • Yuhao Zhou, Xihua Li, Yunbo Cao, Xuemin Zhao, Qing Ye, Jiancheng Lv
Personalized DKT is achieved with a pivot module that reconstructs the decoder for individual students and leveled learning that specializes the encoders for student groups.
no code implementations • 22 Dec 2020 • Qing Ye, Weijun Xie
We prove that in the proposed framework, when the classification outcomes are known, the resulting problem, termed "unbiased subdata selection," is strongly polynomial-solvable and can be used to enhance the classification fairness by selecting more representative data points.
no code implementations • 6 Sep 2020 • Qing Ye, Yuxuan Han, Yanan Sun, Jiancheng Lv
Synchronous methods are widely used in the distributed training of deep neural networks (DNNs).
1 code implementation • 6 Sep 2020 • Yuhao Zhou, Qing Ye, Hailun Zhang, Jiancheng Lv
While distributed training significantly speeds up the training process of a deep neural network (DNN), cluster utilization remains relatively low due to time-consuming data synchronization among workers.
1 code implementation • 23 Jul 2020 • Qing Ye, Yuhao Zhou, Mingjia Shi, Yanan Sun, Jiancheng Lv
Specifically, the performance of each worker is first evaluated based on measurements from the previous epoch, and then the batch size and dataset partition are dynamically adjusted in light of each worker's current performance, thereby improving the utilization of the cluster.
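The adjustment step described above can be sketched as a simple proportional reallocation: give each worker a share of the global batch inversely proportional to its measured epoch time, so faster workers receive more data. This is a minimal illustration of the load-balancing idea, with hypothetical function and parameter names, not the paper's exact scheme.

```python
def rebalance(batch_total, epoch_times, min_batch=1):
    """Assign each worker a share of the global batch proportional to its
    measured speed (1 / epoch time), so faster workers get more data."""
    speeds = [1.0 / t for t in epoch_times]
    total = sum(speeds)
    sizes = [max(min_batch, round(batch_total * s / total)) for s in speeds]
    # absorb rounding drift so the shares still sum to the global batch
    sizes[0] += batch_total - sum(sizes)
    return sizes
```

For example, with two workers whose last epochs took 1.0s and 2.0s, a global batch of 90 would be split 60/30, evening out the time each worker needs per step.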
no code implementations • 18 Jun 2019 • Qian Yue, Xinzhe Luo, Qing Ye, Lingchao Xu, Xiahai Zhuang
The proposed network, referred to as SRSCN, comprises a shape reconstruction neural network (SRNN) and a spatial constraint network (SCN).
no code implementations • 3 Sep 2018 • Wenbin Li, Sajad Saeedi, John McCormac, Ronald Clark, Dimos Tzoumanikas, Qing Ye, Yuzhong Huang, Rui Tang, Stefan Leutenegger
Datasets have gained an enormous amount of popularity in the computer vision community, from training and evaluation of Deep Learning-based methods to benchmarking Simultaneous Localization and Mapping (SLAM).