no code implementations • 6 Apr 2022 • Yuhao Zhou, Minjia Shi, Yuxin Tian, Qing Ye, Jiancheng Lv
Federated learning (FL) is identified as a crucial enabler for large-scale distributed machine learning (ML) without the need for local raw dataset sharing, substantially reducing privacy concerns and alleviating the isolated data problem.
no code implementations • 13 Mar 2022 • Chaojin Qing, Qing Ye, Bin Cai, Wenhui Liu, Jiafan Wang
In frequency-division duplexing (FDD) massive multiple-input multiple-output (MIMO) systems, 1-bit compressed sensing (CS)-based superimposed channel state information (CSI) feedback has shown many advantages, while still facing many challenges, such as low accuracy of downlink CSI recovery and large processing delays.
no code implementations • 20 Jan 2022 • Chaojin Qing, Qing Ye, Wenhui Liu, Jiafan Wang
Due to the discarding of the downlink channel state information (CSI) amplitude and the use of iterative reconstruction algorithms, 1-bit compressed sensing (CS)-based superimposed CSI feedback suffers from low recovery accuracy and large processing delay.
no code implementations • 28 Jul 2021 • Chaojin Qing, Shuhai Tang, Chuangui Rao, Qing Ye, Jiafan Wang, Chuan Huang
Due to the nonlinear distortion in orthogonal frequency-division multiplexing (OFDM) systems, the timing synchronization (TS) performance is inevitably degraded at the receiver.
no code implementations • 3 May 2021 • Qiutong Guo, Shun Lei, Qing Ye, Zhiyang Fang
Bitcoin, one of the major cryptocurrencies, presents great opportunities and challenges with its tremendous potential returns accompanying high risks.
no code implementations • 3 May 2021 • Jindi Lv, Qing Ye, Yanan Sun, Juan Zhao, Jiancheng Lv
In this paper, we propose a novel approach, Heart-Darts, to efficiently classify ECG signals by automatically designing the CNN model with differentiable architecture search (i.e., DARTS, a cell-based neural architecture search method).
1 code implementation • 21 Apr 2021 • Yuhao Zhou, Xihua Li, Yunbo Cao, Xuemin Zhao, Qing Ye, Jiancheng Lv
By reconstructing the decoder for individual students with a pivot module and specializing encoders for groups of students via leveled learning, personalized DKT was achieved.
no code implementations • 22 Dec 2020 • Qing Ye, Weijun Xie
We prove that in the proposed framework, when the classification outcomes are known, the resulting problem, termed "unbiased subdata selection," is strongly polynomial-solvable and can be used to enhance the classification fairness by selecting more representative data points.
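The snippet above describes selecting a more representative subset of data points when the classification outcomes are known. As a rough illustration of the idea (not the paper's strongly polynomial algorithm, and with a hypothetical `balanced_subdata` helper), one can picture drawing points evenly across outcome groups:

```python
from collections import defaultdict

def balanced_subdata(points, groups, k):
    """Illustrative greedy stand-in: pick k points while keeping
    group representation as even as possible via round-robin.
    This is NOT the paper's exact unbiased-subdata-selection method."""
    by_group = defaultdict(list)
    for p, g in zip(points, groups):
        by_group[g].append(p)
    pools = list(by_group.values())
    selected = []
    i = 0
    # cycle over the groups, taking one point at a time
    while len(selected) < k and any(pools):
        pool = pools[i % len(pools)]
        if pool:
            selected.append(pool.pop(0))
        i += 1
    return selected
```

For example, with four points from group "a" and two from group "b", asking for four points yields two from each group rather than a selection dominated by the majority group.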
no code implementations • 6 Sep 2020 • Qing Ye, Yuxuan Han, Yanan Sun, Jiancheng Lv
Synchronous methods are widely used in the distributed training of deep neural networks (DNNs).
1 code implementation • 6 Sep 2020 • Yuhao Zhou, Qing Ye, Hailun Zhang, Jiancheng Lv
While distributed training significantly speeds up the training process of the deep neural network (DNN), the utilization of the cluster is relatively low due to the time-consuming data synchronizing between workers.
1 code implementation • 23 Jul 2020 • Qing Ye, Yuhao Zhou, Mingjia Shi, Yanan Sun, Jiancheng Lv
Specifically, the performance of each worker is first evaluated based on the facts collected in the previous epoch, and then the batch size and dataset partition are dynamically adjusted in consideration of the current performance of the worker, thereby improving the utilization of the cluster.
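The adjustment described above (per-worker batch sizes and dataset shares scaled to measured performance) can be sketched as follows; `rebalance` is a hypothetical helper assuming throughput in samples per second as the performance signal, not the paper's exact algorithm:

```python
def rebalance(throughputs, global_batch, total_samples):
    """Assign each worker a batch size and dataset share
    proportional to its measured throughput (samples/s).
    Illustrative sketch only; the real method may use a
    different performance metric and rounding scheme."""
    total = sum(throughputs)
    batch_sizes = [max(1, round(global_batch * t / total)) for t in throughputs]
    partitions = [round(total_samples * t / total) for t in throughputs]
    return batch_sizes, partitions
```

With three workers measured at 120, 80, and 40 samples/s and a global batch of 240, the fastest worker receives half of the batch and half of the data partition, so all workers finish an epoch in roughly the same wall-clock time.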
no code implementations • 18 Jun 2019 • Qian Yue, Xinzhe Luo, Qing Ye, Lingchao Xu, Xiahai Zhuang
The proposed network, referred to as SRSCN, comprises a shape reconstruction neural network (SRNN) and a spatial constraint network (SCN).
no code implementations • 3 Sep 2018 • Wenbin Li, Sajad Saeedi, John McCormac, Ronald Clark, Dimos Tzoumanikas, Qing Ye, Yuzhong Huang, Rui Tang, Stefan Leutenegger
Datasets have gained an enormous amount of popularity in the computer vision community, from training and evaluation of Deep Learning-based methods to benchmarking Simultaneous Localization and Mapping (SLAM).