1 code implementation • 11 Feb 2025 • Sicheng Wang, Jianhua Shan, Jianwei Zhang, Haozhang Gao, Hailiang Han, Yipeng Chen, Kang Wei, Chengkun Zhang, Kairos Wong, Jie Zhao, Lei Zhao, Bin Fang
To tackle this issue, we present RoboBERT, a novel end-to-end robotic manipulation model integrated with a unique training strategy.
no code implementations • 22 Sep 2024 • YiFan Li, Kang Wei, Linqiong Jia, Jun Zou, Feng Shu, Yaoliang Song, Jiangzhou Wang
Second, to improve the DOA estimation accuracy of switch-based hybrid arrays, a deep neural network (DNN)-based method called ASN-DNN is proposed.
no code implementations • 26 Aug 2024 • Shunfeng Chu, Jun Li, Jianxin Wang, Yiyang Ni, Kang Wei, Wen Chen, Shi Jin
We utilize the Lyapunov method to decouple the formulated problem into a series of one-slot optimization problems and develop a two-stage optimization algorithm to achieve the optimal transmission power control and IoT device scheduling strategies.
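For intuition, here is a minimal drift-plus-penalty sketch of this kind of decoupling (a generic form, not the paper's exact formulation; the virtual-queue variables, the cost term, and the tradeoff weight V are placeholder assumptions):

```python
# Generic Lyapunov drift-plus-penalty decoupling: a long-term constrained
# problem is reduced to independent one-slot problems via virtual queues.

def virtual_queue_update(q, arrival, service):
    """Q(t+1) = max(Q(t) + a(t) - b(t), 0) for one constraint's virtual queue."""
    return max(q + arrival - service, 0.0)

def one_slot_objective(cost, q, arrival, service, V=10.0):
    """Per-slot surrogate: minimize V * cost + Q(t) * (a(t) - b(t)).
    V trades off the time-average cost against queue (constraint) stability."""
    return V * cost + q * (arrival - service)
```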
no code implementations • 14 Jun 2024 • YiFan Li, Feng Shu, Kang Wei, Jiatong Bai, Cunhua Pan, Yongpeng Wu, Yaoliang Song, Jiangzhou Wang
To obtain the exact NF DOA estimation results, we propose a grouped PC-HAD structure, which is capable of dividing the NF DOA estimation problem into multiple FF DOA estimation problems via partitioning the large-scale PC-HAD array into small-scale groups.
no code implementations • 28 May 2024 • Xiumei Deng, Jun Li, Kang Wei, Long Shi, Zehui Xiong, Ming Ding, Wen Chen, Shi Jin, H. Vincent Poor
Motivated by this issue, we propose a novel sparse FedAdam algorithm called FedAdam-SSM, wherein distributed devices sparsify the updates of local model parameters and moment estimates, and subsequently upload the sparse representations to the centralized server.
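A hedged sketch of that sparsification step; the magnitude-based top-k criterion and the single mask shared by the update and both Adam moments are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def sparsify_update_and_moments(delta, m, v, k):
    """Mask a 1-D local model update and its Adam moment estimates, keeping
    the top-k entries of the update by magnitude; upload values + indices."""
    idx = np.argsort(np.abs(delta))[-k:]          # top-k coordinates
    mask = np.zeros_like(delta, dtype=bool)
    mask[idx] = True
    return delta * mask, m * mask, v * mask, idx
```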
no code implementations • 28 May 2024 • Xiumei Deng, Jun Li, Long Shi, Kang Wei, Ming Ding, Yumeng Shao, Wen Chen, Shi Jin
To promote the efficiency and trustworthiness of DT for wireless IIoT networks, we propose a blockchain-enabled DT (B-DT) framework that employs a deep neural network (DNN) partitioning technique and a reputation-based consensus mechanism, wherein the DTs maintained at the gateway side execute DNN inference tasks using the data collected from their associated IIoT devices.
1 code implementation • 21 May 2024 • Yuwen Qian, Shuchi Wu, Kang Wei, Ming Ding, Di Xiao, Tao Xiang, Chuan Ma, Song Guo
To tackle this issue, we dive into the fundamental mechanism of backdoor attacks on FSSL, proposing the Embedding Inspector (EmInspector) that detects malicious clients by inspecting the embedding space of local models.
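One way to picture embedding-space inspection is sketched below (illustrative only: the flagging rule, mean cosine similarity to a consensus embedding under a fixed threshold, is an assumption rather than EmInspector's actual detector):

```python
import numpy as np

def flag_suspicious_clients(client_embeddings, threshold=0.5):
    """client_embeddings: dict client_id -> (n_samples, dim) array of
    embeddings of a shared inspection set under each local model."""
    reps = {c: (e / np.linalg.norm(e, axis=1, keepdims=True)).mean(axis=0)
            for c, e in client_embeddings.items()}        # per-client mean
    consensus = np.mean(list(reps.values()), axis=0)
    consensus /= np.linalg.norm(consensus)
    # Flag clients whose mean embedding deviates from the consensus direction.
    return [c for c, r in reps.items()
            if float(r @ consensus) / np.linalg.norm(r) < threshold]
```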
no code implementations • 11 May 2024 • Yumeng Shao, Jun Li, Long Shi, Kang Wei, Ming Ding, Qianmu Li, Zengxiang Li, Wen Chen, Shi Jin
To evaluate the learning performance of T-SFL, we provide an upper bound on the global loss function.
no code implementations • 9 May 2024 • Jun Li, Weiwei Zhang, Kang Wei, Guangji Chen, Long Shi, Wen Chen
In practical wireless systems, the communication links among nodes are usually unreliable due to wireless fading and receiver noise, consequently resulting in performance degradation of GNNs.
1 code implementation • 25 Apr 2024 • Zhijie Rao, Jingcai Guo, Xiaocheng Lu, Jingming Liang, Jie Zhang, Haozhao Wang, Kang Wei, Xiaofeng Cao
Zero-shot learning has consistently yielded remarkable progress via modeling nuanced one-to-one visual-attribute correlation.
no code implementations • 27 Dec 2023 • Xin Yuan, Ning Li, Kang Wei, Wenchao Xu, Quan Chen, Hao Chen, Song Guo
Model segmentation without user mobility has been investigated in depth by previous works.
1 code implementation • 1 Dec 2023 • Shuchi Wu, Chuan Ma, Kang Wei, Xiaogang Xu, Ming Ding, Yuwen Qian, Tao Xiang
This paper introduces RDA, a pioneering approach designed to address two primary deficiencies prevalent in previous endeavors aiming at stealing pre-trained encoders: (1) suboptimal performance attributed to biased optimization objectives, and (2) elevated query costs stemming from the end-to-end paradigm that necessitates querying the target encoder every epoch.
1 code implementation • 23 Nov 2023 • Zhijie Rao, Jingcai Guo, Xiaocheng Lu, Qihua Zhou, Jie Zhang, Kang Wei, Chenxin Li, Song Guo
In this paper, we propose a simple yet effective Attribute-Aware Representation Rectification framework for GZSL, dubbed $\mathbf{(AR)^{2}}$, to adaptively rectify the feature extractor to learn novel features while keeping original valuable features.
no code implementations • 13 Oct 2023 • Jixuan Cui, Jun Li, Zhen Mei, Kang Wei, Sha Wei, Ming Ding, Wen Chen, Song Guo
However, the domain discrepancy and data scarcity problems among clients deteriorate the performance of the global FL model.
no code implementations • 4 Aug 2023 • Xuefeng Han, Jun Li, Wen Chen, Zhen Mei, Kang Wei, Ming Ding, H. Vincent Poor
With the rapid proliferation of smart mobile devices, federated learning (FL) has been widely considered for application in wireless networks for distributed model training.
no code implementations • 2 May 2023 • Yifan Shi, Kang Wei, Li Shen, Jun Li, Xueqian Wang, Bo Yuan, Song Guo
However, it suffers from issues in terms of communication overhead, the limited resources of mobile terminals (MTs), and privacy.
1 code implementation • 1 May 2023 • Yifan Shi, Kang Wei, Li Shen, Yingqi Liu, Xueqian Wang, Bo Yuan, DaCheng Tao
To defend against inference attacks and mitigate sensitive information leakage in Federated Learning (FL), client-level Differentially Private FL (DPFL) is the de facto standard for privacy protection, achieved by clipping local updates and adding random noise.
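The clipping-plus-noise step itself is standard and easy to state; a minimal sketch follows (clip_norm and noise_multiplier are placeholder hyperparameters, not values from the paper):

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Clip a client's model update to L2 norm C, then add Gaussian noise
    scaled to C, as in client-level DPFL."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))   # clip to C
    noise = rng.normal(0.0, noise_multiplier * clip_norm, update.shape)
    return clipped + noise
```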
no code implementations • 9 Apr 2023 • Shunfeng Chu, Jun Li, Kang Wei, Yuwen Qian, Kunlun Wang, Feng Shu, Wen Chen
In this paper, we design two-level incentive mechanisms for HFL with a two-tiered computing structure to encourage entities in each tier to participate in HFL training.
no code implementations • 9 Apr 2023 • Kang Wei, Jun Li, Chuan Ma, Ming Ding, Feng Shu, Haitao Zhao, Wen Chen, Hongbo Zhu
Specifically, we first design a random sparsification algorithm to retain a fraction of the gradient elements in each client's local training, thereby mitigating the performance degradation induced by DP and reducing the number of parameters transmitted over wireless channels.
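A hedged sketch of such a random sparsification step (the retention probability p and the unbiased 1/p rescaling of survivors are illustrative assumptions):

```python
import numpy as np

def random_sparsify(grad, p=0.1, rng=None):
    """Keep each gradient element independently with probability p, zeroing
    the rest; rescale survivors by 1/p to keep the estimate unbiased."""
    rng = rng or np.random.default_rng()
    mask = rng.random(grad.shape) < p
    return np.where(mask, grad / p, 0.0)
```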
1 code implementation • CVPR 2023 • Yifan Shi, Yingqi Liu, Kang Wei, Li Shen, Xueqian Wang, DaCheng Tao
Specifically, DP-FedSAM integrates the Sharpness-Aware Minimization (SAM) optimizer to generate locally flat models with better stability and robustness to weight perturbation, which yields local updates of small norm and robustness to DP noise, thereby improving performance.
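For reference, one generic SAM step is sketched below (this is the well-known two-pass update, not DP-FedSAM's full client procedure; grad_fn, rho, and lr are placeholders):

```python
import numpy as np

def sam_step(w, grad_fn, rho=0.05, lr=0.1):
    """Sharpness-Aware Minimization: ascend to the worst-case point within a
    rho-ball around w, then descend using the gradient taken there."""
    g = grad_fn(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)   # worst-case perturbation
    g_sharp = grad_fn(w + eps)                    # gradient at perturbed weights
    return w - lr * g_sharp
```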
no code implementations • 7 Mar 2023 • Xin Yuan, Wei Ni, Ming Ding, Kang Wei, Jun Li, H. Vincent Poor
The contribution of the new DP mechanism to the convergence and accuracy of privacy-preserving FL is corroborated by comparison with the state-of-the-art Gaussian noise mechanism, which uses a persistent noise amplitude.
no code implementations • 8 Feb 2023 • Yifan Shi, Li Shen, Kang Wei, Yan Sun, Bo Yuan, Xueqian Wang, DaCheng Tao
To mitigate the privacy leakage and communication burden of Federated Learning (FL), decentralized FL (DFL) discards the central server, and each client communicates only with its neighbors over a decentralized communication network.
no code implementations • 9 Feb 2022 • Kang Wei, Jun Li, Chuan Ma, Ming Ding, Sha Wei, Fan Wu, Guihai Chen, Thilina Ranbaduge
As a special architecture in FL, vertical FL (VFL) is capable of constructing a hyper ML model by embracing sub-models from different clients.
no code implementations • 20 Jun 2021 • Kang Wei, Jun Li, Chuan Ma, Ming Ding, Cailian Chen, Shi Jin, Zhu Han, H. Vincent Poor
Then, we convert the MAMAB to a max-min bipartite matching problem at each communication round, by estimating rewards with the upper confidence bound (UCB) approach.
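A hedged sketch of the UCB-then-match idea (illustrative only: the exploration bonus is the standard UCB1 term, and scipy's linear_sum_assignment maximizes the sum of rewards, used here as a stand-in for the paper's max-min matching objective):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def ucb_matching(mean_reward, pull_counts, t):
    """mean_reward, pull_counts: (n_clients, n_channels) arrays; t: round index.
    Returns an assignment computed on optimistic (UCB) reward estimates."""
    bonus = np.sqrt(2.0 * np.log(max(t, 2)) / np.maximum(pull_counts, 1))
    ucb = mean_reward + bonus
    rows, cols = linear_sum_assignment(-ucb)      # maximize total UCB reward
    return list(zip(rows, cols))
```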
1 code implementation • 10 May 2021 • Chuan Ma, Jun Li, Ming Ding, Kang Wei, Wen Chen, H. Vincent Poor
Owing to its low communication costs and privacy-promoting capabilities, Federated Learning (FL) has become a promising tool for training effective machine learning models among distributed clients.
no code implementations • 28 Jan 2021 • Kang Wei, Jun Li, Ming Ding, Chuan Ma, Yo-Seb Jeon, H. Vincent Poor
An attacker in FL may control a number of participant clients, and purposely craft the uploaded model parameters to manipulate system outputs, namely, model poisoning (MP).
no code implementations • 18 Jan 2021 • Jun Li, Yumeng Shao, Kang Wei, Ming Ding, Chuan Ma, Long Shi, Zhu Han, H. Vincent Poor
Focusing on this problem, we explore the impact of lazy clients on the learning performance of BLADE-FL, and characterize the relationship among the optimal K, the learning parameters, and the proportion of lazy clients.
no code implementations • 2 Dec 2020 • Jun Li, Yumeng Shao, Ming Ding, Chuan Ma, Kang Wei, Zhu Han, H. Vincent Poor
The proposed BLADE-FL performs well in terms of privacy preservation, tamper resistance, and effective cooperation of learning.
1 code implementation • 4 Jul 2020 • Chuan Ma, Jun Li, Ming Ding, Bo Liu, Kang Wei, Jian Weng, H. Vincent Poor
Generative adversarial networks (GANs) have recently attracted increasing attention owing to their impressive ability to generate realistic samples with high privacy protection.
no code implementations • 11 Apr 2020 • Cheng Wang, Kang Wei, Lingjun Kong, Long Shi, Zhen Mei, Jun Li, Kui Cai
The error correcting performance of multi-level-cell (MLC) NAND flash memory is closely related to the block length of error correcting codes (ECCs) and log-likelihood-ratios (LLRs) of the read-voltage thresholds.
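To make the LLR notion concrete, here is a small sketch of the LLR for a read falling between two voltage thresholds, under the common assumption that cell voltages are Gaussian (the Gaussian model and its parameters are illustrative, not the paper's channel model):

```python
import math

def quantized_llr(v_low, v_high, mu0, mu1, sigma):
    """log P(read in [v_low, v_high) | bit=0) / P(same | bit=1), assuming
    N(mu0, sigma^2) and N(mu1, sigma^2) cell-voltage distributions."""
    def interval_prob(mu):
        cdf = lambda v: 0.5 * (1.0 + math.erf((v - mu) / (sigma * math.sqrt(2.0))))
        return cdf(v_high) - cdf(v_low)
    return math.log(interval_prob(mu0) / interval_prob(mu1))
```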
no code implementations • 29 Feb 2020 • Kang Wei, Jun Li, Ming Ding, Chuan Ma, Hang Su, Bo Zhang, H. Vincent Poor
According to our analysis, the UDP framework can realize $(\epsilon_{i}, \delta_{i})$-LDP for the $i$-th MT with adjustable privacy protection levels by varying the variances of the artificial noise processes.
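For orientation, the classical Gaussian-mechanism calibration links a noise standard deviation to a per-MT $(\epsilon_{i}, \delta_{i})$ level (valid for $\epsilon_{i} < 1$; whether UDP uses exactly this calibration is not stated in the excerpt):

```python
import math

def gaussian_sigma(eps_i, delta_i, sensitivity=1.0):
    """Noise std for (eps_i, delta_i)-DP via the classical Gaussian mechanism:
    sigma >= sensitivity * sqrt(2 ln(1.25/delta_i)) / eps_i."""
    return sensitivity * math.sqrt(2.0 * math.log(1.25 / delta_i)) / eps_i
```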
no code implementations • 1 Nov 2019 • Kang Wei, Jun Li, Ming Ding, Chuan Ma, Howard H. Yang, Farhad Farokhi, Shi Jin, Tony Q. S. Quek, H. Vincent Poor
Specifically, the theoretical bound reveals the following three key properties: 1) There is a tradeoff between the convergence performance and privacy protection levels, i.e., a better convergence performance leads to a lower protection level; 2) Given a fixed privacy protection level, increasing the number $N$ of overall clients participating in FL can improve the convergence performance; 3) There is an optimal number of maximum aggregation times (communication rounds) in terms of convergence performance for a given protection level.