no code implementations • 11 Mar 2024 • Chenhao Wang, Zihan Chen, Nikolaos Pappas, Howard H. Yang, Tony Q. S. Quek, H. Vincent Poor
In contrast, an Adam-like algorithm converges at the $\mathcal{O}( 1/T )$ rate, demonstrating its advantage in expediting the model training process.
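The snippet above only states the convergence rate; the algorithm itself is not shown here. As a rough illustration of what a generic Adam-style update looks like (standard Adam on a toy scalar objective, not the paper's federated variant; all names below are illustrative):

```python
import math

def adam_step(w, grad, m, v, t, lr=0.05, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam-style update: exponential moment estimates with bias correction."""
    m = beta1 * m + (1 - beta1) * grad         # first moment (running mean of gradients)
    v = beta2 * v + (1 - beta2) * grad * grad  # second moment (running mean of squares)
    m_hat = m / (1 - beta1 ** t)               # bias-corrected estimates
    v_hat = v / (1 - beta2 ** t)
    w -= lr * m_hat / (math.sqrt(v_hat) + eps) # adaptive step
    return w, m, v

# Minimize the toy objective f(w) = w^2, whose gradient is 2w.
w, m, v = 1.0, 0.0, 0.0
for t in range(1, 2001):
    w, m, v = adam_step(w, 2.0 * w, m, v, t)
```

The adaptive denominator `sqrt(v_hat)` normalizes the step per coordinate, which is the mechanism behind the faster $\mathcal{O}(1/T)$ rate claimed above.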
no code implementations • 8 Feb 2024 • Xinyi Hu, Nikolaos Pappas, Howard H. Yang
Motivated by this, we introduce the novel concept of Version Age of Information (VAoI) to FL.
1 code implementation • NeurIPS 2023 • Zihan Chen, Howard H. Yang, Tony Q. S. Quek, Kai Fong Ernest Chong
Personalized federated learning (PFL) has been widely investigated to address the challenge of data heterogeneity, especially when a single generic model is inadequate in satisfying the diverse performance requirements of local clients simultaneously.
no code implementations • 16 Dec 2023 • Muhammad Azeem Khan, Howard H. Yang, Zihan Chen, Antonio Iera, Nikolaos Pappas
Federated Learning (FL) offers a solution by preserving data privacy during training.
no code implementations • 26 Oct 2023 • Xiang Chen, Zhiheng Guo, Xijun Wang, Howard H. Yang, Chenyuan Feng, Junshen Su, Sihui Zheng, Tony Q. S. Quek
Future wireless communication networks are poised to move beyond data-centric, device-oriented connectivity and offer intelligent, immersive experiences based on task-oriented connections, especially in the context of the thriving development of pre-trained foundation models (PFMs) and the evolving vision of 6G native artificial intelligence (AI).
no code implementations • 6 Oct 2023 • Zihan Chen, Howard H. Yang, Y. C. Tay, Kai Fong Ernest Chong, Tony Q. S. Quek
Foundation models (FMs) are general-purpose artificial intelligence (AI) models that have recently enabled a range of new generative AI applications.
no code implementations • 5 Sep 2023 • Zhengrong Song, Chuan Ma, Ming Ding, Howard H. Yang, Yuwen Qian, Xiangwei Zhou
This work proposes a novel solution to address these challenges, namely personalized federated deep reinforcement learning (PF-DRL), for multi-UAV trajectory optimization.
no code implementations • 17 Jun 2023 • Zihan Chen, Howard H. Yang, Tony Q. S. Quek
Federated edge learning is envisioned as the bedrock of enabling intelligence in next-generation wireless networks, but the limited spectral resources often constrain its scalability.
no code implementations • 30 Mar 2023 • Yuxuan Zhang, Chao Xu, Howard H. Yang, Xijun Wang, Tony Q. S. Quek
This paper proposes a client selection (CS) method to tackle the communication bottleneck of federated learning (FL) while concurrently coping with FL's data heterogeneity issue.
no code implementations • 19 Mar 2023 • Chaoqun You, Kun Guo, Howard H. Yang, Tony Q. S. Quek
Personalized Federated Learning (PFL) is a new Federated Learning (FL) paradigm that tackles, in particular, the heterogeneity issues introduced by diverse mobile user equipment (UEs) in mobile edge computing (MEC) networks.
no code implementations • 24 Feb 2023 • Zihan Chen, Zeshen Li, Howard H. Yang, Tony Q. S. Quek
Additionally, we leverage a bi-level optimization framework to personalize the federated learning model so as to cope with the data heterogeneity issue.
no code implementations • 27 Sep 2022 • Chaoqun You, Daquan Feng, Kun Guo, Howard H. Yang, Tony Q. S. Quek
Experimental results verify the effectiveness of PerFedS2 in reducing training time while guaranteeing convergence of the training loss, compared with synchronous and asynchronous PFL algorithms.
no code implementations • 30 Jun 2022 • Zihan Chen, Songshang Liu, Hualiang Wang, Howard H. Yang, Tony Q. S. Quek, Zuozhu Liu
Data privacy and class imbalance are the norm rather than the exception in many machine learning tasks.
no code implementations • 17 Feb 2022 • Howard H. Yang, Zuozhu Liu, Yaru Fu, Tony Q. S. Quek, H. Vincent Poor
Federated learning (FL) is an emerging machine learning method for mobile edge systems, in which a server and a host of clients collaboratively train a statistical model using the clients' data and computation resources, without the clients directly exposing their privacy-sensitive data.
no code implementations • 20 Aug 2021 • Chenyuan Feng, Howard H. Yang, Deshun Hu, Zhiwei Zhao, Tony Q. S. Quek, Geyong Min
Finally, we present experiments evaluating the learning performance of HFL and our MACFL.
no code implementations • 13 Apr 2021 • Chao Xu, Yiping Xie, Xijun Wang, Howard H. Yang, Dusit Niyato, Tony Q. S. Quek
cost), by integrating R-learning, a tabular reinforcement learning (RL) algorithm tailored for maximizing the long-term average reward, and traditional DRL algorithms, initially developed to optimize the discounted long-term cumulative reward rather than the average one.
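R-learning itself is a standard tabular method (Schwartz's average-reward algorithm); a minimal sketch of its update rule on a toy two-state MDP follows. The MDP, its rewards, and all names are illustrative and not taken from the paper — the point is only how R-learning tracks an average-reward estimate rho instead of discounting:

```python
import random

def r_learning(n_steps=20000, alpha=0.1, beta=0.01, eps=0.1, seed=0):
    """Tabular R-learning: learns action values relative to an estimated
    average reward rho, rather than a discounted cumulative reward.
    Toy MDP (illustrative): action a moves the agent to state a, and a
    reward of 1 is earned only upon entering state 1."""
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}
    rho, s = 0.0, 0
    for _ in range(n_steps):
        if rng.random() < eps:                       # epsilon-greedy exploration
            a, greedy = rng.choice((0, 1)), False
        else:
            a = max((0, 1), key=lambda x: Q[(s, x)])
            greedy = True
        s2, r = a, (1.0 if a == 1 else 0.0)          # toy dynamics and reward
        td = r - rho + max(Q[(s2, 0)], Q[(s2, 1)]) - Q[(s, a)]
        Q[(s, a)] += alpha * td                      # average-reward TD update
        if greedy:                                   # update rho on greedy steps only
            rho += beta * (r - rho + max(Q[(s2, 0)], Q[(s2, 1)])
                           - max(Q[(s, 0)], Q[(s, 1)]))
        s = s2
    return rho, Q
```

On this toy problem the optimal policy earns reward 1 every step, so the learned rho should approach an average reward of 1.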
1 code implementation • 5 Jul 2020 • Wenchao Xia, Tony Q. S. Quek, Kun Guo, Wanli Wen, Howard H. Yang, Hongbo Zhu
In each communication round of FL, the clients update local models based on their own data and upload their local updates via wireless channels.
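The round structure described above can be sketched as a FedAvg-style loop on a scalar model. The 1-D least-squares setup and all names are illustrative, not the paper's system model (which, per the abstract, involves wireless uplinks):

```python
def local_update(w, data, lr=0.1, epochs=1):
    """Client-side SGD on a 1-D least-squares model y ~ w * x (illustrative)."""
    for _ in range(epochs):
        for x, y in data:
            grad = 2.0 * (w * x - y) * x   # gradient of (w*x - y)^2
            w -= lr * grad
    return w

def fedavg_round(w_global, client_data):
    """One communication round: broadcast the global model, let each client
    train locally on its own data, then average the uploaded models
    weighted by local dataset size (as in FedAvg)."""
    sizes = [len(d) for d in client_data]
    updates = [local_update(w_global, d) for d in client_data]
    return sum(n * w for n, w in zip(sizes, updates)) / sum(sizes)

# Two clients whose data both follow y = 2x; the global model should approach 2.
clients = [[(1.0, 2.0), (2.0, 4.0)], [(0.5, 1.0), (3.0, 6.0)]]
w = 0.0
for _ in range(50):
    w = fedavg_round(w, clients)
```

In a wireless deployment, the `updates` list is exactly what traverses the uplink channels each round, which is why channel quality and scheduling matter for FL performance.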
no code implementations • 1 Nov 2019 • Kang Wei, Jun Li, Ming Ding, Chuan Ma, Howard H. Yang, Farhad Farokhi, Shi Jin, Tony Q. S. Quek, H. Vincent Poor
Specifically, the theoretical bound reveals three key properties: 1) there is a tradeoff between convergence performance and the privacy protection level, i.e., a better convergence performance leads to a lower protection level; 2) for a fixed privacy protection level, increasing the total number $N$ of clients participating in FL improves the convergence performance; 3) there is an optimal number of maximum aggregation times (communication rounds) in terms of convergence performance for a given protection level.
no code implementations • 31 Oct 2019 • Howard H. Yang, Ahmed Arafa, Tony Q. S. Quek, H. Vincent Poor
Federated learning (FL) is a machine learning paradigm that preserves data privacy during the training process.
Information Theory, Signal Processing
no code implementations • 17 Aug 2019 • Howard H. Yang, Zuozhu Liu, Tony Q. S. Quek, H. Vincent Poor
Due to limited bandwidth, only a portion of UEs can be scheduled for updates at each iteration.
Information Theory, Signal Processing