Search Results for author: Howard H. Yang

Found 20 papers, 2 papers with code

Adaptive Federated Learning Over the Air

no code implementations • 11 Mar 2024 • Chenhao Wang, Zihan Chen, Nikolaos Pappas, Howard H. Yang, Tony Q. S. Quek, H. Vincent Poor

In contrast, an Adam-like algorithm converges at the $\mathcal{O}(1/T)$ rate, demonstrating its advantage in expediting the model training process.

Federated Learning
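The snippet's $\mathcal{O}(1/T)$ claim refers to an Adam-style adaptive update applied at the server to the aggregated (over-the-air) gradient. A minimal sketch of such a server step; the function name, hyperparameters, and state layout are illustrative, not taken from the paper:

```python
import numpy as np

def adam_server_update(w, g, state, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam-style server step on the aggregated gradient g (illustrative)."""
    m, v, t = state
    t += 1
    m = b1 * m + (1 - b1) * g            # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * g**2         # second-moment (variance) estimate
    m_hat = m / (1 - b1**t)              # bias correction
    v_hat = v / (1 - b2**t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, (m, v, t)
```

In a federated round, `g` would be the noisy sum of client gradients received over the multiple-access channel; this sketch does not model the channel noise the paper analyzes.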

Spectral Co-Distillation for Personalized Federated Learning

1 code implementation • NeurIPS 2023 • Zihan Chen, Howard H. Yang, Tony Q. S. Quek, Kai Fong Ernest Chong

Personalized federated learning (PFL) has been widely investigated to address the challenge of data heterogeneity, especially when a single generic model is inadequate in satisfying the diverse performance requirements of local clients simultaneously.

Personalized Federated Learning

Foundation Model Based Native AI Framework in 6G with Cloud-Edge-End Collaboration

no code implementations • 26 Oct 2023 • Xiang Chen, Zhiheng Guo, Xijun Wang, Howard H. Yang, Chenyuan Feng, Junshen Su, Sihui Zheng, Tony Q. S. Quek

Future wireless communication networks are in a position to move beyond data-centric, device-oriented connectivity and offer intelligent, immersive experiences based on task-oriented connections, especially in the context of the thriving development of pre-trained foundation models (PFM) and the evolving vision of 6G native artificial intelligence (AI).

The Role of Federated Learning in a Wireless World with Foundation Models

no code implementations • 6 Oct 2023 • Zihan Chen, Howard H. Yang, Y. C. Tay, Kai Fong Ernest Chong, Tony Q. S. Quek

Foundation models (FMs) are general-purpose artificial intelligence (AI) models that have recently enabled multiple brand-new generative AI applications.

Federated Learning

Personalized Federated Deep Reinforcement Learning-based Trajectory Optimization for Multi-UAV Assisted Edge Computing

no code implementations • 5 Sep 2023 • Zhengrong Song, Chuan Ma, Ming Ding, Howard H. Yang, Yuwen Qian, Xiangwei Zhou

This work proposes a novel solution to address these challenges, namely personalized federated deep reinforcement learning (PF-DRL), for multi-UAV trajectory optimization.

Edge-computing Federated Learning +1

Edge Intelligence Over the Air: Two Faces of Interference in Federated Learning

no code implementations • 17 Jun 2023 • Zihan Chen, Howard H. Yang, Tony Q. S. Quek

Federated edge learning is envisioned as the bedrock of enabling intelligence in next-generation wireless networks, but the limited spectral resources often constrain its scalability.

Federated Learning

DPP-based Client Selection for Federated Learning with Non-IID Data

no code implementations • 30 Mar 2023 • Yuxuan Zhang, Chao Xu, Howard H. Yang, Xijun Wang, Tony Q. S. Quek

This paper proposes a client selection (CS) method to tackle the communication bottleneck of federated learning (FL) while concurrently coping with FL's data heterogeneity issue.

Federated Learning
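A determinantal point process (DPP) selects subsets whose kernel submatrix has a large determinant, which favors diverse clients. A toy greedy MAP-style sketch of such a selection; the kernel construction and feature choice here are illustrative assumptions, not the paper's actual method:

```python
import numpy as np

def dpp_greedy_select(features, k):
    """Greedily pick k diverse clients via a DPP-style MAP approximation.

    features: (n_clients, d) array, e.g. summaries of local label
    distributions; diversity is scored through the Gram (kernel) matrix.
    """
    n = len(features)
    L = features @ features.T + 1e-6 * np.eye(n)  # PSD similarity kernel
    selected = []
    for _ in range(k):
        best, best_det = None, -np.inf
        for i in range(n):
            if i in selected:
                continue
            idx = selected + [i]
            # Larger determinant == more "volume" == more diverse subset.
            det = np.linalg.det(L[np.ix_(idx, idx)])
            if det > best_det:
                best, best_det = i, det
        selected.append(best)
    return selected
```

Near-duplicate clients make the submatrix nearly singular, so the greedy rule naturally avoids scheduling redundant data distributions.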

Hierarchical Personalized Federated Learning Over Massive Mobile Edge Computing Networks

no code implementations • 19 Mar 2023 • Chaoqun You, Kun Guo, Howard H. Yang, Tony Q. S. Quek

Personalized Federated Learning (PFL) is a new Federated Learning (FL) paradigm that tackles the heterogeneity issues introduced by the various mobile user equipments (UEs) in mobile edge computing (MEC) networks.

Edge-computing Personalized Federated Learning +1

Personalizing Federated Learning with Over-the-Air Computations

no code implementations • 24 Feb 2023 • Zihan Chen, Zeshen Li, Howard H. Yang, Tony Q. S. Quek

Additionally, we leverage a bi-level optimization framework to personalize the federated learning model so as to cope with the data heterogeneity issue.

Federated Learning Privacy Preserving

Semi-Synchronous Personalized Federated Learning over Mobile Edge Networks

no code implementations • 27 Sep 2022 • Chaoqun You, Daquan Feng, Kun Guo, Howard H. Yang, Tony Q. S. Quek

Experimental results verify the effectiveness of PerFedS2 in saving training time as well as guaranteeing the convergence of training loss, in contrast to synchronous and asynchronous PFL algorithms.

Personalized Federated Learning Scheduling

Towards Federated Long-Tailed Learning

no code implementations • 30 Jun 2022 • Zihan Chen, Songshang Liu, Hualiang Wang, Howard H. Yang, Tony Q. S. Quek, Zuozhu Liu

Data privacy and class imbalance are the norm rather than the exception in many machine learning tasks.

Federated Learning

Federated Stochastic Gradient Descent Begets Self-Induced Momentum

no code implementations • 17 Feb 2022 • Howard H. Yang, Zuozhu Liu, Yaru Fu, Tony Q. S. Quek, H. Vincent Poor

Federated learning (FL) is an emerging machine learning method that can be applied in mobile edge systems, in which a server and a host of clients collaboratively train a statistical model utilizing the data and computation resources of the clients without directly exposing their privacy-sensitive data.

Federated Learning
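The server–client training loop the snippet describes is the standard FedAvg pattern: sample clients, run local SGD, average the returned models. A minimal sketch under a toy least-squares objective (names and hyperparameters are illustrative; the paper's contribution, the momentum-like term induced by partial participation, is not modeled here):

```python
import numpy as np

def local_sgd(w, data, lr=0.1, steps=5):
    """Run a few local SGD steps on a least-squares objective."""
    X, y = data
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def fedavg_round(w_global, client_data, rng, frac=0.5):
    """One round: sample a fraction of clients, train locally, average."""
    n = len(client_data)
    chosen = rng.choice(n, size=max(1, int(frac * n)), replace=False)
    local_models = [local_sgd(w_global.copy(), client_data[i]) for i in chosen]
    return np.mean(local_models, axis=0)
```

Raw data never leaves the clients; only the locally updated models are averaged at the server, which is the privacy property the abstract highlights.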

Optimizing the Long-Term Average Reward for Continuing MDPs: A Technical Report

no code implementations • 13 Apr 2021 • Chao Xu, Yiping Xie, Xijun Wang, Howard H. Yang, Dusit Niyato, Tony Q. S. Quek

…cost), by integrating R-learning, a tabular reinforcement learning (RL) algorithm tailored for maximizing the long-term average reward, and traditional DRL algorithms, initially developed to optimize the discounted long-term cumulative reward rather than the average one.

reinforcement-learning Reinforcement Learning (RL)

Multi-Armed Bandit Based Client Scheduling for Federated Learning

1 code implementation • 5 Jul 2020 • Wenchao Xia, Tony Q. S. Quek, Kun Guo, Wanli Wen, Howard H. Yang, Hongbo Zhu

In each communication round of FL, the clients update local models based on their own data and upload their local updates via wireless channels.

Federated Learning Scheduling
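Casting client scheduling as a multi-armed bandit means treating each client as an arm and balancing observed utility (e.g. update quality or channel rate) against exploring rarely scheduled clients. A UCB1-style sketch; the reward definition and the single-client-per-round simplification are illustrative assumptions, not the paper's exact formulation:

```python
import math
import random

class UCBScheduler:
    """UCB1-style client scheduler for federated rounds (illustrative)."""

    def __init__(self, n_clients):
        self.counts = [0] * n_clients   # times each client was scheduled
        self.means = [0.0] * n_clients  # running mean reward per client
        self.t = 0

    def select(self):
        self.t += 1
        # Schedule every client once before applying the UCB rule.
        for i, c in enumerate(self.counts):
            if c == 0:
                return i
        # Exploit high-mean clients, with an exploration bonus that
        # shrinks as a client accumulates observations.
        return max(range(len(self.counts)),
                   key=lambda i: self.means[i]
                   + math.sqrt(2 * math.log(self.t) / self.counts[i]))

    def update(self, i, reward):
        self.counts[i] += 1
        self.means[i] += (reward - self.means[i]) / self.counts[i]
```

Over many rounds the scheduler concentrates on clients whose uploads have historically been most useful while still probing the rest occasionally.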

Federated Learning with Differential Privacy: Algorithms and Performance Analysis

no code implementations • 1 Nov 2019 • Kang Wei, Jun Li, Ming Ding, Chuan Ma, Howard H. Yang, Farhad Farokhi, Shi Jin, Tony Q. S. Quek, H. Vincent Poor

Specifically, the theoretical bound reveals the following three key properties: 1) There is a tradeoff between the convergence performance and privacy protection levels, i.e., a better convergence performance leads to a lower protection level; 2) Given a fixed privacy protection level, increasing the number $N$ of overall clients participating in FL can improve the convergence performance; 3) There is an optimal number of maximum aggregation times (communication rounds) in terms of convergence performance for a given protection level.

Federated Learning Privacy Preserving +1
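The tradeoff in property 1) arises because differential privacy in FL is typically obtained by clipping each client update and adding Gaussian noise before aggregation (the Gaussian mechanism): more noise means a smaller privacy budget but a noisier, slower-converging model. A sketch of that sanitization step; the parameter names are illustrative, not the paper's notation:

```python
import numpy as np

def dp_sanitize(update, clip=1.0, noise_mult=1.0, rng=None):
    """Clip a client update to L2 norm `clip`, then add Gaussian noise
    with std = noise_mult * clip. Larger noise_mult -> stronger privacy,
    noisier aggregate (illustrative Gaussian-mechanism sketch)."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip / max(norm, 1e-12))
    return clipped + rng.normal(0.0, noise_mult * clip, size=update.shape)
```

Clipping bounds each client's sensitivity so the added noise yields a calibrated (epsilon, delta) guarantee; the convergence analysis then quantifies how that noise slows training.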

Age-Based Scheduling Policy for Federated Learning in Mobile Edge Networks

no code implementations • 31 Oct 2019 • Howard H. Yang, Ahmed Arafa, Tony Q. S. Quek, H. Vincent Poor

Federated learning (FL) is a machine learning model that preserves data privacy in the training process.

Information Theory Signal Processing
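An age-based policy prioritizes clients whose updates have gone longest without being incorporated into the global model, so no client's data becomes stale. A toy sketch of such a rule; the pure largest-age scoring here is an illustrative simplification of the paper's policy:

```python
def age_based_schedule(ages, k):
    """Pick the k clients with the largest age (rounds since last update)."""
    return sorted(range(len(ages)), key=lambda i: -ages[i])[:k]

def advance(ages, scheduled):
    """Reset the age of scheduled clients; all others grow one round older."""
    return [0 if i in scheduled else a + 1 for i, a in enumerate(ages)]
```

In practice such a rule is combined with channel conditions, so a fresh update is not requested from a client whose link cannot deliver it in time.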

Scheduling Policies for Federated Learning in Wireless Networks

no code implementations • 17 Aug 2019 • Howard H. Yang, Zuozhu Liu, Tony Q. S. Quek, H. Vincent Poor

Due to limited bandwidth, only a portion of UEs can be scheduled for updates at each iteration.

Information Theory Signal Processing
