Search Results for author: Yuzhu Mao

Found 3 papers, 0 papers with code

FL-TAC: Enhanced Fine-Tuning in Federated Learning via Low-Rank, Task-Specific Adapter Clustering

no code implementations · 23 Apr 2024 · Siqi Ping, Yuzhu Mao, Yang Liu, Xiao-Ping Zhang, Wenbo Ding

Although large-scale pre-trained models hold great potential for adapting to downstream tasks through fine-tuning, the performance of such fine-tuned models is often limited by the difficulty of collecting sufficient high-quality, task-specific data.

Clustering · Federated Learning
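The "low-rank, task-specific adapters" in the title follow the general LoRA pattern of adding a trainable low-rank correction to a frozen pre-trained weight. A minimal sketch of that pattern (an illustration only, not FL-TAC's actual implementation; all names and shapes here are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, rank = 64, 64, 4  # rank << d keeps the adapter small

W = rng.standard_normal((d_out, d_in))        # frozen pre-trained weight
A = rng.standard_normal((rank, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, rank))                   # trainable up-projection (zero init)

def forward(x):
    # The adapter adds a low-rank correction (B @ A) @ x to the frozen layer.
    return W @ x + B @ (A @ x)

full_params = W.size              # 4096 parameters if fine-tuned fully
adapter_params = A.size + B.size  # 512 parameters actually trained/shared
print(adapter_params / full_params)  # 0.125
```

Because only `A` and `B` are trained, each client's task-specific update is a small low-rank object, which is what makes per-task clustering of adapters cheap in a federated setting.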

AQUILA: Communication Efficient Federated Learning with Adaptive Quantization in Device Selection Strategy

no code implementations · 1 Aug 2023 · Zihao Zhao, Yuzhu Mao, Zhenpeng Shi, Yang Liu, Tian Lan, Wenbo Ding, Xiao-Ping Zhang

In response, this paper introduces AQUILA (adaptive quantization in device selection strategy), a novel adaptive framework designed to handle these issues effectively, enhancing the efficiency and robustness of FL.

Federated Learning · Privacy Preserving · +1
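The snippet above does not describe AQUILA's actual adaptive bit-selection rule, but the underlying building block, stochastic quantization of client updates onto a coarse grid (QSGD-style), can be sketched as follows; the function name and level count are illustrative assumptions:

```python
import numpy as np

def quantize(v, levels, seed=0):
    """Map each entry of v onto `levels` uniform magnitude levels of
    [0, max|v|], keeping the sign, using stochastic rounding so the
    quantized vector is an unbiased estimate of v."""
    norm = np.max(np.abs(v))
    if norm == 0:
        return v.copy()
    scaled = np.abs(v) / norm * levels          # position on the level grid
    lower = np.floor(scaled)
    prob = scaled - lower                       # chance of rounding up
    up = np.random.default_rng(seed).random(v.shape) < prob
    return np.sign(v) * (lower + up) * norm / levels

g = np.array([0.3, -1.2, 0.05, 0.9])  # a toy client gradient
q = quantize(g, levels=4)
# Each |q_i| lands on one of 5 grid points, so it fits in ~3 bits
# instead of 32, which is what cuts the communication cost.
```

An adaptive scheme like the one the title suggests would then vary `levels` per round or per device rather than fixing it; that selection rule is not given in the snippet.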

SAFARI: Sparsity-Enabled Federated Learning with Limited and Unreliable Communications

no code implementations · 5 Apr 2022 · Yuzhu Mao, Zihao Zhao, Meilin Yang, Le Liang, Yang Liu, Wenbo Ding, Tian Lan, Xiao-Ping Zhang

It is demonstrated that SAFARI under unreliable communications is guaranteed to converge at the same rate as the standard FedAvg with perfect communications.

Federated Learning · Sparse Learning
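The snippet does not spell out SAFARI's sparsity mechanism, but the generic idea of sparsity-enabled FL, sending only the few largest-magnitude coordinates of each client update so less survives an unreliable link, can be sketched like this (names and the top-k choice are illustrative assumptions):

```python
import numpy as np

def top_k_sparsify(update, k):
    """Keep only the k largest-magnitude entries of a client update;
    zero out the rest so only k (index, value) pairs need transmitting."""
    sparse = np.zeros_like(update)
    idx = np.argsort(np.abs(update))[-k:]  # indices of the k largest magnitudes
    sparse[idx] = update[idx]
    return sparse

u = np.array([0.1, -2.0, 0.4, 0.02, 1.5])  # a toy client update
s = top_k_sparsify(u, k=2)
print(s)  # only -2.0 and 1.5 survive; the other entries are zeroed
```

Under this kind of compression, the convergence claim quoted above says the sparsified, unreliable-channel variant still matches the rate of standard FedAvg with perfect communication.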
