no code implementations • 21 Oct 2023 • Peichun Li, Hanwen Zhang, Yuan Wu, LiPing Qian, Rong Yu, Dusit Niyato, Xuemin Shen
Distributed Artificial Intelligence (AI) model training over mobile edge networks encounters significant challenges due to the data and resource heterogeneity of edge devices.
no code implementations • 14 Jul 2023 • Xumin Huang, Peichun Li, Hongyang Du, Jiawen Kang, Dusit Niyato, Dong In Kim, Yuan Wu
Artificial intelligence generated content (AIGC) has emerged as a promising technology to improve the efficiency, quality, diversity, and flexibility of the content creation process by adopting a variety of generative AI models.
no code implementations • 8 Jan 2023 • Peichun Li, Guoliang Cheng, Xumin Huang, Jiawen Kang, Rong Yu, Yuan Wu, Miao Pan
We propose a cost-adjustable FL framework, named AnycostFL, that enables diverse edge devices to efficiently perform local updates under a wide range of efficiency constraints.
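The abstract excerpt does not describe AnycostFL's internal mechanism, but the idea of local updates under a wide range of efficiency constraints can be sketched as each device scaling its local computation to fit a per-round budget. All names, the FLOP-cost model, and the step-count heuristic below are illustrative assumptions, not the paper's method:

```python
import numpy as np

# Hypothetical sketch: each device scales its local work (number of
# gradient steps) to fit an efficiency budget, so constrained and
# unconstrained devices can both contribute to the same round.

def steps_for_budget(budget_flops, flops_per_step):
    """Choose how many local steps a device can afford this round."""
    return max(1, int(budget_flops // flops_per_step))

def elastic_local_update(w, X, y, budget_flops, lr=0.05):
    """Local linear-regression training truncated to the device's budget."""
    flops_per_step = 4 * X.size  # rough cost estimate of one gradient step
    steps = steps_for_budget(budget_flops, flops_per_step)
    for _ in range(steps):
        w = w - lr * 2 * X.T @ (X @ w - y) / len(y)  # MSE gradient step
    return w, steps

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 2))
y = X @ np.array([1.0, 3.0])

# A constrained device performs fewer local steps than an unconstrained one.
_, slow_steps = elastic_local_update(np.zeros(2), X, y, budget_flops=1_000)
_, fast_steps = elastic_local_update(np.zeros(2), X, y, budget_flops=20_000)
print(slow_steps, fast_steps)  # prints "2 50"
```

The design point is that the server never forces a fixed workload: each device reports what it completed, so heterogeneous hardware does not stall the round.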
no code implementations • 11 Nov 2021 • Peichun Li, Xumin Huang, Miao Pan, Rong Yu
Federated learning (FL) enables devices in mobile edge computing (MEC) to collaboratively train a shared model without uploading the local data.
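The collaborative-training pattern described above can be illustrated with a minimal federated averaging (FedAvg-style) sketch: each device trains on its private data locally, and the server only aggregates model weights, so raw data never leaves the device. The linear-regression task and all function names are illustrative assumptions:

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One device's local training: linear regression via gradient descent."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # MSE gradient
        w -= lr * grad
    return w

def fedavg_round(global_w, devices):
    """Server aggregates local models, weighted by local dataset size."""
    updates = [(local_update(global_w, X, y), len(y)) for X, y in devices]
    total = sum(n for _, n in updates)
    return sum(w * (n / total) for w, n in updates)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Two edge devices with heterogeneous local dataset sizes.
devices = []
for n in (30, 70):
    X = rng.normal(size=(n, 2))
    devices.append((X, X @ true_w + 0.01 * rng.normal(size=n)))

w = np.zeros(2)
for _ in range(20):
    w = fedavg_round(w, devices)
print(np.round(w, 2))  # converges toward [2, -1]
```

Note that only the weight vectors cross the network; the `(X, y)` pairs stay on their devices, which is the privacy property the abstract refers to.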
no code implementations • 19 Oct 2021 • Xumin Huang, Peichun Li, Rong Yu, Yuan Wu, Kan Xie, Shengli Xie
In parked vehicle edge computing (PVEC), different parking lot operators (PLOs) recruit parked vehicles (PVs) as edge computing nodes for offloading services through an incentive mechanism, which is designed according to the computation demand and the parking capacity constraints derived from FedParking.