no code implementations • 20 Oct 2023 • Weijie Liu, Xiaoxi Zhang, Jingpu Duan, Carlee Joe-Wong, Zhi Zhou, Xu Chen
Federated Learning (FL) is a distributed learning paradigm that can coordinate heterogeneous edge devices to perform model training without sharing private data.
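A minimal sketch of the federated averaging pattern behind this paradigm, in which each client trains on its private shard and the server only ever sees model weights (the names and the least-squares task are illustrative, not from the paper):

```python
import numpy as np

def local_update(weights, data, lr=0.01, epochs=1):
    """A client's local SGD steps; here, a least-squares gradient."""
    X, y = data
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def federated_round(global_w, client_datasets):
    """One FL round: clients train locally, server averages by data size."""
    updates, sizes = [], []
    for data in client_datasets:
        updates.append(local_update(global_w, data))
        sizes.append(len(data[1]))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

# Toy run: three clients whose private (X, y) shards never leave them.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(3)]
w = np.zeros(3)
for _ in range(10):
    w = federated_round(w, clients)
print(w)
```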
no code implementations • 31 Aug 2023 • Zhiying Feng, Xu Chen, Qiong Wu, Wen Wu, Xiaoxi Zhang, Qianyi Huang
FedDD consists of two key modules: dropout rate allocation, which tailors each client's model parameter uploading ratio to its heterogeneous conditions, and uploaded parameter selection, which chooses the set of important model parameters to upload subject to each client's dropout rate constraint.
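A rough sketch of the second module's idea: keep only the most important entries of a client's update, subject to its dropout-rate cap. Magnitude serves here as a stand-in importance score; the paper's actual selection criterion and names may differ:

```python
import numpy as np

def select_upload_mask(delta, dropout_rate):
    """Keep only the largest-magnitude entries of a parameter update,
    dropping a `dropout_rate` fraction before upload (illustrative)."""
    k = int(round((1.0 - dropout_rate) * delta.size))  # entries to keep
    if k == 0:
        return np.zeros_like(delta)
    idx = np.argpartition(np.abs(delta), -k)[-k:]  # top-k by magnitude
    sparse = np.zeros_like(delta)
    sparse[idx] = delta[idx]
    return sparse

# A client with a 60% dropout rate uploads only the top 40% of its update.
rng = np.random.default_rng(1)
update = rng.normal(size=10)
print(select_upload_mask(update, dropout_rate=0.6))
```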
no code implementations • 19 Jul 2023 • Liekang Zeng, Haowei Chen, Daipeng Feng, Xiaoxi Zhang, Xu Chen
Accurate navigation is of paramount importance for ensuring the flight safety and efficiency of autonomous drones.
no code implementations • 4 Jul 2023 • Liekang Zeng, Xu Chen, Peng Huang, Ke Luo, Xiaoxi Zhang, Zhi Zhou
Graph Neural Networks (GNNs) have attracted growing interest across a wide range of applications owing to their outstanding ability to extract latent representations from graph structures.
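A minimal sketch of the message-passing step behind that representation extraction, written as a single mean-aggregation layer in NumPy (illustrative; production GNNs use learned layers in frameworks such as PyTorch Geometric):

```python
import numpy as np

def gnn_layer(A, H, W):
    """One message-passing layer: each node averages its neighbors'
    features (plus its own), then applies a linear map + ReLU."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)  # node degrees
    H_agg = (A_hat @ H) / deg               # mean aggregation
    return np.maximum(H_agg @ W, 0.0)       # transform + nonlinearity

# Toy graph: 4 nodes, 3 input features, 2 latent features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)
rng = np.random.default_rng(2)
H = rng.normal(size=(4, 3))
W = rng.normal(size=(3, 2))
print(gnn_layer(A, H, W))  # latent node representations
```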
no code implementations • 22 Apr 2023 • Huirong Ma, Zhi Zhou, Xiaoxi Zhang, Xu Chen
Provisioning dynamic machine learning (ML) inference as a service for artificial intelligence (AI) applications on edge devices faces many challenges, including the trade-off among accuracy loss, carbon emission, and unknown future costs.
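One way to read this trade-off is as a scalarized per-round cost that an online policy tries to minimize; the weighted sum below is a hypothetical formulation, not the paper's objective:

```python
def inference_cost(accuracy_loss, carbon_kg, switching_cost,
                   alpha=1.0, beta=0.5, gamma=0.1):
    """Hypothetical scalarized per-round cost: a weighted sum of accuracy
    loss, carbon emission, and the cost of switching serving configurations
    (a stand-in for the unknown future costs an online policy must hedge)."""
    return alpha * accuracy_loss + beta * carbon_kg + gamma * switching_cost

# e.g. serving a smaller model: more accuracy loss, less carbon.
print(inference_cost(accuracy_loss=0.08, carbon_kg=0.02, switching_cost=1.0))
```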
no code implementations • 16 Jan 2023 • Qiong Wu, Xu Chen, Tao Ouyang, Zhi Zhou, Xiaoxi Zhang, Shusen Yang, Junshan Zhang
Federated learning (FL) is a promising paradigm that enables massive numbers of clients to collaboratively learn a shared model while keeping the training data local.
no code implementations • 12 Jun 2020 • Yichen Ruan, Xiaoxi Zhang, Shu-Che Liang, Carlee Joe-Wong
Traditional federated learning algorithms impose strict requirements on the participation rates of devices, which limit the potential reach of federated learning.
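A minimal sketch of why strict participation rates matter: when only some devices report back, a naive average is biased toward the active subset, and one classical fix is to reweight each update by an (assumed known) participation probability. The names and the scheme are illustrative, not the paper's algorithm:

```python
import numpy as np

def debiased_aggregate(updates, part_prob):
    """Aggregate updates from the devices that showed up this round,
    dividing each by its participation probability so the aggregate is
    an unbiased estimate of the all-device mean update (illustrative)."""
    dim = len(next(iter(updates.values())))
    total = np.zeros(dim)
    for i, delta in updates.items():
        total += delta / part_prob[i]
    return total / len(part_prob)  # normalize by the full device count

# 4 devices; only devices 0 and 2 reported back this round.
part_prob = {0: 0.9, 1: 0.5, 2: 0.5, 3: 0.2}
updates = {0: np.array([1.0, 0.0]), 2: np.array([0.0, 2.0])}
print(debiased_aggregate(updates, part_prob))
```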
no code implementations • 12 Mar 2020 • Xiaoxi Zhang, Jian-Yu Wang, Gauri Joshi, Carlee Joe-Wong
Due to the massive size of the neural network models and training datasets used in machine learning today, it is imperative to distribute stochastic gradient descent (SGD) by splitting up tasks such as gradient evaluation across multiple worker nodes.
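A minimal sketch of that split, simulated serially: the dataset is sharded across workers, each evaluates the gradient on its shard, and the server averages the gradients into one SGD step (illustrative names and a least-squares objective):

```python
import numpy as np

def worker_gradient(w, shard):
    """Each worker evaluates the gradient on its own data shard."""
    X, y = shard
    return X.T @ (X @ w - y) / len(y)  # least-squares gradient

def distributed_sgd_step(w, shards, lr=0.05):
    """Server step: average per-worker gradients, then update the model.
    (In a real deployment the loop body runs in parallel on worker nodes.)"""
    grads = [worker_gradient(w, s) for s in shards]
    return w - lr * np.mean(grads, axis=0)

# Shard a toy dataset across 4 workers and take a few steps.
rng = np.random.default_rng(3)
X, y = rng.normal(size=(80, 5)), rng.normal(size=80)
shards = list(zip(np.array_split(X, 4), np.array_split(y, 4)))
w = np.zeros(5)
for _ in range(20):
    w = distributed_sgd_step(w, shards)
print(w)
```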
no code implementations • 21 Nov 2019 • Jinhang Zuo, Xiaoxi Zhang, Carlee Joe-Wong
We consider the stochastic multi-armed bandit (MAB) problem in a setting where a player can pay to pre-observe arm rewards before playing an arm in each round.
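A small sketch of this decision loop, with a naive threshold policy that pays to peek at its favored arm and falls back to the runner-up on a bad draw (purely illustrative; the paper develops principled algorithms and regret analysis for this model):

```python
import numpy as np

rng = np.random.default_rng(4)
true_means = [0.3, 0.5, 0.7]   # unknown to the player
peek_cost = 0.1                # price paid to pre-observe one arm's reward

def draw(arm):
    """Bernoulli reward realization for an arm in the current round."""
    return float(rng.random() < true_means[arm])

counts, sums = np.ones(3), np.full(3, 0.5)  # optimistic running estimates
total = 0.0
for _ in range(1000):
    means = sums / counts
    best = int(np.argmax(means))
    observed = draw(best)       # pay to pre-observe the favored arm
    total -= peek_cost
    if observed > 0:
        arm, reward = best, observed      # play the peeked arm, keep its reward
    else:
        arm = int(np.argsort(means)[-2])  # fall back to the runner-up
        reward = draw(arm)                # unobserved until actually played
    total += reward
    counts[arm] += 1
    sums[arm] += reward
print(total)
```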