no code implementations • 16 Nov 2023 • Hongda Wu, Ping Wang, C V Aswartha Narayana
Federated Learning (FL) enables many resource-limited devices to train a model collaboratively without data sharing.
no code implementations • 16 Mar 2022 • Hongda Wu, Ali Nasehzadeh, Ping Wang
In this work, we propose a DRL-based caching scheme that improves the cache hit rate and reduces the energy consumption of IoT networks, while taking the freshness and limited lifetime of IoT data into account.
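As a rough illustration of how freshness and lifetime could enter the caching decision, the sketch below shows a toy reward function such a DRL agent might optimize; the linear form and the weighting coefficients `w_hit`, `w_energy`, and `w_fresh` are assumptions for illustration, not the reward used in the paper.

```python
def caching_reward(hit, energy_cost, age, lifetime,
                   w_hit=1.0, w_energy=0.1, w_fresh=0.5):
    """Toy caching reward (illustrative sketch, not the paper's reward).

    hit         : 1 if the request was served from the cache, else 0
    energy_cost : energy spent fetching/updating the item (arbitrary units)
    age         : time elapsed since the cached data item was generated
    lifetime    : time after which the IoT data item expires
    """
    # Expired or stale items contribute less (or nothing) to the reward.
    freshness = max(0.0, 1.0 - age / lifetime)
    return w_hit * hit - w_energy * energy_cost + w_fresh * freshness
```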
1 code implementation • 14 May 2021 • Hongda Wu, Ping Wang
In this paper, we propose the Optimal Aggregation algorithm, which finds the optimal subset of local updates from participating nodes in each global round by identifying and excluding adverse local updates based on the relationship between the local gradient and the global gradient.
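A minimal sketch of the gradient-alignment idea described above, assuming each local update is available as a flattened gradient vector; the inner-product test with a zero threshold is an illustrative criterion for "adverse" updates, not necessarily the exact rule used in the paper.

```python
import numpy as np

def select_local_updates(local_grads, global_grad):
    """Keep only local updates whose direction aligns with the global gradient.

    local_grads : list of 1-D numpy arrays, one flattened gradient per node
    global_grad : 1-D numpy array, e.g. the average of all local gradients

    An update is treated as adverse (and excluded) when its inner product
    with the global gradient is non-positive, i.e. it points away from the
    direction the global model is moving.
    """
    selected = []
    for i, grad in enumerate(local_grads):
        alignment = float(np.dot(grad, global_grad))
        if alignment > 0.0:  # illustrative threshold; the paper's test may differ
            selected.append(i)
    return selected

def aggregate(local_grads, selected):
    """Aggregate only the selected updates (equal weights, for simplicity)."""
    return np.mean([local_grads[i] for i in selected], axis=0)
```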
no code implementations • 1 Dec 2020 • Hongda Wu, Ping Wang
With extensive experiments performed in PyTorch and PySyft, we show that FL training with FedAdp can reduce the number of communication rounds by up to 54.1% on the MNIST dataset and up to 45.4% on the FashionMNIST dataset, compared to the FedAvg algorithm.
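For reference, a minimal sketch of the FedAvg baseline aggregation that FedAdp is compared against, assuming each node reports its updated (flattened) model parameters together with its local sample count; FedAdp itself replaces these static, data-size-based weights with adaptive ones, which is not shown here.

```python
import numpy as np

def fedavg_aggregate(local_weights, sample_counts):
    """Standard FedAvg aggregation: average local models weighted by data size.

    local_weights : list of 1-D numpy arrays (flattened model parameters per node)
    sample_counts : list of ints, number of local training samples per node
    """
    total = float(sum(sample_counts))
    coeffs = [n / total for n in sample_counts]
    return sum(c * w for c, w in zip(coeffs, local_weights))
```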