2 code implementations • 24 May 2022 • Weiming Zhuang, Xin Gan, Yonggang Wen, Shuai Zhang
Based on these insights, we propose three optimization approaches: (1) We adopt knowledge distillation to facilitate the convergence of FedReID by better transferring knowledge from clients to the server; (2) We introduce client clustering to improve performance on large datasets by aggregating clients with similar data distributions; (3) We propose cosine distance weights to further improve performance by dynamically updating the aggregation weights according to how well each client's model is trained.
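The cosine distance weight in (3) can be illustrated with a short, hypothetical sketch. It assumes each client's aggregation weight is proportional to the cosine distance between the global model's parameters and that client's locally trained parameters, so clients whose models changed more during local training contribute more to the aggregate; the helper names and the normalization are illustrative, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def flatten(params):
    """Concatenate a list of parameter tensors into a single 1-D vector."""
    return torch.cat([p.detach().reshape(-1) for p in params])

def cosine_distance_weights(global_params, client_params_list):
    """Return one normalized aggregation weight per client."""
    g = flatten(global_params)
    distances = []
    for client_params in client_params_list:
        c = flatten(client_params)
        # Cosine distance = 1 - cosine similarity between global and client models.
        distances.append(1.0 - F.cosine_similarity(g, c, dim=0))
    distances = torch.stack(distances).clamp(min=1e-8)
    return distances / distances.sum()

def aggregate(client_params_list, weights):
    """Weighted average of the clients' parameter lists (layer by layer)."""
    return [sum(w * p for w, p in zip(weights, layer_params))
            for layer_params in zip(*client_params_list)]
```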
no code implementations • 9 Apr 2022 • Weiming Zhuang, Xin Gan, Yonggang Wen, Xuesen Zhang, Shuai Zhang, Shuai Yi
To address this problem, we propose FedFR, a federated unsupervised domain adaptation approach for face recognition.
1 code implementation • ICCV 2021 • Weiming Zhuang, Xin Gan, Yonggang Wen, Shuai Zhang, Shuai Yi
In this framework, each party trains models from unlabeled data independently using contrastive learning with an online network and a target network.
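As a rough illustration of the online/target design, here is a minimal BYOL-style sketch; the single-linear-layer predictor, the feature dimension, the EMA momentum, and the negative-cosine loss are simplifying assumptions rather than the framework's exact architecture.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class OnlineTargetLearner(nn.Module):
    def __init__(self, encoder, feat_dim=128, momentum=0.99):
        super().__init__()
        self.online_encoder = encoder
        self.online_predictor = nn.Linear(feat_dim, feat_dim)
        # Target network: an EMA copy of the online encoder, never updated by gradients.
        self.target_encoder = copy.deepcopy(encoder)
        for p in self.target_encoder.parameters():
            p.requires_grad = False
        self.momentum = momentum

    @torch.no_grad()
    def update_target(self):
        """Exponential-moving-average update of the target network."""
        for po, pt in zip(self.online_encoder.parameters(),
                          self.target_encoder.parameters()):
            pt.data = self.momentum * pt.data + (1 - self.momentum) * po.data

    def loss(self, view1, view2):
        """Negative cosine similarity between online prediction and target projection."""
        p = F.normalize(self.online_predictor(self.online_encoder(view1)), dim=-1)
        with torch.no_grad():
            z = F.normalize(self.target_encoder(view2), dim=-1)
        return 2 - 2 * (p * z).sum(dim=-1).mean()
```

With a toy encoder such as `nn.Sequential(nn.Flatten(), nn.Linear(784, 128))`, one training step computes `loss(view1, view2)`, backpropagates through the online branch only, and then calls `update_target()` so the target network trails the online network as a slowly moving average.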
no code implementations • 17 May 2021 • Weiming Zhuang, Xin Gan, Yonggang Wen, Xuesen Zhang, Shuai Zhang, Shuai Yi
To this end, FedFR forms an end-to-end training pipeline: (1) pre-train in the source domain; (2) predict pseudo labels by clustering in the target domain; (3) conduct domain-constrained federated learning across the two domains.
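The three stages might look like the following schematic sketch on toy feature vectors; the clustering algorithm (KMeans), the placeholder local update, and the simple two-way averaging are illustrative stand-ins for the paper's actual training procedure.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# (1) Pre-train in the labeled source domain (placeholder: real training code
#     would fit a recognition model on labeled source data here).
source_features = rng.normal(size=(100, 64))

# (2) Predict pseudo labels for the unlabeled target domain by clustering,
#     then form one centroid per pseudo identity.
target_features = rng.normal(size=(80, 64))
pseudo_labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(target_features)
target_centroids = np.stack([target_features[pseudo_labels == k].mean(axis=0)
                             for k in range(8)])

# (3) Domain-constrained federated learning: each domain performs a local
#     update, and the server averages the two updates every round (placeholder).
def local_update(weights, features):
    return weights + 0.1 * features.mean(axis=0)  # stand-in for local training

global_weights = np.zeros(64)
for round_idx in range(3):
    w_source = local_update(global_weights, source_features)
    w_target = local_update(global_weights, target_centroids)
    global_weights = (w_source + w_target) / 2.0  # server-side aggregation
```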
1 code implementation • 17 May 2021 • Weiming Zhuang, Xin Gan, Yonggang Wen, Shuai Zhang
However, these platforms are complex to use and require a deep understanding of FL, which imposes high barriers to entry for beginners, limits the productivity of researchers, and compromises deployment efficiency.
2 code implementations • 26 Aug 2020 • Weiming Zhuang, Yonggang Wen, Xuesen Zhang, Xin Gan, Daiying Yin, Dongzhan Zhou, Shuai Zhang, Shuai Yi
Then we propose two optimization methods: (1) To address the unbalanced weight problem, we propose a new method that dynamically adjusts the aggregation weights according to the scale of model changes in each client in every training round; (2) To facilitate convergence, we adopt knowledge distillation to refine the server model with knowledge generated by client models on a public dataset.
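The distillation step in (2) might look like the following minimal sketch, assuming the server model is trained to match the averaged softened predictions of the client models on a shared public dataset; the temperature, the KL-divergence loss, and the plain ensemble averaging are illustrative assumptions rather than the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def distill_server_model(server_model, client_models, public_loader,
                         optimizer, temperature=2.0):
    """Refine the server model against an ensemble of client models."""
    server_model.train()
    for images, _ in public_loader:               # labels of the public set are unused
        with torch.no_grad():
            # Ensemble teacher: average of softened client predictions.
            teacher_probs = torch.stack([
                F.softmax(m(images) / temperature, dim=-1) for m in client_models
            ]).mean(dim=0)
        student_log_probs = F.log_softmax(server_model(images) / temperature, dim=-1)
        loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")
        loss = loss * temperature ** 2            # standard distillation scaling
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```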