Based on this measure, we also design a computationally efficient client sampling strategy, such that the actively selected clients generate a more class-balanced grouped dataset, with theoretical guarantees.
Adversarial Training (AT) has proven to be an effective method for instilling strong adversarial robustness in deep neural networks.
To further strengthen the results, we co-design a customized ML model, FLNet, together with its personalization scheme for the decentralized training scenario.
In this work, we propose FedCor, an FL framework built on a correlation-based client selection strategy, to boost the convergence rate of FL.
In this work, we propose SVD training, the first method to explicitly achieve low-rank DNNs during training without applying SVD at every step.
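To make the idea of low-rank training without per-step SVD concrete, here is a minimal, hypothetical sketch (not the paper's actual implementation): a weight matrix is kept in factored form U·diag(s)·Vᵀ and applied directly through its factors, so the full matrix is never reconstructed and no SVD is recomputed during the forward pass. The function name `matvec_lowrank` and the list-based layout are illustrative assumptions.

```python
# Hypothetical sketch: store a weight matrix only as its low-rank factors
# U (m x r), s (length r), V (n x r) and apply W = U @ diag(s) @ V^T to a
# vector without ever materializing W or re-running SVD per step.

def matvec_lowrank(U, s, V, x):
    """Compute (U @ diag(s) @ V^T) @ x using only the factors.

    U: m x r as a list of rows; s: length-r list of scale factors;
    V: n x r as a list of rows; x: length-n input vector.
    Cost is O((m + n) * r) instead of O(m * n) for the dense product.
    """
    r = len(s)
    # t = V^T @ x, a length-r vector
    t = [sum(V[j][k] * x[j] for j in range(len(x))) for k in range(r)]
    # scale each component by its singular value
    t = [s[k] * t[k] for k in range(r)]
    # y = U @ t, a length-m vector
    return [sum(U[i][k] * t[k] for k in range(r)) for i in range(len(U))]


# Rank-1 example: with U = [[1],[2]], s = [1], V = [[1],[2]] the implied
# dense matrix is W = [[1, 2], [2, 4]], so W @ [1, 1] = [3, 6].
y = matvec_lowrank([[1], [2]], [1], [[1], [2]], [1, 1])
print(y)  # → [3, 6]
```

In a training loop, the factors U, s, V would be the trainable parameters themselves, which is what removes the need to decompose a dense weight at each step.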
In addition, we theoretically prove that optimizing low-level skills with this auxiliary reward increases the task return of the joint policy.