Search Results for author: Minxue Tang

Found 6 papers, 2 papers with code

Fed-CBS: A Heterogeneity-Aware Client Sampling Mechanism for Federated Learning via Class-Imbalance Reduction

no code implementations · 30 Sep 2022 · Jianyi Zhang, Ang Li, Minxue Tang, Jingwei Sun, Xiang Chen, Fan Zhang, Changyou Chen, Yiran Chen, Hai Li

Based on this measure, we also design a computationally efficient client sampling strategy, such that the actively selected clients generate a more class-balanced grouped dataset, with theoretical guarantees.

Federated Learning · Privacy Preserving
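The snippet above describes selecting clients so that their pooled data is as class-balanced as possible. A minimal toy sketch of that general idea (not the paper's actual mechanism; the imbalance measure and greedy loop here are illustrative stand-ins):

```python
import numpy as np

def imbalance(counts):
    """Distance of a label-count vector from the uniform distribution
    (illustrative stand-in for the paper's class-imbalance measure)."""
    p = counts / counts.sum()
    u = np.full_like(p, 1.0 / len(p))
    return np.linalg.norm(p - u)

def greedy_select(client_counts, k):
    """Greedily pick k clients whose combined label counts are most
    class-balanced under the measure above."""
    chosen = []
    total = np.zeros_like(client_counts[0], dtype=float)
    remaining = list(range(len(client_counts)))
    for _ in range(k):
        best = min(remaining, key=lambda i: imbalance(total + client_counts[i]))
        chosen.append(best)
        total += client_counts[best]
        remaining.remove(best)
    return chosen
```

For example, given one client holding only class 0 and another holding only class 1, the greedy step pairs them, since their union is perfectly balanced.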

FADE: Enabling Federated Adversarial Training on Heterogeneous Resource-Constrained Edge Devices

no code implementations · 8 Sep 2022 · Minxue Tang, Jianyi Zhang, Mingyuan Ma, Louis DiValentin, Aolin Ding, Amin Hassanzadeh, Hai Li, Yiran Chen

However, the high demand for memory capacity and computing power makes large-scale federated adversarial training infeasible on resource-constrained edge devices.

Adversarial Robustness · Federated Learning · +1

Towards Collaborative Intelligence: Routability Estimation based on Decentralized Private Data

no code implementations · 30 Mar 2022 · Jingyu Pan, Chen-Chia Chang, Zhiyao Xie, Ang Li, Minxue Tang, Tunhou Zhang, Jiang Hu, Yiran Chen

To further strengthen the results, we co-design a customized ML model FLNet and its personalization under the decentralized training scenario.

Federated Learning

FedCor: Correlation-Based Active Client Selection Strategy for Heterogeneous Federated Learning

no code implementations · CVPR 2022 · Minxue Tang, Xuefei Ning, Yitu Wang, Jingwei Sun, Yu Wang, Hai Li, Yiran Chen

In this work, we propose FedCor, an FL framework built on a correlation-based client selection strategy, to boost the convergence rate of FL.

Federated Learning
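FedCor's actual strategy models client correlations with Gaussian processes; as a heavily simplified toy stand-in, correlation-aware selection can be pictured as greedily adding the client least correlated with those already chosen, so the selected set covers diverse data:

```python
import numpy as np

def diverse_select(corr, k):
    """Toy correlation-aware selection (NOT FedCor's GP-based method):
    start from client 0 and repeatedly add the client least correlated,
    on average, with the clients already chosen."""
    n = corr.shape[0]
    chosen = [0]
    while len(chosen) < k:
        rest = [i for i in range(n) if i not in chosen]
        best = min(rest, key=lambda i: np.mean([abs(corr[i, j]) for j in chosen]))
        chosen.append(best)
    return chosen
```

With two highly correlated clients and one independent client, the sketch skips the redundant client and picks the independent one.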

Learning Low-rank Deep Neural Networks via Singular Vector Orthogonality Regularization and Singular Value Sparsification

1 code implementation · 20 Apr 2020 · Huanrui Yang, Minxue Tang, Wei Wen, Feng Yan, Daniel Hu, Ang Li, Hai Li, Yiran Chen

In this work, we propose SVD training, the first method to explicitly achieve low-rank DNNs during training without applying SVD on every step.
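The title spells out the two ingredients: keep a layer factorized as W = U·diag(s)·Vᵀ, regularize U and V toward orthonormal columns, and apply an L1 penalty to the singular values s to drive rank down. A minimal sketch of such a combined regularizer (the weights and function name are illustrative, not the paper's exact formulation):

```python
import numpy as np

def svd_training_penalty(U, s, V, ortho_w=1.0, sparse_w=0.01):
    """Regularizer for a layer parameterized as W = U @ diag(s) @ V.T:
    a Frobenius-norm penalty keeps the columns of U and V near
    orthonormal, and an L1 penalty on the singular values s encourages
    low rank (ortho_w / sparse_w are illustrative hyperparameters)."""
    r = len(s)
    ortho = (np.linalg.norm(U.T @ U - np.eye(r)) ** 2
             + np.linalg.norm(V.T @ V - np.eye(r)) ** 2)
    sparsity = np.abs(s).sum()
    return ortho_w * ortho + sparse_w * sparsity
```

When U and V already have orthonormal columns, the orthogonality term vanishes and only the singular-value sparsity term remains.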

Hierarchical Reinforcement Learning with Advantage-Based Auxiliary Rewards

1 code implementation · NeurIPS 2019 · Siyuan Li, Rui Wang, Minxue Tang, Chongjie Zhang

In addition, we also theoretically prove that optimizing low-level skills with this auxiliary reward will increase the task return for the joint policy.

Hierarchical Reinforcement Learning · reinforcement-learning · +1
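The snippet says low-level skills are optimized with an auxiliary reward derived from a high-level advantage. A schematic sketch of that shaping idea (the function names and the weighting coefficient eta are hypothetical, not the paper's exact definition):

```python
def high_level_advantage(q_value, value):
    """A(s, o) = Q(s, o) - V(s): how much better the selected skill o
    is than the high-level policy's average at state s."""
    return q_value - value

def shaped_low_level_reward(env_reward, q_value, value, eta=0.1):
    """Augment the low-level environment reward with a bonus
    proportional to the high-level advantage of the active skill,
    so improving the skill also raises the joint policy's return
    (schematic; eta is an illustrative weighting coefficient)."""
    return env_reward + eta * high_level_advantage(q_value, value)
```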
