no code implementations • 14 Apr 2024 • Xin-Chun Li, Shaoming Song, Yinchuan Li, Bingshuai Li, Yunfeng Shao, Yang Yang, De-Chuan Zhan
For better model personalization, we point out that the hard-won personalized models are not well exploited and propose "inherited private model" to store the personalization experience.
no code implementations • 18 Nov 2023 • Yan Zhuang, Zhenzhe Zheng, Yunfeng Shao, Bingshuai Li, Fan Wu, Guihai Chen
In this paper, we propose ECLM, an edge-cloud collaborative learning framework for rapid model adaptation for dynamic edge environments.
no code implementations • 28 Jun 2023 • Didi Zhu, Zexi Li, Min Zhang, Junkun Yuan, Yunfeng Shao, Jiashuo Liu, Kun Kuang, Yinchuan Li, Chao Wu
It is found that NC optimality of text-to-image representations shows a positive correlation with downstream generalizability, an effect that is more pronounced under class-imbalance settings.
no code implementations • 11 May 2023 • Yinchuan Li, Shuang Luo, Yunfeng Shao, Jianye Hao
We propose the GFlowNets with Human Feedback (GFlowHF) framework to improve the exploration ability when training AI models.
no code implementations • 8 May 2023 • Didi Zhu, Yinchuan Li, Yunfeng Shao, Jianye Hao, Fei Wu, Kun Kuang, Jun Xiao, Chao Wu
We introduce a new problem in unsupervised domain adaptation, termed Generalized Universal Domain Adaptation (GUDA), which aims to achieve precise prediction of all target labels, including unknown categories.
no code implementations • 24 Apr 2023 • Yinchuan Li, Zhigang Li, Wenqian Li, Yunfeng Shao, Yan Zheng, Jianye Hao
Many score-based active learning methods have been successfully applied to graph-structured data, aiming to reduce the number of labels and achieve better performance of graph neural networks based on predefined score functions.
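A minimal sketch of the score-based selection this entry builds on, using predictive entropy as the predefined score (one common choice; the node predictions and budget here are illustrative, and the paper studies score functions beyond this):

```python
import numpy as np

def select_by_entropy(probs, budget=2):
    """Generic score-based active-learning selection: pick the `budget`
    unlabeled nodes whose predicted class distribution has the highest
    entropy, i.e. the most uncertain nodes."""
    probs = np.asarray(probs, dtype=float)
    ent = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    return np.argsort(ent)[-budget:][::-1]  # most uncertain first

preds = [[0.98, 0.01, 0.01],   # confident prediction
         [0.34, 0.33, 0.33],   # highly uncertain
         [0.70, 0.20, 0.10]]   # moderately uncertain
picked = select_by_entropy(preds, budget=2)  # -> nodes 1 and 2
```

In a GNN setting, `probs` would be the model's softmax outputs on the unlabeled nodes; the selected nodes are then sent for labeling.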
no code implementations • 12 Apr 2023 • Haozhi Wang, Yinchuan Li, Qing Wang, Yunfeng Shao, Jianye Hao
We then define an adjacency space for mismatched states and design a plug-and-play module for value iteration, which enables agents to infer more precise returns.
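The value-iteration backbone that such a plug-and-play module would hook into can be sketched on a tabular toy MDP (standard value iteration only; the adjacency-space correction for mismatched states is not reproduced here):

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, iters=200):
    """Standard tabular value iteration.
    P: transition tensor of shape (n_states, n_actions, n_states).
    R: reward matrix of shape (n_states, n_actions)."""
    n_states, _ = R.shape
    V = np.zeros(n_states)
    for _ in range(iters):
        Q = R + gamma * P @ V   # expected return per state-action pair
        V = Q.max(axis=1)       # greedy backup
    return V

# 2-state toy MDP: in state 0, action 1 moves to state 1 with reward 1;
# state 1 is absorbing with zero reward.
P = np.zeros((2, 2, 2))
P[0, 0, 0] = 1.0
P[0, 1, 1] = 1.0
P[1, :, 1] = 1.0
R = np.array([[0.0, 1.0], [0.0, 0.0]])
V = value_iteration(P, R)  # -> approximately [1.0, 0.0]
```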
no code implementations • 8 Mar 2023 • Xu Zhang, Wenpeng Li, Yunfeng Shao, Yinchuan Li
We propose a clustered Bayesian FL model named cFedbayes that learns different prior distributions for different clients.
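The clustering side of this idea can be sketched as a toy k-means over client model parameters, where each cluster centroid serves as the mean of that cluster's prior (a hypothetical illustration of per-cluster priors, not the cFedbayes algorithm itself):

```python
import numpy as np

def cluster_priors(client_means, n_clusters=2, n_iters=10):
    """Toy clustering of client model means; each centroid acts as the
    mean of a shared prior for the clients assigned to it."""
    # initialize from two well-separated clients for a stable toy demo
    centroids = client_means[[0, -1]].copy()
    assign = np.zeros(len(client_means), dtype=int)
    for _ in range(n_iters):
        # assign each client to the nearest cluster prior
        d = np.linalg.norm(client_means[:, None] - centroids[None], axis=-1)
        assign = d.argmin(axis=1)
        for k in range(n_clusters):
            if (assign == k).any():
                centroids[k] = client_means[assign == k].mean(axis=0)
    return centroids, assign

rng = np.random.default_rng(0)
# two groups of clients whose local optima sit near 0 and near 5
clients = np.vstack([rng.normal(0.0, 0.1, (5, 3)),
                     rng.normal(5.0, 0.1, (5, 3))])
priors, assign = cluster_priors(clients)
```

Each client would then regularize its posterior toward its cluster's prior rather than toward a single global prior.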
2 code implementations • 15 Oct 2022 • Wenqian Li, Yinchuan Li, Shengyu Zhu, Yunfeng Shao, Jianye Hao, Yan Pang
Causal discovery aims to uncover causal structure among a set of variables.
3 code implementations • 13 Oct 2022 • Kaiyang Guo, Yunfeng Shao, Yanhui Geng
To make this practical, we further devise an offline RL algorithm to approximately find the solution.
Ranked #1 on D4RL
no code implementations • 10 Oct 2022 • Xin-Chun Li, Wen-Shu Fan, Shaoming Song, Yinchuan Li, Bingshuai Li, Yunfeng Shao, De-Chuan Zhan
Complex teachers tend to be over-confident, and traditional temperature scaling limits the efficacy of class discriminability, resulting in less discriminative wrong-class probabilities.
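The over-confidence issue is easy to see numerically: with a temperature-scaled softmax, an over-confident teacher's wrong-class probabilities are nearly indistinguishable at T=1 but become discriminative at a higher temperature (the logits and temperatures below are illustrative):

```python
import numpy as np

def softmax_T(logits, T=1.0):
    """Temperature-scaled softmax."""
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()            # numerical stability
    e = np.exp(z)
    return e / e.sum()

# an over-confident teacher: one huge logit dominates
logits = [10.0, 4.0, 1.0]
p1 = softmax_T(logits, T=1.0)  # near one-hot: wrong classes carry ~no signal
p4 = softmax_T(logits, T=4.0)  # softened: wrong-class ranking becomes visible
```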
no code implementations • 21 Sep 2022 • Haozhi Wang, Qing Wang, Yunfeng Shao, Dong Li, Jianye Hao, Yinchuan Li
Modern meta-reinforcement learning (Meta-RL) methods are mainly developed based on model-agnostic meta-learning, which performs policy gradient steps across tasks to maximize policy performance.
no code implementations • 1 Sep 2022 • Chen Gong, Zhenzhe Zheng, Yunfeng Shao, Bingshuai Li, Fan Wu, Guihai Chen
We first define a new data valuation metric for data evaluation and selection in FL with theoretical guarantees for speeding up model convergence and enhancing final model accuracy, simultaneously.
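As a hedged illustration of what a data-valuation score for client selection can look like, here is a cosine-alignment score between a client's update and the aggregate update direction (a hypothetical metric for illustration only; the paper defines its own metric with theoretical guarantees):

```python
import numpy as np

def client_value(client_update, global_update):
    """Hypothetical valuation score: cosine alignment between a client's
    update and the aggregate direction. Clients whose data push the model
    the same way as the aggregate score higher."""
    a = np.asarray(client_update, dtype=float)
    b = np.asarray(global_update, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

aligned = client_value([1.0, 1.0], [2.0, 2.0])    # same direction -> ~ +1
opposed = client_value([-1.0, -1.0], [2.0, 2.0])  # opposite direction -> ~ -1
```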
no code implementations • 27 Aug 2022 • Qing Wang, Jing Jin, Xiaofeng Liu, Huixuan Zong, Yunfeng Shao, Yinchuan Li
Federated learning (FL) is a new distributed machine learning framework that can achieve reliable collaborative training without collecting users' private data.
no code implementations • 5 Aug 2022 • Dihong Jiang, Guojun Zhang, Mahdi Karami, Xi Chen, Yunfeng Shao, YaoLiang Yu
Similar to other differentially private (DP) learners, the major challenge for DPGM is also how to achieve a subtle balance between utility and privacy.
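The utility/privacy knob common to DP learners can be sketched with a DP-SGD-style step: clip each per-example gradient to bound sensitivity, then add Gaussian noise to the sum (a generic sketch of the standard mechanism, not the DPGM method; the clip and noise values are illustrative):

```python
import numpy as np

def privatize_gradients(grads, clip=1.0, sigma=1.0, rng=None):
    """Clip each per-example gradient to L2 norm <= clip, sum them,
    and add Gaussian noise scaled by sigma * clip."""
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip / max(norm, 1e-12)))
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, sigma * clip, size=total.shape)
    return total + noise

grads = [np.array([3.0, 4.0]),    # norm 5 -> rescaled to norm 1
         np.array([0.3, 0.4])]    # norm 0.5 -> left untouched
# sigma=0 isolates the clipping effect for the demo
noisy = privatize_gradients(grads, clip=1.0, sigma=0.0)  # -> [0.9, 1.2]
```

Larger `sigma` buys more privacy at the cost of utility; smaller `clip` bounds each example's influence more tightly but discards gradient magnitude.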
no code implementations • 20 Jun 2022 • Shuang Luo, Yinchuan Li, Jiahui Li, Kun Kuang, Furui Liu, Yunfeng Shao, Chao Wu
To this end, we propose a sparse state based MARL (S2RL) framework, which utilizes a sparse attention mechanism to discard irrelevant information in local observations.
Multi-agent Reinforcement Learning • Reinforcement Learning (RL) +2
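A generic top-k sparse attention sketch of the mechanism this entry describes — keep only the largest attention scores per agent and renormalize, discarding the rest as irrelevant (the exact S2RL sparsity mechanism may differ; `k` and the scores are illustrative):

```python
import numpy as np

def sparse_attention(scores, k=2):
    """Top-k sparse attention: zero out all but the k largest scores
    per row, then apply softmax over the kept entries only."""
    scores = np.asarray(scores, dtype=float)
    out = np.zeros_like(scores)
    for i, row in enumerate(scores):
        keep = np.argsort(row)[-k:]        # indices of the k largest scores
        w = np.exp(row[keep] - row[keep].max())
        out[i, keep] = w / w.sum()         # renormalized sparse weights
    return out

# one agent attending over four entities; two entities are irrelevant
attn = sparse_attention([[3.0, 0.1, 2.9, 0.2]], k=2)
```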
1 code implementation • 17 Jun 2022 • Xin-Chun Li, Jin-Lin Tang, Shaoming Song, Bingshuai Li, Yinchuan Li, Yunfeng Shao, Le Gan, De-Chuan Zhan
Federated KWS (FedKWS) could serve as a solution without directly sharing users' data.
1 code implementation • 16 Jun 2022 • Xu Zhang, Yinchuan Li, Wenpeng Li, Kaiyang Guo, Yunfeng Shao
Federated learning faces huge challenges from model overfitting due to the lack of data and statistical diversity among clients.
no code implementations • CVPR 2022 • Xin-Chun Li, Yi-Chu Xu, Shaoming Song, Bingshuai Li, Yinchuan Li, Yunfeng Shao, De-Chuan Zhan
The permutation invariance property of neural networks and the non-i.i.d.
no code implementations • 25 Mar 2022 • Xiaofeng Liu, Qing Wang, Yunfeng Shao, Yinchuan Li
To this end, we propose a personalized FL algorithm using a hierarchical proximal mapping based on the Moreau envelope, named sparse federated learning with hierarchical personalized models (sFedHP), which significantly improves global model performance on diverse data.
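A minimal sketch of a Moreau-envelope-style proximal personalization step, in the spirit of this line of work: each client minimizes its local loss plus a quadratic term tying it to the global model (a generic illustration, not the exact sFedHP update; the toy quadratic objective and coefficients are assumptions):

```python
import numpy as np

def prox_step(theta_init, w_global, grad_fn, lam=1.0, lr=0.1, steps=50):
    """Personalized update: minimize f_i(theta) + (lam/2) * ||theta - w||^2
    by gradient descent. The quadratic term is the proximal/Moreau part."""
    theta = theta_init.copy()
    for _ in range(steps):
        g = grad_fn(theta) + lam * (theta - w_global)
        theta -= lr * g
    return theta

# toy local objective f_i(theta) = 0.5 * ||theta - c_i||^2
c_i = np.array([4.0, 0.0])          # this client's local optimum
w = np.array([0.0, 0.0])            # current global model
theta = prox_step(np.zeros(2), w, grad_fn=lambda t: t - c_i, lam=1.0)
# closed form here: theta* = (c_i + lam * w) / (1 + lam) = [2, 0]
```

The personalized model lands between the client's local optimum and the global model, with `lam` controlling how strongly clients stay near the global solution.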
no code implementations • 23 Mar 2022 • Zexi Li, Jiaxun Lu, Shuang Luo, Didi Zhu, Yunfeng Shao, Yinchuan Li, Zhimeng Zhang, Yongheng Wang, Chao Wu
In the literature, centralized clustered FL algorithms require the assumption of the number of clusters and hence are not effective enough to explore the latent relationships among clients.
no code implementations • 2 Dec 2021 • Shuo Wan, Jiaxun Lu, Pingyi Fan, Yunfeng Shao, Chenghui Peng, Khaled B. Letaief
In this paper, we develop a vertical-horizontal federated learning (VHFL) process, where the global feature is shared with the agents in a procedure similar to that of vertical FL without any extra communication rounds.
no code implementations • 9 Nov 2021 • Fengda Zhang, Kun Kuang, Yuxuan Liu, Long Chen, Chao Wu, Fei Wu, Jiaxun Lu, Yunfeng Shao, Jun Xiao
We validate the advantages of the FMDA-M algorithm with various kinds of distribution shift settings in experiments, and the results show that FMDA-M algorithm outperforms the existing fair FL algorithms on unified group fairness.
no code implementations • 26 Jul 2021 • Xin-Chun Li, Lan Li, De-Chuan Zhan, Yunfeng Shao, Bingshuai Li, Shaoming Song
Automatically mining the sentiment tendency contained in natural language is fundamental research for several artificial intelligence applications, where solutions alternate with challenges.
no code implementations • 26 Jul 2021 • Xin-Chun Li, Le Gan, De-Chuan Zhan, Yunfeng Shao, Bingshuai Li, Shaoming Song
We suggest that the proposed methods can serve as a preliminary attempt to explore where to privatize in a novel non-i.i.d. scenario.
no code implementations • 21 Jul 2021 • Kunhong Wu, Yucheng Shi, Yahong Han, Yunfeng Shao, Bingshuai Li, Qi Tian
Existing unsupervised domain adaptation (UDA) methods can achieve promising performance without transferring data from the source domain to the target domain.
no code implementations • 12 Jul 2021 • Xiaofeng Liu, Yinchuan Li, Qing Wang, Xu Zhang, Yunfeng Shao, Yanhui Geng
By incorporating an approximated L1-norm and the correlation between client models and the global model into the standard FL loss function, performance on statistically diverse data is improved, and the communication and computation loads required in the network are reduced compared with non-sparse FL.
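A sketch of what such a regularized FL objective can look like: the local loss plus a smooth (differentiable) surrogate for the L1 sparsity penalty and a quadratic term coupling the client model to the global model (coefficients and the smoothing constant are illustrative assumptions, not values from the paper):

```python
import numpy as np

def sparse_fl_loss(local_loss, theta, w_global, alpha=0.01, beta=0.5, eps=1e-3):
    """Illustrative FL objective with a smoothed L1 sparsity penalty and a
    client-to-global proximal term."""
    smooth_l1 = np.sum(np.sqrt(theta ** 2 + eps))  # differentiable |.| surrogate
    prox = 0.5 * np.sum((theta - w_global) ** 2)   # ties client to global model
    return local_loss + alpha * smooth_l1 + beta * prox

w = np.zeros(2)
sparse_theta = np.array([0.0, 3.0])
dense_theta = np.array([3.0, 3.0])
# the sparser model incurs a smaller total penalty at equal local loss
loss_sparse = sparse_fl_loss(1.0, sparse_theta, w)
loss_dense = sparse_fl_loss(1.0, dense_theta, w)
```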
no code implementations • 12 Jul 2021 • Yinchuan Li, Xiaofeng Liu, Yunfeng Shao, Qing Wang, Yanhui Geng
Structured pruning is an effective compression technique to reduce the computation of neural networks, which is usually achieved by adding perturbations to reduce network parameters at the cost of slightly increasing training loss.
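A minimal structured-pruning sketch: rank output channels by their L2 norm and zero out entire low-norm channels, so whole units can be dropped from computation (a generic magnitude criterion for illustration, not the perturbation-based method this entry proposes):

```python
import numpy as np

def prune_channels(W, keep_ratio=0.5):
    """Structured pruning by channel magnitude: zero out whole rows
    (output channels) of W whose L2 norm is smallest."""
    norms = np.linalg.norm(W, axis=1)
    k = max(1, int(len(norms) * keep_ratio))
    keep = np.argsort(norms)[-k:]          # indices of the k strongest channels
    mask = np.zeros(len(norms), dtype=bool)
    mask[keep] = True
    return W * mask[:, None], mask

W = np.array([[1.0, 1.0],
              [0.01, 0.0],
              [2.0, 2.0],
              [0.0, 0.02]])
W_pruned, mask = prune_channels(W, keep_ratio=0.5)
```

Because entire rows are zeroed (rather than scattered weights), the pruned channels can be physically removed, reducing actual computation rather than just the parameter count.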
no code implementations • 30 Apr 2021 • Shuo Wan, Jiaxun Lu, Pingyi Fan, Yunfeng Shao, Chenghui Peng, Khaled B. Letaief
Federated learning (FL) has recently emerged as an important and promising learning scheme in IoT, enabling devices to jointly learn a model without sharing their raw data sets.
no code implementations • 27 Apr 2021 • Yuandu Lai, Yucheng Shi, Yahong Han, Yunfeng Shao, Meiyu Qi, Bingshuai Li
In this paper, we explore uncertainty in deep learning to construct prediction intervals.
no code implementations • 28 Sep 2020 • Shaoming Song, Yunfeng Shao, Jian Li
This paper proposes Loosely Coupled Federated Learning (LC-FL), a framework using generative models as transmission media to achieve low communication cost and heterogeneous federated learning.