Search Results for author: Yunfeng Shao

Found 31 papers, 4 papers with code

MAP: Model Aggregation and Personalization in Federated Learning with Incomplete Classes

no code implementations14 Apr 2024 Xin-Chun Li, Shaoming Song, Yinchuan Li, Bingshuai Li, Yunfeng Shao, Yang Yang, De-Chuan Zhan

For better model personalization, we point out that the hard-won personalized models are not well exploited and propose an "inherited private model" to store the personalization experience.

Federated Learning

ECLM: Efficient Edge-Cloud Collaborative Learning with Continuous Environment Adaptation

no code implementations18 Nov 2023 Yan Zhuang, Zhenzhe Zheng, Yunfeng Shao, Bingshuai Li, Fan Wu, Guihai Chen

In this paper, we propose ECLM, an edge-cloud collaborative learning framework for rapid model adaptation for dynamic edge environments.

Understanding Prompt Tuning for V-L Models Through the Lens of Neural Collapse

no code implementations28 Jun 2023 Didi Zhu, Zexi Li, Min Zhang, Junkun Yuan, Yunfeng Shao, Jiashuo Liu, Kun Kuang, Yinchuan Li, Chao Wu

We find that the neural-collapse (NC) optimality of text-to-image representations correlates positively with downstream generalizability, an effect that becomes more pronounced under class-imbalance settings.

GFlowNets with Human Feedback

no code implementations11 May 2023 Yinchuan Li, Shuang Luo, Yunfeng Shao, Jianye Hao

We propose the GFlowNets with Human Feedback (GFlowHF) framework to improve the exploration ability when training AI models.

Generalized Universal Domain Adaptation with Generative Flow Networks

no code implementations8 May 2023 Didi Zhu, Yinchuan Li, Yunfeng Shao, Jianye Hao, Fei Wu, Kun Kuang, Jun Xiao, Chao Wu

We introduce a new problem in unsupervised domain adaptation, termed as Generalized Universal Domain Adaptation (GUDA), which aims to achieve precise prediction of all target labels including unknown categories.

Universal Domain Adaptation Unsupervised Domain Adaptation

Generative Flow Networks for Precise Reward-Oriented Active Learning on Graphs

no code implementations24 Apr 2023 Yinchuan Li, Zhigang Li, Wenqian Li, Yunfeng Shao, Yan Zheng, Jianye Hao

Many score-based active learning methods have been successfully applied to graph-structured data, aiming to reduce the number of labels required while improving the performance of graph neural networks, based on predefined score functions.

Active Learning
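The score-based selection that this paper moves beyond can be sketched minimally as follows; predictive entropy is one classic predefined score function, and the node indices and budget here are illustrative assumptions, not details from the paper.

```python
import numpy as np

def entropy_score(probs):
    """Predictive entropy -- one classic predefined score function."""
    p = np.clip(probs, 1e-12, 1.0)
    return -np.sum(p * np.log(p), axis=1)

def select_nodes(probs, unlabeled, budget):
    """Hypothetical score-based selection: label the `budget` unlabeled
    nodes whose current GNN predictions are most uncertain."""
    scores = entropy_score(probs[unlabeled])
    order = np.argsort(scores)[::-1]  # highest entropy first
    return [unlabeled[i] for i in order[:budget]]

# Toy class-probability predictions for three unlabeled nodes
probs = np.array([[0.9, 0.1], [0.5, 0.5], [0.7, 0.3]])
picked = select_nodes(probs, unlabeled=[0, 1, 2], budget=1)
# the maximally uncertain node (index 1) is selected
```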

Multi-agent Policy Reciprocity with Theoretical Guarantee

no code implementations12 Apr 2023 Haozhi Wang, Yinchuan Li, Qing Wang, Yunfeng Shao, Jianye Hao

We then define an adjacency space for mismatched states and design a plug-and-play module for value iteration, which enables agents to infer more precise returns.

Continuous Control Multi-agent Reinforcement Learning +1

Federated Learning via Variational Bayesian Inference: Personalization, Sparsity and Clustering

no code implementations8 Mar 2023 Xu Zhang, Wenpeng Li, Yunfeng Shao, Yinchuan Li

To cope with heterogeneous data across clients, we propose a clustered Bayesian FL model named cFedbayes by learning different prior distributions for different clients.

Bayesian Inference Clustering +1

Model-Based Offline Reinforcement Learning with Pessimism-Modulated Dynamics Belief

3 code implementations13 Oct 2022 Kaiyang Guo, Yunfeng Shao, Yanhui Geng

To make this practical, we further devise an offline RL algorithm to approximately find the solution.

 Ranked #1 on D4RL

D4RL Offline RL +2

Asymmetric Temperature Scaling Makes Larger Networks Teach Well Again

no code implementations10 Oct 2022 Xin-Chun Li, Wen-Shu Fan, Shaoming Song, Yinchuan Li, Bingshuai Li, Yunfeng Shao, De-Chuan Zhan

Complex teachers tend to be over-confident and traditional temperature scaling limits the efficacy of class discriminability, resulting in less discriminative wrong class probabilities.

Knowledge Distillation
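The contrast between a single shared temperature and per-class temperatures can be sketched as below. This is a minimal illustration of the idea, assuming one temperature for the target class and another for the wrong classes; the paper's exact formulation of asymmetric temperature scaling may differ.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def asymmetric_temperature_scaling(logits, target, t_correct=4.0, t_wrong=1.0):
    """Hypothetical sketch: soften the target-class logit with a large
    temperature while leaving the wrong-class logits sharper, so an
    over-confident teacher still yields discriminative wrong-class
    probabilities."""
    t = np.where(np.arange(len(logits)) == target, t_correct, t_wrong)
    return softmax(logits / t)

teacher_logits = np.array([8.0, 2.0, 1.0, -1.0])  # over-confident teacher
uniform = softmax(teacher_logits / 4.0)           # standard temperature scaling
asym = asymmetric_temperature_scaling(teacher_logits, target=0)
```

With a single temperature, the ratio between wrong-class probabilities is flattened along with the target class; per-class temperatures preserve the teacher's ranking information among wrong classes.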

On the Convergence Theory of Meta Reinforcement Learning with Personalized Policies

no code implementations21 Sep 2022 Haozhi Wang, Qing Wang, Yunfeng Shao, Dong Li, Jianye Hao, Yinchuan Li

Modern meta-reinforcement learning (Meta-RL) methods are mainly developed based on model-agnostic meta-learning, which performs policy gradient steps across tasks to maximize policy performance.

Continuous Control Meta-Learning +3

To Store or Not? Online Data Selection for Federated Learning with Limited Storage

no code implementations1 Sep 2022 Chen Gong, Zhenzhe Zheng, Yunfeng Shao, Bingshuai Li, Fan Wu, Guihai Chen

We first define a new data valuation metric for data evaluation and selection in FL, with theoretical guarantees for simultaneously speeding up model convergence and enhancing final model accuracy.

Data Valuation Federated Learning +4
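A generic shape for such on-device selection is sketched below. The gradient-alignment score used here is a common heuristic chosen for illustration, not the paper's metric, and all names and the capacity value are assumptions.

```python
import numpy as np

def valuation(grad_sample, grad_global):
    """Hypothetical data-valuation score (not the paper's exact metric):
    a sample is worth storing if its gradient aligns with the current
    global update direction."""
    denom = np.linalg.norm(grad_sample) * np.linalg.norm(grad_global)
    return float(grad_sample @ grad_global / denom) if denom else 0.0

def select_for_storage(grads, grad_global, capacity):
    """Keep the `capacity` highest-valued samples under limited storage."""
    scores = [valuation(g, grad_global) for g in grads]
    return sorted(np.argsort(scores)[::-1][:capacity].tolist())

kept = select_for_storage(
    [np.array([1.0, 0.0]), np.array([-1.0, 0.0]), np.array([0.7, 0.7])],
    grad_global=np.array([1.0, 0.0]), capacity=2)
# samples whose gradients point with the global update are retained
```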

Tensor Decomposition based Personalized Federated Learning

no code implementations27 Aug 2022 Qing Wang, Jing Jin, Xiaofeng Liu, Huixuan Zong, Yunfeng Shao, Yinchuan Li

Federated learning (FL) is a new distributed machine learning framework that can achieve reliable collaborative training without collecting users' private data.

Model Optimization Personalized Federated Learning +1
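The collaborative-training loop that this line of work builds on can be sketched with classic federated averaging; this is the standard FedAvg aggregation, not the paper's tensor-decomposition method.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Minimal sketch of federated averaging: the server combines client
    model updates weighted by local dataset size, so raw data never
    leaves the clients."""
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)
    return (stacked * (sizes / sizes.sum())[:, None]).sum(axis=0)

# Two clients with different amounts of local data
global_w = fedavg([np.array([1.0, 0.0]), np.array([0.0, 1.0])], [3, 1])
```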

DP$^2$-VAE: Differentially Private Pre-trained Variational Autoencoders

no code implementations5 Aug 2022 Dihong Jiang, Guojun Zhang, Mahdi Karami, Xi Chen, Yunfeng Shao, YaoLiang Yu

Similar to other differentially private (DP) learners, the major challenge for DP generative models (DPGM) is also how to achieve a subtle balance between utility and privacy.

S2RL: Do We Really Need to Perceive All States in Deep Multi-Agent Reinforcement Learning?

no code implementations20 Jun 2022 Shuang Luo, Yinchuan Li, Jiahui Li, Kun Kuang, Furui Liu, Yunfeng Shao, Chao Wu

To this end, we propose a sparse state based MARL (S2RL) framework, which utilizes a sparse attention mechanism to discard irrelevant information in local observations.

Multi-agent Reinforcement Learning Reinforcement Learning (RL) +2
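The core mechanism, keeping only the most relevant parts of a local observation, can be sketched as top-k sparse attention. The top-k masking here is one common way to sparsify attention, assumed for illustration; S2RL's exact mechanism may differ.

```python
import numpy as np

def sparse_attention(scores, k=2):
    """Hypothetical sketch of the sparse-attention idea: keep only the
    top-k attention scores and renormalize, discarding the rest of the
    local observation as irrelevant."""
    idx = np.argsort(scores)[:-k]  # indices of all but the k largest
    masked = scores.astype(float).copy()
    masked[idx] = -np.inf          # mask out low-score entries
    e = np.exp(masked - masked.max())
    return e / e.sum()

weights = sparse_attention(np.array([3.0, 0.1, 2.5, -1.0]), k=2)
# only the two highest-scoring entries receive non-zero weight
```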

Personalized Federated Learning via Variational Bayesian Inference

1 code implementation16 Jun 2022 Xu Zhang, Yinchuan Li, Wenpeng Li, Kaiyang Guo, Yunfeng Shao

Federated learning faces huge challenges from model overfitting due to scarce local data and statistical diversity among clients.

Bayesian Inference Personalized Federated Learning +1

Sparse Federated Learning with Hierarchical Personalized Models

no code implementations25 Mar 2022 Xiaofeng Liu, Qing Wang, Yunfeng Shao, Yinchuan Li

To this end, we propose a personalized FL algorithm using a hierarchical proximal mapping based on the Moreau envelope, named sparse federated learning with hierarchical personalized models (sFedHP), which significantly improves global model performance in the face of diverse data.

Autonomous Vehicles Federated Learning
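The proximal-mapping building block can be sketched with the textbook case: the proximal operator of the L1 norm is soft-thresholding. This is a standard illustration of Moreau-envelope-based updates, not the paper's hierarchical mapping; the step size and threshold are assumed.

```python
import numpy as np

def prox_l1(v, lam):
    """Proximal operator of lam * ||.||_1 (soft-thresholding): weights
    below the threshold are set exactly to zero, yielding sparsity."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def personalize(global_w, client_grad, lr=0.1, lam=0.05):
    """One hypothetical personalization step: a gradient step on the
    local loss, followed by a proximal step that sparsifies the result."""
    return prox_l1(global_w - lr * client_grad, lam)

w = personalize(np.array([0.5, -0.02, 0.3]), np.array([0.0, 0.0, 0.0]))
# the small middle weight falls below the threshold and is zeroed
```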

Towards Effective Clustered Federated Learning: A Peer-to-peer Framework with Adaptive Neighbor Matching

no code implementations23 Mar 2022 Zexi Li, Jiaxun Lu, Shuang Luo, Didi Zhu, Yunfeng Shao, Yinchuan Li, Zhimeng Zhang, Yongheng Wang, Chao Wu

In the literature, centralized clustered FL algorithms require the number of clusters to be assumed in advance and hence cannot effectively explore the latent relationships among clients.

Federated Learning

How global observation works in Federated Learning: Integrating vertical training into Horizontal Federated Learning

no code implementations2 Dec 2021 Shuo Wan, Jiaxun Lu, Pingyi Fan, Yunfeng Shao, Chenghui Peng, Khaled B. Letaief

In this paper, we develop a vertical-horizontal federated learning (VHFL) process, where the global feature is shared with the agents in a procedure similar to that of vertical FL without any extra communication rounds.

Federated Learning

Unified Group Fairness on Federated Learning

no code implementations9 Nov 2021 Fengda Zhang, Kun Kuang, Yuxuan Liu, Long Chen, Chao Wu, Fei Wu, Jiaxun Lu, Yunfeng Shao, Jun Xiao

We validate the advantages of the FMDA-M algorithm under various kinds of distribution-shift settings in experiments, and the results show that FMDA-M outperforms existing fair FL algorithms on unified group fairness.

Attribute Fairness +1

Preliminary Steps Towards Federated Sentiment Classification

no code implementations26 Jul 2021 Xin-Chun Li, Lan Li, De-Chuan Zhan, Yunfeng Shao, Bingshuai Li, Shaoming Song

Automatically mining the sentiment tendency contained in natural language is fundamental research for many artificial intelligence applications, where solutions alternate with challenges.

Classification Dimensionality Reduction +4

Domain Adaptation without Model Transferring

no code implementations21 Jul 2021 Kunhong Wu, Yucheng Shi, Yahong Han, Yunfeng Shao, Bingshuai Li, Qi Tian

Existing unsupervised domain adaptation (UDA) methods can achieve promising performance without transferring data from the source domain to the target domain.

Unsupervised Domain Adaptation

Structured Directional Pruning via Perturbation Orthogonal Projection

no code implementations12 Jul 2021 Yinchuan Li, Xiaofeng Liu, Yunfeng Shao, Qing Wang, Yanhui Geng

Structured pruning is an effective compression technique to reduce the computation of neural networks, which is usually achieved by adding perturbations to reduce network parameters at the cost of slightly increasing training loss.
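For contrast, a minimal structured-pruning sketch is shown below. It removes whole output channels by their L2 norm, a common baseline criterion; the paper's perturbation orthogonal projection is not reproduced here, and the keep ratio and tensor shape are illustrative assumptions.

```python
import numpy as np

def prune_channels(weight, keep_ratio=0.5):
    """Hypothetical structured-pruning sketch: drop entire output
    channels with the smallest L2 norm, so the remaining network stays
    dense and hardware-friendly."""
    norms = np.linalg.norm(weight.reshape(weight.shape[0], -1), axis=1)
    n_keep = max(1, int(round(keep_ratio * weight.shape[0])))
    keep = np.sort(np.argsort(norms)[::-1][:n_keep])
    return weight[keep], keep

w = np.arange(24, dtype=float).reshape(4, 3, 2)  # 4 output channels
pruned, kept = prune_channels(w, keep_ratio=0.5)
# the two largest-norm channels survive; the tensor shrinks structurally
```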

Sparse Personalized Federated Learning

no code implementations12 Jul 2021 Xiaofeng Liu, Yinchuan Li, Qing Wang, Xu Zhang, Yunfeng Shao, Yanhui Geng

By incorporating an approximated L1-norm and the correlation between client models and the global model into the standard FL loss function, performance on statistically diverse data is improved, and the communication and computation loads required in the network are reduced compared with non-sparse FL.

Personalized Federated Learning
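The composite objective described above can be sketched as follows. The smooth L1 approximation shown is one common choice, and the client-global coupling is written here as a squared distance rather than the paper's correlation term; all coefficients are illustrative assumptions.

```python
import numpy as np

def smooth_l1_approx(w, eps=1e-3):
    """A smooth approximation of ||w||_1 (one common choice; the
    paper's exact approximation may differ)."""
    return np.sum(np.sqrt(w ** 2 + eps ** 2))

def personalized_loss(local_loss, w_client, w_global, lam=0.01, mu=0.1):
    """Hypothetical composite objective: local task loss + sparsity
    penalty + a term coupling the client model to the global model."""
    sparsity = lam * smooth_l1_approx(w_client)
    coupling = mu * np.sum((w_client - w_global) ** 2)
    return local_loss + sparsity + coupling

loss = personalized_loss(1.0, np.array([1.0, 0.0]), np.array([0.5, 0.5]))
# task loss is augmented by small sparsity and coupling penalties
```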

Convergence Analysis and System Design for Federated Learning over Wireless Networks

no code implementations30 Apr 2021 Shuo Wan, Jiaxun Lu, Pingyi Fan, Yunfeng Shao, Chenghui Peng, Khaled B. Letaief

Federated learning (FL) has recently emerged as an important and promising learning scheme in IoT, enabling devices to jointly learn a model without sharing their raw data sets.

Federated Learning Scheduling

Loosely Coupled Federated Learning Over Generative Models

no code implementations28 Sep 2020 Shaoming Song, Yunfeng Shao, Jian Li

This paper proposes Loosely Coupled Federated Learning (LC-FL), a framework using generative models as transmission media to achieve low communication cost and heterogeneous federated learning.

BIG-bench Machine Learning Federated Learning
