1 code implementation • 8 Oct 2024 • Wenhao Wang, Xiaoyu Liang, Rui Ye, Jingyi Chai, Siheng Chen, Yanfeng Wang
The success of large language models (LLMs) has encouraged many parties to fine-tune LLMs on their own private data.
no code implementations • 11 Sep 2024 • Rui Ye, Rui Ge, Yuchi Fengting, Jingyi Chai, Yanfeng Wang, Siheng Chen
Federated instruction tuning enables multiple clients to collaboratively fine-tune a shared large language model (LLM) that can follow humans' instructions without directly sharing raw data.
no code implementations • 15 Jun 2024 • Rui Ye, Jingyi Chai, Xiangrui Liu, Yaodong Yang, Yanfeng Wang, Siheng Chen
Federated learning (FL) enables multiple parties to collaboratively fine-tune a large language model (LLM) without the need for direct data sharing.
2 code implementations • 7 Jun 2024 • Rui Ye, Rui Ge, Xinyu Zhu, Jingyi Chai, Yaxin Du, Yang Liu, Yanfeng Wang, Siheng Chen
Addressing this, we propose FedLLM-Bench, which involves 8 training methods, 4 training datasets, and 6 evaluation metrics, to offer a comprehensive testbed for the FedLLM community.
no code implementations • 15 May 2024 • Zhuofu Tao, Yichen Shi, Yiru Huo, Rui Ye, Zonghang Li, Li Huang, Chen Wu, Na Bai, Zhiping Yu, Ting-Jung Lin, Lei He
Today's analog/mixed-signal (AMS) integrated circuit (IC) designs demand substantial manual intervention.
no code implementations • 11 Mar 2024 • Shuo Tang, Rui Ye, Chenxin Xu, Xiaowen Dong, Siheng Chen, Yanfeng Wang
In this paper, we propose DeLAMA, a decentralized multi-agent lifelong collaborative learning algorithm with dynamic collaboration graphs.
2 code implementations • 10 Feb 2024 • Rui Ye, Wenhao Wang, Jingyi Chai, Dihan Li, Zexi Li, Yinda Xu, Yaxin Du, Yanfeng Wang, Siheng Chen
Trained on massive publicly available data, large language models (LLMs) have demonstrated tremendous success across various fields.
no code implementations • 8 Feb 2024 • Xianghe Pang, Shuo Tang, Rui Ye, Yuxin Xiong, Bolun Zhang, Yanfeng Wang, Siheng Chen
Drawing from the sociological insight that acknowledging all parties' concerns is a key factor in shaping human values, this paper proposes a novel direction to align LLMs by themselves: social scene simulation.
1 code implementation • 23 Jan 2024 • Shaoheng Fang, Rui Ye, Wenhao Wang, Zuhong Liu, Yuxiao Wang, Yafei Wang, Siheng Chen, Yanfeng Wang
In this paper, we introduce FedRSU, an innovative federated learning framework for self-supervised scene flow estimation.
1 code implementation • 16 Jan 2024 • Kexin Lv, Rui Ye, Xiaolin Huang, Jie Yang, Siheng Chen
Personalized federated learning aims to address data heterogeneity across local clients in federated learning.
no code implementations • 30 Dec 2023 • Peihua Mai, Ran Yan, Rui Ye, Youjia Yang, Yinchuan Li, Yan Pang
In response, we present ConfusionPrompt, a novel private LLM inference framework designed to obfuscate the server by: (i) decomposing the prompt into sub-prompts, and (ii) generating pseudo prompts along with the genuine sub-prompts as input to the online LLM.
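The decompose-and-obfuscate step can be sketched roughly as follows; this is a minimal illustration under our own assumptions (function name, tagging scheme, and recomposition bookkeeping are hypothetical, not ConfusionPrompt's actual implementation):

```python
import random

def obfuscate(sub_prompts, pseudo_prompts, seed=0):
    # Interleave genuine sub-prompts with pseudo (decoy) prompts so the
    # server cannot tell which queries carry the real intent; the client
    # keeps the positions of the genuine ones to recompose answers locally.
    rng = random.Random(seed)
    tagged = [(p, True) for p in sub_prompts] + [(p, False) for p in pseudo_prompts]
    rng.shuffle(tagged)
    batch = [p for p, _ in tagged]
    real_positions = [i for i, (_, is_real) in enumerate(tagged) if is_real]
    return batch, real_positions
```

The client would then send the whole shuffled batch to the online LLM and use `real_positions` to pick out the responses it actually needs.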
no code implementations • 10 Dec 2023 • Rui Ye, Yaxin Du, Zhenyang Ni, Siheng Chen, Yanfeng Wang
FedCOG consists of two key components on the client side: complementary data generation, which generates data from the shared global model to complement the original dataset, and knowledge-distillation-based model training, which distills knowledge from the global model to the local model on the generated data to mitigate overfitting on the original heterogeneous dataset.
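The knowledge-distillation-based local training could look roughly like the loss below; this is a generic KD sketch under our own assumptions (temperature, loss weighting, and function names are hypothetical, not FedCOG's actual formulation):

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled, numerically stable softmax over the class axis.
    z = z / T
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def kd_local_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Cross-entropy on the local batch (original + generated data).
    p = softmax(student_logits)
    ce = -np.log(p[np.arange(len(labels)), labels]).mean()
    # KL(teacher || student) at temperature T, scaled by T^2, where the
    # frozen global model plays the teacher and the local model the student.
    ps, pt = softmax(student_logits, T), softmax(teacher_logits, T)
    kd = (pt * (np.log(pt) - np.log(ps))).sum(axis=1).mean() * T * T
    return alpha * ce + (1 - alpha) * kd
```

When the local model matches the global model exactly, the distillation term vanishes and only the cross-entropy remains.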
no code implementations • 10 Dec 2023 • Rui Ye, Xinyu Zhu, Jingyi Chai, Siheng Chen, Yanfeng Wang
In this paper, we propose a novel FL framework termed FedGC, designed to mitigate data heterogeneity issues by diversifying private data with generative content.
1 code implementation • 30 May 2023 • Rui Ye, Mingkai Xu, Jianyu Wang, Chenxin Xu, Siheng Chen, Yanfeng Wang
However, based on our empirical observations and theoretical analysis, we find that dataset size alone is not an optimal weighting indicator, and that the discrepancy between local and global category distributions could be a beneficial and complementary indicator for determining aggregation weights.
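A discrepancy-aware weighting along these lines can be sketched as below; the blending scheme, the L1 discrepancy, and the exponential closeness term are our own illustrative assumptions, not the paper's actual weighting rule:

```python
import numpy as np

def aggregation_weights(sizes, local_dists, global_dist, lam=0.5):
    # Blend the usual size-proportional FedAvg weight with a term that
    # favors clients whose category distribution is close to the global one.
    sizes = np.asarray(sizes, dtype=float)
    size_w = sizes / sizes.sum()
    # L1 discrepancy between each local category distribution and the global one.
    disc = np.array([np.abs(d - global_dist).sum() for d in local_dists])
    closeness = np.exp(-disc)
    disc_w = closeness / closeness.sum()
    w = (1 - lam) * size_w + lam * disc_w
    return w / w.sum()
```

With `lam=0` this reduces to plain size-proportional averaging; with `lam>0`, a client whose local distribution matches the global one receives extra weight.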
no code implementations • 29 Oct 2022 • Ziyu Shan, Qi Yang, Rui Ye, Yujie Zhang, Yiling Xu, Xiaozhong Xu, Shan Liu
To extract effective features for PCQA, we propose a new graph convolution kernel, i.e., GPAConv, which attentively captures the perturbation of structure and texture.
no code implementations • 14 Oct 2022 • Rui Ye, Zhenyang Ni, Chenxin Xu, Jianyu Wang, Siheng Chen, Yonina C. Eldar
This method attempts to mitigate the negative effects of data heterogeneity in FL by aligning each client's feature space.
1 code implementation • IEEE Transactions on Geoscience and Remote Sensing 2022 • Kuai Dai, Xutao Li, Yunming Ye, Shanshan Feng, Danyu Qin, Rui Ye
To address the sequential error accumulation issue, MSTCGAN adopts a parallel prediction framework to produce future image sequences from a one-hot time-condition input.
no code implementations • 3 Mar 2022 • Baoquan Zhang, Hao Jiang, Xutao Li, Shanshan Feng, Yunming Ye, Rui Ye
Then, resorting to the prior, we split each few-shot task into a set of subtasks with different concept levels and then perform class prediction via a decision tree model.
no code implementations • 9 Oct 2021 • Baoquan Zhang, Shanshan Feng, Xutao Li, Yunming Ye, Rui Ye
In this framework, a scene graph construction module is carefully designed to represent each test remote sensing image or each scene class as a scene graph, where the nodes represent the co-occurring objects while the edges capture the spatial correlations between them.
1 code implementation • 3 Oct 2021 • Chuyao Luo, Zheng Zhang, Rui Ye, Xutao Li, Yunming Ye
Natural disasters caused by heavy rainfall often cause huge losses of life and property.
1 code implementation • 26 Mar 2021 • Baoquan Zhang, Xutao Li, Shanshan Feng, Yunming Ye, Rui Ye
Although the existing meta-optimizers can also be adapted to our framework, they all overlook a crucial gradient bias issue, \emph{i.e.}, the mean-based gradient estimation is also biased on sparse data.