no code implementations • 14 Jan 2025 • MiniMax, Aonian Li, Bangwei Gong, Bo Yang, Boji Shan, Chang Liu, Cheng Zhu, Chunhao Zhang, Congchao Guo, Da Chen, Dong Li, Enwei Jiao, Gengxin Li, Guojun Zhang, Haohai Sun, Houze Dong, Jiadai Zhu, Jiaqi Zhuang, Jiayuan Song, Jin Zhu, Jingtao Han, Jingyang Li, Junbin Xie, Junhao Xu, Junjie Yan, Kaishun Zhang, Kecheng Xiao, Kexi Kang, Le Han, Leyang Wang, Lianfei Yu, Liheng Feng, Lin Zheng, Linbo Chai, Long Xing, Meizhi Ju, Mingyuan Chi, Mozhi Zhang, Peikai Huang, Pengcheng Niu, Pengfei Li, Pengyu Zhao, Qi Yang, Qidi Xu, Qiexiang Wang, Qin Wang, Qiuhui Li, Ruitao Leng, Shengmin Shi, Shuqi Yu, Sichen Li, Songquan Zhu, Tao Huang, Tianrun Liang, Weigao Sun, Weixuan Sun, Weiyu Cheng, Wenkai Li, Xiangjun Song, Xiao Su, Xiaodong Han, Xinjie Zhang, Xinzhu Hou, Xu Min, Xun Zou, Xuyang Shen, Yan Gong, Yingjie Zhu, Yipeng Zhou, Yiran Zhong, Yongyi Hu, Yuanxiang Fan, Yue Yu, Yufeng Yang, Yuhao Li, Yunan Huang, Yunji Li, Yunpeng Huang, Yunzhi Xu, Yuxin Mao, Zehan Li, Zekang Li, Zewei Tao, Zewen Ying, Zhaoyang Cong, Zhen Qin, Zhenhua Fan, Zhihang Yu, Zhuo Jiang, Zijia Wu
This approach enables us to conduct efficient training and inference on models with hundreds of billions of parameters across contexts spanning millions of tokens.
no code implementations • 4 Dec 2024 • Xianzhi Zhang, Yipeng Zhou, Miao Hu, Di Wu, Pengshan Liao, Mohsen Guizani, Michael Sheng
To mitigate the rising concern about privacy leakage, the federated recommender (FR) paradigm emerges, in which decentralized clients co-train the recommendation model without exposing their raw user-item rating data.
no code implementations • 18 Oct 2024 • Min Wen, Chengchang Liu, Ahmed Abdelmoniem, Yipeng Zhou, Yuedong Xu
Bilevel optimization, crucial for hyperparameter tuning, meta-learning and reinforcement learning, remains underexplored in decentralized learning paradigms such as decentralized federated learning (DFL).
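To make the bilevel structure concrete, here is a toy, centralized sketch (not this paper's DFL setting): the inner problem solves ridge regression in closed form for a given penalty `lam`, and the outer problem tunes `lam` against a validation loss. The data, names, and the grid-search outer loop are all illustrative assumptions.

```python
import numpy as np

# Synthetic regression data: a training split for the inner problem
# and a validation split for the outer (hyperparameter) problem.
rng = np.random.default_rng(0)
w_true = np.array([1.0, -1.0, 0.5])
X_tr = rng.normal(size=(40, 3))
y_tr = X_tr @ w_true + rng.normal(0, 0.5, 40)
X_va = rng.normal(size=(40, 3))
y_va = X_va @ w_true

def inner(lam):
    """Inner level: closed-form ridge solution for penalty lam."""
    d = X_tr.shape[1]
    return np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(d), X_tr.T @ y_tr)

def val_loss(lam):
    """Outer objective: validation MSE of the inner solution."""
    w = inner(lam)
    return np.mean((X_va @ w - y_va) ** 2)

# Outer level: pick the best lam on a log grid (gradient-free for simplicity;
# hypergradient methods would differentiate through the inner solver instead).
grid = np.logspace(-3, 2, 50)
best_lam = min(grid, key=val_loss)
```

Decentralized bilevel methods replace the closed-form inner solve with local iterations on each client and exchange (hyper)gradients between neighbors.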
no code implementations • 3 Sep 2024 • Nico Uhlemann, Yipeng Zhou, Tobias Simeon Mohr, Markus Lienkamp
This paper explores pedestrian trajectory prediction in urban traffic while focusing on both model accuracy and real-world applicability.
no code implementations • 18 Aug 2024 • Huitong Jin, Yipeng Zhou, Laizhong Cui, Quan Z. Sheng
Inspired by these advantages, we are the first to explore how model pre-training can mitigate noise detriment in differentially private federated learning (DPFL).
no code implementations • 16 Aug 2024 • Jiating Ma, Yipeng Zhou, Qi Li, Quan Z. Sheng, Laizhong Cui, Jiangchuan Liu
Based on convergence analysis, we formulate the client selection problem to minimize the value of the loss function in DPFL with heterogeneous privacy, which is a convex optimization problem and can be solved efficiently.
no code implementations • 6 Feb 2024 • Xiaoxin Su, Yipeng Zhou, Laizhong Cui, John C. S. Lui, Jiangchuan Liu
In Federated Learning (FL) paradigm, a parameter server (PS) concurrently communicates with distributed participating clients for model collection, update aggregation, and model distribution over multiple rounds, without touching private data owned by individual clients.
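The round structure described above can be sketched with a minimal FedAvg-style loop, a generic illustration rather than this paper's method: the PS distributes the global model, each client computes a local update on its own data (here a single least-squares gradient step), and the PS aggregates updates weighted by client data size.

```python
import numpy as np

def local_update(global_w, data, lr=0.1):
    """One client's local step: a least-squares gradient step (illustrative)."""
    X, y = data
    grad = X.T @ (X @ global_w - y) / len(y)
    return global_w - lr * grad

def fedavg_round(global_w, client_data):
    """One round: distribute the model, collect updates, aggregate by data size."""
    updates = [local_update(global_w, d) for d in client_data]
    sizes = np.array([len(d[1]) for d in client_data], dtype=float)
    weights = sizes / sizes.sum()
    return sum(w * u for w, u in zip(weights, updates))

# Three clients with noiseless data from the same true model.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])
clients = []
for n in (20, 30, 50):
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(200):
    w = fedavg_round(w, clients)  # w converges toward true_w
```

Note that raw data never leaves the clients; only model parameters cross the network, which is the property the entry above highlights.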
no code implementations • 6 Feb 2024 • Xiaoxin Su, Yipeng Zhou, Laizhong Cui, Song Guo
Recently, federated learning (FL) has gained momentum because of its capability in preserving data privacy.
1 code implementation • 25 May 2023 • Jiahao Tan, Yipeng Zhou, Gang Liu, Jessie Hui Wang, Shui Yu
More specifically, we decouple a NN model into a personalized feature extractor, obtained by aggregating models from similar clients, and a classifier, which is obtained by local training and used to estimate client similarity.
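A minimal sketch of this decoupling, with a hypothetical model layout and an assumed RBF similarity rule (the paper's exact similarity estimate is not reproduced here): each model is split into an extractor part, aggregated across clients weighted by similarity, and a classifier part that stays local.

```python
import numpy as np

def similarity(clf_i, clf_j):
    """Client similarity estimated from locally trained classifiers
    (RBF form assumed for illustration)."""
    return np.exp(-np.linalg.norm(clf_i - clf_j) ** 2)

def personalized_extractor(i, models):
    """Aggregate feature extractors, weighting clients by classifier similarity;
    client i's classifier itself is kept local and untouched."""
    sims = np.array([similarity(models[i]["clf"], m["clf"]) for m in models])
    w = sims / sims.sum()
    return sum(wi * m["ext"] for wi, m in zip(w, models))

models = [
    {"ext": np.array([1.0, 1.0]), "clf": np.array([0.0])},  # client 0
    {"ext": np.array([1.2, 0.8]), "clf": np.array([0.1])},  # similar to client 0
    {"ext": np.array([5.0, 5.0]), "clf": np.array([3.0])},  # dissimilar client
]
ext0 = personalized_extractor(0, models)  # dominated by clients 0 and 1
```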
no code implementations • 14 May 2023 • Behnaz Soltani, Yipeng Zhou, Venus Haghighi, John C. S. Lui
In traditional machine learning, it is trivial to conduct model evaluation since all data samples are managed centrally by a server.
1 code implementation • 10 May 2023 • Jiahao Liu, Jiang Wu, Jinyu Chen, Miao Hu, Yipeng Zhou, Di Wu
In this paper, we propose a new PFL algorithm called FedDWA (Federated Learning with Dynamic Weight Adjustment) to address the above problem, which leverages the parameter server (PS) to compute personalized aggregation weights based on models collected from clients.
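One plausible reading of the PS-side weight computation, sketched with an assumed softmax-over-distance rule (the actual FedDWA weight rule may differ): clients whose uploaded models are close in parameter space to client i's model receive larger aggregation weight in i's personalized model.

```python
import numpy as np

def personalized_weights(models, i, tau=1.0):
    """Hypothetical PS-side rule: softmax over negative parameter distances,
    so models near client i's contribute more to its personalized average."""
    d = np.array([np.linalg.norm(models[i] - m) for m in models])
    e = np.exp(-d / tau)
    return e / e.sum()

models = [np.array([0.0, 0.0]),   # client 0
          np.array([0.1, 0.0]),   # close to client 0
          np.array([4.0, 4.0])]   # far from client 0
w = personalized_weights(models, 0)
personalized_model = sum(wi * m for wi, m in zip(w, models))
```

The weights are recomputed each round from the freshly collected models, which is what makes the adjustment dynamic.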
no code implementations • 9 May 2023 • Yunchao Yang, Yipeng Zhou, Miao Hu, Di Wu, Quan Z. Sheng
The challenge of this problem lies in the opaque feedback between reward budget allocation and the model utility improvement of FL, which complicates finding the optimal reward budget allocation.
no code implementations • 25 Mar 2023 • Miao Hu, Zhenxiao Luo, Amirmohammad Pasdar, Young Choon Lee, Yipeng Zhou, Di Wu
Edge computing has been gaining momentum with the ever-increasing volume of data at the edge of the network.
no code implementations • 30 Dec 2022 • Wan Jiang, Gang Liu, Xiaofeng Chen, Yipeng Zhou
Unlike traditional distributed machine learning, federated learning keeps data local for training and aggregates only the models on the server, which avoids the data security problems that may arise when raw data is collected centrally.
no code implementations • 5 Sep 2022 • Dongyuan Su, Yipeng Zhou, Laizhong Cui
To boost the convergence of DFL, a vehicle tunes the aggregation weight of each data source by minimizing the KL divergence of its state vector, and the effectiveness of this approach in diversifying data sources can be proved theoretically.
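The weight-tuning idea can be illustrated on a toy two-source case, assuming for illustration that a "state vector" is a label distribution and that the target is the uniform distribution (both assumptions are mine, not the paper's): the vehicle picks the mixing weight whose resulting mixture is closest in KL divergence to the target, which favors diverse data.

```python
import numpy as np

def kl(p, q):
    """KL divergence KL(p || q) for discrete distributions."""
    p = np.clip(p, 1e-12, None)
    q = np.clip(q, 1e-12, None)
    return float(np.sum(p * np.log(p / q)))

def tune_weight(source_dists, target, steps=2001):
    """Two-source case: grid-search the mixing weight a in [0, 1] whose
    mixture of the sources' distributions minimizes KL to the target."""
    best_a, best_kl = 0.0, float("inf")
    for a in np.linspace(0.0, 1.0, steps):
        mix = a * source_dists[0] + (1 - a) * source_dists[1]
        d = kl(mix, target)
        if d < best_kl:
            best_a, best_kl = a, d
    return best_a

# Two skewed sources; a uniform target rewards balancing them.
s1 = np.array([0.8, 0.1, 0.1])
s2 = np.array([0.1, 0.1, 0.8])
target = np.ones(3) / 3
a = tune_weight([s1, s2], target)  # by symmetry the optimum is a = 0.5
```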
no code implementations • 12 Aug 2022 • Laizhong Cui, Xiaoxin Su, Yipeng Zhou
Recently, blockchain-based federated learning (BFL) has attracted intensive research attention because the training process is auditable and the serverless architecture avoids the single point of failure of the parameter server in vanilla federated learning (VFL).
no code implementations • 13 Dec 2021 • Laizhong Cui, Xiaoxin Su, Yipeng Zhou, Jiangchuan Liu
Federated Learning (FL) incurs high communication overhead, which can be greatly alleviated by compression for model updates.
1 code implementation • 5 Jul 2021 • Yipeng Zhou, Xuezheng Liu, Yao Fu, Di Wu, Chao Li, Shui Yu
In this work, we study a crucial question that has been largely overlooked by existing works: what are the optimal numbers of queries and replies in FL with DP such that the final model accuracy is maximized?
no code implementations • 10 May 2021 • Laizhong Cui, Xiaoxin Su, Yipeng Zhou, Yi Pan
Then, we further propose the boosted MUCSC (B-MUCSC) algorithm, a biased compression scheme that achieves an extremely high compression rate by grouping insignificant model updates into a super cluster.
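A sketch of the grouping idea under stated assumptions: near-zero update values are merged into one "super cluster" transmitted as zero, while the remaining values are quantized to centroids from a plain 1-D k-means. The threshold and cluster rule here are illustrative, not the paper's exact B-MUCSC procedure.

```python
import numpy as np

def kmeans_1d(vals, k, iters=20, seed=0):
    """Plain Lloyd's algorithm on scalar values."""
    rng = np.random.default_rng(seed)
    centroids = rng.choice(vals, size=k, replace=False)
    assign = np.zeros(len(vals), dtype=int)
    for _ in range(iters):
        assign = np.argmin(np.abs(vals[:, None] - centroids[None, :]), axis=1)
        for j in range(k):
            if np.any(assign == j):
                centroids[j] = vals[assign == j].mean()
    return centroids, assign

def cluster_compress(update, k=2, eps=0.01):
    """Values with |v| < eps form one 'super cluster' sent as zero; the rest
    are replaced by the centroid of their 1-D cluster, so only centroids
    and cluster indices need to be transmitted."""
    out = np.zeros_like(update)
    big = np.abs(update) >= eps
    if big.any():
        centroids, assign = kmeans_1d(update[big], k)
        out[big] = centroids[assign]
    return out

u = np.array([0.001, -0.002, 0.5, 0.52, -0.4, 0.003, 0.51])
c = cluster_compress(u, k=2)  # tiny values zeroed, others snap to centroids
```

The bias comes from discarding the super cluster's small values entirely, trading accuracy for a much smaller payload.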
1 code implementation • 20 Apr 2021 • Jiwei Guan, Xi Zheng, Chen Wang, Yipeng Zhou, Alireza Jolfaei
This technology enables drivers to use voice commands to control the vehicle and will soon be available in Advanced Driver Assistance Systems (ADAS).
no code implementations • 11 Mar 2021 • Miao Hu, Xianzhuo Luo, Jiawen Chen, Young Choon Lee, Yipeng Zhou, Di Wu
Virtual Reality (VR) has shown great potential to revolutionize the market by providing users immersive experiences with freedom of movement.
Networking and Internet Architecture
no code implementations • 11 Jan 2021 • Yao Fu, Yipeng Zhou, Di Wu, Shui Yu, Yonggang Wen, Chao Li
Then, we theoretically derive: 1) the conditions for the DP based FedAvg to converge as the number of global iterations (GI) approaches infinity; 2) the method to set the number of local iterations (LI) to minimize the negative influence of DP noises.
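The DP noise being analyzed can be illustrated with a standard Gaussian-mechanism sanitization step, which is generic DP-FedAvg practice rather than this paper's derivation: clip each client update's L2 norm to a bound, then add Gaussian noise scaled to that bound.

```python
import numpy as np

def dp_sanitize(update, clip=1.0, sigma=0.5, rng=None):
    """Clip the update to L2 norm `clip`, then add Gaussian noise with
    standard deviation sigma * clip (Gaussian-mechanism style)."""
    rng = np.random.default_rng(0) if rng is None else rng
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip / max(norm, 1e-12))
    return clipped + rng.normal(0.0, sigma * clip, size=update.shape)

rng = np.random.default_rng(42)
u = np.array([3.0, 4.0])  # L2 norm 5, so it gets clipped to [0.6, 0.8]
one = dp_sanitize(u, rng=rng)

# Averaging many sanitized copies recovers the clipped update in expectation;
# per-round noise is what the LI/GI trade-off in the entry above controls.
avg = np.mean([dp_sanitize(u, rng=rng) for _ in range(2000)], axis=0)
```

Running more local iterations per round reduces how often noise is injected, which is the lever the paper's analysis tunes.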
no code implementations • 20 Oct 2020 • Yupeng Jiang, Yong Li, Yipeng Zhou, Xi Zheng
The state-of-the-art privacy-preserving technique in the context of federated learning is user-level differential privacy.
Cryptography and Security • Distributed, Parallel, and Cluster Computing