no code implementations • 3 Nov 2024 • Nicolás Astorga, Tennison Liu, Yuanzhang Xiao, Mihaela van der Schaar
We identify the three core challenges of autoformulation: (1) defining the vast, problem-dependent hypothesis space, (2) efficiently searching this space under uncertainty, and (3) evaluating formulation correctness (ensuring a formulation accurately represents the problem).
no code implementations • 2 Feb 2024 • Guangfeng Yan, Tan Li, Yuanzhang Xiao, Hanxu Hou, Linqi Song
We consider a general family of heavy-tailed gradients that follow a power-law distribution and aim to minimize the error resulting from quantization, thereby determining optimal values for two critical parameters: the truncation threshold and the quantization density.
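For illustration, here is a minimal sketch of the truncate-then-quantize idea on a heavy-tailed gradient vector. The threshold `tau`, the number of levels `s`, and the Pareto-distributed toy gradient are placeholders for this example only; the paper derives the optimal threshold and quantization density analytically rather than picking them by hand.

```python
import numpy as np

def truncate_and_quantize(grad, tau, s, rng=None):
    """Clip a gradient to [-tau, tau], then stochastically quantize its
    magnitude to s uniform levels. Illustrative only, not the paper's
    exact estimator."""
    if rng is None:
        rng = np.random.default_rng()
    clipped = np.clip(grad, -tau, tau)             # truncation threshold tau
    scaled = np.abs(clipped) / tau * s             # map magnitudes to [0, s]
    lower = np.floor(scaled)
    prob = scaled - lower                          # unbiased stochastic rounding
    levels = lower + (rng.random(grad.shape) < prob)
    return np.sign(clipped) * levels * tau / s     # dequantized estimate

# Toy heavy-tailed (power-law-like) gradient from a Pareto distribution.
rng = np.random.default_rng(0)
grad = rng.pareto(a=2.5, size=10_000) * rng.choice([-1.0, 1.0], size=10_000)
q = truncate_and_quantize(grad, tau=5.0, s=16, rng=rng)
print("quantization MSE:", np.mean((q - grad) ** 2))
```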
no code implementations • 2 Feb 2024 • Guangfeng Yan, Tan Li, Yuanzhang Xiao, Congduan Li, Linqi Song
To address the communication bottleneck challenge in distributed learning, our work introduces a novel two-stage quantization strategy designed to enhance the communication efficiency of distributed Stochastic Gradient Descent (SGD).
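The skeleton below shows where such a compressor sits in distributed SGD: each worker compresses its gradient before communication and the server averages the compressed copies. The particular two stages used here (a coarse quantizer followed by a finer quantization of the residual) are an assumption for illustration, not the paper's scheme.

```python
import numpy as np

def quantize(v, s):
    """Simple uniform quantizer with s levels (illustrative)."""
    scale = np.max(np.abs(v)) + 1e-12
    return np.round(v / scale * s) / s * scale

def two_stage_compress(grad, s1=4, s2=16):
    """Hypothetical two-stage compressor: coarse quantization first,
    then a finer quantization of the residual. Not the paper's method."""
    stage1 = quantize(grad, s1)
    stage2 = quantize(grad - stage1, s2)
    return stage1 + stage2

def server_step(worker_grads, lr, params):
    """Average the workers' compressed gradients and take an SGD step."""
    compressed = [two_stage_compress(g) for g in worker_grads]
    return params - lr * np.mean(compressed, axis=0)
```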
no code implementations • 25 Jan 2024 • Sichun Luo, Yuxuan Yao, Bowei He, Yinya Huang, Aojun Zhou, Xinyi Zhang, Yuanzhang Xiao, Mingjie Zhan, Linqi Song
Conventional recommendation methods have achieved notable advancements by harnessing collaborative or sequential information from user behavior.
1 code implementation • 26 Dec 2023 • Sichun Luo, Bowei He, Haohan Zhao, Wei Shao, Yanlin Qi, Yinya Huang, Aojun Zhou, Yuxuan Yao, Zongpeng Li, Yuanzhang Xiao, Mingjie Zhan, Linqi Song
Large Language Models (LLMs) have demonstrated remarkable capabilities and have been extensively deployed across various domains, including recommender systems.
no code implementations • 30 May 2023 • Zhiheng Guo, Yuanzhang Xiao, Xiang Chen
In this paper, an unsupervised deep learning framework based on dual-path model-driven variational auto-encoders (VAEs) is proposed for angle-of-arrival (AoA) and channel estimation in massive MIMO systems.
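As context for readers unfamiliar with VAEs, a generic skeleton in PyTorch is sketched below; the latent variables here are unstructured placeholders, whereas the paper's dual-path, model-driven architecture ties them to the physical AoA and channel parameters.

```python
import torch
import torch.nn as nn

class SimpleVAE(nn.Module):
    """Minimal generic VAE, shown only to fix ideas; the paper's
    dual-path model-driven design is more structured than this."""
    def __init__(self, obs_dim, latent_dim, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.to_mu = nn.Linear(hidden, latent_dim)
        self.to_logvar = nn.Linear(hidden, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.ReLU(), nn.Linear(hidden, obs_dim)
        )

    def forward(self, y):
        h = self.encoder(y)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        recon = self.decoder(z)
        # KL divergence of the approximate posterior to a standard normal prior
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1)
        return recon, kl
```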
no code implementations • 11 May 2023 • Sichun Luo, Yuanzhang Xiao, Xinyi Zhang, Yang Liu, Wenbo Ding, Linqi Song
Each user learns a personalized model by combining the global federated model, the cluster-level federated model, and its own fine-tuned local model.
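A minimal sketch of this combination step, assuming the three models share a parameter layout and are mixed with convex weights; the weights `alpha` below are hypothetical placeholders, and the paper may set or learn them differently.

```python
def personalize(global_w, cluster_w, local_w, alpha=(0.3, 0.3, 0.4)):
    """Combine global, cluster-level, and locally fine-tuned parameters
    (dicts of arrays/tensors keyed by layer name) with convex weights."""
    a_g, a_c, a_l = alpha
    assert abs(a_g + a_c + a_l - 1.0) < 1e-8
    return {
        name: a_g * global_w[name] + a_c * cluster_w[name] + a_l * local_w[name]
        for name in global_w
    }
```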
no code implementations • 24 Oct 2022 • Mengzhe Ruan, Guangfeng Yan, Yuanzhang Xiao, Linqi Song, Weitao Xu
This paper proposes a novel adaptive Top-K SGD framework that adapts the degree of sparsification at each gradient descent step, optimizing convergence performance by balancing the trade-off between communication cost and convergence error.
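The sketch below shows plain Top-K gradient sparsification together with a hypothetical schedule for k; the paper instead derives its adaptation rule from a convergence analysis, so the linear schedule here is an illustrative assumption only.

```python
import numpy as np

def top_k_sparsify(grad, k):
    """Keep the k largest-magnitude entries of the gradient, zero the rest."""
    flat = grad.ravel()
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    sparse = np.zeros_like(flat)
    sparse[idx] = flat[idx]
    return sparse.reshape(grad.shape)

def adaptive_k(step, k_min, k_max, total_steps):
    """Hypothetical schedule: communicate more coordinates as training
    progresses. Placeholder for the paper's analysis-driven rule."""
    frac = step / max(total_steps - 1, 1)
    return int(k_min + frac * (k_max - k_min))
```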
no code implementations • 23 Aug 2022 • Sichun Luo, Yuanzhang Xiao, Yang Liu, Congduan Li, Linqi Song
Federated recommendations leverage federated learning (FL) techniques to make privacy-preserving recommendations.
no code implementations • 20 Aug 2022 • Sichun Luo, Xinyi Zhang, Yuanzhang Xiao, Linqi Song
For example, in mobile game recommendation, contextual features such as the location, battery level, and storage level of a mobile device frequently drift over time.
no code implementations • 19 Aug 2022 • Sichun Luo, Yuanzhang Xiao, Linqi Song
In this paper, we propose a Graph Neural Network based Personalized Federated Recommendation (PerFedRec) framework via joint representation learning, user clustering, and model adaptation.
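To make the user-clustering component concrete, a generic clustering step over learned user representations is sketched below; k-means and the cluster count are assumptions for illustration, while PerFedRec learns the representations jointly with the recommendation model.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_users(user_embeddings, num_clusters=8, seed=0):
    """Assign users to clusters based on their learned representations so
    that each cluster can maintain a cluster-level federated model.
    A generic k-means step, not PerFedRec's exact procedure."""
    km = KMeans(n_clusters=num_clusters, random_state=seed, n_init=10)
    labels = km.fit_predict(user_embeddings)   # (num_users,) cluster ids
    return labels, km.cluster_centers_
```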