no code implementations • 15 Nov 2024 • Zhichen Zeng, Xiaolong Liu, Mengyue Hang, Xiaoyi Liu, Qinghai Zhou, Chaofei Yang, Yiqun Liu, Yichen Ruan, Laming Chen, Yuxin Chen, Yujia Hao, Jiaqi Xu, Jade Nie, Xi Liu, Buyun Zhang, Wei Wen, Siyang Yuan, Kai Wang, Wen-Yen Chen, Yiping Han, Huayu Li, Chunzhi Yang, Bo Long, Philip S. Yu, Hanghang Tong, Jiyan Yang
A mutually beneficial integration of heterogeneous information is the cornerstone of successful click-through rate (CTR) prediction.
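To make "integration of heterogeneous information" concrete, here is a minimal sketch of a common CTR setup in which sparse categorical features are embedded and fused with dense numeric features before a prediction head. This is an illustrative baseline only; the class and parameter names are hypothetical and this is not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class SimpleCTRModel(nn.Module):
    """Toy CTR model fusing heterogeneous inputs: sparse IDs and dense floats."""
    def __init__(self, vocab_sizes, embed_dim=16, num_dense=8):
        super().__init__()
        # One embedding table per sparse (categorical) feature.
        self.embeddings = nn.ModuleList(
            nn.Embedding(v, embed_dim) for v in vocab_sizes
        )
        fused_dim = embed_dim * len(vocab_sizes) + num_dense
        self.mlp = nn.Sequential(
            nn.Linear(fused_dim, 64), nn.ReLU(), nn.Linear(64, 1)
        )

    def forward(self, sparse_ids, dense_feats):
        # sparse_ids: (batch, num_sparse) int64; dense_feats: (batch, num_dense) float32
        embedded = [emb(sparse_ids[:, i]) for i, emb in enumerate(self.embeddings)]
        fused = torch.cat(embedded + [dense_feats], dim=1)  # heterogeneous fusion
        return torch.sigmoid(self.mlp(fused)).squeeze(-1)   # click probability
```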
no code implementations • 22 Apr 2024 • Marie Siew, Haoran Zhang, Jong-Ik Park, Yuezhou Liu, Yichen Ruan, Lili Su, Stratis Ioannidis, Edmund Yeh, Carlee Joe-Wong
We show how our fairness-based learning and incentive mechanisms affect training convergence, and we evaluate our algorithm on multiple sets of learning tasks over real-world datasets.
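As one plausible illustration of a fairness-oriented mechanism in federated learning, the sketch below samples under-served clients with higher probability. This is a hypothetical heuristic for intuition only, not the incentive mechanism proposed in the paper; the function name and temperature parameter are invented.

```python
import numpy as np

def fair_sampling_probs(participation_counts, temperature=1.0):
    """Toy fairness heuristic: clients that have participated in fewer
    rounds are sampled with higher probability (hypothetical rule,
    not the paper's actual incentive mechanism)."""
    counts = np.asarray(participation_counts, dtype=float)
    scores = np.exp(-counts / temperature)   # under-served clients score higher
    return scores / scores.sum()

# Example: client 0 has trained 10 rounds, client 2 only 1 round.
probs = fair_sampling_probs([10, 5, 1])
print(probs)  # client 2 gets the highest selection probability
```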
no code implementations • 11 Dec 2021 • Yichen Ruan, Carlee Joe-Wong
Traditionally, clustered federated learning groups clients with the same data distribution into a cluster, so that every client is uniquely associated with one data distribution and helps train a model for this distribution.
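The traditional clustered-federated-learning loop described above can be sketched in two steps: each client joins the cluster whose model best fits its local data, then parameters are averaged within each cluster. A minimal sketch follows, assuming per-client losses for each cluster model are already computed; function names are illustrative.

```python
import numpy as np

def assign_clusters(client_losses):
    """client_losses[i][k] = loss of cluster model k on client i's local data.
    Each client joins its best-fitting cluster (hard one-to-one assignment,
    as in traditional clustered federated learning)."""
    return [int(np.argmin(losses)) for losses in client_losses]

def cluster_average(client_weights, assignments, num_clusters):
    """FedAvg within each cluster: average the parameter vectors of the
    clients assigned to it."""
    models = []
    for k in range(num_clusters):
        members = [w for w, a in zip(client_weights, assignments) if a == k]
        models.append(np.mean(members, axis=0) if members else None)
    return models

# Example: 3 clients evaluated against 2 cluster models.
losses = [[0.2, 0.9], [0.8, 0.1], [0.3, 0.7]]
print(assign_clusters(losses))  # -> [0, 1, 0]
```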
no code implementations • 12 Jun 2020 • Yichen Ruan, Xiaoxi Zhang, Shu-Che Liang, Carlee Joe-Wong
Traditional federated learning algorithms impose strict requirements on the participation rates of devices, which limit the potential reach of federated learning.
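One standard way to relax strict participation requirements is inverse-probability-weighted aggregation, so that rarely participating devices still contribute unbiasedly in expectation. The sketch below shows that generic idea; it is an assumption-laden illustration, not necessarily the scheme the paper proposes.

```python
import numpy as np

def debiased_aggregate(updates, participation_probs, num_clients):
    """Toy inverse-probability-weighted aggregation: a client that rarely
    participates contributes with larger weight when it does show up,
    keeping the aggregate unbiased in expectation (illustrative only)."""
    total = np.zeros_like(next(iter(updates.values())))
    for client_id, update in updates.items():
        total += update / participation_probs[client_id]
    return total / num_clients

# Example: of 4 clients, only clients 0 and 3 report this round.
updates = {0: np.array([1.0, 0.0]), 3: np.array([0.0, 2.0])}
probs = {0: 0.9, 3: 0.2}  # client 3 almost never participates
print(debiased_aggregate(updates, probs, num_clients=4))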
no code implementations • 17 Apr 2020 • Yuwei Tu, Yichen Ruan, Su Wang, Satyavrat Wagle, Christopher G. Brinton, Carlee Joe-Wong
Unlike traditional federated learning frameworks, our method enables devices to offload their data processing tasks to each other, with these decisions determined by a convex data transfer optimization problem that trades off the costs of processing, offloading, and discarding data points at each device.
Distributed, Parallel, and Cluster Computing
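The entry above formulates data placement as a convex (here, linear) program. Below is a minimal toy instance of such a trade-off, written with cvxpy: x[i, j] points move from device i to j for processing, leftover points are discarded at a penalty. All costs, capacities, and variable names are invented for illustration and are not the paper's formulation.

```python
import cvxpy as cp
import numpy as np

# Toy instance: 3 devices (hypothetical costs, not from the paper).
n = 3
data = np.array([100.0, 40.0, 10.0])      # data points held by each device
proc_cost = np.array([1.0, 0.5, 0.2])     # per-point processing cost
xfer_cost = np.full((n, n), 0.3)          # per-point offloading cost i -> j
np.fill_diagonal(xfer_cost, 0.0)          # processing locally incurs no transfer
discard_cost = 2.0                        # penalty per discarded point
capacity = np.array([50.0, 80.0, 120.0])  # processing capacity per device

x = cp.Variable((n, n), nonneg=True)      # x[i, j]: points moved from i to j
r = cp.Variable(n, nonneg=True)           # points device i discards

processed = cp.sum(x, axis=0)             # points each device ends up processing
objective = cp.Minimize(
    proc_cost @ processed                 # processing cost
    + cp.sum(cp.multiply(xfer_cost, x))   # offloading cost
    + discard_cost * cp.sum(r)            # discard penalty
)
constraints = [
    cp.sum(x, axis=1) + r == data,        # every point is placed or discarded
    processed <= capacity,                # devices cannot exceed capacity
]
cp.Problem(objective, constraints).solve()
print(np.round(x.value, 1), np.round(r.value, 1))
```

Because the objective and constraints are linear, the problem is convex and solves quickly even at larger scales, which matches the abstract's framing of offloading decisions as a convex optimization.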