1 code implementation • 23 Mar 2024 • Lijie Xu, Chulin Xie, Yiran Guo, Gustavo Alonso, Bo Li, Guoliang Li, Wei Wang, Wentao Wu, Ce Zhang
In this paper, we formalize this problem as relational federated learning (RFL).
1 code implementation • 12 Jun 2022 • Lijie Xu, Shuang Qiu, Binhang Yuan, Jiawei Jiang, Cedric Renggli, Shaoduo Gan, Kaan Kara, Guoliang Li, Ji Liu, Wentao Wu, Jieping Ye, Ce Zhang
In this paper, we first conduct a systematic empirical study of existing data shuffling strategies, which reveals that every existing strategy has room for improvement: each suffers in either I/O performance or convergence rate.
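The I/O-versus-convergence trade-off described above can be illustrated with a minimal sketch (my own simplification, not the paper's implementation): a full per-example shuffle maximizes randomness but incurs random reads on disk-resident data, while a hypothetical block-level shuffle keeps reads sequential within each block at the cost of reduced randomness.

```python
import random

def full_shuffle(dataset, seed=0):
    # Shuffle every example: best randomness for SGD convergence,
    # but random I/O when the data does not fit in memory.
    rng = random.Random(seed)
    order = list(dataset)
    rng.shuffle(order)
    return order

def block_shuffle(dataset, block_size, seed=0):
    # Shuffle at block granularity: sequential I/O within each block,
    # randomness only across blocks, so convergence can degrade.
    rng = random.Random(seed)
    blocks = [dataset[i:i + block_size]
              for i in range(0, len(dataset), block_size)]
    rng.shuffle(blocks)
    return [x for block in blocks for x in block]

data = list(range(10))
print(full_shuffle(data))
print(block_shuffle(data, block_size=5))
```

In the block-shuffled output, examples within each block of five stay contiguous; only the block order is permuted.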
1 code implementation • 16 Apr 2021 • Shijian Li, Oren Mangoubi, Lijie Xu, Tian Guo
Further, we observe that Sync-Switch achieves 3.8% higher converged accuracy with just 1.23X the training time compared to training with ASP.
no code implementations • 28 Feb 2019 • Shijian Li, Robert J. Walls, Lijie Xu, Tian Guo
Distributed training frameworks, such as TensorFlow, have been proposed as a means of reducing the training time of deep learning models by using a cluster of GPU servers.