no code implementations • 11 Aug 2023 • Jiajun Luo, Trambak Banerjee, Gourab Mukherjee, Wenguang Sun
As the dimension of the auxiliary data increases, we accurately quantify the improvements in estimation risk and the associated deterioration in convergence rate.
1 code implementation • 11 Mar 2023 • Zheqi Zhu, Yuchen Shi, Jiajun Luo, Fei Wang, Chenghui Peng, Pingyi Fan, Khaled B. Letaief
By adopting layer-wise pruning in local training and federated updating, we formulate an explicit federated learning (FL) pruning framework, FedLP (Federated Layer-wise Pruning), which is model-agnostic and universal for different types of deep learning models.
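A minimal sketch of the layer-wise idea, assuming a simple per-client keep/drop scheme: each client only trains and uploads a random subset of layers, and the server averages each layer over the clients that kept it. The layer names, keep probability, and placeholder "training" step are illustrative, not the authors' released implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative global model: a dict of layer-name -> weight matrix.
global_model = {f"layer{i}": rng.normal(size=(4, 4)) for i in range(3)}

def local_update(model, keep_prob=0.7):
    """One client round: keep each layer with probability keep_prob, locally
    update only the kept layers (placeholder gradient step), and report the
    kept layers plus a mask telling the server which layers were updated."""
    mask = {name: rng.random() < keep_prob for name in model}
    update = {name: w - 0.01 * rng.normal(size=w.shape)
              for name, w in model.items() if mask[name]}
    return update, mask

def federated_aggregate(global_model, client_updates):
    """Layer-wise aggregation: each layer is averaged only over the clients
    that actually kept that layer; otherwise it is left unchanged."""
    new_model = {}
    for name, w in global_model.items():
        contribs = [upd[name] for upd, mask in client_updates if mask[name]]
        new_model[name] = np.mean(contribs, axis=0) if contribs else w
    return new_model

# One communication round with 5 clients.
client_updates = [local_update(global_model) for _ in range(5)]
global_model = federated_aggregate(global_model, client_updates)
```

Because aggregation is done per layer, clients that drop layers still contribute to the layers they trained, which is what makes the scheme indifferent to the particular model architecture.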
no code implementations • 7 Jul 2022 • Yulin Shao, Yucheng Cai, Taotao Wang, Ziyang Guo, Peng Liu, Jiajun Luo, Deniz Gunduz
We consider the problem of autonomous channel access (AutoCA), where a group of terminals tries to discover a communication strategy with an access point (AP) via a common wireless channel in a distributed fashion.
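A minimal sketch of the kind of slotted random-access setting this problem statement describes, assuming a collision channel where the AP decodes a slot only if exactly one terminal transmits; the transmit probability and throughput metric are illustrative baselines, not the access strategy learned in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def run_slotted_access(num_terminals=4, num_slots=1000, tx_prob=0.25):
    """Simulate a shared slotted channel: in each slot every terminal
    independently transmits with probability tx_prob; the AP decodes a packet
    only when exactly one terminal transmits. Returns per-terminal throughput."""
    successes = np.zeros(num_terminals)
    for _ in range(num_slots):
        transmitting = rng.random(num_terminals) < tx_prob
        if transmitting.sum() == 1:      # exactly one sender -> success
            successes[np.argmax(transmitting)] += 1
        # 0 senders: idle slot; >1 senders: collision, nothing decoded
    return successes / num_slots

print(run_slotted_access())  # fixed-probability baseline a learned AutoCA policy would aim to beat
```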
no code implementations • ICLR 2018 • Chenwei Wu, Jiajun Luo, Jason D. Lee
Deep learning models can be efficiently optimized via stochastic gradient descent, but there is little theoretical evidence to support this.