1 code implementation • 16 Apr 2024 • Tiannuo Yang, Wen Hu, Wangqi Peng, Yusen Li, Jianguo Li, Gang Wang, Xiaoguang Liu
However, due to the inherent characteristics of VDMS, automatic performance tuning for VDMS faces several critical challenges that existing auto-tuning methods cannot adequately address.
no code implementations • 2 Feb 2024 • Liping Yi, Han Yu, Chao Ren, Heng Zhang, Gang Wang, Xiaoguang Liu, Xiaoxiao Li
It assigns each client a shared homogeneous small feature extractor and a local gating network alongside its local heterogeneous large model.
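The following is a minimal sketch of the idea described above, assuming a mixture-of-experts-style fusion in which the local gating network produces per-sample weights that mix the shared small extractor's features with the local large model's features; the module names, the common feature dimension, and the fusion rule are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class MoEClientModel(nn.Module):
    """Illustrative client model: shared small extractor + local large extractor,
    fused by a local gating network (assumed design, not the paper's exact code)."""
    def __init__(self, shared_small_extractor, local_large_extractor,
                 feat_dim, num_classes):
        super().__init__()
        self.shared = shared_small_extractor   # homogeneous, shared across clients
        self.local = local_large_extractor     # heterogeneous, client-specific
        self.gate = nn.Sequential(nn.Linear(2 * feat_dim, 2), nn.Softmax(dim=-1))
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        f_shared = self.shared(x)                                # (B, feat_dim)
        f_local = self.local(x)                                  # (B, feat_dim)
        w = self.gate(torch.cat([f_shared, f_local], dim=-1))    # per-sample weights
        fused = w[:, :1] * f_shared + w[:, 1:] * f_local         # weighted fusion
        return self.head(fused)

# Toy usage with extractors that share a common feature dimension (an assumption):
small = nn.Sequential(nn.Flatten(), nn.Linear(784, 64))
large = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 64))
logits = MoEClientModel(small, large, feat_dim=64, num_classes=10)(torch.randn(8, 1, 28, 28))
```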
1 code implementation • 22 Dec 2023 • Yan Wang, Tongtong Su, Yusen Li, Jiuwen Cao, Gang Wang, Xiaoguang Liu
Specifically, we propose a plug-in reparameterized dynamic unit (RDU) to improve the trade-off between performance and inference cost.
1 code implementation • 14 Dec 2023 • Liping Yi, Han Yu, Zhuan Shi, Gang Wang, Xiaoguang Liu, Lizhen Cui, Xiaoxiao Li
Existing MHPFL approaches often rely on a public dataset of the same nature as the learning task, or incur high computation and communication costs.
no code implementations • 12 Nov 2023 • Liping Yi, Han Yu, Gang Wang, Xiaoguang Liu
To allow each data owner (a.k.a. FL client) to train a heterogeneous and personalized local model based on its local data distribution, system resources, and model-structure requirements, the field of model-heterogeneous personalized federated learning (MHPFL) has emerged.
no code implementations • 20 Oct 2023 • Liping Yi, Han Yu, Gang Wang, Xiaoguang Liu, Xiaoxiao Li
Federated learning (FL) is an emerging machine learning paradigm in which a central server coordinates multiple participants (clients) to collaboratively train models on decentralized data.
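As a point of reference for the paradigm described above, here is a minimal FedAvg-style round, a generic sketch rather than this paper's method: each client trains a copy of the global model on its private data and the server averages the resulting parameters.

```python
import copy
import torch

def local_train(model, dataloader, loss_fn, lr=0.01, epochs=1):
    # Client-side training on private data.
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in dataloader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model.state_dict()

def federated_round(global_model, client_loaders, loss_fn):
    # Server-side: dispatch the global model, collect local updates, average them.
    states = [local_train(copy.deepcopy(global_model), loader, loss_fn)
              for loader in client_loaders]
    averaged = {k: torch.stack([s[k].float() for s in states]).mean(0)
                for k in states[0]}
    global_model.load_state_dict(averaged)
    return global_model
```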
3 code implementations • 23 Mar 2023 • Liping Yi, Gang Wang, Xiaoguang Liu, Zhuan Shi, Han Yu
It is a communication- and computation-efficient model-heterogeneous FL framework that trains a shared generalized global prediction header at the FL server, using representations extracted by the heterogeneous feature extractors of clients' models.
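A rough sketch of that server-side workflow is below, under the assumption that clients upload class-wise average representations produced by their own heterogeneous extractors (all mapped to a common feature dimension) and that the server fits the shared header on them with cross-entropy; the function names and this exact protocol are illustrative, not the released implementation.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def client_class_representations(extractor, dataloader, num_classes, feat_dim):
    # Client-side: average the extractor's representations per class.
    sums, counts = torch.zeros(num_classes, feat_dim), torch.zeros(num_classes)
    for x, y in dataloader:
        feats = extractor(x)                  # heterogeneous extractor, common feat_dim
        for c in y.unique():
            sums[c] += feats[y == c].sum(0)
            counts[c] += (y == c).sum()
    present = counts > 0
    return sums[present] / counts[present].unsqueeze(1), torch.arange(num_classes)[present]

def server_update_header(header, uploaded, lr=0.01):
    # Server-side: train the shared prediction header on uploaded (reps, labels)
    # pairs, then broadcast it back to all clients.
    opt = torch.optim.SGD(header.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for reps, labels in uploaded:
        opt.zero_grad()
        loss_fn(header(reps), labels).backward()
        opt.step()
    return header
```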
1 code implementation • ICLR 2023 • Jinsong Zhang, Qiang Fu, Xu Chen, Lun Du, Zelin Li, Gang Wang, Xiaoguang Liu, Shi Han, Dongmei Zhang
In more detail, penultimate-layer outputs on the training set are treated as the representations of in-distribution (ID) data.
Ranked #11 on Out-of-Distribution Detection on ImageNet-1k vs Places
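To make the penultimate-layer idea above concrete, here is a hedged sketch: store one pattern per class from the training (ID) features and score a test input by its similarity to the stored patterns. The class-mean storage and dot-product score are simplifying assumptions, not the paper's exact energy formulation.

```python
import torch

@torch.no_grad()
def build_id_patterns(feature_extractor, train_loader, num_classes, feat_dim):
    # Memorize class-mean penultimate-layer features of the training (ID) data.
    sums, counts = torch.zeros(num_classes, feat_dim), torch.zeros(num_classes)
    for x, y in train_loader:
        f = feature_extractor(x)              # penultimate-layer outputs
        for c in y.unique():
            sums[c] += f[y == c].sum(0)
            counts[c] += (y == c).sum()
    return sums / counts.clamp(min=1).unsqueeze(1)

@torch.no_grad()
def id_score(feature_extractor, x, patterns):
    # Higher similarity to a stored ID pattern => more likely in-distribution.
    sims = feature_extractor(x) @ patterns.T  # (B, num_classes)
    return sims.max(dim=1).values
```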
1 code implementation • 28 Sep 2022 • Yan Wang, Yusen Li, Gang Wang, Xiaoguang Liu
ConvNets can compete with transformers in high-level tasks by exploiting larger receptive fields.
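One common way to enlarge a ConvNet's receptive field is a depthwise convolution with a very large kernel; the block below is a generic illustration of that idea (the kernel size, normalization, and residual layout are assumptions, not the paper's specific module).

```python
import torch
import torch.nn as nn

class LargeKernelBlock(nn.Module):
    # Generic large-receptive-field ConvNet block (illustrative only).
    def __init__(self, channels, kernel_size=31):
        super().__init__()
        self.dw = nn.Conv2d(channels, channels, kernel_size,
                            padding=kernel_size // 2, groups=channels)  # depthwise, large RF
        self.norm = nn.BatchNorm2d(channels)
        self.pw = nn.Conv2d(channels, channels, 1)                      # pointwise channel mixing
        self.act = nn.GELU()

    def forward(self, x):
        return x + self.pw(self.act(self.norm(self.dw(x))))            # residual connection

# A 31x31 depthwise kernel covers a far larger neighborhood than a 3x3 kernel.
out = LargeKernelBlock(64)(torch.randn(2, 64, 56, 56))
```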
no code implementations • 27 Sep 2017 • Shizhao Sun, Wei Chen, Jiang Bian, Xiaoguang Liu, Tie-Yan Liu
However, with the increasing size of DNN models and the large number of workers in practice, this typical data parallelism cannot achieve satisfactory training acceleration, since it usually suffers from heavy communication costs caused by transferring huge amounts of information between workers and the parameter server.
no code implementations • 2 Jun 2016 • Shizhao Sun, Wei Chen, Jiang Bian, Xiaoguang Liu, Tie-Yan Liu
In this framework, we propose to aggregate the local models by ensemble, i.e., averaging the outputs of local models instead of the parameters.
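A minimal sketch of this output-space aggregation, as opposed to parameter averaging, is shown below (the function name and softmax averaging are illustrative assumptions):

```python
import torch

def ensemble_predict(local_models, x):
    # Aggregate by averaging the outputs (softmax probabilities) of the
    # locally trained models, rather than averaging their parameters.
    with torch.no_grad():
        probs = [torch.softmax(m(x), dim=-1) for m in local_models]
    return torch.stack(probs).mean(dim=0)
```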
no code implementations • 17 Jun 2015 • Shizhao Sun, Wei Chen, Li-Wei Wang, Xiaoguang Liu, Tie-Yan Liu
First, we derive an upper bound on the RA of DNNs, and show that it increases with depth.