Search Results for author: Xiaoguang Liu

Found 12 papers, 6 papers with code

VDTuner: Automated Performance Tuning for Vector Data Management Systems

1 code implementation • 16 Apr 2024 • Tiannuo Yang, Wen Hu, Wangqi Peng, Yusen Li, Jianguo Li, Gang Wang, Xiaoguang Liu

However, due to the inherent characteristics of VDMSs, automatic performance tuning for them faces several critical challenges that existing auto-tuning methods cannot address well.

Bayesian Optimization • Information Retrieval +1
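
The tag line points at Bayesian optimization, and a minimal BO tuning loop in that spirit looks like the sketch below. It uses scikit-optimize; the knob names (M, ef_construction) and the synthetic run_benchmark function are illustrative stand-ins, not VDTuner's actual interface.

    # Hedged sketch: Bayesian-optimization-driven knob tuning for a vector
    # index. run_benchmark is a synthetic stand-in for a real QPS measurement.
    import numpy as np
    from skopt import gp_minimize
    from skopt.space import Integer

    def run_benchmark(m, ef_construction):
        # Stand-in response surface; a real tuner would build the index
        # with these knobs and measure throughput/recall.
        return 1000 * np.exp(-(m - 32) ** 2 / 500
                             - (ef_construction - 400) ** 2 / 2e5)

    def objective(params):
        m, ef_construction = params
        return -run_benchmark(m, ef_construction)  # gp_minimize minimizes

    space = [Integer(4, 64, name="M"),                  # graph degree
             Integer(64, 1024, name="ef_construction")] # build-time width

    result = gp_minimize(objective, space, n_calls=30, random_state=0)
    print("best config:", result.x, "best QPS:", -result.fun)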

DDistill-SR: Reparameterized Dynamic Distillation Network for Lightweight Image Super-Resolution

1 code implementation • 22 Dec 2023 • Yan Wang, Tongtong Su, Yusen Li, Jiuwen Cao, Gang Wang, Xiaoguang Liu

Specifically, we propose a plug-in reparameterized dynamic unit (RDU) to improve the trade-off between performance and inference cost.

Image Super-Resolution
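
"Reparameterized" here refers to structural reparameterization: training with parallel branches, then folding them into a single convolution so inference pays for one 3x3 conv. The sketch below shows the generic fold (no BN, no dynamic weights); it illustrates the mechanism, not the paper's exact RDU.

    # Generic structural reparameterization sketch (RepVGG-style folding).
    import torch
    import torch.nn as nn

    class RepBlock(nn.Module):
        def __init__(self, channels):
            super().__init__()
            self.conv3 = nn.Conv2d(channels, channels, 3, padding=1)
            self.conv1 = nn.Conv2d(channels, channels, 1)

        def forward(self, x):  # train time: 3x3 + 1x1 + identity in parallel
            return self.conv3(x) + self.conv1(x) + x

        def reparameterize(self):
            """Fold the 1x1 branch and the identity into one 3x3 conv."""
            k = self.conv3.weight.detach().clone()
            b = self.conv3.bias.detach().clone()
            k[:, :, 1:2, 1:2] += self.conv1.weight.detach()  # 1x1 at centre
            b += self.conv1.bias.detach()
            c = k.shape[0]
            k[torch.arange(c), torch.arange(c), 1, 1] += 1.0  # identity branch
            fused = nn.Conv2d(c, c, 3, padding=1)
            fused.weight.data.copy_(k)
            fused.bias.data.copy_(b)
            return fused

    block = RepBlock(8)
    x = torch.randn(1, 8, 16, 16)
    with torch.no_grad():  # fused conv computes the same function
        print(torch.allclose(block(x), block.reparameterize()(x), atol=1e-5))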

pFedES: Model Heterogeneous Personalized Federated Learning with Feature Extractor Sharing

no code implementations • 12 Nov 2023 • Liping Yi, Han Yu, Gang Wang, Xiaoguang Liu

To allow each data owner (a.k.a. an FL client) to train a heterogeneous and personalized local model based on its local data distribution, system resources and requirements on model structure, the field of model-heterogeneous personalized federated learning (MHPFL) has emerged.

Personalized Federated Learning • Privacy Preserving +1
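
The mechanism in a nutshell, sketched under assumptions below: each client trains its own private, heterogeneous model plus a small homogeneous feature extractor, and only the tiny extractor is aggregated across clients. The FedAvg-style aggregation and toy models are illustrative, not pFedES's exact training procedure.

    # Sketch: share (and average) only a small homogeneous extractor;
    # keep each client's heterogeneous model private.
    import copy
    import torch
    import torch.nn as nn

    shared_template = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())

    class Client:
        def __init__(self, local_model):
            self.extractor = copy.deepcopy(shared_template)  # homogeneous, shared
            self.model = local_model                         # heterogeneous, private

    def fedavg(state_dicts):
        """Average the parameters of the shared extractors only."""
        avg = copy.deepcopy(state_dicts[0])
        for key in avg:
            avg[key] = torch.stack([sd[key].float() for sd in state_dicts]).mean(0)
        return avg

    # Two clients with structurally different local models:
    clients = [Client(nn.Conv2d(3, 16, 3)), Client(nn.Conv2d(3, 32, 5))]
    global_state = fedavg([c.extractor.state_dict() for c in clients])
    for c in clients:
        c.extractor.load_state_dict(global_state)  # only the extractor travels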

pFedLoRA: Model-Heterogeneous Personalized Federated Learning with LoRA Tuning

no code implementations • 20 Oct 2023 • Liping Yi, Han Yu, Gang Wang, Xiaoguang Liu, Xiaoxiao Li

Federated learning (FL) is an emerging machine learning paradigm in which a central server coordinates multiple participants (clients) to collaboratively train a model on decentralized data.

Personalized Federated Learning
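
The communication trick named in the title is low-rank adaptation: freeze a base weight and train (and exchange) only two small factors. A minimal LoRA-style layer is sketched below; dimensions and initialization follow common LoRA practice, not necessarily pFedLoRA's exact module.

    # Minimal LoRA-style linear layer: only A and B are trained/communicated.
    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        def __init__(self, in_features, out_features, rank=4, alpha=8.0):
            super().__init__()
            self.base = nn.Linear(in_features, out_features)
            for p in self.base.parameters():
                p.requires_grad_(False)          # frozen (pretrained) base
            self.A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
            self.B = nn.Parameter(torch.zeros(out_features, rank))  # zero init
            self.scale = alpha / rank

        def forward(self, x):
            return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())

    layer = LoRALinear(64, 32, rank=4)
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    print(trainable)  # 384 values instead of 64*32 + 32 for the full layer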

FedGH: Heterogeneous Federated Learning with Generalized Global Header

3 code implementations • 23 Mar 2023 • Liping Yi, Gang Wang, Xiaoguang Liu, Zhuan Shi, Han Yu

FedGH is a communication- and computation-efficient model-heterogeneous FL framework that trains a shared generalized global prediction header at the FL server, using representations extracted by the heterogeneous feature extractors of clients' models.

Federated Learning • Privacy Preserving
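
Sketched below under assumptions: clients upload (representation, label) pairs produced by their heterogeneous extractors, and the server trains one small shared prediction header on them. The dimensions, optimizer, and fake uploads are illustrative, not FedGH's exact protocol.

    # Sketch: server-side training of a shared generalized prediction header.
    import torch
    import torch.nn as nn

    REP_DIM, NUM_CLASSES = 64, 10
    global_header = nn.Linear(REP_DIM, NUM_CLASSES)  # tiny, so cheap to train/send
    opt = torch.optim.SGD(global_header.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    def server_update(uploaded):
        """uploaded: list of (representation batch, label batch) from clients."""
        for rep, label in uploaded:
            opt.zero_grad()
            loss_fn(global_header(rep), label).backward()
            opt.step()

    # Fake uploads from two clients whose extractors differ internally but
    # emit the same representation dimension:
    uploads = [(torch.randn(16, REP_DIM), torch.randint(0, NUM_CLASSES, (16,)))
               for _ in range(2)]
    server_update(uploads)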

Multi-scale Attention Network for Single Image Super-Resolution

1 code implementation • 28 Sep 2022 • Yan Wang, Yusen Li, Gang Wang, Xiaoguang Liu

ConvNets can compete with transformers in high-level tasks by exploiting larger receptive fields.

Blocking • Image Super-Resolution +1
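
One common way ConvNets obtain those large receptive fields is large-kernel attention: a depth-wise conv, a depth-wise dilated conv, and a point-wise conv whose combined output gates the input. The sketch below follows that general recipe; the kernel sizes are illustrative, not the paper's exact multi-scale configuration.

    # Large-kernel attention sketch: ~21x21 effective field from cheap convs.
    import torch
    import torch.nn as nn

    class LargeKernelAttention(nn.Module):
        def __init__(self, channels):
            super().__init__()
            self.dw = nn.Conv2d(channels, channels, 5, padding=2,
                                groups=channels)               # 5x5 depth-wise
            self.dwd = nn.Conv2d(channels, channels, 7, padding=9,
                                 dilation=3, groups=channels)  # 7x7, dilation 3
            self.pw = nn.Conv2d(channels, channels, 1)         # point-wise mix

        def forward(self, x):
            attn = self.pw(self.dwd(self.dw(x)))
            return attn * x  # multiplicative attention over a large field

    x = torch.randn(1, 16, 32, 32)
    print(LargeKernelAttention(16)(x).shape)  # torch.Size([1, 16, 32, 32])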

Slim-DP: A Light Communication Data Parallelism for DNN

no code implementations • 27 Sep 2017 • Shizhao Sun, Wei Chen, Jiang Bian, Xiaoguang Liu, Tie-Yan Liu

However, with the increasing size of DNN models and the large number of workers in practice, this typical data parallelism cannot achieve satisfactory training acceleration: it suffers from heavy communication cost due to transferring huge amounts of information between the workers and the parameter server.
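
The generic remedy is to shrink what each worker exchanges with the parameter server. The sketch below sends only the largest-magnitude gradient entries; top-k selection is a stand-in rule for illustration, not Slim-DP's own criterion for choosing the transferred subset.

    # Sketch: sparse worker->server messages to cut communication cost.
    import torch

    def slim_message(grad, keep_ratio=0.1):
        """Indices and values of the largest-magnitude gradient entries."""
        flat = grad.flatten()
        k = max(1, int(keep_ratio * flat.numel()))
        _, idx = torch.topk(flat.abs(), k)
        return idx, flat[idx]           # ~10x less traffic than dense grads

    def apply_message(param, idx, values, lr=0.1):
        flat = param.data.flatten()
        flat[idx] -= lr * values        # server applies the sparse update
        param.data = flat.view_as(param)

    w = torch.nn.Parameter(torch.randn(100, 100))
    g = torch.randn(100, 100)           # a worker's local gradient
    idx, vals = slim_message(g)
    apply_message(w, idx, vals)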

Ensemble-Compression: A New Method for Parallel Training of Deep Neural Networks

no code implementations • 2 Jun 2016 • Shizhao Sun, Wei Chen, Jiang Bian, Xiaoguang Liu, Tie-Yan Liu

In this framework, we propose to aggregate the local models by ensemble, i.e., averaging the outputs of the local models instead of their parameters.

Model Compression
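
The distinction is easy to see in code. The sketch below contrasts parameter averaging with output averaging on toy models (illustrative only; for these affine models the two coincide, while for nonlinear networks they generally differ, which motivates the ensemble view):

    # Parameter averaging vs. output averaging (ensemble) of local models.
    import copy
    import torch
    import torch.nn as nn

    models = [nn.Linear(4, 3) for _ in range(3)]  # "local models" on 3 workers
    x = torch.randn(2, 4)

    # (a) average the parameters into one model
    avg_model = copy.deepcopy(models[0])
    state = avg_model.state_dict()
    for key in state:
        state[key] = torch.stack([m.state_dict()[key] for m in models]).mean(0)
    avg_model.load_state_dict(state)

    # (b) average the outputs of the local models (the ensemble)
    with torch.no_grad():
        y_param_avg = avg_model(x)
        y_ensemble = torch.stack([m(x) for m in models]).mean(0)
        # True here only because Linear is affine; deep nets break the equality
        print(torch.allclose(y_param_avg, y_ensemble, atol=1e-6))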

On the Depth of Deep Neural Networks: A Theoretical View

no code implementations • 17 Jun 2015 • Shizhao Sun, Wei Chen, Li-Wei Wang, Xiaoguang Liu, Tie-Yan Liu

First, we derive an upper bound for the Rademacher average (RA) of DNNs and show that it increases with increasing depth.
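
For intuition on why such a bound grows with depth, a textbook-style layer-peeling argument (an illustration, not necessarily the paper's exact statement) gives, for 1-Lipschitz activations and layer weights with $\|w\|_1 \le B$:

    \[
      \hat{\mathcal{R}}_m(\mathcal{F}_d)
        \;\le\; 2B\,\hat{\mathcal{R}}_m(\mathcal{F}_{d-1})
        \;\le\; \cdots
        \;\le\; (2B)^{d}\,\hat{\mathcal{R}}_m(\mathcal{F}_0),
    \]

so the upper bound on the Rademacher average grows geometrically with the depth $d$.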
