Search Results for author: Mingjia Shi

Found 8 papers, 5 papers with code

E-3SFC: Communication-Efficient Federated Learning with Double-way Features Synthesizing

1 code implementation • 5 Feb 2025 • Yuhao Zhou, Yuxin Tian, Mingjia Shi, Yuanxi Li, Yanan Sun, Qing Ye, Jiancheng Lv

Specifically, we propose a systematic algorithm termed Extended Single-Step Synthetic Features Compressing (E-3SFC), which consists of three sub-components: the Single-Step Synthetic Features Compressor (3SFC), a double-way compression algorithm, and a communication budget scheduler.
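Below is a minimal sketch of the synthetic-feature compression idea, assuming a PyTorch model and a flattened local update; the function name compress_update, the placeholder labels, and the cosine-matching objective are illustrative assumptions, not the authors' released implementation:

    import torch

    def compress_update(model, loss_fn, true_update, feat_shape, steps=100, lr=0.1):
        # true_update: flattened parameter update the client wants to send
        s = torch.randn(feat_shape, requires_grad=True)   # synthetic features
        y = torch.zeros(feat_shape[0], dtype=torch.long)  # placeholder labels (assumption)
        opt = torch.optim.Adam([s], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            grads = torch.autograd.grad(loss_fn(model(s), y),
                                        model.parameters(), create_graph=True)
            g = torch.cat([gr.flatten() for gr in grads])
            # fit s so that one gradient step on s points along the true update
            err = 1 - torch.nn.functional.cosine_similarity(g, true_update, dim=0)
            err.backward()
            opt.step()
        return s.detach()  # transmit s, which is far smaller than the full update

The receiver can then replay one forward/backward pass on the synthetic features to reconstruct an approximation of the update; the same trick can be applied in both communication directions, which is presumably what the "double-way" compression in the title refers to.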

Federated Learning · Scheduling

Tackling Feature-Classifier Mismatch in Federated Learning via Prompt-Driven Feature Transformation

no code implementations • 23 Jul 2024 • Xinghao Wu, Jianwei Niu, Xuefeng Liu, Mingjia Shi, Guogang Zhu, Shaojie Tang

In this paper, we propose a new PFL framework called FedPFT to address the mismatch problem while enhancing the quality of the feature extractor.
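As a hedged illustration of what a prompt-driven feature transformation might look like (the module and names below are hypothetical, not FedPFT's code), one can condition a lightweight attention layer on per-client learnable prompts and apply it between the shared feature extractor and the classifier:

    import torch
    import torch.nn as nn

    class PromptFeatureTransform(nn.Module):          # hypothetical module name
        def __init__(self, feat_dim, n_prompts=4):
            super().__init__()
            # per-client learnable prompts (an assumption about the mechanism)
            self.prompts = nn.Parameter(torch.randn(n_prompts, feat_dim))
            self.attn = nn.MultiheadAttention(feat_dim, num_heads=1, batch_first=True)

        def forward(self, feats):                     # feats: (batch, feat_dim)
            q = feats.unsqueeze(1)                    # query with the raw feature
            kv = self.prompts.unsqueeze(0).expand(feats.size(0), -1, -1)
            out, _ = self.attn(q, kv, kv)             # prompt-conditioned transform
            return feats + out.squeeze(1)             # residual, classifier-ready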

Contrastive Learning · Personalized Federated Learning

PRIOR: Personalized Prior for Reactivating the Information Overlooked in Federated Learning

1 code implementation • 13 Oct 2023 • Mingjia Shi, Yuhao Zhou, Kai Wang, Huaizheng Zhang, Shudong Huang, Qing Ye, Jiancheng Lv

Personalized FL (PFL) addresses this by synthesizing personalized models from a global model via training on local data.
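A generic sketch of that recipe, assuming PyTorch (the proximal pull toward the global weights is an illustrative regularizer, not PRIOR's specific personalized prior):

    import copy
    import torch

    def personalize(global_model, local_loader, loss_fn, epochs=1, lr=0.01, mu=0.1):
        model = copy.deepcopy(global_model)        # personalized copy per client
        g_params = [p.detach().clone() for p in global_model.parameters()]
        opt = torch.optim.SGD(model.parameters(), lr=lr)
        for _ in range(epochs):
            for x, y in local_loader:
                opt.zero_grad()
                loss = loss_fn(model(x), y)
                # keep the personalized model near the global prior (assumption)
                loss = loss + (mu / 2) * sum(((p - g) ** 2).sum()
                                             for p, g in zip(model.parameters(), g_params))
                loss.backward()
                opt.step()
        return model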

Federated Learning

Personalized Federated Learning with Hidden Information on Personalized Prior

no code implementations • 19 Nov 2022 • Mingjia Shi, Yuhao Zhou, Qing Ye, Jiancheng Lv

Federated learning (FL for short) is a distributed machine learning technique that uses a global server and collaborating clients to train a global model in a privacy-preserving way, without direct data sharing.
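For concreteness, here is a minimal FedAvg-style round, the canonical instance of the loop this sentence describes (local_train is a user-supplied local optimization routine, an assumption of this sketch):

    import copy
    import torch

    def fedavg_round(global_model, client_loaders, local_train):
        states = []
        for loader in client_loaders:              # each client trains on its own data
            local = copy.deepcopy(global_model)
            local_train(local, loader)             # raw data never leaves the client
            states.append(local.state_dict())
        avg = {k: torch.stack([s[k].float() for s in states]).mean(0)
               for k in states[0]}                 # server averages parameters only
        global_model.load_state_dict(avg)
        return global_model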

Ranked #1 on Image Classification on Fashion-MNIST (Accuracy metric)

Classification · Image Classification · +2

DBS: Dynamic Batch Size For Distributed Deep Neural Network Training

1 code implementation • 23 Jul 2020 • Qing Ye, Yuhao Zhou, Mingjia Shi, Yanan Sun, Jiancheng Lv

Specifically, the performance of each worker is first evaluated based on its observed behavior in the previous epoch; the batch size and dataset partition are then dynamically adjusted according to the worker's current performance, thereby improving the utilization of the cluster.
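The proportional re-balancing this describes can be sketched in a few lines (the rebalance function below is illustrative, not the repository's code): size each worker's next-epoch batch by its throughput measured in the previous epoch.

    def rebalance(samples_per_sec, total_batch):
        # samples_per_sec: per-worker throughput measured in the previous epoch
        total = sum(samples_per_sec)
        sizes = [max(1, round(total_batch * s / total)) for s in samples_per_sec]
        sizes[-1] += total_batch - sum(sizes)   # keep the global batch size fixed
        return sizes

    # e.g. rebalance([120.0, 80.0, 40.0], 240) -> [120, 80, 40]
    # fast workers get larger batches, so all workers finish an epoch together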
