Search Results for author: Yifeng Cai

Found 5 papers, 2 papers with code

Moss: Proxy Model-based Full-Weight Aggregation in Federated Learning with Heterogeneous Models

no code implementations · 13 Mar 2025 · Yifeng Cai, Ziqi Zhang, Ding Li, Yao Guo, Xiangqun Chen

Modern Federated Learning (FL) has become increasingly essential for handling highly heterogeneous mobile devices.

Federated Learning

TEESlice: Protecting Sensitive Neural Network Models in Trusted Execution Environments When Attackers have Pre-Trained Models

no code implementations · 15 Nov 2024 · Ding Li, Ziqi Zhang, Mengyu Yao, Yifeng Cai, Yao Guo, Xiangqun Chen

Our approach compresses the private functionalities of the large language model into lightweight slices and achieves the same level of protection as the shielding-whole-model baseline.

Language Modeling · Language Modelling · +1

No Privacy Left Outside: On the (In-)Security of TEE-Shielded DNN Partition for On-Device ML

1 code implementation · 11 Oct 2023 · Ziqi Zhang, Chen Gong, Yifeng Cai, Yuanyuan Yuan, Bingyan Liu, Ding Li, Yao Guo, Xiangqun Chen

These solutions, referred to as TEE-Shielded DNN Partition (TSDP), partition a DNN model into two parts, offloading the privacy-insensitive part to the GPU while shielding the privacy-sensitive part within the TEE.

Inference Attack · Membership Inference Attack
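The TSDP setup described above can be sketched in a few lines: the model's ordered layers are split at a boundary, the privacy-insensitive prefix runs on the untrusted GPU, and the privacy-sensitive suffix executes inside the TEE. This is a minimal illustration of the partitioning idea only; the function names, the split point, and the toy layers are all assumptions, not the paper's implementation.

```python
# Minimal sketch of TEE-Shielded DNN Partition (TSDP): split a model's layers
# into a privacy-insensitive part (offloaded to the GPU) and a
# privacy-sensitive part (shielded inside the TEE). All names are illustrative.

def partition_model(layers, shield_from):
    """Split an ordered list of layers at index `shield_from`."""
    gpu_part = layers[:shield_from]   # runs outside the TEE, on the GPU
    tee_part = layers[shield_from:]   # kept inside the TEE
    return gpu_part, tee_part

def run_inference(x, gpu_part, tee_part):
    # Untrusted side: execute the offloaded layers on the accelerator.
    for layer in gpu_part:
        x = layer(x)
    # Trusted side: the intermediate activation crosses into the TEE,
    # where the shielded layers produce the final output.
    for layer in tee_part:
        x = layer(x)
    return x

# Toy "layers": simple affine functions standing in for DNN layers.
layers = [lambda v: v * 2, lambda v: v + 1, lambda v: v * 3]
gpu_part, tee_part = partition_model(layers, shield_from=2)
print(run_inference(1.0, gpu_part, tee_part))  # ((1*2)+1)*3 = 9.0
```

The paper's security analysis concerns where this split boundary should fall; the sketch only shows the data flow across it.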

DistFL: Distribution-aware Federated Learning for Mobile Scenarios

1 code implementation · 22 Oct 2021 · Bingyan Liu, Yifeng Cai, Ziqi Zhang, Yuanchun Li, Leye Wang, Ding Li, Yao Guo, Xiangqun Chen

Previous studies focus on the "symptoms" directly, as they try to improve the accuracy or detect possible attacks by adding extra steps to conventional FL models.

Federated Learning · Privacy Preserving

TransTailor: Pruning the Pre-trained Model for Improved Transfer Learning

no code implementations · 2 Mar 2021 · Bingyan Liu, Yifeng Cai, Yao Guo, Xiangqun Chen

This paper aims to improve transfer performance from another angle: in addition to tuning the weights, we tune the structure of the pre-trained model to better match the target task.

Transfer Learning
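Tuning the structure of a pre-trained model typically means pruning away the units least useful for the target task before fine-tuning. The following sketch illustrates that general idea with magnitude-based filter pruning; the L1 scoring, the `keep` budget, and the function names are assumptions for illustration, not TransTailor's actual importance measure.

```python
# Illustrative sketch of structure tuning via pruning: rank a layer's filters
# by an importance score and keep only the top-k before fine-tuning on the
# target task. The scoring rule here (L1 norm) is an assumed stand-in.

def l1_importance(filt):
    """L1-norm importance of a filter (given as a flat list of weights)."""
    return sum(abs(w) for w in filt)

def prune_filters(filters, keep):
    """Keep the `keep` most important filters, preserving their order."""
    ranked = sorted(range(len(filters)),
                    key=lambda i: l1_importance(filters[i]),
                    reverse=True)
    kept_indices = sorted(ranked[:keep])
    return [filters[i] for i in kept_indices]

# Four toy filters; L1 norms are 0.3, 2.2, 0.07, and 1.7 respectively.
filters = [[0.1, -0.2], [1.5, 0.7], [-0.05, 0.02], [0.9, -0.8]]
print(prune_filters(filters, keep=2))  # [[1.5, 0.7], [0.9, -0.8]]
```

After pruning, the smaller sub-network is fine-tuned on the target data, so both the weights and the structure are adapted to the task.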
