Search Results for author: Yuanqin He

Found 9 papers, 2 papers with code

FedEval-LLM: Federated Evaluation of Large Language Models on Downstream Tasks with Collective Wisdom

no code implementations18 Apr 2024 Yuanqin He, Yan Kang, Lixin Fan, Qiang Yang

To address these issues, we propose a Federated Evaluation framework of Large Language Models, named FedEval-LLM, that provides reliable performance measurements of LLMs on downstream tasks without relying on labeled test sets or external tools, thus ensuring strong privacy-preserving capability.

Federated Learning Privacy Preserving

A Communication Theory Perspective on Prompting Engineering Methods for Large Language Models

no code implementations24 Oct 2023 Yuanfeng Song, Yuanqin He, Xuefang Zhao, Hanlin Gu, Di Jiang, Haijun Yang, Lixin Fan, Qiang Yang

The rapid emergence of Large Language Models (LLMs) has shifted the community from single-task-oriented natural language processing (NLP) research to a holistic end-to-end multi-task learning paradigm.

Multi-Task Learning Prompt Engineering

Optimizing Privacy, Utility and Efficiency in Constrained Multi-Objective Federated Learning

no code implementations29 Apr 2023 Yan Kang, Hanlin Gu, Xingxing Tang, Yuanqin He, Yuzhu Zhang, Jinnan He, Yuxing Han, Lixin Fan, Kai Chen, Qiang Yang

Different from existing constrained multi-objective federated learning (CMOFL) works focusing on utility, efficiency, fairness, and robustness, we consider optimizing privacy leakage along with utility loss and training cost, the three primary objectives of a trustworthy federated learning (TFL) system.

Fairness Federated Learning

Vertical Federated Learning: Concepts, Advances and Challenges

no code implementations23 Nov 2022 Yang Liu, Yan Kang, Tianyuan Zou, Yanhong Pu, Yuanqin He, Xiaozhou Ye, Ye Ouyang, Ya-Qin Zhang, Qiang Yang

Motivated by the rapid growth in VFL research and real-world applications, we provide a comprehensive review of the concept and algorithms of VFL, as well as current advances and challenges in various aspects, including effectiveness, efficiency, and privacy.

Fairness Privacy Preserving +1

A Framework for Evaluating Privacy-Utility Trade-off in Vertical Federated Learning

no code implementations8 Sep 2022 Yan Kang, Jiahuan Luo, Yuanqin He, Xiaojin Zhang, Lixin Fan, Qiang Yang

We then use this framework as a guide to comprehensively evaluate a broad range of protection mechanisms against most of the state-of-the-art privacy attacks for three widely-deployed VFL algorithms.

Privacy Preserving Vertical Federated Learning

A Hybrid Self-Supervised Learning Framework for Vertical Federated Learning

1 code implementation18 Aug 2022 Yuanqin He, Yan Kang, Xinyuan Zhao, Jiahuan Luo, Lixin Fan, Yuxing Han, Qiang Yang

In this work, we propose a Federated Hybrid Self-Supervised Learning framework, named FedHSSL, that utilizes cross-party views (i.e., dispersed features) of samples aligned among parties and local views (i.e., augmentation) of unaligned samples within each party to improve the representation learning capability of the VFL joint model.

Inference Attack Representation Learning +2
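The two view types that the FedHSSL abstract distinguishes can be illustrated with a small data sketch. This is a toy illustration, not the paper's implementation; all array names (`view_party_a`, `local_view_1`, etc.) are hypothetical.

```python
import numpy as np

# Toy sketch of the two view types described in the FedHSSL abstract:
# - cross-party views: an aligned sample's features, dispersed across parties
# - local views: augmentations of a party's own unaligned samples
rng = np.random.default_rng(2)

n_aligned, n_local, d = 4, 6, 8
aligned_samples = rng.normal(size=(n_aligned, d))

# Cross-party views: party A holds the first half of each aligned sample's
# features, party B holds the second half (vertical feature partition).
view_party_a = aligned_samples[:, : d // 2]
view_party_b = aligned_samples[:, d // 2 :]

# Local views: a party augments its own unaligned samples (here, two
# independently noised copies stand in for data augmentation).
local_samples = rng.normal(size=(n_local, d // 2))
local_view_1 = local_samples + 0.1 * rng.normal(size=local_samples.shape)
local_view_2 = local_samples + 0.1 * rng.normal(size=local_samples.shape)

print(view_party_a.shape, view_party_b.shape, local_view_1.shape)
```

A self-supervised objective would then pull each pair of views of the same sample together in representation space; the paper's actual losses and architecture are not reproduced here.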

Batch Label Inference and Replacement Attacks in Black-Boxed Vertical Federated Learning

no code implementations10 Dec 2021 Yang Liu, Tianyuan Zou, Yan Kang, Wenhan Liu, Yuanqin He, Zhihao Yi, Qiang Yang

An immediate defense strategy is to protect the communicated sample-level messages with Homomorphic Encryption (HE), so that only the batch-averaged local gradients are exposed to each party (a setting termed black-boxed VFL).

Inference Attack Vertical Federated Learning
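The black-boxed VFL exposure described above can be sketched numerically: per-sample gradients stay hidden (standing in for HE ciphertexts), and only their batch average is revealed. This is a hedged illustration of the setting, not the paper's attack or defense; the variable names are assumptions.

```python
import numpy as np

# Sketch of the "black-boxed VFL" setting: per-sample gradients are
# protected (here simply never shared), and only the batch-averaged
# local gradient is exposed to the other party.
rng = np.random.default_rng(0)
batch_size, dim = 8, 4

# Each row: gradient of the loss w.r.t. one sample's local embedding.
per_sample_grads = rng.normal(size=(batch_size, dim))

# HE allows summing/averaging ciphertexts without decryption, so only
# this aggregate ever leaves the protected domain.
batch_averaged_grad = per_sample_grads.mean(axis=0)

print(batch_averaged_grad.shape)  # sample-level detail is hidden in the average
```

The paper's point is that even this averaged signal can leak batch-level label information, motivating its inference and replacement attacks.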

FedCG: Leverage Conditional GAN for Protecting Privacy and Maintaining Competitive Performance in Federated Learning

2 code implementations16 Nov 2021 Yuezhou Wu, Yan Kang, Jiahuan Luo, Yuanqin He, Qiang Yang

Federated learning (FL) aims to protect data privacy by enabling clients to build machine learning models collaboratively without sharing their private data.

Federated Learning Privacy Preserving

Self-supervised Cross-silo Federated Neural Architecture Search

no code implementations28 Jan 2021 Xinle Liang, Yang Liu, Jiahuan Luo, Yuanqin He, Tianjian Chen, Qiang Yang

Federated Learning (FL) provides both model performance and data privacy for machine learning tasks where samples or features are distributed among different parties.

Neural Architecture Search Vertical Federated Learning
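The FL setting this abstract assumes, in its horizontal (sample-partitioned) form, can be sketched with a minimal federated averaging loop: each client fits on its own data and only model parameters are shared. This is generic FedAvg for illustration, not the paper's architecture-search method; all names are hypothetical.

```python
import numpy as np

# Minimal FedAvg sketch: clients hold disjoint samples, train locally on a
# linear regression loss, and the server averages parameters. Raw data
# never leaves a client. Illustrative only, not the paper's NAS method.
rng = np.random.default_rng(1)


def local_update(weights, X, y, lr=0.1, steps=10):
    """One client's local gradient-descent steps on squared error."""
    w = weights.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w


dim = 3
true_w = np.array([1.0, -2.0, 0.5])

# Three clients, each with its own (noiseless) local dataset.
clients = []
for _ in range(3):
    X = rng.normal(size=(50, dim))
    clients.append((X, X @ true_w))

w_global = np.zeros(dim)
for _round in range(20):
    # Server averages the clients' locally updated parameters.
    local_ws = [local_update(w_global, X, y) for X, y in clients]
    w_global = np.mean(local_ws, axis=0)

print(np.round(w_global, 2))  # converges toward true_w
```

In the vertical (feature-partitioned) variant the abstract also mentions, parties would instead exchange intermediate representations rather than full parameter vectors.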
