Search Results for author: Guoqiang Ma

Found 3 papers, 2 papers with code

FATE-LLM: A Industrial Grade Federated Learning Framework for Large Language Models

1 code implementation • 16 Oct 2023 • Tao Fan, Yan Kang, Guoqiang Ma, Weijing Chen, Wenbin Wei, Lixin Fan, Qiang Yang

FATE-LLM (1) facilitates federated learning for large language models (coined FedLLM); (2) promotes efficient training of FedLLM using parameter-efficient fine-tuning methods; (3) protects the intellectual property of LLMs; (4) preserves data privacy during training and inference through privacy-preserving mechanisms.

Federated Learning · Privacy Preserving
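The abstract above highlights parameter-efficient fine-tuning in a federated setting. A minimal, hypothetical sketch of that general idea follows (this is not FATE-LLM's actual API): each client updates only small LoRA-style adapter matrices and the server averages them, so the frozen base weights and the raw training data never leave the client. All names and sizes (`init_adapter`, `fedavg`, `RANK`, `DIM`) are illustrative.

```python
# Minimal sketch, assuming a LoRA-style adapter: clients fine-tune only
# low-rank matrices A and B (delta_W = A @ B) and the server averages them,
# so the frozen base LLM weights stay local. Illustrative only.
import numpy as np

RANK, DIM = 4, 64  # adapter rank vs. (frozen) base weight dimension

def init_adapter(rng):
    """LoRA-style adapter: A (DIM x RANK) random, B (RANK x DIM) zero-init."""
    return {"A": rng.normal(0, 0.02, (DIM, RANK)),
            "B": np.zeros((RANK, DIM))}

def local_update(adapter, grads, lr=1e-3):
    """One client step: gradient descent on the adapter parameters only."""
    return {k: adapter[k] - lr * grads[k] for k in adapter}

def fedavg(adapters):
    """Server step: average adapter parameters across clients (FedAvg)."""
    return {k: np.mean([a[k] for a in adapters], axis=0) for k in adapters[0]}

rng = np.random.default_rng(0)
clients = [init_adapter(rng) for _ in range(3)]
# stand-in gradients; in practice each client computes these on private data
fake_grads = [{k: rng.normal(size=v.shape) for k, v in c.items()} for c in clients]
clients = [local_update(c, g) for c, g in zip(clients, fake_grads)]
global_adapter = fedavg(clients)
print({k: v.shape for k, v in global_adapter.items()})
```

Communicating only the adapters keeps the per-round payload tiny relative to the full LLM, which is what makes federated fine-tuning of large models practical in this setting.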

Privacy-preserving Federated Adversarial Domain Adaption over Feature Groups for Interpretability

no code implementations • 22 Nov 2021 • Yan Kang, Yang Liu, Yuezhou Wu, Guoqiang Ma, Qiang Yang

We present a novel privacy-preserving federated adversarial domain adaptation approach ($\textbf{PrADA}$) to address an under-studied but practical cross-silo federated domain adaptation problem, in which the party of the target domain is insufficient in both samples and features.

Domain Adaptation · Privacy Preserving · +1
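Since this paper has no public code, here is a minimal, hypothetical sketch of the core adversarial domain adaptation mechanism such approaches typically build on (not the PrADA implementation): a gradient reversal layer trains a domain discriminator while pushing the feature extractor toward domain-invariant features. PrADA's per-feature-group treatment and privacy-preserving protocol are omitted; the module names and sizes here are illustrative.

```python
# Sketch of adversarial domain adaptation via a gradient reversal layer
# (the standard DANN-style trick), assumed as the adversarial component;
# PrADA's feature grouping and privacy mechanisms are not modeled here.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negate and scale gradients backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

feature_extractor = nn.Sequential(nn.Linear(16, 8), nn.ReLU())
domain_classifier = nn.Sequential(nn.Linear(8, 2))  # source vs. target

x = torch.randn(4, 16)                     # mixed source/target batch
domain_labels = torch.tensor([0, 0, 1, 1])  # 0 = source, 1 = target
feats = feature_extractor(x)
logits = domain_classifier(GradReverse.apply(feats, 1.0))
loss = nn.functional.cross_entropy(logits, domain_labels)
loss.backward()  # reversed gradients steer feats toward domain invariance
```

The reversal makes a single backward pass serve two adversaries at once: the discriminator learns to tell domains apart while the extractor, receiving negated gradients, learns features the discriminator cannot separate.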
