1 code implementation • 16 Oct 2023 • Tao Fan, Yan Kang, Guoqiang Ma, Weijing Chen, Wenbin Wei, Lixin Fan, Qiang Yang
FATE-LLM (1) facilitates federated learning for large language models (coined FedLLM); (2) promotes efficient training of FedLLM using parameter-efficient fine-tuning methods; (3) protects the intellectual property of LLMs; (4) preserves data privacy during training and inference through privacy-preserving mechanisms.
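Point (2) above can be illustrated with a minimal sketch in plain Python: in parameter-efficient federated fine-tuning, each party keeps the large base model frozen and only the small adapter weights are exchanged and averaged by the server. The function and data below are illustrative assumptions, not FATE-LLM's actual API.

```python
# Illustrative sketch (not FATE-LLM's real interface): federated averaging
# restricted to small adapter parameters. Only the adapters travel over the
# network; the frozen base model stays local to each party.

def average_adapters(client_adapters):
    """Element-wise average of each client's adapter weight vector."""
    n = len(client_adapters)
    length = len(client_adapters[0])
    return [sum(c[i] for c in client_adapters) / n for i in range(length)]

# Three parties, each holding a 4-parameter adapter (base model unchanged).
clients = [
    [0.1, 0.2, 0.3, 0.4],
    [0.3, 0.2, 0.1, 0.0],
    [0.2, 0.2, 0.2, 0.2],
]
global_adapter = average_adapters(clients)  # broadcast back to all parties
```

Because only the adapters (often well under 1% of model parameters) are communicated, this keeps both bandwidth and exposure of the base model low, which is the efficiency point the abstract makes.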
no code implementations • 22 Nov 2021 • Yan Kang, Yang Liu, Yuezhou Wu, Guoqiang Ma, Qiang Yang
We present a novel privacy-preserving federated adversarial domain adaptation approach ($\textbf{PrADA}$) to address an under-studied but practical cross-silo federated domain adaptation problem, in which the target-domain party lacks both samples and features.
1 code implementation • 21 Oct 2021 • Weijing Chen, Guoqiang Ma, Tao Fan, Yan Kang, Qian Xu, Qiang Yang
Gradient boosting decision tree (GBDT) is a widely used ensemble algorithm in industry.
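The core idea behind GBDT can be sketched in a few lines of plain Python: each boosting round fits a weak learner (here a one-split regression "stump") to the current residuals and adds it with a learning rate. This is an illustrative toy, not the paper's federated GBDT; production libraries add many refinements.

```python
# Toy gradient boosting for squared-error regression with decision stumps.
# Each round fits a stump to the residuals, then shrinks it by a learning rate.

def fit_stump(xs, residuals):
    """Find the single-threshold split minimizing squared error."""
    best = None
    for t in sorted(set(xs)):
        left = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        if not left or not right:
            continue  # split must leave points on both sides
        lmean = sum(left) / len(left)
        rmean = sum(right) / len(right)
        err = (sum((r - lmean) ** 2 for r in left)
               + sum((r - rmean) ** 2 for r in right))
        if best is None or err < best[0]:
            best = (err, t, lmean, rmean)
    _, t, lmean, rmean = best
    return lambda x: lmean if x <= t else rmean

def boost(xs, ys, rounds=50, lr=0.3):
    """Additive model: sum of lr-scaled stumps fitted to residuals."""
    pred = [0.0] * len(xs)
    stumps = []
    for _ in range(rounds):
        residuals = [y - p for y, p in zip(ys, pred)]
        stump = fit_stump(xs, residuals)
        stumps.append(stump)
        pred = [p + lr * stump(x) for p, x in zip(pred, xs)]
    return lambda x: sum(lr * s(x) for s in stumps)

model = boost([1, 2, 3, 4], [1.0, 1.0, 3.0, 3.0])
```

For squared error, the residual `y - pred` is exactly the negative gradient of the loss, which is why fitting successive learners to residuals is "gradient" boosting.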