Search Results for author: Yan Kang

Found 26 papers, 5 papers with code

UMAIR-FPS: User-aware Multi-modal Animation Illustration Recommendation Fusion with Painting Style

no code implementations · 16 Feb 2024 · Yan Kang, Hao Lin, Mingjian Yang, Shin-Jye Lee

In the feature extraction phase, for image features, we are the first to combine image painting style features with semantic features, constructing a dual-output image encoder to enhance representation.

Image Generation, Recommendation Systems +1

Quantitative perfusion maps using a novelty spatiotemporal convolutional neural network

no code implementations · 8 Dec 2023 · Anbo Cao, Pin-Yu Le, Zhonghui Qie, Haseeb Hassan, Yingwei Guo, Asim Zaman, Jiaxi Lu, Xueqiang Zeng, Huihui Yang, Xiaoqiang Miao, Taiyu Han, Guangtao Huang, Yan Kang, Yu Luo, Jia Guo

The results indicate that the network can accurately estimate perfusion parameters, including cerebral blood volume (CBV), cerebral blood flow (CBF), and time to maximum of the residual function (Tmax).
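As background for the parameters quoted above (this relation is standard perfusion physiology, not code from the paper): CBV, CBF, and the mean transit time (MTT) are linked by the central volume principle, CBF = CBV / MTT. A minimal sketch, with illustrative values of my own choosing:

```python
# Background sketch (not from the paper): the central volume principle
# relating perfusion parameters, CBF = CBV / MTT.
# Units assumed here: CBV in mL/100g, MTT in seconds, so CBF comes out
# in mL/100g/s (multiply by 60 for the conventional mL/100g/min).

def cbf_from_cbv_mtt(cbv_ml_per_100g, mtt_seconds):
    """Central volume principle: CBF = CBV / MTT."""
    return cbv_ml_per_100g / mtt_seconds

# Illustrative (made-up) gray-matter values: CBV = 4 mL/100g, MTT = 6 s.
cbf = cbf_from_cbv_mtt(4.0, 6.0) * 60  # -> 40.0 mL/100g/min
```

Tmax, by contrast, is defined from the deconvolved residue function (the time at which it peaks) and has no comparably simple closed form.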


Grounding Foundation Models through Federated Transfer Learning: A General Framework

no code implementations · 29 Nov 2023 · Yan Kang, Tao Fan, Hanlin Gu, Xiaojin Zhang, Lixin Fan, Qiang Yang

Motivated by the strong growth in FTL-FM research and the potential impact of FTL-FM on industrial applications, we propose an FTL-FM framework that formulates problems of grounding FMs in the federated learning setting, construct a detailed taxonomy based on the FTL-FM framework to categorize state-of-the-art FTL-FM works, and comprehensively survey these works according to the proposed taxonomy.

Federated Learning, Privacy Preserving +1

Privacy in Large Language Models: Attacks, Defenses and Future Directions

no code implementations · 16 Oct 2023 · Haoran Li, Yulin Chen, Jinglong Luo, Yan Kang, Xiaojin Zhang, Qi Hu, Chunkit Chan, Yangqiu Song

The advancement of large language models (LLMs) has significantly enhanced the ability to effectively tackle various downstream NLP tasks and unify these tasks into generative pipelines.

FATE-LLM: A Industrial Grade Federated Learning Framework for Large Language Models

1 code implementation · 16 Oct 2023 · Tao Fan, Yan Kang, Guoqiang Ma, Weijing Chen, Wenbin Wei, Lixin Fan, Qiang Yang

FATE-LLM (1) facilitates federated learning for large language models (coined FedLLM); (2) promotes efficient training of FedLLM using parameter-efficient fine-tuning methods; (3) protects the intellectual property of LLMs; (4) preserves data privacy during training and inference through privacy-preserving mechanisms.

Federated Learning, Privacy Preserving

SecureBoost Hyperparameter Tuning via Multi-Objective Federated Learning

no code implementations · 20 Jul 2023 · Ziyao Ren, Yan Kang, Lixin Fan, Linghua Yang, Yongxin Tong, Qiang Yang

To fill this gap, we propose a Constrained Multi-Objective SecureBoost (CMOSB) algorithm to find Pareto optimal solutions, each of which is a set of hyperparameters achieving an optimal tradeoff among utility loss, training cost, and privacy leakage.

Federated Learning, Privacy Preserving
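The Pareto-optimality criterion the CMOSB abstract refers to can be sketched in a few lines. This is my own toy illustration, not the paper's algorithm: each candidate hyperparameter set is scored on three objectives to be minimized (utility loss, training cost, privacy leakage), and only candidates not dominated by any other are kept.

```python
# Toy sketch of Pareto filtering over three minimized objectives:
# (utility loss, training cost, privacy leakage). Not the CMOSB code.

def dominates(a, b):
    """a dominates b if a is no worse on every objective and strictly
    better on at least one (all objectives are minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """Keep only candidates not dominated by any other candidate."""
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o is not c)]

# Made-up scores for three hyperparameter sets.
scores = [(0.10, 5.0, 0.30), (0.12, 3.0, 0.20), (0.15, 4.0, 0.25)]
front = pareto_front(scores)  # the third set is dominated by the second
```

The point of returning a front rather than a single winner is exactly the tradeoff in the abstract: no one hyperparameter set minimizes all three objectives at once.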

A Meta-learning Framework for Tuning Parameters of Protection Mechanisms in Trustworthy Federated Learning

no code implementations · 28 May 2023 · Xiaojin Zhang, Yan Kang, Lixin Fan, Kai Chen, Qiang Yang

Motivated by this requirement, we propose a framework that (1) formulates TFL as a problem of finding a protection mechanism to optimize the tradeoff between privacy leakage, utility loss, and efficiency reduction and (2) formally defines bounded measurements of the three factors.

Federated Learning, Meta-Learning

Optimizing Privacy, Utility and Efficiency in Constrained Multi-Objective Federated Learning

no code implementations · 29 Apr 2023 · Yan Kang, Hanlin Gu, Xingxing Tang, Yuanqin He, Yuzhu Zhang, Jinnan He, Yuxing Han, Lixin Fan, Kai Chen, Qiang Yang

Unlike existing CMOFL works, which focus on utility, efficiency, fairness, and robustness, we consider optimizing privacy leakage along with utility loss and training cost, the three primary objectives of a TFL system.

Fairness, Federated Learning

FedPass: Privacy-Preserving Vertical Federated Deep Learning with Adaptive Obfuscation

no code implementations · 30 Jan 2023 · Hanlin Gu, Jiahuan Luo, Yan Kang, Lixin Fan, Qiang Yang

Vertical federated learning (VFL) allows an active party with labeled features to leverage auxiliary features from the passive parties to improve model performance.

Federated Learning, Privacy Preserving

Vertical Federated Learning: Concepts, Advances and Challenges

no code implementations · 23 Nov 2022 · Yang Liu, Yan Kang, Tianyuan Zou, Yanhong Pu, Yuanqin He, Xiaozhou Ye, Ye Ouyang, Ya-Qin Zhang, Qiang Yang

Motivated by the rapid growth in VFL research and real-world applications, we provide a comprehensive review of the concept and algorithms of VFL, as well as current advances and challenges in various aspects, including effectiveness, efficiency, and privacy.

Fairness, Federated Learning +1

A Framework for Evaluating Privacy-Utility Trade-off in Vertical Federated Learning

no code implementations · 8 Sep 2022 · Yan Kang, Jiahuan Luo, Yuanqin He, Xiaojin Zhang, Lixin Fan, Qiang Yang

We then use this framework as a guide to comprehensively evaluate a broad range of protection mechanisms against most of the state-of-the-art privacy attacks for three widely-deployed VFL algorithms.

Federated Learning, Privacy Preserving

Trading Off Privacy, Utility and Efficiency in Federated Learning

no code implementations · 1 Sep 2022 · Xiaojin Zhang, Yan Kang, Kai Chen, Lixin Fan, Qiang Yang

In addition, a federated learning system must achieve high efficiency in order to enable large-scale model training and deployment.

Federated Learning

A Hybrid Self-Supervised Learning Framework for Vertical Federated Learning

1 code implementation · 18 Aug 2022 · Yuanqin He, Yan Kang, Xinyuan Zhao, Jiahuan Luo, Lixin Fan, Yuxing Han, Qiang Yang

In this work, we propose a Federated Hybrid Self-Supervised Learning framework, named FedHSSL, that utilizes cross-party views (i.e., dispersed features) of samples aligned among parties and local views (i.e., augmentation) of unaligned samples within each party to improve the representation learning capability of the VFL joint model.

Federated Learning, Inference Attack +2
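The "cross-party view" idea in the FedHSSL abstract can be made concrete with a toy construction (my own sketch, not the released FedHSSL code): each party encodes its own feature slice of the same aligned samples, and a contrastive objective would push representations of the same sample across parties to be more similar than representations of different samples.

```python
# Toy sketch of cross-party views in vertical FL. All encoders, shapes,
# and data here are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)

def encode(x, w):
    """Toy per-party encoder: a linear map followed by L2 normalization."""
    z = x @ w
    return z / np.linalg.norm(z, axis=1, keepdims=True)

# Two parties hold disjoint feature slices of the same 4 aligned samples.
x_a = rng.normal(size=(4, 6))                 # party A's features
x_b = rng.normal(size=(4, 5))                 # party B's features
z_a = encode(x_a, rng.normal(size=(6, 3)))    # party A's representations
z_b = encode(x_b, rng.normal(size=(5, 3)))    # party B's representations

# Cross-party cosine-similarity matrix; a contrastive loss would push the
# diagonal (same sample seen by different parties) above the off-diagonal.
sim = z_a @ z_b.T
pos = np.diag(sim).mean()  # average same-sample similarity
```

Local views (augmentations of unaligned samples within one party) would add a standard single-party self-supervised term on top of this cross-party one.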

Batch Label Inference and Replacement Attacks in Black-Boxed Vertical Federated Learning

no code implementations · 10 Dec 2021 · Yang Liu, Tianyuan Zou, Yan Kang, Wenhan Liu, Yuanqin He, Zhihao Yi, Qiang Yang

An immediate defense strategy is to protect sample-level messages communicated between parties with Homomorphic Encryption (HE); in this way, only the batch-averaged local gradients are exposed to each party (termed black-boxed VFL).

Federated Learning, Inference Attack
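Why batch averaging matters for the defense described above can be shown with a toy example (my own construction, not the paper's attack or protocol): per-sample gradients can carry label-revealing structure, while HE aggregation exposes only their batch average.

```python
# Toy illustration of black-boxed VFL exposure. The gradient values are
# made up; the point is what each party can observe.
import numpy as np

per_sample_grads = np.array([
    [ 0.9, -0.1],   # sample 1: per-sample sign patterns can leak labels
    [-0.8,  0.2],   # sample 2
    [ 0.7, -0.3],   # sample 3
])

# Without protection, the other party could see every row above.
# In black-boxed VFL, HE aggregation reveals only the batch average:
batch_avg = per_sample_grads.mean(axis=0)
```

The paper's batch-level attacks show that even this averaged signal is not fully safe, which is why the black-boxed setting is studied as an attack surface rather than assumed secure.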

Privacy-preserving Federated Adversarial Domain Adaption over Feature Groups for Interpretability

no code implementations · 22 Nov 2021 · Yan Kang, Yang Liu, Yuezhou Wu, Guoqiang Ma, Qiang Yang

We present a novel privacy-preserving federated adversarial domain adaptation approach ($\textbf{PrADA}$) to address an under-studied but practical cross-silo federated domain adaptation problem, in which the party of the target domain is insufficient in both samples and features.

Domain Adaptation, Federated Learning +1

FedCG: Leverage Conditional GAN for Protecting Privacy and Maintaining Competitive Performance in Federated Learning

2 code implementations · 16 Nov 2021 · Yuezhou Wu, Yan Kang, Jiahuan Luo, Yuanqin He, Qiang Yang

Federated learning (FL) aims to protect data privacy by enabling clients to build machine learning models collaboratively without sharing their private data.

Federated Learning, Privacy Preserving

Federated Deep Learning with Bayesian Privacy

no code implementations · 27 Sep 2021 · Hanlin Gu, Lixin Fan, Bowen Li, Yan Kang, Yuan YAO, Qiang Yang

To address the aforementioned perplexity, we propose a novel Bayesian Privacy (BP) framework which enables Bayesian restoration attacks to be formulated as the probability of reconstructing private data from observed public information.

Federated Learning, Image Classification +1

FedCVT: Semi-supervised Vertical Federated Learning with Cross-view Training

no code implementations · 25 Aug 2020 · Yan Kang, Yang Liu, Xinle Liang

In this article, we propose Federated Cross-view Training (FedCVT), a semi-supervised learning approach that improves the performance of the VFL model with limited aligned samples.

Federated Learning, Representation Learning

A Communication Efficient Collaborative Learning Framework for Distributed Features

no code implementations · 24 Dec 2019 · Yang Liu, Yan Kang, Xinwei Zhang, Liping Li, Yong Cheng, Tianjian Chen, Mingyi Hong, Qiang Yang

We introduce a collaborative learning framework allowing multiple parties having different sets of attributes about the same user to jointly build models without exposing their raw data or model parameters.

Secure and Efficient Federated Transfer Learning

no code implementations · 29 Oct 2019 · Shreya Sharma, Xing Chaoping, Yang Liu, Yan Kang

Federated Transfer Learning (FTL) was introduced in [1] to improve statistical models under a data federation that allows knowledge to be shared without compromising user privacy and enables complementary knowledge to be transferred in the network.

Cryptography and Security

Secure Federated Transfer Learning

no code implementations · 8 Dec 2018 · Yang Liu, Yan Kang, Chaoping Xing, Tianjian Chen, Qiang Yang

A secure transfer cross-validation approach is also proposed to safeguard FTL performance under the federation.

BIG-bench Machine Learning, Privacy Preserving +1
