Search Results for author: Yan Kang

Found 41 papers, 9 papers with code

Generating, Fast and Slow: Scalable Parallel Video Generation with Video Interface Networks

no code implementations21 Mar 2025 Bhishma Dedhia, David Bourgin, Krishna Kumar Singh, Yuheng Li, Yan Kang, Zhan Xu, Niraj K. Jha, Yuchen Liu

At each diffusion step, VINs encode global semantics from the noisy input of local chunks and the encoded representations, in turn, guide DiTs in denoising chunks in parallel.
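
The listing gives only this one-sentence mechanism, so below is a minimal PyTorch sketch of the idea as stated; every module name and shape is a hypothetical placeholder, not the paper's architecture. A small global encoder summarizes all noisy chunks, and that summary conditions per-chunk denoisers that can then run independently, and hence in parallel.

```python
# Minimal sketch of the VIN idea (hypothetical shapes/modules, not the paper's code):
# a lightweight global network reads all noisy chunks, and its output conditions
# per-chunk denoisers that then run independently (and thus in parallel).
import torch
import torch.nn as nn

class ToyVIN(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.global_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True),
            num_layers=1,
        )
        # one shared chunk denoiser, conditioned on the global summary
        self.chunk_denoiser = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.GELU(), nn.Linear(dim, dim)
        )

    def forward(self, chunks):  # chunks: (batch, num_chunks, tokens, dim)
        b, c, t, d = chunks.shape
        # pool each noisy chunk into one token and encode global semantics across chunks
        summary = self.global_encoder(chunks.mean(dim=2))   # (b, c, d)
        cond = summary.unsqueeze(2).expand(b, c, t, d)      # broadcast to every token
        # each chunk is denoised independently given the global condition,
        # so the c chunk calls could be dispatched to different devices
        return self.chunk_denoiser(torch.cat([chunks, cond], dim=-1))

x = torch.randn(2, 4, 16, 64)
print(ToyVIN()(x).shape)  # torch.Size([2, 4, 16, 64])
```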

Layer- and Timestep-Adaptive Differentiable Token Compression Ratios for Efficient Diffusion Transformers

no code implementations22 Dec 2024 Haoran You, Connelly Barnes, Yuqian Zhou, Yan Kang, Zhenbang Du, Wei Zhou, Lingzhi Zhang, Yotam Nitzan, Xiaoyang Liu, Zhe Lin, Eli Shechtman, Sohrab Amirghodsi, Yingyan Celine Lin

To address this, we propose DiffRatio-MoD, a dynamic DiT inference framework with differentiable compression ratios, which automatically learns to dynamically route computation across layers and timesteps for each image token, resulting in Mixture-of-Depths (MoD) efficient DiT models.
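
A Mixture-of-Depths layer of the general kind described here can be sketched in a few lines. The sketch below uses a fixed keep ratio and an illustrative sigmoid gate; DiffRatio-MoD instead learns differentiable compression ratios per layer and timestep, so treat this as background, not the paper's method.

```python
# Toy Mixture-of-Depths layer: a router scores each image token and only the
# top-k share is processed; the rest skip the layer through the residual path.
import torch
import torch.nn as nn

class ToyMoDLayer(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.router = nn.Linear(dim, 1)
        self.block = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))

    def forward(self, x, keep_ratio=0.5):       # x: (batch, tokens, dim)
        b, n, d = x.shape
        k = max(1, int(n * keep_ratio))
        scores = self.router(x).squeeze(-1)      # (b, n) token importance
        topk = scores.topk(k, dim=1).indices     # indices of tokens to process
        idx = topk.unsqueeze(-1).expand(b, k, d)
        picked = torch.gather(x, 1, idx)         # (b, k, d) routed tokens
        gate = torch.gather(scores, 1, topk).sigmoid().unsqueeze(-1)
        out = x.clone()                          # skipped tokens pass through unchanged
        # scaling by the router gate keeps the routing decision differentiable
        out.scatter_(1, idx, picked + gate * self.block(picked))
        return out

x = torch.randn(2, 16, 64)
print(ToyMoDLayer()(x).shape)  # torch.Size([2, 16, 64])
```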

Denoising · Image Generation

DOLLAR: Few-Step Video Generation via Distillation and Latent Reward Optimization

no code implementations20 Dec 2024 Zihan Ding, Chi Jin, Difan Liu, Haitian Zheng, Krishna Kumar Singh, Qiang Zhang, Yan Kang, Zhe Lin, Yuchen Liu

In this work, we introduce a distillation method that combines variational score distillation and consistency distillation to achieve few-step video generation, maintaining both high quality and diversity.
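
At a high level the combination can be read as a weighted objective. The following is a hedged sketch: the weights $\lambda$ and the reward term $R_\phi$ for the latent reward optimization are assumptions, and the paper's exact formulation may differ.

```latex
\mathcal{L}_{\text{student}}
  = \lambda_{\text{vsd}}\,\mathcal{L}_{\text{VSD}}
  + \lambda_{\text{cd}}\,\mathcal{L}_{\text{CD}}
  - \lambda_{r}\,\mathbb{E}\big[R_\phi(\hat{x}_0)\big]
```

Here $\mathcal{L}_{\text{VSD}}$ denotes the variational score distillation loss, $\mathcal{L}_{\text{CD}}$ the consistency distillation loss, and the last term rewards few-step generations $\hat{x}_0$ scored by a latent reward model $R_\phi$.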

Computational Efficiency · Diversity +1

FedCoLLM: A Parameter-Efficient Federated Co-tuning Framework for Large and Small Language Models

no code implementations18 Nov 2024 Tao Fan, Yan Kang, Guoqiang Ma, Lixin Fan, Kai Chen, Qiang Yang

By adapting Large Language Models (LLMs) to domain-specific tasks or enriching them with domain-specific knowledge, we can fully harness the capabilities of LLMs.

Text Generation

Mixture of Efficient Diffusion Experts Through Automatic Interval and Sub-Network Selection

no code implementations23 Sep 2024 Alireza Ganjdanesh, Yan Kang, Yuchen Liu, Richard Zhang, Zhe Lin, Heng Huang

Finally, with a selected configuration, we fine-tune our pruned experts to obtain our mixture of efficient experts.

Denoising

Personalized Federated Continual Learning via Multi-granularity Prompt

1 code implementation27 Jun 2024 Hao Yu, Xin Yang, Xin Gao, Yan Kang, Hao Wang, Junbo Zhang, Tianrui Li

In addition, we design a selective prompt fusion mechanism for aggregating knowledge of global prompts distilled from different clients.
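
One simple way to realize such selective fusion is attention-weighted aggregation instead of plain averaging; the sketch below is an illustrative stand-in, with all names and shapes assumed rather than taken from the paper.

```python
# Toy "selective prompt fusion": the server aggregates per-client prompt
# tensors with learned attention weights instead of uniform averaging.
import torch
import torch.nn as nn

class PromptFusion(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.query = nn.Parameter(torch.randn(dim))  # what the global prompt should attend to

    def forward(self, client_prompts):  # (num_clients, prompt_len, dim)
        # score each client by how well its mean prompt matches the query
        keys = client_prompts.mean(dim=1)                  # (num_clients, dim)
        weights = torch.softmax(keys @ self.query, dim=0)  # (num_clients,)
        # fused global prompt: convex combination of client prompts
        return torch.einsum("c,cld->ld", weights, client_prompts)

fusion = PromptFusion()
prompts = torch.randn(5, 8, 64)  # prompts distilled from 5 clients
print(fusion(prompts).shape)     # torch.Size([8, 64])
```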

Continual Learning · Personalized Federated Learning

PDSS: A Privacy-Preserving Framework for Step-by-Step Distillation of Large Language Models

no code implementations18 Jun 2024 Tao Fan, Yan Kang, Weijing Chen, Hanlin Gu, Yuanfeng Song, Lixin Fan, Kai Chen, Qiang Yang

In the context of real-world applications, leveraging large language models (LLMs) for domain-specific tasks often faces two major challenges: domain-specific knowledge privacy and constrained resources.

Decoder · Language Modeling +4

FedMKT: Federated Mutual Knowledge Transfer for Large and Small Language Models

1 code implementation4 Jun 2024 Tao Fan, Guoqiang Ma, Yan Kang, Hanlin Gu, Yuanfeng Song, Lixin Fan, Kai Chen, Qiang Yang

However, a significant gap remains in the simultaneous mutual enhancement of both the server's LLM and clients' SLMs.

Text Generation · Transfer Learning

FedAdOb: Privacy-Preserving Federated Deep Learning with Adaptive Obfuscation

no code implementations3 Jun 2024 Hanlin Gu, Jiahuan Luo, Yan Kang, Yuan YAO, Gongxi Zhu, Bowen Li, Lixin Fan, Qiang Yang

Federated learning (FL) has emerged as a collaborative approach that allows multiple clients to jointly learn a machine learning model without sharing their private data.

Deep Learning · Privacy Preserving +1

No Free Lunch Theorem for Privacy-Preserving LLM Inference

no code implementations31 May 2024 Xiaojin Zhang, Yahao Pang, Yan Kang, Wei Chen, Lixin Fan, Hai Jin, Qiang Yang

Therefore, it is essential to evaluate the balance between the risk of privacy leakage and loss of utility when conducting effective protection mechanisms.

Privacy Preserving

Attention-Driven Training-Free Efficiency Enhancement of Diffusion Models

no code implementations CVPR 2024 Hongjie Wang, Difan Liu, Yan Kang, Yijun Li, Zhe Lin, Niraj K. Jha, Yuchen Liu

Specifically, for single-denoising-step pruning, we develop a novel ranking algorithm, Generalized Weighted Page Rank (G-WPR), to identify redundant tokens, and a similarity-based recovery method to restore tokens for the convolution operation.
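
The abstract does not spell out G-WPR, so the sketch below substitutes a generic attention-received ranking and a nearest-neighbor recovery step; it illustrates the prune-then-restore pattern only, not the paper's actual ranking algorithm.

```python
# Illustrative token pruning + similarity-based recovery: rank tokens by how
# much attention they receive, drop the least-attended ones, and later restore
# each dropped token from its most similar kept token so convolutions still
# see a full token grid.
import torch

def prune_and_recover(tokens, attn, keep_ratio=0.5):
    # tokens: (n, d); attn: (n, n) attention matrix from some layer
    n, d = tokens.shape
    k = max(1, int(n * keep_ratio))
    importance = attn.sum(dim=0)          # total attention each token receives
    keep = importance.topk(k).indices
    kept = tokens[keep]                   # (k, d) tokens actually computed on
    # ... the expensive denoising computation would run on `kept` only ...
    # recovery: copy each pruned token's value from its nearest kept token
    sims = tokens @ kept.T                # (n, k) similarity to kept tokens
    nearest = sims.argmax(dim=1)          # (n,)
    restored = kept[nearest]              # (n, d) full grid for the conv path
    restored[keep] = kept                 # kept tokens retain their own values
    return restored

tok = torch.randn(16, 32)
attn = torch.softmax(torch.randn(16, 16), dim=-1)
print(prune_and_recover(tok, attn).shape)  # torch.Size([16, 32])
```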

Denoising

FedProK: Trustworthy Federated Class-Incremental Learning via Prototypical Feature Knowledge Transfer

no code implementations4 May 2024 Xin Gao, Xin Yang, Hao Yu, Yan Kang, Tianrui Li

Federated Class-Incremental Learning (FCIL) focuses on continually transferring previous knowledge to learn new classes in dynamic Federated Learning (FL).

Class Incremental Learning +3

FedEval-LLM: Federated Evaluation of Large Language Models on Downstream Tasks with Collective Wisdom

no code implementations18 Apr 2024 Yuanqin He, Yan Kang, Lixin Fan, Qiang Yang

To address these issues, we propose a Federated Evaluation framework of Large Language Models, named FedEval-LLM, that provides reliable performance measurements of LLMs on downstream tasks without the reliance on labeled test sets and external tools, thus ensuring strong privacy-preserving capability.

Federated Learning · Privacy Preserving

Hyperparameter Optimization for SecureBoost via Constrained Multi-Objective Federated Learning

no code implementations6 Apr 2024 Yan Kang, Ziyao Ren, Lixin Fan, Linghua Yang, Yongxin Tong, Qiang Yang

This vulnerability may lead the current heuristic hyperparameter configuration of SecureBoost to a suboptimal trade-off between utility, privacy, and efficiency, which are pivotal elements toward a trustworthy federated learning system.

Bayesian Optimization · Hyperparameter Optimization +2

UMAIR-FPS: User-aware Multi-modal Animation Illustration Recommendation Fusion with Painting Style

1 code implementation16 Feb 2024 Yan Kang, Hao Lin, Mingjian Yang, Shin-Jye Lee

In the feature extraction phase, for image features, we are the first to combine image painting-style features with semantic features, constructing a dual-output image encoder to enhance representation.
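
A dual-output encoder of the kind described is straightforward to sketch: one backbone, two heads, so each image yields both a painting-style vector and a semantic vector. Layer choices below are illustrative assumptions, not the paper's architecture.

```python
# Sketch of a dual-output image encoder: a shared backbone feeding separate
# painting-style and semantic heads.
import torch
import torch.nn as nn

class DualOutputEncoder(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.style_head = nn.Linear(32, dim)     # painting-style features
        self.semantic_head = nn.Linear(32, dim)  # semantic-content features

    def forward(self, img):
        h = self.backbone(img)
        return self.style_head(h), self.semantic_head(h)

enc = DualOutputEncoder()
style, semantic = enc(torch.randn(4, 3, 64, 64))
print(style.shape, semantic.shape)  # torch.Size([4, 128]) torch.Size([4, 128])
```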

Image Generation · Multi-modal Recommendation +1

SNED: Superposition Network Architecture Search for Efficient Video Diffusion Model

no code implementations CVPR 2024 Zhengang Li, Yan Kang, Yuchen Liu, Difan Liu, Tobias Hinz, Feng Liu, Yanzhi Wang

Our method employs a supernet training paradigm that targets various model cost and resolution options using a weight-sharing method.
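
The general weight-sharing paradigm can be shown in miniature: sub-networks of different widths reuse slices of one shared weight tensor, so many model-cost options train jointly. This is a sketch of the paradigm only, not SNED's search space or training scheme.

```python
# Toy weight-sharing supernet layer: narrower sub-networks use a slice of the
# full weight matrix, so all cost options share (and jointly train) parameters.
import torch
import torch.nn as nn

class SliceableLinear(nn.Module):
    def __init__(self, max_in=64, max_out=64):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(max_out, max_in) * 0.02)

    def forward(self, x, width):
        # a narrower sub-network simply uses the top rows of the shared weights
        return x @ self.weight[:width].T

layer = SliceableLinear()
x = torch.randn(2, 64)
for width in (16, 32, 64):  # sampled model-cost options share parameters
    print(layer(x, width).shape)
```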

Video Generation

Quantitative perfusion maps using a novelty spatiotemporal convolutional neural network

no code implementations8 Dec 2023 Anbo Cao, Pin-Yu Le, Zhonghui Qie, Haseeb Hassan, Yingwei Guo, Asim Zaman, Jiaxi Lu, Xueqiang Zeng, Huihui Yang, Xiaoqiang Miao, Taiyu Han, Guangtao Huang, Yan Kang, Yu Luo, Jia Guo

The results indicate that the network can accurately estimate perfusion parameters, including cerebral blood volume (CBV), cerebral blood flow (CBF), and time to maximum of the residual function (Tmax).

SSIM

Grounding Foundation Models through Federated Transfer Learning: A General Framework

no code implementations29 Nov 2023 Yan Kang, Tao Fan, Hanlin Gu, Xiaojin Zhang, Lixin Fan, Qiang Yang

Motivated by the strong growth in FTL-FM research and the potential impact of FTL-FM on industrial applications, we propose an FTL-FM framework that formulates problems of grounding FMs in the federated learning setting, construct a detailed taxonomy based on the FTL-FM framework to categorize state-of-the-art FTL-FM works, and comprehensively overview FTL-FM works based on the proposed taxonomy.

Federated Learning · Privacy Preserving +1

FATE-LLM: A Industrial Grade Federated Learning Framework for Large Language Models

1 code implementation16 Oct 2023 Tao Fan, Yan Kang, Guoqiang Ma, Weijing Chen, Wenbin Wei, Lixin Fan, Qiang Yang

FATE-LLM (1) facilitates federated learning for large language models (coined FedLLM); (2) promotes efficient training of FedLLM using parameter-efficient fine-tuning methods; (3) protects the intellectual property of LLMs; (4) preserves data privacy during training and inference through privacy-preserving mechanisms.
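
The parameter-efficient FedLLM pattern point (2) refers to can be illustrated generically: clients train small adapter weights on a frozen base model, and only those adapters are averaged on the server, which also keeps the base LLM's weights private. The NumPy stand-in below is an assumption-laden sketch; FATE-LLM's actual APIs differ.

```python
# Minimal sketch of federated parameter-efficient fine-tuning: FedAvg applied
# to LoRA-style adapter tensors while the base LLM stays frozen everywhere.
import numpy as np

def local_step(adapter, grads, lr=0.1):
    # each client updates only its adapter; the frozen base LLM never moves
    return {k: w - lr * grads[k] for k, w in adapter.items()}

def fedavg(adapters):
    # server averages adapter tensors across clients
    return {k: np.mean([a[k] for a in adapters], axis=0) for k in adapters[0]}

rng = np.random.default_rng(0)
init = {"lora_A": rng.normal(size=(8, 64)), "lora_B": np.zeros((64, 8))}
clients = [local_step(init, {k: rng.normal(size=v.shape) for k, v in init.items()})
           for _ in range(3)]
global_adapter = fedavg(clients)
print(global_adapter["lora_A"].shape)  # (8, 64)
```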

Federated Learning · parameter-efficient fine-tuning +1

Privacy in Large Language Models: Attacks, Defenses and Future Directions

no code implementations16 Oct 2023 Haoran Li, Yulin Chen, Jinglong Luo, Jiecong Wang, Hao Peng, Yan Kang, Xiaojin Zhang, Qi Hu, Chunkit Chan, Zenglin Xu, Bryan Hooi, Yangqiu Song

The advancement of large language models (LLMs) has significantly enhanced the ability to effectively tackle various downstream NLP tasks and unify these tasks into generative pipelines.

SecureBoost Hyperparameter Tuning via Multi-Objective Federated Learning

no code implementations20 Jul 2023 Ziyao Ren, Yan Kang, Lixin Fan, Linghua Yang, Yongxin Tong, Qiang Yang

To fill this gap, we propose a Constrained Multi-Objective SecureBoost (CMOSB) algorithm to find Pareto-optimal solutions, each of which is a set of hyperparameters achieving an optimal tradeoff between utility loss, training cost, and privacy leakage.
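
For readers unfamiliar with the Pareto notion used here, the helper below keeps exactly the candidates whose (utility loss, training cost, privacy leakage) vector is not dominated by any other candidate, all objectives minimized. It illustrates the concept only, not the CMOSB algorithm.

```python
# Pareto front over hyperparameter candidates scored on three minimized
# objectives: (utility loss, training cost, privacy leakage).
def pareto_front(points):
    def dominates(a, b):  # a dominates b: no worse anywhere, strictly better somewhere
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

candidates = [
    (0.10, 30.0, 0.20),
    (0.12, 25.0, 0.18),
    (0.15, 40.0, 0.30),  # dominated by the first candidate, so filtered out
    (0.08, 50.0, 0.25),
]
print(pareto_front(candidates))  # the three non-dominated tradeoffs
```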

Privacy Preserving · Vertical Federated Learning

A Meta-learning Framework for Tuning Parameters of Protection Mechanisms in Trustworthy Federated Learning

no code implementations28 May 2023 Xiaojin Zhang, Yan Kang, Lixin Fan, Kai Chen, Qiang Yang

Motivated by this requirement, we propose a framework that (1) formulates TFL as a problem of finding a protection mechanism to optimize the tradeoff between privacy leakage, utility loss, and efficiency reduction and (2) formally defines bounded measurements of the three factors.
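
Stated as an optimization problem, the formulation described above might be written as follows; the symbols $\epsilon_p, \epsilon_u, \epsilon_e$ for privacy leakage, utility loss, and efficiency reduction, the weights $\lambda$, and the bounds $C$ are notational assumptions rather than the paper's exact notation.

```latex
\min_{\theta \in \Theta}\;
  \lambda_p\,\epsilon_p(\theta) + \lambda_u\,\epsilon_u(\theta) + \lambda_e\,\epsilon_e(\theta)
\quad \text{s.t.}\quad
  \epsilon_p(\theta) \le C_p,\;\;
  \epsilon_u(\theta) \le C_u,\;\;
  \epsilon_e(\theta) \le C_e
```

Here $\theta$ parameterizes the protection mechanism, and each $\epsilon$ is one of the bounded measurements the framework defines.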

Federated Learning · Meta-Learning

Optimizing Privacy, Utility and Efficiency in Constrained Multi-Objective Federated Learning

no code implementations29 Apr 2023 Yan Kang, Hanlin Gu, Xingxing Tang, Yuanqin He, Yuzhu Zhang, Jinnan He, Yuxing Han, Lixin Fan, Kai Chen, Qiang Yang

Different from existing CMOFL works focusing on utility, efficiency, fairness, and robustness, we consider optimizing privacy leakage along with utility loss and training cost, the three primary objectives of a TFL system.

Fairness · Federated Learning

FedPass: Privacy-Preserving Vertical Federated Deep Learning with Adaptive Obfuscation

no code implementations30 Jan 2023 Hanlin Gu, Jiahuan Luo, Yan Kang, Lixin Fan, Qiang Yang

Vertical federated learning (VFL) allows an active party with labeled feature to leverage auxiliary features from the passive parties to improve model performance.
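
The active/passive split described here is easy to see in a minimal two-party forward pass: the passive party holds auxiliary features and no labels, and the active party fuses both parties' embeddings to compute the labeled loss. Shapes below are illustrative assumptions.

```python
# Minimal two-party VFL forward pass: only intermediate embeddings cross the
# party boundary, never raw features or labels.
import torch
import torch.nn as nn

active_net = nn.Linear(5, 8)     # active party: its own features -> embedding
passive_net = nn.Linear(7, 8)    # passive party: auxiliary features -> embedding
head = nn.Linear(16, 2)          # active party fuses embeddings and predicts

x_active = torch.randn(4, 5)     # active party's features (it also holds labels)
x_passive = torch.randn(4, 7)    # passive party's features for the same samples
labels = torch.randint(0, 2, (4,))

z = torch.cat([active_net(x_active), passive_net(x_passive)], dim=1)
loss = nn.functional.cross_entropy(head(z), labels)
loss.backward()                  # gradients flow back to both parties' nets
print(float(loss))
```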

Deep Learning · Privacy Preserving +1

Vertical Federated Learning: Concepts, Advances and Challenges

no code implementations23 Nov 2022 Yang Liu, Yan Kang, Tianyuan Zou, Yanhong Pu, Yuanqin He, Xiaozhou Ye, Ye Ouyang, Ya-Qin Zhang, Qiang Yang

Motivated by the rapid growth in VFL research and real-world applications, we provide a comprehensive review of the concept and algorithms of VFL, as well as current advances and challenges in various aspects, including effectiveness, efficiency, and privacy.

Fairness · Privacy Preserving +1

A Framework for Evaluating Privacy-Utility Trade-off in Vertical Federated Learning

1 code implementation8 Sep 2022 Yan Kang, Jiahuan Luo, Yuanqin He, Xiaojin Zhang, Lixin Fan, Qiang Yang

We then use this framework as a guide to comprehensively evaluate a broad range of protection mechanisms against most of the state-of-the-art privacy attacks for three widely deployed VFL algorithms.

Privacy Preserving · Vertical Federated Learning

Trading Off Privacy, Utility and Efficiency in Federated Learning

no code implementations1 Sep 2022 Xiaojin Zhang, Yan Kang, Kai Chen, Lixin Fan, Qiang Yang

In addition, it is a mandate for a federated learning system to achieve high efficiency in order to enable large-scale model training and deployment.

Vertical Federated Learning

A Hybrid Self-Supervised Learning Framework for Vertical Federated Learning

1 code implementation18 Aug 2022 Yuanqin He, Yan Kang, Xinyuan Zhao, Jiahuan Luo, Lixin Fan, Yuxing Han, Qiang Yang

In this work, we propose a Federated Hybrid Self-Supervised Learning framework, named FedHSSL, that utilizes cross-party views (i.e., dispersed features) of samples aligned among parties and local views (i.e., augmentation) of unaligned samples within each party to improve the representation learning capability of the VFL joint model.
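
The cross-party-view idea can be illustrated with a standard contrastive objective, treating each party's embedding of the same aligned sample as another "view" of it. This is a simplified stand-in under that assumption, not FedHSSL itself.

```python
# Cross-party contrastive sketch: embeddings of the same aligned samples at
# two parties are positives; all other pairings are negatives.
import torch
import torch.nn.functional as F

def cross_party_contrastive(z_a, z_b, temperature=0.1):
    # z_a, z_b: (n, d) embeddings of the same n aligned samples at two parties
    z_a, z_b = F.normalize(z_a, dim=1), F.normalize(z_b, dim=1)
    logits = z_a @ z_b.T / temperature     # (n, n) cross-party similarities
    targets = torch.arange(z_a.size(0))    # matching rows are the positives
    return F.cross_entropy(logits, targets)

print(float(cross_party_contrastive(torch.randn(8, 16), torch.randn(8, 16))))
```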

Inference Attack · Representation Learning +2

Batch Label Inference and Replacement Attacks in Black-Boxed Vertical Federated Learning

no code implementations10 Dec 2021 Yang Liu, Tianyuan Zou, Yan Kang, Wenhan Liu, Yuanqin He, Zhihao Yi, Qiang Yang

An immediate defense strategy is to protect sample-level messages communicated with Homomorphic Encryption (HE), and in this way only the batch-averaged local gradients are exposed to each party (termed black-boxed VFL).
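
The exposure gap in this black-boxed setting is easy to see numerically: with per-sample messages protected, a party only ever observes the batch-averaged local gradient, not the per-sample rows. The toy NumPy illustration below shows the shape of what is exposed; it is not the paper's attack.

```python
# Black-boxed VFL exposure in miniature: HE hides the 32 per-sample gradient
# rows; only their batch average crosses the party boundary.
import numpy as np

rng = np.random.default_rng(0)
per_sample_grads = rng.normal(size=(32, 10))   # what HE keeps hidden
batch_avg = per_sample_grads.mean(axis=0)      # the only exposed quantity

# the attacks above target label inference from exactly this aggregate signal
print(batch_avg.shape)  # (10,)
```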

Inference Attack · Vertical Federated Learning

Privacy-preserving Federated Adversarial Domain Adaption over Feature Groups for Interpretability

no code implementations22 Nov 2021 Yan Kang, Yang Liu, Yuezhou Wu, Guoqiang Ma, Qiang Yang

We present a novel privacy-preserving federated adversarial domain adaptation approach (PrADA) to address an under-studied but practical cross-silo federated domain adaptation problem, in which the party of the target domain is insufficient in both samples and features.
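
As general background for the adversarial part, the standard building block of adversarial domain adaptation is a gradient reversal layer, shown below; PrADA's actual method adds feature grouping and federation on top of this kind of idea.

```python
# Gradient reversal layer: identity on the forward pass, negated (scaled)
# gradient on the backward pass, so the feature extractor learns to *confuse*
# a domain discriminator placed after it.
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None  # no gradient w.r.t. lam

x = torch.randn(4, 8, requires_grad=True)
GradReverse.apply(x, 1.0).sum().backward()
print(x.grad[0, 0])  # -1.0: gradients were flipped
```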

Domain Adaptation · Privacy Preserving +1

FedCG: Leverage Conditional GAN for Protecting Privacy and Maintaining Competitive Performance in Federated Learning

3 code implementations16 Nov 2021 Yuezhou Wu, Yan Kang, Jiahuan Luo, Yuanqin He, Qiang Yang

Federated learning (FL) aims to protect data privacy by enabling clients to build machine learning models collaboratively without sharing their private data.

Federated Learning · Privacy Preserving

SecureBoost+: Large Scale and High-Performance Vertical Federated Gradient Boosting Decision Tree

no code implementations21 Oct 2021 Tao Fan, Weijing Chen, Guoqiang Ma, Yan Kang, Lixin Fan, Qiang Yang

Gradient boosting decision tree (GBDT) is an ensemble machine learning algorithm that is widely used in industry due to its good performance and ease of interpretation.

Privacy Preserving · Vertical Federated Learning

Federated Deep Learning with Bayesian Privacy

no code implementations27 Sep 2021 Hanlin Gu, Lixin Fan, Bowen Li, Yan Kang, Yuan YAO, Qiang Yang

To address this dilemma, we propose a novel Bayesian Privacy (BP) framework, which formulates Bayesian restoration attacks as the probability of reconstructing private data from observed public information.

Deep Learning · Federated Learning +2

FedCVT: Semi-supervised Vertical Federated Learning with Cross-view Training

1 code implementation25 Aug 2020 Yan Kang, Yang Liu, Xinle Liang

In this article, we propose Federated Cross-view Training (FedCVT), a semi-supervised learning approach that improves the performance of the VFL model with limited aligned samples.

Representation Learning · Vertical Federated Learning

A Communication Efficient Collaborative Learning Framework for Distributed Features

no code implementations24 Dec 2019 Yang Liu, Yan Kang, Xinwei Zhang, Liping Li, Yong Cheng, Tianjian Chen, Mingyi Hong, Qiang Yang

We introduce a collaborative learning framework allowing multiple parties having different sets of attributes about the same user to jointly build models without exposing their raw data or model parameters.

Secure and Efficient Federated Transfer Learning

no code implementations29 Oct 2019 Shreya Sharma, Xing Chaoping, Yang Liu, Yan Kang

Federated Transfer Learning (FTL) was introduced in [1] to improve statistical models under a data federation that allows knowledge to be shared without compromising user privacy and enables complementary knowledge to be transferred across the network.

Cryptography and Security

Secure Federated Transfer Learning

no code implementations8 Dec 2018 Yang Liu, Yan Kang, Chaoping Xing, Tianjian Chen, Qiang Yang

A secure transfer cross-validation approach is also proposed to safeguard FTL performance under the federation.

BIG-bench Machine Learning · Privacy Preserving +1
