no code implementations • 21 Mar 2025 • Bhishma Dedhia, David Bourgin, Krishna Kumar Singh, Yuheng Li, Yan Kang, Zhan Xu, Niraj K. Jha, Yuchen Liu
At each diffusion step, VINs encode global semantics from the noisy inputs of local chunks; the encoded representations, in turn, guide DiTs in denoising the chunks in parallel.
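A minimal sketch of this chunked-denoising idea, with hypothetical module names (ToyVIN, ToyChunkDenoiser) standing in for the paper's architecture: a lightweight encoder summarizes all noisy chunks into a global context, and each chunk is then denoised in parallel conditioned on that context.

```python
import torch
import torch.nn as nn

class ToyVIN(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.summarize = nn.Linear(dim, dim)  # per-chunk summary
        self.mix = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)

    def forward(self, chunks):                # chunks: (B, n_chunks, T, dim)
        pooled = self.summarize(chunks.mean(dim=2))  # (B, n_chunks, dim)
        return self.mix(pooled)                      # global semantics

class ToyChunkDenoiser(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Linear(2 * dim, dim)

    def forward(self, chunk, context):        # denoise one chunk given context
        ctx = context.unsqueeze(1).expand_as(chunk)
        return self.net(torch.cat([chunk, ctx], dim=-1))

B, n_chunks, T, dim = 2, 4, 16, 64
chunks = torch.randn(B, n_chunks, T, dim)
context = ToyVIN(dim)(chunks)                 # one pass encodes all chunks
denoiser = ToyChunkDenoiser(dim)
# Chunks are denoised independently, i.e., parallelizable across n_chunks.
denoised = torch.stack(
    [denoiser(chunks[:, i], context[:, i]) for i in range(n_chunks)], dim=1)
print(denoised.shape)  # torch.Size([2, 4, 16, 64])
```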
no code implementations • 22 Dec 2024 • Haoran You, Connelly Barnes, Yuqian Zhou, Yan Kang, Zhenbang Du, Wei Zhou, Lingzhi Zhang, Yotam Nitzan, Xiaoyang Liu, Zhe Lin, Eli Shechtman, Sohrab Amirghodsi, Yingyan Celine Lin
To address this, we propose DiffRatio-MoD, a dynamic DiT inference framework with differentiable compression ratios, which automatically learns to dynamically route computation across layers and timesteps for each image token, resulting in Mixture-of-Depths (MoD) efficient DiT models.
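A minimal sketch of per-token depth routing in the spirit of Mixture-of-Depths, using hypothetical names rather than the paper's code: a tiny router scores each image token and scales its residual update, so the network learns which tokens deserve compute at each layer and timestep.

```python
import torch
import torch.nn as nn

class RoutedBlock(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.router = nn.Linear(dim, 1)
        self.block = nn.Sequential(nn.LayerNorm(dim),
                                   nn.Linear(dim, dim), nn.GELU(),
                                   nn.Linear(dim, dim))

    def forward(self, x):                     # x: (B, N, dim) image tokens
        gate = torch.sigmoid(self.router(x))  # (B, N, 1), differentiable
        # Soft routing during training: the gate scales each token's residual
        # update, so the router learns per-token importance; at inference,
        # tokens whose gate falls below a threshold could skip the block.
        return x + gate * self.block(x)

x = torch.randn(2, 196, 64)
y = RoutedBlock(64)(x)
print(y.shape)  # torch.Size([2, 196, 64])
```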
no code implementations • 20 Dec 2024 • Zihan Ding, Chi Jin, Difan Liu, Haitian Zheng, Krishna Kumar Singh, Qiang Zhang, Yan Kang, Zhe Lin, Yuchen Liu
In this work, we introduce a distillation method that combines variational score distillation and consistency distillation to achieve few-step video generation, maintaining both high quality and diversity.
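A toy sketch of the combined objective (illustrative only; the actual variational score distillation term involves a learned score network, which is elided here): the student's few-step output is pulled toward both a consistency target and a score-distillation target from the teacher.

```python
import torch

def combined_distill_loss(student_out, consistency_target, score_target, w=0.5):
    # Consistency term: match the teacher's self-consistent prediction.
    l_cd = torch.mean((student_out - consistency_target) ** 2)
    # Score-distillation-style term: follow the teacher's denoising direction.
    l_vsd = torch.mean((student_out - score_target) ** 2)
    return l_cd + w * l_vsd

student_out = torch.randn(4, 3, 8, 8, requires_grad=True)
loss = combined_distill_loss(student_out,
                             torch.randn(4, 3, 8, 8),  # teacher consistency target
                             torch.randn(4, 3, 8, 8))  # teacher score target
loss.backward()
print(float(loss))
```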
no code implementations • 18 Nov 2024 • Tao Fan, Yan Kang, Guoqiang Ma, Lixin Fan, Kai Chen, Qiang Yang
By adapting Large Language Models (LLMs) to domain-specific tasks or enriching them with domain-specific knowledge, we can fully harness the capabilities of LLMs.
no code implementations • 23 Sep 2024 • Alireza Ganjdanesh, Yan Kang, Yuchen Liu, Richard Zhang, Zhe Lin, Heng Huang
Finally, with a selected configuration, we fine-tune our pruned experts to obtain our mixture of efficient experts.
1 code implementation • 27 Jun 2024 • Hao Yu, Xin Yang, Xin Gao, Yan Kang, Hao Wang, Junbo Zhang, Tianrui Li
In addition, we design a selective prompt fusion mechanism for aggregating knowledge of global prompts distilled from different clients.
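A minimal sketch of selective prompt fusion (hypothetical, not the paper's code): global prompts distilled from different clients are fused with attention-style weights computed against a local query, so each client selectively absorbs the most relevant global knowledge.

```python
import torch
import torch.nn.functional as F

def fuse_prompts(client_prompts, local_query):
    # client_prompts: (n_clients, prompt_len, dim); local_query: (dim,)
    scores = client_prompts.mean(dim=1) @ local_query        # (n_clients,)
    weights = F.softmax(scores, dim=0)                       # selection weights
    return (weights[:, None, None] * client_prompts).sum(0)  # fused prompt

prompts = torch.randn(5, 10, 64)   # prompts distilled from 5 clients
query = torch.randn(64)            # summary of the local task
fused = fuse_prompts(prompts, query)
print(fused.shape)  # torch.Size([10, 64])
```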
no code implementations • 18 Jun 2024 • Tao Fan, Yan Kang, Weijing Chen, Hanlin Gu, Yuanfeng Song, Lixin Fan, Kai Chen, Qiang Yang
In the context of real-world applications, leveraging large language models (LLMs) for domain-specific tasks often faces two major challenges: domain-specific knowledge privacy and constrained resources.
1 code implementation • 4 Jun 2024 • Tao Fan, Guoqiang Ma, Yan Kang, Hanlin Gu, Yuanfeng Song, Lixin Fan, Kai Chen, Qiang Yang
However, a significant gap remains in the simultaneous mutual enhancement of both the server's LLM and clients' SLMs.
no code implementations • 3 Jun 2024 • Hanlin Gu, Jiahuan Luo, Yan Kang, Yuan Yao, Gongxi Zhu, Bowen Li, Lixin Fan, Qiang Yang
Federated learning (FL) has emerged as a collaborative approach that allows multiple clients to jointly learn a machine learning model without sharing their private data.
no code implementations • 31 May 2024 • Xiaojin Zhang, Yahao Pang, Yan Kang, Wei Chen, Lixin Fan, Hai Jin, Qiang Yang
Therefore, it is essential to evaluate the balance between the risk of privacy leakage and the loss of utility when deploying effective protection mechanisms.
no code implementations • CVPR 2024 • Hongjie Wang, Difan Liu, Yan Kang, Yijun Li, Zhe Lin, Niraj K. Jha, Yuchen Liu
Specifically, for single-denoising-step pruning, we develop a novel ranking algorithm, Generalized Weighted Page Rank (G-WPR), to identify redundant tokens, and a similarity-based recovery method to restore tokens for the convolution operation.
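A simplified stand-in for this pipeline (the toy norm-based score below replaces the paper's G-WPR ranking): tokens are ranked by importance, the bottom fraction is pruned before attention, and pruned positions are later recovered by copying their most similar kept token so the convolution sees a dense token grid.

```python
import torch

def prune_and_recover(tokens, keep_ratio=0.5):
    B, N, D = tokens.shape
    scores = tokens.norm(dim=-1)                 # toy importance score
    k = int(N * keep_ratio)
    keep_idx = scores.topk(k, dim=1).indices     # (B, k) tokens to keep
    kept = torch.gather(tokens, 1, keep_idx.unsqueeze(-1).expand(-1, -1, D))

    # ... attention would run on only the k kept tokens here ...

    # Recovery: each original position copies its most similar kept token,
    # restoring a dense grid for the following convolution.
    sim = tokens @ kept.transpose(1, 2)          # (B, N, k) similarities
    nearest = sim.argmax(dim=-1)                 # (B, N)
    return torch.gather(kept, 1, nearest.unsqueeze(-1).expand(-1, -1, D))

x = torch.randn(2, 64, 32)
print(prune_and_recover(x).shape)  # torch.Size([2, 64, 32])
```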
no code implementations • 4 May 2024 • Xin Gao, Xin Yang, Hao Yu, Yan Kang, Tianrui Li
Federated Class-Incremental Learning (FCIL) focuses on continually transferring the previous knowledge to learn new classes in dynamic Federated Learning (FL).
no code implementations • 18 Apr 2024 • Yuanqin He, Yan Kang, Lixin Fan, Qiang Yang
To address these issues, we propose a Federated Evaluation framework of Large Language Models, named FedEval-LLM, that provides reliable performance measurements of LLMs on downstream tasks without the reliance on labeled test sets and external tools, thus ensuring strong privacy-preserving capability.
no code implementations • 6 Apr 2024 • Yan Kang, Ziyao Ren, Lixin Fan, Linghua Yang, Yongxin Tong, Qiang Yang
This vulnerability may lead the current heuristic hyperparameter configuration of SecureBoost to a suboptimal trade-off between utility, privacy, and efficiency, which are pivotal elements toward a trustworthy federated learning system.
1 code implementation • 16 Feb 2024 • Yan Kang, Hao Lin, Mingjian Yang, Shin-Jye Lee
In the feature extraction phase, for image features, we are the first to combine image painting-style features with semantic features, constructing a dual-output image encoder to enhance representation.
no code implementations • CVPR 2024 • Zhengang Li, Yan Kang, Yuchen Liu, Difan Liu, Tobias Hinz, Feng Liu, Yanzhi Wang
Our method employs a supernet training paradigm that targets various model cost and resolution options using a weight-sharing method.
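A minimal weight-sharing sketch (illustrative, not the paper's code): one "supernet" linear layer serves several widths by slicing its weight matrix, so subnets of different cost share parameters during training.

```python
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

class SlimmableLinear(nn.Module):
    def __init__(self, in_dim=64, out_dim=64, widths=(16, 32, 64)):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_dim, in_dim) * 0.02)
        self.bias = nn.Parameter(torch.zeros(out_dim))
        self.widths = widths

    def forward(self, x, width=None):
        w = width or random.choice(self.widths)  # sample a subnet width
        # Slicing the shared weight yields a cheaper sub-layer.
        return F.linear(x, self.weight[:w], self.bias[:w])

layer = SlimmableLinear()
x = torch.randn(2, 64)
for w in (16, 32, 64):                           # one supernet, many costs
    print(layer(x, width=w).shape)
```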
no code implementations • 27 Dec 2023 • Hanlin Gu, Xinyuan Zhao, Gongxi Zhu, Yuxing Han, Yan Kang, Lixin Fan, Qiang Yang
Concerns about utility, privacy, and training efficiency in FL have garnered significant research attention.
no code implementations • 8 Dec 2023 • Anbo Cao, Pin-Yu Le, Zhonghui Qie, Haseeb Hassan, Yingwei Guo, Asim Zaman, Jiaxi Lu, Xueqiang Zeng, Huihui Yang, Xiaoqiang Miao, Taiyu Han, Guangtao Huang, Yan Kang, Yu Luo, Jia Guo
The results indicate that the network can accurately estimate perfusion parameters, including cerebral blood volume (CBV), cerebral blood flow (CBF), and time to maximum of the residual function (Tmax).
no code implementations • 29 Nov 2023 • Yan Kang, Tao Fan, Hanlin Gu, Xiaojin Zhang, Lixin Fan, Qiang Yang
Motivated by the strong growth in FTL-FM research and the potential impact of FTL-FM on industrial applications, we propose an FTL-FM framework that formulates problems of grounding FMs in the federated learning setting, construct a detailed taxonomy based on this framework to categorize state-of-the-art FTL-FM works, and comprehensively review FTL-FM works based on the proposed taxonomy.
1 code implementation • 16 Oct 2023 • Tao Fan, Yan Kang, Guoqiang Ma, Weijing Chen, Wenbin Wei, Lixin Fan, Qiang Yang
FATE-LLM (1) facilitates federated learning for large language models (coined FedLLM); (2) promotes efficient training of FedLLM using parameter-efficient fine-tuning methods; (3) protects the intellectual property of LLMs; (4) preserves data privacy during training and inference through privacy-preserving mechanisms.
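A minimal sketch of the parameter-efficient FedLLM recipe (illustrative; FATE-LLM's actual API differs): each client fine-tunes only small LoRA-style adapter matrices locally, and the server federated-averages just those adapters, so the frozen base LLM weights never leave the clients.

```python
import torch

def fedavg_adapters(client_adapters):
    # client_adapters: list of dicts {name: tensor} holding adapter weights only
    keys = client_adapters[0].keys()
    return {k: torch.stack([c[k] for c in client_adapters]).mean(0)
            for k in keys}

# Three clients' locally fine-tuned LoRA matrices (base LLM stays local/frozen).
clients = [{"lora_A": torch.randn(8, 768), "lora_B": torch.randn(768, 8)}
           for _ in range(3)]
global_adapter = fedavg_adapters(clients)
print(global_adapter["lora_A"].shape)  # torch.Size([8, 768])
```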
no code implementations • 16 Oct 2023 • Haoran Li, Yulin Chen, Jinglong Luo, Jiecong Wang, Hao Peng, Yan Kang, Xiaojin Zhang, Qi Hu, Chunkit Chan, Zenglin Xu, Bryan Hooi, Yangqiu Song
The advancement of large language models (LLMs) has significantly enhanced the ability to effectively tackle various downstream NLP tasks and unify these tasks into generative pipelines.
no code implementations • 20 Jul 2023 • Ziyao Ren, Yan Kang, Lixin Fan, Linghua Yang, Yongxin Tong, Qiang Yang
To fill this gap, we propose a Constrained Multi-Objective SecureBoost (CMOSB) algorithm to find Pareto optimal solutions, each of which is a set of hyperparameters achieving an optimal tradeoff between utility loss, training cost, and privacy leakage.
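A minimal Pareto-front filter over the three objectives (illustrative; CMOSB itself is a constrained evolutionary search): each candidate is a hyperparameter set scored on utility loss, training cost, and privacy leakage, all to be minimized, and the non-dominated candidates form the front.

```python
def pareto_front(candidates):
    # candidates: list of (hyperparams, (utility_loss, cost, leakage))
    def dominates(a, b):
        # a dominates b if it is no worse everywhere and strictly better somewhere
        return all(x <= y for x, y in zip(a, b)) and a != b
    return [(h, obj) for h, obj in candidates
            if not any(dominates(o2, obj) for _, o2 in candidates)]

candidates = [({"depth": 4}, (0.10, 30.0, 0.20)),
              ({"depth": 6}, (0.08, 55.0, 0.35)),
              ({"depth": 8}, (0.12, 80.0, 0.40))]  # dominated by depth=4
for h, obj in pareto_front(candidates):
    print(h, obj)   # depth=4 and depth=6 survive
```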
no code implementations • 28 May 2023 • Xiaojin Zhang, Yan Kang, Lixin Fan, Kai Chen, Qiang Yang
Motivated by this requirement, we propose a framework that (1) formulates TFL as a problem of finding a protection mechanism to optimize the tradeoff between privacy leakage, utility loss, and efficiency reduction and (2) formally defines bounded measurements of the three factors.
no code implementations • 29 Apr 2023 • Yan Kang, Hanlin Gu, Xingxing Tang, Yuanqin He, Yuzhu Zhang, Jinnan He, Yuxing Han, Lixin Fan, Kai Chen, Qiang Yang
Different from existing CMOFL works focusing on utility, efficiency, fairness, and robustness, we consider optimizing privacy leakage along with utility loss and training cost, the three primary objectives of a TFL system.
no code implementations • 30 Jan 2023 • Hanlin Gu, Jiahuan Luo, Yan Kang, Lixin Fan, Qiang Yang
Vertical federated learning (VFL) allows an active party with labeled features to leverage auxiliary features from the passive parties to improve model performance.
no code implementations • 23 Nov 2022 • Yang Liu, Yan Kang, Tianyuan Zou, Yanhong Pu, Yuanqin He, Xiaozhou Ye, Ye Ouyang, Ya-Qin Zhang, Qiang Yang
Motivated by the rapid growth in VFL research and real-world applications, we provide a comprehensive review of the concept and algorithms of VFL, as well as current advances and challenges in various aspects, including effectiveness, efficiency, and privacy.
1 code implementation • 8 Sep 2022 • Yan Kang, Jiahuan Luo, Yuanqin He, Xiaojin Zhang, Lixin Fan, Qiang Yang
We then use this framework as a guide to comprehensively evaluate a broad range of protection mechanisms against most of the state-of-the-art privacy attacks for three widely deployed VFL algorithms.
no code implementations • 1 Sep 2022 • Xiaojin Zhang, Yan Kang, Kai Chen, Lixin Fan, Qiang Yang
In addition, it is a mandate for a federated learning system to achieve high efficiency in order to enable large-scale model training and deployment.
no code implementations • 26 Aug 2022 • Lichen Jia, Bowen Tang, Chenggang Wu, Zhe Wang, Zihan Jiang, Yuanming Lai, Yan Kang, Ning Liu, Jingfeng Zhang
Binary code similarity detection (BCSD) measures the similarity between two binary executables.
1 code implementation • 18 Aug 2022 • Yuanqin He, Yan Kang, Xinyuan Zhao, Jiahuan Luo, Lixin Fan, Yuxing Han, Qiang Yang
In this work, we propose a Federated Hybrid Self-Supervised Learning framework, named FedHSSL, that utilizes cross-party views (i.e., dispersed features) of samples aligned among parties and local views (i.e., augmentations) of unaligned samples within each party to improve the representation learning capability of the VFL joint model.
no code implementations • 10 Dec 2021 • Yang Liu, Tianyuan Zou, Yan Kang, Wenhan Liu, Yuanqin He, Zhihao Yi, Qiang Yang
An immediate defense strategy is to protect sample-level messages communicated with Homomorphic Encryption (HE), and in this way only the batch-averaged local gradients are exposed to each party (termed black-boxed VFL).
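A minimal sketch of this black-boxed VFL idea, assuming the third-party python-paillier package (`phe`); the actual protocol involves more machinery. Per-sample messages are encrypted, and additive homomorphism lets the aggregate be computed on ciphertexts, so only the batch-averaged gradient is ever decrypted.

```python
from phe import paillier

pub, priv = paillier.generate_paillier_keypair(n_length=1024)

sample_grads = [0.12, -0.05, 0.33, 0.08]            # per-sample gradients
encrypted = [pub.encrypt(g) for g in sample_grads]  # sample-level messages

# Aggregation happens on ciphertexts: no individual gradient is revealed.
enc_sum = encrypted[0]
for c in encrypted[1:]:
    enc_sum = enc_sum + c

batch_avg = priv.decrypt(enc_sum) / len(sample_grads)
print(round(batch_avg, 4))  # 0.12
```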
no code implementations • 22 Nov 2021 • Yan Kang, Yang Liu, Yuezhou Wu, Guoqiang Ma, Qiang Yang
We present a novel privacy-preserving federated adversarial domain adaptation approach (PrADA) to address an under-studied but practical cross-silo federated domain adaptation problem, in which the party of the target domain is insufficient in both samples and features.
3 code implementations • 16 Nov 2021 • Yuezhou Wu, Yan Kang, Jiahuan Luo, Yuanqin He, Qiang Yang
Federated learning (FL) aims to protect data privacy by enabling clients to build machine learning models collaboratively without sharing their private data.
no code implementations • 21 Oct 2021 • Tao Fan, Weijing Chen, Guoqiang Ma, Yan Kang, Lixin Fan, Qiang Yang
Gradient boosting decision tree (GBDT) is an ensemble machine learning algorithm that is widely used in industry due to its strong performance and interpretability.
no code implementations • 27 Sep 2021 • Hanlin Gu, Lixin Fan, Bowen Li, Yan Kang, Yuan Yao, Qiang Yang
To address the aforementioned perplexity, we propose a novel Bayesian Privacy (BP) framework which enables Bayesian restoration attacks to be formulated as the probability of reconstructing private data from observed public information.
1 code implementation • 25 Aug 2020 • Yan Kang, Yang Liu, Xinle Liang
In this article, we propose Federated Cross-view Training (FedCVT), a semi-supervised learning approach that improves the performance of the VFL model with limited aligned samples.
5 code implementations • 27 Jul 2020 • Chaoyang He, Songze Li, Jinhyun So, Xiao Zeng, Mi Zhang, Hongyi Wang, Xiaoyang Wang, Praneeth Vepakomma, Abhishek Singh, Hang Qiu, Xinghua Zhu, Jianzong Wang, Li Shen, Peilin Zhao, Yan Kang, Yang Liu, Ramesh Raskar, Qiang Yang, Murali Annavaram, Salman Avestimehr
Federated learning (FL) is a rapidly growing research field in machine learning.
no code implementations • 24 Dec 2019 • Yang Liu, Yan Kang, Xinwei Zhang, Liping Li, Yong Cheng, Tianjian Chen, Mingyi Hong, Qiang Yang
We introduce a collaborative learning framework allowing multiple parties having different sets of attributes about the same user to jointly build models without exposing their raw data or model parameters.
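A minimal sketch of the split-model idea (illustrative, not the paper's protocol): each party encodes its own attribute slice of the same users, and only intermediate representations, never raw features or full model parameters, are combined to produce the joint prediction.

```python
import torch
import torch.nn as nn

class PartyEncoder(nn.Module):
    def __init__(self, in_dim, hid=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU())

    def forward(self, x):
        return self.net(x)  # only this embedding is shared, not raw data

party_a = PartyEncoder(in_dim=10)  # e.g., one party's user attributes
party_b = PartyEncoder(in_dim=6)   # e.g., another party's attributes
head = nn.Linear(32, 1)            # top model over concatenated embeddings

xa, xb = torch.randn(8, 10), torch.randn(8, 6)  # same 8 users, split features
logits = head(torch.cat([party_a(xa), party_b(xb)], dim=-1))
print(logits.shape)  # torch.Size([8, 1])
```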
no code implementations • 29 Oct 2019 • Shreya Sharma, Xing Chaoping, Yang Liu, Yan Kang
Federated Transfer Learning (FTL) was introduced in [1] to improve statistical models under a data federation that allows knowledge to be shared without compromising user privacy and enables complementary knowledge to be transferred in the network.
no code implementations • 8 Dec 2018 • Yang Liu, Yan Kang, Chaoping Xing, Tianjian Chen, Qiang Yang
A secure transfer cross validation approach is also proposed to guard the FTL performance under the federation.
no code implementations • 8 Nov 2017 • Ahmed Elgammal, Yan Kang, Milko Den Leeuw
We also propose and compare different classification methods at the drawing level.