Search Results for author: Zikai Zhang

Found 12 papers, 5 papers with code

Fed-HeLLo: Efficient Federated Foundation Model Fine-Tuning with Heterogeneous LoRA Allocation

1 code implementation • 13 Jun 2025 • Zikai Zhang, Ping Liu, Jiahao Xu, Rui Hu

By co-designing the proposed HLA strategies, we incorporate both dynamic and intrinsic layer importance into the allocation design.

Federated Learning

FlowerTune: A Cross-Domain Benchmark for Federated Fine-Tuning of Large Language Models

no code implementations • 3 Jun 2025 • Yan Gao, Massimo Roberto Scamarcia, Javier Fernandez-Marques, Mohammad Naseri, Chong Shen Ng, Dimitris Stripelis, Zexi Li, Tao Shen, Jiamu Bai, Daoyuan Chen, Zikai Zhang, Rui Hu, InSeo Song, Lee KangYoon, Hong Jia, Ting Dang, Junyan Wang, Zheyuan Liu, Daniel Janes Beutel, Lingjuan Lyu, Nicholas D. Lane

Large Language Models (LLMs) have achieved state-of-the-art results across diverse domains, yet their development remains reliant on vast amounts of publicly available data, raising concerns about data scarcity and the lack of access to domain-specific, sensitive information.

Benchmarking • Domain Adaptation +2

Traceable Black-box Watermarks for Federated Learning

no code implementations • 19 May 2025 • Jiahao Xu, Rui Hu, Olivera Kotevska, Zikai Zhang

To address this problem, we propose a novel server-side watermarking method, $\mathbf{TraMark}$, which creates a traceable watermarked model for each client, enabling verification of model leakage in black-box settings.

Federated Learning

ROMA: a Read-Only-Memory-based Accelerator for QLoRA-based On-Device LLM

no code implementations • 17 Mar 2025 • WenQiang Wang, Yijia Zhang, Zikai Zhang, Guanting Huo, Hao Liang, Shijie Cao, Ningyi Xu

In this work, we propose ROMA, a QLoRA accelerator with a hybrid storage architecture that uses ROM for quantized base models and SRAM for LoRA weights and KV cache.
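The storage split described in the abstract can be illustrated with a short sketch (illustrative only; ROMA's actual datapath and all names below are assumptions): in QLoRA-style inference, the quantized base weights are never written after deployment and are therefore ROM-friendly, while the small low-rank LoRA factors are the only mutable state.

```python
import numpy as np

# Minimal QLoRA-style forward pass illustrating the storage split:
# the quantized base weight is frozen (read-only, ROM-friendly),
# while the small LoRA factors A and B are the only mutable state (SRAM).
rng = np.random.default_rng(0)
d, r = 8, 2  # hidden size and LoRA rank (illustrative values)

W_q = rng.integers(-8, 8, size=(d, d)).astype(np.int8)  # frozen int8 base
scale = 0.05                                            # dequantization scale
A = rng.standard_normal((r, d)) * 0.01                  # trainable LoRA factor
B = np.zeros((d, r))                                    # trainable LoRA factor

def forward(x):
    base = (W_q.astype(np.float32) * scale) @ x  # read-only path
    return base + B @ (A @ x)                    # small mutable path

x = rng.standard_normal(d)
# With B initialized to zero, the LoRA path contributes nothing yet:
assert np.allclose(forward(x), (W_q.astype(np.float32) * scale) @ x)
```

Only `A` and `B` (rank × hidden each) ever need writable memory; the d × d base matrix stays read-only, which is the property the hybrid ROM/SRAM design exploits.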

Detecting Backdoor Attacks in Federated Learning via Direction Alignment Inspection

1 code implementation • CVPR 2025 • Jiahao Xu, Zikai Zhang, Rui Hu

The distributed nature of training makes Federated Learning (FL) vulnerable to backdoor attacks, where malicious model updates aim to compromise the global model's performance on specific tasks.

Federated Learning
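The direction-alignment idea can be illustrated with a minimal sketch (a generic illustration, not the paper's exact algorithm): flatten each client update, measure its cosine similarity to the mean update direction, and flag updates that point away from the consensus.

```python
import numpy as np

def flag_misaligned_updates(updates, threshold=0.0):
    """Flag client updates whose direction disagrees with the mean update.

    updates: list of 1-D numpy arrays (flattened model updates).
    Returns indices of suspicious clients.
    """
    mean_update = np.mean(updates, axis=0)
    mean_norm = np.linalg.norm(mean_update)
    suspicious = []
    for i, u in enumerate(updates):
        cos = float(u @ mean_update) / (np.linalg.norm(u) * mean_norm + 1e-12)
        if cos < threshold:  # pointing away from the consensus direction
            suspicious.append(i)
    return suspicious

# Three benign clients move in roughly the same direction; one is reversed,
# as a backdoored update that fights the benign objective might be.
benign = [np.array([1.0, 1.0, 0.9]), np.array([0.9, 1.1, 1.0]),
          np.array([1.1, 0.9, 1.0])]
malicious = [np.array([-1.0, -1.0, -1.0])]
print(flag_misaligned_updates(benign + malicious))  # -> [3]
```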

Identify Backdoored Model in Federated Learning via Individual Unlearning

1 code implementation • 1 Nov 2024 • Jiahao Xu, Zikai Zhang, Rui Hu

Inspired by this, we propose MASA, a method that utilizes individual unlearning on local models to identify malicious models in FL.

Anomaly Detection • Federated Learning +1

Fed-pilot: Optimizing LoRA Allocation for Efficient Federated Fine-Tuning with Heterogeneous Clients

no code implementations • 14 Oct 2024 • Zikai Zhang, Rui Hu, Ping Liu, Jiahao Xu

Federated Learning enables the fine-tuning of foundation models (FMs) across distributed clients for specific tasks; however, its scalability is limited by the heterogeneity of client memory capacities.

Federated Learning
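The memory-aware allocation problem the abstract describes can be sketched as a simple budgeted selection (a hypothetical greedy heuristic for illustration; the paper's actual optimization and the function names below are assumptions): given a per-layer memory cost and an importance score, each client attaches LoRA adapters only to the layers it can afford.

```python
def allocate_lora_layers(layer_costs, layer_scores, budget):
    """Greedy sketch: pick layers with the best importance-per-byte
    until the client's memory budget is exhausted.

    layer_costs:  memory cost of attaching LoRA to each layer.
    layer_scores: estimated importance of each layer.
    budget:       this client's memory capacity.
    Returns sorted indices of layers that receive LoRA adapters.
    """
    order = sorted(range(len(layer_costs)),
                   key=lambda i: layer_scores[i] / layer_costs[i],
                   reverse=True)
    chosen, used = [], 0
    for i in order:
        if used + layer_costs[i] <= budget:
            chosen.append(i)
            used += layer_costs[i]
    return sorted(chosen)

costs = [4, 4, 2, 2]                 # e.g. MB of adapter state per layer
scores = [1.0, 3.0, 2.0, 0.5]        # estimated layer importance
print(allocate_lora_layers(costs, scores, budget=6))  # -> [1, 2]
```

A client with a larger budget would simply receive more layers from the same ranking, which is how heterogeneous capacities lead to heterogeneous allocations.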

Federated Learning for Smart Grid: A Survey on Applications and Potential Vulnerabilities

no code implementations • 16 Sep 2024 • Zikai Zhang, Suman Rath, Jiahao Xu, Tingsong Xiao

Unlike traditional surveys addressing security issues in centralized machine learning methods for SG systems, this survey is the first to specifically examine the applications and security concerns unique to FL-based SG systems.

Federated Learning • Survey

Achieving Byzantine-Resilient Federated Learning via Layer-Adaptive Sparsified Model Aggregation

1 code implementation • 2 Sep 2024 • Jiahao Xu, Zikai Zhang, Rui Hu

To address these challenges, we propose the Layer-Adaptive Sparsified Model Aggregation (LASA) approach, which combines pre-aggregation sparsification with layer-wise adaptive aggregation to improve robustness.

Federated Learning
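The two-stage idea — sparsify each update before aggregating, then aggregate layer by layer with a robustness filter — can be sketched as follows (a loose illustration, not the exact LASA algorithm; the median-norm filtering rule below is an assumption):

```python
import numpy as np

def sparsify_top_k(v, k):
    """Keep only the k largest-magnitude entries of a flattened update."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def lasa_like_aggregate(client_layers, k, dev_factor=2.0):
    """Per layer: sparsify each client's update, then average only the
    clients whose sparsified layer norm stays close to the median norm."""
    aggregated = []
    for layer_updates in client_layers:  # one list of client updates per layer
        sparse = [sparsify_top_k(u, k) for u in layer_updates]
        norms = np.array([np.linalg.norm(s) for s in sparse])
        med = np.median(norms)
        keep = [s for s, n in zip(sparse, norms) if n <= dev_factor * med]
        aggregated.append(np.mean(keep, axis=0))
    return aggregated

# One layer, three clients; the third sends an abnormally large update.
layer = [np.array([1.0, 0.1, 0.0]), np.array([0.9, 0.0, 0.2]),
         np.array([100.0, 100.0, 100.0])]
agg = lasa_like_aggregate([layer], k=2)[0]
print(agg)
```

Filtering per layer rather than on the whole flattened model is the point: an attacker who hides a large perturbation in a single layer still stands out against that layer's median norm.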

Byzantine-Robust Federated Learning with Variance Reduction and Differential Privacy

no code implementations • 7 Sep 2023 • Zikai Zhang, Rui Hu

Federated learning (FL) is designed to preserve data privacy during model training, where the data remains on the client side (i.e., IoT devices), and only model updates of clients are shared iteratively for collaborative learning.

Federated Learning
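The paradigm described above — data stays local, only updates travel — can be sketched in a few lines (a plain FedAvg round for illustration, not this paper's variance-reduction or differential-privacy mechanism):

```python
import numpy as np

def local_update(w, X, y, lr=0.1, steps=20):
    """One client's local training on private data (simple linear
    regression via gradient descent); only the resulting weights leave
    the device, never X or y."""
    w = w.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])
w_global = np.zeros(2)

for _ in range(10):          # federated rounds
    local_models = []
    for _ in range(3):       # three clients, each with private data
        X = rng.standard_normal((32, 2))
        y = X @ true_w
        local_models.append(local_update(w_global, X, y))
    w_global = np.mean(local_models, axis=0)  # server averages the models

assert np.allclose(w_global, true_w, atol=1e-2)
```

Byzantine-robust variants like this paper's replace the plain mean on the server with an aggregation rule that tolerates corrupted client updates; DP variants additionally perturb what each client sends.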

Distractor-Aware Fast Tracking via Dynamic Convolutions and MOT Philosophy

1 code implementation • CVPR 2021 • Zikai Zhang, Bineng Zhong, Shengping Zhang, Zhenjun Tang, Xin Liu, Zhaoxiang Zhang

A practical long-term tracker typically contains three key properties, i.e., an efficient model design, an effective global re-detection strategy and a robust distractor awareness mechanism.

Multiple Object Tracking • Philosophy
