1 code implementation • 13 Jun 2025 • Zikai Zhang, Ping Liu, Jiahao Xu, Rui Hu
By co-designing the proposed strategies, we incorporate both dynamic and intrinsic layer importance into our HLA design.
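A minimal sketch of what blending dynamic and intrinsic layer importance could look like; the scoring signals (per-layer gradient norms, a fixed architectural prior) and the convex weighting are assumptions for illustration, not the paper's actual HLA formulation:

```python
# Hypothetical blend of dynamic and intrinsic layer importance.
import numpy as np

def layer_scores(grad_norms, intrinsic_prior, alpha=0.5):
    """Combine dynamic importance (per-layer gradient norms, changing each
    round) with intrinsic importance (a fixed prior over layers)."""
    dynamic = grad_norms / grad_norms.sum()           # normalize to a distribution
    intrinsic = intrinsic_prior / intrinsic_prior.sum()
    return alpha * dynamic + (1 - alpha) * intrinsic  # convex combination

# Example: 4 layers; later layers assumed intrinsically more important.
grads = np.array([0.8, 1.2, 0.3, 2.1])
prior = np.array([1.0, 1.0, 2.0, 2.0])
print(layer_scores(grads, prior))  # per-layer importance scores
```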
no code implementations • 3 Jun 2025 • Yan Gao, Massimo Roberto Scamarcia, Javier Fernandez-Marques, Mohammad Naseri, Chong Shen Ng, Dimitris Stripelis, Zexi Li, Tao Shen, Jiamu Bai, Daoyuan Chen, Zikai Zhang, Rui Hu, InSeo Song, Lee KangYoon, Hong Jia, Ting Dang, Junyan Wang, Zheyuan Liu, Daniel Janes Beutel, Lingjuan Lyu, Nicholas D. Lane
Large Language Models (LLMs) have achieved state-of-the-art results across diverse domains, yet their development remains reliant on vast amounts of publicly available data, raising concerns about data scarcity and the lack of access to domain-specific, sensitive information.
no code implementations • 19 May 2025 • Jiahao Xu, Rui Hu, Olivera Kotevska, Zikai Zhang
Based on the problem, we propose a novel server-side watermarking method, $\mathbf{TraMark}$, which creates a traceable watermarked model for each client, enabling verification of model leakage in black-box settings.
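For intuition, a toy sketch of per-client weight watermarking for leakage tracing: a client-specific key selects a secret weight subset and perturbs its signs. The embedding scheme is an assumption for illustration, and the sign check here is white-box, whereas TraMark targets black-box verification:

```python
# Toy per-client watermark: embed a keyed +/-1 signature into weights,
# then test a suspect model against each client's key.
import numpy as np

def keyed_signature(client_id, size, n_bits=64):
    key = np.random.default_rng(client_id)            # per-client secret key
    idx = key.choice(size, n_bits, replace=False)
    bits = key.integers(0, 2, n_bits) * 2 - 1         # +/-1 signature
    return idx, bits

def embed_watermark(weights, client_id, eps=1e-3):
    idx, bits = keyed_signature(client_id, weights.size)
    wm = weights.copy()
    wm[idx] += eps * bits                             # nudge signs toward the signature
    return wm

def verify_watermark(weights, client_id):
    idx, bits = keyed_signature(client_id, weights.size)
    return np.mean(np.sign(weights[idx]) == bits)     # signature match rate

rng = np.random.default_rng(0)
w = rng.normal(size=1000) * 1e-4       # small weights, so the nudge decides the sign
leaked = embed_watermark(w, client_id=7)
print(verify_watermark(leaked, client_id=7))  # ~1.0 -> traced to client 7
print(verify_watermark(leaked, client_id=3))  # ~0.5 -> not this client
```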
no code implementations • 17 Mar 2025 • WenQiang Wang, Yijia Zhang, Zikai Zhang, Guanting Huo, Hao Liang, Shijie Cao, Ningyi Xu
In this work, we propose ROMA, a QLoRA accelerator with a hybrid storage architecture that uses ROM for quantized base models and SRAM for LoRA weights and KV cache.
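A software analogue of the storage split ROMA's abstract describes: the quantized base model is frozen and read-only (standing in for ROM), while LoRA factors and the KV cache remain mutable (standing in for SRAM). Class and field names are illustrative assumptions, not the accelerator's interface:

```python
# Illustrative hybrid storage split: read-only quantized base + mutable LoRA/KV.
import numpy as np

class HybridStore:
    def __init__(self, base_fp32, rank=8):
        # Quantize base weights to int4-range codes and freeze them (ROM analogue).
        self.scale = np.abs(base_fp32).max() / 7
        self.base_q = np.clip(np.round(base_fp32 / self.scale), -8, 7).astype(np.int8)
        self.base_q.setflags(write=False)             # read-only, like ROM
        d_out, d_in = base_fp32.shape
        # Mutable LoRA factors and KV cache (SRAM analogue).
        self.A = np.zeros((rank, d_in), dtype=np.float32)
        self.B = np.zeros((d_out, rank), dtype=np.float32)
        self.kv_cache = []

    def forward(self, x):
        base = (self.base_q.astype(np.float32) * self.scale) @ x
        return base + self.B @ (self.A @ x)           # dequantized base + low-rank update

store = HybridStore(np.random.randn(16, 16).astype(np.float32))
print(store.forward(np.random.randn(16).astype(np.float32)).shape)
```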
1 code implementation • CVPR 2025 • Jiahao Xu, Zikai Zhang, Rui Hu
The distributed nature of training makes Federated Learning (FL) vulnerable to backdoor attacks, where malicious model updates aim to compromise the global model's performance on specific tasks.
1 code implementation • 1 Nov 2024 • Jiahao Xu, Zikai Zhang, Rui Hu
Inspired by this, we propose MASA, a method that utilizes individual unlearning on local models to identify malicious models in FL.
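A hedged sketch in the spirit of unlearning-based screening: each local model receives a few gradient-ascent ("unlearning") steps on a small reference dataset, and models whose loss trajectories deviate from the majority are flagged. The reference data, score, and outlier rule are assumptions, not MASA's exact procedure:

```python
# Toy unlearning-based screening of local models in FL.
import numpy as np

def unlearning_score(w, X, y, steps=5, lr=0.01):
    """Run gradient ASCENT on squared loss and return the loss increase;
    differently-trained (e.g., backdoored) models tend to react differently."""
    def loss(w):
        return np.mean((X @ w - y) ** 2)
    start = loss(w)
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w + lr * grad                  # ascend, i.e., unlearn
    return loss(w) - start

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 10))
y = X @ rng.normal(size=10)
local_models = [rng.normal(size=10) for _ in range(8)]
scores = np.array([unlearning_score(w.copy(), X, y) for w in local_models])
flags = scores > scores.mean() + 2 * scores.std()   # simple outlier rule
print(scores.round(2), flags)
```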
no code implementations • 14 Oct 2024 • Zikai Zhang, Rui Hu, Ping Liu, Jiahao Xu
Federated Learning enables the fine-tuning of foundation models (FMs) across distributed clients for specific tasks; however, its scalability is limited by the heterogeneity of client memory capacities.
no code implementations • 16 Sep 2024 • Zikai Zhang, Suman Rath, Jiahao Xu, Tingsong Xiao
Unlike traditional surveys addressing security issues in centralized machine learning methods for SG systems, this survey is the first to specifically examine the applications and security concerns unique to FL-based SG systems.
1 code implementation • 2 Sep 2024 • Jiahao Xu, Zikai Zhang, Rui Hu
To address these challenges, we propose the Layer-Adaptive Sparsified Model Aggregation (LASA) approach, which combines pre-aggregation sparsification with layer-wise adaptive aggregation to improve robustness.
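A minimal sketch of the two ingredients the abstract names, with stand-in choices: top-k magnitude sparsification before aggregation, and a per-layer median as the robust layer-wise rule (the paper's adaptive rule differs):

```python
# Sketch: sparsify client updates, then aggregate robustly layer by layer.
import numpy as np

def topk_sparsify(update, k):
    """Keep only the k largest-magnitude entries of a layer update."""
    out = np.zeros_like(update)
    idx = np.argsort(np.abs(update))[-k:]
    out[idx] = update[idx]
    return out

def lasa_style_aggregate(client_updates, k=4):
    """client_updates: list of dicts {layer_name: 1-D update array}."""
    agg = {}
    for name in client_updates[0]:
        sparsified = np.stack([topk_sparsify(u[name], k) for u in client_updates])
        agg[name] = np.median(sparsified, axis=0)   # robust per-layer aggregation
    return agg

rng = np.random.default_rng(0)
updates = [{"conv": rng.normal(size=10), "fc": rng.normal(size=10)} for _ in range(5)]
print({n: v.round(2) for n, v in lasa_style_aggregate(updates).items()})
```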
no code implementations • 7 Sep 2023 • Zikai Zhang, Rui Hu
Federated learning (FL) is designed to preserve data privacy during model training, where the data remains on the client side (i.e., IoT devices), and only model updates of clients are shared iteratively for collaborative learning.
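The protocol this describes is FedAvg-style training; a minimal toy round, with linear-regression clients standing in for IoT devices, shows that raw data never leaves the clients and only weights are exchanged:

```python
# Minimal FedAvg-style rounds: clients train locally, server averages updates.
import numpy as np

def local_update(w_global, X, y, lr=0.1, epochs=5):
    """One client's local training; only the resulting weights are shared."""
    w = w_global.copy()
    for _ in range(epochs):
        w -= lr * 2 * X.T @ (X @ w - y) / len(y)
    return w

rng = np.random.default_rng(0)
true_w = rng.normal(size=5)
clients = []
for _ in range(4):                         # each client keeps its own data
    X = rng.normal(size=(32, 5))
    clients.append((X, X @ true_w))

w = np.zeros(5)
for rnd in range(10):                      # iterative communication rounds
    local_ws = [local_update(w, X, y) for X, y in clients]
    w = np.mean(local_ws, axis=0)          # server averages model updates
print(np.linalg.norm(w - true_w))          # global model approaches true_w
```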
1 code implementation • CVPR 2021 • Zikai Zhang, Bineng Zhong, Shengping Zhang, Zhenjun Tang, Xin Liu, Zhaoxiang Zhang
A practical long-term tracker typically contains three key properties, i.e., an efficient model design, an effective global re-detection strategy and a robust distractor awareness mechanism.
no code implementations • 27 May 2019 • Zikai Zhang, Yidong Li, Hairong Dong, Yizhe You, Fengping Zhao
Short-term temporal dependency is captured with an LSTM.
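For reference, a toy LSTM cell in NumPy illustrating how the gates carry recent context across a short sequence; the paper's actual network and dimensions are not specified in this excerpt:

```python
# Toy LSTM cell: gates decide what to forget, write, and expose each step.
import numpy as np

def lstm_cell(x, h, c, W, U, b):
    z = W @ x + U @ h + b
    i, f, g, o = np.split(z, 4)
    i, f, o = 1/(1+np.exp(-i)), 1/(1+np.exp(-f)), 1/(1+np.exp(-o))  # sigmoid gates
    c = f * c + i * np.tanh(g)        # cell state carries recent context
    h = o * np.tanh(c)                # hidden state summarizes it
    return h, c

rng = np.random.default_rng(0)
d_in, d_h = 3, 8
W = rng.normal(size=(4*d_h, d_in))
U = rng.normal(size=(4*d_h, d_h))
b = np.zeros(4*d_h)
h, c = np.zeros(d_h), np.zeros(d_h)
for t in range(6):                    # short sequence of inputs
    h, c = lstm_cell(rng.normal(size=d_in), h, c, W, U, b)
print(h.round(2))
```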