1 code implementation • 26 Feb 2025 • Haoyang Li, Li Bai, Qingqing Ye, Haibo Hu, Yaxin Xiao, Huadi Zheng, Jianliang Xu
Model Inversion (MI) attacks, which reconstruct the training dataset of neural networks, pose significant privacy concerns in machine learning.
no code implementations • 28 Jan 2025 • Zitong Li, Qingqing Ye, Haibo Hu
To decrease the execution time of such machine unlearning methods, we aim to reduce the number of data removal requests, based on the fundamental assumption that removing certain data would not yield a distinguishably different retrained model.
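The request-pruning idea can be illustrated with a toy sketch (assumptions: the "model" is just a sample mean, the influence of removing a point is its exact effect on that mean, and the threshold `eps` and function name are hypothetical, not the paper's algorithm):

```python
# Illustrative sketch only: prune data-removal requests whose effect on a
# toy model (the sample mean) is below a distinguishability threshold, so
# only impactful removals need to trigger retraining.

def prune_removal_requests(data, requests, eps=0.3):
    """Keep only removal requests that shift the mean by more than eps."""
    n = len(data)
    mean = sum(data) / n
    kept = []
    for idx in requests:
        # Removing data[idx] shifts the mean by (mean - x) / (n - 1).
        shift = abs(mean - data[idx]) / (n - 1)
        if shift > eps:
            kept.append(idx)
    return kept

data = [1.0, 1.1, 0.9, 1.05, 5.0]   # one clear outlier at index 4
requests = [0, 4]                   # removal requested for points 0 and 4
kept = prune_removal_requests(data, requests)  # near-mean point is pruned
```

For a real neural model the influence of a removal is estimated rather than computed exactly, but the filtering logic is the same: requests below the indistinguishability threshold are dropped.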
1 code implementation • 21 Jan 2025 • Jiacheng Zuo, Haibo Hu, Zikang Zhou, Yufei Cui, Ziquan Liu, JianPing Wang, Nan Guan, Jin Wang, Chun Jason Xue
RALAD features three primary designs, including (1) domain adaptation via an enhanced Optimal Transport (OT) method that accounts for both individual and grouped image distances, (2) a simple and unified framework that can be applied to various models, and (3) efficient fine-tuning techniques that freeze the computationally expensive layers while maintaining robustness.
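The optimal-transport component can be sketched with standard entropy-regularized Sinkhorn iterations (a hedged illustration: RALAD's enhanced OT with individual and grouped image distances is not reproduced here; this is only the textbook algorithm on a tiny cost matrix):

```python
import math

# Sinkhorn iterations: compute an entropy-regularized transport plan
# between two small feature distributions given a pairwise cost matrix.

def sinkhorn(cost, a, b, reg=0.1, iters=200):
    """Approximate transport plan with marginals a (rows) and b (columns)."""
    n, m = len(cost), len(cost[0])
    K = [[math.exp(-cost[i][j] / reg) for j in range(m)] for i in range(n)]
    u, v = [1.0] * n, [1.0] * m
    for _ in range(iters):
        u = [a[i] / sum(K[i][j] * v[j] for j in range(m)) for i in range(n)]
        v = [b[j] / sum(K[i][j] * u[i] for i in range(n)) for j in range(m)]
    return [[u[i] * K[i][j] * v[j] for j in range(m)] for i in range(n)]

cost = [[0.0, 1.0], [1.0, 0.0]]  # source/target feature distances
plan = sinkhorn(cost, [0.5, 0.5], [0.5, 0.5])
# mass concentrates on the cheap diagonal pairings
```

Aligning source and target feature distributions via such a plan is what lets a model trained in one domain adapt to another.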
no code implementations • 10 Jan 2025 • Jiale Zhang, Bosen Rao, Chengcheng Zhu, Xiaobing Sun, Qingming Li, Haibo Hu, Xiapu Luo, Qingqing Ye, Shouling Ji
By adopting the graph attention transfer method, GRAPHNAD can effectively align the intermediate-layer attention representations of the backdoored model with those of the teacher model, forcing the backdoor neurons to transform into benign ones.
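Attention-alignment distillation can be sketched as follows (an assumed simplification, not GRAPHNAD's exact loss: attention maps are plain vectors and the loss is mean squared error):

```python
# Sketch: penalize the MSE between the backdoored student's intermediate
# attention map and the clean teacher's, so gradient steps pull the
# student's attention toward benign patterns.

def attention_loss(student, teacher):
    n = len(student)
    return sum((s - t) ** 2 for s, t in zip(student, teacher)) / n

def align_step(student, teacher, lr=0.5):
    # Gradient of the mean squared error w.r.t. each entry is 2(s - t)/n.
    n = len(student)
    return [s - lr * 2 * (s - t) / n for s, t in zip(student, teacher)]

student = [0.9, 0.05, 0.05]  # attention spiked on a trigger node
teacher = [0.3, 0.4, 0.3]    # benign attention distribution
before = attention_loss(student, teacher)
student = align_step(student, teacher)
after = attention_loss(student, teacher)  # loss drops after one step
```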
1 code implementation • 7 Jan 2025 • Sen Zhang, Qingqing Ye, Haibo Hu
Existing methods tackle this issue by developing deep graph learning models with differential privacy (DP).
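The DP building block behind such methods is generic and can be shown in isolation (a minimal sketch of the standard Gaussian mechanism on a bounded graph query; the sensitivity bound and query here are illustrative, not any specific paper's training procedure):

```python
import math
import random

# Gaussian mechanism: add noise calibrated to (eps, delta) and the query's
# sensitivity, here applied to an average node degree.

def gaussian_mechanism(value, sensitivity, eps, delta, rng):
    sigma = sensitivity * math.sqrt(2 * math.log(1.25 / delta)) / eps
    return value + rng.gauss(0.0, sigma)

rng = random.Random(0)
degrees = [2, 3, 3, 4]
avg = sum(degrees) / len(degrees)
# assume each node changes the average by at most 1 -> sensitivity bound
noisy = gaussian_mechanism(avg, sensitivity=1.0, eps=1.0, delta=1e-5, rng=rng)
```

DP deep graph learning applies the same calibrated-noise principle inside training (e.g. to gradients) rather than to a single released statistic.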
no code implementations • 25 Dec 2024 • Zheyu Chen, Jinfeng Xu, Haibo Hu
The rapid expansion of multimedia content has led to the emergence of multimodal recommendation systems.
no code implementations • 17 Nov 2024 • Zikang Zhou, Hengjian Zhou, Haibo Hu, Zihao Wen, JianPing Wang, Yung-Hui Li, Yu-Kai Huang
Anticipating the multimodality of future events lays the foundation for safe autonomous driving.
1 code implementation • 16 Oct 2024 • Yanyun Wang, Li Liu, Zi Liang, Qingqing Ye, Haibo Hu
Accordingly, to ease the tension between clean and robust learning that stems from this overly strict assumption, we propose a new AT paradigm that introduces an additional dummy class for each original class, accommodating hard adversarial samples whose distribution shifts after perturbation.
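The dummy-class idea can be sketched at inference time (an assumed simplification of the paradigm: a K-class head gets 2K logits, where logit k+K is the "dummy" twin of class k, and dummy probabilities are folded back into their original classes when predicting):

```python
import math

K = 3  # number of original classes; the head outputs 2K logits

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def predict(logits_2k):
    probs = softmax(logits_2k)
    # fold each dummy class k+K back into its original class k
    merged = [probs[k] + probs[k + K] for k in range(K)]
    return max(range(K), key=lambda k: merged[k])

# adversarial sample: probability mass sits on the dummy twin of class 1
logits = [0.1, 0.2, 0.0, 0.0, 2.5, 0.1]
pred = predict(logits)  # still recovers class 1
```

During training the dummy classes absorb hard adversarial samples so the original classes can keep a clean decision boundary; the fold-back step makes this transparent at test time.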
2 code implementations • 4 Sep 2024 • Zi Liang, Qingqing Ye, Yanyun Wang, Sen Zhang, Yaxin Xiao, RongHua Li, Jianliang Xu, Haibo Hu
Model extraction attacks (MEAs) on large language models (LLMs) have received increasing attention in recent research.
1 code implementation • 5 Aug 2024 • Zi Liang, Haibo Hu, Qingqing Ye, Yaxin Xiao, Haoyang Li
In this paper, we analyze the underlying mechanism of prompt leakage, which we refer to as prompt memorization, and develop corresponding defending strategies.
1 code implementation • 24 Jun 2024 • Ziguang Li, Chao Huang, Xuliang Wang, Haibo Hu, Cole Wyeth, Dongbo Bu, Quan Yu, Wen Gao, Xingwu Liu, Ming Li
The better a large model understands the data, the better LMCompress compresses it.
no code implementations • 20 Jun 2024 • Peijia Guo, Ziguang Li, Haibo Hu, Chao Huang, Ming Li, Rui Zhang
We conceptualize the process of understanding as information compression, and propose a method for ranking large language models (LLMs) based on lossless data compression.
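The ranking principle can be illustrated with ordinary lossless compressors standing in for LLMs (a hedged proxy: `zlib` at different levels plays the role of models of different quality; the paper's method drives compression with large models, not zlib):

```python
import zlib

# Rank "models" (here: zlib compression levels) by how small they make the
# same text; smaller compressed size = better "understanding" under this proxy.

def compressed_size(text, level):
    return len(zlib.compress(text.encode("utf-8"), level))

text = "the quick brown fox jumps over the lazy dog. " * 50
levels = [1, 6, 9]
ranking = sorted(levels, key=lambda lvl: compressed_size(text, lvl))
# a stronger compressor never does worse than a weaker one on this text
```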
no code implementations • 27 May 2024 • Zikang Zhou, Haibo Hu, Xinhong Chen, JianPing Wang, Nan Guan, Kui Wu, Yung-Hui Li, Yu-Kai Huang, Chun Jason Xue
Simulating realistic behaviors of traffic agents is pivotal for efficiently validating the safety of autonomous driving systems.
no code implementations • 25 Mar 2024 • Ziheng Deng, Hua Chen, Yongzheng Zhou, Haibo Hu, Zhiyong Xu, Jiayuan Sun, Tianling Lyu, Yan Xi, Yang Chen, Jun Zhao
We find that streak artifacts exhibit a unique rotational motion along with the patient's respiration, distinguishable from diaphragm-driven respiratory motion in the spatiotemporal domain.
2 code implementations • 23 Nov 2023 • Jie Fu, Qingqing Ye, Haibo Hu, Zhili Chen, Lulu Wang, Kuncan Wang, Xun Ran
Motivated by this observation, this paper proposes DPSUR, a Differentially Private training framework based on Selective Updates and Release, in which the gradient from each iteration is evaluated with a validation test and only those updates leading to convergence are applied to the model.
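The selective-update loop can be sketched on a toy problem (inspired by, not identical to, DPSUR: the "model" is a single weight on a 1-D quadratic loss, the DP gradient is simulated with Gaussian noise, and the validation test is a direct loss comparison):

```python
import random

# Toy selective update-and-release: each noisy gradient step is checked
# against a validation loss, and the update is applied only if it helps.

def loss(w):
    return (w - 2.0) ** 2  # stands in for the validation loss

def train(steps=50, lr=0.3, noise=1.0, seed=0):
    rng = random.Random(seed)
    w = 10.0
    for _ in range(steps):
        grad = 2 * (w - 2.0) + rng.gauss(0.0, noise)  # noisy (DP-style) gradient
        candidate = w - lr * grad
        if loss(candidate) < loss(w):  # selective release: apply only if it helps
            w = candidate
    return w

w = train()  # converges near the optimum despite noisy gradients
```

Rejecting harmful noisy updates is what lets the framework spend its privacy budget only on steps that actually move the model toward convergence.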
2 code implementations • 14 Sep 2022 • Yanyun Wang, Dehui Du, Haibo Hu, Zi Liang, YuanHao Liu
Recent years have witnessed the success of recurrent neural network (RNN) models in time series classification (TSC).
no code implementations • 4 Oct 2021 • Ying Qin, Wei Liu, Zhiyuan Peng, Si-Ioi Ng, Jingyu Li, Haibo Hu, Tan Lee
Inputs to these classifiers are speech transcripts produced by automatic speech recognition (ASR) models.