no code implementations • 23 Feb 2025 • Sadia Qureshi, Thanveer Shaik, Xiaohui Tao, Haoran Xie, Lin Li, Jianming Yong, Xiaohua Jia
The growing demand for data privacy in Machine Learning (ML) applications has seen Machine Unlearning (MU) emerge as a critical area of research.
no code implementations • 14 Feb 2025 • Jie Xu, Zihan Wu, Cong Wang, Xiaohua Jia
Time-aware data valuation enhances training efficiency and model robustness, as early detection of harmful samples could prevent months of wasted computation.
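The abstract does not spell out the valuation algorithm; as a rough, illustrative sketch of the early-detection idea only (not the paper's method), one could monitor per-sample loss trajectories over the first few epochs and flag outliers before committing to a full training run. All names below are hypothetical placeholders.

```python
import numpy as np

# Illustrative-only sketch (not the paper's method): flag potentially harmful
# training samples early by inspecting per-sample loss trajectories over the
# first few epochs instead of waiting for full training to finish.
def flag_suspicious_samples(loss_history: np.ndarray, top_frac: float = 0.05):
    """loss_history: (n_epochs, n_samples) per-sample training losses.

    Returns indices of samples whose losses stay highest on average over the
    observed epochs - a crude proxy for mislabeled or harmful data.
    """
    mean_loss = loss_history.mean(axis=0)
    k = max(1, int(top_frac * loss_history.shape[1]))
    return np.argsort(mean_loss)[-k:]

# Toy example: 3 observed epochs, 1000 samples, 10 injected high-loss samples.
rng = np.random.default_rng(1)
losses = rng.normal(loc=0.5, scale=0.1, size=(3, 1000))
losses[:, :10] += 2.0  # pretend these are harmful/mislabeled
print(flag_suspicious_samples(losses, top_frac=0.01))
```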
3 code implementations • 13 Feb 2025 • Wenbo Pan, Zhichao Liu, Qiguang Chen, Xiangyang Zhou, Haining Yu, Xiaohua Jia
We then measure how different directions promote or suppress the dominant direction, showing the important role of secondary directions in shaping the model's refusal representation.
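As a minimal sketch of this kind of analysis (not the authors' code), assuming hidden-state activations have already been collected for harmful and harmless prompts, one can extract a dominant refusal direction as a difference of means and measure how candidate secondary directions align with or oppose it. The array names and dimensions below are illustrative stand-ins.

```python
import numpy as np

# Toy stand-ins for hidden-state activations collected from an LLM on
# harmful vs. harmless prompts (n_prompts x hidden_dim). In practice these
# would come from forward hooks on a real model.
rng = np.random.default_rng(0)
harmful_acts = rng.normal(size=(128, 4096))
harmless_acts = rng.normal(size=(128, 4096))

# Dominant "refusal" direction as a difference of mean activations,
# a common way to extract a single steering/refusal direction.
refusal_dir = harmful_acts.mean(axis=0) - harmless_acts.mean(axis=0)
refusal_dir /= np.linalg.norm(refusal_dir)

# Candidate secondary directions, e.g. top principal components of the
# harmful-prompt activations.
centered = harmful_acts - harmful_acts.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
secondary_dirs = vt[:10]  # top-10 components (rows are unit-norm)

# How strongly each secondary direction aligns with (promotes) or
# opposes (suppresses) the dominant refusal direction.
alignment = secondary_dirs @ refusal_dir
for i, a in enumerate(alignment):
    print(f"component {i}: cosine with refusal direction = {a:+.3f}")
```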
1 code implementation • 11 Feb 2025 • Sen Peng, Mingyue Wang, Jianfei He, Jijia Yang, Xiaohua Jia
In this paper, we first reveal that the primary reason adversarial examples are effective as protective perturbations in latent diffusion models is the distortion of their latent representations, as demonstrated through qualitative and quantitative experiments.
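A hedged sketch of how such latent distortion could be quantified, assuming the `diffusers` VAE API and the `stabilityai/sd-vae-ft-mse` checkpoint; the exact measurement used in the paper may differ.

```python
import torch
from diffusers import AutoencoderKL

# Rough sketch (not the paper's code): quantify how much a protective
# perturbation shifts an image's latent representation in a latent
# diffusion model. The VAE checkpoint name is an assumption.
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")
vae.eval()

def latent_distortion(clean: torch.Tensor, protected: torch.Tensor) -> float:
    """L2 distance between VAE latents of a clean image and its protected copy.

    Both inputs are expected as (1, 3, H, W) tensors scaled to [-1, 1].
    """
    with torch.no_grad():
        z_clean = vae.encode(clean).latent_dist.mean
        z_protected = vae.encode(protected).latent_dist.mean
    return torch.linalg.vector_norm(z_clean - z_protected).item()

# Placeholder tensors standing in for a real image and its perturbed version.
clean = torch.rand(1, 3, 512, 512) * 2 - 1
protected = (clean + 0.03 * torch.randn_like(clean)).clamp(-1, 1)
print("latent distortion:", latent_distortion(clean, protected))
```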
no code implementations • 25 Dec 2024 • Sen Peng, Jijia Yang, Mingyue Wang, Jianfei He, Xiaohua Jia
Diffusion-based text-to-image models have shown immense potential for various image-related tasks.
no code implementations • 29 Oct 2024 • Jijia Yang, Sen Peng, Xiaohua Jia
In practical applications, the widespread deployment of diffusion models often necessitates substantial investment in training.
no code implementations • 18 Sep 2024 • Haodi Wang, Tangyu Jiang, Yu Guo, Chengjun Cai, Cong Wang, Xiaohua Jia
Deep learning models have been extensively adopted in various domains due to their ability to represent hierarchical features, which rely heavily on the training set and procedures.
1 code implementation • 17 Apr 2024 • Jie Xu, Zihan Wu, Cong Wang, Xiaohua Jia
To address the growing demand for privacy protection in machine learning, we propose a novel and efficient machine unlearning approach for Large Models, called LMEraser.
1 code implementation • 14 Aug 2023 • Jie Xu, Zihan Wu, Cong Wang, Xiaohua Jia
Machine learning models may inadvertently memorize sensitive, unauthorized, or malicious data, posing risks of privacy breaches, security vulnerabilities, and performance degradation.
1 code implementation • 6 Jun 2023 • Sen Peng, Yufei Chen, Cong Wang, Xiaohua Jia
This paper introduces WDM, a novel watermarking solution for diffusion models without imprinting the watermark during task generation.
no code implementations • 16 Feb 2022 • Songlei Wang, Yifeng Zheng, Xiaohua Jia
With the proliferation of cloud computing, it has become increasingly popular to deploy complex, resource-intensive model training and inference services in the cloud because of its prominent benefits.
no code implementations • 20 Oct 2021 • Jindi Zhang, Yifan Zhang, Kejie Lu, JianPing Wang, Kui Wu, Xiaohua Jia, Bin Liu
In our study, we use real-world datasets and a state-of-the-art machine learning model to evaluate our attack detection scheme, and the results confirm the effectiveness of our detection method.
1 code implementation • 6 Aug 2021 • Jindi Zhang, Yang Lou, JianPing Wang, Kui Wu, Kejie Lu, Xiaohua Jia
In this paper, we investigate the impact of two primary types of adversarial attacks, perturbation attacks and patch attacks, on the driving safety of vision-based autonomous vehicles rather than the detection precision of deep learning models.
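As a generic illustration of the perturbation-attack category only (not the paper's evaluation pipeline, which targets vision-based driving perception), an FGSM-style attack against a placeholder image classifier might look like the following sketch.

```python
import torch
import torch.nn.functional as F
import torchvision

# Generic FGSM-style perturbation attack, shown only to illustrate the
# "perturbation attack" category; the paper evaluates such attacks against
# vision-based perception in autonomous driving, not this toy classifier.
model = torchvision.models.resnet18(weights=None)  # untrained stand-in model
model.eval()

def fgsm_perturb(image: torch.Tensor, label: torch.Tensor, eps: float = 8 / 255):
    """Add an epsilon-bounded perturbation in the direction of the loss gradient."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adv = image + eps * image.grad.sign()
    return adv.clamp(0, 1).detach()

x = torch.rand(1, 3, 224, 224)  # placeholder input image
y = torch.tensor([0])           # placeholder ground-truth label
x_adv = fgsm_perturb(x, y)
print("max pixel change:", (x_adv - x).abs().max().item())
```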