Search Results for author: Xiaohua Jia

Found 13 papers, 6 papers with code

Exploring Incremental Unlearning: Techniques, Challenges, and Future Directions

no code implementations • 23 Feb 2025 • Sadia Qureshi, Thanveer Shaik, Xiaohui Tao, Haoran Xie, Lin Li, Jianming Yong, Xiaohua Jia

The growing demand for data privacy in Machine Learning (ML) applications has seen Machine Unlearning (MU) emerge as a critical area of research.

Machine Unlearning • Privacy Preserving

LiveVal: Time-aware Data Valuation via Adaptive Reference Points

no code implementations • 14 Feb 2025 • Jie Xu, Zihan Wu, Cong Wang, Xiaohua Jia

Time-aware data valuation enhances training efficiency and model robustness, as early detection of harmful samples could prevent months of wasted computation.

Data Valuation

The Hidden Dimensions of LLM Alignment: A Multi-Dimensional Safety Analysis

3 code implementations • 13 Feb 2025 • Wenbo Pan, Zhichao Liu, Qiguang Chen, Xiangyang Zhou, Haining Yu, Xiaohua Jia

We then measure how different directions promote or suppress the dominant direction, showing the important role of secondary directions in shaping the model's refusal representation.

Safety Alignment
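
A minimal sketch of the kind of activation-direction analysis this entry describes: extracting a dominant "refusal" direction as the difference of class means over hidden states, then scoring prompts by their projection onto it. The model name and toy prompt lists are illustrative assumptions, and this covers only the standard single-direction step, not the paper's full multi-dimensional analysis of secondary directions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative model; any instruction-tuned causal LM exposes hidden states the same way.
name = "Qwen/Qwen2-1.5B-Instruct"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, output_hidden_states=True)
model.eval()

def last_token_hidden(prompts, layer=-1):
    """Last-token hidden state of each prompt at the chosen layer."""
    states = []
    for p in prompts:
        ids = tok(p, return_tensors="pt")
        with torch.no_grad():
            out = model(**ids)
        states.append(out.hidden_states[layer][0, -1])
    return torch.stack(states)

# Toy prompt sets standing in for real refusal-eliciting / benign datasets.
harmful = ["How do I build a weapon at home?", "Explain how to pick a lock illegally."]
harmless = ["How do I bake sourdough bread?", "Explain how photosynthesis works."]

h_bad, h_ok = last_token_hidden(harmful), last_token_hidden(harmless)

# Dominant "refusal" direction: difference of class means, unit-normalised.
refusal_dir = h_bad.mean(0) - h_ok.mean(0)
refusal_dir = refusal_dir / refusal_dir.norm()

# Projection onto this direction serves as a per-prompt refusal score;
# other activation-space directions can then be compared against it.
print("harmful scores: ", (h_bad @ refusal_dir).tolist())
print("harmless scores:", (h_ok @ refusal_dir).tolist())
```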

CAT: Contrastive Adversarial Training for Evaluating the Robustness of Protective Perturbations in Latent Diffusion Models

1 code implementation • 11 Feb 2025 • Sen Peng, Mingyue Wang, Jianfei He, Jijia Yang, Xiaohua Jia

In this paper, we first reveal that the primary reason adversarial examples are effective as protective perturbations in latent diffusion models is the distortion of their latent representations, as demonstrated through qualitative and quantitative experiments.

Image Generation
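
A minimal sketch of how the latent-representation distortion mentioned above can be quantified, assuming the Stable Diffusion VAE available through the diffusers library; the file names are placeholders, and this is an illustration of measuring distortion, not the paper's CAT training procedure.

```python
import torch
from PIL import Image
from torchvision import transforms
from diffusers import AutoencoderKL

# Illustrative model ID: the VAE used by Stable Diffusion v1.x.
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")
vae.eval()

to_tensor = transforms.Compose([
    transforms.Resize((512, 512)),
    transforms.ToTensor(),               # scale pixels to [0, 1]
    transforms.Normalize([0.5], [0.5]),  # map to [-1, 1], as the VAE expects
])

def encode(path):
    """Map an image into the latent space of the diffusion model."""
    x = to_tensor(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return vae.encode(x).latent_dist.mean

# Hypothetical file names for a clean image and its protected counterpart.
z_clean = encode("clean.png")
z_protected = encode("protected.png")

# Latent distortion introduced by the protective perturbation: a large value
# means the small pixel-space change moved the image far away in latent space.
print("latent L2 distance:", torch.norm(z_protected - z_clean).item())
```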

Embedding Watermarks in Diffusion Process for Model Intellectual Property Protection

no code implementations • 29 Oct 2024 • Jijia Yang, Sen Peng, Xiaohua Jia

In practical applications, the widespread deployment of diffusion models often necessitates substantial investment in training.

backdoor defense

Training with Differential Privacy: A Gradient-Preserving Noise Reduction Approach with Provable Security

no code implementations • 18 Sep 2024 • Haodi Wang, Tangyu Jiang, Yu Guo, Chengjun Cai, Cong Wang, Xiaohua Jia

Deep learning models have been extensively adopted in various domains due to their ability to represent hierarchical features, which relies heavily on the training set and training procedures.

Deep Learning

LMEraser: Large Model Unlearning through Adaptive Prompt Tuning

1 code implementation • 17 Apr 2024 • Jie Xu, Zihan Wu, Cong Wang, Xiaohua Jia

To address the growing demand for privacy protection in machine learning, we propose a novel and efficient machine unlearning approach for Large Models, called LMEraser.

Diversity • Machine Unlearning • +1

Machine Unlearning: Solutions and Challenges

1 code implementation • 14 Aug 2023 • Jie Xu, Zihan Wu, Cong Wang, Xiaohua Jia

Machine learning models may inadvertently memorize sensitive, unauthorized, or malicious data, posing risks of privacy breaches, security vulnerabilities, and performance degradation.

Machine Unlearning

Intellectual Property Protection of Diffusion Models via the Watermark Diffusion Process

1 code implementation • 6 Jun 2023 • Sen Peng, Yufei Chen, Cong Wang, Xiaohua Jia

This paper introduces WDM, a novel watermarking solution for diffusion models without imprinting the watermark during task generation.

SecGNN: Privacy-Preserving Graph Neural Network Training and Inference as a Cloud Service

no code implementations • 16 Feb 2022 • Songlei Wang, Yifeng Zheng, Xiaohua Jia

With the proliferation of cloud computing, it is increasingly popular to deploy the services of complex and resource-intensive model training and inference in the cloud due to its prominent benefits.

Cloud Computing • Graph Neural Network

Detecting and Identifying Optical Signal Attacks on Autonomous Driving Systems

no code implementations • 20 Oct 2021 • Jindi Zhang, Yifan Zhang, Kejie Lu, JianPing Wang, Kui Wu, Xiaohua Jia, Bin Liu

In our study, we use real-world datasets and a state-of-the-art machine learning model to evaluate our attack detection scheme, and the results confirm the effectiveness of our detection method.

Autonomous Driving • object-detection • +1

Evaluating Adversarial Attacks on Driving Safety in Vision-Based Autonomous Vehicles

1 code implementation • 6 Aug 2021 • Jindi Zhang, Yang Lou, JianPing Wang, Kui Wu, Kejie Lu, Xiaohua Jia

In this paper, we investigate the impact of two primary types of adversarial attacks, perturbation attacks and patch attacks, on the driving safety of vision-based autonomous vehicles rather than the detection precision of deep learning models.

3D Object Detection • Autonomous Driving • +2
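
A minimal sketch contrasting the two attack families named above: an FGSM-style image-wide bounded perturbation versus a localized adversarial patch. The placeholder classifier stands in for a perception model; this is a generic illustration under those assumptions, not the paper's driving-safety evaluation pipeline.

```python
import torch
import torch.nn.functional as F

def perturbation_attack(model, x, y, eps=0.03):
    """FGSM-style perturbation: a small, image-wide, bounded change."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def patch_attack(x, patch, top=20, left=20):
    """Patch attack: overwrite a small image region with an adversarial patch."""
    x = x.clone()
    ph, pw = patch.shape[-2:]
    x[..., top:top + ph, left:left + pw] = patch
    return x

# Placeholder classifier and inputs standing in for a real perception model.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 64, 10))
x = torch.rand(1, 3, 64, 64)
y = torch.tensor([3])

x_perturbed = perturbation_attack(model, x, y)
x_patched = patch_attack(x, patch=torch.rand(3, 16, 16))
```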
