Search Results for author: Jinhao Duan

Found 16 papers, 7 papers with code

Decoding Compressed Trust: Scrutinizing the Trustworthiness of Efficient LLMs Under Compression

no code implementations18 Mar 2024 Junyuan Hong, Jinhao Duan, Chenhui Zhang, Zhangheng Li, Chulin Xie, Kelsey Lieberman, James Diffenderfer, Brian Bartoldson, Ajay Jaiswal, Kaidi Xu, Bhavya Kailkhura, Dan Hendrycks, Dawn Song, Zhangyang Wang, Bo Li

While state-of-the-art (SoTA) compression methods boast impressive advancements in preserving benign task performance, the potential risks of compression in terms of safety and trustworthiness have been largely neglected.

Ethics Fairness +1

Unveiling Typographic Deceptions: Insights of the Typographic Vulnerability in Large Vision-Language Model

no code implementations29 Feb 2024 Hao Cheng, Erjia Xiao, Jindong Gu, Le Yang, Jinhao Duan, Jize Zhang, Jiahang Cao, Kaidi Xu, Renjing Xu

Large Vision-Language Models (LVLMs) rely on vision encoders and Large Language Models (LLMs) to exhibit remarkable capabilities on various multi-modal tasks in the joint space of vision and language.

Language Modelling Object Recognition +1

A Survey on Large Language Model (LLM) Security and Privacy: The Good, the Bad, and the Ugly

no code implementations4 Dec 2023 Yifan Yao, Jinhao Duan, Kaidi Xu, Yuanfang Cai, Zhibo Sun, Yue Zhang

In the meantime, LLMs have also gained traction in the security community, revealing security vulnerabilities and showcasing their potential in security-related tasks.

Language Modelling Large Language Model +3

Can Protective Perturbation Safeguard Personal Data from Being Exploited by Stable Diffusion?

no code implementations CVPR 2024 Zhengyue Zhao, Jinhao Duan, Kaidi Xu, Chenan Wang, Rui Zhang, Zidong Du, Qi Guo, Xing Hu

Although these studies have demonstrated the ability to protect images, it is essential to consider that these methods may not be entirely applicable in real-world scenarios.

RBFormer: Improve Adversarial Robustness of Transformer by Robust Bias

no code implementations23 Sep 2023 Hao Cheng, Jinhao Duan, Hui Li, Lyutianyang Zhang, Jiahang Cao, Ping Wang, Jize Zhang, Kaidi Xu, Renjing Xu

Recently, there has been a surge of interest and attention in Transformer-based structures, such as Vision Transformer (ViT) and Vision Multilayer Perceptron (VMLP).

Adversarial Robustness

Semantic Adversarial Attacks via Diffusion Models

1 code implementation14 Sep 2023 Chenan Wang, Jinhao Duan, Chaowei Xiao, Edward Kim, Matthew Stamm, Kaidi Xu

Then there are two variants of this framework: 1) the Semantic Transformation (ST) approach fine-tunes the latent space of the generated image and/or the diffusion model itself; 2) the Latent Masking (LM) approach masks the latent space with another target image and local backpropagation-based interpretation methods.

Adversarial Attack

Exposing the Fake: Effective Diffusion-Generated Images Detection

no code implementations12 Jul 2023 RuiPeng Ma, Jinhao Duan, Fei Kong, Xiaoshuang Shi, Kaidi Xu

Image synthesis has seen significant advancements with the advent of diffusion-based generative models like Denoising Diffusion Probabilistic Models (DDPM) and text-to-image diffusion models.

Denoising Image Generation

Shifting Attention to Relevance: Towards the Predictive Uncertainty Quantification of Free-Form Large Language Models

2 code implementations3 Jul 2023 Jinhao Duan, Hao Cheng, Shiqi Wang, Alex Zavalny, Chenan Wang, Renjing Xu, Bhavya Kailkhura, Kaidi Xu

Large Language Models (LLMs) show promising results in language generation and instruction following but frequently "hallucinate", making their outputs less reliable.

Instruction Following Question Answering +4
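The relevance-shifting idea above can be illustrated with a minimal sketch: weight each generated token's negative log-probability by a relevance score, so that filler tokens contribute less to the sequence-level uncertainty estimate. The relevance scores below are assumed inputs (the paper derives them from the model itself), and the function name is hypothetical.

```python
def relevance_weighted_entropy(token_logprobs, relevance):
    """Predictive uncertainty for one generation: each token's negative
    log-probability is weighted by its normalized relevance score, so
    semantically unimportant tokens are down-weighted."""
    total = sum(relevance)
    weights = [r / total for r in relevance]
    return -sum(w * lp for w, lp in zip(weights, token_logprobs))

# Hypothetical example: the model is least confident about the middle
# (most relevant) token, which dominates the weighted estimate.
u = relevance_weighted_entropy([-0.1, -2.0, -0.05], [1.0, 3.0, 0.5])
```

Unweighted predictive entropy would average the three terms equally; the relevance weighting shifts the estimate toward the tokens that carry the answer's meaning.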

Flew Over Learning Trap: Learn Unlearnable Samples by Progressive Staged Training

1 code implementation3 Jun 2023 Pucheng Dang, Xing Hu, Kaidi Xu, Jinhao Duan, Di Huang, Husheng Han, Rui Zhang, Zidong Du, Qi Guo, Yunji Chen

Unlearning techniques are proposed to prevent third parties from exploiting unauthorized data; they generate unlearnable samples by adding imperceptible perturbations to data before public release.

Unlearnable Examples for Diffusion Models: Protect Data from Unauthorized Exploitation

no code implementations2 Jun 2023 Zhengyue Zhao, Jinhao Duan, Xing Hu, Kaidi Xu, Chenan Wang, Rui Zhang, Zidong Du, Qi Guo, Yunji Chen

This imperceptible protective noise makes the data almost unlearnable for diffusion models, i.e., diffusion models trained or fine-tuned on the protected data cannot generate high-quality and diverse images related to the protected training data.

Denoising Image Generation
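The protective-noise idea can be sketched with the classic error-minimizing recipe on a toy linear model: bounded noise delta is optimized so that a fixed model's loss on x + delta is near zero, leaving the perturbed data with almost no training signal. The linear model, step size, and epsilon bound below are illustrative assumptions; the paper applies the same principle to diffusion-model training.

```python
import numpy as np

def error_minimizing_noise(x, y, w, eps=0.1, lr=0.5, steps=50):
    """Craft bounded noise delta (|delta| <= eps elementwise) that drives
    a fixed linear model's MSE on x + delta toward zero, so the perturbed
    data carries almost nothing left to learn (unlearnable-example idea)."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        pred = (x + delta) @ w
        grad = 2.0 * np.outer(pred - y, w) / len(x)  # d(MSE)/d(delta)
        delta = np.clip(delta - lr * grad, -eps, eps)
    return delta

# Toy data: two samples the noise can make "perfectly easy" within eps.
x = np.array([[1.0, 0.0], [0.0, 1.0]])
w = np.array([1.0, 1.0])
y = np.array([0.9, 1.1])
delta = error_minimizing_noise(x, y, w)
```

Because the noise, not the underlying data, already explains the labels, a model trained on x + delta gains little from the protected samples.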

An Efficient Membership Inference Attack for the Diffusion Model by Proximal Initialization

1 code implementation26 May 2023 Fei Kong, Jinhao Duan, RuiPeng Ma, HengTao Shen, Xiaofeng Zhu, Xiaoshuang Shi, Kaidi Xu

Therefore, we also explore the robustness of diffusion models to MIA in the text-to-speech (TTS) task, which is an audio generation task.

Audio Generation Inference Attack +1

Improve Video Representation with Temporal Adversarial Augmentation

no code implementations28 Apr 2023 Jinhao Duan, Quanfu Fan, Hao Cheng, Xiaoshuang Shi, Kaidi Xu

In this paper, we introduce Temporal Adversarial Augmentation (TA), a novel video augmentation technique that utilizes temporal attention.

Are Diffusion Models Vulnerable to Membership Inference Attacks?

1 code implementation2 Feb 2023 Jinhao Duan, Fei Kong, Shiqi Wang, Xiaoshuang Shi, Kaidi Xu

In this paper, we investigate the vulnerability of diffusion models to Membership Inference Attacks (MIAs), a common privacy concern.

Image Generation
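As context for the threat model above, the generic loss-threshold baseline for membership inference can be sketched: samples the model fits unusually well are flagged as training members. The threshold and loss values below are hypothetical, and this baseline is not the paper's diffusion-specific attack.

```python
def loss_threshold_mia(losses, threshold):
    """Generic loss-threshold membership inference: predict that a sample
    is a training member if the model's loss on it falls below `threshold`
    (members tend to be fit more tightly than held-out data)."""
    return [loss < threshold for loss in losses]

# Hypothetical per-sample losses: the first three are training members.
preds = loss_threshold_mia([0.10, 0.12, 0.08, 0.45, 0.52, 0.38], threshold=0.25)
```

For diffusion models, the per-sample loss is typically the denoising objective evaluated at chosen timesteps rather than a single classification loss.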
