Search Results for author: Weibin Wu

Found 9 papers, 6 papers with code

On the Robustness of Latent Diffusion Models

1 code implementation • 14 Jun 2023 • Jianping Zhang, Zhuoer Xu, Shiwen Cui, Changhua Meng, Weibin Wu, Michael R. Lyu

Therefore, in this paper, we aim to analyze the robustness of latent diffusion models more thoroughly.

Denoising · Image Generation

Validating Multimedia Content Moderation Software via Semantic Fusion

no code implementations • 23 May 2023 • Wenxuan Wang, Jingyuan Huang, Chang Chen, Jiazhen Gu, Jianping Zhang, Weibin Wu, Pinjia He, Michael Lyu

To this end, content moderation software has been widely deployed on these platforms to detect and block toxic content.

Sentence

Transferable Adversarial Attacks on Vision Transformers with Token Gradient Regularization

2 code implementations • CVPR 2023 • Jianping Zhang, Yizhan Huang, Weibin Wu, Michael R. Lyu

However, the variance of the back-propagated gradients in intermediate blocks of ViTs may still be large, which can make the generated adversarial samples focus on model-specific features and get stuck in poor local optima.

Improving the Transferability of Adversarial Samples by Path-Augmented Method

1 code implementation • CVPR 2023 • Jianping Zhang, Jen-tse Huang, Wenxuan Wang, Yichen Li, Weibin Wu, Xiaosen Wang, Yuxin Su, Michael R. Lyu

However, such methods select the image augmentation path heuristically and may augment images that are semantically inconsistent with the target images, which harms the transferability of the generated adversarial samples.

Image Augmentation
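The path-augmentation idea above can be sketched as follows: build semantics-preserving copies of an input by interpolating along a straight path toward a baseline image, then average attack gradients over the copies. The function name, the zero baseline, and the step count below are illustrative assumptions, not the paper's actual method.

```python
def path_augment(image, baseline, num_steps=4):
    """Interpolate an input along the straight path from a baseline
    (here an all-zero image) up to the original. Attack gradients can
    be averaged over these copies to avoid overfitting one model."""
    return [
        [b + (i / num_steps) * (x - b) for x, b in zip(image, baseline)]
        for i in range(1, num_steps + 1)
    ]

# Toy 3-"pixel" image and a zero baseline (illustrative values).
img = [0.8, 0.2, 0.6]
copies = path_augment(img, [0.0, 0.0, 0.0])
print(copies[-1])  # the final copy is the original image itself
```

A straight path is only one choice; the paper's point is that the path should be chosen so every intermediate copy keeps the semantics of the target image.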

MTTM: Metamorphic Testing for Textual Content Moderation Software

1 code implementation • 11 Feb 2023 • Wenxuan Wang, Jen-tse Huang, Weibin Wu, Jianping Zhang, Yizhan Huang, Shuqing Li, Pinjia He, Michael Lyu

In addition, we leverage the test cases generated by MTTM to retrain the model we explored, which largely improves model robustness (0% to 5.9% EFR) while maintaining the accuracy on the original test set.

Sentence
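Metamorphic testing of a moderation model can be sketched like this: apply a label-preserving transformation to a toxic input and check that the prediction does not flip. The keyword-based moderator and the character-splitting transformation below are toy assumptions for illustration, not MTTM's actual subjects or metamorphic relations.

```python
def toy_moderator(text):
    """Stand-in keyword-based moderation model (illustrative only)."""
    return any(word in text.lower() for word in {"badword", "slur"})

def metamorphic_variants(text, target="badword"):
    """Generate label-preserving variants of a toxic input, e.g. by
    splitting the flagged word with spaces or dots. Under the
    metamorphic relation, a robust moderator should still flag them."""
    return [text.replace(target, sep.join(target)) for sep in (" ", ".", "*")]

seed = "this contains badword here"
assert toy_moderator(seed)  # the seed input is correctly flagged
failures = [v for v in metamorphic_variants(seed) if not toy_moderator(v)]
print(len(failures))  # prints 3: every variant evades the toy moderator
```

Each failure is a concrete test case; as the abstract notes, such cases can then be fed back into retraining to improve robustness.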

Improving the Transferability of Adversarial Samples With Adversarial Transformations

1 code implementation • CVPR 2021 • Weibin Wu, Yuxin Su, Michael R. Lyu, Irwin King

Although deep neural networks (DNNs) have achieved tremendous performance in diverse vision challenges, they are surprisingly susceptible to adversarial examples, which are crafted by intentionally perturbing benign samples in a human-imperceptible fashion.

Boosting the Transferability of Adversarial Samples via Attention

no code implementations • CVPR 2020 • Weibin Wu, Yuxin Su, Xixian Chen, Shenglin Zhao, Irwin King, Michael R. Lyu, Yu-Wing Tai

The widespread deployment of deep models necessitates the assessment of model vulnerability in practice, especially for safety- and security-sensitive domains such as autonomous driving and medical diagnosis.

Autonomous Driving · Medical Diagnosis
