1 code implementation • 14 Jun 2023 • Jianping Zhang, Zhuoer Xu, Shiwen Cui, Changhua Meng, Weibin Wu, Michael R. Lyu
Therefore, in this paper, we aim to analyze the robustness of latent diffusion models more thoroughly.
no code implementations • 23 May 2023 • Wenxuan Wang, Jingyuan Huang, Chang Chen, Jiazhen Gu, Jianping Zhang, Weibin Wu, Pinjia He, Michael Lyu
To this end, content moderation software has been widely deployed on these platforms to detect and block toxic content.
2 code implementations • CVPR 2023 • Jianping Zhang, Yizhan Huang, Weibin Wu, Michael R. Lyu
However, the variance of the back-propagated gradients in intermediate blocks of ViTs may still be large, which can cause the generated adversarial samples to focus on model-specific features and get stuck in poor local optima.
1 code implementation • CVPR 2023 • Jianping Zhang, Jen-tse Huang, Wenxuan Wang, Yichen Li, Weibin Wu, Xiaosen Wang, Yuxin Su, Michael R. Lyu
However, such methods select the image augmentation path heuristically and may produce augmented images that are semantically inconsistent with the target images, which harms the transferability of the generated adversarial samples.
1 code implementation • 11 Feb 2023 • Wenxuan Wang, Jen-tse Huang, Weibin Wu, Jianping Zhang, Yizhan Huang, Shuqing Li, Pinjia He, Michael Lyu
In addition, we leverage the test cases generated by MTTM to retrain the model we explored, which largely improves model robustness (0% to 5.9% EFR) while maintaining the accuracy on the original test set.
2 code implementations • CVPR 2022 • Jianping Zhang, Weibin Wu, Jen-tse Huang, Yizhan Huang, Wenxuan Wang, Yuxin Su, Michael R. Lyu
Deep neural networks (DNNs) are known to be vulnerable to adversarial examples.
1 code implementation • CVPR 2021 • Weibin Wu, Yuxin Su, Michael R. Lyu, Irwin King
Although deep neural networks (DNNs) have achieved tremendous performance on diverse vision challenges, they are surprisingly susceptible to adversarial examples, which are crafted by intentionally perturbing benign samples in a human-imperceptible fashion.
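The idea of perturbing a benign input to flip a model's prediction can be illustrated with the classic fast gradient sign method (FGSM). The sketch below is not from any of the papers listed here; it uses a made-up logistic-regression "network" and an exaggerated perturbation budget purely for demonstration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A fixed toy linear classifier: predicts class 1 when w @ x + b > 0.
# Weights, bias, and input are illustrative values, not from any real model.
w = np.array([2.0, -1.0, 0.5])
b = 0.1

x = np.array([0.3, -0.2, 0.4])  # benign input with true label y = 1
y = 1.0

p_benign = sigmoid(w @ x + b)   # confidently predicts class 1

# Gradient of the logistic loss with respect to the *input* x.
grad_x = (p_benign - y) * w

# FGSM: take a sign-based step of size eps (an L-infinity-bounded perturbation).
# A real attack would use a much smaller eps to stay imperceptible.
eps = 0.6
x_adv = x + eps * np.sign(grad_x)

p_adv = sigmoid(w @ x_adv + b)  # the prediction flips to class 0
```

For this toy setup the benign input is classified as class 1 (p ≈ 0.75) while the perturbed input is classified as class 0 (p ≈ 0.27), showing how a small, gradient-aligned perturbation can change the decision.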
no code implementations • CVPR 2020 • Weibin Wu, Yuxin Su, Xixian Chen, Shenglin Zhao, Irwin King, Michael R. Lyu, Yu-Wing Tai
The widespread deployment of deep models necessitates the assessment of model vulnerability in practice, especially for safety- and security-sensitive domains such as autonomous driving and medical diagnosis.
no code implementations • CVPR 2020 • Weibin Wu, Yuxin Su, Xixian Chen, Shenglin Zhao, Irwin King, Michael R. Lyu, Yu-Wing Tai
With the growing prevalence of convolutional neural networks (CNNs), there is an urgent demand to explain their behaviors.