no code implementations • 10 Dec 2024 • Chaoqun Li, Zhuodong Liu, Huanqian Yan, Hang Su
Adversarial patches, often used to provide physical stealth protection for critical assets and assess perception algorithm robustness, usually neglect the need for visual harmony with the background environment, making them easily noticeable.
no code implementations • 21 Nov 2024 • Xiaojun Jia, Yihao Huang, Yang Liu, Peng Yan Tan, Weng Kuan Yau, Mun-Thye Mak, Xin Ming Sim, Wee Siong Ng, See Kiong Ng, Hanqing Liu, Lifeng Zhou, Huanqian Yan, Xiaobing Sun, Wei Liu, Long Wang, Yiming Qian, Yong Liu, Junxiao Yang, Zhexin Zhang, Leqi Lei, Renmiao Chen, Yida Lu, Shiyao Cui, Zizhou Wang, Shaohua Li, Yan Wang, Rick Siow Mong Goh, Liangli Zhen, Yingjie Zhang, Zhe Zhao
This paper introduces the Global Challenge for Safe and Secure Large Language Models (LLMs), a pioneering initiative organized by AI Singapore (AISG) and the CyberSG R&D Programme Office (CRPO) to foster the development of advanced defense mechanisms against automated jailbreaking attacks.
no code implementations • 15 Nov 2024 • Chaoqun Li, Huanqian Yan, Lifeng Zhou, Tairan Chen, Zhuodong Liu, Hang Su
Adversarial attacks in the physical world pose a significant threat to the security of vision-based systems, such as facial recognition and autonomous driving.
1 code implementation • 21 Oct 2024 • Hanqing Liu, Lifeng Zhou, Huanqian Yan
Large language models have drawn significant attention to the challenge of safe alignment, especially regarding jailbreak attacks that circumvent security measures to produce harmful content.
no code implementations • 25 Mar 2022 • Guoqiu Wang, Huanqian Yan, Xingxing Wei
To this end, we propose a novel method named Spatial Momentum Iterative FGSM (SMI-FGSM), which extends momentum accumulation from the temporal domain to the spatial domain by considering contextual information from different regions within the image.
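A minimal PyTorch sketch of the spatial-momentum idea, assuming random pixel shifts as the source of region context; the parameter names (num_shifts, max_shift) and the shift-based sampling are illustrative assumptions rather than the authors' exact recipe.

```python
# Sketch: spatial momentum (average gradients over shifted copies) combined
# with temporal momentum accumulation as in MI-FGSM. Assumes NCHW inputs.
import torch
import torch.nn.functional as F

def smi_fgsm_sketch(model, x, y, eps=8/255, alpha=2/255, steps=10, mu=1.0,
                    num_shifts=5, max_shift=3):
    x_adv = x.clone().detach()
    g = torch.zeros_like(x)                      # temporal momentum buffer
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Spatial momentum: average gradients over randomly shifted copies
        # so that context from neighbouring regions stabilises the update.
        grad = torch.zeros_like(x)
        for _ in range(num_shifts):
            dx, dy = torch.randint(-max_shift, max_shift + 1, (2,)).tolist()
            x_shift = torch.roll(x_adv, shifts=(dx, dy), dims=(2, 3))
            loss = F.cross_entropy(model(x_shift), y)
            grad += torch.autograd.grad(loss, x_adv)[0]
        grad /= num_shifts
        # Temporal momentum accumulation (MI-FGSM-style L1 normalisation).
        g = mu * g + grad / (grad.abs().mean(dim=(1, 2, 3), keepdim=True) + 1e-12)
        x_adv = x_adv.detach() + alpha * g.sign()
        x_adv = torch.clamp(torch.min(torch.max(x_adv, x - eps), x + eps), 0, 1)
    return x_adv.detach()
```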
1 code implementation • 17 Oct 2021 • Yuefeng Chen, Xiaofeng Mao, Yuan He, Hui Xue, Chao Li, Yinpeng Dong, Qi-An Fu, Xiao Yang, Tianyu Pang, Hang Su, Jun Zhu, Fangcheng Liu, Chao Zhang, Hongyang Zhang, Yichi Zhang, Shilong Liu, Chang Liu, Wenzhao Xiang, Yajie Wang, Huipeng Zhou, Haoran Lyu, Yidan Xu, Zixuan Xu, Taoyu Zhu, Wenjun Li, Xianfeng Gao, Guoqiu Wang, Huanqian Yan, Ying Guo, Chaoning Zhang, Zheng Fang, Yang Wang, Bingyang Fu, Yunfei Zheng, Yekui Wang, Haorong Luo, Zhen Yang
Many works have investigated the adversarial attacks or defenses under the settings where a bounded and imperceptible perturbation can be added to the input.
no code implementations • 29 Sep 2021 • Xingxing Wei, Ying Guo, Jie Yu, Huanqian Yan, Bo Zhang
In this paper, we propose a method to simultaneously optimize the position and perturbation to generate transferable adversarial patches, and thus obtain high attack success rates in the black-box setting.
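A hedged sketch of one way to alternate between a position search and gradient updates of the patch content; the random candidate search, the helper names, and the single surrogate model are simplifying assumptions and do not reproduce the paper's transferability machinery.

```python
# Sketch: alternate a coarse search over patch positions with gradient ascent
# on the patch pixels at the currently best position.
import torch
import torch.nn.functional as F

def apply_patch(x, patch, top, left):
    x = x.clone()
    h, w = patch.shape[-2:]
    x[..., top:top + h, left:left + w] = patch   # paste patch at (top, left)
    return x

def optimize_patch(model, x, y, patch_size=50, steps=100, lr=0.05, candidates=10):
    patch = torch.rand(1, 3, patch_size, patch_size, requires_grad=True)
    H, W = x.shape[-2:]
    top = left = 0
    for _ in range(steps):
        # Position step: sample candidate locations, keep the most adversarial one.
        with torch.no_grad():
            best_loss = -float("inf")
            for _ in range(candidates):
                t = torch.randint(0, H - patch_size + 1, (1,)).item()
                l = torch.randint(0, W - patch_size + 1, (1,)).item()
                loss = F.cross_entropy(model(apply_patch(x, patch, t, l)), y)
                if loss.item() > best_loss:
                    best_loss, top, left = loss.item(), t, l
        # Perturbation step: gradient ascent on patch pixels at that position.
        loss = F.cross_entropy(model(apply_patch(x, patch, top, left)), y)
        grad = torch.autograd.grad(loss, patch)[0]
        patch = (patch + lr * grad.sign()).clamp(0, 1).detach().requires_grad_(True)
    return patch.detach(), (top, left)
```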
2 code implementations • 1 Aug 2021 • Xiaojun Jia, Huanqian Yan, Yonglin Wu, Xingxing Wei, Xiaochun Cao, Yong Zhang
Moreover, we applied the proposed methods in the ACM MM 2021 Robust Logo Detection competition organized by Alibaba on the Tianchi platform and placed in the top 2 among 36,489 teams.
1 code implementation • 11 May 2021 • Guoqiu Wang, Huanqian Yan, Ying Guo, Xingxing Wei
To improve the transferability of adversarial examples in the black-box setting, several methods have been proposed, e.g., input diversity, translation-invariant attacks, and momentum-based attacks.
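For reference, a minimal sketch of an input-diversity transform in the spirit of DI-FGSM-style attacks: with probability p, randomly resize the image and pad it back to its original size before each gradient step. The parameter ranges and names are illustrative assumptions, not a specific paper's settings.

```python
# Sketch: random resize-and-pad input-diversity transform applied before
# computing the gradient in an iterative attack. Assumes NCHW inputs.
import random
import torch
import torch.nn.functional as F

def diverse_input(x, low=0.9, p=0.5):
    if random.random() > p:
        return x                                   # keep original with prob 1 - p
    _, _, h, w = x.shape
    new_h = random.randint(int(low * h), h)
    new_w = random.randint(int(low * w), w)
    x_small = F.interpolate(x, size=(new_h, new_w), mode="bilinear",
                            align_corners=False)
    pad_top = random.randint(0, h - new_h)
    pad_left = random.randint(0, w - new_w)
    # F.pad order for 4D tensors: (left, right, top, bottom)
    return F.pad(x_small, (pad_left, w - new_w - pad_left,
                           pad_top, h - new_h - pad_top))
```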
1 code implementation • 28 Oct 2020 • Yusheng Zhao, Huanqian Yan, Xingxing Wei
Additionally, we applied the proposed methods in the "Adversarial Challenge on Object Detection" competition organized by Alibaba on the Tianchi platform and placed in the top 7 among 1,701 teams.
1 code implementation • 11 Jan 2020 • Xingxing Wei, Huanqian Yan, Bo Li
Adversarial attacks on video recognition models have been explored recently.