Search Results for author: Huanqian Yan

Found 11 papers, 6 papers with code

CapGen: An Environment-Adaptive Generator of Adversarial Patches

no code implementations • 10 Dec 2024 • Chaoqun Li, Zhuodong Liu, Huanqian Yan, Hang Su

Adversarial patches, often used to provide physical stealth protection for critical assets and assess perception algorithm robustness, usually neglect the need for visual harmony with the background environment, making them easily noticeable.

Global Challenge for Safe and Secure LLMs Track 1

no code implementations • 21 Nov 2024 • Xiaojun Jia, Yihao Huang, Yang Liu, Peng Yan Tan, Weng Kuan Yau, Mun-Thye Mak, Xin Ming Sim, Wee Siong Ng, See Kiong Ng, Hanqing Liu, Lifeng Zhou, Huanqian Yan, Xiaobing Sun, Wei Liu, Long Wang, Yiming Qian, Yong Liu, Junxiao Yang, Zhexin Zhang, Leqi Lei, Renmiao Chen, Yida Lu, Shiyao Cui, Zizhou Wang, Shaohua Li, Yan Wang, Rick Siow Mong Goh, Liangli Zhen, Yingjie Zhang, Zhe Zhao

This paper introduces the Global Challenge for Safe and Secure Large Language Models (LLMs), a pioneering initiative organized by AI Singapore (AISG) and the CyberSG R&D Programme Office (CRPO) to foster the development of advanced defense mechanisms against automated jailbreaking attacks.

Misinformation

Prompt-Guided Environmentally Consistent Adversarial Patch

no code implementations • 15 Nov 2024 • Chaoqun Li, Huanqian Yan, Lifeng Zhou, Tairan Chen, Zhuodong Liu, Hang Su

Adversarial attacks in the physical world pose a significant threat to the security of vision-based systems, such as facial recognition and autonomous driving.

Autonomous Driving

Boosting Jailbreak Transferability for Large Language Models

1 code implementation • 21 Oct 2024 • Hanqing Liu, Lifeng Zhou, Huanqian Yan

Large language models have drawn significant attention to the challenge of safe alignment, especially regarding jailbreak attacks that circumvent security measures to produce harmful content.

Enhancing Transferability of Adversarial Examples with Spatial Momentum

no code implementations • 25 Mar 2022 • Guoqiu Wang, Huanqian Yan, Xingxing Wei

We propose a novel method named the Spatial Momentum Iterative FGSM (SMI-FGSM) attack, which extends momentum accumulation from the temporal domain to the spatial domain by considering context information from different regions within the image (a minimal sketch follows below).

Adversarial Attack
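
The spatial-momentum idea above admits a compact sketch. Below is a minimal, hypothetical PyTorch rendering of an untargeted L-infinity attack in the spirit of SMI-FGSM: each step averages gradients over several randomly shifted copies of the image (the spatial momentum) before the usual MI-FGSM momentum update. The function name `smi_fgsm` and all hyperparameters are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F

def smi_fgsm(model, x, y, eps=8/255, steps=10, mu=1.0, n_regions=8, max_shift=4):
    """Hypothetical sketch of a spatial-momentum attack (untargeted, L-inf)."""
    alpha = eps / steps
    x_adv = x.clone().detach()
    g = torch.zeros_like(x)  # temporal momentum accumulator (as in MI-FGSM)
    for _ in range(steps):
        grad_sum = torch.zeros_like(x)
        for _ in range(n_regions):
            # Sample a random shift so the gradient also reflects context
            # from neighbouring regions of the image.
            dx, dy = torch.randint(-max_shift, max_shift + 1, (2,)).tolist()
            x_in = torch.roll(x_adv, shifts=(dx, dy), dims=(2, 3)).requires_grad_(True)
            loss = F.cross_entropy(model(x_in), y)
            grad = torch.autograd.grad(loss, x_in)[0]
            # Undo the shift so gradients align with the original pixel grid.
            grad_sum += torch.roll(grad, shifts=(-dx, -dy), dims=(2, 3))
        spatial_grad = grad_sum / n_regions  # average over regions ("spatial momentum")
        g = mu * g + spatial_grad / spatial_grad.abs().mean(dim=(1, 2, 3), keepdim=True)
        x_adv = x_adv + alpha * g.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1).detach()
    return x_adv
```

Averaging over shifted views plays the same stabilizing role across image regions that momentum plays across iterations, which matches the intuition the abstract describes.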

Generating Transferable Adversarial Patch by Simultaneously Optimizing its Position and Perturbations

no code implementations • 29 Sep 2021 • Xingxing Wei, Ying Guo, Jie Yu, Huanqian Yan, Bo Zhang

In this paper, we propose a method that simultaneously optimizes the patch's position and perturbation to generate transferable adversarial patches, thereby achieving high attack success rates in the black-box setting (a rough sketch of such joint optimization follows below).

Face Recognition • Position
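
As a rough illustration of what jointly optimizing a patch's position and perturbation can look like, here is a hypothetical PyTorch sketch that alternates gradient updates on the patch pixels with a periodic discrete search over paste positions. The paper's actual optimization scheme may differ; `paste_patch`, `joint_patch_attack`, and all hyperparameters are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def paste_patch(x, patch, pos):
    """Overwrite a square region of the images x with the patch at pos=(row, col)."""
    out = x.clone()
    r, c = pos
    ph, pw = patch.shape[-2:]
    out[..., r:r + ph, c:c + pw] = patch.clamp(0, 1)
    return out

def joint_patch_attack(model, x, y, patch_size=40, steps=200, lr=0.05, n_candidates=10):
    _, _, H, W = x.shape
    patch = torch.rand(1, 3, patch_size, patch_size, requires_grad=True)
    pos = (0, 0)
    opt = torch.optim.Adam([patch], lr=lr)
    for step in range(steps):
        # (1) Perturbation step: maximize the classification loss at the current position.
        loss = -F.cross_entropy(model(paste_patch(x, patch, pos)), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
        # (2) Position step: positions are discrete, so periodically sample
        # candidates and keep the most damaging one.
        if step % 10 == 0:
            with torch.no_grad():
                cands = [pos] + [
                    (torch.randint(0, H - patch_size + 1, (1,)).item(),
                     torch.randint(0, W - patch_size + 1, (1,)).item())
                    for _ in range(n_candidates)
                ]
                scores = torch.stack([
                    F.cross_entropy(model(paste_patch(x, patch, p)), y) for p in cands
                ])
                pos = cands[int(scores.argmax())]
    return patch.detach().clamp(0, 1), pos
```

The alternation reflects that patch pixels are continuous and amenable to gradient ascent, while the paste position is discrete and is handled here by candidate sampling.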

An Effective and Robust Detector for Logo Detection

2 code implementations • 1 Aug 2021 • Xiaojun Jia, Huanqian Yan, Yonglin Wu, Xingxing Wei, Xiaochun Cao, Yong Zhang

Moreover, we applied the proposed methods to the ACM MM2021 Robust Logo Detection competition organized by Alibaba on the Tianchi platform and ranked in the top 2 among 36,489 teams.

Data Augmentation

Improving Adversarial Transferability with Gradient Refining

1 code implementation • 11 May 2021 • Guoqiu Wang, Huanqian Yan, Ying Guo, Xingxing Wei

To improve the transferability of adversarial examples in the black-box setting, several methods have been proposed, e.g., input diversity, translation-invariant attacks, and momentum-based attacks (a sketch combining two of these follows below).

Adversarial Attack • Diversity • +1
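
For concreteness, here is a hypothetical PyTorch sketch combining two of the boosters named above, momentum (MI-FGSM) and input diversity (random resize-and-pad). It is not the paper's Gradient Refining method itself; `diverse_input`, `di_mi_fgsm`, and the hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def diverse_input(x, out_size=331, p=0.5):
    """Input diversity: with probability p, randomly resize then zero-pad the batch.
    Assumes inputs smaller than out_size (e.g. 224 or 299 pixels)."""
    if torch.rand(()).item() > p:
        return x
    h = torch.randint(x.shape[-1], out_size, (1,)).item()
    x = F.interpolate(x, size=(h, h), mode="nearest")
    pad = out_size - h
    top, left = torch.randint(0, pad + 1, (2,)).tolist()
    # (left, right, top, bottom) zero padding; the model must accept variable sizes.
    return F.pad(x, (left, pad - left, top, pad - top))

def di_mi_fgsm(model, x, y, eps=8/255, steps=10, mu=1.0):
    """MI-FGSM with input diversity (untargeted, L-inf)."""
    alpha = eps / steps
    x_adv = x.clone().detach()
    g = torch.zeros_like(x)  # momentum accumulator
    for _ in range(steps):
        x_in = x_adv.clone().requires_grad_(True)
        loss = F.cross_entropy(model(diverse_input(x_in)), y)
        grad = torch.autograd.grad(loss, x_in)[0]
        g = mu * g + grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True)
        x_adv = torch.min(torch.max(x_adv + alpha * g.sign(), x - eps), x + eps)
        x_adv = x_adv.clamp(0, 1).detach()
    return x_adv
```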

Object Hider: Adversarial Patch Attack Against Object Detectors

1 code implementation • 28 Oct 2020 • Yusheng Zhao, Huanqian Yan, Xingxing Wei

Additionally, we applied the proposed methods to the "Adversarial Challenge on Object Detection" competition organized by Alibaba on the Tianchi platform and ranked in the top 7 among 1,701 teams.

Adversarial Attack • Object • +2
