Search Results for author: Ningfei Wang

Found 11 papers, 0 papers with code

Revisiting Physical-World Adversarial Attack on Traffic Sign Recognition: A Commercial Systems Perspective

no code implementations • 15 Sep 2024 • Ningfei Wang, Shaoyuan Xie, Takami Sato, Yunpeng Luo, Kaidi Xu, Qi Alfred Chen

We design new attack success metrics that can mathematically model the impacts of such design on the TSR system-level attack success, and use them to revisit existing attacks.
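As a rough, hypothetical sketch of this idea (not the paper's actual metric; all names below are illustrative), consider how a per-frame success rate can overstate system-level success when a commercial TSR design keeps reacting to a sign once it has been detected:

    # Hypothetical sketch only, not the metric from the paper.
    # Idea: if a TSR system "memorizes" a sign after a single successful
    # detection, a hiding attack succeeds at the system level only when
    # the sign is missed in every frame of the vehicle's approach.
    from typing import Sequence

    def frame_level_success_rate(detections: Sequence[bool]) -> float:
        # Per-frame metric commonly reported by prior work: the fraction
        # of frames in which the attack suppresses the detection.
        return sum(not d for d in detections) / len(detections)

    def system_level_success(detections: Sequence[bool]) -> bool:
        # Under a detection-memorization design, one detection anywhere
        # in the approach lets the system react, so the attack must
        # suppress the sign in all frames.
        return not any(detections)

    # Example: the sign is still detected in 2 of 30 approach frames.
    detections = [False] * 28 + [True] * 2
    print(frame_level_success_rate(detections))  # ~0.93: looks strong per frame
    print(system_level_success(detections))      # False: fails at system level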

Adversarial Attack, Memorization, +1

ControlLoc: Physical-World Hijacking Attack on Visual Perception in Autonomous Driving

no code implementations • 9 Jun 2024 • Chen Ma, Ningfei Wang, Zhengyu Zhao, Qian Wang, Qi Alfred Chen, Chao Shen

Extensive evaluations demonstrate the superior performance of ControlLoc, which achieves an impressive average attack success rate of around 98.1% across various AD visual perceptions and datasets, four times the effectiveness of the existing hijacking attack.

Autonomous Driving, Multiple Object Tracking, +3

SlowPerception: Physical-World Latency Attack against Visual Perception in Autonomous Driving

no code implementations • 9 Jun 2024 • Chen Ma, Ningfei Wang, Zhengyu Zhao, Qi Alfred Chen, Chao Shen

Additionally, we conduct AD system-level impact assessments, such as vehicle collisions, using industry-grade AD systems and production-grade AD simulators, observing a 97% average vehicle collision rate.

Autonomous Driving, Multiple Object Tracking, +2

Towards Robustness Analysis of E-Commerce Ranking System

no code implementations • 7 Mar 2024 • Ningfei Wang, Yupin Huang, Han Cheng, Jiri Gesi, Xiaojie Wang, Vivek Mittal

As e-commerce retailers use various techniques to improve the quality of search results, we hope that this research offers valuable guidance for measuring the robustness of their ranking systems.

Information Retrieval

SlowTrack: Increasing the Latency of Camera-based Perception in Autonomous Driving Using Adversarial Examples

no code implementations • 15 Dec 2023 • Chen Ma, Ningfei Wang, Qi Alfred Chen, Chao Shen

Our evaluation results show that the system-level effects can be significantly improved, i.e., the vehicle crash rate of SlowTrack is around 95% on average, while existing works only achieve around 30%.

Autonomous Driving, object-detection, +1

Intriguing Properties of Diffusion Models: An Empirical Study of the Natural Attack Capability in Text-to-Image Generative Models

no code implementations • CVPR 2024 • Takami Sato, Justin Yue, Nanze Chen, Ningfei Wang, Qi Alfred Chen

The NDD attack shows a remarkably high capability to generate low-cost, model-agnostic, and transferable adversarial attacks by exploiting the natural attack capability of diffusion models.
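As a hedged illustration of probing such natural attack capability (this is not the NDD attack itself; the diffusion model ID, prompt, and target classifier below are assumptions for the example), one can generate images with an off-the-shelf text-to-image pipeline and check whether a pretrained classifier still recognizes the intended object:

    # Hedged sketch, not the NDD attack: probe whether diffusion-generated
    # images naturally mislead a pretrained classifier. Model ID, prompt,
    # and classifier choice are illustrative assumptions.
    import torch
    from diffusers import StableDiffusionPipeline
    from torchvision.models import resnet50, ResNet50_Weights

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")
    image = pipe("a photo of a stop sign on a street").images[0]  # PIL image

    weights = ResNet50_Weights.DEFAULT
    classifier = resnet50(weights=weights).eval()
    batch = weights.transforms()(image).unsqueeze(0)  # normalize for ResNet
    with torch.no_grad():
        pred = classifier(batch).softmax(dim=1).argmax(dim=1).item()
    # If the top label does not match the prompted object, the generated
    # image exhibits the kind of natural attack the paper studies.
    print(weights.meta["categories"][pred])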

Denoising, Image Generation

Does Physical Adversarial Example Really Matter to Autonomous Driving? Towards System-Level Effect of Adversarial Object Evasion Attack

no code implementations • ICCV 2023 • Ningfei Wang, Yunpeng Luo, Takami Sato, Kaidi Xu, Qi Alfred Chen

In this work, we conduct the first measurement study on whether and how effectively the existing designs can lead to system-level effects, especially for the STOP sign-evasion attacks due to their popularity and severity.

Autonomous Driving

Dirty Road Can Attack: Security of Deep Learning based Automated Lane Centering under Physical-World Attack

no code implementations • 14 Sep 2020 • Takami Sato, Junjie Shen, Ningfei Wang, Yunhan Jack Jia, Xue Lin, Qi Alfred Chen

Automated Lane Centering (ALC) systems are convenient and widely deployed today, but they are also highly security- and safety-critical.

Lane Detection

Security of Deep Learning based Lane Keeping System under Physical-World Adversarial Attack

no code implementations • 3 Mar 2020 • Takami Sato, Junjie Shen, Ningfei Wang, Yunhan Jack Jia, Xue Lin, Qi Alfred Chen

Lane-Keeping Assistance System (LKAS) is convenient and widely available today, but it is also extremely security- and safety-critical.

Adversarial Attack

Interpretable Deep Learning under Fire

no code implementations • 3 Dec 2018 • Xinyang Zhang, Ningfei Wang, Hua Shen, Shouling Ji, Xiapu Luo, Ting Wang

The improved interpretability is believed to offer a sense of security by involving humans in the decision-making process.

Decision Making, Deep Learning
