Search Results for author: Qi Alfred Chen

Found 29 papers, 8 papers with code

Are VLMs Ready for Autonomous Driving? An Empirical Study from the Reliability, Data, and Metric Perspectives

1 code implementation7 Jan 2025 Shaoyuan Xie, Lingdong Kong, Yuhao Dong, Chonghao Sima, Wenwei Zhang, Qi Alfred Chen, Ziwei Liu, Liang Pan

Additionally, we highlight the potential of leveraging VLMs' awareness of corruptions to enhance their reliability, offering a roadmap for developing more trustworthy and interpretable decision-making systems in real-world autonomous driving contexts.

Autonomous Driving General Knowledge +1

Revisiting Physical-World Adversarial Attack on Traffic Sign Recognition: A Commercial Systems Perspective

no code implementations15 Sep 2024 Ningfei Wang, Shaoyuan Xie, Takami Sato, Yunpeng Luo, Kaidi Xu, Qi Alfred Chen

We design new attack success metrics that can mathematically model the impacts of such design on the TSR system-level attack success, and use them to revisit existing attacks.

Adversarial Attack Memorization +1

SlowPerception: Physical-World Latency Attack against Visual Perception in Autonomous Driving

no code implementations9 Jun 2024 Chen Ma, Ningfei Wang, Zhengyu Zhao, Qi Alfred Chen, Chao Shen

Additionally, we conduct AD system-level impact assessments, such as vehicle collisions, using industry-grade AD systems with production-grade AD simulators with a 97% average rate.

Autonomous Driving Multiple Object Tracking +2

ControlLoc: Physical-World Hijacking Attack on Visual Perception in Autonomous Driving

no code implementations9 Jun 2024 Chen Ma, Ningfei Wang, Zhengyu Zhao, Qian Wang, Qi Alfred Chen, Chao Shen

Extensive evaluations demonstrate the superior performance of ControlLoc, achieving an impressive average attack success rate of around 98. 1% across various AD visual perceptions and datasets, which is four times greater effectiveness than the existing hijacking attack.

Autonomous Driving Multiple Object Tracking +3

Can We Trust Embodied Agents? Exploring Backdoor Attacks against Embodied LLM-based Decision-Making Systems

no code implementations27 May 2024 Ruochen Jiao, Shaoyuan Xie, Justin Yue, Takami Sato, Lixu Wang, YiXuan Wang, Qi Alfred Chen, Qi Zhu

Specifically, we propose three distinct attack mechanisms: word injection, scenario manipulation, and knowledge injection, targeting various components in the LLM-based decision-making pipeline.

Autonomous Driving Common Sense Reasoning +3

Invisible Reflections: Leveraging Infrared Laser Reflections to Target Traffic Sign Perception

no code implementations7 Jan 2024 Takami Sato, Sri Hrushikesh Varma Bhupathiraju, Michael Clifford, Takeshi Sugawara, Qi Alfred Chen, Sara Rampazzi

We evaluate the effectiveness of the ILR attack with real-world experiments against two major traffic sign recognition architectures on four IR-sensitive cameras.

Traffic Sign Recognition

SlowTrack: Increasing the Latency of Camera-based Perception in Autonomous Driving Using Adversarial Examples

no code implementations15 Dec 2023 Chen Ma, Ningfei Wang, Qi Alfred Chen, Chao Shen

Our evaluation results show that the system-level effects can be significantly improved, i. e., the vehicle crash rate of SlowTrack is around 95% on average while existing works only have around 30%.

Autonomous Driving object-detection +1

On Data Fabrication in Collaborative Vehicular Perception: Attacks and Countermeasures

1 code implementation22 Sep 2023 Qingzhao Zhang, Shuowei Jin, Ruiyang Zhu, Jiachen Sun, Xumiao Zhang, Qi Alfred Chen, Z. Morley Mao

To understand the impact of the vulnerability, we break the ground by proposing various real-time data fabrication attacks in which the attacker delivers crafted malicious data to victims in order to perturb their perception results, leading to hard brakes or increased collision risks.

Anomaly Detection Autonomous Vehicles

Intriguing Properties of Diffusion Models: An Empirical Study of the Natural Attack Capability in Text-to-Image Generative Models

no code implementations CVPR 2024 Takami Sato, Justin Yue, Nanze Chen, Ningfei Wang, Qi Alfred Chen

The NDD attack shows a significantly high capability to generate low-cost, model-agnostic, and transferable adversarial attacks by exploiting the natural attack capability in diffusion models.

Denoising Image Generation

Does Physical Adversarial Example Really Matter to Autonomous Driving? Towards System-Level Effect of Adversarial Object Evasion Attack

no code implementations ICCV 2023 Ningfei Wang, Yunpeng Luo, Takami Sato, Kaidi Xu, Qi Alfred Chen

In this work, we conduct the first measurement study on whether and how effectively the existing designs can lead to system-level effects, especially for the STOP sign-evasion attacks due to their popularity and severity.

Autonomous Driving

LiDAR Spoofing Meets the New-Gen: Capability Improvements, Broken Assumptions, and New Attack Strategies

no code implementations19 Mar 2023 Takami Sato, Yuki Hayakawa, Ryo Suzuki, Yohsuke Shiiki, Kentaro Yoshioka, Qi Alfred Chen

To fill these critical research gaps, we conduct the first large-scale measurement study on LiDAR spoofing attack capabilities on object detectors with 9 popular LiDARs, covering both first- and new-generation LiDARs, and 3 major types of object detectors trained on 5 different datasets.

Autonomous Driving Object +2

Learning Representation for Anomaly Detection of Vehicle Trajectories

no code implementations9 Mar 2023 Ruochen Jiao, Juyang Bai, Xiangguo Liu, Takami Sato, Xiaowei Yuan, Qi Alfred Chen, Qi Zhu

We conduct extensive experiments to demonstrate that our supervised method based on contrastive learning and unsupervised method based on reconstruction with semantic latent space can significantly improve the performance of anomalous trajectory detection in their corresponding settings over various baseline methods.

Anomaly Detection Autonomous Driving +3

Semi-supervised Semantics-guided Adversarial Training for Trajectory Prediction

no code implementations27 May 2022 Ruochen Jiao, Xiangguo Liu, Takami Sato, Qi Alfred Chen, Qi Zhu

In addition, experiments show that our method can significantly improve the system's robust generalization to unseen patterns of attacks.

Adversarial Robustness Decision Making +2

Towards Driving-Oriented Metric for Lane Detection Models

1 code implementation CVPR 2022 Takami Sato, Qi Alfred Chen

After the 2017 TuSimple Lane Detection Challenge, its dataset and evaluation based on accuracy and F1 score have become the de facto standard to measure the performance of lane detection methods.

Autonomous Driving Lane Detection

Sensor Adversarial Traits: Analyzing Robustness of 3D Object Detection Sensor Fusion Models

no code implementations13 Sep 2021 Won Park, Nan Li, Qi Alfred Chen, Z. Morley Mao

A critical aspect of autonomous vehicles (AVs) is the object detection stage, which is increasingly being performed with sensor fusion models: multimodal 3D object detection models which utilize both 2D RGB image data and 3D data from a LIDAR sensor as inputs.

3D Object Detection Autonomous Vehicles +3

On Robustness of Lane Detection Models to Physical-World Adversarial Attacks in Autonomous Driving

no code implementations6 Jul 2021 Takami Sato, Qi Alfred Chen

We demonstrate that the conventional evaluation fails to reflect the robustness in end-to-end autonomous driving scenarios.

Autonomous Driving Lane Detection

End-to-end Uncertainty-based Mitigation of Adversarial Attacks to Automated Lane Centering

no code implementations27 Feb 2021 Ruochen Jiao, Hengyi Liang, Takami Sato, Junjie Shen, Qi Alfred Chen, Qi Zhu

The experiment results demonstrate that our approach can effectively mitigate the impact of adversarial attacks and can achieve 55% to 90% improvement over the original OpenPilot.

Autonomous Driving

On Adversarial Robustness of 3D Point Cloud Classification under Adaptive Attacks

no code implementations24 Nov 2020 Jiachen Sun, Karl Koenig, Yulong Cao, Qi Alfred Chen, Z. Morley Mao

Since adversarial training (AT) is believed as the most robust defense, we present the first in-depth study showing how AT behaves in point cloud classification and identify that the required symmetric function (pooling operation) is paramount to the 3D model's robustness under AT.

3D Point Cloud Classification Adversarial Robustness +3

On The Adversarial Robustness of 3D Point Cloud Classification

no code implementations28 Sep 2020 Jiachen Sun, Karl Koenig, Yulong Cao, Qi Alfred Chen, Zhuoqing Mao

Since adversarial training (AT) is believed to be the most effective defense, we present the first in-depth study showing how AT behaves in point cloud classification and identify that the required symmetric function (pooling operation) is paramount to the model's robustness under AT.

3D Point Cloud Classification Adversarial Robustness +3

Dirty Road Can Attack: Security of Deep Learning based Automated Lane Centering under Physical-World Attack

no code implementations14 Sep 2020 Takami Sato, Junjie Shen, Ningfei Wang, Yunhan Jack Jia, Xue Lin, Qi Alfred Chen

Automated Lane Centering (ALC) systems are convenient and widely deployed today, but also highly security and safety critical.

Lane Detection

Towards Robust LiDAR-based Perception in Autonomous Driving: General Black-box Adversarial Sensor Attack and Countermeasures

no code implementations30 Jun 2020 Jiachen Sun, Yulong Cao, Qi Alfred Chen, Z. Morley Mao

In this work, we perform the first study to explore the general vulnerability of current LiDAR-based perception architectures and discover that the ignored occlusion patterns in LiDAR point clouds make self-driving cars vulnerable to spoofing attacks.

Autonomous Driving Self-Driving Cars

Security of Deep Learning based Lane Keeping System under Physical-World Adversarial Attack

no code implementations3 Mar 2020 Takami Sato, Junjie Shen, Ningfei Wang, Yunhan Jack Jia, Xue Lin, Qi Alfred Chen

Lane-Keeping Assistance System (LKAS) is convenient and widely available today, but also extremely security and safety critical.

Adversarial Attack

Fooling Detection Alone is Not Enough: Adversarial Attack against Multiple Object Tracking

1 code implementation ICLR 2020 Yunhan Jia, Yantao Lu, Junjie Shen, Qi Alfred Chen, Hao Chen, Zhenyu Zhong, Tao Wei

Recent work in adversarial machine learning started to focus on the visual perception in autonomous driving and studied Adversarial Examples (AEs) for object detection models.

Adversarial Attack Autonomous Driving +5

Adversarial Sensor Attack on LiDAR-based Perception in Autonomous Driving

no code implementations16 Jul 2019 Yulong Cao, Chaowei Xiao, Benjamin Cyr, Yimeng Zhou, Won Park, Sara Rampazzi, Qi Alfred Chen, Kevin Fu, Z. Morley Mao

In contrast to prior work that concentrates on camera-based perception, in this work we perform the first security study of LiDAR-based perception in AV settings, which is highly important but unexplored.

Autonomous Driving BIG-bench Machine Learning +2

Fooling Detection Alone is Not Enough: First Adversarial Attack against Multiple Object Tracking

1 code implementation27 May 2019 Yunhan Jia, Yantao Lu, Junjie Shen, Qi Alfred Chen, Zhenyu Zhong, Tao Wei

Recent work in adversarial machine learning started to focus on the visual perception in autonomous driving and studied Adversarial Examples (AEs) for object detection models.

Adversarial Attack Autonomous Driving +5

Cannot find the paper you are looking for? You can Submit a new open access paper.