Search Results for author: Qi Alfred Chen

Found 24 papers, 7 papers with code

Fooling Detection Alone is Not Enough: First Adversarial Attack against Multiple Object Tracking

1 code implementation · 27 May 2019 · Yunhan Jia, Yantao Lu, Junjie Shen, Qi Alfred Chen, Zhenyu Zhong, Tao Wei

Recent work in adversarial machine learning started to focus on the visual perception in autonomous driving and studied Adversarial Examples (AEs) for object detection models.

Adversarial Attack · Autonomous Driving · +5

Fooling Detection Alone is Not Enough: Adversarial Attack against Multiple Object Tracking

1 code implementation · ICLR 2020 · Yunhan Jia, Yantao Lu, Junjie Shen, Qi Alfred Chen, Hao Chen, Zhenyu Zhong, Tao Wei

Recent work in adversarial machine learning started to focus on the visual perception in autonomous driving and studied Adversarial Examples (AEs) for object detection models.

Adversarial Attack · Autonomous Driving · +5

Towards Driving-Oriented Metric for Lane Detection Models

1 code implementation · CVPR 2022 · Takami Sato, Qi Alfred Chen

After the 2017 TuSimple Lane Detection Challenge, its dataset and evaluation based on accuracy and F1 score have become the de facto standard to measure the performance of lane detection methods.

Autonomous Driving · Lane Detection
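
To make the conventional evaluation concrete, below is a minimal sketch of a TuSimple-style per-point accuracy check: a predicted lane point counts as correct when its x-coordinate falls within a fixed pixel threshold of the ground truth at the same image row. The 20-pixel threshold and the toy coordinates are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def tusimple_style_accuracy(pred_x, gt_x, pixel_thresh=20):
    """Per-point accuracy in the spirit of the TuSimple benchmark.

    pred_x, gt_x: x-coordinates of one lane sampled at the same fixed image
    rows. A point is 'correct' when the prediction is within `pixel_thresh`
    pixels of the ground truth; handling of missing points is omitted here.
    """
    pred_x = np.asarray(pred_x, dtype=float)
    gt_x = np.asarray(gt_x, dtype=float)
    return float(np.mean(np.abs(pred_x - gt_x) < pixel_thresh))

# Toy example: 5 sampled rows; one point drifts 27 px off the lane, yet the
# per-point score still looks high, which says little about driving impact.
print(tusimple_style_accuracy([412, 430, 460, 505, 503],
                              [410, 432, 455, 478, 500]))  # 0.8
```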

On Data Fabrication in Collaborative Vehicular Perception: Attacks and Countermeasures

1 code implementation · 22 Sep 2023 · Qingzhao Zhang, Shuowei Jin, Ruiyang Zhu, Jiachen Sun, Xumiao Zhang, Qi Alfred Chen, Z. Morley Mao

To understand the impact of the vulnerability, we break ground by proposing various real-time data fabrication attacks in which the attacker delivers crafted malicious data to victims in order to perturb their perception results, leading to hard brakes or increased collision risks.

Anomaly Detection · Autonomous Vehicles
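
Purely to illustrate the threat model, the toy sketch below shows how a naive late fusion of shared detections, with no provenance or consistency checks, would accept a fabricated obstacle from a malicious participant. The message format and fusion rule are invented for illustration and are not the paper's system or attack.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    x: float        # metres ahead of the ego vehicle
    y: float        # metres to the left (+) / right (-)
    confidence: float

def naive_fusion(messages, conf_thresh=0.5):
    """Toy late fusion: trust every shared detection above a confidence
    threshold. Without provenance or consistency checks, one fabricated
    message is enough to plant a ghost obstacle in the fused view."""
    fused = []
    for sender, dets in messages.items():
        fused += [(sender, d) for d in dets if d.confidence >= conf_thresh]
    return fused

messages = {
    "ego":      [Detection(x=40.0, y=0.5, confidence=0.9)],
    "benign_v": [Detection(x=41.0, y=0.4, confidence=0.8)],
    # Fabricated object 8 m ahead of the victim -> likely hard brake.
    "attacker": [Detection(x=8.0, y=0.0, confidence=0.99)],
}
for sender, det in naive_fusion(messages):
    print(sender, det)
```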

Adversarial Sensor Attack on LiDAR-based Perception in Autonomous Driving

no code implementations · 16 Jul 2019 · Yulong Cao, Chaowei Xiao, Benjamin Cyr, Yimeng Zhou, Won Park, Sara Rampazzi, Qi Alfred Chen, Kevin Fu, Z. Morley Mao

In contrast to prior work that concentrates on camera-based perception, in this work we perform the first security study of LiDAR-based perception in AV settings, which is highly important but unexplored.

Autonomous Driving · BIG-bench Machine Learning · +2

Security of Deep Learning based Lane Keeping System under Physical-World Adversarial Attack

no code implementations · 3 Mar 2020 · Takami Sato, Junjie Shen, Ningfei Wang, Yunhan Jack Jia, Xue Lin, Qi Alfred Chen

Lane-Keeping Assistance System (LKAS) is convenient and widely available today, but is also highly security- and safety-critical.

Adversarial Attack

Towards Robust LiDAR-based Perception in Autonomous Driving: General Black-box Adversarial Sensor Attack and Countermeasures

no code implementations · 30 Jun 2020 · Jiachen Sun, Yulong Cao, Qi Alfred Chen, Z. Morley Mao

In this work, we perform the first study to explore the general vulnerability of current LiDAR-based perception architectures and discover that the ignored occlusion patterns in LiDAR point clouds make self-driving cars vulnerable to spoofing attacks.

Autonomous Driving · Self-Driving Cars
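
As a rough illustration of the occlusion argument above, the toy check below flags LiDAR returns that lie well behind a much closer return in the same azimuth bin, which a single-return sensor could not physically produce. The 2D single-sweep model, bin count, and slack margin are simplifying assumptions, not the paper's actual countermeasure.

```python
import numpy as np

def occlusion_inconsistency(points, num_bins=360, slack=1.0):
    """Toy occlusion check for a 2D, single-return sweep: within each azimuth
    bin, the beam is blocked by its first return, so a point lying well behind
    a much closer return in the same bin is physically implausible.

    points: (N, 2) array of (azimuth_deg in [0, 360), range_m).
    Returns a boolean mask marking suspicious points.
    """
    az, rng = points[:, 0], points[:, 1]
    bins = (az / 360.0 * num_bins).astype(int) % num_bins
    nearest = np.full(num_bins, np.inf)
    np.minimum.at(nearest, bins, rng)          # closest return per bin
    return rng > nearest[bins] + slack         # far behind the closest hit

pts = np.array([[10.0, 5.0],    # real obstacle at 5 m
                [10.2, 30.0]])  # "return" 25 m behind it in the same bin
print(occlusion_inconsistency(pts))            # [False  True]
```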

Dirty Road Can Attack: Security of Deep Learning based Automated Lane Centering under Physical-World Attack

no code implementations · 14 Sep 2020 · Takami Sato, Junjie Shen, Ningfei Wang, Yunhan Jack Jia, Xue Lin, Qi Alfred Chen

Automated Lane Centering (ALC) systems are convenient and widely deployed today, but are also highly security- and safety-critical.

Lane Detection

On Adversarial Robustness of 3D Point Cloud Classification under Adaptive Attacks

no code implementations · 24 Nov 2020 · Jiachen Sun, Karl Koenig, Yulong Cao, Qi Alfred Chen, Z. Morley Mao

Since adversarial training (AT) is believed as the most robust defense, we present the first in-depth study showing how AT behaves in point cloud classification and identify that the required symmetric function (pooling operation) is paramount to the 3D model's robustness under AT.

3D Point Cloud Classification · Adversarial Robustness · +3
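
The "symmetric function" here is the permutation-invariant pooling step that PointNet-style classifiers use to aggregate per-point features into a single global feature. The minimal sketch below (layer sizes and class count are arbitrary assumptions, not the models studied in the papers) only shows where that pooling choice, e.g. max versus average, enters the architecture.

```python
import torch
import torch.nn as nn

class TinyPointNet(nn.Module):
    """Minimal PointNet-style classifier; `pool` is the symmetric function
    that turns per-point features into one permutation-invariant global
    feature. Swapping max for average changes only this aggregation step."""

    def __init__(self, num_classes=10, pool="max"):
        super().__init__()
        self.mlp = nn.Sequential(          # shared per-point MLP
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, 256, 1), nn.ReLU(),
        )
        self.pool = pool
        self.head = nn.Sequential(nn.Linear(256, 128), nn.ReLU(),
                                  nn.Linear(128, num_classes))

    def forward(self, pts):                # pts: (batch, 3, num_points)
        feats = self.mlp(pts)              # (batch, 256, num_points)
        if self.pool == "max":
            global_feat = feats.max(dim=2).values
        else:                              # "avg": also permutation-invariant
            global_feat = feats.mean(dim=2)
        return self.head(global_feat)

x = torch.randn(2, 3, 1024)                # 2 clouds of 1024 points
print(TinyPointNet(pool="max")(x).shape)   # torch.Size([2, 10])
```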

End-to-end Uncertainty-based Mitigation of Adversarial Attacks to Automated Lane Centering

no code implementations · 27 Feb 2021 · Ruochen Jiao, Hengyi Liang, Takami Sato, Junjie Shen, Qi Alfred Chen, Qi Zhu

The experiment results demonstrate that our approach can effectively mitigate the impact of adversarial attacks and can achieve 55% to 90% improvement over the original OpenPilot.

Autonomous Driving

On Robustness of Lane Detection Models to Physical-World Adversarial Attacks in Autonomous Driving

no code implementations · 6 Jul 2021 · Takami Sato, Qi Alfred Chen

We demonstrate that the conventional evaluation fails to reflect the robustness in end-to-end autonomous driving scenarios.

Autonomous Driving · Lane Detection

Sensor Adversarial Traits: Analyzing Robustness of 3D Object Detection Sensor Fusion Models

no code implementations · 13 Sep 2021 · Won Park, Nan Li, Qi Alfred Chen, Z. Morley Mao

A critical aspect of autonomous vehicles (AVs) is the object detection stage, which is increasingly being performed with sensor fusion models: multimodal 3D object detection models which utilize both 2D RGB image data and 3D data from a LIDAR sensor as inputs.

3D Object Detection · Autonomous Vehicles · +3

On The Adversarial Robustness of 3D Point Cloud Classification

no code implementations · 28 Sep 2020 · Jiachen Sun, Karl Koenig, Yulong Cao, Qi Alfred Chen, Zhuoqing Mao

Since adversarial training (AT) is believed to be the most effective defense, we present the first in-depth study showing how AT behaves in point cloud classification and identify that the required symmetric function (pooling operation) is paramount to the model's robustness under AT.

3D Point Cloud Classification · Adversarial Robustness · +3

Semi-supervised Semantics-guided Adversarial Training for Trajectory Prediction

no code implementations · 27 May 2022 · Ruochen Jiao, Xiangguo Liu, Takami Sato, Qi Alfred Chen, Qi Zhu

In addition, experiments show that our method can significantly improve the system's robust generalization to unseen patterns of attacks.

Adversarial Robustness · Decision Making · +1

Learning Representation for Anomaly Detection of Vehicle Trajectories

no code implementations · 9 Mar 2023 · Ruochen Jiao, Juyang Bai, Xiangguo Liu, Takami Sato, Xiaowei Yuan, Qi Alfred Chen, Qi Zhu

We conduct extensive experiments to demonstrate that our supervised method based on contrastive learning and unsupervised method based on reconstruction with semantic latent space can significantly improve the performance of anomalous trajectory detection in their corresponding settings over various baseline methods.

Anomaly Detection · Autonomous Driving · +3
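
For the reconstruction-based variant mentioned above, a common formulation scores a trajectory by how poorly an autoencoder trained on normal driving reconstructs it. The sketch below is a generic toy version of that idea; the architecture, horizon, and latent size are assumptions and not the paper's semantics-guided model.

```python
import torch
import torch.nn as nn

class TrajAutoencoder(nn.Module):
    """Toy autoencoder over flattened (x, y) trajectories; the anomaly score
    is the reconstruction error, so trajectories unlike the training
    distribution reconstruct poorly and score high."""

    def __init__(self, horizon=20, latent_dim=8):
        super().__init__()
        d = horizon * 2                      # (x, y) per time step, flattened
        self.enc = nn.Sequential(nn.Linear(d, 64), nn.ReLU(),
                                 nn.Linear(64, latent_dim))
        self.dec = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                 nn.Linear(64, d))

    def forward(self, traj):                 # traj: (batch, horizon, 2)
        flat = traj.flatten(1)
        return self.dec(self.enc(flat)).view_as(traj)

def anomaly_score(model, traj):
    """Mean squared reconstruction error per trajectory."""
    with torch.no_grad():
        recon = model(traj)
    return ((recon - traj) ** 2).mean(dim=(1, 2))

model = TrajAutoencoder()
smooth_path = torch.cumsum(torch.full((1, 20, 2), 0.1), dim=1)
print(anomaly_score(model, smooth_path))     # higher error => more anomalous
```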

LiDAR Spoofing Meets the New-Gen: Capability Improvements, Broken Assumptions, and New Attack Strategies

no code implementations · 19 Mar 2023 · Takami Sato, Yuki Hayakawa, Ryo Suzuki, Yohsuke Shiiki, Kentaro Yoshioka, Qi Alfred Chen

To fill these critical research gaps, we conduct the first large-scale measurement study on LiDAR spoofing attack capabilities on object detectors with 9 popular LiDARs, covering both first- and new-generation LiDARs, and 3 major types of object detectors trained on 5 different datasets.

Autonomous Driving · Object · +2

Does Physical Adversarial Example Really Matter to Autonomous Driving? Towards System-Level Effect of Adversarial Object Evasion Attack

no code implementations · ICCV 2023 · Ningfei Wang, Yunpeng Luo, Takami Sato, Kaidi Xu, Qi Alfred Chen

In this work, we conduct the first measurement study on whether and how effectively the existing designs can lead to system-level effects, especially for the STOP sign-evasion attacks due to their popularity and severity.

Autonomous Driving

Intriguing Properties of Diffusion Models: A Large-Scale Dataset for Evaluating Natural Attack Capability in Text-to-Image Generative Models

no code implementations · 30 Aug 2023 · Takami Sato, Justin Yue, Nanze Chen, Ningfei Wang, Qi Alfred Chen

Motivated by this finding, we construct a large-scale dataset, the Natural Denoising Diffusion Attack (NDDA) dataset, to systematically evaluate the natural attack capability of state-of-the-art text-to-image diffusion models.

Denoising · Image Generation

SlowTrack: Increasing the Latency of Camera-based Perception in Autonomous Driving Using Adversarial Examples

no code implementations · 15 Dec 2023 · Chen Ma, Ningfei Wang, Qi Alfred Chen, Chao Shen

Our evaluation results show that the system-level effects can be significantly improved, i.e., the vehicle crash rate of SlowTrack is around 95% on average, while existing works only achieve around 30%.

Autonomous Driving · object-detection · +1

Invisible Reflections: Leveraging Infrared Laser Reflections to Target Traffic Sign Perception

no code implementations · 7 Jan 2024 · Takami Sato, Sri Hrushikesh Varma Bhupathiraju, Michael Clifford, Takeshi Sugawara, Qi Alfred Chen, Sara Rampazzi

We evaluate the effectiveness of the ILR attack with real-world experiments against two major traffic sign recognition architectures on four IR-sensitive cameras.

Traffic Sign Recognition
