Search Results for author: Z. Morley Mao

Found 11 papers, 4 papers with code

PointDP: Diffusion-driven Purification against Adversarial Attacks on 3D Point Cloud Recognition

no code implementations21 Aug 2022 Jiachen Sun, Weili Nie, Zhiding Yu, Z. Morley Mao, Chaowei Xiao

3D point clouds are becoming a critical data representation in many real-world applications such as autonomous driving, robotics, and medical imaging.

Autonomous Driving

Certified Adversarial Defenses Meet Out-of-Distribution Corruptions: Benchmarking Robustness and Simple Baselines

no code implementations1 Dec 2021 Jiachen Sun, Akshay Mehra, Bhavya Kailkhura, Pin-Yu Chen, Dan Hendrycks, Jihun Hamm, Z. Morley Mao

To alleviate this issue, we propose a novel data augmentation scheme, FourierMix, that produces augmentations to improve the spectral coverage of the training data.

Adversarial Robustness Data Augmentation
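The abstract describes FourierMix only as an augmentation scheme that improves the spectral coverage of the training data. The paper's actual algorithm is not given here; the following is a minimal, hypothetical sketch of the general spectrum-space idea, mixing the Fourier amplitude spectra of two images while keeping one image's phase (function name and parameters are illustrative, not from the paper):

```python
import numpy as np

def fourier_amplitude_mix(x1, x2, alpha=0.5):
    """Blend the amplitude spectra of two images while keeping x1's phase.

    A generic spectrum-space augmentation sketch, NOT the authors'
    FourierMix implementation. `alpha` controls how much of x2's
    amplitude spectrum is mixed in.
    """
    f1, f2 = np.fft.fft2(x1), np.fft.fft2(x2)
    amp = (1 - alpha) * np.abs(f1) + alpha * np.abs(f2)  # mixed amplitudes
    phase = np.angle(f1)                                  # keep x1's phase
    mixed = amp * np.exp(1j * phase)
    return np.real(np.fft.ifft2(mixed))
```

Perturbing amplitude while preserving phase keeps the spatial layout of `x1` largely intact, which is why spectrum-space mixing is a natural way to broaden frequency coverage without destroying semantic content.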

Adversarial Unlearning of Backdoors via Implicit Hypergradient

2 code implementations ICLR 2022 Yi Zeng, Si Chen, Won Park, Z. Morley Mao, Ming Jin, Ruoxi Jia

In particular, its performance is more robust to variations in triggers, attack settings, poison ratios, and clean data sizes.

Sensor Adversarial Traits: Analyzing Robustness of 3D Object Detection Sensor Fusion Models

no code implementations13 Sep 2021 Won Park, Nan Li, Qi Alfred Chen, Z. Morley Mao

A critical aspect of autonomous vehicles (AVs) is the object detection stage, which is increasingly performed with sensor fusion models: multimodal 3D object detection models that utilize both 2D RGB image data and 3D data from a LiDAR sensor as inputs.

3D Object Detection Autonomous Vehicles +1

Rethinking the Backdoor Attacks' Triggers: A Frequency Perspective

1 code implementation ICCV 2021 Yi Zeng, Won Park, Z. Morley Mao, Ruoxi Jia

Acknowledging previous attacks' weaknesses, we propose a practical way to create smooth backdoor triggers without high-frequency artifacts and study their detectability.
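The abstract's premise is that conventional patch-style backdoor triggers leave high-frequency artifacts in the image spectrum. A simple, hypothetical diagnostic in that spirit (not the paper's detector) measures what fraction of an image's spectral energy lies outside a central low-frequency window; the cutoff value is an illustrative assumption:

```python
import numpy as np

def high_freq_energy_ratio(img, cutoff=0.25):
    """Fraction of spectral energy outside a central low-frequency square.

    Illustrative sketch of the frequency-perspective idea: smooth images
    concentrate energy near DC, while sharp patch triggers push energy
    toward high frequencies.
    """
    spec = np.fft.fftshift(np.fft.fft2(img))  # DC moved to the center
    h, w = spec.shape
    energy = np.abs(spec) ** 2
    ch, cw = int(h * cutoff), int(w * cutoff)
    low = energy[h // 2 - ch : h // 2 + ch, w // 2 - cw : w // 2 + cw].sum()
    return 1.0 - low / energy.sum()
```

A flat image scores near 0 (all energy at DC), while a pixel-level checkerboard scores near 1 (all energy at the Nyquist frequency), matching the intuition that high-frequency artifacts are detectable in the spectrum.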

On Adversarial Robustness of 3D Point Cloud Classification under Adaptive Attacks

no code implementations24 Nov 2020 Jiachen Sun, Karl Koenig, Yulong Cao, Qi Alfred Chen, Z. Morley Mao

Since adversarial training (AT) is widely believed to be the most robust defense, we present the first in-depth study of how AT behaves in point cloud classification and identify that the choice of the required symmetric function (pooling operation) is paramount to the 3D model's robustness under AT.

3D Point Cloud Classification Adversarial Robustness +3
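The symmetric function the abstract refers to is the permutation-invariant pooling that PointNet-style architectures apply over per-point features. A minimal numpy sketch (an illustrative single linear+ReLU layer, not the authors' model) shows why max, sum, and mean all qualify: the global feature is independent of point ordering.

```python
import numpy as np

def global_feature(points, weights, pool="max"):
    """Shared per-point linear+ReLU layer followed by symmetric pooling.

    A PointNet-style sketch, not the paper's code. Because max, sum,
    and mean are permutation-invariant, the output does not depend on
    point ordering; the study finds that which of these is used
    strongly affects robustness under adversarial training.
    """
    feats = np.maximum(points @ weights, 0.0)  # (N, d_out) per-point features
    if pool == "max":
        return feats.max(axis=0)
    if pool == "sum":
        return feats.sum(axis=0)
    return feats.mean(axis=0)
```

Shuffling the input rows leaves the result unchanged for every pooling choice, which is exactly the invariance that makes these functions admissible on unordered point sets.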

Towards Robust LiDAR-based Perception in Autonomous Driving: General Black-box Adversarial Sensor Attack and Countermeasures

no code implementations30 Jun 2020 Jiachen Sun, Yulong Cao, Qi Alfred Chen, Z. Morley Mao

In this work, we perform the first study of the general vulnerability of current LiDAR-based perception architectures and discover that ignored occlusion patterns in LiDAR point clouds leave self-driving cars vulnerable to spoofing attacks.

Autonomous Driving Self-Driving Cars
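The occlusion intuition from the abstract is that a physical LiDAR beam reflects off the first surface it hits, so genuine returns within one narrow azimuth bin should cluster in depth. The following is a hypothetical sanity check in that spirit (function name, binning scheme, and threshold are all illustrative assumptions, not the paper's countermeasure):

```python
import numpy as np

def suspicious_azimuth_bins(points, n_bins=360, depth_gap=2.0):
    """Flag azimuth bins whose returns span an implausible depth range.

    Illustrative sketch only: if one narrow azimuth bin contains a
    return far behind a much closer return, the farther point should
    have been occluded, hinting at possible spoofing.
    `points` is an (N, 2) array of (x, y) ground-plane coordinates.
    """
    az = np.arctan2(points[:, 1], points[:, 0])      # angle in [-pi, pi]
    rng_ = np.hypot(points[:, 0], points[:, 1])      # planar range
    bins = ((az + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    flagged = []
    for b in np.unique(bins):
        r = rng_[bins == b]
        if r.size > 1 and r.max() - r.min() > depth_gap:
            flagged.append(int(b))
    return flagged
```

A real defense would need per-beam elevation handling, legitimate multi-return geometry (e.g. object edges), and calibrated thresholds; this sketch only illustrates the occlusion-consistency idea.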

Adversarial Sensor Attack on LiDAR-based Perception in Autonomous Driving

no code implementations16 Jul 2019 Yulong Cao, Chaowei Xiao, Benjamin Cyr, Yimeng Zhou, Won Park, Sara Rampazzi, Qi Alfred Chen, Kevin Fu, Z. Morley Mao

In contrast to prior work that concentrates on camera-based perception, we perform the first security study of LiDAR-based perception in AV settings, an important yet unexplored problem.

Autonomous Driving BIG-bench Machine Learning +2
