Search Results for author: Won Park

Found 6 papers, 2 papers with code

Turning a Curse into a Blessing: Enabling In-Distribution-Data-Free Backdoor Removal via Stabilized Model Inversion

no code implementations • 14 Jun 2022 • Si Chen, Yi Zeng, Jiachen T. Wang, Won Park, Xun Chen, Lingjuan Lyu, Zhuoqing Mao, Ruoxi Jia

Our work is the first to provide a thorough understanding of leveraging model inversion for effective backdoor removal by addressing key questions about reconstructed samples' properties, perceptual similarity, and the potential presence of backdoor triggers.
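
As a rough illustration of the model-inversion step (not the authors' actual pipeline), the PyTorch sketch below optimizes a random input to maximize a target class's score, with a total-variation penalty standing in for the paper's stabilization; the model interface, penalty weight, and step counts are assumptions.

```python
import torch
import torch.nn.functional as F

def invert_class(model, target_class, shape=(1, 3, 32, 32),
                 steps=200, lr=0.1, tv_weight=1e-4):
    """Hypothetical sketch: synthesize an input that the (possibly
    backdoored) model classifies as `target_class`. The total-variation
    penalty is a stand-in for the paper's stabilization machinery."""
    model.eval()
    x = torch.randn(shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        logits = model(x)
        # Total variation: discourages high-frequency noise in the sample.
        tv = (x[..., 1:, :] - x[..., :-1, :]).abs().mean() + \
             (x[..., :, 1:] - x[..., :, :-1]).abs().mean()
        loss = F.cross_entropy(logits, torch.tensor([target_class])) + tv_weight * tv
        loss.backward()
        opt.step()
    return x.detach()
```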

Adversarial Unlearning of Backdoors via Implicit Hypergradient

3 code implementations • ICLR 2022 • Yi Zeng, Si Chen, Won Park, Z. Morley Mao, Ming Jin, Ruoxi Jia

In particular, its performance is more robust to variations in triggers, attack settings, poison ratio, and clean data size.
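
The method behind this entry is a minimax formulation: minimize the model's loss on a small clean set under the worst-case trigger-like perturbation. The sketch below replaces the paper's implicit hypergradient with plain alternating first-order updates, so it is only a simplified approximation; all hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def adversarial_unlearn(model, clean_loader, outer_steps=100,
                        inner_steps=5, eps=0.1, lr=1e-4):
    """Simplified sketch of minimax backdoor unlearning; the paper
    differentiates through the inner maximization via an implicit
    hypergradient rather than alternating updates as done here."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    data_iter = iter(clean_loader)
    for _ in range(outer_steps):
        try:
            x, y = next(data_iter)
        except StopIteration:
            data_iter = iter(clean_loader)
            x, y = next(data_iter)
        # Inner max: a universal perturbation that most hurts clean accuracy,
        # which tends to rediscover the backdoor trigger direction.
        delta = torch.zeros_like(x[:1], requires_grad=True)
        for _ in range(inner_steps):
            loss = F.cross_entropy(model(x + delta), y)
            grad, = torch.autograd.grad(loss, delta)
            delta = (delta + eps / inner_steps * grad.sign()).clamp(-eps, eps)
            delta = delta.detach().requires_grad_(True)
        # Outer min: update the model so it resists that perturbation.
        opt.zero_grad()
        F.cross_entropy(model(x + delta.detach()), y).backward()
        opt.step()
    return model
```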

Sensor Adversarial Traits: Analyzing Robustness of 3D Object Detection Sensor Fusion Models

no code implementations • 13 Sep 2021 • Won Park, Nan Li, Qi Alfred Chen, Z. Morley Mao

A critical aspect of autonomous vehicles (AVs) is the object detection stage, which is increasingly performed with sensor fusion models: multimodal 3D object detection models that use both 2D RGB image data and 3D LiDAR point cloud data as inputs.

3D Object Detection • Autonomous Vehicles • +3
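
To make the fusion setup concrete, here is a generic late-fusion skeleton (not any specific model evaluated in the paper, and all layer sizes are illustrative): an image CNN branch and a PointNet-style LiDAR branch whose pooled features are concatenated for a toy box-regression head.

```python
import torch
import torch.nn as nn

class LateFusionDetector(nn.Module):
    """Generic late-fusion skeleton: one branch encodes the RGB image,
    another encodes LiDAR points, and the fused feature drives a
    (here, toy) 3D box regressor."""
    def __init__(self, num_box_params=7):  # x, y, z, w, l, h, yaw
        super().__init__()
        self.image_branch = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Per-point MLP followed by max pooling (PointNet-style).
        self.lidar_branch = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 32),
        )
        self.head = nn.Linear(32 + 32, num_box_params)

    def forward(self, image, points):
        img_feat = self.image_branch(image)                     # (B, 32)
        pt_feat = self.lidar_branch(points).max(dim=1).values   # (B, 32)
        return self.head(torch.cat([img_feat, pt_feat], dim=1))

# Usage: LateFusionDetector()(torch.randn(2, 3, 64, 64), torch.randn(2, 1024, 3))
```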

Rethinking the Backdoor Attacks' Triggers: A Frequency Perspective

1 code implementation • ICCV 2021 • Yi Zeng, Won Park, Z. Morley Mao, Ruoxi Jia

Acknowledging previous attacks' weaknesses, we propose a practical way to create smooth backdoor triggers without high-frequency artifacts and study their detectability.
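
A small numpy sketch of the frequency-domain intuition (cutoff values and sizes are illustrative assumptions, not the paper's construction): a conventional noisy patch trigger concentrates energy at high frequencies, while low-pass filtering the same pattern yields a smooth trigger with essentially none.

```python
import numpy as np

def high_freq_energy(patch, cutoff=0.25):
    """Fraction of spectral energy above a radial frequency cutoff."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(patch))) ** 2
    h, w = patch.shape
    yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    radius = np.sqrt((yy / h) ** 2 + (xx / w) ** 2)
    return spec[radius > cutoff].sum() / spec.sum()

rng = np.random.default_rng(0)
noisy_trigger = rng.uniform(-1, 1, (32, 32))  # patch-like, high-frequency
# Low-pass the same pattern by zeroing high frequencies, then inverting the FFT.
f = np.fft.fftshift(np.fft.fft2(noisy_trigger))
yy, xx = np.mgrid[-16:16, -16:16]
f[np.sqrt((yy / 32) ** 2 + (xx / 32) ** 2) > 0.1] = 0
smooth_trigger = np.real(np.fft.ifft2(np.fft.ifftshift(f)))

print(high_freq_energy(noisy_trigger))   # large: easy to flag in frequency domain
print(high_freq_energy(smooth_trigger))  # ~0: no high-frequency artifacts
```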

Minority Reports Defense: Defending Against Adversarial Patches

no code implementations • 28 Apr 2020 • Michael McCoyd, Won Park, Steven Chen, Neil Shah, Ryan Roggenkemper, Minjune Hwang, Jason Xinyu Liu, David Wagner

We propose a defense against patch attacks based on partially occluding the image around each candidate patch location, so that a few occlusions each completely hide the patch.

Adversarial Attack • General Classification • +1
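
A minimal sketch of the occlusion-voting idea, assuming a single-image batch, a classifier-style model, and illustrative occluder size, stride, and gray value: slide an occluder over candidate patch locations and flag the input if a minority of occluded views disagrees with the consensus prediction (hence "minority reports"), since one of those occlusions likely hid an adversarial patch.

```python
import numpy as np
import torch

def minority_reports_flag(model, image, occ_size=16, stride=8):
    """Sketch: classify the image with a gray occluder slid over a grid
    of candidate patch locations; disagreement among the occluded views
    suggests a patch was hidden by some occlusion, so flag the input."""
    model.eval()
    _, _, h, w = image.shape
    votes = []
    with torch.no_grad():
        for top in range(0, h - occ_size + 1, stride):
            for left in range(0, w - occ_size + 1, stride):
                occluded = image.clone()
                occluded[:, :, top:top + occ_size, left:left + occ_size] = 0.5
                votes.append(model(occluded).argmax(dim=1).item())
    consensus = int(np.bincount(votes).argmax())
    suspicious = any(v != consensus for v in votes)
    return suspicious, consensus
```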

Adversarial Sensor Attack on LiDAR-based Perception in Autonomous Driving

no code implementations • 16 Jul 2019 • Yulong Cao, Chaowei Xiao, Benjamin Cyr, Yimeng Zhou, Won Park, Sara Rampazzi, Qi Alfred Chen, Kevin Fu, Z. Morley Mao

In contrast to prior work that concentrates on camera-based perception, in this work we perform the first security study of LiDAR-based perception in AV settings, which is highly important but unexplored.

Autonomous Driving • BIG-bench Machine Learning • +2
