PhysGAN: Generating Physical-World-Resilient Adversarial Examples for Autonomous Driving

CVPR 2020 · Zelun Kong, Junfeng Guo, Ang Li, Cong Liu

Although deep neural networks (DNNs) are pervasively used in vision-based autonomous driving systems, they have been found vulnerable to adversarial attacks, where small-magnitude perturbations added to the inputs at test time cause dramatic changes to the outputs. While most recent attack methods target digital-world adversarial scenarios, it is unclear how they perform in the physical world; more importantly, the perturbations generated by such methods would cover an entire driving scene, including fixed background imagery such as the sky, making them inapplicable to physical-world implementation. We present PhysGAN, which generates physical-world-resilient adversarial examples for misleading autonomous driving systems in a continuous manner. We show the effectiveness and robustness of PhysGAN via extensive digital- and real-world evaluations. Digital experiments show that PhysGAN is effective across various steering models and scenes, misleading the average steering angle by up to 23.06 degrees. The real-world studies further demonstrate that PhysGAN is sufficiently resilient in practice, misleading the average steering angle by up to 19.17 degrees. We compare PhysGAN with a set of state-of-the-art baseline methods, including several of our self-designed ones, and show that PhysGAN outperforms them, further demonstrating the robustness and efficacy of our approach. To the best of our knowledge, PhysGAN is probably the first technique for generating realistic and physical-world-resilient adversarial examples for attacking common autonomous driving scenarios.
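For intuition only, the sketch below illustrates the general idea the abstract describes: optimizing a perturbation that is confined to a single roadside object (e.g., a billboard) so that it continuously shifts a steering model's predicted angle across a sequence of frames. It uses plain gradient ascent rather than the paper's GAN architecture, and `steering_model`, `frames`, and `billboard_mask` are hypothetical placeholders (pixel values assumed in [0, 1]); it is not the authors' implementation.

```python
import torch

# Hypothetical setup (not from the paper's code):
#   steering_model  - pretrained model mapping frames (N, 3, H, W) -> steering angles
#   frames          - a batch of consecutive driving frames, values in [0, 1]
#   billboard_mask  - binary mask (broadcastable to frames) marking the billboard pixels

def attack_billboard(steering_model, frames, billboard_mask, steps=200, lr=0.01):
    """Gradient-based sketch: learn one perturbation, shared across frames and
    confined to the billboard region, that pushes predictions away from clean ones."""
    steering_model.eval()
    with torch.no_grad():
        clean_angles = steering_model(frames)  # reference predictions on clean frames

    # Single perturbation tensor, optimized only where the mask is nonzero.
    delta = torch.zeros_like(frames[:1], requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)

    for _ in range(steps):
        adv_frames = torch.clamp(frames + delta * billboard_mask, 0.0, 1.0)
        adv_angles = steering_model(adv_frames)
        # Maximize the average steering deviation over the whole frame sequence.
        loss = -(adv_angles - clean_angles).abs().mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    return torch.clamp(delta.detach() * billboard_mask, -1.0, 1.0)
```

Restricting the update to the masked region is what makes the perturbation printable as a single physical object, in contrast to whole-scene perturbations; averaging the deviation over consecutive frames approximates the "continuous" misleading effect the paper evaluates.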
