Robust Assessment of Real-World Adversarial Examples

24 Nov 2019  ·  Brett Jefferson, Carlos Ortiz Marrero ·

We explore rigorous, systematic, and controlled experimental evaluation of adversarial examples in the real world and propose a testing regimen for evaluating real-world adversarial objects. We show that small scene and environmental perturbations produce large differences in adversarial performance. Adversarial results are currently reported largely as frequency counts over a dynamic collection of scenes. Our work underscores the need for either more complete reporting or a score that incorporates scene changes and the baseline performance of the models and environments tested by adversarial developers. We put forth a score that attempts to address the above issues and demonstrate it in a straightforward exemplar application to multiple generated adversarial examples. We contribute the following: 1. a testbed for adversarial assessment, 2. a score for adversarial examples, and 3. a collection of additional evaluations on testbed data.
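
The abstract does not give the scoring formula itself; as a minimal illustrative sketch only, the snippet below shows one plausible way a score could combine per-scene adversarial success with per-scene baseline performance. The function name, weighting scheme, and inputs are assumptions for illustration, not the authors' published method.

```python
import numpy as np

def scene_adjusted_adversarial_score(adv_success, baseline_acc):
    """Hypothetical illustration (not the paper's formula): aggregate
    per-scene adversarial success rates, weighting each scene by the
    model's clean-input accuracy there, so scenes where the baseline
    model is already unreliable contribute less to the score."""
    adv_success = np.asarray(adv_success, dtype=float)    # per-scene adversarial success rates in [0, 1]
    baseline_acc = np.asarray(baseline_acc, dtype=float)  # per-scene clean-input accuracy in [0, 1]
    weights = baseline_acc / baseline_acc.sum()           # normalize baseline accuracy into scene weights
    mean_score = float(np.sum(weights * adv_success))     # scene-weighted adversarial success
    spread = float(adv_success.max() - adv_success.min()) # sensitivity to scene/environment changes
    return mean_score, spread

# Example: three scenes (e.g., different lighting or viewing angles)
score, spread = scene_adjusted_adversarial_score(
    adv_success=[0.9, 0.4, 0.7],
    baseline_acc=[0.95, 0.80, 0.90],
)
print(f"scene-weighted adversarial success: {score:.3f}, scene spread: {spread:.2f}")
```

Reporting both a weighted average and a spread, rather than a single frequency count, is one way to expose how strongly an adversarial object's effectiveness depends on the scene, which is the gap in current reporting that the paper highlights.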
