Manifold Distance Judge, an Adversarial Samples Defense Strategy Based on Service Orchestration

29 Sep 2021 · Mengxin Zhang, Xiaofeng Qiu

Deep neural networks (DNNs) play an increasingly significant role in the modern world. However, they are vulnerable to adversarial examples generated by adding specially crafted perturbations. Most defenses against adversarial examples focus on refining the DNN models themselves, which often degrades performance on benign samples and increases computational cost. In this paper, we propose a manifold distance detection method that distinguishes legitimate samples from adversarial samples by measuring their distances on the data manifold. The method neither modifies the protected models nor requires knowledge of how the adversarial samples were generated. Building on the effectiveness of manifold distance detection, we present an orchestrated defense strategy, named Manifold Distance Judge (MDJ), which selects the image processing method that most effectively widens the manifold distance between legitimate and adversarial samples, thereby improving the performance of the subsequent manifold distance detection. In tests on the ImageNet dataset, MDJ is effective against most adversarial samples under white-box, gray-box, and black-box attack scenarios. We show empirically that the orchestration strategy MDJ achieves a significantly higher recall rate than Feature Squeezing, and it attains high detection rates against the CW and DI-FGSM attacks.
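
The abstract does not spell out how the manifold or the distance is computed, so the sketch below is only one plausible interpretation, not the paper's method: the manifold is approximated by a PCA subspace fitted on benign feature vectors, and the "manifold distance" is a sample's reconstruction error with respect to that subspace; samples whose distance exceeds a threshold calibrated on benign data are flagged as adversarial. All class and variable names (`ManifoldDistanceDetector`, etc.) are hypothetical.

```python
# Illustrative sketch only: the paper's exact manifold model, distance metric,
# and orchestration policy are not specified in the abstract. Here the manifold
# is approximated by a PCA subspace fitted on benign features, and the
# "manifold distance" is the reconstruction error in that subspace.
import numpy as np


class ManifoldDistanceDetector:
    def __init__(self, n_components: int = 50):
        self.n_components = n_components
        self.mean_ = None
        self.components_ = None
        self.threshold_ = None

    def fit(self, benign_features: np.ndarray, quantile: float = 0.95):
        """Fit a PCA subspace to benign features and calibrate a threshold."""
        self.mean_ = benign_features.mean(axis=0)
        centered = benign_features - self.mean_
        # Principal directions via SVD; rows of vt are the components.
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        self.components_ = vt[: self.n_components]
        # Threshold: a high quantile of benign manifold distances.
        self.threshold_ = np.quantile(self.manifold_distance(benign_features), quantile)
        return self

    def manifold_distance(self, features: np.ndarray) -> np.ndarray:
        """Distance from each sample to its projection onto the PCA subspace."""
        centered = features - self.mean_
        projected = centered @ self.components_.T @ self.components_
        return np.linalg.norm(centered - projected, axis=1)

    def is_adversarial(self, features: np.ndarray) -> np.ndarray:
        """Flag samples whose manifold distance exceeds the benign threshold."""
        return self.manifold_distance(features) > self.threshold_


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy data: benign features lie near a low-dimensional subspace,
    # "adversarial" features are pushed off it by a perturbation.
    basis = rng.normal(size=(20, 512))
    benign = rng.normal(size=(1000, 20)) @ basis + 0.01 * rng.normal(size=(1000, 512))
    adversarial = benign[:100] + 0.5 * rng.normal(size=(100, 512))

    detector = ManifoldDistanceDetector(n_components=20).fit(benign)
    print("benign flagged:      ", detector.is_adversarial(benign).mean())
    print("adversarial flagged: ", detector.is_adversarial(adversarial).mean())
```

In MDJ's orchestration setting, one would additionally apply candidate image processing operations (for example denoising or compression) to the input and keep the one that most widens this distance gap before running the detector; that selection step is omitted from the sketch above.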
