Search Results for author: Wu-jie Zhou

Found 2 papers, 0 papers with code

TEAM: A Taylor Expansion-Based Method for Generating Adversarial Examples

no code implementations · 23 Jan 2020 · Ya-guan Qian, Xi-Ming Zhang, Wassim Swaileh, Li Wei, Bin Wang, Jian-hai Chen, Wu-jie Zhou, Jing-sheng Lei

Although Deep Neural Networks (DNNs) have achieved success in many fields, they are vulnerable to adversarial examples. Adversarial training is one of the most effective methods for improving the robustness of DNNs, and it is generally formulated as a saddle point problem that minimizes risk while maximizing perturbation. Powerful adversarial examples are therefore needed to approximate the inner perturbation-maximization step of this saddle point problem. The method proposed in this paper approximates the output of a DNN in the neighborhood of the input using a Taylor expansion, and then optimizes it with the Lagrange multiplier method to generate adversarial examples.
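For context, a first-order Taylor expansion of the loss around the input is the linearization that underlies simple gradient-sign attacks. The sketch below is not the authors' TEAM implementation; it is a minimal PyTorch illustration, with an assumed model, budget eps, and function name, of maximizing the linearized loss under an L-infinity constraint.

    # Minimal sketch (not the authors' TEAM code): the first-order Taylor
    # approximation L(x + delta) ≈ L(x) + grad_x L(x) · delta implies that,
    # under an L-infinity budget, the maximizing delta is eps * sign(grad).
    # Model, eps, and names are illustrative assumptions.
    import torch
    import torch.nn.functional as F

    def taylor_first_order_attack(model, x, y, eps=0.03):
        """Craft an adversarial example from a linearized (first-order Taylor) loss."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)   # L(x)
        loss.backward()                           # grad_x L(x)
        # Maximize the linearized loss within an L-infinity ball of radius eps.
        perturbation = eps * x_adv.grad.sign()
        return (x_adv + perturbation).clamp(0.0, 1.0).detach()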

Spot Evasion Attacks: Adversarial Examples for License Plate Recognition Systems with Convolutional Neural Networks

no code implementations · 27 Oct 2019 · Ya-guan Qian, Dan-feng Ma, Bin Wang, Jun Pan, Jia-min Wang, Jian-hai Chen, Wu-jie Zhou, Jing-sheng Lei

In this paper, we propose an evasion attack on CNN classifiers in the context of License Plate Recognition (LPR) that adds predetermined perturbations to specific regions of license plate images, simulating naturally formed spots (such as sludge).

License Plate Recognition
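As an illustration of the region-restricted idea, the sketch below is not the paper's attack; it is a minimal PyTorch example, in which the circular spot mask, step size, iteration count, and function names are assumptions, showing how an iterative gradient-sign update can be confined to chosen "spot" regions of the image.

    # Minimal sketch of a region-restricted ("spot-like") perturbation, not the
    # paper's implementation: gradients are masked so only selected plate regions
    # are perturbed. Mask construction and hyperparameters are illustrative.
    import torch
    import torch.nn.functional as F

    def circular_spot_mask(h, w, centers, radius):
        """Binary mask that is 1 inside each circular spot and 0 elsewhere."""
        ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
        mask = torch.zeros(h, w)
        for cy, cx in centers:
            mask[(ys - cy) ** 2 + (xs - cx) ** 2 <= radius ** 2] = 1.0
        return mask

    def spot_attack(model, x, y, mask, eps=0.2, alpha=0.02, steps=20):
        """Iteratively perturb only the masked spot regions to cause misclassification."""
        x_adv = x.clone().detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad, = torch.autograd.grad(loss, x_adv)
            with torch.no_grad():
                x_adv = x_adv + alpha * grad.sign() * mask       # update spots only
                x_adv = x + (x_adv - x).clamp(-eps, eps) * mask  # stay within budget
                x_adv = x_adv.clamp(0.0, 1.0)
        return x_adv.detach()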
