TEAM: An Taylor Expansion-Based Method for Generating Adversarial Examples

Although Deep Neural Networks (DNNs) have achieved successful applications in many fields, they are vulnerable to adversarial examples. Adversarial training is one of the most effective methods for improving the robustness of DNNs, and it is generally formulated as a saddle point problem that minimizes the risk while maximizing the perturbation. Powerful adversarial examples are therefore needed to faithfully realize the inner perturbation-maximization step of this saddle point problem. The method proposed in this paper approximates the output of a DNN in the neighborhood of an input using a Taylor expansion, and then optimizes the resulting approximation with the Lagrange multiplier method to generate adversarial examples. When used for adversarial training, these examples effectively regularize the DNN and mitigate its defects.
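The sketch below illustrates the general idea described in the abstract, not the paper's exact TEAM algorithm: the loss is approximated around the input by a first-order Taylor expansion, and the perturbation that maximizes this linear approximation under an L2-norm budget is obtained in closed form via the Lagrange multiplier method. The function name `taylor_l2_adversarial` and the budget parameter `eps` are illustrative assumptions.

```python
# A minimal sketch, assuming a first-order Taylor expansion of the loss and an
# L2 perturbation budget (not the paper's exact formulation).
import torch
import torch.nn.functional as F

def taylor_l2_adversarial(model, x, y, eps=0.5):
    """Generate adversarial examples from a first-order Taylor approximation.

    Maximizing g^T d subject to ||d||_2 <= eps, via the Lagrangian
    g^T d - lam * (||d||_2^2 - eps^2), gives the closed form
    d* = eps * g / ||g||_2, where g is the input gradient of the loss.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]               # g = dL/dx
    flat = grad.flatten(1)                                # per-example flattening
    norm = flat.norm(dim=1).clamp_min(1e-12)              # avoid division by zero
    delta = eps * grad / norm.view(-1, *([1] * (grad.dim() - 1)))
    return (x + delta).detach()                           # adversarial inputs
```

In adversarial training, such examples would replace (or augment) the clean batch in each training step, serving as an approximate solution to the inner maximization of the saddle point problem.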
