no code implementations • 23 Jan 2020 • Ya-guan Qian, Xi-Ming Zhang, Wassim Swaileh, Li Wei, Bin Wang, Jian-hai Chen, Wu-jie Zhou, Jing-sheng Lei
Although Deep Neural Networks (DNNs) have achieved success in many fields, they are vulnerable to adversarial examples. Adversarial training is one of the most effective methods for improving the robustness of DNNs, and it is generally formulated as a saddle-point problem that minimizes risk while maximizing perturbation. Strong adversarial examples therefore provide effective solutions to the inner perturbation-maximization step of this saddle-point problem. The method proposed in this paper approximates the output of a DNN in the neighborhood of the input using a Taylor expansion, and then optimizes the approximation with the Lagrange multiplier method to generate adversarial examples.
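The abstract does not give the exact optimization, but the general recipe — linearize the loss around the input with a first-order Taylor expansion, then maximize it under a norm constraint via a Lagrange multiplier — admits a closed form. A minimal sketch on a toy logistic-regression "model" (the model, weights, and $\epsilon$ below are illustrative assumptions, not from the paper):

```python
import numpy as np

# Toy differentiable model: logistic regression on a single input.
w = np.array([0.5, -1.2, 0.8])  # illustrative weights
b = 0.1

def loss(x, y):
    """Cross-entropy loss of the toy model at input x with label y."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def loss_grad(x, y):
    """Gradient of the loss with respect to the input x."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    return (p - y) * w

def taylor_lagrange_perturbation(x, y, eps):
    """First-order Taylor: loss(x + d) ~ loss(x) + g . d.
    Maximizing g . d subject to ||d||_2 <= eps with a Lagrange
    multiplier gives d* = eps * g / ||g||_2."""
    g = loss_grad(x, y)
    return eps * g / (np.linalg.norm(g) + 1e-12)

x = np.array([1.0, 0.2, -0.5])
delta = taylor_lagrange_perturbation(x, y=1, eps=0.1)
```

The perturbation direction is the normalized input gradient, so within the $\epsilon$-ball the linearized loss is maximized exactly; higher-order Taylor terms (as the paper's method may use) would refine this first-order solution.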
no code implementations • 27 Oct 2019 • Ya-guan Qian, Dan-feng Ma, Bin Wang, Jun Pan, Jia-min Wang, Jian-hai Chen, Wu-jie Zhou, Jing-sheng Lei
In this paper, we propose an evasion attack on CNN classifiers in the context of License Plate Recognition (LPR), which adds predetermined perturbations to specific regions of license plate images, simulating naturally formed spots such as sludge.
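The abstract only says the perturbations are predetermined and region-restricted; one plausible reading is overlaying a dark, spot-shaped mask on a chosen area of the plate image. A minimal sketch (the `add_spot` helper, its parameters, and the uniform stand-in image are illustrative assumptions, not the paper's construction):

```python
import numpy as np

def add_spot(image, center, radius, intensity=0.6):
    """Overlay a predetermined dark circular 'spot' (simulating, e.g.,
    sludge) onto a specific region of a grayscale image in [0, 1]."""
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Boolean mask selecting the circular spot region.
    mask = (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2
    out = image.copy()
    out[mask] *= 1.0 - intensity  # darken only the spot region
    return out

plate = np.full((40, 120), 0.9)  # stand-in for a plate image
adv = add_spot(plate, center=(20, 30), radius=5)
```

Because the perturbation is confined to a small region and mimics real-world dirt, the modified plate can remain inconspicuous to human observers while changing the classifier's prediction.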
1 code implementation • 10 Jan 2019 • Jian-hai Chen, Deshi Ye, Shouling Ji, Qinming He, Yang Xiang, Zhenguang Liu
Next, we prove that our mechanism is an FPTAS, i.e., it can be approximated within a factor of $1 + \epsilon$ for any given $\epsilon > 0$, while its running time is polynomial in $n$ and $1/\epsilon$, where $n$ is the number of tenants in the datacenter.
Computer Science and Game Theory
no code implementations • 4 Jan 2019 • Yuwei Li, Shouling Ji, Chenyang Lv, Yu-An Chen, Jian-hai Chen, Qinchen Gu, Chunming Wu
Given a binary program, V-Fuzz's vulnerability prediction model gives a prior estimation of which parts of the software are more likely to be vulnerable.
Cryptography and Security
no code implementations • 7 Jul 2018 • Chenyang Lyu, Shouling Ji, Yuwei Li, Junfeng Zhou, Jian-hai Chen, Jing Chen
In total, our system discovers more than twice as many unique crashes and 5,040 more unique paths than the best existing seed selection strategy across the 12 evaluated applications.
Cryptography and Security