Search Results for author: Jian-hai Chen

Found 5 papers, 1 paper with code

TEAM: A Taylor Expansion-Based Method for Generating Adversarial Examples

no code implementations23 Jan 2020 Ya-guan Qian, Xi-Ming Zhang, Wassim Swaileh, Li Wei, Bin Wang, Jian-hai Chen, Wu-jie Zhou, Jing-sheng Lei

Although Deep Neural Networks (DNNs) have achieved successful applications in many fields, they are vulnerable to adversarial examples. Adversarial training is one of the most effective methods for improving the robustness of DNNs, and it is generally formulated as a saddle point problem that minimizes risk while maximizing perturbation. Powerful adversarial examples are therefore needed to realize the perturbation-maximization side of this saddle point problem. The method proposed in this paper approximates the output of a DNN in the neighborhood of an input using a Taylor expansion, and then optimizes this approximation with the Lagrange multiplier method to generate adversarial examples.
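To illustrate the idea of approximating a network locally and maximizing loss under a perturbation budget, here is a minimal first-order sketch on a toy logistic model. A first-order Taylor expansion of the loss, $L(x+\delta) \approx L(x) + g^\top\delta$ with $g = \nabla_x L(x)$, is maximized over $\|\delta\|_\infty \le \epsilon$ by $\delta = \epsilon\,\mathrm{sign}(g)$ (the FGSM-style solution). TEAM itself uses a higher-order expansion optimized via Lagrange multipliers; the model, weights, and budget below are hypothetical.

```python
import numpy as np

# Toy "network": logistic regression standing in for a DNN.
rng = np.random.default_rng(0)
w = rng.normal(size=5)   # model weights (hypothetical)
x = rng.normal(size=5)   # clean input
y = 1.0                  # true label

def loss(x, y, w):
    """Logistic (cross-entropy) loss of the toy model at input x."""
    p = 1.0 / (1.0 + np.exp(-w @ x))
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def loss_grad(x, y, w):
    """Gradient of the logistic loss with respect to the input x."""
    p = 1.0 / (1.0 + np.exp(-w @ x))
    return (p - y) * w

eps = 0.1
g = loss_grad(x, y, w)
delta = eps * np.sign(g)   # maximizer of the first-order approximation
x_adv = x + delta

assert loss(x_adv, y, w) > loss(x, y, w)   # the perturbation raises the loss
```

Higher-order expansions tighten the local approximation of the network at the cost of a harder inner optimization, which is where the Lagrange multiplier machinery comes in.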

Spot Evasion Attacks: Adversarial Examples for License Plate Recognition Systems with Convolutional Neural Networks

no code implementations27 Oct 2019 Ya-guan Qian, Dan-feng Ma, Bin Wang, Jun Pan, Jia-min Wang, Jian-hai Chen, Wu-jie Zhou, Jing-sheng Lei

In this paper, we propose an evasion attack on CNN classifiers in the context of License Plate Recognition (LPR), which adds predetermined perturbations to specific regions of license plate images, simulating naturally formed spots such as sludge.
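The core mechanic, confining the perturbation to a spot-shaped region so the result looks like natural grime, can be sketched as a masked additive perturbation. The mask shape, image size, and noise distribution below are illustrative, not the authors' implementation.

```python
import numpy as np

def spot_mask(h, w, center, radius):
    """Boolean mask that is True inside a circular 'spot' region."""
    yy, xx = np.mgrid[0:h, 0:w]
    return (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2

def apply_spot_perturbation(image, perturbation, mask):
    """Add the perturbation only where the mask is True; clip to [0, 1]."""
    adv = image + np.where(mask, perturbation, 0.0)
    return np.clip(adv, 0.0, 1.0)

image = np.full((24, 94), 0.8)   # toy grayscale plate image (hypothetical size)
mask = spot_mask(24, 94, center=(12, 30), radius=5)
noise = np.random.default_rng(1).uniform(-0.5, 0.5, size=image.shape)
adv = apply_spot_perturbation(image, noise, mask)

# Pixels outside the spot are untouched.
assert np.allclose(adv[~mask], image[~mask])
```

In an actual attack the noise inside the mask would be optimized against the recognizer rather than sampled at random; the mask is what keeps the perturbation physically plausible.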

License Plate Recognition

A Truthful FPTAS Mechanism for Emergency Demand Response in Colocation Data Centers

1 code implementation10 Jan 2019 Jian-hai Chen, Deshi Ye, Shouling Ji, Qinming He, Yang Xiang, Zhenguang Liu

Next, we prove that our mechanism is an FPTAS, i.e., its solution is within a factor of $1 + \epsilon$ of the optimum for any given $\epsilon > 0$, while the running time of our mechanism is polynomial in $n$ and $1/\epsilon$, where $n$ is the number of tenants in the datacenter.
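The $1+\epsilon$ guarantee of an FPTAS typically comes from scaling values before running an exact dynamic program, as in the classic knapsack FPTAS. The sketch below illustrates that generic technique, not the paper's winner-determination mechanism; the instance data are made up.

```python
def fptas_knapsack(values, weights, capacity, eps):
    """Classic knapsack FPTAS: scale values, then solve exactly by DP."""
    n = len(values)
    vmax = max(values)
    K = eps * vmax / n                      # scaling factor: lose at most K per item
    scaled = [int(v / K) for v in values]
    max_v = sum(scaled)
    INF = float("inf")
    # dp[v] = minimum weight needed to reach scaled value v
    dp = [0.0] + [INF] * max_v
    for sv, wt in zip(scaled, weights):
        for v in range(max_v, sv - 1, -1):
            if dp[v - sv] + wt < dp[v]:
                dp[v] = dp[v - sv] + wt
    best = max(v for v in range(max_v + 1) if dp[v] <= capacity)
    return best * K   # at least (1 - eps) * OPT; runtime polynomial in n and 1/eps

values, weights, capacity = [60, 100, 120], [10, 20, 30], 50
approx = fptas_knapsack(values, weights, capacity, eps=0.1)
```

The trade-off is direct: a smaller $\epsilon$ means a finer value grid, a larger DP table, and a longer (but still polynomial) running time.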

Computer Science and Game Theory

V-Fuzz: Vulnerability-Oriented Evolutionary Fuzzing

no code implementations4 Jan 2019 Yuwei Li, Shouling Ji, Chenyang Lv, Yu-An Chen, Jian-hai Chen, Qinchen Gu, Chunming Wu

Given a binary program, V-Fuzz's vulnerability prediction model gives a prior estimation of which parts of the software are more likely to be vulnerable.

Cryptography and Security

SmartSeed: Smart Seed Generation for Efficient Fuzzing

no code implementations7 Jul 2018 Chenyang Lyu, Shouling Ji, Yuwei Li, Junfeng Zhou, Jian-hai Chen, Jing Chen

In total, our system discovers more than twice as many unique crashes and 5,040 more unique paths than the existing best seed selection strategy across the 12 evaluated applications.

Cryptography and Security
