no code implementations • 1 Feb 2020 • Zifei Zhang, Kai Qiao, Lingyun Jiang, Linyuan Wang, Bin Yan
To alleviate the tradeoff between attack success rate and image fidelity, we propose a method named AdvJND, which adds just noticeable difference (JND) coefficients from a visual model into the distortion-function constraint when generating adversarial examples.
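The idea above can be sketched as a JND-weighted perturbation step. This is a minimal illustration, not the paper's implementation: `jnd_fgsm_step`, the JND map, and the FGSM-style update are assumptions standing in for the method's actual attack and visual model.

```python
import numpy as np

def jnd_fgsm_step(x, grad, jnd, eps=0.03):
    """One FGSM-style step whose per-pixel budget is scaled by a
    just-noticeable-difference (JND) map: pixels the human visual
    system is less sensitive to receive larger perturbations.

    x    : input image, values in [0, 1]
    grad : gradient of the loss w.r.t. x (from any model)
    jnd  : per-pixel JND coefficients in [0, 1]
    eps  : global perturbation budget
    """
    delta = eps * jnd * np.sign(grad)   # JND-weighted perturbation
    return np.clip(x + delta, 0.0, 1.0)

# Toy data standing in for a real image and a real model's gradient.
rng = np.random.default_rng(0)
x = rng.random((8, 8))
grad = rng.standard_normal((8, 8))
jnd = rng.random((8, 8))                # hypothetical JND map
x_adv = jnd_fgsm_step(x, grad, jnd)
```

Because the step size is modulated per pixel by `jnd`, the perturbation concentrates where it is least visible, which is the tradeoff the abstract describes.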
no code implementations • 17 Sep 2019 • Wanting Yu, Hongyi Yu, Lingyun Jiang, Mengli Zhang, Kai Qiao
The proposed model, comprising a texture transfer network (TTN) and an auxiliary defense generative adversarial network (GAN), is called Human-perception Auxiliary Defense GAN (HAD-GAN).
no code implementations • 12 Apr 2019 • Lingyun Jiang, Kai Qiao, Ruoxi Qin, Linyuan Wang, Jian Chen, Haibing Bu, Bin Yan
In deep-learning image classification, adversarial examples, inputs with small-magnitude perturbations added, can mislead deep neural networks (DNNs) into incorrect results, which means DNNs are vulnerable to them.
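The vulnerability described above can be shown on a toy model. This is a hedged illustration, not any of the listed papers' methods: the linear classifier, its weights, and the gradient-sign step are all made up for demonstration.

```python
import numpy as np

# Toy linear classifier: predicts sign(w @ x).
w = np.array([1.0, -1.0])
x = np.array([0.6, 0.5])       # w @ x = 0.1, classified positive

# A small perturbation aligned against the decision score
# (the gradient-sign idea behind FGSM) flips the prediction.
eps = 0.1
x_adv = x - eps * np.sign(w)
print(np.sign(w @ x), np.sign(w @ x_adv))  # decision flips from +1 to -1
```

Even though `x_adv` differs from `x` by at most 0.1 per coordinate, the predicted class changes, which is exactly the sensitivity that adversarial examples exploit in DNNs.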