Search Results for author: Wenchang Shi

Found 3 papers, 1 papers with code

Detecting Adversarial Image Examples in Deep Networks with Adaptive Noise Reduction

2 code implementations · 23 May 2017 · Bin Liang, Hongcheng Li, Miaoqiang Su, Xirong Li, Wenchang Shi, Xiao-Feng Wang

Consequently, adversarial examples can be effectively detected by comparing the classification results of a given sample and its denoised version, without any prior knowledge of the attack.

Tasks: Quantization
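The detection idea above can be sketched in a few lines: denoise the input (the paper uses noise-reduction methods such as scalar quantization) and flag the sample as adversarial if the classifier's prediction changes. The quantization step size and the toy classifier below are illustrative assumptions, not the authors' actual models.

```python
import numpy as np

def scalar_quantize(image, step=32):
    """Adaptive noise reduction stand-in: snap each pixel to the
    nearest multiple of `step` (scalar quantization)."""
    return np.round(image / step) * step

def is_adversarial(image, classify, denoise=scalar_quantize):
    """Flag `image` as adversarial if the prediction changes after
    denoising -- no knowledge of the attack itself is required."""
    return classify(image) != classify(denoise(image))

# Hypothetical toy classifier: class 1 if mean intensity > 128.
def toy_classifier(image):
    return int(image.mean() > 128)

# A clearly bright image keeps its label after quantization,
# while a borderline perturbed one flips and is flagged.
clean = np.full((4, 4), 200.0)
perturbed = np.full((4, 4), 130.0)
print(is_adversarial(clean, toy_classifier))      # stable prediction
print(is_adversarial(perturbed, toy_classifier))  # prediction flips
```

In the real pipeline the classifier would be a deep network and the denoiser would be chosen adaptively per input, but the comparison logic is the same.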

Deep Text Classification Can be Fooled

no code implementations · 26 Apr 2017 · Bin Liang, Hongcheng Li, Miaoqiang Su, Pan Bian, Xirong Li, Wenchang Shi

In this paper, we present an effective method for crafting text adversarial samples, revealing an important yet underestimated fact: DNN-based text classifiers are also prone to adversarial sample attacks.

Tasks: General Classification · text-classification +1
