Search Results for author: Dou Goodman

Found 6 papers, 3 papers with code

FastWordBug: A Fast Method To Generate Adversarial Text Against NLP Applications

no code implementations · 31 Jan 2020 · Dou Goodman, Lv Zhonghou, Wang Minghua

In this paper, we present a novel algorithm, FastWordBug, to efficiently generate small text perturbations in a black-box setting that force a sentiment analysis or text classification model to make an incorrect prediction.

Adversarial Text · General Classification +3
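No code accompanies this paper, so the sketch below only illustrates the general black-box recipe the abstract describes: perturb words one at a time and stop once the victim's prediction flips. `query_model` and the character-swap `perturb_word` are hypothetical stand-ins, not the paper's actual scoring function or perturbation set.

```python
import random

def query_model(text: str) -> int:
    """Black-box victim: returns a predicted class label.
    Hypothetical stand-in for a real sentiment/classification API."""
    raise NotImplementedError

def perturb_word(word: str) -> str:
    # Illustrative perturbation: swap two adjacent characters.
    if len(word) < 2:
        return word
    i = random.randrange(len(word) - 1)
    return word[:i] + word[i + 1] + word[i] + word[i + 2:]

def black_box_attack(text: str, max_changes: int = 5) -> str:
    """Greedily perturb one word at a time until the prediction flips."""
    original = query_model(text)
    words = text.split()
    changed = 0
    for i in range(len(words)):
        if changed >= max_changes:
            break
        new_word = perturb_word(words[i])
        if new_word != words[i]:
            words[i] = new_word
            changed += 1
            if query_model(" ".join(words)) != original:
                break  # misclassification achieved
    return " ".join(words)
```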

Advbox: a toolbox to generate adversarial examples that fool neural networks

2 code implementations · 13 Jan 2020 · Dou Goodman, Hao Xin, Wang Yang, Wu Yuesheng, Xiong Junfeng, Zhang Huan

In recent years, neural networks have been extensively deployed for computer vision tasks, particularly visual classification problems, where new algorithms are reported to achieve or even surpass human performance.

BIG-bench Machine Learning · Face Recognition +1
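Advbox ships its own attack APIs for several frameworks; rather than guess at those, here is a minimal PyTorch sketch of FGSM, the textbook gradient-sign attack that toolboxes of this kind typically bundle.

```python
import torch
import torch.nn.functional as F

def fgsm(model: torch.nn.Module, x: torch.Tensor, y: torch.Tensor,
         eps: float = 0.03) -> torch.Tensor:
    """Fast Gradient Sign Method: one gradient step that maximizes the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Perturb each pixel by eps in the direction that increases the loss.
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0, 1).detach()  # keep a valid image range
```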

Transferability of Adversarial Examples to Attack Cloud-based Image Classifier Service

1 code implementation · 8 Jan 2020 · Dou Goodman

Fortunately, generating adversarial examples usually requires white-box access to the victim model, and real-world cloud-based image classification services are more complex than a white-box classifier: the architecture and parameters of the DL models on cloud platforms cannot be obtained by the attacker.

Classification · General Classification +1
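The attack the abstract hints at is a transfer attack: craft the adversarial example white-box on a locally held substitute model, then submit it to the black-box cloud service. A minimal sketch, reusing the `fgsm` helper from the Advbox entry above; `cloud_classify` is a hypothetical stand-in for a real cloud API call.

```python
import torch

def cloud_classify(x_adv: torch.Tensor) -> int:
    """Hypothetical stand-in for a cloud image-classification API call."""
    raise NotImplementedError

def transfer_attack(substitute: torch.nn.Module, x: torch.Tensor,
                    y: torch.Tensor, eps: float = 0.03):
    """White-box attack on a local substitute, tested against a remote service."""
    x_adv = fgsm(substitute, x, y, eps)    # white-box step on the substitute
    remote_pred = cloud_classify(x_adv)    # black-box query to the cloud
    return x_adv, remote_pred != int(y)    # True means the attack transferred
```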

Improving Adversarial Robustness via Attention and Adversarial Logit Pairing

no code implementations · 23 Aug 2019 · Dou Goodman, Xingjian Li, Ji Liu, Dejing Dou, Tao Wei

Finally, we conduct extensive experiments on a wide range of datasets, and the results show that our AT+ALP achieves state-of-the-art defense performance.

Adversarial Robustness
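The attention component of AT+ALP is specific to this paper, but adversarial logit pairing itself has a standard form (Kannan et al., 2018): train on adversarial examples while penalizing the distance between clean and adversarial logits. A minimal PyTorch sketch, assuming an external `attack` helper such as PGD:

```python
import torch
import torch.nn.functional as F

def at_alp_loss(model: torch.nn.Module, x: torch.Tensor, y: torch.Tensor,
                attack, pairing_weight: float = 0.5) -> torch.Tensor:
    """Adversarial-training loss plus a logit-pairing penalty that pulls
    the logits of clean and adversarial examples together."""
    x_adv = attack(model, x, y)                      # assumed helper, e.g. PGD
    logits_clean = model(x)
    logits_adv = model(x_adv)
    adv_ce = F.cross_entropy(logits_adv, y)          # adversarial training term
    pairing = F.mse_loss(logits_clean, logits_adv)   # logit pairing term
    return adv_ce + pairing_weight * pairing
```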

Cloud-based Image Classification Service Is Not Robust To Simple Transformations: A Forgotten Battlefield

no code implementations · 19 Jun 2019 · Dou Goodman, Tao Wei

Many recent works have demonstrated that deep learning models are vulnerable to adversarial examples. Fortunately, generating adversarial examples usually requires white-box access to the victim model, and the attacker can only access the APIs opened by cloud platforms.

Classification · General Classification +1
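No gradients are needed for the attacks this paper studies; simple image operations suffice. A minimal sketch of that evaluation loop, with `cloud_classify` again a hypothetical API stand-in and the specific transformations chosen only for illustration:

```python
import numpy as np
from PIL import Image

def cloud_classify(img: Image.Image) -> int:
    """Hypothetical stand-in for a cloud image-classification API call."""
    raise NotImplementedError

def simple_transformations(img: Image.Image):
    """Yield cheaply transformed copies of an image (no gradients needed)."""
    yield img.rotate(15)                                   # small rotation
    yield img.transpose(Image.Transpose.FLIP_LEFT_RIGHT)   # horizontal flip
    noisy = np.asarray(img, dtype=np.float32)
    noisy += np.random.normal(0, 10, noisy.shape)          # Gaussian noise
    yield Image.fromarray(noisy.clip(0, 255).astype(np.uint8))

def evades(img: Image.Image, true_label: int) -> bool:
    """Return True if any simple transformation changes the cloud prediction."""
    return any(cloud_classify(t) != true_label
               for t in simple_transformations(img))
```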
