Search Results for author: Chengjin Sun

Found 6 papers, 1 paper with code

Type I Attack for Generative Models

no code implementations • 4 Mar 2020 • Chengjin Sun, Sizhe Chen, Jia Cai, Xiaolin Huang

To implement the Type I attack, we push the example away from the original one, increasing the distance in input space while keeping the output similar, exploiting the property of deep neural networks that very different inputs may correspond to similar features.

Vocal Bursts Type Prediction
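The snippet above can be sketched numerically. The following is a minimal, hedged illustration of the Type I idea on a toy linear "feature extractor" (a hypothetical stand-in for a deep network, not the paper's actual model): gradient steps drive the input far from the original while a penalty keeps the extracted features close.

```python
import numpy as np

# Toy linear "feature extractor" standing in for a DNN (assumption for
# illustration only): a wide map has a null space, so very different
# inputs can share nearly identical features.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 16))              # 16-d input -> 4-d feature map
f = lambda x: W @ x

x = rng.normal(size=16)                   # original input
x_adv = x + 0.01 * rng.normal(size=16)    # start from a tiny perturbation
lam, lr = 1.0, 0.01                       # feature-matching weight, step size

for _ in range(300):
    # Gradient of  -||x_adv - x||^2 + lam * ||f(x_adv) - f(x)||^2:
    # the first term pushes the input away, the second holds features.
    grad = -2.0 * (x_adv - x) + 2.0 * lam * (W.T @ (f(x_adv) - f(x)))
    x_adv -= lr * grad

input_dist = np.linalg.norm(x_adv - x)        # grows large
feat_dist = np.linalg.norm(f(x_adv) - f(x))   # stays small
```

The toy map makes the mechanism explicit: movement along the map's null space changes the input arbitrarily without moving the features.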

Double Backpropagation for Training Autoencoders against Adversarial Attack

no code implementations • 4 Mar 2020 • Chengjin Sun, Sizhe Chen, Xiaolin Huang

We restrict the gradient from the reconstructed image to the original one so that the autoencoder is not sensitive to the trivial perturbations produced by adversarial attacks.

Adversarial Attack • Robust classification
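The objective described above can be sketched on a toy linear autoencoder (a hypothetical stand-in, not the paper's network): the usual reconstruction loss plus a penalty on that loss's gradient with respect to the input, which is what makes the model insensitive to small input perturbations.

```python
import numpy as np

# Toy linear autoencoder (assumption for illustration only).
rng = np.random.default_rng(1)
E = rng.normal(size=(4, 16)) * 0.2        # encoder weights
D = rng.normal(size=(16, 4)) * 0.2        # decoder weights

def recon_loss(x):
    # Squared error between reconstruction D(E(x)) and the input.
    return float(np.sum((D @ (E @ x) - x) ** 2))

def input_grad(x, eps=1e-5):
    # Finite-difference gradient of the reconstruction loss w.r.t. x.
    # A real implementation would obtain this (and differentiate through
    # it during training) with autodiff, i.e. double backpropagation.
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (recon_loss(x + e) - recon_loss(x - e)) / (2 * eps)
    return g

x = rng.normal(size=16)
lam = 0.1                                  # weight of the gradient penalty
total = recon_loss(x) + lam * np.sum(input_grad(x) ** 2)
```

Minimizing `total` over the weights penalizes input sensitivity directly, so tiny adversarial nudges change the reconstruction (and its loss) as little as possible.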

HRFA: High-Resolution Feature-based Attack

no code implementations • 21 Jan 2020 • Zhixing Ye, Sizhe Chen, Peidong Zhang, Chengjin Sun, Xiaolin Huang

Adversarial attacks have long been developed to reveal the vulnerability of Deep Neural Networks (DNNs) by adding imperceptible perturbations to the input.

Denoising • Face Verification +1
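As background for the snippet above, the classic pixel-space recipe it alludes to (an FGSM-style signed-gradient step, not the paper's feature-based HRFA method) can be sketched on a toy logistic classifier; the model and all parameters here are assumptions for illustration only.

```python
import numpy as np

# Toy logistic classifier (assumption, not HRFA's target model).
rng = np.random.default_rng(2)
w = rng.normal(size=16)                    # classifier weights

def loss(x, y):
    # Logistic loss for a label y in {-1, +1}.
    return float(np.log1p(np.exp(-y * (w @ x))))

x = rng.normal(size=16)                    # clean input
y = 1.0                                    # its true label
eps = 0.1                                  # L-inf perturbation budget

# Closed-form input gradient of the logistic loss: -y * sigmoid(-y*w@x) * w.
g = -y * w / (1.0 + np.exp(y * (w @ x)))

# One signed-gradient step: each pixel moves by at most eps, so the
# perturbation stays within the "imperceptible" budget while the loss rises.
x_adv = x + eps * np.sign(g)
```

The `sign` keeps the perturbation bounded in the L-infinity norm, which is the usual formalization of "imperceptible" in this line of work.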

Universal Adversarial Attack on Attention and the Resulting Dataset DAmageNet

no code implementations • 16 Jan 2020 • Sizhe Chen, Zhengbao He, Chengjin Sun, Jie Yang, Xiaolin Huang

AoA enjoys a significant increase in transferability when the traditional cross-entropy loss is replaced with the attention loss.

Adversarial Attack
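The loss swap described above can be illustrated in a hedged, schematic way (this is not the paper's exact AoA loss): rather than maximizing cross-entropy, maximize how much a "attention" map moves. Here a toy quadratic model stands in for a real network, and its input-gradient saliency stands in for a real attention map; all names and parameters are assumptions.

```python
import numpy as np

# Toy quadratic model: logit(x) = x.T @ A @ x, whose input gradient is
# S @ x with S = A + A.T. Its magnitude serves as a stand-in "attention".
rng = np.random.default_rng(3)
A = rng.normal(size=(16, 16)) * 0.1
S = A + A.T

def attention(x):
    return np.abs(S @ x)                   # toy saliency map

x = rng.normal(size=16)                    # clean input
x_adv = x + 0.01 * rng.normal(size=16)     # tiny starting perturbation
d0 = np.linalg.norm(attention(x_adv) - attention(x))

lr = 0.01
for _ in range(50):
    diff = attention(x_adv) - attention(x)
    # Gradient ascent on ||attention(x_adv) - attention(x)||^2,
    # i.e. the attack objective targets attention, not class scores.
    g = 2.0 * S.T @ (np.sign(S @ x_adv) * diff)
    x_adv += lr * g

d1 = np.linalg.norm(attention(x_adv) - attention(x))  # larger than d0
```

The design point the abstract makes is only about the objective: since attention patterns are broadly shared across architectures, an attack that disrupts attention transfers better than one tied to a single model's class scores.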

DAmageNet: A Universal Adversarial Dataset

1 code implementation • 16 Dec 2019 • Sizhe Chen, Xiaolin Huang, Zhengbao He, Chengjin Sun

Adversarial samples are similar to the clean ones, yet they can fool the attacked DNN into producing incorrect predictions with high confidence.

Adversarial Attack

Adversarial Attack Type I: Cheat Classifiers by Significant Changes

no code implementations • 3 Sep 2018 • Sanli Tang, Xiaolin Huang, Mingjian Chen, Chengjin Sun, Jie Yang

Despite the great success of deep neural networks, adversarial attacks can cheat some well-trained classifiers with small perturbations.

Adversarial Attack • Vocal Bursts Type Prediction
