Search Results for author: Yinpeng Dong

Found 41 papers, 21 papers with code

Query-Efficient Black-box Adversarial Attacks Guided by a Transfer-based Prior

1 code implementation 13 Mar 2022 Yinpeng Dong, Shuyu Cheng, Tianyu Pang, Hang Su, Jun Zhu

However, the existing methods inevitably suffer from low attack success rates or poor query efficiency since it is difficult to estimate the gradient in a high-dimensional input space with limited information.
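The bottleneck the abstract points to, estimating the target model's gradient from a limited number of queries, is commonly handled with random gradient-free (finite-difference) estimation; when a white-box surrogate is available, its gradient can serve as a transfer-based prior among the sampled directions. A minimal sketch under these assumptions (the `query_loss` interface and all names are hypothetical, not the paper's code):

```python
import numpy as np

def rgf_gradient_estimate(query_loss, x, num_queries=20, sigma=1e-4, prior=None):
    """Random gradient-free estimate of grad query_loss(x) from finite differences.

    If a transfer-based prior (e.g. a surrogate model's gradient) is given, it is
    used as one of the sampled directions so queries are not spent rediscovering
    information the surrogate already provides. Uses num_queries + 1 queries.
    """
    grad_est = np.zeros_like(x)
    directions = []
    if prior is not None:
        directions.append(prior / (np.linalg.norm(prior) + 1e-12))
    while len(directions) < num_queries:
        u = np.random.randn(*x.shape)
        directions.append(u / (np.linalg.norm(u) + 1e-12))

    base = query_loss(x)                                   # baseline query
    for u in directions:
        d = (query_loss(x + sigma * u) - base) / sigma     # directional slope
        grad_est += d * u
    return grad_est / len(directions)

# Toy usage with a quadratic loss standing in for a remote classifier:
if __name__ == "__main__":
    target = np.linspace(0.0, 1.0, 32)
    query_loss = lambda x: float(np.sum((x - target) ** 2))
    x0 = np.zeros(32)
    g = rgf_gradient_estimate(query_loss, x0, num_queries=50)
    true_g = 2 * (x0 - target)
    print("cosine to true gradient:",
          np.dot(g, true_g) / (np.linalg.norm(g) * np.linalg.norm(true_g)))
```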

Controllable Evaluation and Generation of Physical Adversarial Patch on Face Recognition

no code implementations 9 Mar 2022 Xiao Yang, Yinpeng Dong, Tianyu Pang, Zihao Xiao, Hang Su, Jun Zhu

It is therefore imperative to develop a framework that can enable a comprehensive evaluation of the vulnerability of face recognition in the physical world.

3D Face Modeling, Face Recognition

Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial Robustness

no code implementations 13 Oct 2021 Xiao Yang, Yinpeng Dong, Wenzhao Xiang, Tianyu Pang, Hang Su, Jun Zhu

The vulnerability of deep neural networks to adversarial examples has motivated an increasing number of defense strategies for promoting model robustness.

Adversarial Robustness

GSmooth: Certified Robustness against Semantic Transformations via Generalized Randomized Smoothing

no code implementations 29 Sep 2021 Hao Zhongkai, Chengyang Ying, Yinpeng Dong, Hang Su, Jun Zhu

The vulnerability of deep learning models to adversarial examples and semantic transformations has limited their applications in risk-sensitive areas.

Improving Transferability of Adversarial Patches on Face Recognition with Generative Models

no code implementations CVPR 2021 Zihao Xiao, Xianfeng Gao, Chilin Fu, Yinpeng Dong, Wei Gao, Xiaolu Zhang, Jun Zhou, Jun Zhu

However, deep CNNs are vulnerable to adversarial patches, which are physically realizable and stealthy, raising new security concerns on the real-world applications of these models.

Face Recognition

Accumulative Poisoning Attacks on Real-time Data

1 code implementation NeurIPS 2021 Tianyu Pang, Xiao Yang, Yinpeng Dong, Hang Su, Jun Zhu

Collecting training data from untrusted sources exposes machine learning services to poisoning adversaries, who maliciously manipulate training data to degrade the model accuracy.

Federated Learning, Online Learning

Exploring Memorization in Adversarial Training

1 code implementation ICLR 2022 Yinpeng Dong, Ke Xu, Xiao Yang, Tianyu Pang, Zhijie Deng, Hang Su, Jun Zhu

In this paper, we explore the memorization effect in adversarial training (AT) for promoting a deeper understanding of model capacity, convergence, generalization, and especially robust overfitting of the adversarially trained models.

Two Coupled Rejection Metrics Can Tell Adversarial Examples Apart

1 code implementation 31 May 2021 Tianyu Pang, Huishuai Zhang, Di He, Yinpeng Dong, Hang Su, Wei Chen, Jun Zhu, Tie-Yan Liu

Along with this routine, we find that confidence and a rectified confidence (R-Con) can form two coupled rejection metrics, which could provably distinguish wrongly classified inputs from correctly classified ones.
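The excerpt does not spell out how R-Con rectifies the confidence, so the sketch below only illustrates the general shape of a confidence-based rejection metric: score each input, refuse to predict below a threshold, and check how strongly the rule separates wrongly classified inputs from correctly classified ones. Plain max-softmax confidence stands in for both metrics here; the rectification step is an assumption left out.

```python
import torch
import torch.nn.functional as F

def reject_by_confidence(logits, labels, threshold=0.9):
    """Generic confidence-based rejection: keep a prediction only if the
    top softmax probability exceeds `threshold`.

    Returns accuracy on accepted inputs and the fraction of wrongly
    classified inputs that the rule filters out.
    """
    probs = F.softmax(logits, dim=1)
    conf, preds = probs.max(dim=1)
    accepted = conf >= threshold
    correct = preds.eq(labels)

    acc_on_accepted = correct[accepted].float().mean() if accepted.any() else torch.tensor(0.0)
    wrong = ~correct
    wrong_rejected = (~accepted & wrong).sum().float() / wrong.sum().clamp(min=1)
    return acc_on_accepted.item(), wrong_rejected.item()

# Toy usage with random logits standing in for a classifier's outputs:
logits = torch.randn(128, 10) * 3
labels = torch.randint(0, 10, (128,))
print(reject_by_confidence(logits, labels))
```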

Automated Decision-based Adversarial Attacks

no code implementations 9 May 2021 Qi-An Fu, Yinpeng Dong, Hang Su, Jun Zhu

Deep learning models are vulnerable to adversarial examples, which can fool a target classifier by imposing imperceptible perturbations onto natural examples.

Adversarial Attack, Program Synthesis

Black-box Detection of Backdoor Attacks with Limited Information and Data

no code implementations ICCV 2021 Yinpeng Dong, Xiao Yang, Zhijie Deng, Tianyu Pang, Zihao Xiao, Hang Su, Jun Zhu

Although deep neural networks (DNNs) have made rapid progress in recent years, they are vulnerable in adversarial environments.

Understanding and Exploring the Network with Stochastic Architectures

no code implementations NeurIPS 2020 Zhijie Deng, Yinpeng Dong, Shifeng Zhang, Jun Zhu

In this work, we decouple the training of a network with stochastic architectures (NSA) from NAS and provide the first systematic investigation of it as a stand-alone problem.

Neural Architecture Search
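One way to read "a network with stochastic architectures" is a model whose operations are sampled anew at every forward pass rather than fixed by a search. A minimal sketch of such a block under that assumption (a generic illustration, not the paper's architecture):

```python
import random
import torch
import torch.nn as nn

class StochasticBlock(nn.Module):
    """Applies one randomly chosen candidate operation per forward pass
    during training, and averages all candidates at evaluation time."""

    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.Conv2d(channels, channels, 5, padding=2),
            nn.Identity(),                      # skip connection as a candidate op
        ])

    def forward(self, x):
        if self.training:
            return random.choice(self.ops)(x)   # architecture sampled per step
        return sum(op(x) for op in self.ops) / len(self.ops)

# Toy usage:
block = StochasticBlock(8)
x = torch.randn(2, 8, 16, 16)
block.train(); y_train = block(x)
block.eval();  y_eval = block(x)
print(y_train.shape, y_eval.shape)
```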

BayesAdapter: Being Bayesian, Inexpensively and Reliably, via Bayesian Fine-tuning

1 code implementation 5 Oct 2020 Zhijie Deng, Hao Zhang, Xiao Yang, Yinpeng Dong, Jun Zhu

Despite their theoretical appeal, Bayesian neural networks (BNNs) are left behind in real-world adoption due to persistent concerns about their scalability, accessibility, and reliability.

Variational Inference

Bag of Tricks for Adversarial Training

2 code implementations ICLR 2021 Tianyu Pang, Xiao Yang, Yinpeng Dong, Hang Su, Jun Zhu

Adversarial training (AT) is one of the most effective strategies for promoting model robustness.

Adversarial Robustness

BayesAdapter: Being Bayesian, Inexpensively and Robustly, via Bayesian Fine-tuning

no code implementations 28 Sep 2020 Zhijie Deng, Xiao Yang, Hao Zhang, Yinpeng Dong, Jun Zhu

Despite their theoretical appeal, Bayesian neural networks (BNNs) fall far behind normal NNs in real-world adoption, mainly due to their limited scalability in training and the low fidelity of their uncertainty estimates.

Variational Inference

RobFR: Benchmarking Adversarial Robustness on Face Recognition

2 code implementations 8 Jul 2020 Xiao Yang, Dingcheng Yang, Yinpeng Dong, Hang Su, Wenjian Yu, Jun Zhu

Our large-scale evaluations show that commercial FR API services fail to exhibit acceptable robustness, and we draw several important conclusions for understanding the adversarial robustness of FR models and offer insights for the design of robust FR models.

Adversarial Robustness, Face Recognition

Benchmarking Adversarial Robustness on Image Classification

no code implementations CVPR 2020 Yinpeng Dong, Qi-An Fu, Xiao Yang, Tianyu Pang, Hang Su, Zihao Xiao, Jun Zhu

Deep neural networks are vulnerable to adversarial examples, which has become one of the most important research problems in the development of deep learning.

Adversarial Attack, Adversarial Robustness +3

Towards Face Encryption by Generating Adversarial Identity Masks

1 code implementation ICCV 2021 Xiao Yang, Yinpeng Dong, Tianyu Pang, Hang Su, Jun Zhu, Yuefeng Chen, Hui Xue

As vast amounts of personal data are shared through social media and networks, data privacy and security have drawn increasing attention.

Face Recognition

Boosting Adversarial Training with Hypersphere Embedding

1 code implementation NeurIPS 2020 Tianyu Pang, Xiao Yang, Yinpeng Dong, Kun Xu, Jun Zhu, Hang Su

Adversarial training (AT) is one of the most effective defenses against adversarial attacks for deep learning models.

Representation Learning

Adversarial Distributional Training for Robust Deep Learning

1 code implementation NeurIPS 2020 Yinpeng Dong, Zhijie Deng, Tianyu Pang, Hang Su, Jun Zhu

Adversarial training (AT) is among the most effective techniques to improve model robustness by augmenting training data with adversarial examples.
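Plain AT of the kind the abstract refers to alternates an inner maximization (finding a perturbation that increases the loss) with an outer minimization over the model weights. A minimal PGD-based sketch of that baseline, not the distributional variant the paper proposes, with illustrative hyperparameters and inputs assumed to lie in [0, 1]:

```python
import torch
import torch.nn.functional as F

def pgd_perturb(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Inner maximization: find an L_inf-bounded perturbation that raises the loss."""
    delta = torch.zeros_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return delta.detach()

def adversarial_training_step(model, optimizer, x, y):
    """Outer minimization: update the weights on the adversarially perturbed batch."""
    delta = pgd_perturb(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model((x + delta).clamp(0, 1)), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```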

Benchmarking Adversarial Robustness

no code implementations 26 Dec 2019 Yinpeng Dong, Qi-An Fu, Xiao Yang, Tianyu Pang, Hang Su, Zihao Xiao, Jun Zhu

Deep neural networks are vulnerable to adversarial examples, which has become one of the most important research problems in the development of deep learning.

Adversarial Attack, Adversarial Robustness +1

Improving Black-box Adversarial Attacks with a Transfer-based Prior

2 code implementations NeurIPS 2019 Shuyu Cheng, Yinpeng Dong, Tianyu Pang, Hang Su, Jun Zhu

We consider the black-box adversarial setting, where the adversary has to generate adversarial perturbations without access to the target models to compute gradients.

Rethinking Softmax Cross-Entropy Loss for Adversarial Robustness

2 code implementations ICLR 2020 Tianyu Pang, Kun Xu, Yinpeng Dong, Chao Du, Ning Chen, Jun Zhu

Previous work shows that adversarially robust generalization requires larger sample complexity, and a dataset such as CIFAR-10, which enables good standard accuracy, may not suffice for training robust models.

Adversarial Robustness

Efficient Decision-based Black-box Adversarial Attacks on Face Recognition

no code implementations CVPR 2019 Yinpeng Dong, Hang Su, Baoyuan Wu, Zhifeng Li, Wei Liu, Tong Zhang, Jun Zhu

In this paper, we evaluate the robustness of state-of-the-art face recognition models in the decision-based black-box attack setting, where the attackers have no access to the model parameters and gradients, but can only acquire hard-label predictions by sending queries to the target model.

Face Recognition
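In the decision-based setting only the top-1 label is observable, so attacks typically start from an input that is already classified incorrectly and walk it toward the original image while staying on the adversarial side of the boundary. A boundary-attack-style sketch of that loop (a generic illustration of the setting, not the evolutionary attack proposed in the paper; `predict_label` is a hypothetical query interface):

```python
import numpy as np

def hard_label_attack(predict_label, x_orig, x_adv_start, true_label,
                      steps=1000, step_size=0.05, noise_scale=0.02):
    """Random-walk attack using only hard-label queries.

    `predict_label(x)` returns the model's top-1 class; `x_adv_start` is any
    input already classified differently from `true_label`. Inputs are assumed
    to lie in [0, 1].
    """
    x_adv = x_adv_start.copy()
    for _ in range(steps):
        # Move toward the original image, then add a small random perturbation.
        candidate = x_adv + step_size * (x_orig - x_adv)
        candidate = candidate + noise_scale * np.random.randn(*x_adv.shape)
        candidate = np.clip(candidate, 0.0, 1.0)
        if predict_label(candidate) != true_label:   # accept only if still adversarial
            x_adv = candidate
    return x_adv
```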

Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attacks

1 code implementation CVPR 2019 Yinpeng Dong, Tianyu Pang, Hang Su, Jun Zhu

In this paper, we propose a translation-invariant attack method to generate more transferable adversarial examples against the defense models.

Translation
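The translation-invariant idea can be approximated by smoothing the input gradient with a pre-defined kernel before the sign step, which roughly averages the gradients over translated copies of the image. A hedged one-step sketch (kernel size, kernel shape, and step size are assumptions):

```python
import torch
import torch.nn.functional as F

def gaussian_kernel(size=7, sigma=3.0, channels=3):
    """Depthwise Gaussian kernel used to smooth per-pixel gradients."""
    coords = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    g = torch.exp(-coords ** 2 / (2 * sigma ** 2))
    k2d = torch.outer(g, g)
    k2d = k2d / k2d.sum()
    return k2d.expand(channels, 1, size, size).clone()

def ti_fgsm_step(model, x, y, eps=8/255, kernel=None):
    """One FGSM step whose gradient is convolved with a kernel, approximating
    an average over translated versions of the input."""
    if kernel is None:
        kernel = gaussian_kernel(channels=x.shape[1])
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    grad = F.conv2d(grad, kernel, padding=kernel.shape[-1] // 2, groups=x.shape[1])
    return (x + eps * grad.sign()).clamp(0, 1).detach()
```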

Batch Virtual Adversarial Training for Graph Convolutional Networks

no code implementations 25 Feb 2019 Zhijie Deng, Yinpeng Dong, Jun Zhu

We present batch virtual adversarial training (BVAT), a novel regularization method for graph convolutional networks (GCNs).

General Classification, Node Classification

Towards Interpretable Deep Neural Networks by Leveraging Adversarial Examples

no code implementations 25 Jan 2019 Yinpeng Dong, Fan Bao, Hang Su, Jun Zhu

(3) We propose to improve the consistency of neurons on the adversarial example subset via an adversarial training algorithm with a consistent loss.

Composite Binary Decomposition Networks

no code implementations 16 Nov 2018 You Qiaoben, Zheng Wang, Jianguo Li, Yinpeng Dong, Yu-Gang Jiang, Jun Zhu

Binary neural networks offer great resource and computing efficiency, but suffer from long training procedures and non-negligible accuracy drops compared to their full-precision counterparts.

General Classification, Image Classification +2

Adversarial Attacks and Defences Competition

1 code implementation 31 Mar 2018 Alexey Kurakin, Ian Goodfellow, Samy Bengio, Yinpeng Dong, Fangzhou Liao, Ming Liang, Tianyu Pang, Jun Zhu, Xiaolin Hu, Cihang Xie, Jian-Yu Wang, Zhishuai Zhang, Zhou Ren, Alan Yuille, Sangxia Huang, Yao Zhao, Yuzhe Zhao, Zhonglin Han, Junjiajia Long, Yerkebulan Berdibekov, Takuya Akiba, Seiya Tokui, Motoki Abe

To accelerate research on adversarial examples and robustness of machine learning classifiers, Google Brain organized a NIPS 2017 competition that encouraged researchers to develop new methods to generate adversarial examples as well as to develop new ways to defend against them.

Boosting Adversarial Attacks with Momentum

5 code implementations CVPR 2018 Yinpeng Dong, Fangzhou Liao, Tianyu Pang, Hang Su, Jun Zhu, Xiaolin Hu, Jianguo Li

To further improve the success rates for black-box attacks, we apply momentum iterative algorithms to an ensemble of models, and show that the adversarially trained models with a strong defense ability are also vulnerable to our black-box attacks.

Adversarial Attack
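The momentum iterative method accumulates an L1-normalized gradient across iterations and takes sign steps along the accumulated direction; an ensemble can be attacked by averaging the models' logits. A minimal sketch with illustrative hyperparameters and inputs assumed in [0, 1]:

```python
import torch
import torch.nn.functional as F

def momentum_iterative_attack(models, x, y, eps=16/255, steps=10, mu=1.0):
    """MI-FGSM-style attack: accumulate L1-normalized gradients with decay `mu`
    and step along the sign of the accumulated momentum. `models` is a list of
    classifiers whose logits are averaged to attack the ensemble."""
    alpha = eps / steps
    x_adv = x.clone().detach()
    g = torch.zeros_like(x)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        logits = sum(m(x_adv) for m in models) / len(models)
        loss = F.cross_entropy(logits, y)
        grad, = torch.autograd.grad(loss, x_adv)
        # Accumulate the L1-normalized gradient into the momentum buffer.
        g = mu * g + grad / grad.abs().sum(dim=(1, 2, 3), keepdim=True).clamp(min=1e-12)
        x_adv = x_adv.detach() + alpha * g.sign()
        # Project back into the eps-ball around x and the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()
```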

Towards Interpretable Deep Neural Networks by Leveraging Adversarial Examples

no code implementations 18 Aug 2017 Yinpeng Dong, Hang Su, Jun Zhu, Fan Bao

We find that: (1) the neurons in DNNs do not truly detect semantic objects or parts, but respond to them only as recurrent discriminative patches; and (2) deep visual representations are not robust distributed codes of visual concepts, because the representations of adversarial images are largely inconsistent with those of real images despite their similar visual appearance. Both findings differ from previous conclusions.

Learning Accurate Low-Bit Deep Neural Networks with Stochastic Quantization

1 code implementation 3 Aug 2017 Yinpeng Dong, Renkun Ni, Jianguo Li, Yurong Chen, Jun Zhu, Hang Su

This procedure can greatly compensate for the quantization error and thus yield better accuracy for low-bit DNNs.

Quantization
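The stochastic quantization idea can be illustrated by quantizing only a randomly chosen fraction of the weights at each step while keeping the rest full precision, so the quantization error is introduced gradually; the uniform selection rule and scaled-sign binarization below are simplifications, not the paper's exact algorithm:

```python
import torch

def stochastic_quantize(weight, ratio=0.5):
    """Binarize a random fraction `ratio` of the rows of `weight` using scaled
    sign quantization; the remaining rows stay full precision. Over training,
    the ratio would be annealed toward 1 so the network ends up fully quantized."""
    out = weight.clone()
    rows = weight.shape[0]
    chosen = torch.randperm(rows)[: int(ratio * rows)]
    w = weight[chosen]
    scale = w.abs().mean(dim=1, keepdim=True)          # per-row scaling factor
    out[chosen] = scale * w.sign()
    return out

# Toy usage on a 2-D weight matrix:
w = torch.randn(6, 4)
print(stochastic_quantize(w, ratio=0.5))
```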

Towards Robust Detection of Adversarial Examples

1 code implementation NeurIPS 2018 Tianyu Pang, Chao Du, Yinpeng Dong, Jun Zhu

Although the recent progress is substantial, deep learning methods can be vulnerable to the maliciously generated adversarial examples.

Improving Interpretability of Deep Neural Networks with Semantic Information

no code implementations CVPR 2017 Yinpeng Dong, Hang Su, Jun Zhu, Bo Zhang

Interpretability of deep neural networks (DNNs) is essential since it enables users to understand the overall strengths and weaknesses of the models, conveys an understanding of how the models will behave in the future, and how to diagnose and correct potential problems.

Action Recognition, Video Captioning
