Search Results for author: Tianyu Pang

Found 29 papers, 22 papers with code

$O(N^2)$ Universal Antisymmetry in Fermionic Neural Networks

no code implementations 26 May 2022 Tianyu Pang, Shuicheng Yan, Min Lin

In this paper, we substitute the Slater determinant with a pairwise antisymmetry construction, which is easy to implement and can reduce the computational cost to $O(N^2)$.

Variational Monte Carlo
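
The pairwise construction can be illustrated with a toy PyTorch sketch: a product of pairwise odd terms is antisymmetric under any particle exchange and needs only O(N^2) pairwise evaluations rather than an O(N^3) determinant. This is a hand-rolled example; `odd_fn` and `g` are illustrative stand-ins, not the paper's parameterization.

```python
import torch

def pairwise_antisymmetric(xs, odd_fn):
    """Product of pairwise odd terms: swapping any two rows of xs flips
    the overall sign. xs: (N, d) tensor; odd_fn(a, b) == -odd_fn(b, a).
    """
    n = xs.shape[0]
    out = torch.ones(())
    for i in range(n):
        for j in range(i + 1, n):
            out = out * odd_fn(xs[i], xs[j])
    return out

# Illustrative odd function built by antisymmetrizing an arbitrary map g.
g = lambda a, b: torch.tanh(a - 2.0 * b).sum()
odd = lambda a, b: g(a, b) - g(b, a)

xs = torch.randn(4, 3)
sw = xs.clone()
sw[[0, 1]] = sw[[1, 0]]                            # swap two particles
print(pairwise_antisymmetric(xs, odd).item(),
      pairwise_antisymmetric(sw, odd).item())      # opposite signs
```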

Query-Efficient Black-box Adversarial Attacks Guided by a Transfer-based Prior

1 code implementation 13 Mar 2022 Yinpeng Dong, Shuyu Cheng, Tianyu Pang, Hang Su, Jun Zhu

However, the existing methods inevitably suffer from low attack success rates or poor query efficiency since it is difficult to estimate the gradient in a high-dimensional input space with limited information.
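
The gradient-estimation idea can be sketched as random gradient-free (RGF) finite differences biased toward a surrogate ("transfer-based") gradient. This is a generic sketch, not the paper's exact estimator; `q`, `sigma`, and the mixing weight `lam` are illustrative.

```python
import torch

def rgf_estimate(loss_fn, x, prior=None, q=10, sigma=1e-3, lam=0.5):
    """Estimate grad loss_fn(x) from q queries via finite differences
    along random unit directions, optionally biased toward a surrogate
    (transfer-based) gradient `prior`.
    """
    est = torch.zeros_like(x)
    base = loss_fn(x)
    for _ in range(q):
        u = torch.randn_like(x)
        if prior is not None:
            u = lam * prior / prior.norm() + (1.0 - lam) * u / u.norm()
        u = u / u.norm()
        est = est + (loss_fn(x + sigma * u) - base) / sigma * u
    return est / q
```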

Controllable Evaluation and Generation of Physical Adversarial Patch on Face Recognition

no code implementations 9 Mar 2022 Xiao Yang, Yinpeng Dong, Tianyu Pang, Zihao Xiao, Hang Su, Jun Zhu

It is therefore imperative to develop a framework that can enable a comprehensive evaluation of the vulnerability of face recognition in the physical world.

3D Face Modeling Face Recognition

Robustness and Accuracy Could Be Reconcilable by (Proper) Definition

1 code implementation 21 Feb 2022 Tianyu Pang, Min Lin, Xiao Yang, Jun Zhu, Shuicheng Yan

The trade-off between robustness and accuracy has been widely studied in the adversarial literature.

Inductive Bias

Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial Robustness

no code implementations 13 Oct 2021 Xiao Yang, Yinpeng Dong, Wenzhao Xiang, Tianyu Pang, Hang Su, Jun Zhu

The vulnerability of deep neural networks to adversarial examples has motivated an increasing number of defense strategies for promoting model robustness.

Adversarial Robustness

Accumulative Poisoning Attacks on Real-time Data

1 code implementation NeurIPS 2021 Tianyu Pang, Xiao Yang, Yinpeng Dong, Hang Su, Jun Zhu

Collecting training data from untrusted sources exposes machine learning services to poisoning adversaries, who maliciously manipulate training data to degrade the model accuracy.

Federated Learning Online Learning

Exploring Memorization in Adversarial Training

1 code implementation ICLR 2022 Yinpeng Dong, Ke Xu, Xiao Yang, Tianyu Pang, Zhijie Deng, Hang Su, Jun Zhu

In this paper, we explore the memorization effect in adversarial training (AT) for promoting a deeper understanding of model capacity, convergence, generalization, and especially robust overfitting of the adversarially trained models.

Two Coupled Rejection Metrics Can Tell Adversarial Examples Apart

1 code implementation CVPR 2022 Tianyu Pang, Huishuai Zhang, Di He, Yinpeng Dong, Hang Su, Wei Chen, Jun Zhu, Tie-Yan Liu

Along with this routine, we find that confidence and a rectified confidence (R-Con) can form two coupled rejection metrics, which could provably distinguish wrongly classified inputs from correctly classified ones.
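
The two coupled metrics can be sketched generically: plain max-softmax confidence plus a "rectified" confidence from an auxiliary head. Here `rect_logit` and the sigmoid coupling are hypothetical stand-ins for illustration, not the paper's exact R-Con.

```python
import torch
import torch.nn.functional as F

def coupled_rejection(logits, rect_logit, t_con=0.9, t_rcon=0.5):
    """Accept an input only if both confidence and a rectified confidence
    clear their thresholds; wrongly classified inputs tend to fail at
    least one of the coupled tests.
    """
    con = F.softmax(logits, dim=-1).max(dim=-1).values
    r_con = torch.sigmoid(rect_logit) * con   # hypothetical rectification
    return (con >= t_con) & (r_con >= t_rcon)
```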

Black-box Detection of Backdoor Attacks with Limited Information and Data

no code implementations ICCV 2021 Yinpeng Dong, Xiao Yang, Zhijie Deng, Tianyu Pang, Zihao Xiao, Hang Su, Jun Zhu

Although deep neural networks (DNNs) have made rapid progress in recent years, they are vulnerable in adversarial environments.

Bag of Tricks for Adversarial Training

2 code implementations ICLR 2021 Tianyu Pang, Xiao Yang, Yinpeng Dong, Hang Su, Jun Zhu

Adversarial training (AT) is one of the most effective strategies for promoting model robustness.

Adversarial Robustness

Efficient Learning of Generative Models via Finite-Difference Score Matching

1 code implementation NeurIPS 2020 Tianyu Pang, Kun Xu, Chongxuan Li, Yang Song, Stefano Ermon, Jun Zhu

Several machine learning applications involve optimizing higher-order derivatives (e.g., gradients of gradients) during training, which can be expensive with respect to memory and computation even with automatic differentiation.
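
The core trick can be sketched as replacing a second-order autograd term with a central finite difference along a random direction, costing two extra forward passes. This is a generic sketch of the finite-difference idea, not the paper's exact objective.

```python
import torch

def fd_hvp_term(score_fn, x, eps=1e-3):
    """Approximate v^T J_s(x) v, where J_s is the Jacobian of the score
    network, without computing gradients-of-gradients: a central
    difference along a random unit direction v."""
    v = torch.randn_like(x)
    v = v / v.norm()
    jvp = (score_fn(x + eps * v) - score_fn(x - eps * v)) / (2.0 * eps)
    return (v * jvp).sum()
```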

Towards Face Encryption by Generating Adversarial Identity Masks

1 code implementation ICCV 2021 Xiao Yang, Yinpeng Dong, Tianyu Pang, Hang Su, Jun Zhu, Yuefeng Chen, Hui Xue

As billions of pieces of personal data are shared through social media and networks, data privacy and security have drawn increasing attention.

Face Recognition

Boosting Adversarial Training with Hypersphere Embedding

1 code implementation NeurIPS 2020 Tianyu Pang, Xiao Yang, Yinpeng Dong, Kun Xu, Jun Zhu, Hang Su

Adversarial training (AT) is one of the most effective defenses against adversarial attacks for deep learning models.

Representation Learning
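
Hypersphere embedding is commonly realized by normalizing both features and class weights, so logits become scaled cosine similarities. A minimal module along those lines follows; the scale value is illustrative, and the paper additionally incorporates an angular margin.

```python
import torch
import torch.nn.functional as F

class CosineClassifier(torch.nn.Module):
    """Logits as scaled cosine similarity between L2-normalized features
    and L2-normalized class weights, placing both on a hypersphere."""
    def __init__(self, dim, num_classes, scale=15.0):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.randn(num_classes, dim))
        self.scale = scale

    def forward(self, feat):
        feat = F.normalize(feat, dim=-1)
        w = F.normalize(self.weight, dim=-1)
        return self.scale * feat @ w.t()
```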

Adversarial Distributional Training for Robust Deep Learning

1 code implementation NeurIPS 2020 Yinpeng Dong, Zhijie Deng, Tianyu Pang, Hang Su, Jun Zhu

Adversarial training (AT) is among the most effective techniques to improve model robustness by augmenting training data with adversarial examples.
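
The augmentation step that vanilla AT performs can be sketched with a standard L-inf PGD inner loop. This is generic PGD-AT, not ADT's distributional objective; eps, alpha, and steps are typical CIFAR-style values.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Craft L-inf-bounded adversarial examples by iterated signed
    gradient ascent on the cross-entropy loss."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)
    return x_adv.detach()

# AT training step: replace clean inputs with their adversarial versions.
# loss = F.cross_entropy(model(pgd_attack(model, x, y)), y)
```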

Benchmarking Adversarial Robustness

no code implementations 26 Dec 2019 Yinpeng Dong, Qi-An Fu, Xiao Yang, Tianyu Pang, Hang Su, Zihao Xiao, Jun Zhu

Deep neural networks are vulnerable to adversarial examples, which has become one of the most important research problems in the development of deep learning.

Adversarial Attack Adversarial Robustness +1

Mixup Inference: Better Exploiting Mixup to Defend Adversarial Attacks

1 code implementation ICLR 2020 Tianyu Pang, Kun Xu, Jun Zhu

Our experiments on CIFAR-10 and CIFAR-100 demonstrate that MI can further improve the adversarial robustness of models trained by mixup and its variants.

Adversarial Robustness
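
The inference-time idea can be sketched as ensembling predictions over mixups of the test input with random clean samples. Here `pool`, `lam`, and `n` are illustrative choices, not the paper's exact MI variants.

```python
import torch

def mixup_inference(model, x, pool, lam=0.6, n=8):
    """Average softmax predictions over n random mixups of the input x
    with clean samples drawn from `pool`, shrinking the relative weight
    of any adversarial perturbation."""
    preds = []
    for _ in range(n):
        idx = torch.randint(len(pool), (x.shape[0],))
        preds.append(model(lam * x + (1.0 - lam) * pool[idx]).softmax(dim=-1))
    return torch.stack(preds).mean(dim=0)
```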

Improving Black-box Adversarial Attacks with a Transfer-based Prior

2 code implementations NeurIPS 2019 Shuyu Cheng, Yinpeng Dong, Tianyu Pang, Hang Su, Jun Zhu

We consider the black-box adversarial setting, where the adversary has to generate adversarial perturbations without access to the target models to compute gradients.

Rethinking Softmax Cross-Entropy Loss for Adversarial Robustness

2 code implementations ICLR 2020 Tianyu Pang, Kun Xu, Yinpeng Dong, Chao Du, Ning Chen, Jun Zhu

Previous work shows that adversarially robust generalization requires larger sample complexity, and a dataset such as CIFAR-10, which enables good standard accuracy, may not suffice to train robust models.

Adversarial Robustness

Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attacks

1 code implementation CVPR 2019 Yinpeng Dong, Tianyu Pang, Hang Su, Jun Zhu

In this paper, we propose a translation-invariant attack method to generate more transferable adversarial examples against the defense models.
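
Translation invariance is typically obtained by convolving the gradient with a smoothing kernel before the attack's sign step. Below is a depthwise-Gaussian sketch; the kernel size and sigma are illustrative.

```python
import torch
import torch.nn.functional as F

def ti_smooth_grad(grad, kernel_size=7, sigma=3.0):
    """Depthwise-convolve an image gradient (B, C, H, W) with a Gaussian
    kernel so the resulting perturbation stays effective under small
    translations of the input."""
    ax = torch.arange(kernel_size, dtype=torch.float32) - kernel_size // 2
    g1d = torch.exp(-(ax ** 2) / (2.0 * sigma ** 2))
    k = torch.outer(g1d, g1d)
    k = (k / k.sum()).expand(grad.shape[1], 1, -1, -1)
    return F.conv2d(grad, k, padding=kernel_size // 2, groups=grad.shape[1])
```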

Improving Adversarial Robustness via Promoting Ensemble Diversity

6 code implementations 25 Jan 2019 Tianyu Pang, Kun Xu, Chao Du, Ning Chen, Jun Zhu

Though deep neural networks have achieved significant progress on various tasks, often enhanced by model ensembles, existing high-performance models can be vulnerable to adversarial attacks.

Adversarial Robustness
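
A diversity regularizer in this spirit can be sketched by penalizing agreement among ensemble members on the non-true-label probability mass. This is a generic cosine-similarity penalty, not ADP's exact log-determinant term.

```python
import torch
import torch.nn.functional as F

def diversity_penalty(probs_list, y):
    """Penalize pairwise cosine similarity of the members' predictions
    after zeroing the true-label entry, pushing members to spread their
    residual mass over different wrong classes."""
    masked = []
    for p in probs_list:                      # each p: (B, K) softmax output
        m = p.clone()
        m.scatter_(1, y.unsqueeze(1), 0.0)    # drop true-label probability
        masked.append(F.normalize(m, dim=1))
    pen = 0.0
    for i in range(len(masked)):
        for j in range(i + 1, len(masked)):
            pen = pen + (masked[i] * masked[j]).sum(dim=1).mean()
    return pen
```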

Adversarial Attacks and Defences Competition

1 code implementation 31 Mar 2018 Alexey Kurakin, Ian Goodfellow, Samy Bengio, Yinpeng Dong, Fangzhou Liao, Ming Liang, Tianyu Pang, Jun Zhu, Xiaolin Hu, Cihang Xie, Jian-Yu Wang, Zhishuai Zhang, Zhou Ren, Alan Yuille, Sangxia Huang, Yao Zhao, Yuzhe Zhao, Zhonglin Han, Junjiajia Long, Yerkebulan Berdibekov, Takuya Akiba, Seiya Tokui, Motoki Abe

To accelerate research on adversarial examples and robustness of machine learning classifiers, Google Brain organized a NIPS 2017 competition that encouraged researchers to develop new methods to generate adversarial examples as well as to develop new ways to defend against them.

Max-Mahalanobis Linear Discriminant Analysis Networks

2 code implementations ICML 2018 Tianyu Pang, Chao Du, Jun Zhu

In this paper, we show that a properly designed classifier can improve robustness to adversarial attacks and lead to better prediction results.

Boosting Adversarial Attacks with Momentum

5 code implementations CVPR 2018 Yinpeng Dong, Fangzhou Liao, Tianyu Pang, Hang Su, Jun Zhu, Xiaolin Hu, Jianguo Li

To further improve the success rates for black-box attacks, we apply momentum iterative algorithms to an ensemble of models, and show that the adversarially trained models with a strong defense ability are also vulnerable to our black-box attacks.

Adversarial Attack
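
The momentum step the abstract refers to accumulates L1-normalized gradients into a velocity before taking the sign step. A minimal sketch for image batches follows; the hyperparameters are illustrative.

```python
import torch
import torch.nn.functional as F

def mi_fgsm(model, x, y, eps=8/255, alpha=2/255, steps=10, mu=1.0):
    """Momentum iterative FGSM: the accumulated velocity g stabilizes
    update directions across iterations, which is what boosts black-box
    transferability."""
    g = torch.zeros_like(x)
    x_adv = x.clone()
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        g = mu * g + grad / grad.abs().sum(dim=(1, 2, 3), keepdim=True)
        x_adv = x_adv.detach() + alpha * g.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)
    return x_adv.detach()
```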

Towards Robust Detection of Adversarial Examples

1 code implementation NeurIPS 2018 Tianyu Pang, Chao Du, Yinpeng Dong, Jun Zhu

Although the recent progress is substantial, deep learning methods can be vulnerable to the maliciously generated adversarial examples.
