1 code implementation • 5 Dec 2023 • Zhuo Huang, Chang Liu, Yinpeng Dong, Hang Su, Shibao Zheng, Tongliang Liu
Concretely, by estimating a transition matrix that captures the probability of one class being confused with another, an instruction containing a correct exemplar and an erroneous one from the most probable noisy class can be constructed.
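As a rough illustration of this construction (function and variable names here are hypothetical, not from the paper), a noise-aware instruction could be assembled from an estimated transition matrix T, where T[i, j] approximates the probability that true class i is observed as class j:

```python
import numpy as np

def build_instruction(T, target_class, exemplars, class_names):
    """Pair a correct exemplar with one from the most probable noisy class."""
    row = T[target_class].astype(float).copy()
    row[target_class] = -np.inf            # skip the correct class itself
    noisy_class = int(np.argmax(row))      # class most often confused with the target
    return (f"Correct example of '{class_names[target_class]}': "
            f"{exemplars[target_class]}. It is often mistaken for "
            f"'{class_names[noisy_class]}', e.g.: {exemplars[noisy_class]}.")
```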
1 code implementation • 20 Nov 2023 • Yu Tian, Xiao Yang, Jingyuan Zhang, Yinpeng Dong, Hang Su
3) the detection of the produced improper responses is more challenging.
1 code implementation • 21 Sep 2023 • Yinpeng Dong, Huanran Chen, Jiawei Chen, Zhengwei Fang, Xiao Yang, Yichi Zhang, Yu Tian, Hang Su, Jun Zhu
By attacking white-box surrogate vision encoders or MLLMs, the generated adversarial examples can mislead Bard into outputting wrong image descriptions with a 22% success rate based solely on transferability.
1 code implementation • 5 Sep 2023 • Haixu Song, Shiyu Huang, Yinpeng Dong, Wei-Wei Tu
The rise of deepfake images, especially of well-known personalities, poses a serious threat to the dissemination of authentic information.
no code implementations • ICCV 2023 • Yikai Wang, Yinpeng Dong, Fuchun Sun, Xiao Yang
The key idea of our method, Root Pose Decomposition (RPD), is to maintain a per-frame root pose transformation while building a dense field with local transformations to rectify the root pose.
no code implementations • 21 Jul 2023 • Shouwei Ruan, Yinpeng Dong, Hang Su, Jianteng Peng, Ning Chen, Xingxing Wei
Experimental results show that VIAT significantly improves the viewpoint robustness of various image classifiers by leveraging the diversity of adversarial viewpoints generated by GMVFool.
no code implementations • ICCV 2023 • Shouwei Ruan, Yinpeng Dong, Hang Su, Jianteng Peng, Ning Chen, Xingxing Wei
Visual recognition models are not invariant to viewpoint changes in the 3D world, as different viewing directions can dramatically affect the predictions given the same object.
1 code implementation • 28 Jun 2023 • Xingxing Wei, Shouwei Ruan, Yinpeng Dong, Hang Su
In this paper, we propose the Distribution-Optimized Adversarial Patch (DOPatch), a novel method that optimizes a multimodal distribution of adversarial locations instead of individual ones.
no code implementations • 16 Jun 2023 • Hongcheng Gao, Hao Zhang, Yinpeng Dong, Zhijie Deng
Text-to-image (T2I) diffusion models (DMs) have shown promise in generating high-quality images from textual descriptions.
no code implementations • 15 Jun 2023 • Caixin Kang, Yinpeng Dong, Zhengyi Wang, Shouwei Ruan, Yubo Chen, Hang Su, Xingxing Wei
Adversarial attacks, particularly patch attacks, pose significant threats to the robustness and reliability of deep learning models.
2 code implementations • 24 May 2023 • Huanran Chen, Yinpeng Dong, Zhengyi Wang, Xiao Yang, Chengqi Duan, Hang Su, Jun Zhu
Since our method does not require training on particular adversarial attacks, we demonstrate that it generalizes better, defending against multiple unseen threats.
1 code implementation • 7 May 2023 • Shengfang Zhai, Yinpeng Dong, Qingni Shen, Shi Pu, Yuejian Fang, Hang Su
To gain a better understanding of the training process and potential risks of text-to-image synthesis, we perform a systematic investigation of backdoor attacks on text-to-image diffusion models and propose BadT2I, a general multimodal backdoor attack framework that tampers with image synthesis at diverse semantic levels.
1 code implementation • CVPR 2023 • Zijian Zhu, Yichi Zhang, Hai Chen, Yinpeng Dong, Shu Zhao, Wenbo Ding, Jiachen Zhong, Shibao Zheng
However, a systematic understanding of the robustness of these vision-dependent BEV models, which is closely related to the safety of autonomous driving systems, is still lacking.
1 code implementation • CVPR 2023 • Xiao Yang, Chang Liu, Longlong Xu, Yikai Wang, Yinpeng Dong, Ning Chen, Hang Su, Jun Zhu
The goal of this work is to develop a more reliable technique that can carry out an end-to-end evaluation of adversarial robustness for commercial systems.
no code implementations • CVPR 2023 • Yikai Wang, Wenbing Huang, Yinpeng Dong, Fuchun Sun, Anbang Yao
Binary Neural Networks (BNNs) represent convolution weights with 1-bit values, which improves the efficiency of storage and computation.
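As background for how such 1-bit weights are typically trained (the general BNN recipe, not this paper's specific contribution), a minimal sketch of sign binarization with a straight-through estimator:

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    """Forward: quantize weights to {-1, +1}; backward: straight-through."""

    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return w.sign()  # note: torch.sign maps 0 to 0; implementations often map it to +1

    @staticmethod
    def backward(ctx, grad_out):
        w, = ctx.saved_tensors
        # pass gradients only where |w| <= 1, the standard clipped STE
        return grad_out * (w.abs() <= 1).float()

# Usage inside a layer's forward pass: w_bin = BinarizeSTE.apply(self.weight)
```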
2 code implementations • 16 Mar 2023 • Huanran Chen, Yichi Zhang, Yinpeng Dong, Jun Zhu
Transfer-based adversarial attacks have attracted tremendous attention, as they can identify the weaknesses of deep learning models in a black-box manner.
no code implementations • 28 Feb 2023 • Chang Liu, Yinpeng Dong, Wenzhao Xiang, Xiao Yang, Hang Su, Jun Zhu, Yuefeng Chen, Yuan He, Hui Xue, Shibao Zheng
In our benchmark, we evaluate the robustness of 55 typical deep learning models on ImageNet with diverse architectures (e.g., CNNs, Transformers) and learning algorithms (e.g., normal supervised training, pre-training, adversarial training) under numerous adversarial attacks and out-of-distribution (OOD) datasets.
2 code implementations • 28 Feb 2023 • Zhongkai Hao, Zhengyi Wang, Hang Su, Chengyang Ying, Yinpeng Dong, Songming Liu, Ze Cheng, Jian Song, Jun Zhu
However, there are several challenges for learning operators in practical applications, such as irregular meshes, multiple input functions, and the complexity of PDE solutions.
1 code implementation • CVPR 2023 • Yinpeng Dong, Caixin Kang, Jinlai Zhang, Zijian Zhu, Yikai Wang, Xiao Yang, Hang Su, Xingxing Wei, Jun Zhu
3D object detection is an important task in autonomous driving to perceive the surroundings.
no code implementations • 7 Dec 2022 • Yinpeng Dong, Peng Chen, Senyou Deng, Lianji L, Yi Sun, Hanyu Zhao, Jiaxing Li, Yunteng Tan, Xinyu Liu, Yangyi Dong, Enhui Xu, Jincai Xu, Shu Xu, Xuelin Fu, Changfeng Sun, Haoliang Han, Xuchong Zhang, Shen Chen, Zhimin Sun, Junyi Cao, Taiping Yao, Shouhong Ding, Yu Wu, Jian Lin, Tianpeng Wu, Ye Wang, Yu Fu, Lin Feng, Kangkang Gao, Zeyu Liu, Yuanzhe Pang, Chengqi Duan, Huipeng Zhou, Yajie Wang, Yuhang Zhao, Shangbo Wu, Haoran Lyu, Zhiyu Lin, YiFei Gao, Shuang Li, Haonan Wang, Jitao Sang, Chen Ma, Junhao Zheng, Yijia Li, Chao Shen, Chenhao Lin, Zhichao Cui, Guoshuai Liu, Huafeng Shi, Kun Hu, Mengxin Zhang
The security of artificial intelligence (AI) is an important research area towards safe, reliable, and trustworthy AI systems.
no code implementations • 2 Nov 2022 • Jinlai Zhang, Yinpeng Dong, Jun Zhu, Jihong Zhu, Minchi Kuang, Xiaming Yuan
Extensive experiments show that the SS attack proposed in this paper can be seamlessly combined with existing state-of-the-art (SOTA) 3D point cloud attack methods to form more powerful attacks, improving transferability by over 3.6 times compared to the baseline.
no code implementations • 27 Oct 2022 • Yibo Miao, Yinpeng Dong, Jun Zhu, Xiao-Shan Gao
For naturalness, we constrain the adversarial example to be $\epsilon$-isometric to the original one by adopting the Gaussian curvature as a surrogate metric guaranteed by a theoretical analysis.
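A rough formalization of this constraint, in our own notation rather than necessarily the paper's:

```latex
\max_{x^{\mathrm{adv}}} \; \mathcal{L}\left(f(x^{\mathrm{adv}}), y\right)
\quad \text{s.t.} \quad
\left\| K(x^{\mathrm{adv}}) - K(x) \right\| \le \epsilon
```

where K(·) denotes the Gaussian curvature, so bounding the change in curvature serves as a tractable surrogate for enforcing ε-isometry between the adversarial and original surfaces.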
1 code implementation • 8 Oct 2022 • Yinpeng Dong, Shouwei Ruan, Hang Su, Caixin Kang, Xingxing Wei, Jun Zhu
Recent studies have demonstrated that visual recognition models lack robustness to distribution shift.
1 code implementation • 7 Oct 2022 • Yuanhao Ban, Yinpeng Dong
Self-supervised pre-training has drawn increasing attention in recent years due to its superior performance on numerous downstream tasks after fine-tuning.
no code implementations • 9 Jun 2022 • Zhongkai Hao, Chengyang Ying, Yinpeng Dong, Hang Su, Jun Zhu, Jian Song
Under the GSmooth framework, we present a scalable algorithm that uses a surrogate image-to-image network to approximate the complex transformation.
no code implementations • 3 Jun 2022 • Xiaoyi Chen, Yinpeng Dong, Zeyu Sun, Shengfang Zhai, Qingni Shen, Zhonghai Wu
Although deep neural networks (DNNs) have led to unprecedented progress in various natural language processing (NLP) tasks, research shows that deep models are extremely vulnerable to backdoor attacks.
no code implementations • 28 May 2022 • Shih-Han Chan, Yinpeng Dong, Jun Zhu, Xiaolu Zhang, Jun Zhou
We propose four kinds of backdoor attacks for the object detection task: 1) Object Generation Attack: a trigger can falsely generate an object of the target class; 2) Regional Misclassification Attack: a trigger can change the prediction of a surrounding object to the target class; 3) Global Misclassification Attack: a single trigger can change the predictions of all objects in an image to the target class; and 4) Object Disappearance Attack: a trigger can make the detector fail to detect objects of the target class.
no code implementations • 26 Mar 2022 • Sha Yuan, Hanyu Zhao, Shuai Zhao, Jiahong Leng, Yangxiao Liang, Xiaozhi Wang, Jifan Yu, Xin Lv, Zhou Shao, Jiaao He, Yankai Lin, Xu Han, Zhenghao Liu, Ning Ding, Yongming Rao, Yizhao Gao, Liang Zhang, Ming Ding, Cong Fang, Yisen Wang, Mingsheng Long, Jing Zhang, Yinpeng Dong, Tianyu Pang, Peng Cui, Lingxiao Huang, Zheng Liang, HuaWei Shen, HUI ZHANG, Quanshi Zhang, Qingxiu Dong, Zhixing Tan, Mingxuan Wang, Shuo Wang, Long Zhou, Haoran Li, Junwei Bao, Yingwei Pan, Weinan Zhang, Zhou Yu, Rui Yan, Chence Shi, Minghao Xu, Zuobai Zhang, Guoqiang Wang, Xiang Pan, Mengjie Li, Xiaoyu Chu, Zijun Yao, Fangwei Zhu, Shulin Cao, Weicheng Xue, Zixuan Ma, Zhengyan Zhang, Shengding Hu, Yujia Qin, Chaojun Xiao, Zheni Zeng, Ganqu Cui, Weize Chen, Weilin Zhao, Yuan YAO, Peng Li, Wenzhao Zheng, Wenliang Zhao, Ziyi Wang, Borui Zhang, Nanyi Fei, Anwen Hu, Zenan Ling, Haoyang Li, Boxi Cao, Xianpei Han, Weidong Zhan, Baobao Chang, Hao Sun, Jiawen Deng, Chujie Zheng, Juanzi Li, Lei Hou, Xigang Cao, Jidong Zhai, Zhiyuan Liu, Maosong Sun, Jiwen Lu, Zhiwu Lu, Qin Jin, Ruihua Song, Ji-Rong Wen, Zhouchen Lin, LiWei Wang, Hang Su, Jun Zhu, Zhifang Sui, Jiajun Zhang, Yang Liu, Xiaodong He, Minlie Huang, Jian Tang, Jie Tang
With the rapid development of deep learning, training Big Models (BMs) for multiple downstream tasks becomes a popular paradigm.
1 code implementation • 13 Mar 2022 • Yinpeng Dong, Shuyu Cheng, Tianyu Pang, Hang Su, Jun Zhu
However, the existing methods inevitably suffer from low attack success rates or poor query efficiency since it is difficult to estimate the gradient in a high-dimensional input space with limited information.
no code implementations • 9 Mar 2022 • Xiao Yang, Yinpeng Dong, Tianyu Pang, Zihao Xiao, Hang Su, Jun Zhu
It is therefore imperative to develop a framework that can enable a comprehensive evaluation of the vulnerability of face recognition in the physical world.
1 code implementation • 17 Oct 2021 • Yuefeng Chen, Xiaofeng Mao, Yuan He, Hui Xue, Chao Li, Yinpeng Dong, Qi-An Fu, Xiao Yang, Tianyu Pang, Hang Su, Jun Zhu, Fangcheng Liu, Chao Zhang, Hongyang Zhang, Yichi Zhang, Shilong Liu, Chang Liu, Wenzhao Xiang, Yajie Wang, Huipeng Zhou, Haoran Lyu, Yidan Xu, Zixuan Xu, Taoyu Zhu, Wenjun Li, Xianfeng Gao, Guoqiu Wang, Huanqian Yan, Ying Guo, Chaoning Zhang, Zheng Fang, Yang Wang, Bingyang Fu, Yunfei Zheng, Yekui Wang, Haorong Luo, Zhen Yang
Many works have investigated the adversarial attacks or defenses under the settings where a bounded and imperceptible perturbation can be added to the input.
1 code implementation • 15 Oct 2021 • Yinpeng Dong, Qi-An Fu, Xiao Yang, Wenzhao Xiang, Tianyu Pang, Hang Su, Jun Zhu, Jiayu Tang, Yuefeng Chen, Xiaofeng Mao, Yuan He, Hui Xue, Chao Li, Ye Liu, Qilong Zhang, Lianli Gao, Yunrui Yu, Xitong Gao, Zhe Zhao, Daquan Lin, Jiadong Lin, Chuanbiao Song, ZiHao Wang, Zhennan Wu, Yang Guo, Jiequan Cui, Xiaogang Xu, Pengguang Chen
Due to the vulnerability of deep neural networks (DNNs) to adversarial examples, a large number of defense techniques have been proposed to alleviate this problem in recent years.
no code implementations • 13 Oct 2021 • Xiao Yang, Yinpeng Dong, Wenzhao Xiang, Tianyu Pang, Hang Su, Jun Zhu
The vulnerability of deep neural networks to adversarial examples has motivated an increasing number of defense strategies for promoting model robustness.
1 code implementation • ICML Workshop AML 2021 • Xiao Yang, Yinpeng Dong, Tianyu Pang, Hang Su, Jun Zhu
Transfer-based adversarial attacks can evaluate model robustness in the black-box setting.
no code implementations • CVPR 2021 • Zihao Xiao, Xianfeng Gao, Chilin Fu, Yinpeng Dong, Wei Gao, Xiaolu Zhang, Jun Zhou, Jun Zhu
However, deep CNNs are vulnerable to adversarial patches, which are physically realizable and stealthy, raising new security concerns on the real-world applications of these models.
1 code implementation • NeurIPS 2021 • Tianyu Pang, Xiao Yang, Yinpeng Dong, Hang Su, Jun Zhu
Collecting training data from untrusted sources exposes machine learning services to poisoning adversaries, who maliciously manipulate training data to degrade the model accuracy.
1 code implementation • ICLR 2022 • Yinpeng Dong, Ke Xu, Xiao Yang, Tianyu Pang, Zhijie Deng, Hang Su, Jun Zhu
In this paper, we explore the memorization effect in adversarial training (AT) for promoting a deeper understanding of model capacity, convergence, generalization, and especially robust overfitting of the adversarially trained models.
1 code implementation • CVPR 2022 • Tianyu Pang, Huishuai Zhang, Di He, Yinpeng Dong, Hang Su, Wei Chen, Jun Zhu, Tie-Yan Liu
Along with this routine, we find that confidence and a rectified confidence (R-Con) can form two coupled rejection metrics, which could provably distinguish wrongly classified inputs from correctly classified ones.
no code implementations • 9 May 2021 • Qi-An Fu, Yinpeng Dong, Hang Su, Jun Zhu
Deep learning models are vulnerable to adversarial examples, which can fool a target classifier by imposing imperceptible perturbations onto natural examples.
1 code implementation • 7 Apr 2021 • Jinlai Zhang, Yinpeng Dong, Binbin Liu, Bo Ouyang, Jihong Zhu, Minchi Kuang, Houqing Wang, Yanmei Meng
To this end, in this paper, we propose a novel adversarial defense for 3D point cloud classifiers that makes full use of the nature of DNNs.
no code implementations • ICCV 2021 • Yinpeng Dong, Xiao Yang, Zhijie Deng, Tianyu Pang, Zihao Xiao, Hang Su, Jun Zhu
Although deep neural networks (DNNs) have made rapid progress in recent years, they are vulnerable in adversarial environments.
1 code implementation • NeurIPS 2020 • Zhijie Deng, Yinpeng Dong, Shifeng Zhang, Jun Zhu
In this work, we decouple the training of a network with stochastic architectures (NSA) from NAS and provide the first systematic investigation of it as a stand-alone problem.
2 code implementations • ICLR 2021 • Tianyu Pang, Xiao Yang, Yinpeng Dong, Hang Su, Jun Zhu
Adversarial training (AT) is one of the most effective strategies for promoting model robustness.
no code implementations • 28 Sep 2020 • Zhijie Deng, Xiao Yang, Hao Zhang, Yinpeng Dong, Jun Zhu
Despite their theoretical appeal, Bayesian neural networks (BNNs) fall far behind normal NNs in real-world adoption, mainly due to their limited scalability in training and the low fidelity of their uncertainty estimates.
2 code implementations • 8 Jul 2020 • Xiao Yang, Dingcheng Yang, Yinpeng Dong, Hang Su, Wenjian Yu, Jun Zhu
Based on large-scale evaluations, commercial FR API services fail to exhibit acceptable robustness; we also draw several important conclusions for understanding the adversarial robustness of FR models and provide insights for the design of robust FR models.
1 code implementation • CVPR 2020 • Yinpeng Dong, Qi-An Fu, Xiao Yang, Tianyu Pang, Hang Su, Zihao Xiao, Jun Zhu
Deep neural networks are vulnerable to adversarial examples, which has become one of the most important research problems in the development of deep learning.
1 code implementation • ICCV 2021 • Xiao Yang, Yinpeng Dong, Tianyu Pang, Hang Su, Jun Zhu, Yuefeng Chen, Hui Xue
As billions of personal data records are shared through social media and networks, data privacy and security have drawn increasing attention.
1 code implementation • NeurIPS 2020 • Tianyu Pang, Xiao Yang, Yinpeng Dong, Kun Xu, Jun Zhu, Hang Su
Adversarial training (AT) is one of the most effective defenses against adversarial attacks for deep learning models.
1 code implementation • NeurIPS 2020 • Yinpeng Dong, Zhijie Deng, Tianyu Pang, Hang Su, Jun Zhu
Adversarial training (AT) is among the most effective techniques to improve model robustness by augmenting training data with adversarial examples.
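For reference, a minimal sketch of the standard PGD-based adversarial training loop that this line describes (the common recipe, not the specific algorithm contributed by the paper):

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Inner maximization: find a worst-case L_inf perturbation."""
    delta = torch.zeros_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
        delta = delta.detach().requires_grad_(True)
    return (x + delta).clamp(0, 1).detach()

def train_step(model, optimizer, x, y):
    """Outer minimization: train on the adversarial examples."""
    x_adv = pgd_attack(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```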
2 code implementations • NeurIPS 2019 • Shuyu Cheng, Yinpeng Dong, Tianyu Pang, Hang Su, Jun Zhu
We consider the black-box adversarial setting, where the adversary has to generate adversarial perturbations without access to the target models to compute gradients.
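A minimal sketch of the zeroth-order gradient estimator that such black-box attacks build on; the paper's contribution lies in improving estimators of this kind, which this plain antithetic-sampling version does not capture:

```python
import torch

def estimate_gradient(loss_fn, x, sigma=1e-2, n_samples=50):
    """Estimate the gradient of loss_fn at x from loss queries only (no backprop)."""
    g = torch.zeros_like(x)
    for _ in range(n_samples):
        u = torch.randn_like(x)  # random probe direction
        # antithetic finite differences: two loss queries per direction
        g += (loss_fn(x + sigma * u) - loss_fn(x - sigma * u)) / (2 * sigma) * u
    return g / n_samples
```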
2 code implementations • ICLR 2020 • Tianyu Pang, Kun Xu, Yinpeng Dong, Chao Du, Ning Chen, Jun Zhu
Previous work shows that adversarially robust generalization requires larger sample complexity, and the same dataset, e.g., CIFAR-10, which enables good standard accuracy, may not suffice to train robust models.
1 code implementation • CVPR 2019 • Yinpeng Dong, Hang Su, Baoyuan Wu, Zhifeng Li, Wei Liu, Tong Zhang, Jun Zhu
In this paper, we evaluate the robustness of state-of-the-art face recognition models in the decision-based black-box attack setting, where the attackers have no access to the model parameters and gradients, but can only acquire hard-label predictions by sending queries to the target model.
1 code implementation • CVPR 2019 • Yinpeng Dong, Tianyu Pang, Hang Su, Jun Zhu
In this paper, we propose a translation-invariant attack method to generate more transferable adversarial examples against the defense models.
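A minimal sketch of the translation-invariant idea as commonly implemented: convolve the input gradient with a predefined kernel before the sign update, which approximates averaging gradients over translated inputs (kernel type and hyperparameters here are illustrative; a single attack step is shown):

```python
import torch
import torch.nn.functional as F

def gaussian_kernel(size=15, sigma=3.0):
    ax = torch.arange(size, dtype=torch.float32) - size // 2
    g1d = torch.exp(-ax ** 2 / (2 * sigma ** 2))
    k = torch.outer(g1d, g1d)
    return (k / k.sum()).expand(3, 1, size, size)  # one kernel per RGB channel

def ti_step(model, x, y, eps_step=2/255):
    kernel = gaussian_kernel().to(x.device)
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    # smooth the gradient: depthwise convolution with 'same' padding
    grad = F.conv2d(grad, kernel, padding=kernel.shape[-1] // 2, groups=3)
    return (x + eps_step * grad.sign()).clamp(0, 1).detach()
```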
no code implementations • 25 Feb 2019 • Zhijie Deng, Yinpeng Dong, Jun Zhu
We present batch virtual adversarial training (BVAT), a novel regularization method for graph convolutional networks (GCNs).
no code implementations • 25 Jan 2019 • Yinpeng Dong, Fan Bao, Hang Su, Jun Zhu
3) We propose to improve the consistency of neurons on the adversarial example subset by an adversarial training algorithm with a consistent loss.
no code implementations • 16 Nov 2018 • You Qiaoben, Zheng Wang, Jianguo Li, Yinpeng Dong, Yu-Gang Jiang, Jun Zhu
Binary neural networks offer great resource and computing efficiency, but suffer from long training procedures and non-negligible accuracy drops compared to their full-precision counterparts.
no code implementations • CVPR 2018 • Zhou Su, Chen Zhu, Yinpeng Dong, Dongqi Cai, Yurong Chen, Jianguo Li
The second is a mechanism for handling multiple knowledge facts expanded from question and answer pairs.
1 code implementation • 31 Mar 2018 • Alexey Kurakin, Ian Goodfellow, Samy Bengio, Yinpeng Dong, Fangzhou Liao, Ming Liang, Tianyu Pang, Jun Zhu, Xiaolin Hu, Cihang Xie, Jian-Yu Wang, Zhishuai Zhang, Zhou Ren, Alan Yuille, Sangxia Huang, Yao Zhao, Yuzhe Zhao, Zhonglin Han, Junjiajia Long, Yerkebulan Berdibekov, Takuya Akiba, Seiya Tokui, Motoki Abe
To accelerate research on adversarial examples and robustness of machine learning classifiers, Google Brain organized a NIPS 2017 competition that encouraged researchers to develop new methods to generate adversarial examples as well as to develop new ways to defend against them.
2 code implementations • CVPR 2018 • Fangzhou Liao, Ming Liang, Yinpeng Dong, Tianyu Pang, Xiaolin Hu, Jun Zhu
First, with HGD as a defense, the target model is more robust to either white-box or black-box adversarial attacks.
6 code implementations • CVPR 2018 • Yinpeng Dong, Fangzhou Liao, Tianyu Pang, Hang Su, Jun Zhu, Xiaolin Hu, Jianguo Li
To further improve the success rates for black-box attacks, we apply momentum iterative algorithms to an ensemble of models, and show that the adversarially trained models with a strong defense ability are also vulnerable to our black-box attacks.
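For reference, a minimal sketch of a momentum iterative attack over an ensemble of models (logits fused by averaging; hyperparameters and the exact ensemble strategy here are illustrative):

```python
import torch
import torch.nn.functional as F

def mi_fgsm(models, x, y, eps=16/255, steps=10, mu=1.0):
    alpha = eps / steps
    x_adv, g = x.clone(), torch.zeros_like(x)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        logits = torch.stack([m(x_adv) for m in models]).mean(dim=0)  # fuse logits
        loss = F.cross_entropy(logits, y)
        grad, = torch.autograd.grad(loss, x_adv)
        # momentum over gradients normalized per sample (proportional to the L1 norm)
        g = mu * g + grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True)
        x_adv = x_adv.detach() + alpha * g.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()
```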
no code implementations • 18 Aug 2017 • Yinpeng Dong, Hang Su, Jun Zhu, Fan Bao
We find that: (1) the neurons in DNNs do not truly detect semantic objects/parts, but respond to objects/parts only as recurrent discriminative patches; (2) deep visual representations are not robust distributed codes of visual concepts, because the representations of adversarial images are largely inconsistent with those of real images despite their similar visual appearance. Both findings differ from previous ones.
1 code implementation • 3 Aug 2017 • Yinpeng Dong, Renkun Ni, Jianguo Li, Yurong Chen, Jun Zhu, Hang Su
This procedure can greatly compensate for the quantization error and thus yield better accuracy for low-bit DNNs.
1 code implementation • NeurIPS 2018 • Tianyu Pang, Chao Du, Yinpeng Dong, Jun Zhu
Although the recent progress is substantial, deep learning methods can be vulnerable to maliciously generated adversarial examples.
no code implementations • CVPR 2017 • Yinpeng Dong, Hang Su, Jun Zhu, Bo Zhang
Interpretability of deep neural networks (DNNs) is essential, since it enables users to understand the overall strengths and weaknesses of the models, conveys how the models will behave in the future, and shows how to diagnose and correct potential problems.
no code implementations • 14 Nov 2016 • Yujie Qian, Yinpeng Dong, Ye Ma, Hailong Jin, Juanzi Li
Measuring research impact and ranking academic achievement are important and challenging problems.
13 code implementations • 3 Oct 2016 • Nicolas Papernot, Fartash Faghri, Nicholas Carlini, Ian Goodfellow, Reuben Feinman, Alexey Kurakin, Cihang Xie, Yash Sharma, Tom Brown, Aurko Roy, Alexander Matyasko, Vahid Behzadan, Karen Hambardzumyan, Zhishuai Zhang, Yi-Lin Juang, Zhi Li, Ryan Sheatsley, Abhibhav Garg, Jonathan Uesato, Willi Gierke, Yinpeng Dong, David Berthelot, Paul Hendricks, Jonas Rauber, Rujun Long, Patrick McDaniel
An adversarial example library for constructing attacks, building defenses, and benchmarking both