Enhance the Visual Representation via Discrete Adversarial Training

16 Sep 2022  ·  Xiaofeng Mao, Yuefeng Chen, Ranjie Duan, Yao Zhu, Gege Qi, Shaokai Ye, Xiaodan Li, Rong Zhang, Hui Xue

Adversarial Training (AT), commonly accepted as one of the most effective defenses against adversarial examples, can largely harm standard performance and thus has limited usefulness in industrial-scale production and applications. Surprisingly, this phenomenon is the opposite in Natural Language Processing (NLP) tasks, where AT can even benefit generalization. We observe that the merit of AT in NLP tasks may derive from the discrete and symbolic input space. To borrow this advantage from NLP-style AT, we propose Discrete Adversarial Training (DAT). DAT leverages VQGAN to reform image data into discrete, text-like inputs, i.e., visual words. It then minimizes the maximal risk on such discrete images under symbolic adversarial perturbations. We further give an explanation from a distributional perspective to demonstrate the effectiveness of DAT. As a plug-and-play technique for enhancing visual representations, DAT achieves significant improvement on multiple tasks, including image classification, object detection, and self-supervised learning. Notably, a model pre-trained with Masked Auto-Encoding (MAE) and fine-tuned with DAT, without extra data, achieves 31.40 mCE on ImageNet-C and 32.77% top-1 accuracy on Stylized-ImageNet, setting a new state of the art. The code will be available at https://github.com/alibaba/easyrobust.
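The min-max idea behind DAT can be sketched end to end: discretize inputs into visual-word indices, greedily search for a symbolic perturbation of those indices that maximizes the loss (inner maximization), then update the model on the perturbed tokens (outer minimization). The sketch below is a toy illustration only, not the paper's implementation: a small random codebook stands in for VQGAN's learned visual words, a linear classifier stands in for the vision backbone, and `quantize`, `symbolic_attack`, and `train_dat` are names and shapes we assume for exposition.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (assumptions): K random "visual words" of dimension D play
# the role of VQGAN's learned codebook; a linear C-way classifier plays the
# role of the vision backbone.
K, D, C = 8, 4, 2
codebook = rng.normal(size=(K, D))

def quantize(patches):
    """Map each patch feature to the index of its nearest visual word."""
    d = ((patches[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d.argmin(-1)

def forward(w, tokens):
    """Pool token embeddings and return (features, softmax probabilities)."""
    feats = codebook[tokens].mean(axis=0)
    logits = feats @ w
    p = np.exp(logits - logits.max())
    return feats, p / p.sum()

def loss(w, tokens, y):
    """Cross-entropy loss of the linear classifier on a token sequence."""
    _, p = forward(w, tokens)
    return -np.log(p[y] + 1e-12)

def symbolic_attack(w, tokens, y, budget=1):
    """Inner maximization over the discrete input space: greedily swap up to
    `budget` tokens to whichever codebook index most increases the loss."""
    adv = tokens.copy()
    for _ in range(budget):
        best_l, best_i, best_k = loss(w, adv, y), None, None
        for i in range(len(adv)):
            for k in range(K):
                cand = adv.copy()
                cand[i] = k
                l = loss(w, cand, y)
                if l > best_l:
                    best_l, best_i, best_k = l, i, k
        if best_i is None:  # no swap increases the loss further
            break
        adv[best_i] = best_k
    return adv

def train_dat(X, Y, epochs=5, lr=0.5, budget=1):
    """Outer minimization: fit the classifier on worst-case discrete inputs."""
    w = np.zeros((D, C))
    for _ in range(epochs):
        for x, y in zip(X, Y):
            adv = symbolic_attack(w, quantize(x), y, budget)
            feats, p = forward(w, adv)
            grad = np.outer(feats, p - np.eye(C)[y])  # dCE/dw
            w -= lr * grad
    return w

# Synthetic two-class "images": 6 patch features per sample near a class mean.
mu = rng.normal(size=(C, D)) * 2.0
Y = rng.integers(0, C, size=40)
X = mu[Y][:, None, :] + 0.3 * rng.normal(size=(40, 6, D))

w = train_dat(X, Y)
acc = np.mean([forward(w, quantize(x))[1].argmax() == y for x, y in zip(X, Y)])
```

The greedy token-swap attack is one simple choice for the inner step; because the input space is discrete, the perturbation is symbolic (an index change) rather than a bounded pixel-space noise, which is the property the abstract credits for NLP-style AT helping rather than hurting generalization.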

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
| --- | --- | --- | --- | --- | --- |
| Image Classification | ImageNet | MAE+DAT (ViT-H) | Top-1 Accuracy | 87.02% | #110 |
| Domain Generalization | ImageNet-A | MAE+DAT (ViT-H) | Top-1 Accuracy (%) | 68.92 | #11 |
| Domain Generalization | ImageNet-C | MAE+DAT (ViT-H) | mean Corruption Error (mCE) | 31.4 | #3 |
| Domain Generalization | ImageNet-C | MAE+DAT (ViT-H) | Number of params | 632M | #41 |
| Domain Generalization | ImageNet-R | MAE+DAT (ViT-H) | Top-1 Error Rate | 34.39 | #12 |
| Domain Generalization | ImageNet-Sketch | MAE+DAT (ViT-H) | Top-1 Accuracy | 50.03 | #11 |
| Domain Generalization | Stylized-ImageNet | MAE+DAT (ViT-H) | Top-1 Accuracy | 32.77 | #1 |
