Learning Contextual Perturbation Budgets for Training Robust Neural Networks

1 Jan 2021  ·  Jing Xu, Zhouxing Shi, Huan Zhang, Jinfeng Yi, Cho-Jui Hsieh, Liwei Wang

Existing methods for training robust neural networks generally aim to make models uniformly robust across all input dimensions. However, input dimensions differ in importance, and that importance should be context-dependent. In this paper, we propose a novel framework for training robust models with non-uniform perturbation budgets across input dimensions. We introduce a perturbation budget generator that produces per-dimension budgets, and we incorporate it into certified defense. We evaluate our method on the MNIST and CIFAR-10 datasets and show that it achieves lower errors than methods using uniform perturbation budgets under the same "robustness volume". We also demonstrate that the generator produces semantically meaningful budgets, implying that it captures contextual information and the sensitivity of different features in a given image.
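To make the comparison in the abstract concrete, here is a minimal sketch of one plausible way to normalize non-uniform per-dimension budgets so that their "robustness volume" matches that of a uniform budget. This assumes the volume is the product of per-dimension budgets (the volume of the resulting L∞ box up to a constant factor); the paper's exact definition and the generator architecture may differ.

```python
import numpy as np

def normalize_budgets(raw, eps_uniform):
    """Scale non-negative raw budgets (e.g. the output of a budget
    generator network) so their geometric mean equals eps_uniform.
    This keeps the assumed robustness volume prod(eps_i) equal to
    eps_uniform ** d, the volume of a uniform budget over d dims."""
    raw = np.asarray(raw, dtype=float)
    raw = np.maximum(raw, 1e-12)          # guard against log(0)
    log_gm = np.mean(np.log(raw))         # log of the geometric mean
    scale = eps_uniform / np.exp(log_gm)  # one global rescaling factor
    return raw * scale

# Hypothetical example: a 4-dimensional input with uniform budget 0.1.
# Dimensions the generator deems less sensitive get larger budgets.
eps = normalize_budgets([0.5, 1.0, 2.0, 4.0], 0.1)
```

Under this normalization, comparing a model trained with `eps` against one trained with the uniform budget 0.1 is a like-for-like comparison at equal total volume, which is the setting the abstract's error comparison describes.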
