A Unified Framework for Randomized Smoothing Based Certified Defenses

25 Sep 2019 · Tianhang Zheng, Di Wang, Baochun Li, Jinhui Xu

Randomized smoothing, which was recently proved to be a certified defensive technique, has received considerable attention due to its scalability to large datasets and neural networks. However, several important questions remain unanswered in the existing frameworks, such as (i) whether the Gaussian mechanism is an optimal choice for certifying $\ell_2$-normed robustness, and (ii) whether randomized smoothing can certify $\ell_\infty$-normed robustness (on high-dimensional datasets like ImageNet). To answer these questions, we introduce a {\em unified} and {\em self-contained} framework to study randomized smoothing-based certified defenses, where we mainly focus on the two most popular norms in adversarial machine learning, {\em i.e.,} the $\ell_2$ and $\ell_\infty$ norms. We answer the above two questions by first demonstrating that the Gaussian and Exponential mechanisms are (near-)optimal options for certifying $\ell_2$- and $\ell_\infty$-normed robustness, respectively. We further show that the largest $\ell_\infty$ radius certified by randomized smoothing is upper bounded by $O(1/\sqrt{d})$, where $d$ is the dimensionality of the data. This theoretical finding suggests that certifying $\ell_\infty$-normed robustness via randomized smoothing may not be scalable to high-dimensional data. The veracity of our framework and analysis is verified by extensive evaluations on CIFAR-10 and ImageNet.
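The two headline claims above can be illustrated with a small sketch. Note this is not the paper's exact certification procedure: it uses the standard Gaussian-smoothing $\ell_2$ certificate $R = \sigma\,\Phi^{-1}(p_A)$ (in the simplified form where the runner-up class probability is bounded by $1-p_A$), and then converts it to an $\ell_\infty$ certificate via the norm inequality $\|x\|_\infty \le \|x\|_2$, i.e. an $\ell_2$ ball of radius $R$ contains the $\ell_\infty$ ball of radius $R/\sqrt{d}$, which is exactly the $O(1/\sqrt{d})$ scaling discussed in the abstract. The function names and parameters are illustrative assumptions, not the paper's API.

```python
from scipy.stats import norm


def certified_l2_radius(p_a: float, sigma: float) -> float:
    """Certified l2 radius for a Gaussian-smoothed classifier.

    Assumes the (lower bound on the) top-class probability is p_a and
    the runner-up probability is at most 1 - p_a, giving the simplified
    certificate R = sigma * Phi^{-1}(p_a). Returns 0 when p_a <= 1/2
    (no certificate can be issued).
    """
    if p_a <= 0.5:
        return 0.0
    return sigma * norm.ppf(p_a)


def implied_linf_radius(r_l2: float, d: int) -> float:
    """l_inf radius implied by an l2 certificate of radius r_l2.

    Since ||x||_2 <= sqrt(d) * ||x||_inf, the l2 ball of radius R
    contains the l_inf ball of radius R / sqrt(d): the certificate
    shrinks as O(1/sqrt(d)) in the data dimension d.
    """
    return r_l2 / d ** 0.5


if __name__ == "__main__":
    sigma, p_a = 0.5, 0.9
    r2 = certified_l2_radius(p_a, sigma)
    # Hypothetical ImageNet-scale dimensionality: 3 x 224 x 224.
    d = 3 * 224 * 224
    print(f"l2 radius: {r2:.4f}, implied l_inf radius at d={d}: "
          f"{implied_linf_radius(r2, d):.6f}")
```

For ImageNet-scale inputs ($d \approx 1.5 \times 10^5$, $\sqrt{d} \approx 388$), the implied $\ell_\infty$ radius is hundreds of times smaller than the $\ell_2$ radius, which is the intuition behind the scalability concern raised in the abstract.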
