Robustness Methods

Denoised Smoothing

Introduced by Salman et al. in Denoised Smoothing: A Provable Defense for Pretrained Classifiers

Denoised Smoothing is a method for obtaining a provably robust classifier from a fixed pretrained one, without any additional training or fine-tuning of that classifier. The basic idea is to prepend a custom-trained denoiser to the pretrained classifier, and then apply randomized smoothing. Randomized smoothing is a certified defense that converts any given classifier $f$ into a new smoothed classifier $g$ that is characterized by a non-linear Lipschitz property. When queried at a point $x$, the smoothed classifier $g$ outputs the class that is most likely to be returned by $f$ under isotropic Gaussian perturbations of its inputs. Unfortunately, randomized smoothing requires that the underlying classifier $f$ be robust to relatively large random Gaussian perturbations of the input, which is not the case for off-the-shelf pretrained models. Prepending the custom-trained denoiser effectively makes $f$ robust to such Gaussian perturbations, thereby making it "suitable" for randomized smoothing.
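The prediction step above can be sketched in a few lines. The following is a minimal toy illustration, not the paper's implementation: `smoothed_predict` is a hypothetical helper, the identity denoiser and linear classifier are stand-ins for the custom-trained denoiser and pretrained classifier, and the radius formula is the standard randomized-smoothing certificate $R = \sigma \, \Phi^{-1}(p_A)$ (a rigorous certificate would replace the empirical $p_A$ with a confidence lower bound):

```python
import numpy as np
from statistics import NormalDist  # stdlib standard normal (Phi and its inverse)

def smoothed_predict(classifier, denoiser, x, sigma, n=1000, rng=None):
    """Denoised smoothing: majority vote of classifier(denoiser(x + Gaussian noise)).

    Returns the top class and a (heuristic) certified l2 radius.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    counts = {}
    for _ in range(n):
        noisy = x + rng.normal(scale=sigma, size=x.shape)
        label = classifier(denoiser(noisy))
        counts[label] = counts.get(label, 0) + 1
    top = max(counts, key=counts.get)
    # Empirical probability of the top class; a true certificate would use a
    # high-confidence lower bound (e.g. Clopper-Pearson) instead.
    p_a = counts[top] / n
    # Randomized-smoothing radius R = sigma * Phi^{-1}(p_a), with p_a clamped
    # away from 0 and 1 so the inverse CDF stays finite.
    radius = sigma * NormalDist().inv_cdf(min(max(p_a, 1e-9), 1 - 1e-9))
    return top, max(radius, 0.0)

# Toy demo: identity "denoiser" and threshold "classifier" as stand-ins.
classifier = lambda z: int(z.sum() > 0)
denoiser = lambda z: z
label, radius = smoothed_predict(classifier, denoiser, np.array([0.1]), sigma=0.25)
```

The key point of the method is that only the denoiser is trained; the classifier is queried as a black box, exactly as in the loop above.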

Source: Denoised Smoothing: A Provable Defense for Pretrained Classifiers


Tasks


Task Papers Share
Image Reconstruction 2 25.00%
Adversarial Robustness 2 25.00%
Image Classification 2 25.00%
Denoising 1 12.50%
General Classification 1 12.50%

