Denoised Smoothing is a method for obtaining a provably robust classifier from a fixed pretrained one, without any additional training or fine-tuning of the latter. The basic idea is to prepend a custom-trained denoiser to the pretrained classifier, and then apply randomized smoothing. Randomized smoothing is a certified defense that converts any given classifier $f$ into a new smoothed classifier $g$ that is characterized by a non-linear Lipschitz property. When queried at a point $x$, the smoothed classifier $g$ outputs the class that is most likely to be returned by $f$ under isotropic Gaussian perturbations of its inputs. Unfortunately, randomized smoothing requires that the underlying classifier $f$ be robust to relatively large random Gaussian perturbations of the input, which is not the case for off-the-shelf pretrained models. By applying our custom-trained denoiser before the classifier $f$, we can effectively make $f$ robust to such Gaussian perturbations, thereby making it “suitable” for randomized smoothing.
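The prediction rule described above can be sketched in a few lines: perturb the input with isotropic Gaussian noise, denoise each copy, classify it with the fixed pretrained model, and take the majority vote. The sketch below uses toy stand-in functions for the denoiser and classifier (these are illustrative assumptions, not the paper's models), and omits the certification step.

```python
import numpy as np

def smoothed_predict(classifier, denoiser, x, sigma=0.25, n=100, seed=0):
    """Sketch of the denoised-smoothing prediction g(x): majority vote of
    classifier(denoiser(x + noise)) over n Gaussian perturbations."""
    rng = np.random.default_rng(seed)
    counts = {}
    for _ in range(n):
        noisy = x + rng.normal(0.0, sigma, size=x.shape)
        label = classifier(denoiser(noisy))
        counts[label] = counts.get(label, 0) + 1
    return max(counts, key=counts.get)

# Toy stand-ins (assumptions for illustration only): a "denoiser" that
# clips pixel values back to [0, 1], and a linear-threshold "classifier".
denoiser = lambda z: np.clip(z, 0.0, 1.0)
classifier = lambda z: int(z.mean() > 0.5)

x = np.full(16, 0.8)  # a bright toy input the base classifier labels 1
print(smoothed_predict(classifier, denoiser, x))
```

In the actual method, `denoiser` is the custom-trained network and `classifier` is the frozen pretrained model; the certified radius is then computed from the vote counts as in randomized smoothing.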
Source: Denoised Smoothing: A Provable Defense for Pretrained Classifiers
| Task | Papers | Share |
|---|---|---|
| Image Reconstruction | 2 | 25.00% |
| Adversarial Robustness | 2 | 25.00% |
| Image Classification | 2 | 25.00% |
| Denoising | 1 | 12.50% |
| General Classification | 1 | 12.50% |