Recent work shows that deep neural networks trained on image-classification datasets are biased towards textures.
A recent paper by Liu et al. combines adversarial training with Bayesian neural networks (BNNs) and suggests that adversarially trained BNNs are more robust to adversarial attacks than their non-Bayesian counterparts.
While great progress has been made at making neural networks effective across a wide range of visual tasks, most models remain surprisingly vulnerable to small input perturbations.
Deep neural network (DNN) predictions have been shown to be vulnerable to carefully crafted adversarial perturbations.
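To illustrate how such perturbations are crafted, here is a minimal sketch in the spirit of the fast gradient sign method (FGSM), applied to a toy linear classifier so it runs with NumPy alone; the weights and input here are made up for demonstration, not taken from any of the cited papers.

```python
import numpy as np

# Toy linear classifier: score = w . x + b, predict class 1 if score > 0.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

# FGSM-style perturbation: for a linear model, the gradient of the score
# w.r.t. the input is just w, so an attack steps against the true class's
# score direction: x_adv = x + eps * sign(grad of a margin loss w.r.t. x).
def fgsm(x, y, eps):
    y_sign = 1.0 if y == 1 else -1.0
    grad_of_loss = -y_sign * w
    return x + eps * np.sign(grad_of_loss)

x = np.array([0.5, -0.3, 0.2])     # correctly classified as class 1
x_adv = fgsm(x, y=1, eps=0.6)      # small L_inf-bounded perturbation
print(predict(x), predict(x_adv))  # prints "1 0": the label flips
```

For a deep network the same recipe applies, except the input gradient is obtained by backpropagation rather than read off the weights directly.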
Recently, researchers have started decomposing deep neural network models according to their semantics or functions.
This paper presents deep compositional grammatical architectures that harness the best of both worlds: grammar models and DNNs.
We propose an adversarial defense method that achieves state-of-the-art performance among attack-agnostic defenses while remaining robust across input resolutions, adversarial perturbation scales, and dataset sizes.
We also propose a new dataset, ImageNet-P, which enables researchers to benchmark a classifier's robustness to common perturbations.
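Benchmarks of this kind typically score a classifier on how often its prediction changes across a sequence of gradually perturbed versions of the same image. A simplified flip-rate sketch, with hypothetical prediction sequences standing in for real model outputs:

```python
# Simplified flip-rate metric: given a classifier's predicted labels on
# consecutive, gradually perturbed frames of the same image, count how
# often the prediction changes between adjacent frames.
def flip_probability(pred_sequences):
    flips = total = 0
    for preds in pred_sequences:
        for a, b in zip(preds, preds[1:]):
            flips += int(a != b)
            total += 1
    return flips / total

# Hypothetical label predictions over 5 perturbed frames for two images.
seqs = [[3, 3, 3, 7, 7], [1, 1, 1, 1, 1]]
print(flip_probability(seqs))  # 1 flip out of 8 transitions -> 0.125
```

A robust classifier keeps this rate low, since its predictions should be stable under small, semantically irrelevant perturbations.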