Comment on "Adv-BNN: Improved Adversarial Defense through Robust Bayesian Neural Network"
A recent paper by Liu et al. combines the topics of adversarial training and Bayesian Neural Networks (BNNs) and suggests that adversarially trained BNNs are more robust against adversarial attacks than their non-Bayesian counterparts. Here, I analyze the proposed defense and suggest that one needs to adjust the adversarial attack to incorporate the stochastic nature of a Bayesian network in order to perform an accurate evaluation of its robustness. Using this new type of attack, I show that there appears to be no strong evidence for higher robustness of the adversarially trained BNNs.
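To illustrate the kind of adjustment the abstract refers to, here is a minimal sketch of an L-infinity PGD attack whose gradient at each step is averaged over several stochastic forward passes of the network, so that the perturbation targets the expectation over the weight posterior rather than a single sampled network. This is only an illustrative assumption of how such an attack could look; it is not the author's code, and all names (`model`, `stochastic_pgd`, the hyperparameters) are hypothetical. It assumes a PyTorch model that resamples its weights from the posterior on every forward call.

```python
# Hedged sketch: PGD with gradients averaged over several stochastic forward
# passes of a Bayesian model (an expectation-over-posterior-samples attack).
# `model` is assumed to resample its weights on every forward call; all names
# below are illustrative, not taken from the paper or comment.
import torch
import torch.nn.functional as F


def stochastic_pgd(model, x, y, eps=8 / 255, alpha=2 / 255, steps=40, n_samples=10):
    """L-inf PGD where each gradient step averages over n_samples posterior draws."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Average the loss over several stochastic forward passes so the
        # gradient approximates the expectation over the weight posterior.
        loss = torch.stack(
            [F.cross_entropy(model(x_adv), y) for _ in range(n_samples)]
        ).mean()
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project into the eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)             # keep a valid image range
    return x_adv.detach()
```

With `n_samples=1` this reduces to ordinary PGD against a single sampled network; increasing the number of samples per step is what accounts for the stochasticity of the Bayesian model during the attack.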