FLOAT: Fast Learnable Once-for-all Adversarial Training for Tunable Trade-off between Accuracy and Robustness

29 Sep 2021 · Souvik Kundu, Peter Anthony Beerel, Sairam Sundaresan

Training a model that is robust to adversarially perturbed images without compromising accuracy on clean images has proven challenging. Recent research has tried to resolve this issue by incorporating an additional layer after each batch-normalization layer in a network that implements feature-wise linear modulation (FiLM). These extra layers enable in-situ calibration of a trained model, allowing the user to configure the desired priority between robustness and clean-image performance after deployment. However, they significantly increase training time and parameter count, and add latency, which can prove costly for time- or memory-constrained applications. In this paper, we present Fast Learnable Once-for-all Adversarial Training (FLOAT), which transforms the weight tensors without extra layers, incurring no significant increase in parameter count, training time, or network latency compared to standard adversarial training. In particular, we add configurable scaled noise to the weight tensors, enabling a ‘continuous’ trade-off between clean and adversarial performance. Additionally, we extend FLOAT to slimmable neural networks to enable a three-way in-situ trade-off between robustness, accuracy, and complexity. Extensive experiments show that FLOAT yields state-of-the-art performance, improving clean- and perturbed-image classification by up to ∼6.5% and ∼14.5%, respectively, while requiring up to 1.47× fewer parameters under similar hyperparameter settings compared to FiLM-based alternatives.
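The abstract's key mechanism, conditioning a network by adding configurable scaled noise to its weight tensors, can be illustrated with a minimal PyTorch sketch. The class name `FloatConv2d`, the learnable per-layer scale `alpha`, and the Gaussian noise model below are assumptions made for illustration, not the authors' reference implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FloatConv2d(nn.Module):
    """Conv layer whose weights receive configurable scaled noise.

    `lam` in [0, 1] selects the operating point at inference time:
    0.0 -> clean-image mode (no noise), 1.0 -> fully robust mode.
    `alpha` is a learnable per-layer noise scale (an assumption here).
    """
    def __init__(self, in_ch, out_ch, k, stride=1, padding=0):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, k, stride, padding, bias=False)
        self.alpha = nn.Parameter(torch.tensor(0.1))  # learnable noise scale

    def forward(self, x, lam=0.0):
        w = self.conv.weight
        # Scale Gaussian noise by the weight std so its magnitude
        # tracks the layer; the exact noise model is an assumption.
        noise = torch.randn_like(w) * w.detach().std()
        w_noisy = w + lam * self.alpha * noise
        return F.conv2d(x, w_noisy, stride=self.conv.stride,
                        padding=self.conv.padding)

# Usage: the same trained weights serve every operating point.
layer = FloatConv2d(3, 16, 3, padding=1)
x = torch.randn(1, 3, 32, 32)
y_clean = layer(x, lam=0.0)   # prioritize clean accuracy
y_robust = layer(x, lam=1.0)  # prioritize robustness
```

Because `lam` is chosen per input at inference rather than fixed at training time, a single set of trained weights covers the whole clean/robust trade-off, which is what makes the calibration "once-for-all".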
