Deep learning algorithms have increasingly been shown to lack robustness to
simple adversarial examples (AdvX). An equally troubling observation is that
these adversarial examples transfer between different architectures trained on
different datasets. We investigate the transferability of adversarial examples
between models via the angle between their input-output Jacobians. To
demonstrate the relevance of this approach, we perform case studies
that involve jointly training pairs of models. These case studies empirically
justify the theoretical intuitions for why the angle between gradients is a
fundamental quantity in AdvX transferability. Furthermore, we consider the
asymmetry of AdvX transferability between two models of the same architecture
and explain it in terms of differences in gradient norms between the models.
Lastly, we provide a simple modification to existing training setups that
reduces the transferability of adversarial examples between pairs of models.
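As a minimal illustration of the central quantity above (not the paper's
implementation), one can measure the alignment of two models' input gradients
via their cosine similarity. The sketch below uses two hypothetical logistic
"models" with analytic input gradients; all parameters and inputs are
illustrative choices, not values from the paper:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def input_gradient(w, x, y):
    # Gradient of the binary cross-entropy loss w.r.t. the input x for a
    # logistic model p = sigmoid(w . x): dL/dx = (p - y) * w.
    p = sigmoid(w @ x)
    return (p - y) * w

def gradient_cosine(g1, g2):
    # Cosine of the angle between two gradient vectors; values near 1
    # indicate aligned gradients, the regime in which an adversarial
    # perturbation crafted for one model tends to transfer to the other.
    return float(g1 @ g2 / (np.linalg.norm(g1) * np.linalg.norm(g2)))

# Hypothetical weights for two similar models, and one shared input/label.
w1 = np.array([1.0, 2.0, -1.0])
w2 = np.array([1.2, 1.8, -0.9])
x = np.array([0.5, -0.3, 0.8])
y = 1.0

g1 = input_gradient(w1, x, y)
g2 = input_gradient(w2, x, y)
cos = gradient_cosine(g1, g2)  # close to 1 for these similar models
```

For deep networks the input gradients would come from backpropagation rather
than a closed form, but the alignment measure is the same.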