Given the great success of Deep Neural Networks (DNNs) and their black-box nature, the interpretability of these models has become an important issue. The majority of previous research has focused on post-hoc interpretation of a trained model. Recently, however, adversarial training has shown that a model can acquire an interpretable input gradient through training alone. Unfortunately, adversarial training is computationally expensive, making it an inefficient route to interpretability. To resolve this problem, we construct an approximation of the adversarial perturbations and discover a connection between adversarial training and amplitude modulation.
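As a point of reference for the kind of perturbation being approximated, the sketch below shows a single-step, gradient-sign (FGSM-style) adversarial perturbation for a simple logistic model. This is only an illustration of the general idea of deriving a perturbation from the input gradient; the paper's own approximation is not specified here, and the model, loss, and `eps` value are assumptions for the example.

```python
import numpy as np

def fgsm_perturbation(w, x, y, eps=0.1):
    """One-step perturbation delta = eps * sign(grad_x loss) for a
    logistic model p = sigmoid(w . x) with label y in {0, 1}.
    Illustrative only; not the paper's specific construction."""
    p = 1.0 / (1.0 + np.exp(-np.dot(w, x)))  # model prediction
    grad_x = (p - y) * w                     # d(cross-entropy)/d(x)
    return eps * np.sign(grad_x)             # gradient-sign step

# Hypothetical weights and input, chosen only for demonstration.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.2, 0.1, -0.3])
delta = fgsm_perturbation(w, x, y=1.0)
print(delta)  # each coordinate has magnitude eps
```

Adversarial training would then minimize the loss on `x + delta` rather than on `x`; the cost comes from recomputing such perturbations at every training step, which is the inefficiency the abstract refers to.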
eXplainable AI (XAI) has long been recognized as an important topic, yet the field still lacks rigorous definitions and fair evaluation metrics.