SmoothLRP: Smoothing Explanations of Neural Network Decisions by Averaging over Stochastic Input Variations

1 Jan 2021  ·  Arne Peter Raulf, Ben Luis Hack, Sina Däubener, Axel Mosig, Asja Fischer

With the increasing use of neural networks in safety-critical domains, the need for understandable explanations of their predictions is rising. Several methods have been developed that identify the most relevant inputs, such as sensitivity analysis and, most prominently, layer-wise relevance propagation (LRP). It has been shown that the noise in explanations from sensitivity analysis can be heavily reduced by averaging over noisy versions of the input image, a method referred to as SmoothGrad. We investigate the application of the same principle to LRP and find that it smooths the resulting relevance function, leading to improved explanations for state-of-the-art LRP rules. The method, which we refer to as SmoothLRP, produces good explanations even on poorly trained neural networks, where former methods show unsatisfactory results. Interestingly, we observed that SmoothLRP can also be applied to the identification of adversarial examples.
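The averaging principle described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `explain` stands in for any attribution method (e.g. an LRP relevance function for a trained model), and the function names, sample count, and noise scale are illustrative assumptions.

```python
import numpy as np

def smooth_explanation(explain, x, n_samples=50, noise_scale=0.1, seed=0):
    """SmoothGrad-style averaging of an attribution map, as transferred
    to LRP by the paper: average the explanation over noisy copies of
    the input, R_smooth(x) = (1/n) * sum_i R(x + eps_i), eps_i ~ N(0, sigma^2).

    `explain` is a placeholder for any relevance/attribution function
    that maps an input array to a relevance array of the same shape.
    """
    rng = np.random.default_rng(seed)
    total = np.zeros_like(x, dtype=float)
    for _ in range(n_samples):
        noisy = x + rng.normal(0.0, noise_scale, size=x.shape)
        total += explain(noisy)  # accumulate relevance of the noisy input
    return total / n_samples     # average over the stochastic variations
```

For a smooth explanation function the average converges to the explanation of the clean input as the sample count grows; the practical benefit reported in the paper comes from applying this averaging to LRP relevance maps, which are noisy for real networks.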
