Fidelity of Ensemble Aggregation for Saliency Map Explanations using Bayesian Optimization Techniques

4 Jul 2022 · Yannik Mahlau, Christian Nolde

In recent years, an abundance of feature attribution methods for explaining neural networks has been developed. Especially in the field of computer vision, many methods exist for generating saliency maps that provide pixel-level attributions. However, their explanations often contradict each other, and it is not clear which explanation to trust. A natural solution to this problem is the aggregation of multiple explanations. We present and compare different pixel-based aggregation schemes with the goal of generating a new explanation whose fidelity to the model's decision is higher than that of any individual explanation. Using methods from the field of Bayesian Optimization, we incorporate the variance between the individual explanations into the aggregation process. Additionally, we analyze the effect of multiple normalization techniques on ensemble aggregation.

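No code accompanies the paper, so the following is only a minimal sketch of the core idea: normalize each saliency map, then aggregate pixel-wise while penalizing pixels on which the ensemble disagrees, loosely analogous to a lower-confidence-bound acquisition in Bayesian Optimization. The function names, the min-max normalization choice, and the kappa trade-off parameter are all assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def normalize_minmax(saliency):
    """Scale a single saliency map to [0, 1] (one common normalization choice)."""
    lo, hi = saliency.min(), saliency.max()
    return (saliency - lo) / (hi - lo + 1e-12)

def aggregate_variance_aware(maps, kappa=1.0):
    """Pixel-wise ensemble aggregation that penalizes disagreement.

    Loosely analogous to a lower confidence bound in Bayesian Optimization:
    pixels where the explanations disagree (high variance) are down-weighted.
    Hypothetical sketch; kappa controls the mean/variance trade-off.
    """
    stack = np.stack([normalize_minmax(m) for m in maps])  # shape: (n_maps, H, W)
    mean = stack.mean(axis=0)
    std = stack.std(axis=0)
    return mean - kappa * std

# Usage with three synthetic saliency maps of the same spatial size:
rng = np.random.default_rng(0)
maps = [rng.random((224, 224)) for _ in range(3)]
aggregated = aggregate_variance_aware(maps, kappa=0.5)
```

Setting kappa to 0 recovers a plain pixel-wise mean; larger values increasingly suppress pixels where the individual explanations conflict.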