Explanation Fidelity Evaluation

6 papers with code • 6 benchmarks • 6 datasets

Evaluation of how faithfully an explanation reflects the behavior of the underlying model it explains.
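As a concrete illustration, a common way to quantify fidelity is a deletion-style check: occlude the features an explanation marks as most important and measure how much the model's prediction changes. The sketch below is generic and uses an arbitrary scikit-learn classifier, a mean-value baseline, and k = 3 purely as illustrative assumptions; it is not the protocol of any specific paper listed here.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Toy black box: any classifier exposing predict / predict_proba would do.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

def deletion_fidelity(model, x, importance, k=3, baseline=None):
    """Drop in the predicted class probability after replacing the k most
    important features with a baseline value (here: the feature means)."""
    baseline = X.mean(axis=0) if baseline is None else baseline
    top_k = np.argsort(-np.abs(importance))[:k]
    x_pert = x.copy()
    x_pert[top_k] = baseline[top_k]
    cls = model.predict(x.reshape(1, -1))[0]
    p_orig = model.predict_proba(x.reshape(1, -1))[0, cls]
    p_pert = model.predict_proba(x_pert.reshape(1, -1))[0, cls]
    return p_orig - p_pert  # larger drop -> more faithful attribution

x = X[0]
attribution = model.feature_importances_                  # stand-in explanation
random_attr = np.random.default_rng(0).normal(size=x.shape)
print("fidelity (model importances):", deletion_fidelity(model, x, attribution))
print("fidelity (random scores):    ", deletion_fidelity(model, x, random_attr))
```

A faithful attribution should produce a noticeably larger probability drop than randomly ranked features.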

Most implemented papers

EXPLAN: Explaining Black-box Classifiers using Adaptive Neighborhood Generation

peymanrasouli/EXPLAN International Joint Conference on Neural Networks (IJCNN) 2020

Defining a representative locality is a pressing challenge in perturbation-based explanation methods, as it directly influences the fidelity and soundness of the resulting explanations.
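For context, perturbation-based explainers typically fit a simple surrogate on samples drawn around the instance of interest and report how well that surrogate matches the black box locally. The sketch below uses a plain Gaussian neighborhood and a ridge surrogate purely as placeholders; it does not reproduce EXPLAN's adaptive neighborhood generation.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

def local_fidelity(black_box_prob, x, n_samples=1000, scale=0.1, seed=0):
    """R^2 agreement between a local linear surrogate and the black box
    on a perturbation neighborhood around the instance x."""
    rng = np.random.default_rng(seed)
    Z = x + rng.normal(scale=scale, size=(n_samples, x.shape[0]))
    target = black_box_prob(Z)                    # black-box outputs on neighbors
    surrogate = Ridge(alpha=1.0).fit(Z, target)   # local additive explanation
    return r2_score(target, surrogate.predict(Z))

# e.g., with the model and X from the sketch above:
# print(local_fidelity(lambda Z: model.predict_proba(Z)[:, 1], X[0]))
```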

Developing a Fidelity Evaluation Approach for Interpretable Machine Learning

Mythreyi-V/three-phase-fidelity-evaluation 16 Jun 2021

Although modern machine learning and deep learning methods enable complex, in-depth data analytics, the resulting predictive models are often highly complex and lack transparency.

Towards Better Understanding Attribution Methods

sukrutrao/attribution-evaluation CVPR 2022

Finally, we propose a post-processing smoothing step that significantly improves the performance of some attribution methods, and discuss its applicability.
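The exact smoothing procedure is not described on this page; the snippet below only shows the generic idea of post-processing a 2D attribution map with a Gaussian filter (sigma chosen arbitrarily) before re-running the evaluation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

attribution = np.random.default_rng(0).normal(size=(224, 224))  # dummy 2D map
smoothed = gaussian_filter(attribution, sigma=2.0)               # post-processing step
# The smoothed map is then scored with the same fidelity/localization metrics
# as the raw attribution map.
```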

Can local explanation techniques explain linear additive models?

amir-rahnama/can_local_explanations_explain_lam Data Mining and Knowledge Discovery 2023

Local model-agnostic additive explanation techniques decompose the predicted output of a black-box model into additive feature importance scores.
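To make the additive property concrete: the feature scores are expected to sum to the difference between the model's prediction for the instance and for a baseline (completeness / local accuracy). The sketch below checks this on a linear "black box", where the exact additive attribution is simply coefficient times feature deviation; all variable names and the baseline choice are illustrative.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ np.array([2.0, -1.0, 0.5, 0.0, 3.0]) + 1.0
f = LinearRegression().fit(X, y)                 # "black box" that is truly additive

x, baseline = X[0], X.mean(axis=0)
phi = f.coef_ * (x - baseline)                   # additive feature importance scores
pred_gap = f.predict(x.reshape(1, -1))[0] - f.predict(baseline.reshape(1, -1))[0]
assert np.isclose(phi.sum(), pred_gap)           # completeness holds exactly here
```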

SAME: Uncovering GNN Black Box with Structure-aware Shapley-based Multipiece Explanations

same2023neurips/same NeurIPS 2023

Post-hoc explanation techniques for graph neural networks (GNNs) provide economical ways to open up black-box graph models without retraining them.
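A standard fidelity measure in this setting (often called Fidelity+) removes the edges an explanation highlights and records the drop in the predicted class probability. The sketch below assumes a hypothetical gnn_prob(X, A) interface over dense NumPy arrays; it is not SAME's implementation.

```python
def fidelity_plus(gnn_prob, X, A, edge_mask, target_class):
    """Probability drop for `target_class` when the explanatory edges are removed.

    gnn_prob(X, A) -> class-probability vector; X, A, edge_mask are NumPy arrays,
    with edge_mask a 0/1 matrix marking the edges selected by the explanation.
    """
    A_reduced = A * (1 - edge_mask)              # delete the explanatory edges
    p_full = gnn_prob(X, A)[target_class]
    p_reduced = gnn_prob(X, A_reduced)[target_class]
    return p_full - p_reduced                    # larger drop -> edges matter more
```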