SmoothGrad: removing noise by adding noise

12 Jun 2017 · slundberg/shap

Explaining the output of a deep network remains a challenge.
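SmoothGrad's core idea is simple: average the gradient over several noise-perturbed copies of the input to produce a less noisy sensitivity map. A minimal NumPy sketch, assuming a `grad_fn` that returns the model's gradient with respect to the input (the helper name and defaults are illustrative, not from the paper's code):

```python
import numpy as np

def smoothgrad(grad_fn, x, n_samples=50, sigma=0.15, seed=0):
    """SmoothGrad sketch: average grad_fn over Gaussian-noised copies of x.

    grad_fn  -- gradient of the model output w.r.t. the input (hypothetical helper)
    sigma    -- standard deviation of the added Gaussian noise
    """
    rng = np.random.default_rng(seed)
    grads = [grad_fn(x + rng.normal(0.0, sigma, size=x.shape))
             for _ in range(n_samples)]
    return np.mean(grads, axis=0)

# Toy model f(x) = sum(x**2), whose gradient is 2*x.
x = np.array([1.0, -2.0, 3.0])
sg = smoothgrad(lambda v: 2.0 * v, x, n_samples=2000, sigma=0.1)
# sg ≈ [2., -4., 6.]
```

For a linear gradient the averaging changes nothing in expectation; the benefit appears for deep networks whose raw gradients fluctuate sharply between nearby inputs.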

INTERPRETABLE MACHINE LEARNING

Learning Important Features Through Propagating Activation Differences

ICML 2017 · slundberg/shap

The purported "black box" nature of neural networks is a barrier to adoption in applications where interpretability is essential.

INTERPRETABLE MACHINE LEARNING

A Unified Approach to Interpreting Model Predictions

NeurIPS 2017 · slundberg/shap

Understanding why a model makes a certain prediction can be as crucial as the prediction's accuracy in many applications.

FEATURE IMPORTANCE INTERPRETABLE MACHINE LEARNING

Consistent Individualized Feature Attribution for Tree Ensembles

12 Feb 2018 · slundberg/shap

A unified approach to explain the output of any machine learning model.
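The unified approach behind SHAP attributes a prediction to features via Shapley values: each feature's average marginal contribution over all coalitions of the other features. A brute-force sketch (exponential in the number of features; SHAP's own algorithms approximate this or exploit model structure, as the tree-ensemble paper above does). Here, features outside a coalition are replaced by a baseline value, one common convention among several:

```python
import itertools
import math
import numpy as np

def shapley_values(f, x, baseline):
    """Exact Shapley values by enumerating feature coalitions.

    f        -- model taking a feature vector, returning a scalar
    x        -- input to explain
    baseline -- reference input used for "absent" features
    """
    n = len(x)
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in itertools.combinations(others, k):
                # Coalition S present, feature i absent vs. present.
                z = baseline.copy()
                z[list(S)] = x[list(S)]
                z_i = z.copy()
                z_i[i] = x[i]
                weight = math.factorial(k) * math.factorial(n - k - 1) / math.factorial(n)
                phi[i] += weight * (f(z_i) - f(z))
    return phi

# For a linear model f(v) = w @ v with a zero baseline,
# the Shapley value of feature i is simply w_i * x_i.
w = np.array([1.0, 2.0, 3.0])
phi = shapley_values(lambda v: float(w @ v), np.array([1.0, 1.0, 1.0]), np.zeros(3))
# phi ≈ [1., 2., 3.]
```

A key property visible here is local accuracy: the attributions sum to `f(x) - f(baseline)`, which is what makes Shapley values a consistent way to divide a prediction among features.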
