
Interpretable Machine Learning

22 papers with code · Methodology

State-of-the-art leaderboards

No evaluation results yet. Help compare methods by submitting evaluation metrics.

Greatest papers with code

"Why Should I Trust You?": Explaining the Predictions of Any Classifier

16 Feb 2016 marcotcr/lime

Despite widespread adoption, machine learning models remain mostly black boxes.

IMAGE CLASSIFICATION INTERPRETABLE MACHINE LEARNING
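
As a minimal sketch of how the LIME idea looks in practice with the marcotcr/lime package, the snippet below fits a local linear surrogate around one tabular instance; the dataset and classifier are stand-ins:

```python
# Minimal sketch: explaining one prediction of a scikit-learn classifier
# with the lime package (marcotcr/lime). Dataset and model are placeholders.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_iris()
clf = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

# Fit a local linear surrogate around one instance and list the
# features that most influenced the predicted class.
exp = explainer.explain_instance(data.data[0], clf.predict_proba, num_features=4)
print(exp.as_list())
```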

SmoothGrad: removing noise by adding noise

12 Jun 2017 slundberg/shap

Explaining the output of a deep network remains a challenge.

INTERPRETABLE MACHINE LEARNING
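
The method itself is simple enough to sketch directly: average the input gradient over several noisy copies of the image. A minimal PyTorch version, with the model and image as placeholders and illustrative hyperparameters:

```python
# Minimal SmoothGrad sketch in PyTorch: average input gradients over
# noisy copies of the image. `model` (in eval mode) and `image` (C, H, W)
# are placeholders supplied by the caller.
import torch

def smooth_grad(model, image, target_class, n_samples=25, noise_level=0.15):
    """Return the SmoothGrad saliency map for one image."""
    sigma = noise_level * (image.max() - image.min())
    total = torch.zeros_like(image)
    for _ in range(n_samples):
        noisy = (image + sigma * torch.randn_like(image)).requires_grad_(True)
        score = model(noisy.unsqueeze(0))[0, target_class]
        score.backward()
        total += noisy.grad
    return total / n_samples  # averaging washes out high-frequency gradient noise
```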

A Unified Approach to Interpreting Model Predictions

NeurIPS 2017 slundberg/shap

Understanding why a model makes a certain prediction can be as crucial as the prediction's accuracy in many applications.

FEATURE IMPORTANCE INTERPRETABLE MACHINE LEARNING
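
A minimal sketch with the slundberg/shap package, using TreeExplainer on a placeholder regression forest; the key property is that each row's SHAP values sum to the difference between the model's output and its expected output:

```python
# Minimal sketch: SHAP values for a tree ensemble with the shap package
# (slundberg/shap). Dataset and model are placeholders.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True)
model = RandomForestRegressor(random_state=0).fit(X, y)

# TreeExplainer computes exact SHAP values for tree ensembles; per row,
# the values sum to model(x) minus the expected model output, the
# additive feature-attribution property the paper unifies methods around.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:50])
print(shap_values.shape)  # (50, n_features): one attribution per feature
```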

Learning Important Features Through Propagating Activation Differences

ICML 2017 slundberg/shap

The purported "black box" nature of neural networks is a barrier to adoption in applications where interpretability is essential.

INTERPRETABLE MACHINE LEARNING
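
The core of DeepLIFT is assigning contributions relative to a reference input rather than via raw gradients. A minimal NumPy sketch of the paper's rescale rule for a single ReLU unit (not the full layer-wise framework):

```python
# Minimal sketch of DeepLIFT's rescale rule for one ReLU unit:
# contributions are defined by differences from a reference input,
# so the unit gets a nonzero multiplier even where the gradient is zero.
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def deeplift_relu_contributions(x, x_ref, w, b):
    """Per-input contributions to relu(w @ x + b) vs. the reference x_ref."""
    z, z_ref = w @ x + b, w @ x_ref + b
    delta_z = z - z_ref
    if abs(delta_z) > 1e-7:
        # Rescale rule: multiplier = (difference in output) / (difference in input).
        multiplier = (relu(z) - relu(z_ref)) / delta_z
    else:
        # Fall back to the gradient when the difference is vanishingly small.
        multiplier = float(z > 0)
    # Contributions sum exactly to relu(z) - relu(z_ref).
    return multiplier * w * (x - x_ref)
```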

Understanding Neural Networks Through Deep Visualization

22 Jun 2015 yosinski/deep-visualization-toolbox

The first is a tool that visualizes the activations produced on each layer of a trained convnet as it processes an image or video (e.g. a live webcam stream).

INTERPRETABLE MACHINE LEARNING
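
The toolbox itself is Caffe-based; a rough PyTorch analogue of its activation capture, using forward hooks on a placeholder network (torchvision >= 0.13 weights API assumed), looks like this:

```python
# Capture per-layer activations with forward hooks, in the spirit of the
# deep-visualization-toolbox. Network and input are placeholders.
import torch
import torchvision

model = torchvision.models.vgg16(weights="IMAGENET1K_V1").eval()
activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

for name, module in model.features.named_children():
    module.register_forward_hook(save_activation(f"features.{name}"))

with torch.no_grad():
    model(torch.randn(1, 3, 224, 224))  # placeholder for a real image or frame

for name, act in activations.items():
    print(name, tuple(act.shape))  # activation maps to visualize per layer
```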

Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization

ICCV 2017 jacobgil/pytorch-grad-cam

We propose a technique for producing "visual explanations" for decisions from a large class of CNN-based models, making them more transparent.

IMAGE CLASSIFICATION INTERPRETABLE MACHINE LEARNING VISUAL QUESTION ANSWERING
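
A minimal sketch with the jacobgil/pytorch-grad-cam package; the exact API has shifted across releases, so treat the names as indicative (model, input, and target class are placeholders):

```python
# Minimal Grad-CAM sketch with pytorch-grad-cam (jacobgil/pytorch-grad-cam).
import torch
import torchvision
from pytorch_grad_cam import GradCAM
from pytorch_grad_cam.utils.model_targets import ClassifierOutputTarget

model = torchvision.models.resnet50(weights="IMAGENET1K_V1").eval()
input_tensor = torch.randn(1, 3, 224, 224)  # placeholder for a real image batch

# Grad-CAM weights the chosen conv layer's activation maps by the
# pooled gradients of the target class score.
cam = GradCAM(model=model, target_layers=[model.layer4[-1]])
heatmap = cam(input_tensor=input_tensor, targets=[ClassifierOutputTarget(281)])
print(heatmap.shape)  # (1, 224, 224): saliency map upsampled to input size
```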

iNNvestigate neural networks!

13 Aug 2018 albermax/innvestigate

The presented library iNNvestigate addresses this by providing a common interface and out-of-the-box implementation for many analysis methods, including the reference implementation for PatternNet and PatternAttribution as well as for LRP methods.

INTERPRETABLE MACHINE LEARNING
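
A minimal sketch of that common interface; note that iNNvestigate targets Keras models (TF1-era Keras in v1, tf.keras in v2), so the toy model and random input here are placeholders:

```python
# Minimal sketch with iNNvestigate (albermax/innvestigate): one string
# selects the analysis method. Model and input are placeholders.
import numpy as np
import innvestigate
from tensorflow import keras

# Tiny placeholder classifier; iNNvestigate expects the softmax removed,
# so the model ends in raw logits.
model = keras.Sequential([
    keras.Input(shape=(28, 28, 1)),
    keras.layers.Conv2D(8, 3, activation="relu"),
    keras.layers.Flatten(),
    keras.layers.Dense(10),  # logits, no softmax
])

# "lrp.epsilon" is one of the bundled LRP variants the library provides
# reference implementations for.
analyzer = innvestigate.create_analyzer("lrp.epsilon", model)
relevance = analyzer.analyze(np.random.rand(1, 28, 28, 1).astype("float32"))
print(relevance.shape)  # relevance map with the same shape as the input
```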

The (Un)reliability of saliency methods

ICLR 2018 albermax/innvestigate

Saliency methods aim to explain the predictions of deep neural networks.

INTERPRETABLE MACHINE LEARNING

Interpretable Explanations of Black Boxes by Meaningful Perturbation

ICCV 2017 jacobgil/pytorch-explain-black-box

As machine learning algorithms are increasingly applied to high impact yet high risk tasks, such as medical diagnosis or autonomous driving, it is critical that researchers can explain how such algorithms arrived at their predictions.

INTERPRETABLE MACHINE LEARNING
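
A rough sketch of the paper's idea, not its exact objective: optimize a low-resolution mask that blends the image with a blurred copy so the target class score drops, while a regularizer keeps the deleted region small (model and image are placeholders, all hyperparameters illustrative):

```python
# Mask optimization in the spirit of "meaningful perturbation".
import torch
import torch.nn.functional as F

def explain_by_perturbation(model, image, target_class, steps=300, lam=0.05):
    """image: (1, C, H, W). Returns a mask where low values mark evidence."""
    blurred = F.avg_pool2d(image, 11, stride=1, padding=5)  # crude blur reference
    mask = torch.ones(1, 1, 28, 28, requires_grad=True)  # low-res mask, 1 = keep
    optimizer = torch.optim.Adam([mask], lr=0.1)
    for _ in range(steps):
        m = F.interpolate(mask.clamp(0, 1), size=image.shape[-2:],
                          mode="bilinear", align_corners=False)
        perturbed = m * image + (1 - m) * blurred
        score = torch.softmax(model(perturbed), dim=1)[0, target_class]
        # Minimize the class score, but pay for every pixel deleted, so only
        # regions that actually support the prediction get masked out.
        loss = score + lam * (1 - mask.clamp(0, 1)).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return mask.detach().clamp(0, 1)
```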