
Interpretable Machine Learning

30 papers with code · Methodology

Leaderboards

No evaluation results yet. Help compare methods by submitting evaluation metrics.

Greatest papers with code

SmoothGrad: removing noise by adding noise

12 Jun 2017 · slundberg/shap

Explaining the output of a deep network remains a challenge.

INTERPRETABLE MACHINE LEARNING
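SmoothGrad sharpens gradient-based saliency maps by averaging them over several noise-perturbed copies of the input. A minimal PyTorch sketch of that averaging step, assuming a classifier `model` and a single batched input `x` (both stand-ins, not the paper's reference code; the paper suggests noise on the order of 10–20% of the input range):

```python
import torch

def smoothgrad(model, x, target_class, n_samples=25, noise_std=0.15):
    """Average the input gradient over noisy copies of x (shape 1 x C x H x W)."""
    model.eval()
    grads = torch.zeros_like(x)
    for _ in range(n_samples):
        # add Gaussian noise, then take the gradient of the class score w.r.t. the input
        noisy = (x + noise_std * torch.randn_like(x)).requires_grad_(True)
        model(noisy)[0, target_class].backward()
        grads += noisy.grad
    return grads / n_samples
```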

A Unified Approach to Interpreting Model Predictions

NeurIPS 2017 · slundberg/shap

Understanding why a model makes a certain prediction can be as crucial as the prediction's accuracy in many applications.

FEATURE IMPORTANCE INTERPRETABLE MACHINE LEARNING
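The linked shap package implements the paper's SHAP values for many model classes. A hedged usage sketch with a scikit-learn forest as the model; the dataset and estimator are illustrative, not from the paper:

```python
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True)
model = RandomForestRegressor(n_estimators=100).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
print(shap_values[0])  # additive per-feature attributions for the first sample
```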

Learning Important Features Through Propagating Activation Differences

ICML 2017 · slundberg/shap

Here we present DeepLIFT (Deep Learning Important FeaTures), a method for decomposing the output prediction of a neural network on a specific input by backpropagating the contributions of all neurons in the network to every feature of the input.

INTERPRETABLE MACHINE LEARNING
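shap's DeepExplainer (same repository) builds on DeepLIFT's idea of backpropagating contributions relative to a reference input. A minimal sketch with a toy PyTorch model; all names and shapes here are illustrative:

```python
import shap
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
background = torch.randn(100, 4)   # reference inputs contributions are measured against
test_inputs = torch.randn(5, 4)

explainer = shap.DeepExplainer(model, background)
shap_values = explainer.shap_values(test_inputs)  # one attribution array per output
```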

"Why Should I Trust You?": Explaining the Predictions of Any Classifier

16 Feb 2016 · marcotcr/lime

Despite widespread adoption, machine learning models remain mostly black boxes.

IMAGE CLASSIFICATION INTERPRETABLE MACHINE LEARNING
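The accompanying lime package explains a single prediction by fitting a local, interpretable surrogate around it. A hedged sketch on tabular data; the iris classifier is a stand-in for any black-box model:

```python
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
clf = RandomForestClassifier().fit(X, y)

explainer = LimeTabularExplainer(X, mode="classification")
# perturb the instance, query the black box, and fit a sparse linear surrogate
exp = explainer.explain_instance(X[0], clf.predict_proba, num_features=4)
print(exp.as_list())  # (feature condition, weight) pairs from the local model
```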

Understanding Neural Networks Through Deep Visualization

22 Jun 2015 · yosinski/deep-visualization-toolbox

The first is a tool that visualizes the activations produced on each layer of a trained convnet as it processes an image or video (e.g., a live webcam stream).

INTERPRETABLE MACHINE LEARNING
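The toolbox itself is built on Caffe; the underlying idea, capturing every layer's activations as an image flows through the network, can be sketched framework-agnostically with PyTorch forward hooks (a hedged re-implementation, not the toolbox's API):

```python
import torch
from torchvision import models

net = models.vgg16(weights=None).eval()  # untrained stand-in; load weights for real use
activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

for i, layer in enumerate(net.features):
    layer.register_forward_hook(save_activation(f"features.{i}"))

net(torch.randn(1, 3, 224, 224))  # stand-in for a preprocessed frame
for name, act in activations.items():
    print(name, tuple(act.shape))  # one feature-map stack per layer, ready to visualize
```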

iNNvestigate neural networks!

13 Aug 2018 · albermax/innvestigate

The iNNvestigate library addresses this by providing a common interface and out-of-the-box implementations of many analysis methods, including the reference implementations of PatternNet and PatternAttribution as well as LRP methods.

INTERPRETABLE MACHINE LEARNING
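That common interface reduces every supported method to the same two calls. A hedged sketch in the style of the repository's README; `model` and `inputs` are placeholders for a Keras classifier and a batch of inputs:

```python
import innvestigate

# analyze pre-softmax scores, as most attribution methods expect
model_wo_softmax = innvestigate.model_wo_softmax(model)  # placeholder Keras model
analyzer = innvestigate.create_analyzer("lrp.z", model_wo_softmax)
analysis = analyzer.analyze(inputs)  # one relevance map per input in the batch
```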

The (Un)reliability of saliency methods

ICLR 2018 · albermax/innvestigate

Saliency methods aim to explain the predictions of deep neural networks.

INTERPRETABLE MACHINE LEARNING

Interpretable Explanations of Black Boxes by Meaningful Perturbation

ICCV 2017 · jacobgil/pytorch-explain-black-box

As machine learning algorithms are increasingly applied to high impact yet high risk tasks, such as medical diagnosis or autonomous driving, it is critical that researchers can explain how such algorithms arrived at their predictions.

INTERPRETABLE MACHINE LEARNING
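The paper's core idea is to optimize a minimal per-pixel mask whose perturbation (e.g., a blur) destroys the class score, so the masked-out region marks the evidence behind the prediction. A hedged PyTorch sketch of that optimization loop; the average-pool blur, hyperparameters, and `model` are illustrative stand-ins, not the repository's implementation:

```python
import torch
import torch.nn.functional as F

def perturbation_mask(model, x, target_class, steps=300, lam=0.05, lr=0.1):
    """Learn a mask in [0, 1]: 1 keeps a pixel, 0 replaces it with a blurred version."""
    blurred = F.avg_pool2d(x, kernel_size=11, stride=1, padding=5)  # stand-in perturbation
    mask = torch.ones(1, 1, x.shape[2], x.shape[3], requires_grad=True)
    opt = torch.optim.Adam([mask], lr=lr)
    for _ in range(steps):
        m = mask.clamp(0, 1)
        composite = m * x + (1 - m) * blurred
        score = torch.softmax(model(composite), dim=1)[0, target_class]
        # drive the class score down while deleting as little of the image as possible
        loss = score + lam * (1 - m).abs().mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return mask.detach().clamp(0, 1)  # low values mark evidence for the prediction
```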