Greatest papers with code

A Benchmark for Interpretability Methods in Deep Neural Networks

NeurIPS 2019 · google-research/google-research

We propose an empirical measure of the approximate accuracy of feature importance estimates in deep neural networks.

FEATURE IMPORTANCE · IMAGE CLASSIFICATION
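
The paper's benchmark, ROAR (RemOve And Retrain), scores an importance estimator by ablating the inputs it ranks highest, retraining from scratch, and measuring the accuracy drop. A minimal sketch of that loop, simplified to a single global feature ranking (the paper ranks per image); `train_model` and `importance` are hypothetical placeholders for your own trainer and saliency estimate:

```python
import numpy as np

def roar_curve(X_train, y_train, X_test, y_test, importance, fractions,
               train_model):
    """RemOve And Retrain: ablate the top-t fraction of features ranked by
    `importance`, retrain from scratch, and record test accuracy."""
    order = np.argsort(importance)[::-1]      # most important first
    mean_value = X_train.mean(axis=0)         # per-feature mean as ablation value
    accuracies = []
    for t in fractions:
        ablated = order[:int(t * len(order))]
        Xtr, Xte = X_train.copy(), X_test.copy()
        Xtr[:, ablated] = mean_value[ablated]  # remove the "informative" inputs
        Xte[:, ablated] = mean_value[ablated]
        model = train_model(Xtr, y_train)      # retrain on the degraded data
        accuracies.append((model.predict(Xte) == y_test).mean())
    return accuracies   # a sharp drop suggests a faithful estimator
```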

A Unified Approach to Interpreting Model Predictions

NeurIPS 2017 · slundberg/shap

Understanding why a model makes a certain prediction can be as crucial as the prediction's accuracy in many applications.

FEATURE IMPORTANCE · INTERPRETABLE MACHINE LEARNING
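
The accompanying shap library implements the paper's SHAP values. A minimal sketch of the standard tree-explainer workflow (the dataset and XGBoost model are illustrative; any tree ensemble works):

```python
import shap
import xgboost
from sklearn.model_selection import train_test_split

# Illustrative tabular dataset bundled with shap.
X, y = shap.datasets.adult()
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = xgboost.XGBClassifier().fit(X_train, y_train)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Global view: mean |SHAP| per feature, i.e. feature importance.
shap.summary_plot(shap_values, X_test)
```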

Distributed and parallel time series feature extraction for industrial big data applications

25 Oct 2016 · blue-yonder/tsfresh

The all-relevant problem of feature selection is the identification of all strongly and weakly relevant attributes. This problem is especially hard to solve for time series classification and regression in industrial applications such as predictive maintenance or production line optimization, for which each label or regression target is associated with several time series and meta-information simultaneously.

CLASSIFICATION · FEATURE IMPORTANCE · FEATURE SELECTION · TIME SERIES · TIME SERIES CLASSIFICATION
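
tsfresh extracts hundreds of features per series, then filters them by per-feature hypothesis tests against the target. A minimal sketch, assuming a long-format DataFrame `df` with columns `id`, `time`, `value` and a target Series `y` indexed by series `id`:

```python
from tsfresh import extract_features, select_features
from tsfresh.utilities.dataframe_functions import impute

# df: long format, one row per measurement; y: one label per series id.
features = extract_features(df, column_id="id", column_sort="time")

impute(features)                          # replace NaN/inf from degenerate series
selected = select_features(features, y)   # keep statistically relevant features
print(selected.columns)
```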

FiBiNET: Combining Feature Importance and Bilinear feature Interaction for Click-Through Rate Prediction

23 May 2019 · shenweichen/DeepCTR

In this paper, a new model named FiBiNET (short for Feature Importance and Bilinear feature Interaction NETwork) is proposed to dynamically learn feature importance and fine-grained feature interactions.

CLICK-THROUGH RATE PREDICTION · FEATURE IMPORTANCE
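
shenweichen/DeepCTR ships FiBiNET as a ready-made Keras model. A sketch of wiring it up on a toy two-field CTR task (the schema and data are illustrative, and the feature-column import path varies across DeepCTR versions):

```python
import numpy as np
from deepctr.models import FiBiNET
from deepctr.feature_column import SparseFeat, get_feature_names

# Illustrative schema: two categorical fields.
feature_columns = [SparseFeat("user_id", vocabulary_size=1000, embedding_dim=8),
                   SparseFeat("item_id", vocabulary_size=5000, embedding_dim=8)]
feature_names = get_feature_names(feature_columns)

# FiBiNET: a SENET layer reweights field embeddings by learned importance,
# then bilinear layers model fine-grained feature interactions.
model = FiBiNET(feature_columns, feature_columns, task="binary")
model.compile("adam", "binary_crossentropy", metrics=["AUC"])

# Dummy training data keyed by feature name.
data = {"user_id": np.random.randint(0, 1000, 256),
        "item_id": np.random.randint(0, 5000, 256)}
labels = np.random.randint(0, 2, 256)
model.fit(data, labels, batch_size=64, epochs=1)
```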

Attention is not Explanation

NAACL 2019 · jessevig/bertviz

Attention mechanisms have seen wide adoption in neural NLP models.

FEATURE IMPORTANCE
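
The linked repo, jessevig/bertviz, visualizes the attention weights whose explanatory value the paper questions. A minimal notebook sketch with a stock Hugging Face BERT (the sentence is illustrative):

```python
from transformers import AutoModel, AutoTokenizer
from bertviz import head_view

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)

inputs = tokenizer.encode("the cat sat on the mat", return_tensors="pt")
outputs = model(inputs)          # .attentions: one tensor per layer
tokens = tokenizer.convert_ids_to_tokens(inputs[0])

head_view(outputs.attentions, tokens)   # interactive view; renders in Jupyter
```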

Towards Unifying Feature Attribution and Counterfactual Explanations: Different Means to the Same End

10 Nov 2020 · interpretml/DiCE

These feature attributions convey how important a feature is to changing the classification outcome of a model, especially whether a subset of features is necessary and/or sufficient for that change, which current feature attribution methods are unable to provide.

CAUSAL INFERENCE · FEATURE IMPORTANCE
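
interpretml/DiCE generates the counterfactual examples the paper builds attributions from. A minimal sketch, assuming a trained sklearn classifier `clf` and a DataFrame `df` whose outcome column is `income` (feature names here are illustrative):

```python
import dice_ml

# Wrap data and model; continuous_features guides perturbation ranges.
data = dice_ml.Data(dataframe=df,
                    continuous_features=["age", "hours_per_week"],
                    outcome_name="income")
model = dice_ml.Model(model=clf, backend="sklearn")

explainer = dice_ml.Dice(data, model, method="random")

# Three minimal input changes that flip the model's prediction.
cf = explainer.generate_counterfactuals(df.drop(columns="income").iloc[:1],
                                        total_CFs=3,
                                        desired_class="opposite")
cf.visualize_as_dataframe(show_only_changes=True)
```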

Interpretable machine learning: definitions, methods, and applications

14 Jan 2019 · csinva/imodels

Official code for using / reproducing ACD (ICLR 2019) from the paper "Hierarchical interpretations for neural network predictions" https://arxiv.org/abs/1806.05337

FEATURE IMPORTANCE · INTERPRETABLE MACHINE LEARNING
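
csinva/imodels packages interpretable estimators behind a scikit-learn interface. A minimal sketch with its RuleFit classifier, whose learned rules double as feature-importance evidence (the dataset is illustrative):

```python
from imodels import RuleFitClassifier
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RuleFitClassifier()    # sparse linear model over learned rules
model.fit(X_train, y_train)

# The repr of fitted imodels estimators summarizes what was learned
# (see the repo docs for per-model rule inspection).
print(model)
print("accuracy:", (model.predict(X_test) == y_test).mean())
```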

Quick and Robust Feature Selection: the Strength of Energy-efficient Sparse Training for Autoencoders

1 Dec 2020 · dcmocanu/sparse-evolutionary-artificial-neural-networks

This method, named QuickSelection, introduces neuron strength in sparse neural networks as a criterion for measuring feature importance.

DENOISING · FEATURE IMPORTANCE · FEATURE SELECTION
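
As described in the paper, an input neuron's strength in a sparse (denoising) autoencoder is the sum of absolute weights of its remaining connections. A toy sketch of that criterion, with a random sparse matrix standing in for a trained sparse autoencoder's input layer:

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, n_hidden, sparsity = 100, 32, 0.9

# Stand-in for a trained sparse input layer: most weights pruned away.
W = rng.normal(size=(n_features, n_hidden))
W[rng.random(W.shape) < sparsity] = 0.0

# Neuron strength = sum of absolute weights of surviving connections.
strength = np.abs(W).sum(axis=1)

k = 10
selected = np.argsort(strength)[::-1][:k]   # top-k most important features
print(selected)
```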

Sampling, Intervention, Prediction, Aggregation: A Generalized Framework for Model-Agnostic Interpretations

8 Apr 2019 · koalaverse/vip

Model-agnostic interpretation techniques allow us to explain the behavior of any predictive model.

FEATURE IMPORTANCE
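
The SIPA framework (which the koalaverse/vip R package builds on) generalizes permute-and-predict importance into a sample-intervene-predict-aggregate loop. A Python analogue of that loop via scikit-learn's permutation importance:

```python
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Intervene (permute one feature), predict, aggregate the score drop.
result = permutation_importance(model, X_val, y_val, n_repeats=10,
                                random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature {i}: {result.importances_mean[i]:.4f}")
```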

Hierarchical interpretations for neural network predictions

ICLR 2019 · csinva/hierarchical-dnn-interpretations

Deep neural networks (DNNs) have achieved impressive predictive performance due to their ability to learn complex, non-linear relationships between variables.

FEATURE IMPORTANCE · INTERPRETABLE MACHINE LEARNING
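
The paper's ACD builds on contextual decomposition (CD), which splits a network's output into a contribution from a chosen feature group and a contribution from everything else. For a single linear layer the split is exact; a toy sketch of that base case (the full method propagates the split through nonlinear layers):

```python
import numpy as np

def cd_linear(W, b, x, group):
    """Contextual decomposition of a linear layer W @ x + b:
    beta  = contribution of the features in `group`,
    gamma = contribution of the remaining features (plus bias)."""
    mask = np.zeros_like(x)
    mask[group] = 1.0
    beta = W @ (x * mask)               # relevant part
    gamma = W @ (x * (1 - mask)) + b    # irrelevant part
    return beta, gamma                  # beta + gamma == W @ x + b

rng = np.random.default_rng(0)
W, b, x = rng.normal(size=(3, 5)), rng.normal(size=3), rng.normal(size=5)
beta, gamma = cd_linear(W, b, x, group=[0, 1])
assert np.allclose(beta + gamma, W @ x + b)
print("group contribution:", beta)
```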