
Feature Importance

42 papers with code · Methodology

Leaderboards

No evaluation results yet. Help compare methods by submitting evaluation metrics.

Latest papers with code

Explainability and Adversarial Robustness for RNNs

20 Dec 2019 CN-TU/adversarial-recurrent-ids

Recurrent Neural Networks (RNNs) yield attractive properties for constructing Intrusion Detection Systems (IDSs) for network data.

FEATURE IMPORTANCE INTRUSION DETECTION

3
20 Dec 2019

A Benchmark for Interpretability Methods in Deep Neural Networks

NeurIPS 2019 google-research/google-research

We propose an empirical measure of the approximate accuracy of feature importance estimates in deep neural networks.

FEATURE IMPORTANCE IMAGE CLASSIFICATION

9,009
01 Dec 2019
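
One way to obtain such an empirical measure is remove-and-retrain: delete the features an estimator ranks most important, retrain the model, and check how much test accuracy drops. A minimal sketch of that loop, assuming a scikit-learn workflow; the random-forest estimator, synthetic dataset, and choice of k are illustrative, not the paper's setup:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic data: 5 informative features among 20.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=5,
                           n_redundant=0, random_state=0)
X_tr, X_te, y_tr, y_te = X[:1500], X[1500:], y[:1500], y[1500:]

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
base_acc = clf.score(X_te, y_te)

# Drop the k features the model ranks most important, then RETRAIN.
# Retraining is the key step: it distinguishes genuinely informative
# features from mere train/test distribution shift.
k = 5
drop = np.argsort(clf.feature_importances_)[::-1][:k]
keep = np.setdiff1d(np.arange(X.shape[1]), drop)

retrained = RandomForestClassifier(n_estimators=100, random_state=0)
retrained.fit(X_tr[:, keep], y_tr)
removed_acc = retrained.score(X_te[:, keep], y_te)
# A large accuracy drop suggests the importance estimate really did
# identify the informative features.
```

The larger the gap between `base_acc` and `removed_acc`, the more accurate the importance estimate under this measure.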

Towards Automatic Concept-based Explanations

NeurIPS 2019 amiratag/ACE

Interpretability has become an important topic of research as more machine learning (ML) models are deployed and widely used to make important decisions.

FEATURE IMPORTANCE

61
01 Dec 2019

CXPlain: Causal Explanations for Model Interpretation under Uncertainty

NeurIPS 2019 d909b/cxplain

Feature importance estimates that inform users about the degree to which given inputs influence the output of a predictive model are crucial for understanding, validating, and interpreting machine-learning models.

FEATURE IMPORTANCE

55
01 Dec 2019
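
CXPlain itself learns a separate explanation model and attaches uncertainty estimates to its outputs; as a point of reference for what "degree of influence" means here, below is a far simpler masking-based score: replace one feature with its mean and measure the resulting error increase. Every name in this sketch is illustrative, and it is not the paper's method:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression

X, y = make_regression(n_samples=500, n_features=5, n_informative=2,
                       random_state=0)
model = LinearRegression().fit(X, y)

def masking_importance(model, X, y):
    """Score each feature by the increase in squared error when it is
    'removed' (crudely: replaced by its mean), normalized to sum to 1."""
    base_err = np.mean((model.predict(X) - y) ** 2)
    scores = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        X_masked = X.copy()
        X_masked[:, j] = X[:, j].mean()
        err = np.mean((model.predict(X_masked) - y) ** 2)
        scores[j] = err - base_err
    return scores / scores.sum()

scores = masking_importance(model, X, y)  # the 2 informative features dominate
```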

Sobolev Independence Criterion

NeurIPS 2019 IBM/SIC

We propose the Sobolev Independence Criterion (SIC), an interpretable dependency measure between a high dimensional random variable X and a response variable Y. SIC decomposes to the sum of feature importance scores and hence can be used for nonlinear feature selection.

FEATURE IMPORTANCE FEATURE SELECTION

9
01 Dec 2019

A Debiased MDI Feature Importance Measure for Random Forests

NeurIPS 2019 shifwang/paper-debiased-feature-importance

Based on the original definition of MDI by Breiman et al. (1984) for a single tree, we derive a tight non-asymptotic bound on the expected bias of MDI importance of noisy features, showing that deep trees have higher (expected) feature selection bias than shallow ones.

FEATURE IMPORTANCE FEATURE SELECTION

0
01 Dec 2019
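
The depth effect described above is easy to observe with scikit-learn, whose `feature_importances_` for tree ensembles is MDI. The sketch below (synthetic data and parameters are illustrative; this is not the paper's debiased estimator) compares how much MDI mass lands on pure-noise features at two depths:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 1000
signal = rng.integers(0, 2, size=n)              # one informative feature
noise = rng.normal(size=(n, 10))                 # ten pure-noise features
X = np.column_stack([signal, noise])
y = np.where(rng.random(n) < 0.1, 1 - signal, signal)  # 10% label noise

def noise_mdi_mass(max_depth):
    """Total MDI importance assigned to the 10 noise features."""
    rf = RandomForestClassifier(n_estimators=50, max_depth=max_depth,
                                max_features=None, random_state=0).fit(X, y)
    return rf.feature_importances_[1:].sum()

shallow = noise_mdi_mass(max_depth=1)    # stumps: all importance on the signal
deep = noise_mdi_mass(max_depth=None)    # deep trees keep splitting on noise
```

Deep trees continue splitting on noise after the signal is exhausted, so `deep` comes out well above `shallow`, illustrating the bias the paper bounds.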

Sobolev Independence Criterion

NeurIPS 2019 IBM/SIC

In the kernel version we show that SIC can be cast as a convex optimization problem by introducing auxiliary variables that play an important role in feature selection as they are normalized feature importance scores.

FEATURE IMPORTANCE FEATURE SELECTION

9
31 Oct 2019

Self-Attention for Raw Optical Satellite Time Series Classification

23 Oct 2019 marccoru/crop-type-mapping

In this work, we embed self-attention in the canon of deep learning mechanisms for satellite time series classification for vegetation modeling and crop type identification.

FEATURE IMPORTANCE TIME SERIES TIME SERIES CLASSIFICATION

18
23 Oct 2019

Decision Explanation and Feature Importance for Invertible Networks

30 Sep 2019 juntang-zhuang/explain_invertible

We can determine the decision boundary of a linear classifier in the feature space; since the transform is invertible, we can invert the decision boundary from the feature space to the input space.

FEATURE IMPORTANCE

7
30 Sep 2019
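
For intuition, the inversion described above can be reproduced with a toy invertible affine feature map standing in for an invertible network (the matrices A, c and classifier w, b below are made up for illustration): a linear boundary w·z + b = 0 in feature space pulls back to another linear boundary in input space.

```python
import numpy as np

# Stand-in for an invertible network: an affine feature map z = A x + c.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])          # invertible (det = 6)
c = np.array([0.5, -1.0])
f = lambda x: A @ x + c
f_inv = lambda z: np.linalg.solve(A, z - c)

# Linear classifier in feature space: decision boundary w.z + b = 0.
w = np.array([1.0, -2.0])
b = 0.25

# Pull the boundary back to input space:
#   w.(A x + c) + b = 0   =>   (A^T w).x + (w.c + b) = 0
w_in = A.T @ w
b_in = w @ c + b

# Sanity check: a point on the input-space boundary lands exactly on
# the feature-space boundary.
x0 = np.array([0.0, -b_in / w_in[1]])
assert abs(w @ f(x0) + b) < 1e-9
```

With a genuinely nonlinear invertible network the pulled-back boundary is no longer linear, but the same recipe applies: find the boundary in feature space and map each boundary point through the inverse transform.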